WorldWideScience

Sample records for based depletion methodology

  1. Development of a Reliable Fuel Depletion Methodology for the HTR-10 Spent Fuel Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Kiwhan [Los Alamos National Laboratory]; Beddingfield, David H. [Los Alamos National Laboratory]; Geist, William H. [Los Alamos National Laboratory]; Lee, Sang-Yoon [unaffiliated]

    2012-07-03

    A technical working group was formed in 2007 between NNSA and CAEA to develop a reliable fuel depletion method for HTR-10 based on MCNPX and to analyze the isotopic inventory and radiation source terms of the HTR-10 spent fuel. Conclusions of this presentation are: (1) Established a fuel depletion methodology and demonstrated its safeguards application; (2) Proliferation resistant at high discharge burnup (~80 GWd/MtHM) - unfavorable isotopics, high number of pebbles needed, harder to reprocess pebbles; (3) SF should remain under safeguards comparable to that of LWR; and (4) Diversion scenarios not considered, but can be performed.

  2. A SAS2H/KENO-V Methodology for 3D Full Core depletion analysis

    International Nuclear Information System (INIS)

    Milosevic, M.; Greenspan, E.; Vujic, J.; Petrovic, B.

    2003-04-01

    This paper describes the use of a SAS2H/KENO-V methodology for 3D full core depletion analysis and illustrates its capabilities by applying it to burnup analysis of the IRIS core benchmarks. This new SAS2H/KENO-V sequence combines a 3D Monte Carlo full core calculation of node power distribution and a 1D Wigner-Seitz equivalent cell transport method for independent depletion calculation of each of the nodes. This approach reduces by more than an order of magnitude the time required for getting comparable results using the MOCUP code system. The SAS2H/KENO-V results for the asymmetric IRIS core benchmark are in good agreement with the results of the ALPHA/PHOENIX/ANC code system. (author)
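
    As a rough illustration of the two-level scheme described above (not the SAS2H/KENO-V implementation itself), the sketch below drives an independent one-group depletion of each core node with a mocked full-core nodal power distribution; the nuclide chain, cross sections, fluxes and powers are invented for illustration only.

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-nuclide chain and one-group data (illustrative assumptions, not IRIS data).
SIGMA_A = np.array([680.0e-24, 2.7e-24, 1010.0e-24])  # absorption xs of U235, U238, Pu239 (cm^2)

def burnup_matrix(flux):
    """dN/dt = A N: U235 and Pu239 are simply absorbed; every U238 absorption feeds Pu239 (crude)."""
    A = np.zeros((3, 3))
    A[0, 0] = -SIGMA_A[0] * flux
    A[1, 1] = -SIGMA_A[1] * flux
    A[2, 1] = SIGMA_A[1] * flux
    A[2, 2] = -SIGMA_A[2] * flux
    return A

# Mocked "full-core Monte Carlo" node powers for a 4-node core (KENO-V would supply these).
node_powers = np.array([1.10, 0.95, 1.05, 0.90])
node_flux = 3.0e13 * node_powers / node_powers.mean()   # node flux scaled with node power (assumption)

N = np.tile([5.0e20, 2.0e22, 0.0], (4, 1))              # initial number densities per node (atoms/cm^3)
dt = 30 * 24 * 3600.0                                    # one 30-day depletion step

# Each node is depleted independently between transport solutions, as in the node-wise step above.
N_new = np.array([expm(burnup_matrix(phi) * dt) @ n for phi, n in zip(node_flux, N)])
print(N_new)
```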

  3. Adding trend data to Depletion-Based Stock Reduction Analysis

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A Bayesian model of Depletion-Based Stock Reduction Analysis (DB-SRA), informed by a time series of abundance indexes, was developed, using the Sampling Importance...

  4. Study on methodology to estimate isotope generation and depletion for core design of HTGR

    International Nuclear Information System (INIS)

    Fukaya, Yuji; Ueta, Shohei; Goto, Minoru; Shimakawa, Satoshi

    2013-12-01

    An investigation of methodology to estimate isotope generation and depletion was performed in order to improve the accuracy of HTGR core design. The technical problem of isotope generation and depletion can be divided into three major parts: solving the burn-up equations, generating effective cross sections, and employing nuclide data. For the generation of effective cross sections in particular, the core burn-up calculation shares a technological problem with the point burn-up calculation. Thus, the investigation also covered the core burn-up calculation, with a view to developing a new code system in the future. As a result, it was found that a cross-section structure extending the SRAC 107-group structure to 108 energy groups reaching 20 MeV, with cross-section collapse using the flux obtained by the deterministic code SRAC, is appropriate for this use. In addition, the requirements on nuclear data became clear from an investigation of the nuclear data preparation conditions for safety analysis and fuel design. (author)
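
    The cross-section collapse step mentioned above can be illustrated with a small flux-weighted condensation; the group structure and numbers below are invented stand-ins, not the SRAC 107/108-group libraries.

```python
import numpy as np

fine_xs   = np.array([1.2, 1.5, 2.3, 5.0, 12.0, 45.0])   # fine-group cross sections (barns), assumed
fine_flux = np.array([3.0, 2.5, 1.8, 0.9, 0.4, 0.1])      # fine-group flux weights from a deterministic solve, assumed
broad_map = [(0, 3), (3, 6)]                               # fine groups 0-2 -> broad group 0, 3-5 -> broad group 1

# Reaction-rate-preserving collapse: sigma_G = sum(sigma_g * phi_g) / sum(phi_g) over g in G.
broad_xs = np.array([
    fine_xs[a:b] @ fine_flux[a:b] / fine_flux[a:b].sum()
    for a, b in broad_map
])
print(broad_xs)
```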

  5. INPRO Methodology for Sustainability Assessment of Nuclear Energy Systems: Environmental Impact from Depletion of Resources

    International Nuclear Information System (INIS)

    2015-01-01

    INPRO is an international project to help ensure that nuclear energy is available to contribute in a sustainable manner to meeting the energy needs of the 21st century. A basic principle of INPRO in the area of environmental impact from depletion of resources is that a nuclear energy system will be capable of contributing to the energy needs in the 21st century while making efficient use of non-renewable resources needed for construction, operation and decommissioning. Recognizing that a national nuclear energy programme in a given country may be based both on indigenous resources and resources purchased from abroad, this publication provides background materials and summarizes the results of international global resource availability studies that could contribute to the corresponding national assessments

  6. Depletion methodology in the 3-D whole core transport code DeCART

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog; Cho, Jin Young; Zee, Sung Quun

    2005-02-01

    The three-dimensional whole-core transport code DeCART has been developed to include the characteristics of a numerical reactor that can partly replace experiments. The code adopts a deterministic method to simulate neutron behavior with minimal assumptions and approximations. This neutronics code is also coupled with a thermal-hydraulic CFD code and a thermo-mechanical code to simulate combined effects. A depletion module has been implemented in the DeCART code to predict the depleted composition of the fuel. The exponential matrix method of ORIGEN-2 is used for the depletion calculation. The library, including decay constants, the yield matrix and other data, has been greatly simplified for calculational efficiency. This report summarizes the theoretical background and includes verification of the depletion module in DeCART through benchmark calculations.
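
    The exponential matrix idea referred to above can be sketched as follows: the depletion system dN/dt = AN is advanced one step with a truncated power series for exp(A dt). This is only the spirit of the ORIGEN-2 solver (which additionally treats short-lived nuclides with analytic chains); the two-nuclide chain and constants below are assumptions.

```python
import numpy as np

def expm_series(A, dt, terms=20):
    """exp(A*dt) by truncated Taylor series; adequate only when the norm of A*dt is modest."""
    M = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (A * dt) / k
        M = M + term
    return M

# Toy two-member chain: parent decays into daughter (decay constants are illustrative).
lam_parent, lam_daughter = 1.0e-6, 1.0e-7
A = np.array([[-lam_parent, 0.0],
              [ lam_parent, -lam_daughter]])
N0 = np.array([1.0e20, 0.0])
print(expm_series(A, dt=1.0e5) @ N0)
```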

  7. Evaluation of acute tryptophan depletion and sham depletion with a gelatin-based collagen peptide protein mixture

    DEFF Research Database (Denmark)

    Stenbæk, Dea Siggaard; Einarsdottir, H S; Goregliad-Fjaellingsdal, T

    2016-01-01

    Acute Tryptophan Depletion (ATD) is a dietary method used to modulate central 5-HT to study the effects of temporarily reduced 5-HT synthesis. The aim of this study is to evaluate a novel method of ATD using a gelatin-based collagen peptide (CP) mixture. We administered CP-Trp or CP+Trp mixtures...

  8. Sub-step methodology for coupled Monte Carlo depletion and thermal hydraulic codes

    International Nuclear Information System (INIS)

    Kotlyar, D.; Shwageraus, E.

    2016-01-01

    Highlights: • Discretization of time in coupled MC codes determines the results’ accuracy. • The error is due to lack of information regarding the time-dependent reaction rates. • The proposed sub-step method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. • The reaction rates are varied as functions of nuclide densities and TH conditions. - Abstract: The governing procedure in coupled Monte Carlo (MC) codes relies on discretization of the simulation time into time steps. Typically, the MC transport solution at discrete points will generate reaction rates, which in most codes are assumed to be constant within the time step. This assumption can trigger numerical instabilities or result in a loss of accuracy, which, in turn, would require reducing the time step size. This paper focuses on reducing the time discretization error without requiring additional MC transport solutions and hence with no major computational overhead. The sub-step method presented here accounts for the reaction rate variation due to the variation in nuclide densities and thermal hydraulic (TH) conditions. This is achieved by performing additional depletion and TH calculations within the analyzed time step. The method was implemented in the BGCore code and subsequently used to analyze a series of test cases. The results indicate that computational speedup of up to a factor of 10 may be achieved over the existing coupling schemes.
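
    A minimal sketch of the sub-step idea, under strong simplifying assumptions (a single absorber whose total reaction rate is held constant as a stand-in for power normalization; not the BGCore implementation): one transport solve fixes the beginning-of-step rate, and several depletion sub-steps rescale that rate with the evolving nuclide density instead of calling the transport solver again.

```python
import numpy as np

def substep_depletion(N0, sigma_flux0, dt, n_sub=4):
    """Deplete one absorber over dt in n_sub sub-steps.

    sigma_flux0 is the absorption rate per atom (sigma*phi) from the MC solve at the
    start of the step; the per-atom rate is rescaled so the total reaction rate stays
    constant as the absorber burns out (a crude stand-in for flux/TH feedback).
    """
    N, h = N0, dt / n_sub
    for _ in range(n_sub):
        rate = sigma_flux0 * (N0 / N)        # per-atom rate grows as the density falls
        N = N * np.exp(-rate * h)
    return N

N0, sigma_flux0, dt = 1.0e21, 1.0e-6, 10 * 24 * 3600.0
print(substep_depletion(N0, sigma_flux0, dt))      # sub-step result
print(N0 * np.exp(-sigma_flux0 * dt))              # constant-rate single step, for comparison
```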

  9. Electrofishing mark-recapture and depletion methodologies evoke behavioral and physiological changes in cutthroat trout

    Science.gov (United States)

    Mesa, M. G.; Schreck, C.B.

    1989-01-01

    We examined the behavioral and physiological responses of wild and hatchery-reared cutthroat trout Oncorhynchus clarki subjected to a single electroshock, electroshock plus marking, and multiple electroshocks in natural and artificial streams. In a natural stream, cutthroat trout released after capture by electrofishing and marking showed distinct behavioral changes: fish immediately sought cover, remained relatively inactive, did not feed, and were easily approached by a diver. An average of 3–4 h was required for 50% of the fish to return to a seemingly normal mode of behavior, although responses varied widely among collection sites. Using the depletion method, we observed little change in normal behavior of fish remaining in the stream section (i.e., uncaptured fish) after successive passes with electrofishing gear. In an artificial stream, hatchery-reared and wild cutthroat trout immediately decreased their rates of feeding and aggression after they were electroshocked and marked. Hatchery fish generally recovered in 2–3 h; wild fish required at least 24 h to recover. Analysis of feeding and aggression data by hierarchical rank revealed no distinct recovery trends among hatchery fish of different ranks; among wild cutthroat trout, however, socially dominant fish seemed to recover faster than intermediate and subordinate fish. Physiological indicators of stress (plasma cortisol and blood lactic acid) increased significantly in cutthroat trout subjected to electroshock plus marking or single or multiple electroshocks. As judged by the magnitude of the greatest change in cortisol and lactate, multiple electroshocks elicited the most severe stress response; however, plasma concentrations of both substances had returned to unstressed control levels by 6 h after treatment. It was evident that electrofishing and the procedures involved with estimating fish population size elicited a general stress response that was manifested not only physiologically but also

  10. Development of a micro-depletion model to use WIMS properties in history-based local-parameter calculations in RFSP

    International Nuclear Information System (INIS)

    Shen, W.

    2004-01-01

    A micro-depletion model has been developed and implemented in the *SIMULATE module of RFSP to use WIMS-calculated lattice properties in history-based local-parameter calculations. A comparison between the micro-depletion and WIMS results for each type of lattice cross section and for the infinite-lattice multiplication factor was also performed for a fuel similar to that which may be used in the ACR. The comparison shows that the micro-depletion calculation agrees well with the WIMS-IST calculation. The relative differences in k-infinity are within ±0.5 mk and ±0.9 mk for perturbation and depletion calculations, respectively. The micro-depletion model gives the *SIMULATE module of RFSP the capability to use WIMS-calculated lattice properties in history-based local-parameter calculations without resorting to the Simple-Cell-Methodology (SCM) surrogate for CANDU core-tracking simulations. (author)

  11. Mechanism-based biomarker gene sets for glutathione depletion-related hepatotoxicity in rats

    International Nuclear Information System (INIS)

    Gao Weihua; Mizukawa, Yumiko; Nakatsu, Noriyuki; Minowa, Yosuke; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2010-01-01

    Chemical-induced glutathione depletion is thought to be caused by two types of toxicological mechanisms: PHO-type glutathione depletion [glutathione conjugated with chemicals such as phorone (PHO) or diethyl maleate (DEM)], and BSO-type glutathione depletion [i.e., glutathione synthesis inhibited by chemicals such as L-buthionine-sulfoximine (BSO)]. In order to identify mechanism-based biomarker gene sets for glutathione depletion in rat liver, male SD rats were treated with various chemicals including PHO (40, 120 and 400 mg/kg), DEM (80, 240 and 800 mg/kg), BSO (150, 450 and 1500 mg/kg), and bromobenzene (BBZ, 10, 100 and 300 mg/kg). Liver samples were taken 3, 6, 9 and 24 h after administration and examined for hepatic glutathione content, physiological and pathological changes, and gene expression changes using Affymetrix GeneChip Arrays. To identify differentially expressed probe sets in response to glutathione depletion, we focused on the following two courses of events for the two types of mechanisms of glutathione depletion: a) gene expression changes occurring simultaneously in response to glutathione depletion, and b) gene expression changes after glutathione was depleted. The gene expression profiles of the identified probe sets for the two types of glutathione depletion differed markedly at times during and after glutathione depletion, whereas Srxn1 was markedly increased for both types as glutathione was depleted, suggesting that Srxn1 is a key molecule in oxidative stress related to glutathione. The extracted probe sets were refined and verified using various compounds including 13 additional positive or negative compounds, and they established two useful marker sets. One contained three probe sets (Akr7a3, Trib3 and Gstp1) that could detect conjugation-type glutathione depletors any time within 24 h after dosing, and the other contained 14 probe sets that could detect glutathione depletors by any mechanism. These two sets, with appropriate scoring

  12. Risk-based methodology for USNRC inspections

    International Nuclear Information System (INIS)

    Wong, S.M.; Holahan, G.M.; Chung, J.W.; Johnson, M.R.

    1995-01-01

    This paper describes the development and trial applications of a risk-based methodology to enhance the inspection processes for US nuclear power plants. Objectives of risk-based methods to complement prescriptive engineering approaches in US Nuclear Regulatory Commission (USNRC) inspection programs are presented. Insights from time-dependent risk profiles of plant configurations from Individual Plant Evaluation (IPE) studies were integrated to develop a framework for optimizing inspection efforts in NRC regulatory initiatives. Lessons learned from NRC pilot applications of the risk-based methodology for evaluation of the effectiveness of operational risk management programs at US nuclear power plant sites are also discussed

  13. Towards an MDA-based development methodology

    NARCIS (Netherlands)

    Gavras, Anastasius; Belaunde, Mariano; Ferreira Pires, Luis; Andrade Almeida, João; Oquendo, Flavio; Warboys, Brian C.; Morrison, Ron

    2004-01-01

    This paper proposes a development methodology for distributed applications based on the principles and concepts of the Model-Driven Architecture (MDA). The paper identifies phases and activities of an MDA-based development trajectory, and defines the roles and products of each activity in accordance

  14. Evaluation 2 of B10 depletion in the WH PWR

    International Nuclear Information System (INIS)

    Park, Sang Won; Woo, Hae Suk; Kim, Sun Doo; Chae, Hee Dong; Myung, Sun Yup; Jang, Ju Kyung

    2001-01-01

    This paper presents the methodology to evaluate the B-10 depletion behavior in a pressurized water reactor. The B-10 depletion evaluation is performed based on the prediction program and the measured B-10 data. The result shows that B-10 depletion during normal operation is not negligible. Therefore, adjustments for this depletion effect should be made to calculate the estimated critical position (ECP) and to determine the boron concentration required to maintain the specified shutdown margin
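
    A minimal sketch of the underlying effect: the B-10 atom fraction of the soluble boron is burned out exponentially with fluence, so the same total boron concentration is worth less reactivity later in the cycle. The effective flux seen by the circulating coolant and the resulting depletion are illustrative assumptions only.

```python
import numpy as np

sigma_a_b10 = 3837.0e-24     # B-10 thermal (2200 m/s) absorption cross section, cm^2
phi_eff     = 5.0e11         # assumed effective flux averaged over the whole coolant inventory, n/cm^2/s
frac_0      = 0.199          # natural B-10 atom fraction

for day in (0, 100, 300, 500):
    frac = frac_0 * np.exp(-sigma_a_b10 * phi_eff * day * 86400.0)
    print(f"day {day:3d}: B-10 atom fraction = {frac:.4f}")
```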

  15. EXPERIMENTAL ACIDIFICATION CAUSES SOIL BASE-CATION DEPLETION AT THE BEAR BROOK WATERSHED IN MAINE

    Science.gov (United States)

    There is concern that changes in atmospheric deposition, climate, or land use have altered the biogeochemistry of forests causing soil base-cation depletion, particularly Ca. The Bear Brook Watershed in Maine (BBWM) is a paired watershed experiment with one watershed subjected to...

  16. Experimental Acidification Causes Soil Base-Cation Depletion at the Bear Brook Watershed in Maine

    Science.gov (United States)

    Ivan J. Fernandez; Lindsey E. Rustad; Stephen A. Norton; Jeffrey S. Kahl; Bernard J. Cosby

    2003-01-01

    There is concern that changes in atmospheric deposition, climate, or land use have altered the biogeochemistry of forests causing soil base-cation depletion, particularly Ca. The Bear Brook Watershed in Maine (BBWM) is a paired watershed experiment with one watershed subjected to elevated N and S deposition through bimonthly additions of (NH4)2SO4. Quantitative soil...

  17. Risk-based Regulatory Evaluation Program methodology

    International Nuclear Information System (INIS)

    DuCharme, A.R.; Sanders, G.A.; Carlson, D.D.; Asselin, S.V.

    1987-01-01

    The objectives of this DOE-supported Regulatory Evaluation Program are to analyze and evaluate the safety importance and economic significance of existing regulatory guidance in order to assist in the improvement of the regulatory process for current generation and future design reactors. A risk-based cost-benefit methodology was developed to evaluate the safety benefit and cost of specific regulations or Standard Review Plan sections. Risk-based methods can be used in lieu of or in combination with deterministic methods in developing regulatory requirements and reaching regulatory decisions

  18. A Goal based methodology for HAZOP analysis

    DEFF Research Database (Denmark)

    Rossing, Netta Liin; Lind, Morten; Jensen, Niels

    2010-01-01

    to nodes with simple functions such as liquid transport, gas transport, liquid storage, gas-liquid contacting etc. From the functions of the nodes the selection of relevant process variables and deviation variables follows directly. The knowledge required to perform the pre-meeting HAZOP task of dividing the plant along functional lines is that of chemical unit operations and transport processes plus some familiarity with the plant at hand. Thus the preparatory work may be performed by a chemical engineer with just an introductory course in risk assessment. The goal based methodology lends itself directly...

  19. A risk-based sensor placement methodology

    International Nuclear Information System (INIS)

    Lee, Ronald W.; Kulesz, James J.

    2008-01-01

    A risk-based sensor placement methodology is proposed to solve the problem of optimal location of sensors to protect population against the exposure to, and effects of, known and/or postulated chemical, biological, and/or radiological threats. Risk is calculated as a quantitative value representing population at risk from exposure at standard exposure levels. Historical meteorological data are used to characterize weather conditions as the frequency of wind speed and direction pairs. The meteorological data drive atmospheric transport and dispersion modeling of the threats, the results of which are used to calculate risk values. Sensor locations are determined via an iterative dynamic programming algorithm whereby threats detected by sensors placed in prior iterations are removed from consideration in subsequent iterations. In addition to the risk-based placement algorithm, the proposed methodology provides a quantification of the marginal utility of each additional sensor. This is the fraction of the total risk accounted for by placement of the sensor. Thus, the criteria for halting the iterative process can be the number of sensors available, a threshold marginal utility value, and/or a minimum cumulative utility achieved with all sensors
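
    A toy version of the iterative placement logic described above (the scenarios, risks and detection sets are invented; the real methodology derives them from atmospheric transport and dispersion modelling): each sensor is placed on the candidate location that removes the most remaining risk, and its marginal utility is the fraction of total risk it accounts for.

```python
risk = {"s1": 50.0, "s2": 30.0, "s3": 15.0, "s4": 5.0}           # population at risk per threat scenario
covers = {"locA": {"s1", "s3"}, "locB": {"s2", "s3"}, "locC": {"s2", "s4"}}

total_risk = sum(risk.values())
remaining = dict(risk)
placed = []

for _ in range(2):                                                # number of sensors available
    best = max(covers, key=lambda loc: sum(remaining.get(s, 0.0) for s in covers[loc]))
    gained = sum(remaining.pop(s) for s in list(covers[best]) if s in remaining)
    placed.append((best, gained / total_risk))                    # marginal utility of this sensor

print(placed)   # e.g. [('locA', 0.65), ('locC', 0.35)]
```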

  20. Base cation depletion and potential long-term acidification of Norwegian catchments

    International Nuclear Information System (INIS)

    Kirchner, J.W.; Lydersen, E.

    1995-01-01

    Long-term monitoring data from Norwegian catchments show that since the late 1970s, sulfate deposition and runoff sulfate concentrations have declined significantly. However, water quality has not significantly improved, because reductions in runoff sulfate have been matched by equal declines in calcium and magnesium concentrations. Long-term declines in runoff Ca and Mg are most pronounced at catchments subject to highly acidic deposition; the observed rates of decline are quantitatively consistent with depletion of exchangeable bases by accelerated leaching under high acid loading. Even though water quality has not recovered, reductions in acid deposition have been valuable because they have prevented significant acidification that would otherwise have occurred under constant acid deposition. Ongoing depletion of exchangeable bases from these catchments implies that continued deposition reductions will be needed to avoid further acidification and that recovery from acidification will be slow. 31 refs., 2 figs., 4 tabs

  1. Methodology for assessing laser-based equipment

    Science.gov (United States)

    Pelegrina-Bonilla, Gabriel; Hermsdorf, Jörg; Thombansen, Ulrich; Abels, Peter; Kaierle, Stefan; Neumann, Jörg

    2017-10-01

    Methodologies for the assessment of technology's maturity are widely used in industry and research. Probably the best known are technology readiness levels (TRLs), initially pioneered by the National Aeronautics and Space Administration (NASA). At the beginning, only descriptively defined TRLs existed, but over time, automated assessment techniques in the form of questionnaires emerged in order to determine TRLs. Originally TRLs targeted equipment for space applications, but the demands on industrial relevant equipment are partly different in terms of, for example, overall costs, product quantities, or the presence of competitors. Therefore, we present a commonly valid assessment methodology with the aim of assessing laser-based equipment for industrial use, in general. The assessment is carried out with the help of a questionnaire, which allows for a user-friendly and easy accessible way to monitor the progress from the lab-proven state to the application-ready product throughout the complete development period. The assessment result is presented in a multidimensional metric in order to reveal the current specific strengths and weaknesses of the equipment development process, which can be used to direct the remaining development process of the equipment in the right direction.

  2. Depleted uranium

    International Nuclear Information System (INIS)

    Huffer, E.; Nifenecker, H.

    2001-02-01

    This document deals with the physical, chemical and radiological properties of depleted uranium. What is depleted uranium? Why do the military use depleted uranium, and what are the risks to health? (A.L.B.)

  3. New methodology for Ozone Depletion Potentials of short-lived compounds: n-Propyl bromide as an example

    Science.gov (United States)

    Wuebbles, Donald J.; Patten, Kenneth O.; Johnson, Matthew T.; Kotamarthi, Rao

    2001-07-01

    A number of the compounds proposed as replacements for substances controlled under the Montreal Protocol have extremely short atmospheric lifetimes, on the order of days to a few months. An important example is n-propyl bromide (also referred to as 1-bromopropane, CH2BrCH2CH3 or simplified as 1-C3H7Br or nPB). This compound, useful as a solvent, has an atmospheric lifetime of less than 20 days due to its reaction with hydroxyl. Because nPB contains bromine, any amount reaching the stratosphere has the potential to affect concentrations of stratospheric ozone. The definition of Ozone Depletion Potentials (ODP) needs to be modified for such short-lived compounds to account for the location and timing of emissions. It is not adequate to treat these chemicals as if they were uniformly emitted at all latitudes and longitudes as normally done for longer-lived gases. Thus, for short-lived compounds, policymakers will need a table of ODP values instead of the single value generally provided in past studies. This study uses the MOZART2 three-dimensional chemical-transport model in combination with studies with our less computationally expensive two-dimensional model to examine potential effects of nPB on stratospheric ozone. Multiple facets of this study examine key questions regarding the amount of bromine reaching the stratosphere following emission of nPB. Our most significant findings from this study for the purposes of short-lived replacement compound ozone effects are summarized as follows. The degradation of nPB produces a significant quantity of bromoacetone which increases the amount of bromine transported to the stratosphere due to nPB. However, much of that effect is not due to bromoacetone itself, but instead to inorganic bromine which is produced from tropospheric oxidation of nPB, bromoacetone, and other degradation products and is transported above the dry and wet deposition processes of the model. The MOZART2 nPB results indicate a minimal correction of the
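
    For reference, the conventional ODP of a gas x is the steady-state global ozone loss per unit mass emitted, relative to CFC-11; the paper's point is that for a very short-lived substance such as nPB this ratio depends on where and when the emission occurs, so a single number must be replaced by a table of values. In generic notation (ours, not the paper's):

```latex
\mathrm{ODP}_{x}(\mathbf{r},\,s) \;=\;
\frac{\Delta O_{3}\ \text{per unit mass of}\ x\ \text{emitted at location}\ \mathbf{r}\ \text{in season}\ s}
     {\Delta O_{3}\ \text{per unit mass of CFC-11 emitted}}
```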

  4. Soil nutrients, aboveground productivity and vegetative diversity after 10 years of experimental acidification and base cation depletion

    Science.gov (United States)

    Mary Beth Adams; James A. Burger

    2010-01-01

    Soil acidification and base cation depletion are concerns for those wishing to manage central Appalachian hardwood forests sustainably. In this research, 2 experiments were established in 1996 and 1997 in two forest types common in the central Appalachian hardwood forests, to examine how these important forests respond to depletion of nutrients such as calcium and...

  5. Comparison of two lung clearance models based on the dissolution rates of oxidized depleted uranium

    International Nuclear Information System (INIS)

    Crist, K.C.

    1984-10-01

    An in-vitro dissolution study was conducted on two respirable oxidized depleted uranium samples. The dissolution rates generated from this study were then utilized in the International Commission on Radiological Protection Task Group lung clearance model and a lung clearance model proposed by Cuddihy. Predictions from both models based on the dissolution rates of the amount of oxidized depleted uranium that would be cleared to blood from the pulmonary region following an inhalation exposure were compared. It was found that the predictions made by both models differed considerably. The difference between the predictions was attributed to the differences in the way each model perceives the clearance from the pulmonary region. 33 references, 11 figures, 9 tables

  6. Comparison of two lung clearance models based on the dissolution rates of oxidized depleted uranium

    Energy Technology Data Exchange (ETDEWEB)

    Crist, K.C.

    1984-10-01

    An in-vitro dissolution study was conducted on two respirable oxidized depleted uranium samples. The dissolution rates generated from this study were then utilized in the International Commission on Radiological Protection Task Group lung clearance model and a lung clearance model proposed by Cuddihy. Predictions from both models based on the dissolution rates of the amount of oxidized depleted uranium that would be cleared to blood from the pulmonary region following an inhalation exposure were compared. It was found that the predictions made by both models differed considerably. The difference between the predictions was attributed to the differences in the way each model perceives the clearance from the pulmonary region. 33 references, 11 figures, 9 tables.

  7. SNE's methodological basis - web-based software in entrepreneurial surveys

    DEFF Research Database (Denmark)

    Madsen, Henning

    This overhead based paper gives an introduction to the research methodology applied in the surveys carried out in the SNE-project.

  8. Methodology for reliability based condition assessment

    International Nuclear Information System (INIS)

    Mori, Y.; Ellingwood, B.

    1993-08-01

    Structures in nuclear power plants may be exposed to aggressive environmental effects that cause their strength to decrease over an extended period of service. A major concern in evaluating the continued service for such structures is to ensure that in their current condition they are able to withstand future extreme load events during the intended service life with a level of reliability sufficient for public safety. This report describes a methodology to facilitate quantitative assessments of current and future structural reliability and performance of structures in nuclear power plants. This methodology takes into account the nature of past and future loads, and randomness in strength and in degradation resulting from environmental factors. An adaptive Monte Carlo simulation procedure is used to evaluate time-dependent system reliability. The time-dependent reliability is sensitive to the time-varying load characteristics and to the choice of initial strength and strength degradation models but not to correlation in component strengths within a system. Inspection/maintenance strategies are identified that minimize the expected future costs of keeping the failure probability of a structure at or below an established target failure probability during its anticipated service period
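
    A minimal sketch of the simulation idea described above, with invented distributions and a linear degradation law (the report's load and degradation models are more elaborate): sample a degrading strength and a sequence of annual extreme loads, and estimate the probability of failure during a service period.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, years = 100_000, 40

R0 = rng.normal(100.0, 10.0, n_trials)             # initial strength (assumed normal)
deg_rate = rng.uniform(0.001, 0.004, n_trials)     # fractional strength loss per year (assumed)
failed = np.zeros(n_trials, dtype=bool)

for t in range(1, years + 1):
    load = rng.gumbel(40.0, 5.0, n_trials)         # annual extreme load (assumed Gumbel)
    strength = R0 * (1.0 - deg_rate * t)           # linear degradation of strength (assumption)
    failed |= load > strength

print("estimated 40-year failure probability:", failed.mean())
```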

  9. Value and Vision-based Methodology in Integrated Design

    DEFF Research Database (Denmark)

    Tollestrup, Christian

    ... of this thesis is the value transformation from an explicit set of values to a product concept using a vision based concept development methodology based on the Pyramid Model (Lerdahl, 2001) in a design team context. The aim of this thesis is to examine how the process of value transformation is occurring within ... on empirical data from a workshop where the Value and Vision-based methodology has been taught. The research approach chosen for this investigation is Action Research, where the researcher plays an active role in generating the data and gains a deeper understanding of the investigated phenomena. The result ... is divided in three: the systemic unfolding of the Value and Vision-based methodology, the structured presentation of practical implementation of the methodology and finally the analysis and conclusion regarding the value transformation, phenomena and learning aspects of the methodology.

  10. A component-based groupware development methodology

    NARCIS (Netherlands)

    Guareis de farias, Cléver; Ferreira Pires, Luis; van Sinderen, Marten J.

    2000-01-01

    Software development in general and groupware applications in particular can greatly benefit from the reusability and interoperability aspects associated with software components. Component-based software development enables the construction of software artefacts by assembling prefabricated,

  11. Methodology for testing and validating knowledge bases

    Science.gov (United States)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  12. Methodological approaches based on business rules

    Directory of Open Access Journals (Sweden)

    Anca Ioana ANDREESCU

    2008-01-01

    Business rules and business processes are essential artifacts in defining the requirements of a software system. Business processes capture business behavior, while rules connect processes and thus control processes and business behavior. Traditionally, rules are scattered inside application code. This approach makes it very difficult to change rules and shortens the life cycle of the software system. Because rules change more quickly than the application itself, it is desirable to externalize the rules and move them outside the application. This paper analyzes and evaluates three well-known business rules approaches. It also outlines some critical factors that have to be taken into account in the decision to introduce business rules facilities in a software system. Based on the concept of explicit manipulation of business rules in a software system, the need for a general approach based on business rules is discussed.

  13. Multiple Methodologies: Using Community-Based Participatory Research and Decolonizing Methodologies in Kenya

    Science.gov (United States)

    Elder, Brent C.; Odoyo, Kenneth O.

    2018-01-01

    In this project, we examined the development of a sustainable inclusive education system in western Kenya by combining community-based participatory research (CBPR) and decolonizing methodologies. Through three cycles of qualitative interviews with stakeholders in inclusive education, participants explained what they saw as foundational components…

  14. PENBURN - A 3-D Zone-Based Depletion/Burnup Solver

    International Nuclear Information System (INIS)

    Manalo, Kevin; Plower, Thomas; Rowe, Mireille; Mock, Travis; Sjoden, Glenn E.

    2008-01-01

    PENBURN (Parallel Environment Burnup) is a general depletion/burnup solver which, when provided with zone-based reaction rates, computes time-dependent isotope concentrations for a set of actinides and fission products. Burnup analysis in PENBURN is performed with a direct Bateman-solver chain solution technique. Specifically, in tandem with PENBURN is the use of PENTRAN, a parallel multi-group anisotropic Sn code for 3-D Cartesian geometries. In PENBURN, the linear chain method is actively used to solve individual isotope chains which are then fully attributed by the burnup code to yield integrated isotope concentrations for each nuclide specified. Included with the discussion of code features, a single PWR fuel pin calculation with the burnup code is performed and detailed with a benchmark comparison to PIE (Post-Irradiation Examination) data within the SFCOMPO (Spent Fuel Composition / NEA) database, and also with burnup codes in SCALE5.1. Conclusions within the paper detail, in PENBURN, the accuracy of major actinides, flux profile behavior as a function of burnup, and criticality calculations for the PWR fuel pin model. (authors)
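
    The linear chain idea can be illustrated with the classical Bateman solution for a chain N1 -> N2 -> ... with effective removal constants lambda_i (decay plus sigma*phi); the constants below are assumed distinct, and real solvers such as PENBURN also handle the repeated-constant and branching cases.

```python
import numpy as np

def bateman_chain(N1_0, lambdas, t):
    """Analytic concentrations of each chain member at time t, starting from pure member 1."""
    lambdas = np.asarray(lambdas, dtype=float)
    out = []
    for k in range(len(lambdas)):
        lam = lambdas[: k + 1]
        s = 0.0
        for i in range(k + 1):
            denom = np.prod([lam[j] - lam[i] for j in range(k + 1) if j != i])
            s += np.exp(-lam[i] * t) / denom
        out.append(N1_0 * np.prod(lam[:k]) * s)
    return np.array(out)

# Three-member chain with assumed effective removal constants (1/s).
print(bateman_chain(1.0e20, [1.0e-5, 2.0e-6, 5.0e-7], t=1.0e6))
```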

  15. An ontological case base engineering methodology for diabetes management.

    Science.gov (United States)

    El-Sappagh, Shaker H; El-Masri, Samir; Elmogy, Mohammed; Riad, A M; Saddik, Basema

    2014-08-01

    Ontology engineering covers issues related to ontology development and use. In Case Based Reasoning (CBR) system, ontology plays two main roles; the first as case base and the second as domain ontology. However, the ontology engineering literature does not provide adequate guidance on how to build, evaluate, and maintain ontologies. This paper proposes an ontology engineering methodology to generate case bases in the medical domain. It mainly focuses on the research of case representation in the form of ontology to support the case semantic retrieval and enhance all knowledge intensive CBR processes. A case study on diabetes diagnosis case base will be provided to evaluate the proposed methodology.

  16. A Metadata based Knowledge Discovery Methodology for Seeding Translational Research.

    Science.gov (United States)

    Kothari, Cartik R; Payne, Philip R O

    2015-01-01

    In this paper, we present a semantic, metadata based knowledge discovery methodology for identifying teams of researchers from diverse backgrounds who can collaborate on interdisciplinary research projects: projects in areas that have been identified as high-impact areas at The Ohio State University. This methodology involves the semantic annotation of keywords and the postulation of semantic metrics to improve the efficiency of the path exploration algorithm as well as to rank the results. Results indicate that our methodology can discover groups of experts from diverse areas who can collaborate on translational research projects.

  17. An Innovative Fuzzy-Logic-Based Methodology for Trend Identification

    International Nuclear Information System (INIS)

    Wang Xin; Tsoukalas, Lefteri H.; Wei, Thomas Y.C.; Reifman, Jaques

    2001-01-01

    A new fuzzy-logic-based methodology for on-line signal trend identification is introduced. The methodology may be used for detecting the onset of nuclear power plant (NPP) transients at the earliest possible time and could be of great benefit to diagnostic, maintenance, and performance-monitoring programs. Although signal trend identification is complicated by the presence of noise, fuzzy methods can help capture important features of on-line signals, integrate the information included in these features, and classify incoming NPP signals into increasing, decreasing, and steady-state trend categories. A computer program named PROTREN is developed and tested for the purpose of verifying this methodology using NPP and simulation data. The results indicate that the new fuzzy-logic-based methodology is capable of detecting transients accurately, it identifies trends reliably and does not misinterpret a steady-state signal as a transient one
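
    A toy illustration of the idea (membership shapes and thresholds are invented, not the PROTREN design): a noisy signal window is reduced to a slope estimate, triangular membership functions map the slope onto the three trend classes, and the strongest membership is reported.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def classify_trend(window, dt=1.0):
    t = np.arange(len(window)) * dt
    slope = np.polyfit(t, window, 1)[0]             # least-squares slope of the window
    memberships = {
        "decreasing": tri(slope, -1.0, -0.5, 0.0),
        "steady":     tri(slope, -0.1,  0.0, 0.1),
        "increasing": tri(slope,  0.0,  0.5, 1.0),
    }
    return max(memberships, key=memberships.get), memberships

rng = np.random.default_rng(1)
signal = 0.2 * np.arange(30) + rng.normal(0.0, 0.5, 30)   # slowly rising, noisy signal
print(classify_trend(signal))
```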

  18. Methodological Approaches for Estimating Gross Regional Product after Taking into Account Depletion of Natural Resources, Environmental Pollution and Human Capital Aspects

    Directory of Open Access Journals (Sweden)

    Boris Alengordovich Korobitsyn

    2015-09-01

    A key indicator of the System of National Accounts of Russia at a regional scale is Gross Regional Product (GRP), characterizing the value of goods and services produced in all sectors of the economy in a country and intended for final consumption, capital formation and net exports (excluding imports). From a sustainability perspective, the main weakness of GRP is that it ignores depreciation of man-made assets, natural resource depletion, environmental pollution and degradation, and potential social costs such as poorer health due to exposure to occupational hazards. Several types of alternative approaches to measuring socio-economic progress are considered for six administrative units of the Ural Federal District for the period 2006–2014. The proposed alternatives to GRP as a measure of social progress are focused on natural resource depletion, environmental externalities and some human development aspects. The most promising is the use of corrected macroeconomic indicators similar to the “genuine savings” compiled by the World Bank. Genuine savings are defined in this paper as net savings (gross savings minus consumption of fixed capital) minus the consumption of natural non-renewable resources and the monetary evaluation of damages resulting from air pollution, water pollution and waste disposal. Two main groups of non-renewable resources are considered: energy resources (uranium ore, oil and natural gas) and mineral resources (iron ore, copper, and aluminum). In spite of various shortcomings, this indicator represents a considerable improvement over GRP information. For example, while GRP demonstrates steady growth between 2006 and 2014 for the main Russian oil- and gas-producing regions (the Hanty-Mansi and Yamalo-Nenets Autonomous Okrugs), genuine savings for these regions decreased over the whole period. It means that their resource-based economy could not be considered as being on a sustainable path even in the framework of
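
    A worked example of the "genuine savings" arithmetic described above; all figures are invented placeholders to show the correction, not data for any region.

```python
gross_savings          = 1000.0   # monetary units
fixed_capital_consumed =  350.0
energy_depletion       =  500.0   # uranium ore, oil and natural gas drawn down
mineral_depletion      =  120.0   # iron ore, copper and aluminium drawn down
pollution_damages      =   90.0   # air pollution, water pollution and waste-disposal damages

net_savings     = gross_savings - fixed_capital_consumed
genuine_savings = net_savings - energy_depletion - mineral_depletion - pollution_damages
print(genuine_savings)   # negative here: GRP growth would mask an unsustainable drawdown
```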

  19. Improvements in EBR-2 core depletion calculations

    International Nuclear Information System (INIS)

    Finck, P.J.; Hill, R.N.; Sakamoto, S.

    1991-01-01

    The need for accurate core depletion calculations in Experimental Breeder Reactor No. 2 (EBR-2) is discussed. Because of the unique physics characteristics of EBR-2, it is difficult to obtain accurate and computationally efficient multigroup flux predictions. This paper describes the effect of various conventional and higher order schemes for group constant generation and for flux computations; results indicate that higher-order methods are required, particularly in the outer regions (i.e. the radial blanket). A methodology based on Nodal Equivalence Theory (N.E.T.) is developed which allows retention of the accuracy of a higher order solution with the computational efficiency of a few group nodal diffusion solution. The application of this methodology to three-dimensional EBR-2 flux predictions is demonstrated; this improved methodology allows accurate core depletion calculations at reasonable cost. 13 refs., 4 figs., 3 tabs

  20. Comparative analysis of in vivo T cell depletion with radiotherapy, combination chemotherapy, and the monoclonal antibody Campath-1G, using limiting dilution methodology

    International Nuclear Information System (INIS)

    Theobald, M.; Hoffmann, T.; Bunjes, D.; Heit, W.

    1990-01-01

    We have investigated the efficacy of standard conditioning regimens for bone marrow transplantation in depleting functional T lymphocytes in vivo and have compared it with the efficacy of the monoclonal antibody Campath-1G. Using limiting dilution techniques the frequencies of proliferating T cell precursors (PTL), cytotoxic T cell precursors (CTL-p), helper T cell precursors (HTL-p), and mature helper T cells (HTL) were determined before and after treatment. Both total body irradiation and combination chemotherapy with busulfan/cyclophosphamide were highly efficient at depleting PTL, CTL-p, and HTL-p (0-4 days) but spared HTL to a variable extent (0-99.5%). In the majority of patients treated with Campath-1G a similar degree of PTL, CTL-p, and HTL-p depletion was achieved, and, in addition, HTL were effectively removed (greater than 95.5%). These results suggest that Campath-1G could be successfully employed in depleting radio- and chemotherapy-resistant host T lymphocytes prior to T-depleted bone marrow transplantation

  1. A Java-Web-Based-Learning Methodology, Case Study ...

    African Journals Online (AJOL)

    A Java-Web-Based-Learning Methodology, Case Study : Waterborne diseases. The recent advances in web technologies have opened new opportunities for computer-based-education. One can learn independently of time and place constraints, and have instantaneous access to relevant updated material at minimal cost.

  2. Agent-based Modeling Methodology for Analyzing Weapons Systems

    Science.gov (United States)

    2015-03-26

    technique involve model structure, system representation and the degree of validity, coupled with the simplicity, of the overall model. ABM is best suited... system representation of the air combat system. We feel that a simulation model that combines ABM with equation-based representation of weapons and...

  3. Towards an MDA-based development methodology for distributed applications

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Gavras, A.; Belaunde, M.; Ferreira Pires, Luis; Andrade Almeida, João

    2004-01-01

    This paper proposes a development methodology for distributed applications based on the principles and concepts of the Model-Driven Architecture (MDA). The paper identifies phases and activities of an MDA-based development trajectory, and defines the roles and products of each activity in accordance

  4. Phenomena based Methodology for Process Synthesis incorporating Process Intensification

    DEFF Research Database (Denmark)

    Lutze, Philip; Babi, Deenesh Kavi; Woodley, John

    2013-01-01

    Process intensification (PI) has the potential to improve existing as well as conceptual processes, in order to achieve a more sustainable production. PI can be achieved at different levels. That is, the unit operations, functional and/or phenomena level. The highest impact is expected by looking at processes at the lowest level of aggregation which is the phenomena level. In this paper, a phenomena based synthesis/design methodology incorporating process intensification is presented. Using this methodology, a systematic identification of necessary and desirable (integrated) phenomena as well as generation and screening of phenomena based flowsheet options are presented using a decomposition based solution approach. The developed methodology as well as necessary tools and supporting methods are highlighted through a case study involving the production of isopropyl-acetate.

  5. A Model-Based Prognostics Methodology For Electrolytic Capacitors Based On Electrical Overstress Accelerated Aging

    Data.gov (United States)

    National Aeronautics and Space Administration — A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical...

  6. A Model-based Prognostics Methodology for Electrolytic Capacitors Based on Electrical Overstress Accelerated Aging

    Data.gov (United States)

    National Aeronautics and Space Administration — A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical...

  7. Radiation protection optimization using a knowledge based methodology

    International Nuclear Information System (INIS)

    Reyes-Jimenez, J.; Tsoukalas, L.H.

    1991-01-01

    This paper presents a knowledge based methodology for radiological planning and radiation protection optimization. The cost-benefit methodology described in International Commission on Radiological Protection Report No. 37 is employed within a knowledge based framework for the purpose of planning maintenance activities while optimizing radiation protection. 1, 2 The methodology is demonstrated through an application to a heating, ventilation and air conditioning (HVAC) system. HVAC is used to reduce radioactivity concentration levels in selected contaminated multi-compartment models at nuclear power plants when higher than normal radiation levels are detected. The overall objective is to reduce personnel exposure resulting from airborne radioactivity, when routine or maintenance access is required in contaminated areas. 2 figs, 15 refs

  8. Sensitivity theory for reactor burnup analysis based on depletion perturbation theory

    International Nuclear Information System (INIS)

    Yang, Wonsik.

    1989-01-01

    The large computational effort involved in the design and analysis of advanced reactor configurations motivated the development of Depletion Perturbation Theory (DPT) for general fuel cycle analysis. The work here focused on two important advances in the current methods. First, the adjoint equations were developed for using the efficient linear flux approximation to decouple the neutron/nuclide field equations. And second, DPT was extended to the constrained equilibrium cycle which is important for the consistent comparison and evaluation of alternative reactor designs. Practical strategies were formulated for solving the resulting adjoint equations and a computer code was developed for practical applications. In all cases analyzed, the sensitivity coefficients generated by DPT were in excellent agreement with the results of exact calculations. The work here indicates that for a given core response, the sensitivity coefficients to all input parameters can be computed by DPT with a computational effort similar to a single forward depletion calculation
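
    For orientation, a relative sensitivity coefficient of a core response R to an input parameter alpha has the generic first-order form below (our notation, not the report's); the advantage claimed above is that the DPT adjoint solution delivers the whole set of such coefficients for a fixed response at roughly the cost of one forward depletion run, instead of one perturbed run per parameter.

```latex
S_{R,\alpha} \;=\; \frac{\alpha}{R}\,\frac{\partial R}{\partial \alpha}
\;\approx\; \frac{\alpha}{R}\,\frac{R(\alpha+\Delta\alpha)-R(\alpha)}{\Delta\alpha}
```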

  9. Deep-depletion physics-based analytical model for scanning capacitance microscopy carrier profile extraction

    International Nuclear Information System (INIS)

    Wong, K. M.; Chim, W. K.

    2007-01-01

    An approach for fast and accurate carrier profiling using deep-depletion analytical modeling of scanning capacitance microscopy (SCM) measurements is shown for an ultrashallow p-n junction with a junction depth of less than 30 nm and a profile steepness of about 3 nm per decade change in carrier concentration. In addition, the analytical model is also used to extract the SCM dopant profiles of three other p-n junction samples with different junction depths and profile steepnesses. The deep-depletion effect arises from rapid changes in the bias applied between the sample and probe tip during SCM measurements. The extracted carrier profile from the model agrees reasonably well with the more accurate carrier profile from inverse modeling and the dopant profile from secondary ion mass spectroscopy measurements

  10. A perturbation-based substep method for coupled depletion Monte-Carlo codes

    International Nuclear Information System (INIS)

    Kotlyar, Dan; Aufiero, Manuele; Shwageraus, Eugene; Fratoni, Massimiliano

    2017-01-01

    Highlights: • The GPT method allows calculation of the sensitivity coefficients to any perturbation. • Full Jacobian of sensitivities, cross sections (XS) to concentrations, may be obtained. • The time dependent XS is obtained by combining the GPT and substep methods. • The proposed GPT substep method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. - Abstract: Coupled Monte Carlo (MC) methods are becoming widely used in reactor physics analysis and design. Many research groups have therefore developed their own coupled MC depletion codes. Typically, in such coupled code systems, neutron fluxes and cross sections are provided to the depletion module by solving a static neutron transport problem. These fluxes and cross sections are representative only of a specific time-point. In reality however, both quantities would change through the depletion time interval. Recently, a Generalized Perturbation Theory (GPT) equivalent method that relies on a collision history approach was implemented in the Serpent MC code. This method was used here to calculate the sensitivity of each nuclide and reaction cross section due to the change in concentration of every isotope in the system. The coupling method proposed in this study also uses the substep approach, which incorporates these sensitivity coefficients to account for temporal changes in cross sections. As a result, a notable improvement in time dependent cross section behavior was obtained. The method was implemented in a wrapper script that couples Serpent with an external depletion solver. The performance of this method was compared with other existing methods. The results indicate that the proposed method requires substantially fewer MC transport solutions to achieve the same accuracy.
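
    A minimal sketch of the kind of correction described above, with invented numbers (not Serpent/GPT output): a one-group cross section is updated inside the time step from precomputed sensitivities of the cross section to the nuclide densities, so no additional transport solve is needed.

```python
import numpy as np

sigma_ref = 48.0                                     # one-group cross section (barns) at the step start
N_ref     = np.array([6.5e20, 2.1e22, 1.2e19])       # nuclide densities at the step start (atoms/cm^3)
dsigma_dN = np.array([-1.5e-21, 2.0e-23, -8.0e-20])  # assumed sensitivities d(sigma)/d(N_i), barns*cm^3

def corrected_sigma(N_current):
    """First-order update of the cross section for the current sub-step densities."""
    return sigma_ref + dsigma_dN @ (N_current - N_ref)

# Example sub-step state: some absorber has burned out, some plutonium has built in.
N_sub = np.array([6.3e20, 2.099e22, 1.6e19])
print(corrected_sigma(N_sub))
```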

  11. Optimal (Solvent) Mixture Design through a Decomposition Based CAMD methodology

    DEFF Research Database (Denmark)

    Achenie, L.; Karunanithi, Arunprakash T.; Gani, Rafiqul

    2004-01-01

    Computer Aided Molecular/Mixture design (CAMD) is one of the most promising techniques for solvent design and selection. A decomposition based CAMD methodology has been formulated where the mixture design problem is solved as a series of molecular and mixture design sub-problems. This approach is...

  12. Modernising educational programmes in ICT based on the Tuning methodology

    Directory of Open Access Journals (Sweden)

    Alexander Bedny

    2014-07-01

    An analysis is presented of the experience of modernising undergraduate educational programs using the TUNING methodology, based on the example of the area of studies “Fundamental computer science and information technology” (FCSIT) implemented at Lobachevsky State University of Nizhni Novgorod (Russia). The algorithm for reforming curricula for the subject area of information technology in accordance with the TUNING methodology is explained. A comparison is drawn between the existing Russian and European standards in the area of ICT education, including the European e-Competence Framework, with the focus on relevant competences. Some guidelines for the preparation of educational programmes are also provided.

  13. A strategy for selective detection based on interferent depleting and redox cycling using the plane-recessed microdisk array electrodes

    International Nuclear Information System (INIS)

    Zhu Feng; Yan Jiawei; Lu Miao; Zhou Yongliang; Yang Yang; Mao Bingwei

    2011-01-01

    Highlights: → A novel strategy based on a combination of interferent depleting and redox cycling is proposed for the plane-recessed microdisk array electrodes. → The strategy removes the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction. → The electrodes enhance the current signal by redox cycling. → The electrodes can work regardless of the reversibility of interfering species. - Abstract: The fabrication, characterization and application of the plane-recessed microdisk array electrodes for selective detection are demonstrated. The electrodes, fabricated by lithographic microfabrication technology, are composed of a planar film electrode and a 32 x 32 recessed microdisk array electrode. Different from the commonly used redox cycling operating mode for array configurations such as interdigitated array electrodes, a novel strategy based on a combination of interferent depleting and redox cycling is proposed for the electrodes with an appropriate configuration. The planar film electrode (the plane electrode) is used to deplete the interferent in the diffusion layer. The recessed microdisk array electrode (the microdisk array), located within the diffusion layer of the plane electrode, detects the target analyte in the interferent-depleted diffusion layer. In addition, the microdisk array overcomes the disadvantage of low current signal for a single microelectrode. Moreover, the current signal of the target analyte that undergoes reversible electron transfer can be enhanced due to the redox cycling between the plane electrode and the microdisk array. Based on the above working principle, the plane-recessed microdisk array electrodes remove the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction, which is a limitation of the single redox cycling operating mode. The

  14. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times 100 h, 5 years, and 15 years. These benchmark cases are analyzed with PARAGON and the SCALE package and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess that the 5% decrement approach is conservative for determining depletion uncertainty

  15. Guidelines for reporting evaluations based on observational methodology.

    Science.gov (United States)

    Portell, Mariona; Anguera, M Teresa; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana

    2015-01-01

    Observational methodology is one of the most suitable research designs for evaluating fidelity of implementation, especially in complex interventions. However, the conduct and reporting of observational studies is hampered by the absence of specific guidelines, such as those that exist for other evaluation designs. This lack of specific guidance poses a threat to the quality and transparency of these studies and also constitutes a considerable publication hurdle. The aim of this study thus was to draw up a set of proposed guidelines for reporting evaluations based on observational methodology. The guidelines were developed by triangulating three sources of information: observational studies performed in different fields by experts in observational methodology, reporting guidelines for general studies and studies with similar designs to observational studies, and proposals from experts in observational methodology at scientific meetings. We produced a list of guidelines grouped into three domains: intervention and expected outcomes, methods, and results. The result is a useful, carefully crafted set of simple guidelines for conducting and reporting observational studies in the field of program evaluation.

  16. A methodology for strain-based fatigue reliability analysis

    International Nuclear Information System (INIS)

    Zhao, Y.X.

    2000-01-01

    A significant scatter of the cyclic stress-strain (CSS) responses should be noted for a nuclear reactor material, 1Cr18Ni9Ti pipe-weld metal. The existence of this scatter implies that a random applied cyclic strain history will be introduced under any loading mode, even a deterministic loading history. A non-conservative evaluation might therefore be obtained in practice if the scatter is neglected. A methodology for strain-based fatigue reliability analysis that takes the scatter into account is developed. The responses are approximately modeled by probability-based CSS curves of the Ramberg-Osgood relation. The strain-life data are modeled, similarly, by probability-based strain-life curves of the Coffin-Manson law. The reliability assessment is constructed by considering the interference between the random applied fatigue strain history and the capacity history. Probability density functions of the applied and capacity histories are given analytically. The methodology can be conveniently reduced to the case of a deterministic CSS relation, as in existing methods. The non-conservatism of the deterministic CSS relation and the applicability of the present methodology are indicated by an analysis of the material test results.
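    For reference, the deterministic forms of the two relations named above are the Ramberg-Osgood cyclic stress-strain curve and the Coffin-Manson strain-life curve (with Basquin's elastic term); the paper's contribution is to treat their coefficients probabilistically, which is not shown here.

```latex
% Ramberg-Osgood cyclic stress-strain relation (strain amplitude vs. stress amplitude):
%   E  elastic modulus, K' cyclic strength coefficient, n' cyclic strain-hardening exponent
\[
  \frac{\Delta\varepsilon}{2} \;=\; \frac{\Delta\sigma}{2E}
  \;+\; \left(\frac{\Delta\sigma}{2K'}\right)^{1/n'}
\]
% Coffin-Manson strain-life relation with Basquin's elastic term:
%   \sigma_f' fatigue strength coefficient, b fatigue strength exponent,
%   \varepsilon_f' fatigue ductility coefficient, c fatigue ductility exponent, N_f cycles to failure
\[
  \frac{\Delta\varepsilon}{2} \;=\; \frac{\sigma_f'}{E}\,(2N_f)^{b}
  \;+\; \varepsilon_f'\,(2N_f)^{c}
\]
```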

  17. Regulatory considerations and quality assurance of depleted uranium based radiography cameras

    International Nuclear Information System (INIS)

    Sapkal, Jyotsna A.; Yadav, R.K.B.; Amrota, C.T.; Singh, Pratap; GopaIakrishanan, R.H.; Patil, B.N.; Mane, Nilesh

    2016-01-01

    Radiography cameras with depleted uranium (DU) as the shielding material are used for containment of the iridium ( 192 Ir) source. The DU shielding surrounds the titanium 'S' tube through which the encapsulated 192 Ir source travels along with the pigtail. As per the guidelines, the integrity of the DU shielding is required to be checked periodically by monitoring for transferable alpha contamination inside the 'S' tube. This paper briefly describes the method followed for collecting samples from inside the 'S' tube. The samples were analysed for transferable gross alpha contamination using an alpha scintillation (ALSCIN) counter. The gross alpha contamination in the 'S' tube was found to be less than the USNRC value recommended for discarding a radiography camera. IAEA recommendations related to transferable contamination and AERB guidelines on the quality assurance (QA) requirements for radiography cameras were also studied

  18. NOMAD: a nodal microscopic analysis method for nuclear fuel depletion

    International Nuclear Information System (INIS)

    Rajic, H.L.; Ougouag, A.M.

    1987-01-01

    Recently developed assembly homogenization techniques have made very efficient global burnup calculations based on modern nodal methods possible. There are two possible ways of modeling the global depletion process: macroscopic and microscopic depletion models. Using a microscopic global depletion approach, NOMAD (NOdal Microscopic Analysis Method for Nuclear Fuel Depletion), a multigroup, two- and three-dimensional, multicycle depletion code, was devised. The code uses the ILLICO nodal diffusion model. The formalism of the ILLICO methodology is extended to treat changes in the macroscopic cross sections during a depletion cycle without recomputing the coupling coefficients, which results in a computationally very efficient method. The code was tested against a well-known depletion benchmark problem in which a two-dimensional pressurized water reactor is depleted through two cycles. Both cycles were run with 1 x 1 and 2 x 2 nodes per assembly. The one-node-per-assembly solution gives unacceptable results, while the 2 x 2 solution gives relative power errors consistently below 2%
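    To make the idea of a microscopic (nuclide-wise) depletion step concrete, the sketch below integrates a two-nuclide chain over one burnup step at constant flux using the matrix exponential of the depletion matrix; it is a generic illustration with hypothetical data, not the NOMAD/ILLICO algorithm.

```python
import numpy as np
from scipy.linalg import expm

# One microscopic depletion step for a single node: parent -> daughter chain under a
# constant node-average flux, dN/dt = A N, advanced with the matrix exponential.
phi = 3.0e14            # node-average flux [n/cm^2 s] (hypothetical)
sig_p = 50.0e-24        # parent absorption cross section [cm^2] (hypothetical)
sig_d = 100.0e-24       # daughter absorption cross section [cm^2] (hypothetical)
branch = 0.8            # fraction of parent absorptions that produce the daughter

A = np.array([[-sig_p * phi,           0.0],
              [ branch * sig_p * phi, -sig_d * phi]])

N0 = np.array([1.0e21, 0.0])        # number densities at start of step [atoms/cm^3]
dt = 30.0 * 24 * 3600               # 30-day depletion step [s]
N1 = expm(A * dt) @ N0              # densities at end of step, used to update cross sections
print(N1)
```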

  19. Structural health monitoring methodology for aircraft condition-based maintenance

    Science.gov (United States)

    Saniger, Jordi; Reithler, Livier; Guedra-Degeorges, Didier; Takeda, Nobuo; Dupuis, Jean Pierre

    2001-06-01

    Reducing maintenance costs while keeping a constant level of safety is a major issue for air forces and airlines. The long-term perspective is to implement condition-based maintenance to guarantee a constant safety level while decreasing maintenance costs. For this purpose, the development of a generalized Structural Health Monitoring System (SHMS) is needed. The objective of such a system is to localize damage and assess its severity with enough accuracy to allow low-cost corrective actions. The present paper describes an SHMS based on acoustic emission technology, a choice driven by its reliability and wide use in the aerospace industry. The described SHMS uses a new learning methodology that relies on the generation of artificial acoustic emission events on the structure and on an acoustic emission sensor network. The calibrated acoustic emission events picked up by the sensors constitute the knowledge set on which the system relies. With this methodology, the anisotropy of composite structures is taken into account, thus avoiding the major cause of error of classical localization methods. Moreover, the approach adapts to different structures because it does not rely on any particular model but on measured data. The acquired data are processed, and each event's location and corrected amplitude are computed. The methodology has been demonstrated, and experimental tests on elementary samples showed an accuracy of 1 cm.

  20. A Network Based Methodology to Reveal Patterns in Knowledge Transfer

    Directory of Open Access Journals (Sweden)

    Orlando López-Cruz

    2015-12-01

    Full Text Available This paper motivates, presents and demonstrates in use a methodology based on complex network analysis to support research aimed at identifying sources in the process of knowledge transfer at the interorganizational level. The importance of this methodology is that it states a unified model for revealing knowledge-sharing patterns and for comparing results from multiple studies on data from different periods of time and different sectors of the economy. The methodology does not address the underlying statistical processes; for those, national statistics departments (NSDs) provide documents and tools on their websites. Instead, this proposal provides a guide for modelling inferences drawn from the processed data, revealing links between sources and recipients of the knowledge being transferred, where the recipient identifies the source as the main source for new knowledge creation. Some national statistics departments set as the objective of these surveys the characterization of innovation dynamics in firms and the analysis of the use of public support instruments, and from this characterization scholars conduct different studies. Measures of the dimensions of the network composed of manufacturing firms and other organizations form the basis for inquiring into the structure that emerges when firms take ideas from other organizations to initiate innovations. These two sets of actors form a two-mode network: a link between two actors (network nodes) connects a source of an idea, namely an organization or an event organized by organizations that "provide" ideas, with a destination, namely the firm that receives them. The resulting design satisfies the objective of being a methodological model for identifying the sources of knowledge that is effectively used in innovation.

  1. Base cation depletion, eutrophication and acidification of species-rich grasslands in response to long-term simulated nitrogen deposition

    Energy Technology Data Exchange (ETDEWEB)

    Horswill, Paul [Department of Animal and Plant Sciences, University of Sheffield, Alfred Denny Building, Western Bank, Sheffield S10 2TN (United Kingdom)], E-mail: paul.horswill@naturalengland.org.uk; O' Sullivan, Odhran; Phoenix, Gareth K.; Lee, John A.; Leake, Jonathan R. [Department of Animal and Plant Sciences, University of Sheffield, Alfred Denny Building, Western Bank, Sheffield S10 2TN (United Kingdom)

    2008-09-15

    Pollutant nitrogen deposition effects on soil and foliar element concentrations were investigated in acidic and limestone grasslands, located in one of the most nitrogen and acid rain polluted regions of the UK, using plots treated for 8-10 years with 35-140 kg N ha{sup -2} y{sup -1} as NH{sub 4}NO{sub 3}. Historic data suggests both grasslands have acidified over the past 50 years. Nitrogen deposition treatments caused the grassland soils to lose 23-35% of their total available bases (Ca, Mg, K, and Na) and they became acidified by 0.2-0.4 pH units. Aluminium, iron and manganese were mobilised and taken up by limestone grassland forbs and were translocated down the acid grassland soil. Mineral nitrogen availability increased in both grasslands and many species showed foliar N enrichment. This study provides the first definitive evidence that nitrogen deposition depletes base cations from grassland soils. The resulting acidification, metal mobilisation and eutrophication are implicated in driving floristic changes. - Nitrogen deposition causes base cation depletion, acidification and eutrophication of semi-natural grassland soils.

  2. OBEST: The Object-Based Event Scenario Tree Methodology

    International Nuclear Information System (INIS)

    WYSS, GREGORY D.; DURAN, FELICIA A.

    2001-01-01

    Event tree analysis and Monte Carlo-based discrete event simulation have been used in risk assessment studies for many years. This report details how features of these two methods can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology with some of the best features of each. The resultant Object-Based Event Scenarios Tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible (especially those that exhibit inconsistent or variable event ordering, which are difficult to represent in an event tree analysis). Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST method uses a recursive algorithm to solve the object model and identify all possible scenarios and their associated probabilities. Since scenario likelihoods are developed directly by the solution algorithm, they need not be computed by statistical inference based on Monte Carlo observations (as required by some discrete event simulation methods). Thus, OBEST is not only much more computationally efficient than these simulation methods, but it also discovers scenarios that have extremely low probabilities as a natural analytical result--scenarios that would likely be missed by a Monte Carlo-based method. This report documents the OBEST methodology, the demonstration software that implements it, and provides example OBEST models for several different application domains, including interactions among failing interdependent infrastructure systems, circuit analysis for fire risk evaluation in nuclear power plants, and aviation safety studies
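    The core idea of probabilistic branching with recursive scenario enumeration can be sketched as follows (a generic Python illustration with made-up objects and probabilities, not the OBEST implementation itself): every scenario is one branch choice per object, and its likelihood is the product of the branch probabilities, so rare scenarios are produced analytically rather than sampled.

```python
# Each "object" exposes possible events with branch probabilities (hypothetical values).
objects = [
    ("pump",    {"runs": 0.95, "fails": 0.05}),
    ("breaker", {"closes": 0.99, "sticks": 0.01}),
]

def enumerate_scenarios(remaining, path=(), prob=1.0):
    """Recursively walk the object model, yielding (scenario, likelihood) pairs."""
    if not remaining:
        yield path, prob
        return
    name, branches = remaining[0]
    for event, p in branches.items():
        yield from enumerate_scenarios(remaining[1:], path + ((name, event),), prob * p)

for scenario, likelihood in enumerate_scenarios(objects):
    print(scenario, likelihood)   # includes the 5e-4 double-failure scenario exactly
```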

  3. MS-based analytical methodologies to characterize genetically modified crops.

    Science.gov (United States)

    García-Cañas, Virginia; Simó, Carolina; León, Carlos; Ibáñez, Elena; Cifuentes, Alejandro

    2011-01-01

    The development of genetically modified crops has had a great impact on the agriculture and food industries. However, the development of any genetically modified organism (GMO) requires the application of analytical procedures to confirm the equivalence of the GMO compared to its isogenic non-transgenic counterpart. Moreover, the use of GMOs in foods and agriculture faces numerous criticisms from consumers and ecological organizations that have led some countries to regulate their production, growth, and commercialization. These regulations have brought about the need of new and more powerful analytical methods to face the complexity of this topic. In this regard, MS-based technologies are increasingly used for GMOs analysis to provide very useful information on GMO composition (e.g., metabolites, proteins). This review focuses on the MS-based analytical methodologies used to characterize genetically modified crops (also called transgenic crops). First, an overview on genetically modified crops development is provided, together with the main difficulties of their analysis. Next, the different MS-based analytical approaches applied to characterize GM crops are critically discussed, and include "-omics" approaches and target-based approaches. These methodologies allow the study of intended and unintended effects that result from the genetic transformation. This information is considered to be essential to corroborate (or not) the equivalence of the GM crop with its isogenic non-transgenic counterpart. Copyright © 2010 Wiley Periodicals, Inc.

  4. Evaluation of analytical performance based on partial order methodology.

    Science.gov (United States)

    Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin

    2015-01-01

    Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze analytical performance appropriately. Here, partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially suited to simplifying the relative comparison of objects according to their data profiles (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility for evaluating analytical performance. In the present work, data as provided by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example. However, the presented scheme is likewise applicable to the comparison of analytical methods or simply as a tool for the optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data provided, in order to evaluate the analytical performance taking all indicators into account simultaneously and thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which are used simultaneously for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently (2) a "distance" to the reference laboratory, and (3) a classification based on the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
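    A minimal sketch of the ordinal comparison described above (hypothetical laboratory profiles; "smaller is better" for each indicator) is given below: a laboratory dominates another only if it is no worse on every indicator, and otherwise the pair stays incomparable.

```python
# Indicator profiles per laboratory: (|bias from reference|, standard deviation, |skewness|).
labs = {
    "Lab1": (0.02, 0.10, 0.3),
    "Lab2": (0.05, 0.08, 0.1),
    "Lab3": (0.06, 0.15, 0.4),
}

def dominates(a, b):
    """True if profile a is at least as good as b on every indicator and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and a != b

order = [(p, q) for p in labs for q in labs if dominates(labs[p], labs[q])]
print(order)   # Lab1 and Lab2 are incomparable; both dominate Lab3
```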

  5. Development of a statistically based access delay timeline methodology.

    Energy Technology Data Exchange (ETDEWEB)

    Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane; Hendrickson, Stacey M. Langfitt

    2013-02-01

    The charter for adversarial delay is to hinder access to critical resources through the use of physical systems increasing an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating times required to complete each task, with little regard for uncertainty, complexity, or the decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methodologies, taking into account small sample size, expert judgment, human factors and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results in making informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness with lower cost.
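    The contrast with a single worst-case sum can be illustrated with a small Monte Carlo sketch (hypothetical lognormal task-time distributions; the Bayesian and expert-judgment machinery of the actual methodology is not reproduced): the total-delay distribution, rather than one number, is what gets related to system risk.

```python
import numpy as np

rng = np.random.default_rng(1)
# (log-median seconds, log-standard deviation) per sequential barrier task -- hypothetical
task_params = [(np.log(60.0), 0.4), (np.log(120.0), 0.3), (np.log(45.0), 0.5)]

# Sample each task time and sum along the path to get the total-delay distribution.
samples = sum(rng.lognormal(mu, sigma, 100_000) for mu, sigma in task_params)
p10, p50, p90 = np.percentile(samples, [10, 50, 90])
print(f"total delay: P10 = {p10:.0f} s, median = {p50:.0f} s, P90 = {p90:.0f} s")
```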

  6. Memristive device based on a depletion-type SONOS field effect transistor

    Science.gov (United States)

    Himmel, N.; Ziegler, M.; Mähne, H.; Thiem, S.; Winterfeld, H.; Kohlstedt, H.

    2017-06-01

    State-of-the-art SONOS (silicon-oxide-nitride-oxide-polysilicon) field effect transistors were operated in a memristive switching mode. The circuit design is a variation of the MemFlash concept, and the particular properties of depletion-type SONOS transistors were taken into account. The transistor was externally wired with a resistively shunted pn-diode. Experimental current-voltage curves show analog bipolar switching characteristics within a bias voltage range of ±10 V, exhibiting a pronounced asymmetric hysteresis loop. The experimental data are confirmed by SPICE simulations. The underlying memristive mechanism is purely electronic, which eliminates an initial forming step for the as-fabricated cells. This fact, together with reasonable design flexibility, in particular to adjust the maximum R ON/R OFF ratio, makes these cells attractive for neuromorphic applications. The relatively large set and reset voltages around ±10 V might be decreased by using thinner gate oxides. The all-electric operation principle, in combination with an established silicon manufacturing process for SONOS devices at the Semiconductor Foundry X-FAB, promises reliable operation, low parameter spread and high integration density.

  7. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    Science.gov (United States)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.; Viallet, M.

    2017-08-01

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ˜50 Myr to ˜4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  8. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    International Nuclear Information System (INIS)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Viallet, M.; Folini, D.; Popov, M. V.; Walder, R.

    2017-01-01

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  9. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Viallet, M. [Astrophysics Group, University of Exeter, Exeter EX4 4QL (United Kingdom); Folini, D.; Popov, M. V.; Walder, R., E-mail: i.baraffe@ex.ac.uk [Ecole Normale Supérieure de Lyon, CRAL, UMR CNRS 5574, F-69364 Lyon Cedex 07 (France)

    2017-08-10

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  10. Investment Strategies Optimization based on a SAX-GA Methodology

    CERN Document Server

    Canelas, António M L; Horta, Nuno C G

    2013-01-01

    This book presents a new computational finance approach combining a Symbolic Aggregate approXimation (SAX) technique with an optimization kernel based on genetic algorithms (GA). While the SAX representation is used to describe the financial time series, the evolutionary optimization kernel is used to identify the most relevant patterns and generate investment rules. The proposed approach considers several different chromosome structures in order to achieve better results on the trading platform. The methodology presented in this book has great potential in investment markets.
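    The SAX step referred to above can be sketched as follows (a generic illustration on a synthetic series; the GA rule-optimization layer and the book's particular chromosome structures are not shown): z-normalize the series, reduce it by piecewise aggregate approximation, and map segment means to symbols with Gaussian breakpoints.

```python
import numpy as np

BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])   # standard breakpoints for a 4-symbol alphabet

def sax(series, n_segments, alphabet="abcd"):
    """Symbolic Aggregate approXimation of a 1-D series."""
    x = (series - series.mean()) / series.std()                           # z-normalisation
    means = np.array([s.mean() for s in np.array_split(x, n_segments)])   # PAA reduction
    return "".join(alphabet[i] for i in np.searchsorted(BREAKPOINTS, means))

prices = np.cumsum(np.random.default_rng(0).normal(size=240))   # synthetic price series
print(sax(prices, n_segments=12))                                # 12-letter SAX word
```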

  11. Model identification methodology for fluid-based inerters

    Science.gov (United States)

    Liu, Xiaofu; Jiang, Jason Zheng; Titurus, Branislav; Harrison, Andrew

    2018-06-01

    The inerter is the mechanical dual of the capacitor via the force-current analogy. It has the property that the force across the terminals is proportional to their relative acceleration. Compared with flywheel-based inerters, fluid-based forms have the advantages of improved durability, inherent damping and simplicity of design. In order to improve the understanding of the physical behaviour of this fluid-based device, especially that caused by the hydraulic resistance and inertial effects in the external tube, this work proposes a comprehensive model identification methodology. Firstly, a modelling procedure is established, which allows the topological arrangement of the mechanical networks to be obtained by mapping the damping, inertance and stiffness effects directly to their respective hydraulic counterparts. Secondly, an experimental sequence is followed, which separates the identification of friction, stiffness and the various damping effects. Furthermore, an experimental set-up is introduced in which two pressure gauges are used to accurately measure the pressure drop across the external tube. Theoretical models with improved confidence are obtained using the proposed methodology for a helical-tube fluid inerter prototype. The sources of the remaining discrepancies are further analysed.

  12. SMITHERS: An object-oriented modular mapping methodology for MCNP-based neutronic–thermal hydraulic multiphysics

    International Nuclear Information System (INIS)

    Richard, Joshua; Galloway, Jack; Fensin, Michael; Trellue, Holly

    2015-01-01

    Highlights: • A modular mapping methodology for neutronic-thermal hydraulic nuclear reactor multiphysics, SMITHERS, has been developed. • Written in Python, SMITHERS takes a novel object-oriented approach for facilitating data transitions between solvers. This approach enables near-instant compatibility with existing MCNP/MONTEBURNS input decks. • It also allows for coupling with thermal-hydraulic solvers of various levels of fidelity. • Two BWR and PWR test problems are presented for verifying correct functionality of the SMITHERS code routines. - Abstract: A novel object-oriented modular mapping methodology for externally coupled neutronics–thermal hydraulics multiphysics simulations was developed. The Simulator using MCNP with Integrated Thermal-Hydraulics for Exploratory Reactor Studies (SMITHERS) code performs on-the-fly mapping of material-wise power distribution tallies implemented by MCNP-based neutron transport/depletion solvers for use in estimating coolant temperature and density distributions with a separate thermal-hydraulic solver. The key development of SMITHERS is that it reconstructs the hierarchical geometry structure of the material-wise power generation tallies from the depletion solver automatically, with only a modicum of additional information required from the user. Additionally, it performs the basis mapping from the combinatorial geometry of the depletion solver to the required geometry of the thermal-hydraulic solver in a generalizable manner, such that it can transparently accommodate varying levels of thermal-hydraulic solver geometric fidelity, from the nodal geometry of multi-channel analysis solvers to the pin-cell level of discretization for sub-channel analysis solvers. The mapping methodology was specifically developed to be flexible enough such that it could successfully integrate preexisting depletion solver case files with different thermal-hydraulic solvers. This approach allows the user to tailor the selection of a

  13. An evolving systems-based methodology for healthcare planning.

    Science.gov (United States)

    Warwick, Jon; Bell, Gary

    2007-01-01

    Healthcare planning seems beset with problems at all hierarchical levels. These are caused by the 'soft' nature of many of the issues present in healthcare planning and the high levels of complexity inherent in healthcare services. There has, in recent years, been a move to utilize systems thinking ideas in an effort to gain a better understanding of the forces at work within the healthcare environment and these have had some success. This paper argues that systems-based methodologies can be further enhanced by metrication and modeling which assist in exploring the changed emergent behavior of a system resulting from management intervention. The paper describes the Holon Framework as an evolving systems-based approach that has been used to help clients understand complex systems (in the education domain) that would have application in the analysis of healthcare problems.

  14. Methodology for cloud-based design of robots

    Science.gov (United States)

    Ogorodnikova, O. M.; Vaganov, K. A.; Putimtsev, I. D.

    2017-09-01

    This paper presents some important results of the cloud-based design of a robot arm by a group of students. A methodology for cloud-based design was developed and used to initiate an interdisciplinary project on the research and development of a specific manipulator. All project data files were hosted by the Ural Federal University data center. The 3D (three-dimensional) model of the robot arm was created using Siemens PLM (Product Lifecycle Management) software and structured as a complex mechatronic product by means of the Siemens Teamcenter thin client; all processes were performed in the cloud. The robot arm was designed to load blanks of up to 1 kg into the workspace of a milling machine for student research.

  15. SPRINT RA 230: Methodology for knowledge based developments

    International Nuclear Information System (INIS)

    Wallsgrove, R.; Munro, F.

    1991-01-01

    SPRINT RA 230: A Methodology for Knowledge Based Developments, funded by the European Commission, was set up to investigate the use of KBS in the engineering industry. Its aim was to find out how KBS were currently used and what people's conceptions of them were, to disseminate current knowledge, and to recommend further research in this area. A survey (by post and face-to-face interviews) was carried out under SPRINT RA 230 to investigate requirements for more intelligent software. In the survey we looked both at how people think about Knowledge Based Systems (KBS), what they find useful and what they do not, and at which current expertise problems or limitations of conventional software might suggest KBS solutions. (orig./DG)

  16. Methodology of Maqamat Hamadani and Hariri Based on Buseman’s statistical methodology

    Directory of Open Access Journals (Sweden)

    Hamed Sedghi

    2016-02-01

    Full Text Available Abstract: Stylistics can be defined as the analysis and interpretation of expression and the different forms of speech, based on linguistic elements. The German theorist Buseman devised his thesis on statistical style based on the ratio of verbs to adjectives, generalizing over a variety of literary and non-literary genres in German language and literature. According to Buseman, increasing the number of verbs in verses brings a text closer to the literary style, whereas increasing the number of adjectives brings it closer to a scientific or subjective style. The most important achievements of Buseman's statistical methodology can be considered as: (a) comparison of authors' styles, literary periods and genres; (b) study of the language system and the variety of words; and (c) classification and grading of differences or similarities among works, periods and genres. The purpose of this study: a stylistic analysis of the Maqamat of Hamadani and al-Hariri based on the statistical model of Buseman. The questions proposed in this study: (a) How effective is the use of statistical methods in identifying and analyzing varieties of literature, including Maqamat? (b) How effective is the quantitative scope of verbs and adjectives, as two key parameters, in determining the style of literary works? And (c) which element of fiction is most effective in enriching the literary approach? The specific research method is statistical analysis; we initially identified and classified the number of verbs and adjectives in the fifty-one Maqamehs of Hamadani and the fifty Maqamehs of Hariri; the scope of verb and adjective usage is then shown in the form of tables and graphs, after which the style of the literary works is assessed based on the use of verbs and adjectives. The research findings show that all the Hamadani and Hariri Maqamat quantitatively benefit from a highly active approach in the use of the verb. At 46 Maqameh Hamadani, and 48 Maqameh Hariri the

  17. Electroencephalogram-based methodology for determining unconsciousness during depopulation.

    Science.gov (United States)

    Benson, E R; Alphin, R L; Rankin, M K; Caputo, M P; Johnson, A L

    2012-12-01

    When an avian influenza or virulent Newcastle disease outbreak occurs within commercial poultry, key steps involved in managing a fast-moving poultry disease can include: education; biosecurity; diagnostics and surveillance; quarantine; elimination of infected poultry through depopulation or culling, disposal, and disinfection; and decreasing host susceptibility. Available mass emergency depopulation procedures include whole-house gassing, partial-house gassing, containerized gassing, and water-based foam. To evaluate potential depopulation methods, it is often necessary to determine the time to the loss of consciousness (LOC) in poultry. Many current approaches to evaluating LOC are qualitative and require visual observation of the birds. This study outlines an electroencephalogram (EEG) frequency domain-based approach for determining the point at which a bird loses consciousness. In this study, commercial broilers were used to develop the methodology, and the methodology was validated with layer hens. In total, 42 data sets from 13 broilers aged 5-10 wk and 12 data sets from four spent hens (age greater than 1 yr) were collected and analyzed. A wireless EEG transmitter was surgically implanted, and each bird was monitored during individual treatment with isoflurane anesthesia. EEG data were evaluated using a frequency-based approach. The alpha/delta (A/D, alpha: 8-12 Hz, delta: 0.5-4 Hz) ratio and loss of posture (LOP) were used to determine the point at which the birds became unconscious. Unconsciousness, regardless of the method of induction, causes suppression in alpha and a rise in the delta frequency component, and this change is used to determine unconsciousness. There was no statistically significant difference between time to unconsciousness as measured by A/D ratio or LOP, and the A/D values were correlated at the times of unconsciousness. The correlation between LOP and A/D ratio indicates that the methodology is appropriate for determining
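    The frequency-domain marker described above can be sketched as an alpha/delta band-power ratio computed from a Welch periodogram (band edges follow the abstract; the sampling rate, segment length and the placeholder signal are illustrative assumptions, not values from the study):

```python
import numpy as np
from scipy.signal import welch

def alpha_delta_ratio(eeg, fs=250.0):
    """Alpha (8-12 Hz) to delta (0.5-4 Hz) power ratio of one EEG segment."""
    f, pxx = welch(eeg, fs=fs, nperseg=int(4 * fs))
    alpha = pxx[(f >= 8.0) & (f <= 12.0)].sum()
    delta = pxx[(f >= 0.5) & (f <= 4.0)].sum()
    return alpha / delta

segment = np.random.default_rng(0).normal(size=5000)   # placeholder signal, not real EEG
print(alpha_delta_ratio(segment))   # a sustained fall in this ratio marks loss of consciousness
```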

  18. Nuclear insurance risk assessment using risk-based methodology

    International Nuclear Information System (INIS)

    Wendland, W.G.

    1992-01-01

    This paper presents American Nuclear Insurers' (ANI's) and Mutual Atomic Energy Liability Underwriters' (MAELU's) process and experience for conducting nuclear insurance risk assessments using a risk-based methodology. The process is primarily qualitative and uses traditional insurance risk assessment methods and an approach developed under the auspices of the American Society of Mechanical Engineers (ASME) in which ANI/MAELU is an active sponsor. This process assists ANI's technical resources in identifying where to look for insurance risk in an industry in which insurance exposure tends to be dynamic and nonactuarial. The process is an evolving one that also seeks to minimize the impact on insureds while maintaining a mutually agreeable risk tolerance

  19. Fuzzy-Rule-Based Object Identification Methodology for NAVI System

    Directory of Open Access Journals (Sweden)

    Yaacob Sazali

    2005-01-01

    Full Text Available We present an object identification methodology applied in a navigation assistance for visually impaired (NAVI) system. The NAVI has a single board processing system (SBPS), a digital video camera mounted on headgear, and a pair of stereo earphones. The captured image from the camera is processed by the SBPS to generate a specially structured stereo sound suitable for vision-impaired people in understanding the presence of objects/obstacles in front of them. The image processing stage is designed to identify the objects in the captured image. Edge detection and edge-linking procedures are applied in the processing of the image. A concept of object preference is included in the image processing scheme, and this concept is realized using a fuzzy-rule base. The blind users are trained with the stereo sound produced by NAVI to achieve collision-free autonomous navigation.

  20. Fuzzy-Rule-Based Object Identification Methodology for NAVI System

    Science.gov (United States)

    Nagarajan, R.; Sainarayanan, G.; Yaacob, Sazali; Porle, Rosalyn R.

    2005-12-01

    We present an object identification methodology applied in a navigation assistance for visually impaired (NAVI) system. The NAVI has a single board processing system (SBPS), a digital video camera mounted headgear, and a pair of stereo earphones. The captured image from the camera is processed by the SBPS to generate a specially structured stereo sound suitable for vision impaired people in understanding the presence of objects/obstacles in front of them. The image processing stage is designed to identify the objects in the captured image. Edge detection and edge-linking procedures are applied in the processing of image. A concept of object preference is included in the image processing scheme and this concept is realized using a fuzzy-rule base. The blind users are trained with the stereo sound produced by NAVI for achieving a collision-free autonomous navigation.

  1. Universal Verification Methodology Based Register Test Automation Flow.

    Science.gov (United States)

    Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu

    2016-05-01

    In today's SoC design, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task. Therefore, we need an efficient way to perform verification with less effort in a shorter time. In this work, we suggest a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard mechanism, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or to integrate the models into a test-bench environment, because doing so requires knowledge of SystemVerilog and the UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe a register specification in the IP-XACT format. For easy creation of register models, we propose a spreadsheet-based register template that is translated to an IP-XACT description, from which register models can be easily generated using commercial tools. We also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use the register model without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing a test sequence for each type of register test-case. With the proposed flow, designers can save a considerable amount of time in verifying the functionality of registers.
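    The spreadsheet-to-register-description step could look roughly like the following sketch (the column names and the emitted XML structure are illustrative assumptions only; a real flow would emit schema-valid IP-XACT and feed it to a register-model generator):

```python
import csv
import io

# A toy "spreadsheet" exported as CSV: one row per register (hypothetical columns).
sheet = io.StringIO("""name,offset,width,access
CTRL,0x00,32,read-write
STATUS,0x04,32,read-only
""")

for row in csv.DictReader(sheet):
    # Emit a schematically simplified, IP-XACT-like register fragment.
    print(f'<register><name>{row["name"]}</name>'
          f'<addressOffset>{row["offset"]}</addressOffset>'
          f'<size>{row["width"]}</size>'
          f'<access>{row["access"]}</access></register>')
```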

  2. METHODOLOGICAL BASES OF PUBLIC ADMINISTRATION OF PUBLIC DEVELOPMENT IN UKRAINE

    Directory of Open Access Journals (Sweden)

    Kyrylo Ohdanskyi

    2016-11-01

    Full Text Available The author examines the theoretical bases of the dynamics of community development. According to classic canons, a dynamic process at any level of the management hierarchy can be presented as a complex of changes in its ecological, economic and social components. At present, national policy in the field of implementing the concept of community development does not take most theoretical work into account, which indicates that an effective mechanism for its adjustment has not yet been created in our country. In this connection, the author stresses the necessity of using effective approaches to the government control of community development in the realities of modern Ukraine. As the subject of research, the author chose the analysis of the process of community development and the methodological bases for choosing options for managing this process. The systems approach was chosen as the research methodology. The aim: analysis of the theoretical bases and development of new approaches to the public administration of community development. The author divides the process of community development into its constituents: the social, economic and ecological components. From the above, it is necessary to take into account the objective necessity of developing new conceptual approaches to the elaboration of tools for adjusting community development. To address this task, the author suggests using the category "dynamics". The author analyses different interpretations of the term "dynamics" and offers his own interpretation in the context of community development. Our research confirms that it is methodologically possible to form blocks of quantitative and qualitative factors from specific information of an ecological, economic and social character. The author's research confirms that it is methodologically

  3. Methodology of Maqamat Hamadani and Hariri Based on Buseman’s statistical methodology

    Directory of Open Access Journals (Sweden)

    Hamed Sedghi

    2016-01-01

    Full Text Available Abstract: Stylistics can be defined as the analysis and interpretation of expression and the different forms of speech, based on linguistic elements. The German theorist Buseman devised his thesis on statistical style based on the ratio of verbs to adjectives, generalizing over a variety of literary and non-literary genres in German language and literature. According to Buseman, increasing the number of verbs in verses brings a text closer to the literary style, whereas increasing the number of adjectives brings it closer to a scientific or subjective style. The most important achievements of Buseman's statistical methodology can be considered as: (a) comparison of authors' styles, literary periods and genres; (b) study of the language system and the variety of words; and (c) classification and grading of differences or similarities among works, periods and genres. The purpose of this study: a stylistic analysis of the Maqamat of Hamadani and al-Hariri based on the statistical model of Buseman. The questions proposed in this study: (a) How effective is the use of statistical methods in identifying and analyzing varieties of literature, including Maqamat? (b) How effective is the quantitative scope of verbs and adjectives, as two key parameters, in determining the style of literary works? And (c) which element of fiction is most effective in enriching the literary approach? The specific research method is statistical analysis; we initially identified and classified the number of verbs and adjectives in the fifty-one Maqamehs of Hamadani and the fifty Maqamehs of Hariri; the scope of verb and adjective usage is then shown in the form of tables and graphs, after which the style of the literary works is assessed based on the use of verbs and adjectives. The research findings show that all the Hamadani and Hariri Maqamat quantitatively benefit from a highly active approach in the use of the verb. At 46 Maqameh Hamadani, and

  4. A methodology for physically based rockfall hazard assessment

    Directory of Open Access Journals (Sweden)

    G. B. Crosta

    2003-01-01

    Full Text Available Rockfall hazard assessment is not simple to achieve in practice, and sound, physically based assessment methodologies are still missing. The mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities with minimal runout. Rockfall hazard assessment involves complex definitions for "occurrence probability" and "intensity". This paper is an attempt to evaluate rockfall hazard using the results of 3-D numerical modelling on a topography described by a DEM. Maps portraying the maximum frequency of passages, velocity and height of blocks at each model cell are easily combined in a GIS in order to produce physically based rockfall hazard maps. Different methods are suggested and discussed for rockfall hazard mapping at regional and local scales, both along linear features and within exposed areas. An objective approach based on three-dimensional matrices providing both a positional "Rockfall Hazard Index" and a "Rockfall Hazard Vector" is presented. The opportunity of combining different parameters in the 3-D matrices has been evaluated to better express the relative increase in hazard. Furthermore, the sensitivity of the hazard index with respect to the included variables and their combinations is preliminarily discussed in order to constrain assessment criteria that are as objective as possible.
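    The way per-cell model outputs can be packed into a positional index may be sketched as follows (class thresholds and values are illustrative, not the paper's; the actual methodology works on full DEM rasters in a GIS): frequency, velocity and height are each classified into three classes, and the class digits are combined into one per-cell code.

```python
import numpy as np

# Hypothetical per-cell outputs of the 3-D runout model on a tiny 2 x 3 grid.
freq = np.array([[0, 3, 12], [1, 7, 25]])                # number of block passages
vel = np.array([[0.0, 4.0, 9.0], [2.0, 6.0, 15.0]])      # maximum velocity [m/s]
height = np.array([[0.0, 1.0, 3.0], [0.5, 2.0, 6.0]])    # maximum flying height [m]

def classify(x, t1, t2):
    """Map values to classes 1 (low), 2 (medium), 3 (high) using two thresholds."""
    return np.digitize(x, [t1, t2]) + 1

index = (classify(freq, 2, 10) * 100      # hundreds digit: frequency class
         + classify(vel, 5.0, 10.0) * 10  # tens digit: velocity class
         + classify(height, 1.5, 4.0))    # units digit: height class
print(index)   # e.g. 333 flags cells that are critical in all three components
```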

  5. Simulation environment based on the Universal Verification Methodology

    International Nuclear Information System (INIS)

    Fiergolski, A.

    2017-01-01

    Universal Verification Methodology (UVM) is a standardized approach to verifying integrated circuit designs, targeting Coverage-Driven Verification (CDV). It combines automatic test generation, self-checking testbenches, and coverage metrics to indicate progress in the design verification. The CDV flow differs from the traditional directed-testing approach. With CDV, a testbench developer starts with a structured plan by setting the verification goals. Those goals are then targeted by the developed testbench, which generates legal stimuli and sends them to the device under test (DUT). Progress is measured by coverage monitors added to the simulation environment; in this way, non-exercised functionality can be identified. Moreover, additional scoreboards indicate undesired DUT behaviour. Such verification environments were developed for three recent ASIC and FPGA projects which have successfully implemented the new workflow: (1) the CLICpix2 65 nm CMOS hybrid pixel readout ASIC design; (2) the C3PD 180 nm HV-CMOS active sensor ASIC design; (3) the FPGA-based DAQ system of the CLICpix chip. This paper, based on the experience from the above projects, briefly introduces UVM and presents a set of tips and advice applicable at different stages of the verification process cycle.

  6. Consensus-based methodology for detection communities in multilayered networks

    Science.gov (United States)

    Karimi-Majd, Amir-Mohsen; Fathian, Mohammad; Makrehchi, Masoud

    2018-03-01

    Finding groups of network users who are densely related to each other has emerged as an interesting problem in the area of social network analysis. These groups, or so-called communities, are hidden behind the behavior of users. Most studies assume that such behavior can be understood by focusing on user interfaces, their behavioral attributes, or a combination of these network layers (i.e., interfaces with their attributes). They also assume that all network layers refer to the same behavior. However, in real-life networks, users' behavior in one layer may differ from their behavior in another. To cope with these issues, this article proposes a consensus-based community detection approach (CBC). CBC finds communities among nodes at each layer in parallel; the results of the layers are then aggregated using a consensus clustering method. This means that different behaviors can be detected and used in the analysis. As another significant advantage, the methodology is able to handle missing values. Three experiments on real-life and computer-generated datasets have been conducted in order to evaluate the performance of CBC. The results indicate the superiority and stability of CBC in comparison to other approaches.
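    The per-layer-then-consensus pattern described above can be sketched with networkx as follows (the placeholder layers and the 0.5 co-assignment threshold are assumptions; this illustrates the general consensus-clustering idea, not the paper's exact CBC algorithm):

```python
import networkx as nx
import numpy as np

# Two placeholder layers over the same 34-node set (one empirical, one random), standing
# in for, e.g., an interaction layer and an attribute-similarity layer.
layers = [nx.karate_club_graph(), nx.gnp_random_graph(34, 0.15, seed=1)]
n = 34
C = np.zeros((n, n))   # consensus (co-assignment) matrix

for g in layers:
    for block in nx.algorithms.community.greedy_modularity_communities(g):
        for i in block:
            for j in block:
                C[i, j] += 1.0 / len(layers)   # fraction of layers placing i and j together

# Keep pairs co-assigned in more than half of the layers and read off consensus communities.
consensus = nx.Graph()
consensus.add_nodes_from(range(n))
consensus.add_edges_from((i, j) for i in range(n) for j in range(n) if i < j and C[i, j] > 0.5)
communities = list(nx.connected_components(consensus))
print(communities)
```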

  7. The effect of acute tyrosine phenylalanine depletion on emotion-based decision-making in healthy adults.

    Science.gov (United States)

    Vrshek-Schallhorn, Suzanne; Wahlstrom, Dustin; White, Tonya; Luciana, Monica

    2013-04-01

    Despite interest in dopamine's role in emotion-based decision-making, few reports of the effects of dopamine manipulations are available in this area in humans. This study investigates dopamine's role in emotion-based decision-making through a common measure of this construct, the Iowa Gambling Task (IGT), using Acute Tyrosine Phenylalanine Depletion (ATPD). In a between-subjects design, 40 healthy adults were randomized to receive either an ATPD beverage or a balanced amino acid beverage (a control) prior to completing the IGT, as well as pre- and post-manipulation blood draws for the neurohormone prolactin. Together with conventional IGT performance metrics, choice selections and response latencies were examined separately for good and bad choices before and after several key punishment events. Changes in response latencies were also used to predict total task performance. Prolactin levels increased significantly in the ATPD group but not in the control group. However, no significant group differences in performance metrics were detected, nor were there sex differences in outcome measures. However, the balanced group's bad deck latencies speeded up across the task, while the ATPD group's latencies remained adaptively hesitant. Additionally, modulation of latencies to the bad decks predicted total score for the ATPD group only. One interpretation is that ATPD subtly attenuated reward salience and altered the approach by which individuals achieved successful performance, without resulting in frank group differences in task performance. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Auto-reactive T cells revised. Overestimation based on methodology?

    DEFF Research Database (Denmark)

    Thorlacius-Ussing, Gorm; Sørensen, Jesper F; Wandall, Hans H

    2015-01-01

    . Thus, T cell antigen reactivities identified with unmodified antigens in vitro may in part represent in vitro T cell activation against neo-epitopes and not true in vivo autoreactivity as postulated. This methodological problem may have implications for the interpretation of the frequent reporting...... methodology applied to document T cell reactivity against unmodified protein or peptide may lead to overinterpretation of the reported frequencies of autoreactive CD4+ and CD8+ T cells....

  9. Reliability analysis for power supply system in a reprocessing facility based on GO methodology

    International Nuclear Information System (INIS)

    Wang Renze

    2014-01-01

    The GO methodology was applied to analyze the reliability of the power supply system in a typical reprocessing facility. Because tie breakers are installed in the system, a tie breaker operator was defined. GO methodology modeling and quantitative analysis were then performed sequentially, and the minimal cut sets and average unavailability of the system were obtained. A parallel analysis with the fault tree methodology was also performed. The results showed that the arrangement of tie breakers was rational and necessary, and that, compared with the fault tree methodology, the GO methodology made the modeling much easier and the chart much more succinct for analyzing the reliability of the power supply system. (author)

  10. Unbiased Selective Isolation of Protein N-Terminal Peptides from Complex Proteome Samples Using Phospho Tagging (PTAG) and TiO2-based Depletion

    NARCIS (Netherlands)

    Mommen, G.P.M.; Waterbeemd, van de B.; Meiring, H.D.; Kersten, G.; Heck, A.J.R.; Jong, de A.P.J.M.

    2012-01-01

    A positional proteomics strategy for global N-proteome analysis is presented based on phospho tagging (PTAG) of internal peptides followed by depletion by titanium dioxide (TiO2) affinity chromatography. Therefore, N-terminal and lysine amino groups are initially completely dimethylated with

  11. Lab Scale Study of the Depletion of Mullite/Corundum-Based Refractories Through Reaction with Scaffold Materials

    International Nuclear Information System (INIS)

    Stjernberg, J; Antti, M-L; Ion, J C; Lindblom, B

    2011-01-01

    To investigate the mechanisms underlying the depletion of mullite/corundum-based refractory bricks used in rotary kilns for iron ore pellet production, the reaction mechanisms between scaffold material and refractory bricks have been studied on the laboratory-scale. Alkali additions were used to enhance the reaction rates between the materials. The morphological changes and active chemical reactions at the refractory/scaffold material interface in the samples were characterized using scanning electron microscopy (SEM), thermal analysis (TA) and X-ray diffraction (XRD). No reaction products of alkali and hematite (Fe 2 O 3 ) were detected; however, alkali dissolves the mullite in the bricks. Phases such as nepheline (Na 2 O·Al 2 O 3 ·2SiO 2 ), kalsilite (K 2 O·Al 2 O 3 ·2SiO 2 ), leucite (K 2 O·Al 2 O 3 ·4SiO 2 ) and potassium β-alumina (K 2 O·11Al 2 O 3 ) were formed as a consequence of reactions between alkali and the bricks.

  12. Sensor-based activity recognition using extended belief rule-based inference methodology.

    Science.gov (United States)

    Calzada, A; Liu, J; Nugent, C D; Wang, H; Martinez, L

    2014-01-01

    The recently developed extended belief rule-based inference methodology (RIMER+) recognizes the need for modeling the different types of information and uncertainty that usually coexist in real environments. A home setting with sensors located in different rooms and on different appliances can be considered a particularly relevant example of such an environment, which brings a range of challenges for sensor-based activity recognition. Although RIMER+ has been designed as a generic decision model that could be applied in a wide range of situations, this paper discusses how this methodology can be adapted to recognize human activities using binary sensors within smart environments. The evaluation of RIMER+ against other state-of-the-art classifiers in terms of accuracy, efficiency and applicability showed significant advantages, especially in situations of input data incompleteness; it demonstrates the potential of this methodology and provides a basis for further research on the topic.

  13. Cernavoda NPP risk-based test and maintenance planning - Methodology development

    International Nuclear Information System (INIS)

    Georgescu, G.; Popa, P.; Petrescu, A.; Naum, M.; Gutu, M.

    1997-01-01

    The Cernavoda Power Plant started commercial operation in November 1996. During operation of the nuclear power plant, several mandatory tests and maintenance activities are performed on stand-by safety system components to ensure their availability in case of an accident. The basic purpose of such activities is the early detection of any failure and degradation, and the timely correction of deterioration. Because of the large number of such activities, keeping the emphasis on plant safety and allocating resources becomes difficult. The probabilistic model and methodology can be used effectively to obtain the risk significance of these activities so that resources are directed to the most important areas. The proposed Research Contract activity is strongly connected with other safety-related areas under development. Since the Cernavoda Probabilistic Safety Evaluation Level 1 PSA Study (CPSE) has been performed and is now being revised to take into account as-built information, it is recommended to implement into the model the features necessary to support further PSA applications, especially those related to Test and Maintenance optimization. Methods need to be developed in order to apply the PSA model, including risk information together with other needed information, to Test and Maintenance optimization. In parallel with the CPSE study update, the software interface for the PSA model (Risk Monitor Software class) is under development; methods and models need to be developed so that it can be used for qualified monitoring of the efficiency of the Test and Maintenance Strategy. Similarly, the Data Collection System needs to be appropriate for an ongoing implementation of a risk-based Test and Maintenance Strategy. (author). 4 refs, 1 fig

  14. Preparing for budget-based payment methodologies: global payment and episode-based payment.

    Science.gov (United States)

    Hudson, Mark E

    2015-10-01

    Use of budget-based payment methodologies (capitation and episode-based bundled payment) has been demonstrated to drive value in healthcare delivery. With a focus on high-volume, high-cost surgical procedures, inclusion of anaesthesiology services in these methodologies is likely. This review provides a summary of budget-based payment methodologies and practical information necessary for anaesthesiologists to prepare for participation in these programmes. Although few examples of anaesthesiologists' participation in these models exist, an understanding of the structure of these programmes and opportunities for participation are available. Prospective preparation in developing anaesthesiology-specific bundled payment profiles and early participation in pathway development associated with selected episodes of care are essential for successful participation as a gainsharing partner. With significant opportunity to contribute to care coordination and cost management, anaesthesiology can play an important role in budget-based payment programmes and should expect to participate as full gainsharing partners. Precise costing methodologies and accurate economic modelling, along with identification of quality management and cost control opportunities, will help identify participation opportunities and appropriate payment and gainsharing agreements. Anaesthesiology-specific examples with budget-based payment models are needed to help guide increased participation in these programmes.

  15. The multiphase flow system used in exploiting depleted reservoirs: water-based Micro-bubble drilling fluid

    International Nuclear Information System (INIS)

    Zheng Lihui; He Xiaoqing; Wang Xiangchun; Fu Lixia

    2009-01-01

    Water-based micro-bubble drilling fluid, which is used to exploit depleted reservoirs, is a complicated multiphase flow system composed of gas, water, oil, polymer, surfactants and solids. The gas phase is separated from the bulk water by two layers and three membranes. These are the 'surface tension reducing membrane', 'high viscosity layer', 'high viscosity fixing membrane', 'compatibility enhancing membrane' and 'concentration transition layer of linear high polymer (LHP) and surfactants', going from each gas phase centre to the bulk water. The 'surface tension reducing membrane', 'high viscosity layer' and 'high viscosity fixing membrane' bond closely to pack the air, forming an 'air-bag'; the 'compatibility enhancing membrane' and 'concentration transition layer of LHP and surfactants' adsorb outside the 'air-bag' to form an 'incompact zone'. From another point of view, the 'air-bag' and 'incompact zone' together compose the micro-bubble. Dynamic changes of the 'incompact zone' enable micro-bubbles to exist singly or aggregate together, and lead the whole fluid, which can wet both hydrophilic and hydrophobic surfaces, to possess very high viscosity at an extremely low shear rate but good fluidity at a higher shear rate. When the water-based micro-bubble drilling fluid encounters leakage zones, it will automatically regulate the sizes and shapes of the bubbles according to the slot width of fractures, the height of caverns and the aperture of openings, or seal them by making use of the high viscosity of the system at a very low shear rate. Measurements of the rheological parameters indicate that water-based micro-bubble drilling fluid has very high plastic viscosity, yield point, initial gel and final gel, and a high ratio of yield point to plastic viscosity. All of these properties make the multiphase flow system meet the requirements of the petroleum drilling industry. Research on the interface between gas and bulk water in this multiphase flow system can provide us with information on synthesizing effective

  16. Base line definitions and methodological lessons from Zimbabwe

    International Nuclear Information System (INIS)

    Maya, R.S.

    1995-01-01

    The UNEP Greenhouse Gas Abatement Costing Studies, carried out under the management of the UNEP Collaborating Centre on Energy and Environment at Risoe National Laboratories in Denmark, have placed effort into generating methodological approaches to assessing the cost of abatement activities to reduce CO2 emissions. These efforts have produced perhaps the most comprehensive set of methodological approaches to defining and assessing the cost of greenhouse gas abatement. Perhaps the most important aspect of the UNEP study, which involved teams of researchers from ten countries, is the mix of countries in which the studies were conducted and hence the representation of views and concepts from researchers in these countries, particularly those from developing countries, namely Zimbabwe, India, Venezuela, Brazil, Thailand and Senegal. Methodological lessons from Zimbabwe, therefore, would have benefited from the interactions with methodological experiences from the other participating countries. Methodological lessons from the Zimbabwean study can be placed in two categories. One relates to the modelling tools used to analyze economic trends and the various factors studied in order to determine the unit cost of CO2 abatement. The other is the definition of factors influencing the levels of emissions reducible and those realised under specific economic trends. (au)

  17. Development of Proliferation Resistance Assessment Methodology Based on International Standards

    International Nuclear Information System (INIS)

    Lee, Yong Deok; Lee, Jung Won; Lee, Kwang Seok

    2009-03-01

    Proliferation resistance is one of the requirements to be met by GEN IV and INPRO next-generation nuclear energy systems. Internationally, evaluation methodologies for PR had already been initiated from 1980, but systematic development started in the 2000s. In Korea, for the export of nuclear energy systems and to increase the international credibility and transparency of the domestic nuclear system and fuel cycle development, the independent development of a PR evaluation methodology was started in 2007 as a long-term nuclear R and D project, and the development of the PR evaluation model is being performed. In the first year, a comparative study of GEN-IV/INPRO, PR indicator development, quantification of indicators, development of the evaluation model, and analysis of the technology system and international technology development trends were performed. In the second year, a feasibility study of the indicators, allowable limits of the indicators, and a review of the technical requirements of the indicators were carried out. The results of the PR evaluation must be applied at the beginning of the conceptual design of a nuclear system. Through the development of the PR evaluation methodology, the methodology will be applied in the regulatory requirements for authorization and permission to be developed

  18. Methodological bases of innovative training of specialists in nanotechnology field

    Directory of Open Access Journals (Sweden)

    FIGOVSKY Oleg Lvovich

    2016-10-01

    Full Text Available The performance of an innovative training system aimed at highly qualified specialists in the area of nanotechnologies for Kazakhstan’s economy demands the establishment and development of a nanotechnology market in the country, the teaching of innovative engineering combined with consistent research, and the integration of trained specialists with the latest technologies and sciences at the international level. Methodological aspects of training competitive specialists for the nanotechnology field are specific. The paper presents methodological principles of the innovative training of specialists for science-intensive industry that were realized under a grant from the Ministry of Education and Science of the Republic of Kazakhstan.

  19. Radiochemical Analysis Methodology for Uranium Depletion Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Scatena-Wachel DE

    2007-01-09

    This report provides sufficient material for a test sponsor with little or no radiochemistry background to understand and follow physics irradiation test program execution. Most irradiation test programs employ similar techniques and the general details provided here can be applied to the analysis of other irradiated sample types. Aspects of program management directly affecting analysis quality are also provided. This report is not an in-depth treatise on the vast field of radiochemical analysis techniques and related topics such as quality control. Instrumental technology is a very fast growing field and dramatic improvements are made each year, thus the instrumentation described in this report is no longer cutting edge technology. Much of the background material is still applicable and useful for the analysis of older experiments and also for subcontractors who still retain the older instrumentation.

  20. Depleted uranium management alternatives

    Energy Technology Data Exchange (ETDEWEB)

    Hertzler, T.J.; Nishimoto, D.D.

    1994-08-01

    This report evaluates two management alternatives for Department of Energy depleted uranium: continued storage as uranium hexafluoride, and conversion to uranium metal and fabrication to shielding for spent nuclear fuel containers. The results will be used to compare the costs with other alternatives, such as disposal. Cost estimates for the continued storage alternative are based on a life-cycle of 27 years through the year 2020. Cost estimates for the recycle alternative are based on existing conversion process costs and capital costs for fabricating the containers. Additionally, the recycle alternative accounts for costs associated with intermediate product resale and secondary waste disposal for materials generated during the conversion process.

  1. Depleted uranium management alternatives

    International Nuclear Information System (INIS)

    Hertzler, T.J.; Nishimoto, D.D.

    1994-08-01

    This report evaluates two management alternatives for Department of Energy depleted uranium: continued storage as uranium hexafluoride, and conversion to uranium metal and fabrication to shielding for spent nuclear fuel containers. The results will be used to compare the costs with other alternatives, such as disposal. Cost estimates for the continued storage alternative are based on a life-cycle of 27 years through the year 2020. Cost estimates for the recycle alternative are based on existing conversion process costs and capital costs for fabricating the containers. Additionally, the recycle alternative accounts for costs associated with intermediate product resale and secondary waste disposal for materials generated during the conversion process

  2. Pedagogical support of competence formation: methodological bases and experimental context

    OpenAIRE

    NABIEV VALERY SHARIFYANOVICH

    2016-01-01

    The article considers the problem of the methodological basis of the competence approach. It discusses topical issues in organizing a holistic educational process. The article presents the original solutions created by the author and the results of experimental verification of the specified conditions of pedagogical support of educational and training activities.

  3. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Zhiwen, Xu; Rhodes, J.; Smith, K.

    2010-01-01

    As a result of steady advances in computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver. Note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper is aimed at understanding the statistical implications in Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to its statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) to increase the number of individual Monte Carlo histories; 2) to increase the number of time steps; 3) to run additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors, including both the local statistical error and the propagated statistical error. (authors)
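
    The batch depletion idea can be illustrated with a toy sketch: run several statistically independent depletion calculations with different random seeds and estimate the overall statistical error of a depleted number density from the batch-to-batch spread. The run_depletion function below is an invented stand-in, not the CASMO-5 or Monte Carlo implementation used in the paper.

```python
# Illustrative sketch of the "batch depletion" idea: run several statistically
# independent depletion calculations and estimate the overall uncertainty of a
# depleted number density from the batch-to-batch spread. run_depletion() is a
# stand-in for a real Monte Carlo depletion sequence.

import random
import statistics

def run_depletion(seed, n_steps=10):
    """Stand-in for one independent Monte Carlo depletion case.

    Returns a fake end-of-life number density whose noise accumulates over the
    depletion steps, mimicking the propagation of statistical error with burnup.
    """
    rng = random.Random(seed)
    density = 1.0
    for _ in range(n_steps):
        flux = 1.0 + rng.gauss(0.0, 0.01)   # noisy flux estimate
        density *= (1.0 - 0.02 * flux)      # simple depletion update
    return density

# Run N independent batches (different random seeds)
batches = [run_depletion(seed) for seed in range(20)]

mean_density = statistics.mean(batches)
std_error = statistics.stdev(batches) / len(batches) ** 0.5

print(f"Batch-mean number density  : {mean_density:.5f}")
print(f"Estimated statistical error: {std_error:.2e}")
```

    Because the batches are independent, the standard error of the batch mean captures both the local and the propagated statistical components discussed above.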

  4. Knowledge-based and model-based hybrid methodology for comprehensive waste minimization in electroplating plants

    Science.gov (United States)

    Luo, Keqin

    1999-11-01

    The electroplating industry, with over 10,000 plating plants nationwide, is one of the major waste generators in industry. Large quantities of wastewater, spent solvents, spent process solutions, and sludge are the major wastes generated daily in plants, which cost the industry tremendously in waste treatment and disposal and hinder the further development of the industry. It has therefore become urgent for the industry to identify the technically most effective and economically most attractive methodologies and technologies to minimize waste while maintaining production competitiveness. This dissertation aims at developing a novel WM methodology using artificial intelligence, fuzzy logic, and fundamental knowledge in chemical engineering, and an intelligent decision support tool. The WM methodology consists of two parts: a heuristic knowledge-based qualitative WM decision analysis and support methodology, and a fundamental knowledge-based quantitative process analysis methodology for waste reduction. In the former, a large number of WM strategies are represented as fuzzy rules. This becomes the main part of the knowledge base in the decision support tool, WMEP-Advisor. In the latter, various first-principles-based process dynamic models are developed. These models can characterize all three major types of operations in an electroplating plant, i.e., cleaning, rinsing, and plating. This development allows a thorough process analysis of bath efficiency, chemical consumption, wastewater generation, sludge generation, etc. Additional models are developed for quantifying drag-out and evaporation, which are critical for waste reduction. The models are validated through numerous industrial experiments in a typical plating line of an industrial partner. The unique contribution of this research is that it is the first time for the electroplating industry to (i) use systematically available WM strategies, (ii) know quantitatively and
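
    As a rough illustration of the kind of first-principles process dynamic model mentioned above, the sketch below treats a single rinse tank as a well-mixed volume fed by drag-out from the plating bath and diluted by fresh water; all parameter values are invented and do not come from the dissertation.

```python
# Hedged sketch of a simple rinse-tank dynamic model: drag-out from the plating
# bath carries chemicals into a well-mixed tank while fresh water flow dilutes
# them. Parameter values are illustrative assumptions only.

def simulate_rinse_tank(hours, volume_l=400.0, flow_lph=200.0,
                        dragout_lph=0.5, bath_conc_gpl=150.0, dt_h=0.01):
    """Integrate dC/dt = (dragout*C_bath - flow*C) / V with forward Euler."""
    conc = 0.0
    steps = int(hours / dt_h)
    for _ in range(steps):
        dcdt = (dragout_lph * bath_conc_gpl - flow_lph * conc) / volume_l
        conc += dcdt * dt_h
    return conc

for t in (1.0, 4.0, 8.0):
    print(f"After {t:.0f} h, rinse water concentration = "
          f"{simulate_rinse_tank(t):.3f} g/L")
```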

  5. Compliance strategy for statistically based neutron overpower protection safety analysis methodology

    International Nuclear Information System (INIS)

    Holliday, E.; Phan, B.; Nainer, O.

    2009-01-01

    The methodology employed in the safety analysis of the slow Loss of Regulation (LOR) event in the OPG and Bruce Power CANDU reactors, referred to as Neutron Overpower Protection (NOP) analysis, is a statistically based methodology. Further enhancement to this methodology includes the use of Extreme Value Statistics (EVS) for the explicit treatment of aleatory and epistemic uncertainties, and probabilistic weighting of the initial core states. A key aspect of this enhanced NOP methodology is to demonstrate adherence, or compliance, with the analysis basis. This paper outlines a compliance strategy capable of accounting for the statistical nature of the enhanced NOP methodology. (author)

  6. Transcriptome-based identification of pro- and antioxidative gene expression in kidney cortex of nitric oxide-depleted rats

    NARCIS (Netherlands)

    Wesseling, Sebastiaan; Joles, Jaap A.; van Goor, Harry; Bluyssen, Hans A.; Kemmeren, Patrick; Holstege, Frank C.; Koomans, Hein A.; Braam, Branko

    2007-01-01

    Nitric oxide (NO) depletion in rats induces severe endothelial dysfunction within 4 days. Subsequently, hypertension and renal injury develop, which are ameliorated by alpha-tocopherol (VitE) cotreatment. The hypothesis of the present study was that NO synthase (NOS) inhibition induces a renal

  7. EUROCONTROL-Systemic Occurrence Analysis Methodology (SOAM)-A 'Reason'-based organisational methodology for analysing incidents and accidents

    International Nuclear Information System (INIS)

    Licu, Tony; Cioran, Florin; Hayward, Brent; Lowe, Andrew

    2007-01-01

    The Systemic Occurrence Analysis Methodology (SOAM) developed for EUROCONTROL is an accident investigation methodology based on the Reason Model of organisational accidents. The purpose of a SOAM is to broaden the focus of an investigation from human involvement issues, also known as 'active failures of operational personnel' under Reason's original model, to include analysis of the latent conditions deeper within the organisation that set the context for the event. Such an approach is consistent with the tenets of Just Culture, in which people are encouraged to provide full and open information about how incidents occurred, and are not penalised for errors. A truly systemic approach is not simply a means of transferring responsibility for a safety occurrence from front-line employees to senior managers. A consistent philosophy must be applied, where the investigation process seeks to correct deficiencies wherever they may be found, without attempting to apportion blame or liability

  8. Theoretical and methodological bases of the cooperation and the cooperative

    Directory of Open Access Journals (Sweden)

    Claudio Alberto Rivera Rodríguez

    2013-12-01

    Full Text Available The purpose of the present work is to address the theoretical and methodological foundations of the rise of the cooperatives. The article studies the logical antecedents of cooperativism, and the premises established by the Industrial Revolution for the emergence of the first modern cooperative, “The Pioneers of Rochdale”, which is the inflection point of cooperativism, before analyzing the contributions of the thinking of the time that sustained this process.

  9. CRDIAC: Coupled Reactor Depletion Instrument with Automated Control

    International Nuclear Information System (INIS)

    Logan, Steven K.

    2012-01-01

    When modeling the behavior of a nuclear reactor over time, it is important to understand how the isotopes in the reactor will change, or transmute, over that time. This is especially important in the reactor fuel itself. Many nuclear physics modeling codes model how particles interact in the system, but do not model this over time. Thus, another code is used in conjunction with the nuclear physics code to accomplish this. In our code, Monte Carlo N-Particle (MCNP) codes and the Multi Reactor Transmutation Analysis Utility (MRTAU) were chosen as the codes to use. In this way, MCNP would produce the reaction rates in the different isotopes present and MRTAU would use cross sections generated from these reaction rates to determine how the mass of each isotope is lost or gained. Between these two codes, the information must be altered and edited for use. For this, a Python 2.7 script was developed to aid the user in getting the information in the correct forms. This newly developed methodology was called the Coupled Reactor Depletion Instrument with Automated Controls (CRDIAC). As is the case in any newly developed methodology for modeling of physical phenomena, CRDIAC needed to be verified against similar methodology and validated against data taken from an experiment, in our case AFIP-3. AFIP-3 was a reduced enrichment plate type fuel tested in the ATR. We verified our methodology against the MCNP Coupled with ORIGEN2 (MCWO) method and validated our work against the Post Irradiation Examination (PIE) data. When compared to MCWO, the difference in concentration of U-235 throughout Cycle 144A was about 1%. When compared to the PIE data, the average bias for end of life U-235 concentration was about 2%. These results from CRDIAC therefore agree with the MCWO and PIE data, validating and verifying CRDIAC. CRDIAC provides an alternative to using ORIGEN-based methodology, which is useful because CRDIAC's depletion code, MRTAU, uses every available isotope in its depletion
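
    The abstract describes a Python script that shuttles information between a transport code (MCNP) and a depletion code (MRTAU). The sketch below shows the general shape of such an alternating coupling loop; the helper functions, rates and masses are placeholders, not the actual CRDIAC, MCNP or MRTAU interfaces.

```python
# Hypothetical sketch of a transport/depletion coupling loop in the spirit of
# CRDIAC: a transport solve provides reaction rates, a depletion solve updates
# isotope masses, and the loop repeats for each burnup step. The helper
# functions are placeholders, not the actual MCNP or MRTAU interfaces.

import math

def run_transport(compositions):
    """Placeholder for an MCNP-style transport solve.

    Returns a one-group removal rate per isotope (invented values, 1/s).
    """
    return {iso: 1.0e-8 for iso in compositions}

def run_depletion(compositions, removal_rates, dt_days):
    """Placeholder for an MRTAU-style depletion solve: simple exponential loss."""
    dt_seconds = dt_days * 86400.0
    return {
        iso: mass * math.exp(-removal_rates[iso] * dt_seconds)
        for iso, mass in compositions.items()
    }

# Initial isotope masses in grams (illustrative only)
compositions = {"U235": 100.0, "U238": 900.0}

# Alternate transport and depletion over three burnup steps
for step, dt_days in enumerate([30.0, 30.0, 30.0], start=1):
    rates = run_transport(compositions)
    compositions = run_depletion(compositions, rates, dt_days)
    print(f"Step {step}: U235 = {compositions['U235']:.3f} g")
```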

  10. Methodological examination of UAR-based change detection

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.; Kiss, S. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics

    1995-12-01

    A methodological examination was performed in order to investigate the applicability of the combination of the well-known Univariate AutoRegressive model and the classical binary SPRT method. The signal was recorded by a vibration detector fixed at a white-noise excited fuel rod. During the experiments, the following abnormalities (or minor changes) were simulated: loosening of the detector, changes in the underlying system (constraints and the environment), and rod impact. The residual time series were generated by a UAR model while the hypothesis testing was performed by a binary SPRT applied to check the variation of the variance of the residual. Although the results are very promising, a few disturbing effects were also recognized, which are as yet unexplained; therefore, more careful application of this familiar combination is required. (author).

  11. Methodological examination of UAR-based change detection

    International Nuclear Information System (INIS)

    Racz, A.; Kiss, S.

    1994-07-01

    A methodological examination was performed in order to investigate the applicability of the combination of the well-known Univariate AutoRegressive (UAR) model and the classical binary Sequential Probability Ratio Testing (SPRT) method. The signal was recorded by a vibration detector fixed at a white-noise excited fuel rod. During the experiments, the following abnormalities (or minor changes) were simulated: loosening of the detector, changes in the underlying system (constraints and the environment), and rod impact. The residual time series were generated by a UAR model while the hypothesis testing was performed by a binary SPRT applied to check the variation of the variance of the residual. Although the results are very promising, a few disturbing effects were also recognized, which remain unexplained; therefore, more careful application of this familiar combination is needed. (author) 14 refs.; 21 figs.; 3 tabs
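
    A minimal sketch of the combination described in the two records above: fit a simple univariate autoregressive model to a reference signal, compute residuals for new data, and run a binary SPRT on the residual variance to flag a change. The AR order, variances, error probabilities and simulated signals are illustrative assumptions.

```python
# Minimal sketch of the UAR + binary SPRT combination: fit an AR(1) model to a
# reference signal, compute residuals for new data, and run a sequential
# probability ratio test on the residual variance. Model order, variances and
# error probabilities are illustrative assumptions.

import math
import random

rng = random.Random(0)

def fit_ar1(signal):
    """Least-squares fit of an AR(1) coefficient (simplified UAR model)."""
    num = sum(signal[t] * signal[t - 1] for t in range(1, len(signal)))
    den = sum(x * x for x in signal[:-1])
    return num / den

def sprt_variance(residuals, sigma0_sq, sigma1_sq, alpha=0.01, beta=0.01):
    """Binary SPRT deciding between residual variance sigma0^2 and sigma1^2."""
    upper = math.log((1.0 - beta) / alpha)
    lower = math.log(beta / (1.0 - alpha))
    llr = 0.0
    for t, r in enumerate(residuals, start=1):
        llr += 0.5 * math.log(sigma0_sq / sigma1_sq)
        llr += 0.5 * r * r * (1.0 / sigma0_sq - 1.0 / sigma1_sq)
        if llr >= upper:
            return "change detected", t
        if llr <= lower:
            return "no change", t
    return "undecided", len(residuals)

# Reference (normal) signal and a test signal with inflated noise
reference = [rng.gauss(0.0, 1.0) for _ in range(500)]
phi = fit_ar1(reference)
test = [rng.gauss(0.0, 1.5) for _ in range(200)]        # simulated anomaly
residuals = [test[t] - phi * test[t - 1] for t in range(1, len(test))]

print(sprt_variance(residuals, sigma0_sq=1.0, sigma1_sq=2.25))
```

    With the inflated-noise test signal the log-likelihood ratio drifts upward and crosses the upper threshold, so a change is declared; a signal matching the reference variance would instead drift toward the lower threshold.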

  12. Simulation Environment Based on the Universal Verification Methodology

    CERN Document Server

    AUTHOR|(SzGeCERN)697338

    2017-01-01

    Universal Verification Methodology (UVM) is a standardized approach to verifying integrated circuit designs, targeting Coverage-Driven Verification (CDV). It combines automatic test generation, self-checking testbenches, and coverage metrics to indicate progress in the design verification. The flow of the CDV differs from the traditional directed-testing approach. With the CDV, a testbench developer, by setting the verification goals, starts with a structured plan. Those goals are targeted further by a developed testbench, which generates legal stimuli and sends them to a device under test (DUT). The progress is measured by coverage monitors added to the simulation environment. In this way, the non-exercised functionality can be identified. Moreover, the additional scoreboards indicate undesired DUT behaviour. Such verification environments were developed for three recent ASIC and FPGA projects which have successfully implemented the new work-flow: (1) the CLICpix2 65 nm CMOS hybrid pixel readout ASIC desi...

  13. Legal and methodological bases of comprehensive forensic enquiry of pornography

    Directory of Open Access Journals (Sweden)

    Berdnikov D.V.

    2016-03-01

    Full Text Available The article gives an analysis of the legal definition of pornography. The author identifies the groups of descriptive and target criteria required for the analysis, and analyses the content of the descriptive criteria of pornography and the way in which they should be documented. Fixation of attention on the anatomical and physiological characteristics of sexual relations is determined to be the necessary target criterion. It is noted that the term "pornography" is a legal one and cannot itself be the subject of expert examination. For this reason, the author outlines the methodological basis of a complex psycho-linguistic and psycho-art expert examination. The article presents the general issues on which an expert conclusion depends and studies cases where the research requires the involvement of doctors, as well as criteria for the expert's opinion. Besides that, the author defines the subject, object and main tasks of psychological studies of pornographic information.

  14. Design of an Emulsion-based Personal Detergent through a Model-based Chemical Product Design Methodology

    DEFF Research Database (Denmark)

    Mattei, Michele; Hill, Michael; Kontogeorgis, Georgios

    2013-01-01

    An extended systematic methodology for the design of emulsion-based chemical products is presented. The methodology consists of a model-based framework involving seven sequential hierarchical steps: starting with the identification of the needs to be satisfied by the product and then adding one-b...... to obtain one or more candidate formulations. A conceptual case study representing a personal detergent is presented to highlight the methodology....

  15. Design of an Emulsion-based Personal Detergent through a Model-based Chemical Product Design Methodology

    DEFF Research Database (Denmark)

    Mattei, Michele; Hill, Michael; Kontogeorgis, Georgios

    2013-01-01

    An extended systematic methodology for the design of emulsion-based chemical products is presented. The methodology consists of a model-based framework involving seven sequential hierarchical steps: starting with the identification of the needs to be satisfied by the product and then adding one...... to obtain one or more candidate formulations. A conceptual case study representing a personal detergent is presented to highlight the methodology....

  16. Knowledge-based operation guidance system for nuclear power plants based on generic task methodology

    International Nuclear Information System (INIS)

    Yamada, Naoyuki; Chandrasekaran, B.; Bhatnager, R.

    1989-01-01

    A knowledge-based system for operation guidance of nuclear power plants is proposed. The Dynamic Procedure Management System (DPMS) is designed and developed to assist human operators interactively by selecting and modifying predefined operation procedures in a dynamic situation. Unlike most operation guidance systems, DPMS has been built based on Generic Task Methodology, which makes the overall framework of the system perspicuous and also lets domain knowledge be represented in a natural way. This paper describes the organization of the system, the definition of each task, and the form and organization of knowledge, followed by an application example. (author)

  17. Development of proliferation resistance assessment methodology based on international standard

    International Nuclear Information System (INIS)

    Ko, W. I.; Chang, H. L.; Lee, Y. D.; Lee, J. W.; Park, J. H.; Kim, Y. I.; Ryu, J. S.; Ko, H. S.; Lee, K. W.

    2012-04-01

    Nonproliferation is one of the main requirements to be satisfied by the advanced future nuclear energy systems developed in the Generation IV and INPRO studies. Methodologies to evaluate proliferation resistance have been developed since the 1980s; however, the systematic evaluation approach began around 2000. Domestically, a study to develop a national method to evaluate the proliferation resistance (PR) of advanced future nuclear energy systems started in 2007 as one of the long-term nuclear R and D subjects, in order to promote export and the international credibility and transparency of the national nuclear energy systems and the nuclear fuel cycle technology development program. In the first phase (2007-2010), the development and improvement of intrinsic evaluation parameters for the evaluation of proliferation resistance, quantification of the evaluation parameters, development of evaluation models, and development of permissible ranges of the evaluation parameters were carried out. In the second phase (2010-2012), the generic principle for evaluating PR was established, and technical guidelines, a nuclear material diversion pathway analysis method, and a method to integrate the evaluation parameters were developed; these were applied to five alternative nuclear fuel cycles to assess their applicability and objectivity. In addition, measures to enhance the PR of advanced future nuclear energy systems and technical guidelines for PR assessment using intrinsic PR evaluation parameters were developed. Lastly, regulatory requirements were developed to secure the nonproliferation requirements of nuclear energy systems from the early design stage, through operation, to decommissioning, which will support the export of the newly developed advanced future nuclear energy system

  18. Investigation of p-type depletion doping for InGaN/GaN-based light-emitting diodes

    Science.gov (United States)

    Zhang, Yiping; Zhang, Zi-Hui; Tan, Swee Tiam; Hernandez-Martinez, Pedro Ludwig; Zhu, Binbin; Lu, Shunpeng; Kang, Xue Jun; Sun, Xiao Wei; Demir, Hilmi Volkan

    2017-01-01

    Due to the limitation of the hole injection, p-type doping is essential to improve the performance of InGaN/GaN multiple quantum well light-emitting diodes (LEDs). In this work, we propose and show a depletion-region Mg-doping method. Here we systematically analyze the effectiveness of different Mg-doping profiles ranging from the electron blocking layer to the active region. Numerical computations show that the Mg-doping decreases the valence band barrier for holes and thus enhances the hole transportation. The proposed depletion-region Mg-doping approach also increases the barrier height for electrons, which leads to a reduced electron overflow, while increasing the hole concentration in the p-GaN layer. Experimentally measured external quantum efficiency indicates that Mg-doping position is vitally important. The doping in or adjacent to the quantum well degrades the LED performance due to Mg diffusion, increasing the corresponding nonradiative recombination, which is well supported by the measured carrier lifetimes. The experimental results are well numerically reproduced by modifying the nonradiative recombination lifetimes, which further validate the effectiveness of our approach.

  19. Depleted fully monolithic CMOS pixel detectors using a column based readout architecture for the ATLAS Inner Tracker upgrade

    Science.gov (United States)

    Wang, T.; Barbero, M.; Berdalovic, I.; Bespin, C.; Bhat, S.; Breugnon, P.; Caicedo, I.; Cardella, R.; Chen, Z.; Degerli, Y.; Egidos, N.; Godiot, S.; Guilloux, F.; Hemperek, T.; Hirono, T.; Krüger, H.; Kugathasan, T.; Hügging, F.; Marin Tobon, C. A.; Moustakas, K.; Pangaud, P.; Schwemling, P.; Pernegger, H.; Pohl, D.-L.; Rozanov, A.; Rymaszewski, P.; Snoeys, W.; Wermes, N.

    2018-03-01

    Depleted monolithic active pixel sensors (DMAPS), which exploit high voltage and/or high resistivity add-ons of modern CMOS technologies to achieve substantial depletion in the sensing volume, have proven to have high radiation tolerance towards the requirements of ATLAS in the high-luminosity LHC era. DMAPS integrating fast readout architectures are currently being developed as promising candidates for the outer pixel layers of the future ATLAS Inner Tracker, which will be installed during the phase II upgrade of ATLAS around year 2025. In this work, two DMAPS prototype designs, named LF-Monopix and TJ-Monopix, are presented. LF-Monopix was fabricated in the LFoundry 150 nm CMOS technology, and TJ-Monopix has been designed in the TowerJazz 180 nm CMOS technology. Both chips employ the same readout architecture, i.e. the column drain architecture, whereas different sensor implementation concepts are pursued. The paper makes a joint description of the two prototypes, so that their technical differences and challenges can be addressed in direct comparison. First measurement results for LF-Monopix will also be shown, demonstrating for the first time a fully functional fast readout DMAPS prototype implemented in the LFoundry technology.

  20. A methodology to assess the contribution of biorefineries to a sustainable bio-based economy

    International Nuclear Information System (INIS)

    Maga, Daniel

    2015-01-01

    Within this thesis, an integrative methodology to assess the sustainability of biorefineries and bio-based products has been developed for the first time, based on a fundamental understanding of sustainability as presented in the Brundtland report. The applied integrative concept of sustainability, as developed by the Institute for Technology Assessment and Systems Analysis (ITAS), overcomes the widespread thinking in three pillars of sustainability and opens up new perspectives. The methodology developed addresses innovative life cycle assessment evaluation methods at the midpoint level as well as at the area-of-protection level, and adopts state-of-the-art assessment procedures, e.g. to determine water deprivation. It goes far beyond the scope of conventional LCA studies and examines effects on human health, on the environment, on the development of knowledge and physical capital, and on regional development and acceptance. In order to validate the developed method, it was applied to an algae biorefinery currently under development and construction in the south of Spain. For this assessment, for the first time, extensive process data was collected from a real algae biorefinery which uses municipal waste water as a culture medium for microalgae. The use of waste water reduces the demand for fresh water and avoids additional fertilisation of the microalgae. Moreover, the analysed algae biorefinery replaces conventional waste water treatment by a biological purification and produces biogas by an anaerobic pretreatment of waste water as well as by anaerobic digestion of algae. After several purification steps the biogas can be used as automotive fuel and thus contributes to the further development and increased use of biofuels. On the one hand, the sustainability assessment shows that this way of waste water treatment contributes to climate protection and to the conservation of fossil energy carriers. On the other hand, approximately ten times more land is needed and twenty times

  1. A methodology to assess the contribution of biorefineries to a sustainable bio-based economy

    Energy Technology Data Exchange (ETDEWEB)

    Maga, Daniel

    2015-07-01

    Within this thesis, an integrative methodology to assess the sustainability of biorefineries and bio-based products has been developed for the first time, based on a fundamental understanding of sustainability as presented in the Brundtland report. The applied integrative concept of sustainability, as developed by the Institute for Technology Assessment and Systems Analysis (ITAS), overcomes the widespread thinking in three pillars of sustainability and opens up new perspectives. The methodology developed addresses innovative life cycle assessment evaluation methods at the midpoint level as well as at the area-of-protection level, and adopts state-of-the-art assessment procedures, e.g. to determine water deprivation. It goes far beyond the scope of conventional LCA studies and examines effects on human health, on the environment, on the development of knowledge and physical capital, and on regional development and acceptance. In order to validate the developed method, it was applied to an algae biorefinery currently under development and construction in the south of Spain. For this assessment, for the first time, extensive process data was collected from a real algae biorefinery which uses municipal waste water as a culture medium for microalgae. The use of waste water reduces the demand for fresh water and avoids additional fertilisation of the microalgae. Moreover, the analysed algae biorefinery replaces conventional waste water treatment by a biological purification and produces biogas by an anaerobic pretreatment of waste water as well as by anaerobic digestion of algae. After several purification steps the biogas can be used as automotive fuel and thus contributes to the further development and increased use of biofuels. On the one hand, the sustainability assessment shows that this way of waste water treatment contributes to climate protection and to the conservation of fossil energy carriers. On the other hand, approximately ten times more land is needed and twenty times

  2. Deuterium-depleted water

    International Nuclear Information System (INIS)

    Stefanescu, Ion; Steflea, Dumitru; Saros-Rogobete, Irina; Titescu, Gheorghe; Tamaian, Radu

    2001-01-01

    Deuterium-depleted water is water with an isotopic content lower than 145 ppm D/(D+H), which is the natural isotopic content of water. Deuterium-depleted water is produced by vacuum distillation in columns equipped with structured packing made from phosphor bronze or stainless steel. Deuterium-depleted water, the production technique and the structured packing are covered by patents of the National Institute of Research-Development for Cryogenics and Isotopic Technologies at Rm. Valcea. Research carried out in the last few years has shown that deuterium-depleted water is a biologically active product that could have many applications in medicine and agriculture. (authors)

  3. Integrated vehicle-based safety systems light-vehicle field operational test, methodology and results report.

    Science.gov (United States)

    2010-12-01

    "This document presents the methodology and results from the light-vehicle field operational test conducted as part of the Integrated Vehicle-Based Safety Systems program. These findings are the result of analyses performed by the University of Michi...

  4. Drag & Drop, Mixed-Methodology-based Lab-on-Chip Design Optimization Software, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The overall objective is to develop a "mixed-methodology", drag and drop, component library (fluidic-lego)-based, system design and optimization tool for complex...

  5. An Efficient Power Estimation Methodology for Complex RISC Processor-based Platforms

    OpenAIRE

    Rethinagiri, Santhosh Kumar; Ben Atitallah, Rabie; Dekeyser, Jean-Luc; Niar, Smail; Senn, Eric

    2012-01-01

    International audience; In this contribution, we propose an efficient power estimation methodology for complex RISC processor-based platforms. In this methodology, the Functional Level Power Analysis (FLPA) is used to set up generic power models for the different parts of the system. Then, a simulation framework based on a virtual platform is developed to evaluate accurately the activities used in the related power models. The combination of the two parts above leads to a heterogeneou...
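
    A rough sketch of an FLPA-style estimate: each part of the platform gets a simple power model driven by activity parameters that would normally be extracted from a virtual-platform simulation. The model forms, coefficients and activity values below are invented for illustration and are not those of the cited work.

```python
# Hedged sketch of a Functional Level Power Analysis (FLPA) style estimate:
# each platform part has a simple power model driven by activity parameters.
# All coefficients and activity values are invented for illustration.

# Generic power models: base power plus activity-dependent terms (watts)
def cpu_power(ipc, freq_ghz):
    return 0.15 + 0.30 * ipc * freq_ghz

def memory_power(access_rate_mhz):
    return 0.05 + 0.002 * access_rate_mhz

def interconnect_power(bus_utilization):
    return 0.02 + 0.10 * bus_utilization

# Activities as they might be reported by a simulation framework
activities = {"ipc": 1.2, "freq_ghz": 1.0, "access_rate_mhz": 120.0,
              "bus_utilization": 0.35}

total = (cpu_power(activities["ipc"], activities["freq_ghz"])
         + memory_power(activities["access_rate_mhz"])
         + interconnect_power(activities["bus_utilization"]))

print(f"Estimated platform power: {total:.3f} W")
```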

  6. ICT-Based, Cross-Cultural Communication: A Methodological Perspective

    Science.gov (United States)

    Larsen, Niels; Bruselius-Jensen, Maria; Danielsen, Dina; Nyamai, Rachael; Otiende, James; Aagaard-Hansen, Jens

    2014-01-01

    The article discusses how cross-cultural communication based on information and communication technologies (ICT) may be used in participatory health promotion as well as in education in general. The analysis draws on experiences from a health education research project with grade 6 (approx. 12 years) pupils in Nairobi (Kenya) and Copenhagen…

  7. Comparing econometric and survey-based methodologies in measuring offshoring

    DEFF Research Database (Denmark)

    Refslund, Bjarke

    2016-01-01

    such as the national or regional level. Most macro analyses are based on proxies and trade statistics with limitations. Drawing on unique Danish survey data, this article demonstrates how survey data can provide important insights into the national scale and impacts of offshoring, including changes of employment...

  8. Studying boat-based bear viewing: Methodological challenges and solutions

    Science.gov (United States)

    Sarah Elmeligi

    2007-01-01

    Wildlife viewing, a growing industry throughout North America, holds much potential for increased revenue and public awareness regarding species conservation. In Alaska and British Columbia, grizzly bear (Ursus arctos) viewing is becoming more popular, attracting tourists from around the world. Viewing is typically done from a land-based observation...

  9. Isotopic depletion with Monte Carlo

    International Nuclear Information System (INIS)

    Martin, W.R.; Rathkopf, J.A.

    1996-06-01

    This work considers a method to deplete isotopes during a time-dependent Monte Carlo simulation of an evolving system. The method is based on explicitly combining a conventional estimator for the scalar flux with the analytical solutions to the isotopic depletion equations. There are no auxiliary calculations; the method is an integral part of the Monte Carlo calculation. The method eliminates negative densities and reduces the variance in the estimates for the isotope densities, compared to existing methods. Moreover, existing methods are shown to be special cases of the general method described in this work, as they can be derived by combining a high variance estimator for the scalar flux with a low-order approximation to the analytical solution to the depletion equation
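
    The idea of combining an estimated scalar flux with the analytical solution of the depletion equations can be illustrated on a two-isotope chain, where the flux fixes the removal rates and a Bateman-type solution advances the densities over each time step. Cross sections, flux and densities below are illustrative values, not taken from the paper.

```python
# Hedged sketch: use an estimated scalar flux together with the analytical
# solution of a simple two-isotope depletion chain over one time step. A parent
# isotope is removed at rate sigma_p * phi and feeds a daughter removed at rate
# sigma_d * phi. Values are illustrative, not from the cited work.

import math

def deplete_step(n_parent, n_daughter, phi, sigma_p, sigma_d, dt):
    """Analytical solution of dNp/dt = -lp*Np, dNd/dt = lp*Np - ld*Nd."""
    lp = sigma_p * phi            # parent removal rate (1/s)
    ld = sigma_d * phi            # daughter removal rate (1/s)
    np_new = n_parent * math.exp(-lp * dt)
    nd_new = (n_daughter * math.exp(-ld * dt)
              + n_parent * lp / (ld - lp)
              * (math.exp(-lp * dt) - math.exp(-ld * dt)))
    return np_new, nd_new

# Illustrative numbers: flux in n/cm^2/s, cross sections in cm^2, dt in seconds
phi = 1.0e14
sigma_p, sigma_d = 100e-24, 50e-24
n_p, n_d = 1.0e21, 0.0

for step in range(3):
    n_p, n_d = deplete_step(n_p, n_d, phi, sigma_p, sigma_d, dt=30 * 86400.0)
    print(f"Step {step + 1}: parent = {n_p:.3e}, daughter = {n_d:.3e}")
```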

  10. Relational and Object-Oriented Methodology in Data Bases Systems

    Directory of Open Access Journals (Sweden)

    Marian Pompiliu CRISTESCU

    2006-01-01

    Full Text Available Database programming languages integrate concepts of databases and programming languages to provide both implementation tools for data-intensive applications and high-level user interfaces to databases. Frequently, database programs contain a large amount of application knowledge which is hidden in the procedural code and thus difficult to maintain with changing data and user views. This paper presents a first attempt to improve the situation by supporting the integrated definition and management of data and rules based on a set-oriented and predicative approach. The use of database technology for integrated fact and rule base management is shown to have some important advantages in terms of fact and rule integrity, question-answering, and explanation of results.

  11. Biomarkers of Acute Stroke Etiology (BASE) Study Methodology.

    Science.gov (United States)

    Jauch, Edward C; Barreto, Andrew D; Broderick, Joseph P; Char, Doug M; Cucchiara, Brett L; Devlin, Thomas G; Haddock, Alison J; Hicks, William J; Hiestand, Brian C; Jickling, Glen C; June, Jeff; Liebeskind, David S; Lowenkopf, Ted J; Miller, Joseph B; O'Neill, John; Schoonover, Tim L; Sharp, Frank R; Peacock, W Frank

    2017-05-05

    Acute ischemic stroke affects over 800,000 US adults annually, with hundreds of thousands more experiencing a transient ischemic attack. Emergent evaluation, prompt acute treatment, and identification of stroke or TIA (transient ischemic attack) etiology for specific secondary prevention are critical for decreasing further morbidity and mortality of cerebrovascular disease. The Biomarkers of Acute Stroke Etiology (BASE) study is a multicenter observational study to identify serum markers defining the etiology of acute ischemic stroke. Observational trial of patients presenting to the hospital within 24 h of stroke onset. Blood samples are collected at arrival, 24, and 48 h later, and RNA gene expression is utilized to identify stroke etiology marker candidates. The BASE study began January 2014. At the time of writing, there are 22 recruiting sites. Enrollment is ongoing, expected to hit 1000 patients by March 2017. The BASE study could potentially aid in focusing the initial diagnostic evaluation to determine stroke etiology, with more rapidly initiated targeted evaluations and secondary prevention strategies. Clinical Trial Registration Clinicaltrials.gov NCT02014896 https://clinicaltrials.gov/ct2/show/NCT02014896?term=biomarkers+of+acute+stroke+etiology&rank=1.

  12. Ozone Depletion Caused by Rocket Engine Emissions: A Fundamental Limit on the Scale and Viability of Space-Based Geoengineering Schemes

    Science.gov (United States)

    Ross, M. N.; Toohey, D.

    2008-12-01

    Emissions from solid and liquid propellant rocket engines reduce global stratospheric ozone levels. Currently ~ one kiloton of payloads are launched into earth orbit annually by the global space industry. Stratospheric ozone depletion from present day launches is a small fraction of the ~ 4% globally averaged ozone loss caused by halogen gases. Thus rocket engine emissions are currently considered a minor, if poorly understood, contributor to ozone depletion. Proposed space-based geoengineering projects designed to mitigate climate change would require order of magnitude increases in the amount of material launched into earth orbit. The increased launches would result in comparable increases in the global ozone depletion caused by rocket emissions. We estimate global ozone loss caused by three space-based geoengineering proposals to mitigate climate change: (1) mirrors, (2) sunshade, and (3) space-based solar power (SSP). The SSP concept does not directly engineer climate, but is touted as a mitigation strategy in that SSP would reduce CO2 emissions. We show that launching the mirrors or sunshade would cause global ozone loss between 2% and 20%. Ozone loss associated with an economically viable SSP system would be at least 0.4% and possibly as large as 3%. It is not clear which, if any, of these levels of ozone loss would be acceptable under the Montreal Protocol. The large uncertainties are mainly caused by a lack of data or validated models regarding liquid propellant rocket engine emissions. Our results offer four main conclusions. (1) The viability of space-based geoengineering schemes could well be undermined by the relatively large ozone depletion that would be caused by the required rocket launches. (2) Analysis of space-based geoengineering schemes should include the difficult tradeoff between the gain of long-term (~ decades) climate control and the loss of short-term (~ years) deep ozone loss. (3) The trade can be properly evaluated only if our

  13. Publishing FAIR Data: An Exemplar Methodology Utilizing PHI-Base.

    Science.gov (United States)

    Rodríguez-Iglesias, Alejandro; Rodríguez-González, Alejandro; Irvine, Alistair G; Sesma, Ane; Urban, Martin; Hammond-Kosack, Kim E; Wilkinson, Mark D

    2016-01-01

    Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species vs. the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be "FAIR"-Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences-the Pathogen-Host Interaction Database (PHI-base)-to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions, and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value-added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings.

  14. Publishing FAIR Data: An Exemplar Methodology Utilizing PHI-Base

    Science.gov (United States)

    Rodríguez-Iglesias, Alejandro; Rodríguez-González, Alejandro; Irvine, Alistair G.; Sesma, Ane; Urban, Martin; Hammond-Kosack, Kim E.; Wilkinson, Mark D.

    2016-01-01

    Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species vs. the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be “FAIR”—Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences—the Pathogen-Host Interaction Database (PHI-base)—to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions, and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value-added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings. PMID:27433158

  15. Publishing FAIR Data: an exemplar methodology utilizing PHI-base

    Directory of Open Access Journals (Sweden)

    Alejandro Rodríguez Iglesias

    2016-05-01

    Full Text Available Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species versus the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be FAIR - Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences - the Pathogen-Host Interaction Database (PHI-base) - to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions, and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value-added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings.

  16. JOB SHOP METHODOLOGY BASED ON AN ANT COLONY

    Directory of Open Access Journals (Sweden)

    OMAR CASTRILLON

    2009-01-01

    Full Text Available The purpose of this study is to reduce the total process time (makespan) and to increase the machines' working time, in a job shop environment, using a heuristic based on ant colony optimization. This work is developed in two phases: the first stage describes the identification and definition of heuristics for the sequential processes in the job shop. The second stage shows the effectiveness of the system in the traditional programming of production. A good solution, with 99% efficiency, is found using this technique.
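
    A toy sketch of the ant-colony idea applied to sequencing jobs for minimum makespan, here on a small permutation flow shop (a simplified relative of the job shop treated in the article); the instance data, pheromone update rule and parameters are illustrative assumptions rather than the heuristics defined by the authors.

```python
# Toy sketch of an ant-colony search for a job sequence minimizing makespan on
# a small permutation flow shop. Instance data and parameters are illustrative.

import random

rng = random.Random(1)

# processing_times[job][machine] for 4 jobs on 3 machines (invented instance)
processing_times = [
    [3, 2, 4],
    [2, 5, 1],
    [4, 1, 3],
    [3, 3, 2],
]
n_jobs = len(processing_times)

def makespan(sequence):
    """Completion time of the last job on the last machine for one sequence."""
    n_machines = len(processing_times[0])
    finish = [0.0] * n_machines
    for job in sequence:
        for m in range(n_machines):
            start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
            finish[m] = start + processing_times[job][m]
    return finish[-1]

# Pheromone on (position, job) pairs; higher values attract more ants
pheromone = [[1.0] * n_jobs for _ in range(n_jobs)]
best_seq, best_span = None, float("inf")

for iteration in range(50):
    for ant in range(10):
        remaining = list(range(n_jobs))
        seq = []
        for pos in range(n_jobs):
            weights = [pheromone[pos][j] for j in remaining]
            job = rng.choices(remaining, weights=weights)[0]
            remaining.remove(job)
            seq.append(job)
        span = makespan(seq)
        if span < best_span:
            best_seq, best_span = seq, span
    # Evaporate pheromone, then reinforce the best sequence found so far
    for pos in range(n_jobs):
        for j in range(n_jobs):
            pheromone[pos][j] *= 0.9
        pheromone[pos][best_seq[pos]] += 1.0 / best_span

print(f"Best sequence: {best_seq}, makespan = {best_span}")
```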

  17. Application of risk-based methodologies to prioritize safety resources

    International Nuclear Information System (INIS)

    Rahn, F.J.; Sursock, J.P.; Hosler, J.

    1993-01-01

    The Electric Power Research Institute (EPRI) started a program entitled risk-based prioritization in 1992. The purpose of this program is to provide generic technical support to the nuclear power industry relative to its recent initiatives in the area of operations and maintenance (O&M) cost control using state-of-the-art risk methods. The approach uses probabilistic risk assessment (PRA), or similar techniques, to allocate resources commensurate with the risk posed by nuclear plant operations. Specifically, those items or events that have high risk significance would receive the most attention, while those with little risk content would command fewer resources. As quantified in a companion paper, the potential O&M cost reduction inherent in this approach is very large. Furthermore, risk-based methods should also lead to safety improvements. This paper outlines the way that the EPRI technical work complements the technical, policy, and regulatory initiatives taken by others in the industry and provides an example of the approach as used to prioritize motor-operated valve (MOV) testing in response to US Nuclear Regulatory Commission (NRC) Generic Letter 89-10

  18. Kinetics of depletion interactions

    NARCIS (Netherlands)

    Vliegenthart, G.A.; Schoot, van der P.P.A.M.

    2003-01-01

    Depletion interactions between colloidal particles dispersed in a fluid medium are effective interactions induced by the presence of other types of colloid. They are not instantaneous but built up in time. We show by means of Brownian dynamics simulations that the static (mean-field) depletion force

  19. Methodology for risk-based analysis of technical specifications

    International Nuclear Information System (INIS)

    Vesely, W.E.; Gaertner, J.P.; Wagner, D.P.

    1985-01-01

    Part of the effort by EPRI to apply probabilistic risk assessment methods and results to the solution of utility problems involves the investigation of methods for risk-based analysis of technical specifications. The culmination of this investigation is the SOCRATES computer code developed by Battelle's Columbus Laboratories to assist in the evaluation of technical specifications of nuclear power plants. The program is designed to use information found in PRAs to re-evaluate risk for changes in component allowed outage times (AOTs) and surveillance test intervals (STIs). The SOCRATES program is a unique and important tool for technical specification evaluations. The detailed component unavailability model allows a detailed analysis of AOT and STI contributions to risk. Explicit equations allow fast and inexpensive calculations. Because the code is designed to accept ranges of parameters and to save results of calculations that do not change during the analysis, sensitivity studies are efficiently performed and results are clearly displayed

  20. Management of depleted uranium

    International Nuclear Information System (INIS)

    2001-01-01

    Large stocks of depleted uranium have arisen as a result of enrichment operations, especially in the United States and the Russian Federation. Countries with depleted uranium stocks are interested in assessing strategies for the use and management of depleted uranium. The choice of strategy depends on several factors, including government and business policy, alternative uses available, the economic value of the material, regulatory aspects and disposal options, and international market developments in the nuclear fuel cycle. This report presents the results of a depleted uranium study conducted by an expert group organised jointly by the OECD Nuclear Energy Agency and the International Atomic Energy Agency. It contains information on current inventories of depleted uranium, potential future arisings, long term management alternatives, peaceful use options and country programmes. In addition, it explores ideas for international collaboration and identifies key issues for governments and policy makers to consider. (authors)

  1. Advanced TEM Characterization for the Development of 28-14nm nodes based on fully-depleted Silicon-on-Insulator Technology

    International Nuclear Information System (INIS)

    Servanton, G; Clement, L; Lepinay, K; Lorut, F; Pantel, R; Pofelski, A; Bicais, N

    2013-01-01

    The growing demand for wireless multimedia applications (smartphones, tablets, digital cameras) requires the development of devices combining high-speed performance with low power consumption. A recent technological breakthrough offering a good compromise between these two antagonistic requirements has been proposed: the 28-14nm CMOS transistor generations based on fully-depleted Silicon-on-Insulator (FD-SOI) technology, fabricated on a thin Si film of 5-6nm. In this paper, we propose to review the TEM characterization challenges that are essential for the development of extremely power-efficient System on Chip (SoC)

  2. Tornado missile simulation and design methodology. Volume 2: model verification and data base updates. Final report

    International Nuclear Information System (INIS)

    Twisdale, L.A.; Dunn, W.L.

    1981-08-01

    A probabilistic methodology has been developed to predict the probabilities of tornado-propelled missiles impacting and damaging nuclear power plant structures. Mathematical models of each event in the tornado missile hazard have been developed and sequenced to form an integrated, time-history simulation methodology. The models are data based where feasible. The data include documented records of tornado occurrence, field observations of missile transport, results of wind tunnel experiments, and missile impact tests. Probabilistic Monte Carlo techniques are used to estimate the risk probabilities. The methodology has been encoded in the TORMIS computer code to facilitate numerical analysis and plant-specific tornado missile probability assessments
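
    The layered Monte Carlo structure described above can be illustrated with a toy simulation in which each event model (tornado occurrence, missile injection, transport, impact) is sampled in sequence. All distributions, rates, and geometry below are invented placeholders, not TORMIS models or data.

```python
import math
import random

def simulate_strike_probability(n_trials=100_000, seed=1):
    """Toy Monte Carlo chain: tornado occurrence -> missile injection ->
    transport -> impact on a target area. All distributions are illustrative."""
    random.seed(seed)
    target_x, target_y, target_r = 500.0, 0.0, 20.0   # hypothetical plant structure (m)
    p_tornado_per_year = 1e-3                          # hypothetical occurrence rate
    hits = 0
    for _ in range(n_trials):
        if random.random() > p_tornado_per_year:       # no tornado in this simulated year
            continue
        n_missiles = random.randint(1, 50)             # objects picked up by the wind field
        for _ in range(n_missiles):
            # Sampled injection speed and direction, then a crude ballistic range.
            speed = random.uniform(20.0, 80.0)         # m/s
            angle = random.uniform(0.0, 2.0 * math.pi)
            rng = speed ** 2 / 9.81                    # very rough transport model
            x, y = rng * math.cos(angle), rng * math.sin(angle)
            if math.hypot(x - target_x, y - target_y) < target_r:
                hits += 1
                break                                  # count at most one strike per year
    return hits / n_trials

print(f"estimated annual strike probability: {simulate_strike_probability():.2e}")
```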

  3. A solar reserve methodology for renewable energy integration studies based on sub-hourly variability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ibanez, Eduardo; Brinkman, Gregory; Hummon, Marissa [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lew, Debra

    2012-07-01

    The increasing penetration of wind and solar energy is raising concerns among electric system operators because of the variability and uncertainty associated with these power sources. Previous work focused on the quantification of reserves for systems with wind power. This paper presents a new methodology that allows the determination of necessary reserves for high penetrations of photovoltaic power and compares it to the wind-based methodology. The solar reserve methodology was applied to Phase 2 of the Western Wind and Solar Integration Study. A summary of the results is included. (orig.)

  4. Systematic screening methodology and energy efficient design of ionic liquid-based separation processes

    DEFF Research Database (Denmark)

    Kulajanpeng, Kusuma; Suriyapraphadilok, Uthaiporn; Gani, Rafiqul

    2016-01-01

    A systematic methodology for the screening of ionic liquids (ILs) as entrainers and for the design of ILs-based separation processes in various homogeneous binary azeotropic mixtures has been developed. The methodology focuses on the homogeneous binary aqueous azeotropic systems (for example, water... ...in size of the target solute was investigated using the same separation process and IL entrainer to obtain the same product purity. The proposed methodology has been evaluated through a case study of binary alcoholic aqueous azeotropic separation: water+ethanol and water+isopropanol.

  5. A methodology for sunlight urban planning: a computer-based solar and sky vault obstruction analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Fernando Oscar Ruttkay; Silva, Carlos Alejandro Nome [Federal Univ. of Santa Catarina (UFSC), Dept. of Architecture and Urbanism, Florianopolis, SC (Brazil); Turkienikz, Benamy [Federal Univ. of Rio Grande do Sul (UFRGS), Faculty of Architecture, Porto Alegre, RS (Brazil)

    2001-07-01

    The main purpose of the present study is to describe a planning methodology to improve the quality of the built environment based on the rational control of solar radiation and the view of the sky vault. The main criterion used to control the access and obstruction of solar radiation was the concept of desirability and undesirability of solar radiation. A case study for implementing the proposed methodology is developed. Although needing further development to find its way into regulations and practical applications, the methodology has shown strong potential to address an aspect of urban planning that would otherwise be almost impossible to handle. (Author)

  6. Fuzzy Clustering based Methodology for Multidimensional Data Analysis in Computational Forensic Domain

    OpenAIRE

    Kilian Stoffel; Paul Cotofrei; Dong Han

    2012-01-01

    As an interdisciplinary domain requiring advanced and innovative methodologies, computational forensics is characterized by data that are simultaneously large-scale, uncertain, multidimensional, and approximate. Forensic domain experts, trained to discover hidden patterns in crime data, are limited in their analysis without the assistance of a computational intelligence approach. In this paper, a methodology and an automatic procedure based on fuzzy set theory and designed to infer precis...

  7. A Simulation-Based Soft Error Estimation Methodology for Computer Systems

    OpenAIRE

    Sugihara, Makoto; Ishihara, Tohru; Hashimoto, Koji; Muroyama, Masanori

    2006-01-01

    This paper proposes a simulation-based soft error estimation methodology for computer systems. Accumulating soft error rates (SERs) of all memories in a computer system results in pessimistic soft error estimation. This is because memory cells are used spatially and temporally and not all soft errors in them make the computer system faulty. Our soft-error estimation methodology considers the locations and the timings of soft errors occurring at every level of memory hierarchy and estimates th...

  8. State of the art in HGPT (Heuristically Based Generalized Perturbation) methodology

    International Nuclear Information System (INIS)

    Gandini, A.

    1993-01-01

    A distinctive feature of the heuristically based generalized perturbation theory (HGPT) methodology is the systematic use of importance conservation concepts. As is well known, this use leads to fundamental reciprocity relationships from which perturbation, or sensitivity, expressions can be derived. The state of the art of the HGPT methodology is illustrated here. Its application to a number of specific nonlinear fields of interest is discussed. (author)

  9. D2.1 - An EA Active, Problem Based Learning Methodology - EAtrain2

    DEFF Research Database (Denmark)

    Ryberg, Thomas; Georgsen, Marianne; Buus, Lillian

    This deliverable reports on the work undertaken in work package 2 with the key objective to develop a learning methodology for web 2.0 mediated Enterprise Architecture (EA) learning building on a problem based learning (PBL) approach. The deliverable reports not only on the methodology but also on the activities leading to its development (literature review, workshops, etc.) and on further outcomes of this work relevant to the platform specification and pilot courses preparation.

  10. Design methodology for bio-based processing: Biodiesel and fatty alcohol production

    DEFF Research Database (Denmark)

    Simasatikul, Lida; Arpornwichanop, Amornchai; Gani, Rafiqul

    2012-01-01

    A systematic design methodology is developed for producing two main products plus side products starting with one or more bio-based renewable source. A superstructure that includes all possible reaction and separation operations is generated through thermodynamic insights and available data. ... Economic analysis and net present value are determined to find the best economically and operationally feasible process. The application of the methodology is presented through a case study involving biodiesel and fatty alcohol productions.

  11. Design methodology for bio-based processing: Biodiesel and fatty alcohol production

    DEFF Research Database (Denmark)

    Simasatikul, Lida; Arpornwichanopa, Amornchai; Gani, Rafiqul

    2013-01-01

    A systematic design methodology is developed for producing multiple main products plus side products starting with one or more bio-based renewable source. A superstructure that includes all possible reaction and separation operations is generated through thermodynamic insights and available data. ... Economic analysis and net present value are determined to find the best economically and operationally feasible process. The application of the methodology is presented through a case study involving biodiesel and fatty alcohol productions.

  12. Leakage localisation method in a water distribution system based on sensitivity matrix: methodology and real test

    OpenAIRE

    Pascual Pañach, Josep

    2010-01-01

    Leaks are present in all water distribution systems. In this paper a method for leakage detection and localisation is presented. It uses pressure measurements and simulation models. Leakage localisation methodology is based on pressure sensitivity matrix. Sensitivity is normalised and binarised using a common threshold for all nodes, so a signatures matrix is obtained. A pressure sensor optimal distribution methodology is developed too, but it is not used in the real test. To validate this...
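
    A minimal sketch of the signature-matching idea described above is given below: the sensitivity matrix is normalised and binarised with a common threshold, and a measured residual vector is matched against the resulting signatures. The network, threshold values, and matching score are illustrative assumptions, not the authors' algorithm or data.

```python
import numpy as np

def build_signatures(S, threshold):
    """Normalise each column of the sensitivity matrix (node pressure response
    per candidate leak location) and binarise it with a common threshold."""
    S = np.asarray(S, dtype=float)
    norms = np.linalg.norm(S, axis=0)
    norms[norms == 0] = 1.0
    return (np.abs(S) / norms > threshold).astype(int)

def localise_leak(signatures, residuals, threshold):
    """Binarise the measured pressure residuals and return the candidate leak
    whose signature agrees with the most entries."""
    scale = np.linalg.norm(residuals) or 1.0
    r_bin = (np.abs(residuals) / scale > threshold).astype(int)
    scores = (signatures == r_bin[:, None]).sum(axis=0)
    return int(np.argmax(scores)), scores

# Hypothetical 4-node network with 3 candidate leak locations.
S = [[0.9, 0.1, 0.0],
     [0.8, 0.2, 0.1],
     [0.1, 0.7, 0.6],
     [0.0, 0.1, 0.9]]
signatures = build_signatures(S, threshold=0.3)
measured_residuals = np.array([0.85, 0.75, 0.15, 0.05])   # pretend sensor data
leak, scores = localise_leak(signatures, measured_residuals, threshold=0.3)
print("most likely leak location:", leak, "match scores:", scores)
```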

  13. Problems and Issues in Using Computer- Based Support Tools to Enhance 'Soft' Systems Methodologies

    Directory of Open Access Journals (Sweden)

    Mark Stansfield

    2001-11-01

    Full Text Available This paper explores the issue of whether computer-based support tools can enhance the use of 'soft' systems methodologies as applied to real-world problem situations. Although work has been carried out by a number of researchers in applying computer-based technology to concepts and methodologies relating to 'soft' systems thinking such as Soft Systems Methodology (SSM), such attempts appear to be still in their infancy and have not been applied widely to real-world problem situations. This paper will highlight some of the problems that may be encountered in attempting to develop computer-based support tools for 'soft' systems methodologies. Particular attention will be paid to an attempt by the author to develop a computer-based support tool for a particular 'soft' systems method of inquiry known as the Appreciative Inquiry Method that is based upon Vickers' notion of 'appreciation' (Vickers, 1965) and Checkland's SSM (Checkland, 1981). The final part of the paper will explore some of the lessons learnt from developing and applying the computer-based support tool to a real world problem situation, as well as considering the feasibility of developing computer-based support tools for 'soft' systems methodologies. This paper will put forward the point that a mixture of manual and computer-based tools should be employed to allow a methodology to be used in an unconstrained manner, but the benefits provided by computer-based technology should be utilised in supporting and enhancing the more mundane and structured tasks.

  14. DInSAR-Based Detection of Land Subsidence and Correlation with Groundwater Depletion in Konya Plain, Turkey

    Directory of Open Access Journals (Sweden)

    Fabiana Caló

    2017-01-01

    Full Text Available In areas where groundwater overexploitation occurs, land subsidence triggered by aquifer compaction is observed, resulting in high socio-economic impacts for the affected communities. In this paper, we focus on the Konya region, one of the leading economic centers in the agricultural and industrial sectors in Turkey. We present a multi-source data approach aimed at investigating the complex and fragile environment of this area, which is heavily affected by groundwater drawdown and ground subsidence. In particular, in order to analyze the spatial and temporal pattern of the subsidence process, we use the Small BAseline Subset DInSAR technique to process two datasets of ENVISAT SAR images spanning the 2002–2010 period. The produced ground deformation maps and associated time-series allow us to detect a wide subsidence area extending for about 1200 km2 and to measure vertical displacements reaching up to 10 cm in the observed time interval. DInSAR results, complemented with climatic, stratigraphic and piezometric data as well as with land-cover change information, allow us to gain more insight into the impact of climate change and human activities on groundwater depletion and land subsidence.

  15. A calculational procedure for neutronic and depletion analysis of Molten-Salt reactors based on SCALE6/TRITON

    International Nuclear Information System (INIS)

    Sheu, R.J.; Chang, J.S.; Liu, Y.-W. H.

    2011-01-01

    Molten-Salt Reactors (MSRs) represent one of the selected categories in the GEN-IV program. This type of reactor is distinguished by the use of liquid fuel circulating in and out of the core, which makes online refueling and salt processing possible. However, this operating characteristic also complicates the modeling and simulation of reactor core behaviour using conventional neutronic codes. The TRITON sequence in the SCALE6 code system has been designed to provide the combined capabilities of problem-dependent cross-section processing and rigorous treatment of neutron transport, coupled with ORIGEN-S depletion calculations. In order to accommodate the simulation of a dynamic refueling and processing scheme, an in-house program REFRESH together with a run script were developed for carrying out a series of stepwise TRITON calculations, which makes it easier to analyze the neutronic properties and performance of a MSR core design. As a demonstration and cross check, we have applied this method to reexamine the conceptual design of the Molten Salt Actinide Recycler & Transmuter (MOSART). This paper summarizes the development of the method and preliminary results of its application on MOSART. (author)
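
    The stepwise scheme described above can be pictured as a driver loop that alternates externally executed depletion steps with an online salt-processing update of the fuel composition. In the sketch below, run_depletion_step and process_salt are hypothetical stand-ins for invoking an external lattice/depletion code; they do not reproduce the REFRESH/TRITON interface or any MOSART data.

```python
# Sketch of a stepwise depletion driver for a circulating-fuel reactor.
# All functions and numbers are illustrative placeholders.

def run_depletion_step(composition, power_mw, days):
    """Placeholder for one externally executed depletion calculation that
    returns the end-of-step composition (atoms of each nuclide)."""
    burned = dict(composition)
    burned["U235"] *= 0.99          # fake burnup just to keep the example runnable
    burned["FP"] = burned.get("FP", 0.0) + composition["U235"] * 0.01
    return burned

def process_salt(composition, removal_fraction, feed_u235):
    """Online salt processing: remove a fraction of fission products and feed fresh fuel."""
    processed = dict(composition)
    processed["FP"] *= (1.0 - removal_fraction)
    processed["U235"] += feed_u235
    return processed

composition = {"U235": 1.0e27, "FP": 0.0}
for step in range(10):                       # ten 30-day depletion intervals
    composition = run_depletion_step(composition, power_mw=2400.0, days=30.0)
    composition = process_salt(composition, removal_fraction=0.05, feed_u235=1.0e25)
    print(f"step {step + 1}: U235={composition['U235']:.3e}, FP={composition['FP']:.3e}")
```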

  16. Halo Star Lithium Depletion

    International Nuclear Information System (INIS)

    Pinsonneault, M. H.; Walker, T. P.; Steigman, G.; Narayanan, Vijay K.

    1999-01-01

    The depletion of lithium during the pre-main-sequence and main-sequence phases of stellar evolution plays a crucial role in the comparison of the predictions of big bang nucleosynthesis with the abundances observed in halo stars. Previous work has indicated a wide range of possible depletion factors, ranging from minimal in standard (nonrotating) stellar models to as much as an order of magnitude in models that include rotational mixing. Recent progress in the study of the angular momentum evolution of low-mass stars permits the construction of theoretical models capable of reproducing the angular momentum evolution of low-mass open cluster stars. The distribution of initial angular momenta can be inferred from stellar rotation data in young open clusters. In this paper we report on the application of these models to the study of lithium depletion in main-sequence halo stars. A range of initial angular momenta produces a range of lithium depletion factors on the main sequence. Using the distribution of initial conditions inferred from young open clusters leads to a well-defined halo lithium plateau with modest scatter and a small population of outliers. The mass-dependent angular momentum loss law inferred from open cluster studies produces a nearly flat plateau, unlike previous models that exhibited a downward curvature for hotter temperatures in the 7Li-Teff plane. The overall depletion factor for the plateau stars is sensitive primarily to the solar initial angular momentum used in the calibration for the mixing diffusion coefficients. Uncertainties remain in the treatment of the internal angular momentum transport in the models, and the potential impact of these uncertainties on our results is discussed. The 6Li/7Li depletion ratio is also examined. We find that the dispersion in the plateau and the 6Li/7Li depletion ratio scale with the absolute 7Li depletion in the plateau, and we use observational data to set bounds on the 7Li depletion in main-sequence halo

  17. Methodological Aspects of Building Science-based Sports Training System for Taekwondo Sportsmen

    Directory of Open Access Journals (Sweden)

    Ananchenko Konstantin

    2016-10-01

    Full Text Available The authors have solved topical scientific problems in the article: (1) the research base in the construction of theoretical and methodological foundations of sports training, based on taekwondo, has been analysed; (2) the organization and methodological requirements for the training sessions of taekwondo have been researched; (3) the necessity of interaction processes of natural development and adaptation to physical activity of young taekwondo sportsmen has been grounded; (4) the necessity of scientific evidence of building young fighters' training loads in microcycles, based on their individualization, has been proved.

  18. Predicting Dissertation Methodology Choice among Doctoral Candidates at a Faith-Based University

    Science.gov (United States)

    Lunde, Rebecca

    2017-01-01

    Limited research has investigated dissertation methodology choice and the factors that contribute to this choice. Quantitative research is based in mathematics and scientific positivism, and qualitative research is based in constructivism. These underlying philosophical differences raise the question of whether certain factors predict dissertation…

  19. A Step-by-Step Design Methodology for a Base Case Vanadium Redox-Flow Battery

    Science.gov (United States)

    Moore, Mark; Counce, Robert M.; Watson, Jack S.; Zawodzinski, Thomas A.; Kamath, Haresh

    2012-01-01

    The purpose of this work is to develop an evolutionary procedure to be used by Chemical Engineering students for the base-case design of a Vanadium Redox-Flow Battery. The design methodology is based on the work of Douglas (1985) and provides a profitability analysis at each decision level so that more profitable alternatives and directions can be…

  20. AREVA main steam line break fully coupled methodology based on CATHARE-ARTEMIS - 15496

    International Nuclear Information System (INIS)

    Denis, L.; Jasserand, L.; Tomatis, D.; Segond, M.; Royere, C.; Sauvage, J.Y.

    2015-01-01

    The CATHARE code, developed since 1979 by AREVA, CEA, EDF and IRSN, is one of the major thermal-hydraulic system codes worldwide. In order to have realistic methodologies based on CATHARE at disposal for the whole transient and accident analysis in Chapter 15 of Safety Reports, a coupling with the code ARTEMIS was developed. ARTEMIS is the core code in AREVA's new reactor simulator system ARCADIA, using COBRA-FLX to model the thermal-hydraulics in the core. The Fully Coupled Methodology was adapted to the CATHARE-ARTEMIS coupling to perform Main Steam Line Break studies. This methodology, originally applied to the MANTA-SMART-FLICA coupling, is dedicated to Main Steam Line Break transients at zero power. The aim of this paper is to present the coupling between CATHARE and ARTEMIS and the application of the Fully Coupled Methodology in a different code environment. (authors)

  1. The multi-copy simultaneous search methodology: a fundamental tool for structure-based drug design.

    Science.gov (United States)

    Schubert, Christian R; Stultz, Collin M

    2009-08-01

    Fragment-based ligand design approaches, such as the multi-copy simultaneous search (MCSS) methodology, have proven to be useful tools in the search for novel therapeutic compounds that bind pre-specified targets of known structure. MCSS offers a variety of advantages over more traditional high-throughput screening methods, and has been applied successfully to challenging targets. The methodology is quite general and can be used to construct functionality maps for proteins, DNA, and RNA. In this review, we describe the main aspects of the MCSS method and outline the general use of the methodology as a fundamental tool to guide the design of de novo lead compounds. We focus our discussion on the evaluation of MCSS results and the incorporation of protein flexibility into the methodology. In addition, we demonstrate on several specific examples how the information arising from the MCSS functionality maps has been successfully used to predict ligand binding to protein targets and RNA.

  2. Addressing Ozone Layer Depletion

    Science.gov (United States)

    Access information on EPA's efforts to address ozone layer depletion through regulations, collaborations with stakeholders, international treaties, partnerships with the private sector, and enforcement actions under Title VI of the Clean Air Act.

  3. ORGANIZATION OF FUTURE ENGINEERS' PROJECT-BASED LEARNING WHEN STUDYING THE PROJECT MANAGEMENT METHODOLOGY

    Directory of Open Access Journals (Sweden)

    Halyna V. Lutsenko

    2015-02-01

    Full Text Available The peculiarities of modern world experience in implementing project-based learning in engineering education have been considered. The potential role and place of projects in learning activity have been analyzed. The methodology of organizing project-based activity of engineering students when studying the project management methodology and computer systems of project management has been proposed. The requirements for documentation and actual results of students' projects have been described in detail. The requirements for computer-aided project management systems, developed using Microsoft Project in the scope of schedule and resource planning, have been formulated.

  4. Performance-based methodology for assessing seismic vulnerability and capacity of buildings

    Science.gov (United States)

    Shibin, Lin; Lili, Xie; Maosheng, Gong; Ming, Li

    2010-06-01

    This paper presents a performance-based methodology for the assessment of seismic vulnerability and capacity of buildings. The vulnerability assessment methodology is based on the HAZUS methodology and the improved capacity-demand-diagram method. The spectral displacement (Sd) of performance points on a capacity curve is used to estimate the damage level of a building. The relationship between Sd and peak ground acceleration (PGA) is established, and then a new vulnerability function is expressed in terms of PGA. Furthermore, the expected value of the seismic capacity index (SCev) is provided to estimate the seismic capacity of buildings based on the probability distribution of damage levels and the corresponding seismic capacity index. The results indicate that the proposed vulnerability methodology is able to assess the seismic damage of a large building stock directly and quickly following an earthquake. The SCev provides an effective index to measure the seismic capacity of buildings and to illustrate the relationship between the seismic capacity of buildings and seismic action. The estimated result is compared with damage surveys of the cities of Dujiangyan and Jiangyou in the M8.0 Wenchuan earthquake, revealing that the methodology is acceptable for seismic risk assessment and decision making. The primary reasons for discrepancies between the estimated results and the damage surveys are discussed.
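
    The expected seismic capacity index mentioned above is essentially a probability-weighted average over damage states. The short sketch below illustrates the arithmetic with made-up damage-state probabilities and capacity indices; the numbers are not taken from the paper.

```python
# Expected seismic capacity index from a damage-state probability distribution.
# Damage states, probabilities, and capacity indices are invented for illustration.

damage_probabilities = {"none": 0.40, "slight": 0.30, "moderate": 0.20,
                        "extensive": 0.08, "complete": 0.02}
capacity_index = {"none": 1.00, "slight": 0.80, "moderate": 0.55,
                  "extensive": 0.30, "complete": 0.05}

sc_ev = sum(damage_probabilities[s] * capacity_index[s] for s in damage_probabilities)
print(f"expected seismic capacity index SCev = {sc_ev:.3f}")
```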

  5. A neural network based methodology to predict site-specific spectral acceleration values

    Science.gov (United States)

    Kamatchi, P.; Rajasankar, J.; Ramana, G. V.; Nagpal, A. K.

    2010-12-01

    A general neural network based methodology that has the potential to replace the computationally-intensive site-specific seismic analysis of structures is proposed in this paper. The basic framework of the methodology consists of a feed forward back propagation neural network algorithm with one hidden layer to represent the seismic potential of a region and soil amplification effects. The methodology is implemented and verified with parameters corresponding to Delhi city in India. For this purpose, strong ground motions are generated at bedrock level for a chosen site in Delhi due to earthquakes considered to originate from the central seismic gap of the Himalayan belt using necessary geological as well as geotechnical data. Surface level ground motions and corresponding site-specific response spectra are obtained by using a one-dimensional equivalent linear wave propagation model. Spectral acceleration values are considered as a target parameter to verify the performance of the methodology. Numerical studies carried out to validate the proposed methodology show that the errors in predicted spectral acceleration values are within acceptable limits for design purposes. The methodology is general in the sense that it can be applied to other seismically vulnerable regions and also can be updated by including more parameters depending on the state-of-the-art in the subject.
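
    As a rough illustration of the kind of network described above (one hidden layer, trained by back-propagation, with a spectral acceleration value as the target), the following sketch trains a small feed-forward network on synthetic data. The input features, data, and hyperparameters are assumptions made for the example, not the authors' model of the Delhi site.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: inputs could stand for (magnitude, distance, site period),
# the target for a spectral acceleration value. All numbers here are invented.
X = rng.uniform([5.0, 10.0, 0.1], [8.0, 200.0, 2.0], size=(200, 3))
y = (0.5 * X[:, 0] - 0.01 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.05, 200)).reshape(-1, 1)

# Normalise inputs and target to keep training stable.
Xn = (X - X.mean(0)) / X.std(0)
yn = (y - y.mean()) / y.std()

# One-hidden-layer feed-forward network trained with plain back-propagation.
n_hidden, lr = 8, 0.01
W1 = rng.normal(0, 0.5, (3, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

for epoch in range(2000):
    h = np.tanh(Xn @ W1 + b1)           # hidden layer
    out = h @ W2 + b2                   # linear output
    err = out - yn
    # Gradients of the mean squared error.
    dW2 = h.T @ err / len(Xn); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = Xn.T @ dh / len(Xn); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = np.tanh(Xn @ W1 + b1) @ W2 + b2
print("training RMSE (normalised units):", float(np.sqrt(((pred - yn) ** 2).mean())))
```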

  6. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for the neutronics and T-H coupled simulation for the pressurized water reactor. MPACT includes the ORIGEN-API and internal depletion module to perform depletion calculations based upon neutron-material reaction and radioactive decay. It is a challenge to validate the depletion capability because of insufficient measured data. One of the alternative ways to validate it is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  7. Recurrence formulas for evaluating expansion series of depletion functions

    International Nuclear Information System (INIS)

    Vukadin, Z.

    1991-01-01

    A high-accuracy analytical method for solving the depletion equations for chains of radioactive nuclides is based on the formulation of depletion functions. When all the arguments of the depletion function are too close to each other, series expansions of the depletion function have to be used. However, the high-accuracy series expressions for the depletion functions of high index become too complicated. Recursion relations are derived which enable an efficient high-accuracy evaluation of the depletion functions with high indices. (orig.) [de
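
    To see why nearly equal decay constants are troublesome, consider the standard Bateman solution for a linear chain, in which each term carries denominators of the form (lambda_j - lambda_i). The sketch below evaluates this textbook formula directly; it is a generic illustration of the cancellation problem that motivates series expansions and recurrence relations, not the author's formulation of the depletion functions.

```python
import math

def bateman_chain(n0, lambdas, t):
    """Number of atoms of the last member of a linear decay chain at time t,
    starting from n0 atoms of the first member only (standard Bateman solution)."""
    n = len(lambdas)
    prefactor = n0
    for lam in lambdas[:-1]:
        prefactor *= lam
    total = 0.0
    for i in range(n):
        denom = 1.0
        for j in range(n):
            if j != i:
                denom *= (lambdas[j] - lambdas[i])
        total += math.exp(-lambdas[i] * t) / denom
    return prefactor * total

# Well-separated decay constants: the direct formula behaves nicely.
print(bateman_chain(1.0e20, [1.0e-3, 5.0e-4, 1.0e-4], t=2000.0))

# Nearly equal decay constants: the (lambda_j - lambda_i) denominators nearly cancel,
# which is the loss-of-accuracy regime that series expansions and the recurrence
# relations discussed above are meant to handle.
print(bateman_chain(1.0e20, [1.0e-3, 1.0000001e-3, 0.9999999e-3], t=2000.0))
```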

  8. The Integration of Project-Based Methodology into Teaching in Machine Translation

    Science.gov (United States)

    Madkour, Magda

    2016-01-01

    This quantitative-qualitative analytical research aimed at investigating the effect of integrating project-based teaching methodology into teaching machine translation on students' performance. Data was collected from the graduate students in the College of Languages and Translation, at Imam Muhammad Ibn Saud Islamic University, Riyadh, Saudi…

  9. On endocytoscopy and posttherapy pathologic staging in esophageal cancers, and on evidence-based methodology.

    Science.gov (United States)

    Chao, Yin-Kai; Kawada, Kenro; Kumagai, Youichi; Takubo, Kaiyo; Wang, Helen H

    2014-09-01

    The following, from the 12th OESO World Conference: Cancers of the Esophagus, includes commentaries on the value of endocytoscopy to replace biopsy histology for squamous cell carcinoma and the clinical significance of posttherapy pathologic stage in patients with esophageal adenocarcinoma following preoperative chemoradiation; a short discussion of evidence-based methodology is also included. © 2014 New York Academy of Sciences.

  10. The Idea of National HRD: An Analysis Based on Economics and Theory Development Methodology

    Science.gov (United States)

    Wang, Greg G.; Swanson, Richard A.

    2008-01-01

    Recent human resource development (HRD) literature focuses attention on national HRD (NHRD) research and represents problems in both HRD identity and research methodology. Based on a review of development economics and international development literature, this study analyzes the existing NHRD literature with respect to the theory development…

  11. EA Training 2.0 Newsletter #3 - EA Active, Problem Based Learning Methodology

    DEFF Research Database (Denmark)

    Buus, Lillian; Ryberg, Thomas; Sroga, Magdalena

    2010-01-01

    The main products of the project are innovative, active problem-based learning methodology for EA education and training, EA courses for university students and private and public sector employees, and an Enterprise Architecture competence ontology including a complete specification of skills...

  12. Project based learning in organizations: towards a methodology for learning in groups

    NARCIS (Netherlands)

    Poell, R.F.; Krogt, F.J. van der

    2003-01-01

    This article introduces a methodology for employees in organizations to set up and carry out their own group learning projects. It is argued that employees can use project-based learning to make their everyday learning more systematic at times, without necessarily formalizing it. The article

  13. Project-based learning in organizations : Towards a methodology for learning in groups

    NARCIS (Netherlands)

    Poell, R.F.; van der Krogt, F.J.

    2003-01-01

    This article introduces a methodology for employees in organizations to set up and carry out their own group learning projects. It is argued that employees can use project-based learning to make their everyday learning more systematic at times, without necessarily formalizing it. The article

  14. A model-based software development methodology for high-end automotive components

    NARCIS (Netherlands)

    Ravanan, Mahmoud

    2014-01-01

    This report provides a model-based software development methodology for high-end automotive components. The V-model is used as a process model throughout the development of the software platform. It offers a framework that simplifies the relation between requirements, design, implementation,

  15. Teaching Research Methodology Using a Project-Based Three Course Sequence Critical Reflections on Practice

    Science.gov (United States)

    Braguglia, Kay H.; Jackson, Kanata A.

    2012-01-01

    This article presents a reflective analysis of teaching research methodology through a three course sequence using a project-based approach. The authors reflect critically on their experiences in teaching research methods courses in an undergraduate business management program. The introduction of a range of specific techniques including student…

  16. Measuring the Differences between Traditional Learning and Game-Based Learning Using Electroencephalography (EEG) Physiologically Based Methodology

    Science.gov (United States)

    Chen, Ching-Huei

    2017-01-01

    Students' cognitive states can reflect a learning experience that results in engagement in an activity. In this study, we used electroencephalography (EEG) physiologically based methodology to evaluate students' levels of attention and relaxation, as well as their learning performance within a traditional and game-based learning context. While no…

  17. Combining Project-Based Learning and Community-Based Research in a Research Methodology Course: The Lessons Learned

    Science.gov (United States)

    Arantes do Amaral, João Alberto; Lino dos Santos, Rebeca Júlia Rodrigues

    2018-01-01

    In this article, we present our findings regarding the course "Research Methodology," offered to 22 first-year undergraduate students studying Administration at the Federal University of São Paulo, Osasco, Brazil. The course, which combined community-based research and project-based learning, was developed during the second semester of…

  18. Hydrologic transport of depleted uranium associated with open air dynamic range testing at Los Alamos National Laboratory, New Mexico, and Eglin Air Force Base, Florida

    Energy Technology Data Exchange (ETDEWEB)

    Becker, N.M. [Los Alamos National Lab., NM (United States); Vanta, E.B. [Wright Laboratory Armament Directorate, Eglin Air Force Base, FL (United States)

    1995-05-01

    Hydrologic investigations on depleted uranium fate and transport associated with dynamic testing activities were instituted in the 1980s at Los Alamos National Laboratory and Eglin Air Force Base. At Los Alamos, extensive field watershed investigations of soil, sediment, and especially runoff water were conducted. Eglin conducted field investigations and runoff studies similar to those at Los Alamos at former and active test ranges. Laboratory experiments complemented the field investigations at both installations. Mass balance calculations were performed to quantify the mass of expended uranium which had transported away from firing sites. At Los Alamos, it is estimated that more than 90 percent of the uranium still remains in close proximity to firing sites, which has been corroborated by independent calculations. At Eglin, we estimate that 90 to 95 percent of the uranium remains at test ranges. These data demonstrate that uranium moves slowly via surface water, in both semi-arid (Los Alamos) and humid (Eglin) environments.

  19. Hydrologic transport of depleted uranium associated with open air dynamic range testing at Los Alamos National Laboratory, New Mexico, and Eglin Air Force Base, Florida

    International Nuclear Information System (INIS)

    Becker, N.M.; Vanta, E.B.

    1995-01-01

    Hydrologic investigations on depleted uranium fate and transport associated with dynamic testing activities were instituted in the 1980's at Los Alamos National Laboratory and Eglin Air Force Base. At Los Alamos, extensive field watershed investigations of soil, sediment, and especially runoff water were conducted. Eglin conducted field investigations and runoff studies similar to those at Los Alamos at former and active test ranges. Laboratory experiments complemented the field investigations at both installations. Mass balance calculations were performed to quantify the mass of expended uranium which had transported away from firing sites. At Los Alamos, it is estimated that more than 90 percent of the uranium still remains in close proximity to firing sites, which has been corroborated by independent calculations. At Eglin, we estimate that 90 to 95 percent of the uranium remains at test ranges. These data demonstrate that uranium moves slowly via surface water, in both semi-arid (Los Alamos) and humid (Eglin) environments

  20. A pattern-based methodology for optimizing stitches in double-patterning technology

    Science.gov (United States)

    Wang, Lynn T.; Madhavan, Sriram; Dai, Vito; Capodieci, Luigi

    2015-03-01

    A pattern-based methodology for optimizing stitches is developed based on identifying stitch topologies and replacing them with pre-characterized fixing solutions in decomposed layouts. A topology-based library of stitches with predetermined fixing solutions is built. A pattern-based engine searches for matching topologies in the decomposed layouts. When a match is found, the engine opportunistically replaces the predetermined fixing solution: only a design rule check error-free replacement is preserved. The methodology is demonstrated on a 20nm layout design that contains over 67 million, first metal layer stitches. Results show that a small library containing 3 stitch topologies improves the stitch area regularity by 4x.
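
    The match-and-replace flow described above can be sketched as a lookup from stitch topology to a pre-characterized fix, applied only when the replacement passes a design rule check. The topology keys, fix names, and drc_clean stub below are hypothetical placeholders, not the actual engine or rule deck.

```python
# Illustrative flow for topology-matched stitch fixing in a decomposed layout.
# Everything named here is a made-up stand-in for the real pattern engine.

# Library mapping a stitch topology key to a pre-characterized fixing solution.
FIX_LIBRARY = {
    "L-bend": "shift_stitch_to_straight_segment",
    "T-junction": "widen_overlap_region",
    "end-of-line": "extend_overlap_by_min_rule",
}

def drc_clean(layout, stitch, fix):
    """Placeholder design-rule check of the candidate replacement."""
    return fix is not None          # pretend every known fix passes DRC

def optimize_stitches(layout, stitches):
    fixed, kept = 0, 0
    for stitch in stitches:
        fix = FIX_LIBRARY.get(stitch["topology"])
        if fix and drc_clean(layout, stitch, fix):
            stitch["solution"] = fix      # opportunistic replacement
            fixed += 1
        else:
            kept += 1                     # leave the original stitch untouched
    return fixed, kept

stitches = [{"topology": "L-bend"}, {"topology": "T-junction"}, {"topology": "unknown"}]
print(optimize_stitches(layout=None, stitches=stitches))
```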

  1. Methodology to estimate parameters of an excitation system based on experimental conditions

    Energy Technology Data Exchange (ETDEWEB)

    Saavedra-Montes, A.J. [Carrera 80 No 65-223, Bloque M8 oficina 113, Escuela de Mecatronica, Universidad Nacional de Colombia, Medellin (Colombia); Calle 13 No 100-00, Escuela de Ingenieria Electrica y Electronica, Universidad del Valle, Cali, Valle (Colombia); Ramirez-Scarpetta, J.M. [Calle 13 No 100-00, Escuela de Ingenieria Electrica y Electronica, Universidad del Valle, Cali, Valle (Colombia); Malik, O.P. [2500 University Drive N.W., Electrical and Computer Engineering Department, University of Calgary, Calgary, Alberta (Canada)

    2011-01-15

    A methodology to estimate the parameters of a potential-source controlled rectifier excitation system model is presented in this paper. The proposed parameter estimation methodology is based on the characteristics of the excitation system. A comparison of two pseudo random binary signals, two sampling periods for each one, and three estimation algorithms is also presented. Simulation results from an excitation control system model and experimental results from an excitation system of a power laboratory setup are obtained. To apply the proposed methodology, the excitation system parameters are identified at two different levels of the generator saturation curve. The results show that it is possible to estimate the parameters of the standard model of an excitation system, recording two signals and the system operating in closed loop with the generator. The normalized sum of squared error obtained with experimental data is below 10%, and with simulation data is below 5%. (author)
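
    A minimal sketch of this kind of identification experiment is shown below: a binary excitation signal (standing in for a true PRBS) drives a simple first-order discrete-time model of the excitation system, and the parameters are recovered by ordinary least squares, with a normalised sum of squared errors as the fit measure. The model structure, signal, and noise level are assumptions for the example, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random binary excitation signal (illustrative stand-in for a true PRBS).
u = np.where(rng.random(500) > 0.5, 1.0, -1.0)

# "True" first-order discrete-time system y[k] = a*y[k-1] + b*u[k-1] + noise,
# standing in for the excitation-system response to be identified.
a_true, b_true = 0.8, 0.5
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + rng.normal(0, 0.01)

# Ordinary least-squares estimate of the ARX parameters from the recorded data.
Phi = np.column_stack([y[:-1], u[:-1]])        # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta

# Normalised sum of squared errors between measured and model output.
y_hat = Phi @ theta
nsse = float(np.sum((y[1:] - y_hat) ** 2) / np.sum(y[1:] ** 2))
print(f"a={a_hat:.3f}, b={b_hat:.3f}, normalised SSE={nsse:.2%}")
```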

  2. Conjugate gradient based projection - A new explicit methodology for frictional contact

    Science.gov (United States)

    Tamma, Kumar K.; Li, Maocheng; Sha, Desong

    1993-01-01

    With special attention towards the applicability to parallel computation or vectorization, a new and effective explicit approach for linear complementarity formulations involving a conjugate gradient based projection methodology is proposed in this study for contact problems with Coulomb friction. The overall objectives are focussed towards providing an explicit methodology of computation for the complete contact problem with friction. In this regard, the primary idea for solving the linear complementarity formulations stems from an established search direction which is projected to a feasible region determined by the non-negative constraint condition; this direction is then applied to the Fletcher-Reeves conjugate gradient method resulting in a powerful explicit methodology which possesses high accuracy, excellent convergence characteristics, fast computational speed and is relatively simple to implement for contact problems involving Coulomb friction.
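
    The idea of combining a Fletcher-Reeves search direction with projection onto the non-negative constraint set can be sketched, under the assumption of a symmetric positive definite matrix, as a projected iteration on the equivalent bound-constrained quadratic program. This is a generic sketch of the concept, not the authors' scheme for frictional contact.

```python
import numpy as np

def projected_fletcher_reeves(M, q, max_iter=500, tol=1e-10):
    """Sketch: solve the symmetric LCP  w = Mz + q, z >= 0, w >= 0, z'w = 0  via the
    equivalent bound-constrained quadratic  min 0.5 z'Mz + q'z, z >= 0, using
    Fletcher-Reeves directions, exact line searches, and projection onto z >= 0."""
    n = len(q)
    z = np.zeros(n)
    g = M @ z + q                     # gradient of the quadratic
    d = -g
    for _ in range(max_iter):
        Md = M @ d
        curvature = d @ Md
        if curvature <= 0.0:
            break
        alpha = -(g @ d) / curvature  # exact minimiser along d for the quadratic
        z_trial = z + alpha * d
        clipped = z_trial < 0.0
        z = np.maximum(z_trial, 0.0)  # projection enforces the non-negativity constraint
        g_new = M @ z + q
        if np.linalg.norm(np.minimum(z, g_new)) < tol:   # complementarity-style test
            g = g_new
            break
        if clipped.any():
            d = -g_new                # restart after the projection became active
        else:
            beta = (g_new @ g_new) / (g @ g)             # Fletcher-Reeves coefficient
            d = -g_new + beta * d
        g = g_new
    return z, M @ z + q

# Small symmetric positive definite test problem.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, 2.0])
z, w = projected_fletcher_reeves(M, q)
print("z =", z, " w = Mz + q =", w, " z.w =", float(z @ w))
```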

  3. A methodology of SiP testing based on boundary scan

    Science.gov (United States)

    Qin, He; Quan, Haiyang; Han, Yifei; Zhu, Tianrui; Zheng, Tuo

    2017-10-01

    Systems in Package (SiP) play an important role in portable, aerospace, and military electronics owing to their microminiaturization, light weight, high density, and high reliability. At present, SiP system testing faces problems of system complexity and fault localisation as the system scale increases exponentially. For SiP systems, this paper proposes a testing methodology and testing process based on boundary scan technology. Combining the characteristics of SiP systems with boundary scan theory for PCB circuits and embedded core test, a specific testing methodology and process is proposed. The hardware requirements of the SiP system under test are specified, and the hardware platform for the testing has been constructed. The testing methodology offers high test efficiency and accurate fault localisation.

  4. Advanced piloted aircraft flight control system design methodology. Volume 1: Knowledge base

    Science.gov (United States)

    Mcruer, Duane T.; Myers, Thomas T.

    1988-01-01

    The development of a comprehensive and eclectic methodology for conceptual and preliminary design of flight control systems is presented and illustrated. The methodology is focused on the design stages starting with the layout of system requirements and ending when some viable competing system architectures (feedback control structures) are defined. The approach is centered on the human pilot and the aircraft as both the sources of, and the keys to the solution of, many flight control problems. The methodology relies heavily on computational procedures which are highly interactive with the design engineer. To maximize effectiveness, these techniques, as selected and modified to be used together in the methodology, form a cadre of computational tools specifically tailored for integrated flight control system preliminary design purposes. While theory and associated computational means are an important aspect of the design methodology, the lore, knowledge and experience elements, which guide and govern applications, are critical features. This material is presented as summary tables, outlines, recipes, empirical data, lists, etc., which encapsulate a great deal of expert knowledge. Much of this is presented in topical knowledge summaries which are attached as Supplements. The composite of the supplements and the main body elements constitutes a first cut at a Mark 1 Knowledge Base for manned-aircraft flight control.

  5. INSTALLING AN ERP SYSTEM WITH A METHODOLOGY BASED ON THE PRINCIPLES OF GOAL DIRECTED PROJECT MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Ioannis Zafeiropoulos

    2010-01-01

    Full Text Available This paper describes a generic methodology to support the process of modelling, adaptation and implementation (MAI) of Enterprise Resource Planning Systems (ERPS) based on the principles of goal directed project management (GDPM). The proposed methodology guides the project manager through specific stages in order to successfully complete the ERPS implementation. The development of the proper MAI methodology is deemed necessary because it will simplify the installation process of ERPS. The goal directed project management method was chosen since it provides a way of focusing all changes towards a predetermined goal. The main stages of the methodology are the promotion and preparation steps, the proposal, the contract, the implementation and the completion. The methodology was applied as a pilot application by a major ERPS development company. Important benefits were the easy and effective guidance for all installation and analysis stages, the faster installation for the ERPS and the control and cost reduction for the installation, in terms of time, manpower, technological equipment and other resources.

  6. Acceleration-based methodology to assess the blast mitigation performance of explosive ordnance disposal helmets

    Science.gov (United States)

    Dionne, J. P.; Levine, J.; Makris, A.

    2018-01-01

    To design the next generation of blast mitigation helmets that offer increasing levels of protection against explosive devices, manufacturers must be able to rely on appropriate test methodologies and human surrogates that will differentiate the performance level of various helmet solutions and ensure user safety. Ideally, such test methodologies and associated injury thresholds should be based on widely accepted injury criteria relevant within the context of blast. Unfortunately, even though significant research has taken place over the last decade in the area of blast neurotrauma, there currently exists no agreement in terms of injury mechanisms for blast-induced traumatic brain injury. In absence of such widely accepted test methods and injury criteria, the current study presents a specific blast test methodology focusing on explosive ordnance disposal protective equipment, involving the readily available Hybrid III mannequin, initially developed for the automotive industry. The unlikely applicability of the associated brain injury criteria (based on both linear and rotational head acceleration) is discussed in the context of blast. Test results encompassing a large number of blast configurations and personal protective equipment are presented, emphasizing the possibility to develop useful correlations between blast parameters, such as the scaled distance, and mannequin engineering measurements (head acceleration). Suggestions are put forward for a practical standardized blast testing methodology taking into account limitations in the applicability of acceleration-based injury criteria as well as the inherent variability in blast testing results.

  7. Design-Based Research: Is This a Suitable Methodology for Short-Term Projects?

    Science.gov (United States)

    Pool, Jessica; Laubscher, Dorothy

    2016-01-01

    This article reports on a design-based methodology of a thesis in which a fully face-to-face contact module was converted into a blended learning course. The purpose of the article is to report on how design-based phases, in the form of micro-, meso- and macro-cycles were applied to improve practice and to generate design principles. Design-based…

  8. Revisiting Antarctic Ozone Depletion

    Science.gov (United States)

    Grooß, Jens-Uwe; Tritscher, Ines; Müller, Rolf

    2015-04-01

    Antarctic ozone depletion has been known for almost three decades, and it is well established that it is caused by chlorine-catalysed ozone depletion inside the polar vortex. However, there are still some details which need to be clarified. In particular, there is a current debate on the relative importance of liquid aerosol and crystalline NAT and ice particles for chlorine activation. Particles have a threefold impact on polar chlorine chemistry: temporary removal of HNO3 from the gas-phase (uptake), permanent removal of HNO3 from the atmosphere (denitrification), and chlorine activation through heterogeneous reactions. We have performed simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) employing a recently developed algorithm for saturation-dependent NAT nucleation for the Antarctic winters 2011 and 2012. The simulation results are compared with different satellite observations. With the help of these simulations, we investigate the role of the different processes responsible for chlorine activation and ozone depletion. In particular, the sensitivity with respect to the particle type has been investigated. If temperatures are artificially forced to only allow cold binary liquid aerosol, the simulation still shows significant chlorine activation and ozone depletion. The results of the 3-D Chemical Transport Model CLaMS simulations differ from purely Lagrangian long-time trajectory box model simulations, which indicates the importance of mixing processes.

  9. Satellite based radar interferometry to estimate large-scale soil water depletion from clay shrinkage: possibilities and limitations

    NARCIS (Netherlands)

    Brake, te B.; Hanssen, R.F.; Ploeg, van der M.J.; Rooij, de G.H.

    2013-01-01

    Satellite-based radar interferometry is a technique capable of measuring small surface elevation changes at large scales and with a high resolution. In vadose zone hydrology, it has been recognized for a long time that surface elevation changes due to swell and shrinkage of clayey soils can serve as

  10. Physics of fully depleted CCDs

    International Nuclear Information System (INIS)

    Holland, S E; Bebek, C J; Kolbe, W F; Lee, J S

    2014-01-01

    In this work we present simple, physics-based models for two effects that have been noted in the fully depleted CCDs that are presently used in the Dark Energy Survey Camera. The first effect is the observation that the point-spread function increases slightly with the signal level. This is explained by considering the effect on charge-carrier diffusion due to the reduction in the magnitude of the channel potential as collected signal charge acts to partially neutralize the fixed charge in the depleted channel. The resulting reduced voltage drop across the carrier drift region decreases the vertical electric field and increases the carrier transit time. The second effect is the observation of low-level, concentric ring patterns seen in uniformly illuminated images. This effect is shown to be most likely due to lateral deflection of charge during the transit of the photo-generated carriers to the potential wells as a result of lateral electric fields. The lateral fields are a result of space charge in the fully depleted substrates arising from resistivity variations inherent to the growth of the high-resistivity silicon used to fabricate the CCDs

  11. Attack Methodology Analysis: Emerging Trends in Computer-Based Attack Methodologies and Their Applicability to Control System Networks

    Energy Technology Data Exchange (ETDEWEB)

    Bri Rolston

    2005-06-01

    Threat characterization is a key component in evaluating the threat faced by control systems. Without a thorough understanding of the threat faced by critical infrastructure networks, adequate resources cannot be allocated or directed effectively to the defense of these systems. Traditional methods of threat analysis focus on identifying the capabilities and motivations of a specific attacker, assessing the value the adversary would place on targeted systems, and deploying defenses according to the threat posed by the potential adversary. Too many effective exploits and tools exist and are easily accessible to anyone with access to an Internet connection, minimal technical skills, and a significantly reduced motivational threshold to be able to narrow the field of potential adversaries effectively. Understanding how hackers evaluate new IT security research and incorporate significant new ideas into their own tools provides a means of anticipating how IT systems are most likely to be attacked in the future. This research, Attack Methodology Analysis (AMA), could supply pertinent information on how to detect and stop new types of attacks. Since the exploit methodologies and attack vectors developed in the general Information Technology (IT) arena can be converted for use against control system environments, assessing areas in which cutting edge exploit development and remediation techniques are occurring can provide significant intelligence for control system network exploitation, defense, and a means of assessing threat without identifying specific capabilities of individual opponents. Attack Methodology Analysis begins with the study of what exploit technology and attack methodologies are being developed within both the black hat and white hat parts of the Information Technology (IT) security research community. Once a solid understanding of the cutting edge security research is established, emerging trends in attack methodology can be identified and the gap between

  12. Development of a new nuclide generation and depletion code using a topological solver based on graph theory

    Energy Technology Data Exchange (ETDEWEB)

    Kasselmann, S., E-mail: s.kasselmann@fz-juelich.de [Forschungszentrum Jülich, 52425 Jülich (Germany); Schitthelm, O. [Forschungszentrum Jülich, 52425 Jülich (Germany); Tantillo, F. [Forschungszentrum Jülich, 52425 Jülich (Germany); Institute for Reactor Safety and Reactor Technology, RWTH-Aachen, 52064 Aachen (Germany); Scholthaus, S.; Rössel, C. [Forschungszentrum Jülich, 52425 Jülich (Germany); Allelein, H.-J. [Forschungszentrum Jülich, 52425 Jülich (Germany); Institute for Reactor Safety and Reactor Technology, RWTH-Aachen, 52064 Aachen (Germany)

    2016-09-15

    The problem of calculating the amounts of a coupled nuclide system varying with time, especially when exposed to a neutron flux, is a well-known problem and has been addressed by a number of computer codes. These codes cover a broad spectrum of applications, are based on comprehensive validation work and are therefore justifiably renowned among their users. However, due to their long development history, they are lacking a modern interface, which impedes a fast and robust internal coupling to other codes applied in the field of nuclear reactor physics. Therefore a project has been initiated to develop a new object-oriented nuclide transmutation code. It comprises an innovative solver based on graph theory, which exploits the topology of nuclide chains and therefore speeds up the calculation scheme. Highest priority has been given to the existence of a generic software interface as well as easy handling by making use of XML files for the user input. In this paper we report on the status of the code development and present first benchmark results, which prove the applicability of the selected approach.
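
    The benefit of a topological view of the nuclide network can be sketched as follows: because the transmutation graph in this example is acyclic, nuclides can be solved one at a time in topological order, each balance equation using only the already-computed time histories of its parents. The network, rates, and integration scheme below are invented for illustration and are unrelated to the code described above.

```python
from collections import defaultdict, deque

# Illustrative acyclic decay/transmutation network; rates and branchings are
# made-up numbers, not evaluated nuclear data.
edges = {("A", "B"): 0.6, ("A", "C"): 0.4, ("B", "C"): 1.0}   # parent -> child: branching
removal = {"A": 1.0e-3, "B": 5.0e-4, "C": 1.0e-5}             # effective removal rate (1/s)

def topological_order(nuclides, edges):
    children, indegree = defaultdict(list), {n: 0 for n in nuclides}
    for (p, c) in edges:
        children[p].append(c)
        indegree[c] += 1
    queue = deque(n for n in nuclides if indegree[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for c in children[n]:
            indegree[c] -= 1
            if indegree[c] == 0:
                queue.append(c)
    return order

def deplete(n0, t_end, steps=20_000):
    """Because the network is acyclic, nuclides can be solved one at a time in
    topological order: each balance equation dN/dt = production - removal*N only
    needs the time histories of parents that were already computed."""
    dt = t_end / steps
    parents = defaultdict(list)
    for (p, c) in edges:
        parents[c].append(p)
    order = topological_order(n0.keys(), edges)
    history = {}
    for nuc in order:
        n = n0[nuc]
        traj = [n]
        for k in range(steps):
            production = sum(edges[(p, nuc)] * removal[p] * history[p][k] for p in parents[nuc])
            n += dt * (production - removal[nuc] * n)     # explicit Euler step
            traj.append(n)
        history[nuc] = traj
    return {nuc: history[nuc][-1] for nuc in order}

print(deplete({"A": 1.0e24, "B": 0.0, "C": 0.0}, t_end=3600.0))
```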

  13. Development of a new nuclide generation and depletion code using a topological solver based on graph theory

    International Nuclear Information System (INIS)

    Kasselmann, S.; Scholthaus, S.; Rössel, C.; Allelein, H.-J.

    2014-01-01

    The problem of calculating the amounts of a coupled nuclide system varying with time, especially when exposed to a neutron flux, is a well-known problem and has been addressed by a number of computer codes. These codes cover a broad spectrum of applications, are based on comprehensive validation work and are therefore justifiably renowned among their users. However, due to their long development history, they are lacking a modern interface, which impedes a fast and robust internal coupling to other codes applied in the field of nuclear reactor physics. Therefore a project has been initiated to develop a new object-oriented nuclide transmutation code. It comprises an innovative solver based on graph theory, which exploits the topology of nuclide chains. This makes it possible to always deal with the smallest nuclide system relevant to the problem of interest. Highest priority has been given to the existence of a generic software interface as well as easy handling by making use of XML files for input and output. In this paper we report on the status of the code development and present first benchmark results, which prove the applicability of the selected approach. (author)

  14. Environmental restoration risk-based prioritization work package planning and risk ranking methodology. Revision 2

    International Nuclear Information System (INIS)

    Dail, J.L.; Nanstad, L.D.; White, R.K.

    1995-06-01

    This document presents the risk-based prioritization methodology developed to evaluate and rank Environmental Restoration (ER) work packages at the five US Department of Energy, Oak Ridge Field Office (DOE-ORO) sites [i.e., Oak Ridge K-25 Site (K-25), Portsmouth Gaseous Diffusion Plant (PORTS), Paducah Gaseous Diffusion Plant (PGDP), Oak Ridge National Laboratory (ORNL), and the Oak Ridge Y-12 Plant (Y-12)], the ER Off-site Program, and Central ER. This prioritization methodology was developed to support the increased rigor and formality of work planning in the overall conduct of operations within the DOE-ORO ER Program. Prioritization is conducted as an integral component of the fiscal ER funding cycle to establish program budget priorities. The purpose of the ER risk-based prioritization methodology is to provide ER management with the tools and processes needed to evaluate, compare, prioritize, and justify fiscal budget decisions for a diverse set of remedial action, decontamination and decommissioning, and waste management activities. The methodology provides the ER Program with a framework for (1) organizing information about identified DOE-ORO environmental problems, (2) generating qualitative assessments of the long- and short-term risks posed by DOE-ORO environmental problems, and (3) evaluating the benefits associated with candidate work packages designed to reduce those risks. Prioritization is conducted to rank ER work packages on the basis of the overall value (e.g., risk reduction, stakeholder confidence) each package provides to the ER Program. Application of the methodology yields individual work package "scores" and rankings that are used to develop fiscal budget requests. This document presents the technical basis for the decision support tools and process
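
    As an illustration of this kind of scoring-and-ranking step, the sketch below computes weighted benefit scores for two hypothetical work packages. It is a generic weighted-sum example, not the actual DOE-ORO scheme; criteria, weights and ratings are invented.

```python
# Generic weighted-benefit scoring of work packages (illustrative values only)
criteria = {"risk_reduction": 0.5, "stakeholder_confidence": 0.2,
            "regulatory_compliance": 0.2, "cost_effectiveness": 0.1}

work_packages = {   # qualitative 1-5 ratings per criterion (hypothetical)
    "groundwater remediation": {"risk_reduction": 5, "stakeholder_confidence": 4,
                                "regulatory_compliance": 5, "cost_effectiveness": 2},
    "legacy facility D&D":     {"risk_reduction": 3, "stakeholder_confidence": 3,
                                "regulatory_compliance": 4, "cost_effectiveness": 4},
}

scores = {wp: sum(criteria[c] * r for c, r in ratings.items())
          for wp, ratings in work_packages.items()}
for wp, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{s:.2f}  {wp}")
```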

  15. [Methodological novelties applied to the anthropology of food: agent-based models and social networks analysis].

    Science.gov (United States)

    Díaz Córdova, Diego

    2016-01-01

    The aim of this article is to introduce two methodological strategies that have not often been utilized in the anthropology of food: agent-based models and social networks analysis. In order to illustrate these methods in action, two cases based in materials typical of the anthropology of food are presented. For the first strategy, fieldwork carried out in Quebrada de Humahuaca (province of Jujuy, Argentina) regarding meal recall was used, and for the second, elements of the concept of "domestic consumption strategies" applied by Aguirre were employed. The underlying idea is that, given that eating is recognized as a "total social fact" and, therefore, as a complex phenomenon, the methodological approach must also be characterized by complexity. The greater the number of methods utilized (with the appropriate rigor), the better able we will be to understand the dynamics of feeding in the social environment.
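
    As a minimal illustration of the social network analysis strategy (not the study's actual data or model), the sketch below builds a toy food-sharing network between households with networkx and ranks households by degree centrality; all names and ties are invented.

```python
# Toy food-sharing network analysed with networkx (illustrative only)
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("household_1", "household_2"), ("household_1", "household_3"),
    ("household_2", "household_4"), ("household_3", "household_4"),
    ("household_4", "household_5"),
])

# Degree centrality as a rough proxy for a household's role in food exchange
centrality = nx.degree_centrality(G)
for node, c in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{node}: {c:.2f}")
```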

  16. A methodological approach for designing a usable ontology-based GUI in healthcare.

    Science.gov (United States)

    Lasierra, N; Kushniruk, A; Alesanco, A; Borycki, E; García, J

    2013-01-01

    This paper presents a methodological approach to the design and evaluation of an interface for an ontology-based system used for designing care plans for monitoring patients at home. In order to define the care plans, physicians need a tool for creating instances of the ontology and configuring some rules. Our purpose is to develop an interface to allow clinicians to interact with the ontology. Although ontology-driven applications do not necessarily present the ontology in the user interface, it is our hypothesis that showing selected parts of the ontology in a "usable" way could enhance clinicians' understanding and make the definition of the care plans easier. Based on prototyping and iterative testing, this methodology combines visualization techniques and usability methods. Preliminary results obtained after a formative evaluation indicate the effectiveness of the suggested combination.

  17. Remedial Action Assessment System: A computer-based methodology for conducting feasibility studies

    International Nuclear Information System (INIS)

    White, M.K.; Buelt, J.L.; Stottlemyre, J.A.

    1991-02-01

    Because of the complexity and number of waste sites facing the US Department of Energy (DOE) for potential cleanup, DOE is supporting the development of a computer-based methodology to streamline the remedial investigation/feasibility study process. The Remedial Action Assessment System (RAAS) can be used for screening, linking, and evaluating established technology processes in support of conducting feasibility studies. It is also intended to do the same in support of corrective measures studies. The user interface employs menus, windows, help features, and graphical information while RAAS is in operation. Object-oriented programming is used to link unit processes into sets of compatible processes that form appropriate remedial alternatives. Once the remedial alternatives are formed, the RAAS methodology can evaluate them in terms of effectiveness, implementability, and cost. RAAS will access a user-selected risk assessment code to determine the reduction of risk after remedial action by each recommended alternative. The methodology will also help determine the implementability of the remedial alternatives at a site and access cost estimating tools to provide estimates of capital, operating, and maintenance costs. This paper presents the characteristics of two RAAS prototypes currently being developed. These include the RAAS Technology Information System, which accesses graphical, tabular and textual information about technologies, and the main RAAS methodology, which screens, links, and evaluates remedial technologies. 4 refs., 3 figs., 1 tab

  18. Motivating Students for Project-based Learning for Application of Research Methodology Skills.

    Science.gov (United States)

    Tiwari, Ranjana; Arya, Raj Kumar; Bansal, Manoj

    2017-12-01

    Project-based learning (PBL) is motivational for students to learn research methodology skills. It is a way to engage students and give them ownership over their own learning. The aim of this study is to use PBL for the application of research methodology skills for better learning, by encouraging an all-inclusive approach to teaching and learning rather than an individualized, tailored approach. The present study was carried out for MBBS 6th- and 7th-semester students of community medicine. Students and faculty were sensitized to PBL and the components of research methodology skills. They worked in small groups. The students were asked to fill in the student feedback questionnaire and the faculty were asked to fill in the faculty feedback questionnaire. Both questionnaires were assessed on a 5-point Likert scale. After the projects were submitted, document analysis was done. A total of 99 students of the 6th and 7th semesters participated in PBL. About 90.91% of students agreed that there should be continuation of PBL in subsequent batches; 73.74% felt satisfied and motivated with PBL, whereas 76.77% felt that they would be able to use research methodology in the near future. PBL requires considerable knowledge, effort, persistence, and self-regulation on the part of the students. They need to devise plans, gather information, and evaluate both their findings and their approach. The facilitator plays a critical role in helping students in the process by shaping opportunities for learning, guiding students' thinking, and helping them construct new understanding.

  19. Learning Theory Bases of Communicative Methodology and the Notional/Functional Syllabus

    OpenAIRE

    Jacqueline D., Beebe

    1992-01-01

    This paper examines the learning theories that underlie the philosophy and practices known as communicative language teaching methodology. These theories are identified first as a reaction against the behavioristic learning theory of audiolingualism. Approaches to syllabus design based on both the "weak" version of communicative language teaching (learning to use the second language) and the "strong" version (using the second language to learn it) are examined. The application of cognitive theory...

  20. Contextual System of Symbol Structural Recognition based on an Object-Process Methodology

    OpenAIRE

    Delalandre, Mathieu

    2005-01-01

    We present in this paper a symbol recognition system for graphic documents. It is based on a contextual approach to symbol structural recognition exploiting an Object-Process Methodology. It uses a processing library composed of structural recognition and contextual evaluation processing steps. These steps allow our system to deal with the multi-representation of symbols. The different processing steps are controlled, in an automatic way, by an inference engine during the r...

  1. Potential-based methodology for active sound control in three dimensional settings

    OpenAIRE

    Lim, H.; Utyuzhnikov, S V; Lam, Y.W.; Kelly, L.

    2014-01-01

    This paper extends a potential-based approach to active noise shielding with preservation of wanted sound in three-dimensional settings. The approach, which was described in a previous publication [Lim et al., J. Acoust. Soc. Am. 129(2), 717–725 (2011)], provides several significant advantages over conventional noise control methods. Most significantly, the methodology does not require any information including the characterization of sources, impedance boundary conditions and surrounding med...

  2. Methodological Approach to Company Cash Flows Target-Oriented Forecasting Based on Financial Position Analysis

    OpenAIRE

    Sergey Krylov

    2012-01-01

    The article treats a new methodological approach to target-oriented forecasting of company cash flows based on analysis of its financial position. The approach is intended to be universal and presumes application of the following techniques developed by the author: a financial ratio value correction technique and a cash flow correction technique. The financial ratio value correction technique is used to analyze and forecast the company's financial position, while the cash flow correction technique i...

  3. An Intuitionistic Fuzzy Methodology for Component-Based Software Reliability Optimization

    DEFF Research Database (Denmark)

    Madsen, Henrik; Grigore, Albeanu; Popenţiuvlǎdicescu, Florin

    2012-01-01

    Component-based software development is the current methodology facilitating agility in project management, software reuse in design and implementation, promoting quality and productivity, and increasing the reliability and performability. This paper illustrates the usage of intuitionistic fuzzy...... degree approach in modelling the quality of entities in imprecise software reliability computing in order to optimize management results. Intuitionistic fuzzy optimization algorithms are proposed to be used for complex software systems reliability optimization under various constraints....

  4. Hidden shift of the ionome of plants exposed to elevated CO₂depletes minerals at the base of human nutrition.

    Science.gov (United States)

    Loladze, Irakli

    2014-05-07

    Mineral malnutrition stemming from undiversified plant-based diets is a top global challenge. In C3 plants (e.g., rice, wheat), elevated concentrations of atmospheric carbon dioxide (eCO2) reduce protein and nitrogen concentrations, and can increase the total non-structural carbohydrates (TNC; mainly starch, sugars). However, contradictory findings have obscured the effect of eCO2 on the ionome-the mineral and trace-element composition-of plants. Consequently, CO2-induced shifts in plant quality have been ignored in the estimation of the impact of global change on humans. This study shows that eCO2 reduces the overall mineral concentrations (-8%, 95% confidence interval: -9.1 to -6.9, p carbon:minerals in C3 plants. The meta-analysis of 7761 observations, including 2264 observations at state of the art FACE centers, covers 130 species/cultivars. The attained statistical power reveals that the shift is systemic and global. Its potential to exacerbate the prevalence of 'hidden hunger' and obesity is discussed.DOI: http://dx.doi.org/10.7554/eLife.02245.001. Copyright © 2014, Loladze.

  5. Hidden shift of the ionome of plants exposed to elevated CO2 depletes minerals at the base of human nutrition

    Science.gov (United States)

    Loladze, Irakli

    2014-01-01

    Mineral malnutrition stemming from undiversified plant-based diets is a top global challenge. In C3 plants (e.g., rice, wheat), elevated concentrations of atmospheric carbon dioxide (eCO2) reduce protein and nitrogen concentrations, and can increase the total non-structural carbohydrates (TNC; mainly starch, sugars). However, contradictory findings have obscured the effect of eCO2 on the ionome—the mineral and trace-element composition—of plants. Consequently, CO2-induced shifts in plant quality have been ignored in the estimation of the impact of global change on humans. This study shows that eCO2 reduces the overall mineral concentrations (−8%, 95% confidence interval: −9.1 to −6.9, p carbon:minerals in C3 plants. The meta-analysis of 7761 observations, including 2264 observations at state of the art FACE centers, covers 130 species/cultivars. The attained statistical power reveals that the shift is systemic and global. Its potential to exacerbate the prevalence of ‘hidden hunger’ and obesity is discussed. DOI: http://dx.doi.org/10.7554/eLife.02245.001 PMID:24867639

  6. Standardization of formulations for the acute amino acid depletion and loading tests.

    Science.gov (United States)

    Badawy, Abdulla A-B; Dougherty, Donald M

    2015-04-01

    The acute tryptophan depletion and loading and the acute tyrosine plus phenylalanine depletion tests are powerful tools for studying the roles of cerebral monoamines in behaviour and symptoms related to various disorders. The tests use either amino acid mixtures or proteins. Current amino acid mixtures lack specificity in humans, but not in rodents, because of the faster disposal of branched-chain amino acids (BCAAs) by the latter. The high BCAA content (30-60%) is responsible for the poor specificity in humans, and we recommend, in a 50 g dose, a control formulation with a lowered BCAA content (18%) as a common control for the above tests. With protein-based formulations, α-lactalbumin is specific for acute tryptophan loading, whereas gelatine is only partially effective for acute tryptophan depletion. We recommend the use of the whey protein fraction glycomacropeptide as an alternative protein. Its BCAA content is ideal for specificity, and the absence of tryptophan, tyrosine and phenylalanine renders it suitable as a template for seven formulations (separate and combined depletion or loading and a truly balanced control). We invite the research community to participate in standardization of the depletion and loading methodologies by using our recommended amino acid formulation and developing those based on glycomacropeptide. © The Author(s) 2015.

  7. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    Science.gov (United States)

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
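
    For orientation, the sketch below implements only the plain global-best PSO stage on a standard benchmark function; the consensus and Trust-Tech components of the cited methodology are not reproduced, and all coefficients are generic textbook values rather than those of the paper.

```python
# Minimal global-best particle swarm optimization on the sphere benchmark
import numpy as np

def sphere(x):                      # benchmark objective, minimum at the origin
    return np.sum(x**2, axis=-1)

rng = np.random.default_rng(0)
dim, n_particles, iters = 10, 30, 200
w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration coefficients

x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), sphere(x)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = sphere(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best value found:", pbest_val.min())
```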

  8. Damage detection methodology on beam-like structures based on combined modal Wavelet Transform strategy

    Science.gov (United States)

    Serra, Roger; Lopez, Lautaro

    2018-05-01

    Different approaches to damage detection based on dynamic measurements of structures have appeared in recent decades. They were based, amongst others, on changes in natural frequencies, modal curvatures, strain energy or flexibility. Wavelet analysis has also been used to detect the abnormalities in mode shapes induced by damage. However, the majority of previous work used signals that were not corrupted by noise. Moreover, the damage influence on each mode shape was studied separately. This paper proposes a new methodology based on a combined modal wavelet transform strategy that copes with noisy signals while, at the same time, being able to extract the relevant information from each mode shape. The proposed methodology is then compared with the most frequently used and widely studied methods from the bibliography. To evaluate the performance of each method, their capacity to detect and localize damage is analyzed in different cases. The comparison is done by simulating the oscillations of a cantilever steel beam with and without a defect as a numerical case. The proposed methodology proved to outperform classical methods for noisy signals.
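
    The basic wavelet-based localization idea can be sketched as follows (this is not the paper's combined-mode, noise-robust strategy): assuming a reference mode shape is available, a local perturbation in the damaged shape is located from the ridge of a hand-rolled Mexican-hat continuous wavelet transform. The beam shape, defect position and magnitudes are invented.

```python
# Locating a simulated local mode-shape perturbation with a Mexican-hat CWT
import numpy as np

def ricker(points, a):
    """Mexican-hat (Ricker) wavelet sampled at `points` samples, width parameter `a`."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

x = np.linspace(0.0, 1.0, 400)                        # normalised beam coordinate
intact = 1 - np.cos(np.pi * x / 2)                    # smooth first-mode-like shape
damage = 5e-3 * np.exp(-((x - 0.55) / 0.02) ** 2)     # local perturbation at x = 0.55
damaged = intact + damage

diff = damaged - intact                               # mode-shape change caused by damage
scales = np.arange(2, 30)
coeffs = np.array([np.convolve(diff, ricker(10 * a, a), mode="same") for a in scales])

# The damage appears as a ridge of large coefficients across scales near x = 0.55
damage_index = np.abs(coeffs).sum(axis=0)
print("estimated damage location x =", x[np.argmax(damage_index)])
```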

  9. A deviation based assessment methodology for multiple machine health patterns classification and fault detection

    Science.gov (United States)

    Jia, Xiaodong; Jin, Chao; Buzza, Matt; Di, Yuan; Siegel, David; Lee, Jay

    2018-01-01

    Successful applications of Diffusion Maps (DM) in machine failure detection and diagnosis have been reported in several recent studies. DM provides an efficient way to visualize high-dimensional, complex and nonlinear machine data, and thus reveals more knowledge about the machine under monitoring. In this paper, a DM-based methodology named DM-EVD is proposed for machine degradation assessment, abnormality detection and diagnosis in an online fashion. Several limitations and challenges of using DM for machine health monitoring have been analyzed and addressed. Based on the proposed DM-EVD, a deviation-based methodology is then proposed to accommodate additional dimension-reduction methods. In this work, the incorporation of the Laplacian Eigenmap and Principal Component Analysis (PCA) is explored; the latter algorithm is named PCA-Dev and is validated in the case study. To show the successful application of the proposed methodology, case studies from diverse fields are presented and investigated. Improved results are reported by benchmarking against other machine learning algorithms.
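
    To illustrate the deviation idea in its simplest (PCA-only) flavour, the sketch below learns a "healthy" subspace and uses the reconstruction error of new samples as a degradation indicator; it does not reproduce the DM-EVD or Laplacian Eigenmap variants, and all data are synthetic.

```python
# PCA-based deviation (reconstruction error) as a health indicator on synthetic data
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, (500, 8))                       # baseline feature vectors
faulty = healthy[-50:] + rng.normal(3.0, 1.0, (50, 8)) * np.array([1, 0, 0, 0, 0, 0, 0, 1])

pca = PCA(n_components=3).fit(healthy[:400])                    # learn the healthy subspace

def deviation(samples):
    """Distance of each sample from the healthy subspace (reconstruction error)."""
    recon = pca.inverse_transform(pca.transform(samples))
    return np.linalg.norm(samples - recon, axis=1)

print("healthy deviation ~", deviation(healthy[400:]).mean())
print("faulty  deviation ~", deviation(faulty).mean())
```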

  10. A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network.

    Science.gov (United States)

    Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing

    2015-01-01

    This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion-based methodology. Compared with conventional fault diagnosis using only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energies of the intrinsic mode functions (IMFs) are calculated as fault features. These features are added into the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked-eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump and the structure and parameters of the Bayesian network are established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when only sensor data are used. A case study has demonstrated that some information from human observation or system repair records is very helpful to the fault diagnosis. It is effective and efficient in diagnosing faults based on uncertain, incomplete information.
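
    The feature-extraction step alone can be sketched as follows: EEMD of a synthetic vibration signal followed by normalised IMF energies as fault features. The sketch assumes the third-party PyEMD package ("EMD-signal" on PyPI) is installed; the Bayesian-network fusion stage of the paper is not reproduced, and the signal components are invented.

```python
# EEMD decomposition and IMF energy features (assumes: pip install EMD-signal)
import numpy as np
from PyEMD import EEMD   # third-party package, assumed available

t = np.linspace(0.0, 1.0, 2000)
signal = (np.sin(2 * np.pi * 35 * t)                  # shaft-related component (synthetic)
          + 0.5 * np.sin(2 * np.pi * 180 * t)         # gear-mesh-like component (synthetic)
          + 0.2 * np.random.default_rng(0).normal(size=t.size))

imfs = EEMD(trials=50).eemd(signal, t)                # ensemble empirical mode decomposition
energies = np.sum(imfs ** 2, axis=1)
features = energies / energies.sum()                  # normalised IMF energy distribution
print("normalised IMF energies:", np.round(features, 3))
```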

  11. AN INDUCTIVE, INTERACTIVE AND ADAPTIVE HYBRID PROBLEM-BASED LEARNING METHODOLOGY: APPLICATION TO STATISTICS

    Directory of Open Access Journals (Sweden)

    ADA ZHENG

    2011-10-01

    We have developed an innovative hybrid problem-based learning (PBL) methodology. The methodology has the following distinctive features: (i) Each complex question was decomposed into a set of coherent finer subquestions by following carefully designed criteria to maintain a delicate balance between guiding the students and inspiring them to think independently. This learning methodology enabled the students to solve the complex questions progressively in an inductive context. (ii) Facilitated by the utilization of our web-based learning systems, the teacher was able to interact with the students intensively and could allocate more teaching time to provide tailor-made feedback for individual students. The students were actively engaged in the learning activities, stimulated by the intensive interaction. (iii) The answers submitted by the students could be automatically consolidated in the report of the Moodle system in real time. The teacher could adjust the teaching schedule and focus of the class to adapt to the learning progress of the students by analysing the automatically generated report and log files of the web-based learning system. As a result, the attendance rate of the students increased from about 50% to more than 90%, and the students’ learning motivation has been significantly enhanced.

  12. Depleted uranium: A DOE management guide

    International Nuclear Information System (INIS)

    1995-10-01

    The U.S. Department of Energy (DOE) has a management challenge and financial liability in the form of 50,000 cylinders containing 555,000 metric tons of depleted uranium hexafluoride (UF6) that are stored at the gaseous diffusion plants. The annual storage and maintenance cost is approximately $10 million. This report summarizes several studies undertaken by the DOE Office of Technology Development (OTD) to evaluate options for long-term depleted uranium management. Based on studies conducted to date, the most likely use of the depleted uranium is for shielding of spent nuclear fuel (SNF) or vitrified high-level waste (HLW) containers. The alternative to finding a use for the depleted uranium is disposal as a radioactive waste. Estimated disposal costs, utilizing existing technologies, range between $3.8 and $11.3 billion, depending on factors such as applicability of the Resource Conservation and Recovery Act (RCRA) and the location of the disposal site. The cost of recycling the depleted uranium in concrete-based shielding in SNF/HLW containers, although substantial, is comparable to or less than the cost of disposal. Consequently, the case can be made that if DOE invests in developing depleted uranium shielded containers instead of disposal, a long-term solution to the UF6 problem is attained at comparable or lower cost than disposal as a waste. Two concepts for depleted uranium storage casks were considered in these studies. The first is based on standard fabrication concepts previously developed for depleted uranium metal. The second converts the UF6 to an oxide aggregate that is used in concrete to make dry storage casks.

  13. Methodology for Designing and Developing a New Ultra-Wideband Antenna Based on Bio-Inspired Optimization Techniques

    Science.gov (United States)

    2017-11-01

    Report: Methodology for Designing and Developing a New Ultra-Wideband Antenna Based on Bio-Inspired Optimization Techniques, by Canh Ly, Nghia Tran, and Ozlem Kilic (Research Laboratory). Approved for public release; distribution is...

  14. Capital expenditure and depletion

    International Nuclear Information System (INIS)

    Rech, O.; Saniere, A.

    2003-01-01

    In the future, the increase in oil demand will be covered for the most part by non conventional oils, but conventional sources will continue to represent a preponderant share of the world oil supply. Their depletion represents a complex challenge involving technological, economic and political factors. At the same time, there is reason for concern about the decrease in exploration budgets at the major oil companies. (author)

  15. Capital expenditure and depletion

    Energy Technology Data Exchange (ETDEWEB)

    Rech, O.; Saniere, A

    2003-07-01

    In the future, the increase in oil demand will be covered for the most part by non conventional oils, but conventional sources will continue to represent a preponderant share of the world oil supply. Their depletion represents a complex challenge involving technological, economic and political factors. At the same time, there is reason for concern about the decrease in exploration budgets at the major oil companies. (author)

  16. The development of a neuroscience-based methodology for the nuclear energy learning/teaching process

    International Nuclear Information System (INIS)

    Barabas, Roberta de C.; Sabundjian, Gaiane

    2015-01-01

    When compared to other energy sources such as fossil fuels (coal, oil, and gas), nuclear energy has perhaps the lowest impact on the environment. Moreover, nuclear energy has also benefited other fields such as medicine, the pharmaceutical industry, and agriculture, among others. However, despite all the benefits that result from the peaceful uses of nuclear energy, the theme is still addressed with prejudice. Education may be the starting point for public acceptance of nuclear energy, as it provides pedagogical approaches, learning environments, and human resources, which are essential conditions for effective learning. So far, nuclear energy education research has been conducted using only conventional assessment methods. The global educational scenario has demonstrated an absence of neuroscience-based methods for the teaching of nuclear energy, and that may be an opportunity for developing new strategic teaching methods that will help demystify the theme, consequently improving public acceptance of this type of energy. This work aims to present the first step of a methodology in progress, based on neuroscience research, to be applied to Brazilian science teachers in order to contribute to an effective teaching/learning process. This research will use the Implicit Association Test (IAT) to verify implicit attitudes of science teachers concerning nuclear energy. Results will provide data for the next steps of the research. The literature has not reported a similar neuroscience-based methodology applied to the nuclear energy learning/teaching process; therefore, this has proven to be an innovative methodology. The development of the methodology is in progress and the results will be presented in future works. (author)

  17. The development of a neuroscience-based methodology for the nuclear energy learning/teaching process

    Energy Technology Data Exchange (ETDEWEB)

    Barabas, Roberta de C.; Sabundjian, Gaiane, E-mail: robertabarabas@usp.br, E-mail: gdjian@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    When compared to other energy sources such as fossil fuels (coal, oil, and gas), nuclear energy has perhaps the lowest impact on the environment. Moreover, nuclear energy has also benefited other fields such as medicine, the pharmaceutical industry, and agriculture, among others. However, despite all the benefits that result from the peaceful uses of nuclear energy, the theme is still addressed with prejudice. Education may be the starting point for public acceptance of nuclear energy, as it provides pedagogical approaches, learning environments, and human resources, which are essential conditions for effective learning. So far, nuclear energy education research has been conducted using only conventional assessment methods. The global educational scenario has demonstrated an absence of neuroscience-based methods for the teaching of nuclear energy, and that may be an opportunity for developing new strategic teaching methods that will help demystify the theme, consequently improving public acceptance of this type of energy. This work aims to present the first step of a methodology in progress, based on neuroscience research, to be applied to Brazilian science teachers in order to contribute to an effective teaching/learning process. This research will use the Implicit Association Test (IAT) to verify implicit attitudes of science teachers concerning nuclear energy. Results will provide data for the next steps of the research. The literature has not reported a similar neuroscience-based methodology applied to the nuclear energy learning/teaching process; therefore, this has proven to be an innovative methodology. The development of the methodology is in progress and the results will be presented in future works. (author)

  18. Fully Depleted Charge-Coupled Devices

    International Nuclear Information System (INIS)

    Holland, Stephen E.

    2006-01-01

    We have developed fully depleted, back-illuminated CCDs that build upon earlier research and development efforts directed towards technology development of silicon-strip detectors used in high-energy-physics experiments. The CCDs are fabricated on the same type of high-resistivity, float-zone-refined silicon that is used for strip detectors. The use of high-resistivity substrates allows for thick depletion regions, on the order of 200-300 µm, with corresponding high detection efficiency for near-infrared and soft x-ray photons. We compare the fully depleted CCD to the p-i-n diode upon which it is based, and describe the use of fully depleted CCDs in astronomical and x-ray imaging applications.

  19. Validation of a Methodology to Predict Micro-Vibrations Based on Finite Element Model Approach

    Science.gov (United States)

    Soula, Laurent; Rathband, Ian; Laduree, Gregory

    2014-06-01

    This paper presents the second part of the ESA R&D study called "METhodology for Analysis of structure-borne MICro-vibrations" (METAMIC). After defining an integrated analysis and test methodology to help predict micro-vibrations [1], a full-scale validation test campaign has been carried out. It is based on a bread-board representative of a typical spacecraft (S/C) platform, consisting of a versatile structure made of aluminium sandwich panels equipped with different disturbance sources and a dummy payload made of a silicon carbide (SiC) bench. The bread-board has been instrumented with a large set of sensitive accelerometers, and tests have been performed including background noise measurement, modal characterization and micro-vibration tests. The results provided responses to the perturbation coming from a reaction wheel or cryo-cooler compressors, operated independently and then simultaneously in different operation modes. Using consistent modelling and associated experimental characterization techniques, a correlation status has been assessed by comparing test results with predictions based on the FEM approach. Very good results have been achieved, particularly for the case of a wheel in sweeping-rate operation, with test results over-predicted within a reasonable margin of less than a factor of two. Some limitations of the methodology have also been identified for sources operating at a fixed rate or with a small number of dominant harmonics, and recommendations have been issued in order to deal with model uncertainties and stay conservative.

  20. Effectiveness of the management of price risk methodologies for the corn market based on trading signals

    Directory of Open Access Journals (Sweden)

    W. Rossouw

    2013-03-01

    Corn production is scattered geographically over various continents, but most of it is grown in the United States. As such, the world price of corn futures contracts is largely dominated by North American corn prices as traded on the Chicago Board of Trade. In recent years, this market has been characterised by an increase in price volatility and magnitude of price movement as a result of decreasing stock levels. The development and implementation of an effective and successful derivative price risk management strategy based on the Chicago Board of Trade corn futures contract will therefore be of inestimable value to market stakeholders worldwide. The research focused on the efficient market hypothesis and the possibility of contesting this phenomenon through an application of a derivative price risk management methodology. The methodology is based on a combination of an analysis of market trends and technical oscillators, with the objective of generating returns superior to those of a market benchmark. The study found that market participants are currently unable to exploit price movement in a manner which results in returns that contest the notion of efficient markets. The methodology proposed, however, does allow the user to consistently achieve returns superior to those of a predetermined market benchmark. The benchmark price for the purposes of this study was the average price offered by the market over the contract lifetime, and as such, the efficient market hypothesis was successfully contested.
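
    The general trend-plus-oscillator idea can be sketched as follows on synthetic prices (this is not the study's rule set or data): a moving-average trend filter combined with an RSI-style oscillator generates purchase signals, and the average entry price is compared with the average price over the period as the benchmark. All parameters are illustrative.

```python
# Trend/oscillator purchase signals on synthetic prices, compared to an average-price benchmark
import numpy as np

rng = np.random.default_rng(42)
prices = 400 + np.cumsum(rng.normal(0, 3, 500))        # synthetic futures price path

def sma(x, n):
    """Simple moving average (valid part only)."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

def rsi(x, n=14):
    """Relative-strength-index-style oscillator built from smoothed gains/losses."""
    delta = np.diff(x)
    avg_gain = sma(np.where(delta > 0, delta, 0.0), n)
    avg_loss = sma(np.where(delta < 0, -delta, 0.0), n) + 1e-12
    return 100 - 100 / (1 + avg_gain / avg_loss)

fast, slow, osc = sma(prices, 10), sma(prices, 40), rsi(prices)

# Align series by their endpoints and buy in an up-trend that is not overbought
m = min(len(fast), len(slow), len(osc))
signal = (fast[-m:] > slow[-m:]) & (osc[-m:] < 70)
entry_prices = prices[-m:][signal]

benchmark = prices.mean()                               # average price over the period
if entry_prices.size:
    print(f"average entry price {entry_prices.mean():.2f} vs benchmark {benchmark:.2f}")
```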

  1. A top-down design methodology and its implementation for VCSEL-based optical links design

    Science.gov (United States)

    Li, Jiguang; Cao, Mingcui; Cai, Zilong

    2005-01-01

    In order to find the optimal design for a given specification of an optical communication link, an integrated simulation of the electronic, optoelectronic, and optical components of a complete system is required. It is very important to be able to simulate at both the system level and the detailed model level. This kind of model is feasible due to the high potential of the Verilog-AMS language. In this paper, we propose an effective top-down design methodology and employ it in the development of a complete VCSEL-based optical link simulation. The principle of the top-down methodology is that development proceeds from the system level to the device level. To design a hierarchical model for VCSEL-based optical links, the design framework is organized in three levels of hierarchy. The models are developed and implemented in Verilog-AMS, and the model parameters are fitted to measured data. A sample transient simulation demonstrates the functioning of our implementation. Suggestions for future directions in the top-down methodology used for optoelectronic systems technology are also presented.

  2. A safety assessment methodology applied to CNS/ATM-based air traffic control system

    Energy Technology Data Exchange (ETDEWEB)

    Vismari, Lucio Flavio, E-mail: lucio.vismari@usp.b [Safety Analysis Group (GAS), School of Engineering at University of Sao Paulo (Poli-USP), Av. Prof. Luciano Gualberto, Trav.3, n.158, Predio da Engenharia de Eletricidade, Sala C2-32, CEP 05508-900, Sao Paulo (Brazil); Batista Camargo Junior, Joao, E-mail: joaocamargo@usp.b [Safety Analysis Group (GAS), School of Engineering at University of Sao Paulo (Poli-USP), Av. Prof. Luciano Gualberto, Trav.3, n.158, Predio da Engenharia de Eletricidade, Sala C2-32, CEP 05508-900, Sao Paulo (Brazil)

    2011-07-15

    In recent decades, the air traffic system has been changing to adapt itself to new social demands, mainly the safe growth of worldwide traffic capacity. Those changes are ruled by the Communication, Navigation, Surveillance/Air Traffic Management (CNS/ATM) paradigm, based on digital communication technologies (mainly satellites) as a way of improving communication, surveillance, navigation and air traffic management services. However, CNS/ATM poses new challenges and needs, mainly related to the safety assessment process. In face of these new challenges, and considering the main characteristics of CNS/ATM, a methodology is proposed in this work that combines the 'absolute' and 'relative' safety assessment methods adopted by the International Civil Aviation Organization (ICAO) in ICAO Doc. 9689, uses Fluid Stochastic Petri Nets (FSPN) as the modeling formalism, and compares the safety metrics estimated from the simulation of both the proposed (under analysis) and the legacy system models. To demonstrate its usefulness, the proposed methodology was applied to the 'Automatic Dependent Surveillance-Broadcasting' (ADS-B) based air traffic control system. In conclusion, the proposed methodology proved able to assess CNS/ATM system safety properties, with the FSPN formalism providing important modeling capabilities and discrete event simulation allowing the estimation of the desired safety metric.

  3. A methodology for collection and analysis of human error data based on a cognitive model: IDA

    International Nuclear Information System (INIS)

    Shen, S.-H.; Smidts, C.; Mosleh, A.

    1997-01-01

    This paper presents a model-based human error taxonomy and data collection methodology. The underlying model, IDA (described in two companion papers), is a cognitive model of behavior developed for analysis of the actions of nuclear power plant operating crews during abnormal situations. The taxonomy is established with reference to three external reference points (i.e. plant status, procedures, and crew) and four reference points internal to the model (i.e. information collected, diagnosis, decision, action). The taxonomy helps the analyst: (1) recognize errors as such; (2) categorize the error in terms of generic characteristics such as 'error in selection of problem solving strategies'; and (3) identify the root causes of the error. The data collection methodology is summarized in post-event operator interview and analysis summary forms. The root cause analysis methodology is illustrated using a subset of an actual event. Statistics that extract generic characteristics of error-prone behaviors and error-prone situations are presented. Finally, applications of the human error data collection are reviewed. A primary benefit of this methodology is to define better symptom-based and other auxiliary procedures with associated training to minimize or preclude certain human errors. It also helps in the design of control rooms and in the assessment of human error probabilities in the probabilistic risk assessment framework. (orig.)

  4. A safety assessment methodology applied to CNS/ATM-based air traffic control system

    International Nuclear Information System (INIS)

    Vismari, Lucio Flavio; Batista Camargo Junior, Joao

    2011-01-01

    In recent decades, the air traffic system has been changing to adapt itself to new social demands, mainly the safe growth of worldwide traffic capacity. Those changes are ruled by the Communication, Navigation, Surveillance/Air Traffic Management (CNS/ATM) paradigm, based on digital communication technologies (mainly satellites) as a way of improving communication, surveillance, navigation and air traffic management services. However, CNS/ATM poses new challenges and needs, mainly related to the safety assessment process. In face of these new challenges, and considering the main characteristics of CNS/ATM, a methodology is proposed in this work that combines the 'absolute' and 'relative' safety assessment methods adopted by the International Civil Aviation Organization (ICAO) in ICAO Doc. 9689, uses Fluid Stochastic Petri Nets (FSPN) as the modeling formalism, and compares the safety metrics estimated from the simulation of both the proposed (under analysis) and the legacy system models. To demonstrate its usefulness, the proposed methodology was applied to the 'Automatic Dependent Surveillance-Broadcasting' (ADS-B) based air traffic control system. In conclusion, the proposed methodology proved able to assess CNS/ATM system safety properties, with the FSPN formalism providing important modeling capabilities and discrete event simulation allowing the estimation of the desired safety metric.

  5. A case study for INPRO methodology based on Indian advanced heavy water reactor

    International Nuclear Information System (INIS)

    Anantharaman, K.; Saha, D.; Sinha, R.K.

    2004-01-01

    Under Phase 1A of the International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO), a methodology (the INPRO methodology) has been developed which can be used to evaluate a given energy system, or a component of such a system, on a national and/or global basis. The INPRO study can be used for assessing the potential of the innovative reactor in terms of economics, sustainability and environment, safety, waste management, proliferation resistance and cross-cutting issues. India, a participant in the INPRO program, is engaged in a case study applying the INPRO methodology to the Advanced Heavy Water Reactor (AHWR). The AHWR is a 300 MWe, boiling light-water-cooled, heavy-water-moderated, vertical pressure-tube-type reactor. Thorium utilization is essential for the Indian nuclear power program considering indigenous resource availability. The AHWR is designed to produce most of its power from thorium, aided by a small input of plutonium-based fuel. The features of the AHWR are described in the paper. The case study covers the fuel cycle to be followed in the near future for the AHWR. The paper deals with initial observations of the case study with regard to fuel cycle issues. (authors)

  6. High-resolution three-dimensional imaging of a depleted CMOS sensor using an edge Transient Current Technique based on the Two Photon Absorption process (TPA-eTCT)

    CERN Document Server

    García, Marcos Fernández; Echeverría, Richard Jaramillo; Moll, Michael; Santos, Raúl Montero; Moya, David; Pinto, Rogelio Palomo; Vila, Iván

    2016-01-01

    For the first time, the deep n-well (DNW) depletion space of a High Voltage CMOS sensor has been characterized using a Transient Current Technique based on the simultaneous absorption of two photons. This novel approach has made it possible to resolve the DNW implant boundaries and therefore to accurately determine the real depleted volume and the effective doping concentration of the substrate. The unprecedented spatial resolution of this new method comes from the fact that measurable free carrier generation in two-photon mode only occurs in a micrometric-scale voxel around the focus of the beam. Real three-dimensional spatial resolution is achieved by scanning the beam focus within the sample.

  7. High-resolution three-dimensional imaging of a depleted CMOS sensor using an edge Transient Current Technique based on the Two Photon Absorption process (TPA-eTCT)

    Energy Technology Data Exchange (ETDEWEB)

    García, Marcos Fernández; Sánchez, Javier González; Echeverría, Richard Jaramillo [Instituto de Física de Cantabria (CSIC-UC), Avda. los Castros s/n, E-39005 Santander (Spain); Moll, Michael [CERN, Organisation europénne pour la recherche nucléaire, CH-1211 Genéve 23 (Switzerland); Santos, Raúl Montero [SGIker Laser Facility, UPV/EHU, Sarriena, s/n - 48940 Leioa-Bizkaia (Spain); Moya, David [Instituto de Física de Cantabria (CSIC-UC), Avda. los Castros s/n, E-39005 Santander (Spain); Pinto, Rogelio Palomo [Departamento de Ingeniería Electrónica, Escuela Superior de Ingenieros Universidad de Sevilla (Spain); Vila, Iván [Instituto de Física de Cantabria (CSIC-UC), Avda. los Castros s/n, E-39005 Santander (Spain)

    2017-02-11

    For the first time, the deep n-well (DNW) depletion space of a High Voltage CMOS sensor has been characterized using a Transient Current Technique based on the simultaneous absorption of two photons. This novel approach has made it possible to resolve the DNW implant boundaries and therefore to accurately determine the real depleted volume and the effective doping concentration of the substrate. The unprecedented spatial resolution of this new method comes from the fact that measurable free carrier generation in two-photon mode only occurs in a micrometric-scale voxel around the focus of the beam. Real three-dimensional spatial resolution is achieved by scanning the beam focus within the sample.

  8. Ozone-depleting Substances (ODS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — This site includes all of the ozone-depleting substances (ODS) recognized by the Montreal Protocol. The data include ozone depletion potentials (ODP), global warming...

  9. Attribute Based Selection of Thermoplastic Resin for Vacuum Infusion Process: A Decision Making Methodology

    DEFF Research Database (Denmark)

    Raghavalu Thirumalai, Durai Prabhakaran; Lystrup, Aage; Løgstrup Andersen, Tom

    2012-01-01

    The composite industry looks toward a new material system (resins) based on thermoplastic polymers for the vacuum infusion process, similar to the infusion process using thermosetting polymers. A large number of thermoplastics are available in the market with a variety of properties suitable...... be beneficial. In this paper, the authors introduce a new decision-making tool for resin selection based on significant attributes. This article provides a broad overview of suitable thermoplastic material systems for the vacuum infusion process available in today’s market. An illustrative example—resin selection...... for vacuum infusion of a wind turbine blade—is shown to demonstrate the intricacies involved in the proposed methodology for resin selection....
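
    An attribute-based selection step of this kind can be illustrated with a simple weighted-sum ranking; the sketch below is generic multi-criteria decision making, not the paper's specific method, and all resin names, attribute values and weights are invented.

```python
# Weighted-attribute ranking of candidate resins (illustrative values only)
import numpy as np

resins = ["thermoplastic_A", "thermoplastic_B", "thermoplastic_C"]
attributes = ["processing_viscosity", "toughness", "service_temperature", "cost"]
benefit = np.array([False, True, True, False])      # higher is better only for these

raw = np.array([            # rows: resins, columns: attributes (hypothetical values)
    [0.8, 60.0, 80.0, 12.0],
    [1.5, 90.0, 120.0, 18.0],
    [0.5, 45.0, 70.0, 9.0],
])
weights = np.array([0.35, 0.25, 0.2, 0.2])

# Normalise each attribute to [0, 1], flipping cost-type attributes so higher is better
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

scores = norm @ weights
for name, s in sorted(zip(resins, scores), key=lambda kv: kv[1], reverse=True):
    print(f"{s:.3f}  {name}")
```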

  10. A rigorous methodology for development and uncertainty analysis of group contribution based property models

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    ) weighted-least-square regression. 3) Initialization of the estimation by use of linear algebra providing a first guess. 4) Sequential parameter and simultaneous GC parameter estimation using 4 different minimization algorithms. 5) Thorough uncertainty analysis: a) based on asymptotic approximation of the parameter... covariance matrix, b) based on the bootstrap method, providing 95%-confidence intervals of the parameters and the predicted property. 6) Performance statistics analysis and model application. The application of the methodology is shown for a new GC model built to predict the lower flammability limit (LFL) for refrigerants... their credibility and robustness in wider industrial and scientific applications.
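
    Steps 4 and 5b of the workflow outlined above can be sketched as follows: an ordinary least-squares fit of a linear group-contribution model followed by bootstrap 95% confidence intervals. The groups, data and linear model form are synthetic; the actual GC model and regression weighting of the paper are not reproduced.

```python
# Group-contribution parameter estimation with bootstrap confidence intervals (synthetic data)
import numpy as np

rng = np.random.default_rng(7)
n_compounds, n_groups = 60, 4
occurrences = rng.integers(0, 5, (n_compounds, n_groups)).astype(float)   # group counts
true_contrib = np.array([1.2, -0.4, 0.8, 2.1])
property_data = occurrences @ true_contrib + rng.normal(0, 0.3, n_compounds)

# Parameter estimation (ordinary least squares as the minimisation step)
theta, *_ = np.linalg.lstsq(occurrences, property_data, rcond=None)

# Bootstrap-based uncertainty analysis: resample compounds and refit
boot = np.empty((1000, n_groups))
for b in range(boot.shape[0]):
    idx = rng.integers(0, n_compounds, n_compounds)
    boot[b], *_ = np.linalg.lstsq(occurrences[idx], property_data[idx], rcond=None)
ci_low, ci_high = np.percentile(boot, [2.5, 97.5], axis=0)

for g in range(n_groups):
    print(f"group {g}: {theta[g]:+.3f}  95% CI [{ci_low[g]:+.3f}, {ci_high[g]:+.3f}]")
```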

  11. Performance Assessment of the Wave Dragon Wave Energy Converter Based on the EquiMar Methodology

    DEFF Research Database (Denmark)

    Parmeggiani, Stefano; Chozas, Julia Fernandez; Pecher, Arthur

    2011-01-01

    At the present pre-commercial phase of the wave energy sector, device developers are called to provide reliable estimates on power performance and production at possible deployment locations. The EU EquiMar project has proposed a novel approach, where the performance assessment is based mainly...... on experimental data deriving from sea trials rather than solely on numerical predictions. The study applies this methodology to evaluate the performance of Wave Dragon at two locations in the North Sea, based on the data acquired during the sea trials of a 1:4.5 scale prototype. Indications about power...

  12. Consequences of biome depletion

    International Nuclear Information System (INIS)

    Salvucci, Emiliano

    2013-01-01

    The human microbiome is an integral part of the superorganism together with its host, and they have co-evolved since the early days of the existence of the human species. The modification of the microbiome as a result of changes in the food and social habits of human beings throughout their life history has led to the emergence of many diseases. In contrast with the Darwinian view of nature based on selfishness and competition, new holistic approaches are arising. Under these views, the reconstitution of the microbiome emerges as a fundamental therapy for emerging diseases related to biome depletion.

  13. A SystemC-Based Design Methodology for Digital Signal Processing Systems

    Directory of Open Access Journals (Sweden)

    Christian Haubelt

    2007-03-01

    Digital signal processing algorithms are of great importance in many embedded systems. Due to their complexity and the restrictions imposed on implementations, new design methodologies are needed. In this paper, we present a SystemC-based solution supporting automatic design space exploration, automatic performance evaluation, as well as automatic system generation for mixed hardware/software solutions mapped onto FPGA-based platforms. Our proposed hardware/software codesign approach is based on a SystemC-based library called SysteMoC that permits the expression of different models of computation well known in the domain of digital signal processing. It combines the advantages of executability and analyzability of many important models of computation that can be expressed in SysteMoC. We will use the example of an MPEG-4 decoder throughout this paper to introduce our novel methodology. Results from a five-dimensional design space exploration and from automatically mapping parts of the MPEG-4 decoder onto a Xilinx FPGA platform will demonstrate the effectiveness of our approach.

  14. A Duration Prediction Using a Material-Based Progress Management Methodology for Construction Operation Plans

    Directory of Open Access Journals (Sweden)

    Yongho Ko

    2017-04-01

    Precise and accurate prediction models for duration and cost enable contractors to improve their decision making for effective resource management in terms of sustainability in construction. Previous studies have been limited to cost-based estimations, but this study focuses on a material-based progress management method. Cost-based estimations typically used in construction, such as the earned value method, rely on comparing the planned budget with the actual cost. However, accurately planning budgets requires analysis of many factors, such as the financial status of the sectors involved. Furthermore, there is a higher possibility of changes in the budget than in the total amount of material used during construction, which is deduced from the quantity take-off from drawings and specifications. Accordingly, this study proposes a material-based progress management methodology, which was developed using different predictive analysis models (regression, neural network, and auto-regressive moving average), as well as datasets on material and labor that can be extracted from contractors' daily work reports. A case study on actual datasets was conducted, and the results show that the proposed methodology can be efficiently used for progress management in construction.
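
    The simplest of the predictive models mentioned above (plain regression) can be sketched as follows: cumulative installed material from daily work reports is regressed against elapsed working days and extrapolated to the planned total quantity to predict the overall duration. The quantities, planned total and linear model form are invented for illustration.

```python
# Regressing elapsed days on cumulative material to extrapolate total duration (synthetic data)
import numpy as np

days = np.arange(1, 41)                                    # observed working days
daily_material = np.clip(np.random.default_rng(3).normal(25, 5, days.size), 5, None)
cum_material = np.cumsum(daily_material)                   # e.g. tonnes of rebar placed so far

total_planned = 1500.0                                     # quantity take-off from drawings
slope, intercept = np.polyfit(cum_material, days, 1)       # days as a linear function of material
predicted_total_days = slope * total_planned + intercept

print(f"progress: {cum_material[-1] / total_planned:4.0%}, "
      f"predicted total duration ~ {predicted_total_days:.0f} working days")
```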

  15. Computerized methodology for micro-CT and histological data inflation using an IVUS based translation map.

    Science.gov (United States)

    Athanasiou, Lambros S; Rigas, George A; Sakellarios, Antonis I; Exarchos, Themis P; Siogkas, Panagiotis K; Naka, Katerina K; Panetta, Daniele; Pelosi, Gualtiero; Vozzi, Federico; Michalis, Lampros K; Parodi, Oberdan; Fotiadis, Dimitrios I

    2015-10-01

    A framework for the inflation of micro-CT and histology data using intravascular ultrasound (IVUS) images is presented. The proposed methodology consists of three steps. In the first step the micro-CT/histological images are manually co-registered with IVUS by experts using fiducial points as landmarks. In the second step the lumen of both the micro-CT/histological images and the IVUS images is automatically segmented. Finally, in the third step the micro-CT/histological images are inflated by applying a transformation method to each image. The transformation method is based on the difference between the IVUS and micro-CT/histological contours. In order to validate the proposed image inflation methodology, plaque areas in the inflated micro-CT and histological images are compared with the ones in the IVUS images. The proposed methodology for inflating micro-CT/histological images increases the sensitivity of plaque area matching between the inflated and the IVUS images (by 7% and 22% in histological and micro-CT images, respectively). Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. FPGA hardware acceleration for high performance neutron transport computation based on agent methodology - 318

    International Nuclear Information System (INIS)

    Shanjie, Xiao; Tatjana, Jevremovic

    2010-01-01

    Accurate, detailed 3D neutron transport analysis for Gen-IV reactors is still time-consuming even with the advanced computational hardware available in developed countries. This paper introduces a new concept for addressing the computational time while preserving detailed and accurate modeling: a specifically designed FPGA co-processor accelerates the robust AGENT methodology for complex reactor geometries. For the first time this approach is applied to accelerate the neutronics analysis. The AGENT methodology solves the neutron transport equation using the method of characteristics. The performance of the AGENT methodology was carefully analyzed before the hardware design based on the FPGA co-processor was adopted. The most time-consuming kernel is then transplanted into the FPGA co-processor. The FPGA co-processor is designed with a data-flow-driven, non-von-Neumann architecture and has much higher efficiency than the conventional computer architecture. Details of the FPGA co-processor design are introduced and the design is benchmarked using two different examples. The advanced chip architecture helps the FPGA co-processor achieve a more than 20-fold speedup even though its working frequency is much lower than the CPU frequency. (authors)
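
    For context, the kind of kernel that dominates such a calculation is the flat-source method-of-characteristics sweep along a single ray, sketched below in Python. This is a generic textbook sweep, not the AGENT or FPGA implementation; cross sections, sources and segment lengths are invented.

```python
# Flat-source method-of-characteristics sweep along one characteristic ray (illustrative data)
import numpy as np

sigma_t = np.array([0.6, 1.2, 0.9, 0.3])     # total cross sections per region [1/cm]
q = np.array([0.8, 0.1, 0.5, 0.0])           # flat isotropic sources per region
seg_len = np.array([1.5, 0.7, 2.0, 1.1])     # ray segment lengths in each region [cm]

def sweep(psi_in, sigma_t, q, seg_len):
    """Propagate the angular flux along one ray and tally segment-averaged fluxes."""
    psi = psi_in
    seg_avg = np.empty_like(seg_len)
    for i, (st, src, s) in enumerate(zip(sigma_t, q, seg_len)):
        att = np.exp(-st * s)
        psi_out = psi * att + (src / st) * (1.0 - att)
        seg_avg[i] = src / st + (psi - psi_out) / (st * s)   # segment-averaged angular flux
        psi = psi_out
    return psi, seg_avg

exit_flux, averages = sweep(psi_in=0.0, sigma_t=sigma_t, q=q, seg_len=seg_len)
print("exit angular flux:", exit_flux, "segment-average flux:", averages)
```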

  17. Critical dialogical approach: A methodological direction for occupation-based social transformative work.

    Science.gov (United States)

    Farias, Lisette; Laliberte Rudman, Debbie; Pollard, Nick; Schiller, Sandra; Serrata Malfitano, Ana Paula; Thomas, Kerry; van Bruggen, Hanneke

    2018-05-03

    Calls for embracing the potential and responsibility of occupational therapy to address socio-political conditions that perpetuate occupational injustices have materialized in the literature. However, to reach beyond traditional frameworks informing practices, this social agenda requires the incorporation of diverse epistemological and methodological approaches to support action commensurate with social transformative goals. Our intent is to present a methodological approach that can help extend the ways of thinking or frameworks used in occupational therapy and science to support the ongoing development of practices with and for individuals and collectives affected by marginalizing conditions. We describe the epistemological and theoretical underpinnings of a methodological approach drawing on Freire and Bakhtin's work. Integrating our shared experience taking part in an example study, we discuss the unique advantages of co-generating data using two methods aligned with this approach: dialogical interviews and critical reflexivity. Key considerations when employing this approach are presented, based on its proposed epistemological and theoretical stance and our shared experiences engaging in it. A critical dialogical approach offers one way forward in expanding occupational therapy and science scholarship by promoting collaborative knowledge generation and examination of taken-for-granted understandings that shape individuals' assumptions and actions.

  18. A Novel Water Supply Network Sectorization Methodology Based on a Complete Economic Analysis, Including Uncertainties

    Directory of Open Access Journals (Sweden)

    Enrique Campbell

    2016-04-01

    Full Text Available The core idea behind sectorization of Water Supply Networks (WSNs is to establish areas partially isolated from the rest of the network to improve operational control. Besides the benefits associated with sectorization, some drawbacks must be taken into consideration by water operators: the economic investment associated with both boundary valves and flowmeters and the reduction of both pressure and system resilience. The target of sectorization is to properly balance these negative and positive aspects. Sectorization methodologies addressing the economic aspects mainly consider costs of valves and flowmeters and of energy, and the benefits in terms of water saving linked to pressure reduction. However, sectorization entails other benefits, such as the reduction of domestic consumption, the reduction of burst frequency and the enhanced capacity to detect and intervene over future leakage events. We implement a development proposed by the International Water Association (IWA to estimate the aforementioned benefits. Such a development is integrated in a novel sectorization methodology based on a social network community detection algorithm, combined with a genetic algorithm optimization method and Monte Carlo simulation. The methodology is implemented over a fraction of the WSN of Managua city, capital of Nicaragua, generating a net benefit of 25,572 $/year.
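
    The community-detection stage of such a sectorization can be sketched with networkx: nodes are junctions, weighted edges are pipes, and the detected communities are candidate sectors whose boundary pipes would receive valves or flowmeters. The graph and flow weights below are invented, and the genetic-algorithm optimization and Monte Carlo benefit analysis described in the paper are not reproduced.

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      # Toy water-network graph: nodes are junctions, edge weights are pipe flows (illustrative values).
      G = nx.Graph()
      G.add_weighted_edges_from([
          ("J1", "J2", 40.0), ("J2", "J3", 35.0), ("J3", "J4", 5.0),
          ("J4", "J5", 30.0), ("J5", "J6", 28.0), ("J2", "J5", 4.0),
      ])

      # Community detection proposes candidate sectors (DMAs); edges crossing sector
      # boundaries are where isolation valves or flowmeters would be placed.
      sectors = list(greedy_modularity_communities(G, weight="weight"))
      boundary = [(u, v) for u, v in G.edges if not any({u, v} <= s for s in sectors)]
      print("candidate sectors:", sectors)
      print("candidate valve/meter locations:", boundary)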

  19. Methodology of the National School-based Health Survey in Malaysia, 2012.

    Science.gov (United States)

    Yusoff, Fadhli; Saari, Riyanti; Naidu, Balkish M; Ahmad, Noor Ani; Omar, Azahadi; Aris, Tahir

    2014-09-01

    The National School-Based Health Survey 2012 was a nationwide school health survey of students in Standard 4 to Form 5 (10-17 years of age) who were enrolled in government schools in Malaysia during the period of data collection. The survey comprised 3 subsurveys: the Global School Health Survey (GSHS), the Mental Health Survey, and the National School-Based Nutrition Survey. The aim of the survey was to provide data on the health status of adolescents in Malaysia toward strengthening the adolescent health program in the country. The survey was designed to fulfill the requirements of the 3 subsurveys. A 2-stage stratified sampling method was adopted. Data were collected via questionnaire and physical examination. The National School-Based Health Survey 2012 adopted an appropriate methodology for a school-based survey to ensure valid and reliable findings. © 2014 APJPH.

  20. A web-based data-querying tool based on ontology-driven methodology and flowchart-based model.

    Science.gov (United States)

    Ping, Xiao-Ou; Chung, Yufang; Tseng, Yi-Ju; Liang, Ja-Der; Yang, Pei-Ming; Huang, Guan-Tarn; Lai, Feipei

    2013-10-08

    Because of the increased adoption rate of electronic medical record (EMR) systems, more health care records have been increasingly accumulating in clinical data repositories. Therefore, querying the data stored in these repositories is crucial for retrieving the knowledge from such large volumes of clinical data. The aim of this study is to develop a Web-based approach for enriching the capabilities of the data-querying system along the three following considerations: (1) the interface design used for query formulation, (2) the representation of query results, and (3) the models used for formulating query criteria. The Guideline Interchange Format version 3.5 (GLIF3.5), an ontology-driven clinical guideline representation language, was used for formulating the query tasks based on the GLIF3.5 flowchart in the Protégé environment. The flowchart-based data-querying model (FBDQM) query execution engine was developed and implemented for executing queries and presenting the results through a visual and graphical interface. To examine a broad variety of patient data, the clinical data generator was implemented to automatically generate the clinical data in the repository, and the generated data, thereby, were employed to evaluate the system. The accuracy and time performance of the system for three medical query tasks relevant to liver cancer were evaluated based on the clinical data generator in the experiments with varying numbers of patients. In this study, a prototype system was developed to test the feasibility of applying a methodology for building a query execution engine using FBDQMs by formulating query tasks using the existing GLIF. The FBDQM-based query execution engine was used to successfully retrieve the clinical data based on the query tasks formatted using the GLIF3.5 in the experiments with varying numbers of patients. The accuracy of the three queries (ie, "degree of liver damage," "degree of liver damage when applying a mutually exclusive setting

  1. Methodology for cost analysis of film-based and filmless portable chest systems

    Science.gov (United States)

    Melson, David L.; Gauvain, Karen M.; Beardslee, Brian M.; Kraitsik, Michael J.; Burton, Larry; Blaine, G. James; Brink, Gary S.

    1996-05-01

    Many studies analyzing the costs of film-based and filmless radiology have focused on multi-modality, hospital-wide solutions. Yet due to the enormous cost of converting an entire large radiology department or hospital to a filmless environment all at once, institutions often choose to eliminate film one area at a time. Narrowing the focus of cost-analysis may be useful in making such decisions. This presentation will outline a methodology for analyzing the cost per exam of film-based and filmless solutions for providing portable chest exams to Intensive Care Units (ICUs). The methodology, unlike most in the literature, is based on parallel data collection from existing filmless and film-based ICUs, and is currently being utilized at our institution. Direct costs, taken from the perspective of the hospital, for portable computed radiography chest exams in one filmless and two film-based ICUs are identified. The major cost components are labor, equipment, materials, and storage. Methods for gathering and analyzing each of the cost components are discussed, including FTE-based and time-based labor analysis, incorporation of equipment depreciation, lease, and maintenance costs, and estimation of materials costs. Extrapolation of data from three ICUs to model hypothetical, hospital-wide film-based and filmless ICU imaging systems is described. Performance of sensitivity analysis on the filmless model to assess the impact of anticipated reductions in specific labor, equipment, and archiving costs is detailed. A number of indirect costs, which are not explicitly included in the analysis, are identified and discussed.
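
    The cost-per-exam roll-up described above amounts to dividing the annual sum of the labor, equipment, materials and storage components by the annual exam volume. The sketch below illustrates the arithmetic with placeholder numbers only; none of the values come from the study.

      # Hypothetical annual cost components for one ICU (values are placeholders, not study data).
      costs = {
          "labor":     120_000.0,   # FTE-based technologist and clerical time
          "equipment":  45_000.0,   # depreciation, lease and maintenance
          "materials":  18_000.0,   # film or CR plates, chemistry, jackets
          "storage":    12_000.0,   # film library or PACS archive share
      }
      exams_per_year = 9_500

      cost_per_exam = sum(costs.values()) / exams_per_year
      print(f"direct cost per portable chest exam: ${cost_per_exam:.2f}")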

  2. Ozone depleting substances management inventory system

    Directory of Open Access Journals (Sweden)

    Felix Ivan Romero Rodríguez

    2018-02-01

    Full Text Available Context: The care of the ozone layer is an activity that contributes to the planet's environmental stability. For this reason, the Montreal Protocol was created to control the emission of substances that deplete the ozone layer and to reduce their production from an organizational point of view. However, it is also necessary to have control of those substances that are already circulating and those present in equipment that cannot yet be replaced because of the context of the companies that keep it. Generally, the control mechanisms for classifying the types of substances, the equipment and the companies that own them are kept in physical files, spreadsheets and text documents, which makes it difficult to control and manage the data stored in them. Method: The objective of this research is to computerize the process of controlling substances that deplete the ozone layer. An evaluation and description of the whole process of managing Ozone-Depleting Substances (ODS) and their alternatives is provided. For the computerization, the agile development methodology SCRUM is used, and the technological solution is built with free and open-source tools and technologies. Result: As a result of the research, a computer tool was developed that automates the process of control and management of ozone-depleting substances and their alternatives. Conclusions: The developed computer tool allows the control and management of ozone-depleting substances and the equipment that uses them. It also manages the substances that arise as alternatives to be used for the protection of the ozone layer.

  3. Regression Tree-Based Methodology for Customizing Building Energy Benchmarks to Individual Commercial Buildings

    Science.gov (United States)

    Kaskhedikar, Apoorva Prakash

    According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial building energy performance assessment without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between the energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to the medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities. Subsequently, significant correlations were identified between EUIs and CBECS variables. Other than floor area, some of the important variables were number of workers, location, number of PCs and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely the ENERGY STAR Portfolio Manager. This tool relies on standard linear regression methods, which are only able to handle continuous variables. The proposed model uses a data-mining technique and was found to perform slightly better than the Portfolio Manager. The broader impact of the new benchmarking methodology proposed is that it allows for identifying important categorical variables, and then incorporating them in a local, as against a global, model framework for EUI
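
    A minimal sketch of the Random Forest step is shown below: fit a forest to predict EUI from building parameters and rank the variables by impurity-based importance. The column names and the tiny placeholder table stand in for CBECS records and are assumptions, not data from the thesis.

      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor

      # Placeholder frame standing in for CBECS medium-office records.
      df = pd.DataFrame({
          "sqft":          [12e3, 50e3, 8e3, 30e3, 22e3, 45e3],
          "workers":       [40, 220, 25, 150, 90, 200],
          "pcs":           [45, 260, 20, 160, 100, 240],
          "cooling_gas":   [0, 1, 0, 0, 1, 0],          # categorical, already encoded
          "eui_kbtu_sqft": [55.0, 92.0, 48.0, 75.0, 81.0, 88.0],
      })

      X, y = df.drop(columns="eui_kbtu_sqft"), df["eui_kbtu_sqft"]
      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

      # Impurity-based importances identify the variables that most influence EUI.
      for name, imp in sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1]):
          print(f"{name:12s} {imp:.2f}")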

  4. An inexpensive, interdisciplinary, methodology to conduct an impact study of homeless persons on hospital based services.

    Science.gov (United States)

    Parker, R David; Regier, Michael; Brown, Zachary; Davis, Stephen

    2015-02-01

    Homelessness is a primary concern for community health. The scientific literature on homelessness is wide ranging and diverse. One opportunity to add to the existing literature is the development and testing of affordable, easily implemented methods for measuring the impact of homelessness on the healthcare system. Such methodological approaches rely on the strengths of a multidisciplinary approach, including providers of both healthcare and homeless services and applied clinical researchers. This paper is a proof of concept for a methodology that is easily adaptable nationwide, given the mandated implementation of homeless management information systems in the United States and other countries, the medical billing systems used by hospitals, and established research methods. Adaptation is independent of geographic region, budget restraints, specific agency skill sets, and many other factors that impact the application of a consistent, methodological, science-based approach to assess and address homelessness. We conducted a secondary data analysis merging homeless service utilization data with hospital case-based data. These data detailed care utilization among homeless persons in a small Appalachian city in the United States. In our sample of 269 persons who received at least one hospital-based service and one homeless service between July 1, 2012 and June 30, 2013, the total billed costs were $5,979,463, with 10 people accounting for more than one-third ($1,957,469) of the total. Those persons were primarily men, living in an emergency shelter, with pre-existing disabling conditions. We theorize that targeted services, including Housing First, would be an effective intervention. This is proposed in a future study.

  5. The API methodology for risk-based inspection (RBI) analysis for the petroleum and petrochemical industry

    International Nuclear Information System (INIS)

    Reynolds, J.T.

    1998-01-01

    Twenty-one petroleum and petrochemical companies are currently sponsoring a project within the American Petroleum Institute (API) to develop risk-based inspection (RBI) methodology for application in the refining and petrochemical industry. This paper describes that RBI methodology and provides a summary of the three levels of RBI analysis developed by the project. Also included is a review of the first pilot project to validate the methodology by applying RBI to several existing refining units. The failure of pressure equipment in a process unit can have several undesirable effects. For the purpose of RBI analysis, the API RBI program categorizes these effects into four basic risk outcomes: flammable events, toxic releases, major environmental damage, and business interruption losses. API RBI is a strategic process, both qualitative and quantitative, for understanding and reducing the risks associated with operating pressure equipment. This paper shows how API RBI assesses the potential consequences of a failure of the pressure boundary, as well as the likelihood (probability) of failure. Risk-based inspection also prioritizes risk levels in a systematic manner so that the owner-user can plan an inspection program that focuses more resources on the higher-risk equipment, while possibly saving inspection resources where they are not doing an effective job of reducing risk. At the same time, if consequence of failure is a significant driving force for high-risk equipment items, plant management also has the option of applying consequence mitigation steps to minimize the impact of a hazardous release, should one occur. The target audience for this paper is engineers, inspectors, and managers who want to understand what API risk-based inspection is all about, what the benefits and limitations of RBI are, and how inspection practices can be changed to reduce risks and/or save costs without impacting safety risk. (Author)
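
    At its simplest, RBI screening ranks equipment by risk = probability of failure x consequence of failure. The sketch below illustrates that ranking with invented equipment items and values; it does not reproduce the API consequence or likelihood tables.

      # Illustrative equipment items with likelihood (per year) and consequence (USD); not API values.
      items = {
          "crude column overhead drum": (2e-3, 5_000_000),
          "naphtha exchanger":          (5e-4, 1_200_000),
          "flare knock-out drum":       (1e-4,   300_000),
      }

      # Rank by risk = PoF x CoF so inspection resources go to the highest-risk items first.
      ranked = sorted(items.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
      for name, (pof, cof) in ranked:
          print(f"{name:28s} PoF={pof:.1e}/yr  CoF=${cof:,.0f}  risk=${pof * cof:,.0f}/yr")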

  6. Probabilistic fatigue life prediction methodology for notched components based on simple smooth fatigue tests

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Z. R.; Li, Z. X. [Dept.of Engineering Mechanics, Jiangsu Key Laboratory of Engineering Mechanics, Southeast University, Nanjing (China); Hu, X. T.; Xin, P. P.; Song, Y. D. [State Key Laboratory of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing (China)

    2017-01-15

    A methodology for probabilistic fatigue life prediction of notched components based on smooth specimens is presented. Weakest-link theory incorporating the Walker strain model has been utilized in this approach. The effects of stress ratio and stress gradient have been considered. The Weibull distribution and the median rank estimator are used to describe the fatigue statistics. Fatigue tests under different stress ratios were conducted on smooth and notched specimens of the titanium alloy TC-1-1. The proposed procedures were checked against the test data of the TC-1-1 notched specimens. Predictions at the 50% survival rate all fall within a factor-of-two scatter band of the test results.
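
    The Weibull/median-rank portion of such an analysis can be sketched as below: ordered fatigue lives are assigned median ranks (Benard's approximation) and the Weibull shape and scale parameters are estimated from a linear fit on the Weibull plot. The lives are made up, and the Walker strain correction and weakest-link integration used in the paper are not included.

      import numpy as np

      lives = np.sort(np.array([28e3, 41e3, 55e3, 62e3, 90e3]))   # fictitious fatigue lives (cycles)
      n = len(lives)
      i = np.arange(1, n + 1)

      # Benard's median-rank estimator for the failure probability of the i-th ordered life.
      F = (i - 0.3) / (n + 0.4)

      # Weibull plot: ln(-ln(1-F)) versus ln(N) is linear with slope equal to the shape parameter beta.
      x, y = np.log(lives), np.log(-np.log(1.0 - F))
      beta, intercept = np.polyfit(x, y, 1)
      eta = np.exp(-intercept / beta)          # scale parameter (characteristic life)
      print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} cycles")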

  7. HSTDEK: Developing a methodology for construction of large-scale, multi-use knowledge bases

    Science.gov (United States)

    Freeman, Michael S.

    1987-01-01

    The primary research objectives of the Hubble Space Telescope Design/Engineering Knowledgebase (HSTDEK) are to develop a methodology for constructing and maintaining large-scale knowledge bases which can be used to support multiple applications. To ensure the validity of its results, this research is being pursued in the context of a real-world system, the Hubble Space Telescope. The HSTDEK objectives are described in detail. The history and motivation of the project are briefly described. The technical challenges faced by the project are outlined.

  8. Smart learning objects for smart education in computer science theory, methodology and robot-based implementation

    CERN Document Server

    Stuikys, Vytautas

    2015-01-01

    This monograph presents the challenges, vision and context for designing smart learning objects (SLOs) through Computer Science (CS) education modelling and feature model transformations. It presents the latest research on meta-programming-based generative learning objects (the latter, with advanced features, are treated as SLOs) and the use of educational robots in teaching CS topics. The introduced methodology includes the overall processes to develop SLOs and a smart educational environment (SEE) and integrates both into the real education setting to provide teaching in CS using constructivist a

  9. SNMG: a social-level norm-based methodology for macro-governing service collaboration processes

    Science.gov (United States)

    Gao, Ji; Lv, Hexin; Jin, Zhiyong; Xu, Ping

    2017-08-01

    In order to adapt to the accelerating trend toward open collaboration between enterprises, this paper proposes a Social-level Norm-based methodology for Macro-Governing service collaboration processes, called SNMG, to regulate and control the social-level visible macro-behaviors of the social individuals participating in collaborations. SNMG not only effectively removes the uncontrollability that hinders open social activities, but also enables cross-management-domain collaborations to be implemented by uniting the centralized controls of the social individuals over their respective social activities. This paper thereby provides a new system construction mode to promote the development and large-scale deployment of service collaborations.

  10. Cell attachment properties of Portland cement-based endodontic materials: biological and methodological considerations.

    Science.gov (United States)

    Ahmed, Hany Mohamed Aly; Luddin, Norhayati; Kannan, Thirumulu Ponnuraj; Mokhtar, Khairani Idah; Ahmad, Azlina

    2014-10-01

    The attachment and spreading of mammalian cells on endodontic biomaterials are an area of active research. The purpose of this review is to discuss the cell attachment properties of Portland cement (PC)-based materials by using scanning electron microscope (SEM). In addition, methodological aspects and technical challenges are discussed. A PubMed electronic search was conducted by using appropriate key words to identify the available investigations on the cell attachment properties of PC-based endodontic materials. After retrieving the full text of related articles, the cross citations were also identified. A total of 23 articles published between January 1993 and October 2013 were identified. This review summarizes the cell attachment properties of commercial and experimental PC-based materials on different cell cultures by using SEM. Methodological procedures, technical challenges, and relevance of SEM in determining the biological profile of PC-based materials are discussed. SEM observations demonstrate that commercial MTA formulations show favorable cell attachment properties, which is consistent with their successful clinical outcomes. The favorable cell attachment properties of PC and its modified formulations support its potential use as a substitute for mineral trioxide aggregate. However, researchers should carefully select cell types for their SEM investigations that would be in contact with the proposed PC-based combinations in the clinical situation. Despite being a technical challenge, SEM provides useful information on the cell attachment properties of PC-based materials; however, other assays for cell proliferation and viability are essential to come up with an accurate in vitro biological profile of any given PC-based formulation. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  11. Synthesis of novel fluorene-based two-photon absorbing molecules and their applications in optical data storage, microfabrication, and stimulated emission depletion

    Science.gov (United States)

    Yanez, Ciceron

    2009-12-01

    Two-photon absorption (2PA) has been used for a number of scientific and technological applications, exploiting the fact that the 2PA probability is directly proportional to the square of the incident light intensity (while one-photon absorption bears a linear relation to the incident light intensity). This intrinsic property of 2PA leads to 3D spatial localization, important in fields such as optical data storage, fluorescence microscopy, and 3D microfabrication. The spatial confinement that 2PA enables has been used to induce photochemical and photophysical events in increasingly smaller volumes and allowed nonlinear, 2PA-based, technologies to reach sub-diffraction limit resolutions. The primary focus of this dissertation is the development of novel, efficient 2PA, fluorene-based molecules to be used either as photoacid generators (PAGs) or fluorophores. A second aim is to develop more effective methods of synthesizing these compounds. As a third and final objective, the new molecules were used to develop a write-once-read many (WORM) optical data storage system, and stimulated emission depletion probes for bioimaging. In Chapter I, the microwave-assisted synthesis of triarylsulfonium salt photoacid generators (PAGs) from their diphenyliodonium counterparts is reported. The microwave-assisted synthesis of these novel sulfonium salts afforded reaction times 90 to 420 times faster than conventional thermal conditions, with photoacid quantum yields of new sulfonium PAGs ranging from 0.01 to 0.4. These PAGs were used to develop a fluorescence readout-based, nonlinear three-dimensional (3D) optical data storage system (Chapter II). In this system, writing was achieved by acid generation upon two-photon absorption (2PA) of a PAG (at 710 or 730 nm). Readout was then performed by interrogating two-photon absorbing dyes, after protonation, at 860 nm. Two-photon recording and readout of voxels was demonstrated in five and eight consecutive, crosstalk-free layers within a

  12. Location of Bioelectricity Plants in the Madrid Community Based on Triticale Crop: A Multicriteria Methodology

    Directory of Open Access Journals (Sweden)

    L. Romero

    2015-01-01

    Full Text Available This paper presents a work whose objective is, first, to quantify the potential of the triticale biomass existing in each of the agricultural regions of the Madrid Community through a crop simulation model based on regression techniques and multiple correlation. Second, a methodology for defining which area has the best conditions for the installation of plants generating electricity from biomass is described and applied. The study used a methodology based on compromise programming in a discrete multicriteria decision method (MDM) context. To construct the ranking, the following criteria were taken into account: biomass potential, electric power infrastructure, road networks, protected spaces, and urban nuclei surfaces. The results indicate that, in the case of the Madrid Community, the Campiña region is the most suitable for setting up plants powered by biomass. A minimum of 17,339.9 tons of triticale will be needed to satisfy the requirements of a 2.2 MW power plant. The minimum range of action for obtaining the necessary biomass in the Campiña region would be 6.6 km around the municipality of Algete, based on Geographic Information Systems. The total biomass which could be made available within this range in this region would be 18,430.68 t.

  13. The Treatment of Uncertainty in Compensation Schemes for Cancer Based on the Probability of Causation Methodology

    International Nuclear Information System (INIS)

    Koch, J.

    2014-01-01

    Since it is commonly accepted that exposure to ionizing radiation, even at the low levels encountered in the workplace, can cause malignant diseases, radiation workers are at some risk, although much is done to optimize radiation protection and reduce occupational exposure to levels as low as reasonably achievable. However, the causal relationship between exposure to radiation and malignant diseases is difficult to establish, since cancer is such a frequent disease and many other factors may contribute to its development. Ideally, those workers who developed cancer as a result of occupational exposure to radiation should be compensated. Guidance on procedures and methodology to assess the attributability of cancer to occupational exposure to radiation and to assist decision-making in compensating workers is provided in a recent joint IAEA/ILO/WHO publication. This guide also reviews compensation schemes in place in several countries, with an emphasis on those based on the probability of causation (POC), also known as the assigned share (AS), methodology. The POC method provides a scientifically based framework to assess cancer attributability to occupational exposure and was extensively reviewed by Wakeford et al. This paper presents a comparison of two well-known compensation schemes based on the POC approach with regard to their treatment of uncertainty.
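
    The POC (assigned share) statistic is commonly computed from the excess relative risk (ERR) attributable to the exposure as POC = ERR / (1 + ERR). The sketch below shows only that core relation; the full methodology in the guide additionally propagates dose and risk-model uncertainties, which is not reproduced here.

      def probability_of_causation(excess_relative_risk: float) -> float:
          """POC (assigned share) from the excess relative risk of the worker's exposure."""
          err = excess_relative_risk
          return err / (1.0 + err)

      # Example: an exposure estimated to raise the baseline cancer risk by 25% (ERR = 0.25).
      print(f"POC = {probability_of_causation(0.25):.1%}")   # about 20%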

  14. Microscopic to macroscopic depletion model development for FORMOSA-P

    International Nuclear Information System (INIS)

    Noh, J.M.; Turinsky, P.J.; Sarsour, H.N.

    1996-01-01

    Microscopic depletion has been gaining popularity with regard to employment in reactor core nodal calculations, mainly attributed to the superiority of microscopic depletion in treating spectral history effects during depletion. Another trend is the employment of loading pattern optimization computer codes in support of reload core design. Use of such optimization codes has significantly reduced design efforts to optimize reload core loading patterns associated with increasingly complicated lattice designs. A microscopic depletion model has been developed for the FORMOSA-P pressurized water reactor (PWR) loading pattern optimization code. This was done for both fidelity improvements and to make FORMOSA-P compatible with microscopic-based nuclear design methods. Needless to say, microscopic depletion requires more computational effort compared with macroscopic depletion. This implies that microscopic depletion may be computationally restrictive if employed during the loading pattern optimization calculation because many loading patterns are examined during the course of an optimization search. Therefore, the microscopic depletion model developed here uses combined models of microscopic and macroscopic depletion. This is done by first performing microscopic depletions for a subset of possible loading patterns from which 'collapsed' macroscopic cross sections are obtained. The collapsed macroscopic cross sections inherently incorporate spectral history effects. Subsequently, the optimization calculations are done using the collapsed macroscopic cross sections. Using this approach allows maintenance of microscopic depletion level accuracy without substantial additional computing resources
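
    The collapsing step rests on the standard relation between microscopic and macroscopic cross sections, Sigma_x = sum_i N_i * sigma_x,i, with the nuclide number densities N_i carrying the burnup/spectral history. The sketch below shows that relation with fictitious one-group data for three nuclides; it is not FORMOSA-P code and ignores the energy-group structure.

      # Fictitious nuclide number densities (atoms/barn-cm) and one-group microscopic
      # absorption cross sections (barns); a real collapse would be done group-wise.
      number_density = {"U235": 7.0e-4, "U238": 2.2e-2, "Pu239": 1.5e-4}
      sigma_abs      = {"U235": 60.0,   "U238": 0.9,    "Pu239": 100.0}

      # Macroscopic cross section (1/cm) "collapsed" from the microscopic data;
      # updating the number densities during depletion is what carries the history effects.
      Sigma_abs = sum(number_density[n] * sigma_abs[n] for n in number_density)
      print(f"macroscopic absorption cross section: {Sigma_abs:.4f} 1/cm")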

  15. Information security system based on virtual-optics imaging methodology and public key infrastructure

    Science.gov (United States)

    Peng, Xiang; Zhang, Peng; Cai, Lilong

    In this paper, we present a virtual-optics-based information security system model with the aid of public-key-infrastructure (PKI) techniques. The proposed model employs a hybrid architecture in which our previously published encryption algorithm based on virtual-optics imaging methodology (VOIM) can be used to encipher and decipher data, while an asymmetric algorithm, for example RSA, is applied for enciphering and deciphering the session key(s). For an asymmetric system, given an encryption key, it is computationally infeasible to determine the decryption key and vice versa. The whole information security model runs under the framework of PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOIM security approach has additional features like confidentiality, authentication, and integrity for the purpose of data encryption in a network environment.
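
    The hybrid pattern described (a symmetric cipher for the bulk data, an asymmetric cipher for the session key) can be sketched with the Python cryptography package as below. The Fernet symmetric cipher stands in for the VOIM optical encryption, which is not reproduced here; key sizes and the payload are illustrative.

      from cryptography.fernet import Fernet
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding

      # Receiver's asymmetric key pair (a PKI would distribute the public key in a certificate).
      private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      public_key = private_key.public_key()

      # Symmetric session key encrypts the bulk data (stand-in for the VOIM cipher).
      session_key = Fernet.generate_key()
      ciphertext = Fernet(session_key).encrypt(b"image or data payload")

      # RSA-OAEP protects the session key in transit.
      oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None)
      wrapped_key = public_key.encrypt(session_key, oaep)

      # Receiver side: unwrap the session key, then decrypt the payload.
      recovered = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
      assert recovered == b"image or data payload"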

  16. A new transmission methodology for quality assurance in radiotherapy based on radiochromic film measurements

    Science.gov (United States)

    do Amaral, Leonardo L.; Pavoni, Juliana F.; Sampaio, Francisco; Netto, Thomaz Ghilardi

    2015-01-01

    Despite individual quality assurance (QA) being recommended for complex techniques in radiotherapy (RT) treatment, the possibility of errors in dose delivery during therapeutic application has been verified. Therefore, it is fundamentally important to conduct in vivo QA during treatment. This work presents an in vivo transmission quality control methodology, using radiochromic film (RCF) coupled to the linear accelerator (linac) accessory holder. This QA methodology compares the dose distribution measured by the film in the linac accessory holder with the dose distribution expected by the treatment planning software. The calculated dose distribution is obtained in the coronal and central plane of a phantom with the same dimensions as the acrylic support used for positioning the film, but at a source-to-detector distance (SDD) of 100 cm, as a result of transferring the IMRT plan in question with all the fields positioned with the gantry vertical, that is, perpendicular to the phantom. To validate this procedure, a Monte Carlo simulation using the PENELOPE code was first performed to evaluate the differences between the dose distributions measured by the film at SDDs of 56.8 cm and 100 cm. After that, several simple dose distribution tests were evaluated using the proposed methodology, and finally a study using IMRT treatments was done. In the Monte Carlo simulation, the mean percentage of points approved in the gamma function comparing the dose distributions acquired at the two SDDs was 99.92% ± 0.14%. In the simple dose distribution tests, the mean percentage of points approved in the gamma function was 99.85% ± 0.26% and the mean percentage difference in the normalization point doses was −1.41%. The transmission methodology was approved in 24 of 25 IMRT test irradiations. Based on these results, it can be concluded that the proposed methodology using RCFs can be applied for in vivo QA in RT treatments. PACS numbers: 87.55.Qr, 87.55.km, 87.55.N‐
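
    The gamma comparison used above combines a dose-difference criterion and a distance-to-agreement (DTA) criterion. A simplified, brute-force global 2D gamma pass-rate sketch is shown below with assumed 3%/3 mm criteria and synthetic dose planes; it is not the authors' analysis code.

      import numpy as np

      def gamma_pass_rate(measured, calculated, spacing_mm=1.0, dd=0.03, dta_mm=3.0, search_mm=6.0):
          """Fraction of points with gamma <= 1 (global dose difference dd, DTA in mm)."""
          norm = calculated.max()
          r = int(search_mm / spacing_mm)
          rows, cols = measured.shape
          passed = 0
          for i in range(rows):
              for j in range(cols):
                  best = np.inf
                  for di in range(-r, r + 1):
                      for dj in range(-r, r + 1):
                          ii, jj = i + di, j + dj
                          if 0 <= ii < rows and 0 <= jj < cols:
                              dose_term = (measured[i, j] - calculated[ii, jj]) / (dd * norm)
                              dist_term = spacing_mm * np.hypot(di, dj) / dta_mm
                              best = min(best, dose_term ** 2 + dist_term ** 2)
                  passed += best <= 1.0          # gamma <= 1 when the squared sum is <= 1
          return passed / (rows * cols)

      film = np.random.default_rng(0).normal(1.0, 0.01, (20, 20))   # synthetic "measured" plane
      tps = np.ones((20, 20))                                       # synthetic "calculated" plane
      print(f"gamma pass rate: {gamma_pass_rate(film, tps):.1%}")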

  17. Optimization of cocoa nib roasting based on sensory properties and colour using response surface methodology

    Directory of Open Access Journals (Sweden)

    D.M.H. A.H. Farah

    2012-05-01

    Full Text Available Roasting of cocoa beans is a critical stage for the development of their desirable flavour, aroma and colour. Prior to roasting, cocoa beans may taste astringent, bitter, acidy, musty, unclean, nutty or even chocolate-like, depending on the bean sources and their preparation. After roasting, the beans possess a typical intense cocoa flavour. The Maillard, or non-enzymatic browning, reaction is a very important process for the development of cocoa flavour; it occurs primarily during the roasting process, and it has generally been agreed that the formation of the main flavour components, pyrazines, is associated with this reaction involving amino acids and reducing sugars. The effect of cocoa nib roasting conditions on the sensory properties and colour of cocoa beans was investigated in this study. Roasting conditions in terms of temperature, ranging from 110 to 160 °C, and time, ranging from 15 to 40 min, were optimized using Response Surface Methodology based on cocoa sensory characteristics including chocolate aroma, acidity, astringency, burnt taste and overall acceptability. The analyses used a 9-point hedonic scale with twelve trained panelists. The changes in colour due to the roasting conditions were also monitored using a chromameter. The results of this study showed that the sensory quality of cocoa liquor increased with increasing roasting time and temperature, up to 160 °C and 40 min, respectively. Based on the Response Surface Methodology, the optimized operating condition for the roaster was a temperature of 127 °C and a time of 25 min. The proposed roasting conditions were able to produce superior quality cocoa beans that will be very useful for cocoa manufacturers. Keywords: Cocoa, cocoa liquor, flavour, aroma, colour, sensory characteristic, response surface methodology.

  18. Risk Based Inspection Methodology and Software Applied to Atmospheric Storage Tanks

    Science.gov (United States)

    Topalis, P.; Korneliussen, G.; Hermanrud, J.; Steo, Y.

    2012-05-01

    A new risk-based inspection (RBI) methodology and software is presented in this paper. The objective of this work is to allow management of the inspections of atmospheric storage tanks in the most efficient way, while, at the same time, accident risks are minimized. The software has been built on the new risk framework architecture, a generic platform facilitating efficient and integrated development of software applications using risk models. The framework includes a library of risk models and the user interface is automatically produced on the basis of editable schemas. This risk-framework-based RBI tool has been applied in the context of RBI for above-ground atmospheric storage tanks (AST) but it has been designed with the objective of being generic enough to allow extension to the process plants in general. This RBI methodology is an evolution of an approach and mathematical models developed for Det Norske Veritas (DNV) and the American Petroleum Institute (API). The methodology assesses damage mechanism potential, degradation rates, probability of failure (PoF), consequence of failure (CoF) in terms of environmental damage and financial loss, risk and inspection intervals and techniques. The scope includes assessment of the tank floor for soil-side external corrosion and product-side internal corrosion and the tank shell courses for atmospheric corrosion and internal thinning. It also includes preliminary assessment for brittle fracture and cracking. The data are structured according to an asset hierarchy including Plant, Production Unit, Process Unit, Tag, Part and Inspection levels and the data are inherited / defaulted seamlessly from a higher hierarchy level to a lower level. The user interface includes synchronized hierarchy tree browsing, dynamic editor and grid-view editing and active reports with drill-in capability.

  19. Randomized controlled trials of simulation-based interventions in Emergency Medicine: a methodological review.

    Science.gov (United States)

    Chauvin, Anthony; Truchot, Jennifer; Bafeta, Aida; Pateron, Dominique; Plaisance, Patrick; Yordanov, Youri

    2018-04-01

    The number of trials assessing Simulation-Based Medical Education (SBME) interventions has rapidly expanded. Many studies show that potential flaws in the design, conduct and reporting of randomized controlled trials (RCTs) can bias their results. We conducted a methodological review of RCTs assessing SBME interventions in Emergency Medicine (EM) and examined their methodological characteristics. We searched MEDLINE via PubMed for RCTs that assessed a simulation intervention in EM, published in 6 general and internal medicine journals and in the top 10 EM journals. The Cochrane Collaboration risk of bias tool was used to assess risk of bias, intervention reporting was evaluated based on the "template for intervention description and replication" checklist, and methodological quality was evaluated by the Medical Education Research Study Quality Instrument (MERSQI). Report selection and data extraction were done by 2 independent researchers. Of 1394 RCTs screened, 68 trials assessed an SBME intervention; they represent one quarter of our sample. Cardiopulmonary resuscitation (CPR) is the most frequent topic (81%). Random sequence generation and allocation concealment were performed correctly in 66% and 49% of trials. Blinding of participants and assessors was performed correctly in 19% and 68%. Risk of attrition bias was low in three-quarters of the studies (n = 51). Risk of selective reporting bias was unclear in nearly all studies. The mean MERSQI score was 13.4/18, and only 4% of the reports provided a description allowing replication of the intervention. Trials assessing simulation represent one quarter of RCTs in EM. Their quality remains unclear, and reproducing the interventions appears challenging due to reporting issues.

  20. Combining the auxin-inducible degradation system with CRISPR/Cas9-based genome editing for the conditional depletion of endogenous Drosophila melanogaster proteins.

    Science.gov (United States)

    Bence, Melinda; Jankovics, Ferenc; Lukácsovich, Tamás; Erdélyi, Miklós

    2017-04-01

    Inducible protein degradation techniques have considerable advantages over classical genetic approaches, which generate loss-of-function phenotypes at the gene or mRNA level. The plant-derived auxin-inducible degradation system (AID) is a promising technique which enables the degradation of target proteins tagged with the AID motif in nonplant cells. Here, we present a detailed characterization of this method employed during the adult oogenesis of Drosophila. Furthermore, with the help of CRISPR/Cas9-based genome editing, we improve the utility of the AID system in the conditional elimination of endogenously expressed proteins. We demonstrate that the AID system induces efficient and reversible protein depletion of maternally provided proteins both in the ovary and the early embryo. Moreover, the AID system provides a fine spatiotemporal control of protein degradation and allows for the generation of different levels of protein knockdown in a well-regulated manner. These features of the AID system enable the unraveling of the discrete phenotypes of genes with highly complex functions. We utilized this system to generate a conditional loss-of-function allele which allows for the specific degradation of the Vasa protein without affecting its alternative splice variant (solo) and the vasa intronic gene (vig). With the help of this special allele, we demonstrate that dramatic decrease of Vasa protein in the vitellarium does not influence the completion of oogenesis as well as the establishment of proper anteroposterior and dorsoventral polarity in the developing oocyte. Our study suggests that both the localization and the translation of gurken mRNA in the vitellarium is independent from Vasa. © 2017 Federation of European Biochemical Societies.

  1. MOx Depletion Calculation Benchmark

    International Nuclear Information System (INIS)

    San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin

    2016-01-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems and also provides specific technical information regarding: core reactivity characteristics, including fuel depletion effects; core power/flux distributions; core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR). This was entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues', NEA/NSC/DOC(2013), that documented a benchmark exercise for UO2 fuel rods. This report documents a complementary benchmark exercise that focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOX assemblies. SLB1 is a 900 MWe PWR, with 30% MOX fuel loading. The standard MOX assemblies, used in Saint-Laurent B1 reactor, include three zones with different plutonium enrichments, high Pu content (5.64%) in the center zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone

  2. Model-driven methodology for rapid deployment of smart spaces based on resource-oriented architectures.

    Science.gov (United States)

    Corredor, Iván; Bernardos, Ana M; Iglesias, Josué; Casar, José R

    2012-01-01

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.

  3. Model-Driven Methodology for Rapid Deployment of Smart Spaces Based on Resource-Oriented Architectures

    Directory of Open Access Journals (Sweden)

    José R. Casar

    2012-07-01

    Full Text Available Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.

  4. A Bayesian maximum entropy-based methodology for optimal spatiotemporal design of groundwater monitoring networks.

    Science.gov (United States)

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern and a new method, which is proposed to assign a removal priority number to each pre-existing station. To design the temporal sampling, a new approach is also applied to account for uncertainty caused by lack of information. In this approach, different time lag values are tested against another source of information, namely the simulation results of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in the available monitoring data, the flexibility of the BME interpolation technique is exploited by applying soft data, improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations on a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.
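
    The OWA aggregation used in the multi-criteria step applies the weights to the criterion scores after sorting them, rather than to fixed criteria. The sketch below shows that operation with placeholder scores and weights; the actual criteria and weights of the study are not reproduced.

      import numpy as np

      def owa(scores, weights):
          """Ordered weighted averaging: weights apply to the sorted (descending) scores."""
          return float(np.dot(np.sort(scores)[::-1], weights))

      # Placeholder criterion scores for one candidate station and illustrative weights (sum to 1).
      scores = np.array([0.8, 0.55, 0.3])        # e.g. variance reduction, coverage, cost
      weights = np.array([0.5, 0.3, 0.2])
      print(f"OWA score: {owa(scores, weights):.2f}")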

  5. IoT-Based Information System for Healthcare Application: Design Methodology Approach

    Directory of Open Access Journals (Sweden)

    Damian Dziak

    2017-06-01

    Full Text Available Over the last few decades, life expectancy has increased significantly. However, elderly people who live on their own often need assistance due to mobility difficulties, symptoms of dementia or other health problems. In such cases, an autonomous supporting system may be helpful. This paper proposes an Internet of Things (IoT)-based information system for indoor and outdoor use. Since the conducted survey of related works indicated a lack of methodological approaches to the design process, a Design Methodology (DM), which approaches the design target from the perspective of the stakeholders, contracting authorities and potential users, is introduced. The implemented solution applies the three-axial accelerometer and magnetometer, Pedestrian Dead Reckoning (PDR), thresholding and the decision trees algorithm. Such an architecture enables the localization of a monitored person within four room-zones with accuracy; furthermore, it identifies falls and the activities of lying, standing, sitting and walking. Based on the identified activities, the system classifies current activities as normal, suspicious or dangerous, which is used to notify the healthcare staff about possible problems. The real-life scenarios validated the high robustness of the proposed solution. Moreover, the test results satisfied both stakeholders and future users and ensured further cooperation with the project.

  6. Pressure-based high-order TVD methodology for dynamic stall control

    Science.gov (United States)

    Yang, H. Q.; Przekwas, A. J.

    1992-01-01

    The quantitative prediction of the dynamics of separating unsteady flows, such as dynamic stall, is of crucial importance. This six-month SBIR Phase 1 study has developed several new pressure-based methodologies for solving the 3D Navier-Stokes equations in both stationary and moving (body-conforming) coordinates. The present pressure-based algorithm is equally efficient for low-speed incompressible flows and high-speed compressible flows. The discretization of convective terms by the presently developed high-order TVD schemes requires no artificial dissipation and can properly resolve the concentrated vortices in the wing-body flow with minimum numerical diffusion. It is demonstrated that the proposed Newton's iteration technique not only increases the convergence rate but also strongly couples the iteration between pressure and velocities. The proposed hyperbolization of the pressure correction equation is shown to increase the solver's efficiency. The above methodologies were implemented in an existing CFD code, REFLEQS. The modified code was used to simulate both static and dynamic stall on two- and three-dimensional wing-body configurations. Three-dimensional effects and flow physics are discussed.
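
    The TVD idea can be illustrated with a small 1D sketch: a MUSCL reconstruction whose slopes are limited by the minmod function keeps a step profile free of spurious oscillations under linear advection. This is only a schematic of the limiting concept, not the pressure-based 3D Navier-Stokes solver described above; the grid, CFL number and step count are arbitrary.

      import numpy as np

      def minmod(a, b):
          """Minmod slope limiter: zero at extrema, otherwise the smaller-magnitude slope."""
          return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

      def advect_step(u, c):
          """One explicit step of linear advection (positive speed) with limited upwind fluxes, CFL = c."""
          dl = u - np.roll(u, 1)            # backward differences
          dr = np.roll(u, -1) - u           # forward differences
          slope = minmod(dl, dr)
          flux = u + 0.5 * slope            # reconstructed (upwind) value at each right cell face
          return u - c * (flux - np.roll(flux, 1))

      u = np.where(np.arange(100) < 50, 1.0, 0.0)   # step profile on a periodic domain
      for _ in range(40):
          u = advect_step(u, c=0.4)                 # the step is advected without new over/undershoots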

  7. [Activity-based costing methodology to manage resources in intensive care units].

    Science.gov (United States)

    Alvear V, Sandra; Canteros G, Jorge; Jara M, Juan; Rodríguez C, Patricia

    2013-11-01

    An accurate estimation of resource use by individual patients is crucial in hospital management. To measure the financial costs of health care actions in intensive care units of two public regional hospitals in Chile. Prospective follow-up of 716 patients admitted to two intensive care units during 2011. The financial costs of health care activities were calculated using the Activity-Based Costing methodology. The main activities recorded were procedures and treatments, monitoring, response to patient needs, patient maintenance and coordination. Activity-based costs, including human resources and assorted indirect costs, correspond to 81 to 88% of costs per disease in one hospital and 69 to 80% in the other. The costs associated with procedures and treatments are the most significant and are approximately $100,000 (Chilean pesos) per day of hospitalization. The second most significant cost corresponds to coordination activities, which fluctuates between $86,000 and $122,000 (Chilean pesos). There are significant differences in resource use between the two hospitals studied. Therefore, cost estimation methodologies should be incorporated in the management of these clinical services.

  8. Model methodology for estimating pesticide concentration extremes based on sparse monitoring data

    Science.gov (United States)

    Vecchia, Aldo V.

    2018-03-22

    This report describes a new methodology for using sparse (weekly or less frequent observations) and potentially highly censored pesticide monitoring data to simulate daily pesticide concentrations and associated quantities used for acute and chronic exposure assessments, such as the annual maximum daily concentration. The new methodology is based on a statistical model that expresses log-transformed daily pesticide concentration in terms of a seasonal wave, flow-related variability, long-term trend, and serially correlated errors. Methods are described for estimating the model parameters, generating conditional simulations of daily pesticide concentration given sparse (weekly or less frequent) and potentially highly censored observations, and estimating concentration extremes based on the conditional simulations. The model can be applied to datasets with as few as 3 years of record, as few as 30 total observations, and as few as 10 uncensored observations. The model was applied to atrazine, carbaryl, chlorpyrifos, and fipronil data for U.S. Geological Survey pesticide sampling sites with sufficient data for applying the model. A total of 112 sites were analyzed for atrazine, 38 for carbaryl, 34 for chlorpyrifos, and 33 for fipronil. The results are summarized in this report, and R functions, described in the report and provided in an accompanying model archive, can be used to fit the model parameters and generate conditional simulations of daily concentrations for use in investigations involving pesticide exposure risk and uncertainty.
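
    The model structure can be sketched by simulating a year of log concentrations as a seasonal wave plus a long-term trend plus serially correlated (AR(1)) errors and then taking the annual maximum. The sketch below is in Python rather than the report's R functions, omits the flow-related term, and uses invented parameter values.

      import numpy as np

      rng = np.random.default_rng(1)
      days = np.arange(365)

      # Model components on log10 concentration (all parameter values are invented).
      seasonal = 1.2 * np.sin(2 * np.pi * (days - 120) / 365)     # seasonal wave
      trend = -0.001 * days                                        # slow long-term decline
      errors = np.zeros(365)
      for t in range(1, 365):                                      # serially correlated AR(1) errors
          errors[t] = 0.8 * errors[t - 1] + rng.normal(0.0, 0.3)

      log_conc = -2.0 + seasonal + trend + errors                  # log10 concentration (ug/L)
      daily_conc = 10.0 ** log_conc
      print(f"annual maximum daily concentration: {daily_conc.max():.3f} ug/L")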

  9. A modified GO-FLOW methodology with common cause failure based on Discrete Time Bayesian Network

    International Nuclear Information System (INIS)

    Fan, Dongming; Wang, Zili; Liu, Linlin; Ren, Yi

    2016-01-01

    Highlights: • Identification of particular causes of failure for common cause failure analysis. • Comparison of two formalisms (GO-FLOW and Discrete Time Bayesian Network) and establishment of the correlation between them. • Mapping of the GO-FLOW model into a Bayesian network model. • Calculation of the GO-FLOW model with common cause failures based on DTBN. - Abstract: The GO-FLOW methodology is a success-oriented system reliability modelling technique for multi-phase missions involving complex time-dependent, multi-state and common cause failure (CCF) features. However, the analysis algorithm cannot easily handle multiple shared signals and CCFs. In addition, the simulative algorithm is time consuming when many multi-state components exist in the model, and the multiple time points of phased-mission problems increase the difficulty of the analysis method. In this paper, the Discrete Time Bayesian Network (DTBN) and the GO-FLOW methodology are integrated by unified mapping rules. Based on these rules, the multiple operators can be mapped into the DTBN; subsequently, a complete GO-FLOW model with complex characteristics (e.g. phased mission, multi-state, and CCF) can be converted to the isomorphic DTBN and easily analyzed by utilizing the DTBN. With mature algorithms and tools, the multi-phase mission reliability parameters can be efficiently obtained via the proposed approach without separately considering the shared signals and the various complex logic operations. Meanwhile, CCFs can also be incorporated in the computing process.

  10. Project-based learning methodology in the area of microbiology applied to undergraduate medical research.

    Science.gov (United States)

    Mateo, Estibaliz; Sevillano, Elena

    2018-07-01

    In recent years, there has been a decrease in the number of medical professionals dedicated to a research career. There is evidence that students with research experience during their training acquire knowledge and skills that increase the probability of getting involved in research more successfully. In the Degree of Medicine (University of the Basque Country), the annual core subject 'Research Project' introduces students to research. The aim of this work was to implement a project-based learning methodology, with the students working on microbiology, and to analyse its results over time. Given an initial scenario, the students had to come up with a research idea related to medical microbiology and to carry out a research project, including writing a funding proposal, developing the experimental assays and analyzing and presenting their results at a congress organized by the University. Summative assessment was performed by both students and teachers. A satisfaction survey was carried out to gather the students' opinions. The overall results regarding classroom dynamics, learning outcomes and motivation after the implementation were favourable. Students reported a greater interest in research than they had before. They would choose the project-based methodology over the traditional one.

  11. Evaluating perceptual integration: uniting response-time- and accuracy-based methodologies.

    Science.gov (United States)

    Eidels, Ami; Townsend, James T; Hughes, Howard C; Perry, Lacey A

    2015-02-01

    This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger Psychonomic Bulletin & Review 11, 391-418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.) that were proposed by Shaw and colleagues (e.g., Mulligan & Shaw Perception & Psychophysics 28, 471-478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa Journal of Mathematical Psychology 39, 321-359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.

  12. Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors.

    Science.gov (United States)

    Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter

    2016-08-24

    Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least squares approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. This achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability and a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design.
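
    The decoupling step can be pictured with an ordinary polynomial least-squares surrogate; the paper itself uses a moving least squares scheme, which additionally weights the calibration points by their distance to the query point, so the sketch below is a simplified stand-in and all field readings and force labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration set: magnetic flux readings (Bx, By, Bz) and the
# reference forces (Fx, Fy, Fz) that produced them (all values invented).
B = rng.uniform(-1.0, 1.0, size=(200, 3))
true_map = np.array([[2.0, 0.1, 0.0],
                     [0.1, 2.0, 0.0],
                     [0.0, 0.0, 3.0]])
F = B @ true_map + 0.5 * (B ** 2) @ np.diag([0.3, 0.3, 0.8]) + rng.normal(0, 0.01, (200, 3))

def features(b):
    """Quadratic feature expansion of a batch of (Bx, By, Bz) readings."""
    b = np.atleast_2d(b)
    return np.hstack([np.ones((len(b), 1)), b, b ** 2,
                      b[:, [0]] * b[:, [1]], b[:, [0]] * b[:, [2]], b[:, [1]] * b[:, [2]]])

# Least-squares fit of the field-to-force map (one coefficient column per force axis).
coef, *_ = np.linalg.lstsq(features(B), F, rcond=None)

# Use the fitted map to decouple a new reading into tri-axis force estimates.
b_new = np.array([0.2, -0.4, 0.7])
print("estimated force (Fx, Fy, Fz):", features(b_new) @ coef)
```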

  13. Site-conditions map for Portugal based on VS measurements: methodology and final model

    Science.gov (United States)

    Vilanova, Susana; Narciso, João; Carvalho, João; Lopes, Isabel; Quinta Ferreira, Mario; Moura, Rui; Borges, José; Nemser, Eliza; Pinto, carlos

    2017-04-01

    In this paper we present a statistically significant site-condition model for Portugal based on shear-wave velocity (VS) data and surface geology. We also evaluate the performance of commonly used Vs30 proxies based on exogenous data and analyze the implications of using those proxies for calculating site amplification in seismic hazard assessment. The dataset contains 161 Vs profiles acquired in Portugal in the context of research projects, technical reports, academic theses and academic papers. The methodologies involved in characterizing the Vs structure at the sites in the database include seismic refraction, multichannel analysis of surface waves and refraction microtremor. Invasive measurements were performed in selected locations in order to compare the Vs profiles obtained from both invasive and non-invasive techniques. In general there was good agreement in the subsurface Vs structure obtained from the different methodologies. The database flat-file includes information on Vs30, surface geology at 1:50.000 and 1:500.000 scales, and elevation and topographic slope based on the SRTM30 topographic dataset. The procedure used to develop the site-conditions map is based on a three-step process that includes defining a preliminary set of geological units based on the literature, performing statistical tests to assess whether or not the differences in the distributions of Vs30 are statistically significant, and merging the geological units accordingly. The dataset was, to some extent, affected by clustering and/or preferential sampling and therefore a declustering algorithm was applied. The final model includes three geological units: 1) Igneous, metamorphic and old (Paleogene and Mesozoic) sedimentary rocks; 2) Neogene and Pleistocene formations; and 3) Holocene formations. The evaluation of proxies indicates that although geological analogues and topographic slope are in general unbiased, the latter shows significant bias for particular geological units and
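
    A minimal sketch of the statistical-testing step, assuming synthetic Vs30 samples for two candidate geological units; a two-sample Kolmogorov-Smirnov test is used here as one plausible choice (the abstract does not name the specific test) to decide whether the two units may be merged.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Hypothetical Vs30 samples (m/s) for two candidate geological units.
vs30_unit_a = rng.lognormal(mean=np.log(600.0), sigma=0.25, size=40)   # older rocks
vs30_unit_b = rng.lognormal(mean=np.log(250.0), sigma=0.30, size=35)   # Holocene deposits

stat, p_value = ks_2samp(vs30_unit_a, vs30_unit_b)
if p_value < 0.05:
    print(f"p = {p_value:.3g}: distributions differ, keep the units separate")
else:
    print(f"p = {p_value:.3g}: no significant difference, merge the units")
```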

  14. Riddle of depleted uranium

    International Nuclear Information System (INIS)

    Hussein, A.S.

    2005-01-01

    Depleted uranium (DU) is the waste product of uranium enrichment from the manufacturing of fuel rods for nuclear reactors in nuclear power plants and nuclear-powered ships. DU may also result from the reprocessing of spent nuclear reactor fuel. Potentially DU has both chemical and radiological toxicity, with two important target organs being the kidney and the lungs. DU is made into a metal and, due to its availability, low price, high specific weight, density and melting point, as well as its pyrophoricity, it has a wide range of civilian and military applications. Due to the use of DU over recent years, reports have appeared in the press on health hazards that are alleged to be due to DU. In this paper the properties, applications, and potential environmental and health effects of DU are briefly reviewed.

  15. The new MCNP6 depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; James, M. R.; Hendricks, J. S.; Goorley, J. T.

    2012-01-01

    The first MCNP based in-line Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology. (authors)
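
    Independent of MCNP6 internals (the code couples transport to a dedicated depletion solver), the core of any depletion step is a Bateman solve. The toy sketch below propagates a three-nuclide chain with invented one-group reaction rates over a single burn step using a matrix exponential, which is one common way such solvers are written; it is illustrative only and not MCNP6's algorithm.

```python
import numpy as np
from scipy.linalg import expm

# One-group effective reaction rates (per second) for a toy 3-nuclide chain:
# U-235 is removed by fission + capture, capture feeds U-236, and both fission
# and U-236 absorption feed a lumped fission-product nuclide.  Values are
# illustrative only.
sigma_f_u235 = 1.2e-9   # fission removal rate of U-235
sigma_c_u235 = 0.3e-9   # capture removal rate of U-235 (produces U-236)
sigma_a_u236 = 0.1e-9   # absorption removal rate of U-236

# Burnup matrix A such that dN/dt = A N, with N = [U-235, U-236, fission products].
A = np.array([
    [-(sigma_f_u235 + sigma_c_u235), 0.0,           0.0],
    [ sigma_c_u235,                  -sigma_a_u236, 0.0],
    [ sigma_f_u235,                   sigma_a_u236, 0.0],
])

N0 = np.array([1.0e24, 0.0, 0.0])      # initial atom densities (arbitrary units)
dt = 30 * 24 * 3600.0                  # one 30-day depletion step, in seconds

N1 = expm(A * dt) @ N0                 # Bateman solution over the step
for name, n in zip(["U-235", "U-236", "FP"], N1):
    print(f"{name:6s} {n:.4e}")
```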

  17. A Unified Methodology for Aerospace Systems Integration Based on Entropy and the Second Law of Thermodynamics: Aerodynamics Assessment

    National Research Council Canada - National Science Library

    Camberos, Jose A; Nomura, Shohei; Stewart, Jason; Figliola, Richard

    2004-01-01

    .... The objective of this project is to relate work-potential losses (exergy destruction) to the aerodynamic forces in an attempt to validate a new design methodology based on the second law of thermodynamics...

  18. Simulating fission product transients via the history-based local-parameter methodology

    International Nuclear Information System (INIS)

    Jenkins, D.A.; Rouben, B.; Salvatore, M.

    1993-01-01

    This paper describes the fission-product-calculation capability of the history-based local-parameter methodology for evaluating lattice properties for use in core-tracking calculations in CANDU reactors. In addition to taking into account the individual past history of each bundle (the flux/power level, fuel temperature, and coolant density and temperature that the bundle has seen during its stay in the core), the latest refinement of the history-based method provides a fission-product-driver capability. It allows the bundle-specific concentrations of the three basic groups of saturating fission products to be calculated in steady state or following a power transient, including long shutdowns. The new capability is illustrated by simulating the startup period following a typical long shutdown, starting from a snapshot in the Point Lepreau operating history. 9 refs., 7 tabs
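
    The saturating-fission-product bookkeeping can be illustrated with the classic iodine-135/xenon-135 pair. The sketch below integrates the two coupled balance equations through a reactor shutdown using textbook-order-of-magnitude constants; the flux level, cross sections and yields are generic assumptions, not CANDU lattice data, and the scheme is a plain explicit Euler loop rather than the lattice-code treatment.

```python
import numpy as np

# Textbook-order-of-magnitude constants (not lattice-specific data).
LAMBDA_I  = 2.87e-5      # I-135 decay constant, 1/s
LAMBDA_XE = 2.09e-5      # Xe-135 decay constant, 1/s
GAMMA_I, GAMMA_XE = 0.064, 0.003   # fission yields
SIGMA_F  = 0.05          # macroscopic fission cross section, 1/cm (assumed)
SIGMA_XE = 2.6e-18       # Xe-135 microscopic absorption, cm^2

def step(i, xe, phi, dt):
    """One explicit Euler step of the coupled I-135 / Xe-135 balance equations."""
    di  = GAMMA_I * SIGMA_F * phi - LAMBDA_I * i
    dxe = (GAMMA_XE * SIGMA_F * phi + LAMBDA_I * i
           - LAMBDA_XE * xe - SIGMA_XE * phi * xe)
    return i + di * dt, xe + dxe * dt

phi_full, dt = 1.0e14, 60.0          # full-power flux (n/cm^2/s) and time step (s)
i, xe = 0.0, 0.0

# Run 3 days at full power, then 1 day after a shutdown (flux = 0).
history = []
for k in range(int(4 * 24 * 3600 / dt)):
    phi = phi_full if k * dt < 3 * 24 * 3600 else 0.0
    i, xe = step(i, xe, phi, dt)
    history.append(xe)

print(f"Xe-135 at shutdown    : {history[int(3*24*3600/dt) - 1]:.3e} atoms/cm^3")
print(f"Xe-135 9 h after trip : {history[int((3*24 + 9)*3600/dt) - 1]:.3e} atoms/cm^3")
```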

  19. A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2017-01-01

    Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is highly required. A first-principles kinetic model was developed to describe and understand the biological, physical, and chemical...... mechanisms in a lactic acid bacteria fermentation. We present here a consistent approach for a methodology-based parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge-based guess of parameters was available and an initial parameter estimation of the complete set...... of parameters was performed in order to get a good model fit to the data. However, not all parameters are identifiable with the given data set and model structure. Sensitivity, identifiability, and uncertainty analyses were completed and a relevant identifiable subset of parameters was determined for a new...
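
    As a much-simplified stand-in for the paper's first-principles model, the sketch below fits two parameters of a logistic growth curve to synthetic biomass data with a standard least-squares routine; the model form, parameter names and data are all invented, but the step mirrors the kind of initial parameter estimation being described.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, mu_max, x_max, x0=0.1):
    """Logistic biomass growth; a much-simplified stand-in for the full model."""
    return x_max / (1 + (x_max / x0 - 1) * np.exp(-mu_max * t))

# Synthetic 'measured' biomass concentrations (g/L) over a 12 h fermentation.
t_data = np.linspace(0, 12, 13)
rng = np.random.default_rng(2)
x_data = logistic(t_data, mu_max=0.8, x_max=5.0) + rng.normal(0, 0.05, t_data.size)

# Initial knowledge-based guess followed by a least-squares parameter estimation.
p0 = [0.5, 4.0]
popt, pcov = curve_fit(logistic, t_data, x_data, p0=p0)
perr = np.sqrt(np.diag(pcov))        # rough uncertainty of the estimates

print(f"mu_max = {popt[0]:.3f} +/- {perr[0]:.3f} 1/h")
print(f"x_max  = {popt[1]:.3f} +/- {perr[1]:.3f} g/L")
```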

  20. Delivering spacecraft control centers with embedded knowledge-based systems: The methodology issue

    Science.gov (United States)

    Ayache, S.; Haziza, M.; Cayrac, D.

    1994-01-01

    Matra Marconi Space (MMS) occupies a leading place in Europe in the domain of satellite and space data processing systems. The maturity of the knowledge-based systems (KBS) technology, the theoretical and practical experience acquired in the development of prototype, pre-operational and operational applications, make it possible today to consider the wide operational deployment of KBS's in space applications. In this perspective, MMS has to prepare the introduction of the new methods and support tools that will form the basis of the development of such systems. This paper introduces elements of the MMS methodology initiatives in the domain and the main rationale that motivated the approach. These initiatives develop along two main axes: knowledge engineering methods and tools, and a hybrid method approach for coexisting knowledge-based and conventional developments.

  1. The Development Of Learning Sets And Research Methodology Module Using Problem Based Learning For Accounting Education Students

    OpenAIRE

    Thomas, Partono; Nurkhin, Ahmad

    2016-01-01

    Improving the learning process, by implementing innovative learning methods or media, is very important for every lecturer. The purpose of this study was to develop research methodology learning instruction and a module based on problem-based learning for accounting education students. This research applied a research and development design in the research methodology course in the Economics Education (Accounting) Department, Faculty of Economics, Semarang State University. Data analysis was used to test...

  2. The activity-based methodology to assess ship emissions - A review

    International Nuclear Information System (INIS)

    Nunes, R.A.O.; Alvim-Ferraz, M.C.M.; Martins, F.G.; Sousa, S.I.V.

    2017-01-01

    Several studies have tried to estimate atmospheric emissions originating in the maritime sector, concluding that it contributes to global anthropogenic emissions through the emission of pollutants that have a strong impact on human health and also on climate change. Thus, this paper aimed to review published studies since 2010 that used the activity-based methodology to estimate ship emissions, to provide a summary of the available input data. After exclusions, 26 articles were analysed and the main information was scanned and registered, namely technical information about ships, ship activity and movement information, engines, fuels, load and emission factors. Most of the studies calculating in-port ship emissions concluded that the majority was emitted during hotelling, and most of the authors allocating emissions by ship type concluded that containerships were the main pollutant emitters. To obtain technical information about ships, the combined use of data from Lloyd's Register of Shipping database with other sources such as port authorities' databases, engine manufacturers and ship-owners seemed the best approach. The use of AIS data has been growing in recent years and seems to be the best method to report activities and movements of ships. To predict ship powers, the Hollenbach (1998) method, which estimates propelling power as a function of instantaneous speed based on total resistance, and the use of load-balancing schemes for multi-engine installations seemed to be the best practices for more accurate ship emission estimations. For emission factor improvement, new on-board measurement campaigns or studies should be undertaken. Regardless of the effort that has been made in the last years to obtain more accurate shipping emission inventories, more precise input data (technical information about ships, engines, load and emission factors) should be obtained to improve the methodology to develop global and universally accepted emission inventories.

  3. The Toxicity of Depleted Uranium

    OpenAIRE

    Briner, Wayne

    2010-01-01

    Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low dose exposure to depleted uranium may not produce a c...

  4. A methodology for a quantitative assessment of safety culture in NPPs based on Bayesian networks

    International Nuclear Information System (INIS)

    Kim, Young Gab; Lee, Seung Min; Seong, Poong Hyun

    2017-01-01

    Highlights: • A safety culture framework and a quantitative methodology to assess safety culture were proposed. • The relation among the norm system, the safety management system and workers' awareness was established. • The safety culture probability at NPPs was updated by collecting actual organizational data. • Vulnerable areas and the relationship between safety culture and human error were confirmed. - Abstract: For a long time, safety has been recognized as a top priority in high-reliability industries such as aviation and nuclear power plants (NPPs). Establishing a safety culture requires a number of actions to enhance safety, one of which is changing the safety culture awareness of workers. The concept of safety culture in the nuclear power domain was established in the International Atomic Energy Agency (IAEA) safety series, wherein the importance of employee attitudes for maintaining organizational safety was emphasized. Safety culture assessment is a critical step in the process of enhancing safety culture. In this respect, assessment is focused on measuring the level of safety culture in an organization and improving any weakness in the organization. However, many continue to think that the concept of safety culture is abstract and unclear. In addition, the results of safety culture assessments are mostly subjective and qualitative. Given the current situation, this paper suggests a quantitative methodology for safety culture assessment based on a Bayesian network. The proposed safety culture framework for NPPs includes the following: (1) a norm system, (2) a safety management system, (3) workers' safety culture awareness, and (4) worker behavior. The level of safety culture awareness of workers at NPPs was inferred through the proposed methodology. Then, areas of the organization that were vulnerable in terms of safety culture were derived by analyzing observational evidence. We also confirmed that the frequency of events involving human error

  5. Application of machine learning methodology for pet-based definition of lung cancer

    Science.gov (United States)

    Kerhet, A.; Small, C.; Quon, H.; Riauka, T.; Schrader, L.; Greiner, R.; Yee, D.; McEwan, A.; Roa, W.

    2010-01-01

    We applied a learning methodology framework to assist in the threshold-based segmentation of non-small-cell lung cancer (nsclc) tumours in positron-emission tomography–computed tomography (pet–ct) imaging for use in radiotherapy planning. Gated and standard free-breathing studies of two patients were independently analysed (four studies in total). Each study had a pet–ct and a treatment-planning ct image. The reference gross tumour volume (gtv) was identified by two experienced radiation oncologists who also determined reference standardized uptake value (suv) thresholds that most closely approximated the gtv contour on each slice. A set of uptake distribution-related attributes was calculated for each pet slice. A machine learning algorithm was trained on a subset of the pet slices to cope with slice-to-slice variation in the optimal suv threshold: that is, to predict the most appropriate suv threshold from the calculated attributes for each slice. The algorithm’s performance was evaluated using the remainder of the pet slices. A high degree of geometric similarity was achieved between the areas outlined by the predicted and the reference suv thresholds (Jaccard index exceeding 0.82). No significant difference was found between the gated and the free-breathing results in the same patient. In this preliminary work, we demonstrated the potential applicability of a machine learning methodology as an auxiliary tool for radiation treatment planning in nsclc. PMID:20179802
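
    A hedged, highly simplified sketch of the idea: learn a per-slice SUV threshold from uptake-distribution features with ordinary linear regression on synthetic data. The study used a more elaborate learning algorithm and real PET-derived attributes; the features, data and train/test split below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic per-slice features: [max SUV, mean SUV, 90th-percentile SUV].
n_slices = 120
features = np.column_stack([
    rng.uniform(4.0, 15.0, n_slices),     # SUVmax
    rng.uniform(1.0, 4.0, n_slices),      # SUVmean
    rng.uniform(2.0, 10.0, n_slices),     # SUV 90th percentile
])
# Synthetic 'reference' thresholds (what the oncologists would have drawn).
threshold = 0.4 * features[:, 0] + 0.2 * features[:, 2] + rng.normal(0, 0.3, n_slices)

# Train on 80 slices, evaluate on the remaining 40.
X = np.hstack([np.ones((n_slices, 1)), features])
train, test = slice(0, 80), slice(80, None)
w, *_ = np.linalg.lstsq(X[train], threshold[train], rcond=None)

pred = X[test] @ w
mae = np.mean(np.abs(pred - threshold[test]))
print(f"mean absolute error of predicted SUV threshold: {mae:.3f}")
```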

  6. A Statistical Methodology for Determination of Safety Systems Actuation Setpoints Based on Extreme Value Statistics

    Directory of Open Access Journals (Sweden)

    D. R. Novog

    2008-01-01

    Full Text Available This paper provides a novel and robust methodology for determination of nuclear reactor trip setpoints which accounts for uncertainties in input parameters and models, as well as accounting for the variations in operating states that periodically occur. Further it demonstrates that in performing best estimate and uncertainty calculations, it is critical to consider the impact of all fuel channels and instrumentation in the integration of these uncertainties in setpoint determination. This methodology is based on the concept of a true trip setpoint, which is the reactor setpoint that would be required in an ideal situation where all key inputs and plant responses were known, such that during the accident sequence a reactor shutdown will occur which just prevents the acceptance criteria from being exceeded. Since this true value cannot be established, the uncertainties in plant simulations and plant measurements as well as operational variations which lead to time changes in the true value of initial conditions must be considered. This paper presents the general concept used to determine the actuation setpoints considering the uncertainties and changes in initial conditions, and allowing for safety systems instrumentation redundancy. The results demonstrate unique statistical behavior with respect to both fuel and instrumentation uncertainties which has not previously been investigated.
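
    A toy version of the statistical idea under invented distributions: each Monte Carlo trial samples the uncertain channel overshoots and measurement errors, takes the limiting (extreme) channel, and backs out the trip setpoint that would just protect it; the recommended setpoint is then a one-sided percentile of those trial values. The acceptance limit, distributions and channel count are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
N_TRIALS, N_CHANNELS = 20_000, 10
ACCEPTANCE_LIMIT = 650.0           # limiting channel temperature, deg C (invented)

required_setpoints = np.empty(N_TRIALS)
for k in range(N_TRIALS):
    # Uncertain overshoot between trip actuation and turnaround, per channel.
    overshoot = rng.normal(35.0, 5.0, N_CHANNELS)      # deg C
    # Uncertain instrumentation error of each measurement channel.
    meas_error = rng.normal(0.0, 2.0, N_CHANNELS)      # deg C
    # 'True' setpoint for this trial: the value that just protects the
    # worst (extreme) channel given its overshoot and measurement bias.
    required_setpoints[k] = np.min(ACCEPTANCE_LIMIT - overshoot - meas_error)

# One-sided conservative choice: a setpoint low enough for 95 % of the trials.
setpoint = np.percentile(required_setpoints, 5.0)
print(f"recommended trip setpoint: {setpoint:.1f} deg C")
```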

  7. Methodology of synchronization among strategy and operation. A standards-based modeling approach

    Directory of Open Access Journals (Sweden)

    VICTOR EDWIN COLLAZOS

    2017-05-01

    Full Text Available Enterprise Architecture (EA) has gained importance in recent years, mainly for its concept of “alignment” between the strategic and operational levels of organizations. Such alignment occurs when Information Technology (IT) is applied correctly and in a timely manner, working in synergy and harmony with strategy and the operation to mutually achieve their own goals and satisfy the organizational needs. Both the strategic and operational levels have standards that help model the elements necessary to obtain the desired results. In this sense, BMM and BPMN were selected because both have the support of the OMG and they are fairly well known for modelling the strategic level and operational level, respectively. In addition, i* goal modeling can be used to reduce the gap between these two standards. This proposal may help both the high-level design of the information system and the appropriate identification of the business processes that will support it. This paper presents a methodology for aligning strategy and the operation based on standards and heuristics. We have made a classification of the elements of the models and, for some specific cases, an extension of the heuristics associated between them. This allows us to propose a methodology which uses the above-mentioned standards and combines mappings, transformations and actions to be considered in the alignment process.

  8. Time-to-event methodology improved statistical evaluation in register-based health services research.

    Science.gov (United States)

    Bluhmki, Tobias; Bramlage, Peter; Volk, Michael; Kaltheuner, Matthias; Danne, Thomas; Rathmann, Wolfgang; Beyersmann, Jan

    2017-02-01

    Complex longitudinal sampling and the observational structure of patient registers in health services research are associated with methodological challenges regarding data management and statistical evaluation. We exemplify common pitfalls and want to stimulate discussions on the design, development, and deployment of future longitudinal patient registers and register-based studies. For illustrative purposes, we use data from the prospective, observational, German DIabetes Versorgungs-Evaluation register. One aim was to explore predictors for the initiation of a basal insulin supported therapy in patients with type 2 diabetes initially prescribed glucose-lowering drugs alone. Major challenges are missing mortality information, time-dependent outcomes, delayed study entries, different follow-up times, and competing events. We show that time-to-event methodology is a valuable tool for improved statistical evaluation of register data and should be preferred to simple case-control approaches. Patient registers provide rich data sources for health services research. Analyses are accompanied by the trade-off between data availability, clinical plausibility, and statistical feasibility. The Cox proportional hazards model allows for the evaluation of the outcome-specific hazards, but prediction of outcome probabilities is compromised by missing mortality information. Copyright © 2016 Elsevier Inc. All rights reserved.
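
    To make the time-to-event point concrete, here is a small hand-rolled Kaplan-Meier estimator for time to basal insulin initiation with right-censoring; the follow-up times are invented, and the sketch deliberately ignores the competing risks and delayed entries discussed above.

```python
# (time in months, event) pairs: event=1 -> basal insulin started,
# event=0 -> censored (end of follow-up, drop-out, ...).  Invented data.
data = [(3, 1), (5, 0), (6, 1), (6, 1), (8, 0), (10, 1),
        (12, 0), (12, 1), (15, 0), (18, 1), (20, 0), (24, 0)]

event_times = sorted({t for t, e in data if e == 1})
surv = 1.0
print("time  at-risk  events  S(t)")
for t in event_times:
    at_risk = sum(1 for ti, _ in data if ti >= t)           # still under observation
    events = sum(1 for ti, e in data if ti == t and e == 1)
    surv *= 1.0 - events / at_risk                          # Kaplan-Meier step
    print(f"{t:4d}  {at_risk:7d}  {events:6d}  {surv:.3f}")
```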

  9. Assessment of bioenergy potential in Sicily: A GIS-based support methodology

    International Nuclear Information System (INIS)

    Beccali, Marco; D'Alberti, Vincenzo; Franzitta, Vincenzo; Columba, Pietro

    2009-01-01

    A Geographical Information System (GIS) supported methodology has been developed in order to assess the technical and economic potential of biomass exploitation for energy production in Sicily. The methodology was based on the use of agricultural, economic, climatic, and infrastructural data in a GIS. Data about land use, transportation facilities, urban cartography, regional territorial planning, terrain digital model, lithology, climatic types, and civil and industrial users have been stored in the GIS to define potential areas for gathering the residues coming from the pruning of olive groves, vineyards, and other agricultural crops, and to assess biomass available for energy cultivation. Further, it was possible to assess the potential of biodiesel production, supposing the cultivation of rapeseed in arable crop areas. For the biomass used for direct combustion purposes, the economic availability has been assessed assuming a price of the biomass and comparing it with other fuels. This assessment has shown the strong competitiveness of firewood in comparison with traditional fossil fuels when the collection system is implemented in an efficient way. Moreover, the economic potential of biodiesel was assessed considering the on-going financial regime for fuel. At the same time, the study has shown a significant competitiveness of the finished biomass (pellets), and good potential for a long-term development of this market. An important result was the determination of biofuel production potential in Sicily. An outcome of the study was to show the opportunities stemming from the harmonisation of Energy Policy with the Waste Management System and Rural Development Plan. (author)

  10. Active teaching-learning methodologies: medical students' views of problem-based learning

    Directory of Open Access Journals (Sweden)

    José Roberto Bittencourt Costa

    Full Text Available The prevailing undergraduate medical training process still favors disconnection and professional distancing from social needs. The Brazilian Ministries of Education and Health, through the National Curriculum Guidelines, the Incentives Program for Changes in the Medical Curriculum (PROMED, and the National Program for Reorientation of Professional Training in Health (PRO-SAÚDE, promoted the stimulus for an effective connection between medical institutions and the Unified National Health System (SUS. In accordance to the new paradigm for medical training, the Centro Universitário Serra dos Órgãos (UNIFESO established a teaching plan in 2005 using active methodologies, specifically problem-based learning (PBL. Research was conducted through semi-structured interviews with third-year undergraduate students at the UNIFESO Medical School. The results were categorized as proposed by Bardin's thematic analysis, with the purpose of verifying the students' impressions of the new curriculum. Active methodologies proved to be well-accepted by students, who defined them as exciting and inclusive of theory and practice in medical education.

  11. Vocalist - an international programme for the validation of constraint based methodology in structural integrity

    International Nuclear Information System (INIS)

    Lidbury, D.; Bass, R.; Gilles, Ph.; Connors, D.; Eisele, U.; Keim, E.; Keinanen, H.; Marie, St.; Nagel, G.; Taylor, N.; Wadier, Y.

    2001-01-01

    The pattern of crack-tip stresses and strains causing plastic flow and fracture in components is different to that in test specimens. This gives rise to the so-called constraint effect. Crack-tip constraint in components is generally lower than in test specimens. Effective toughness is correspondingly higher. The fracture toughness measured on test specimens is thus likely to underestimate that exhibited by cracks in components. A 36-month programme was initiated in October 2000 as part of the Fifth Framework of the European Atomic Energy Community (EURATOM), with the objective of achieving (i) an improved defect assessment methodology for predicting safety margins; (ii) improved lifetime management arguments. The programme VOCALIST (Validation of Constraint Based Methodology in Structural Integrity) is one of a 'cluster' of Fifth Framework projects in the area of Plant Life Management (Nuclear Fission). VOCALIST is also an associated project of NESC (Network for Evaluating Steel Components). The present paper describes the aims and objectives of VOCALIST, its interactions with NESC, and gives details of its various Work Packages. (authors)

  13. SCIRehab uses practice-based evidence methodology to associate patient and treatment characteristics with outcomes.

    Science.gov (United States)

    Whiteneck, Gale G; Gassaway, Julie

    2013-04-01

    To describe the application of practice-based evidence (PBE) methodology to spinal cord injury (SCI) rehabilitation in the SCIRehab study, and to summarize associations of patient characteristics and treatment interventions to outcomes. Prospective observational study. Six SCI rehabilitation centers. Patients with traumatic SCI (N=1376) admitted for first rehabilitation. Not applicable. FIM and residence at discharge, and FIM, residence, Craig Handicap Assessment and Reporting Technique, work/school status, Patient Health Questionnaire-9, Diener Satisfaction with Life Scale, rehospitalization, and presence of pressure ulcers at 1 year postinjury. Patient demographic and injury characteristics explained significant variation in rehabilitation outcomes, particularly functional outcomes. Regression modeling also identified a large number of significant associations with outcomes when total time in each discipline was modeled and when models were developed for each discipline, examining time spent in the many specific interventions provided by each discipline. The application of PBE methodology in the SCIRehab study provided extensive information about the process of inpatient SCI rehabilitation. While patient demographic and injury characteristics explain substantial variation in rehabilitation outcomes, particularly functional outcomes, significant relations also were found between the type and quantity of treatment interventions delivered by each rehabilitation discipline and a broad range of outcomes. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  14. Computationally based methodology for reengineering the high-level waste planning process at SRS

    International Nuclear Information System (INIS)

    Paul, P.K.; Gregory, M.V.; Wells, M.N.

    1997-01-01

    The Savannah River Site (SRS) has started processing its legacy of 34 million gallons of high-level radioactive waste into its final disposable form. The SRS high-level waste (HLW) complex consists of 51 waste storage tanks, 3 evaporators, 6 waste treatment operations, and 2 waste disposal facilities. It is estimated that processing wastes to clean up all tanks will take 30+ yr of operation. Integrating all the highly interactive facility operations through the entire life cycle in an optimal fashion, while meeting all the budgetary, regulatory, and operational constraints and priorities, is a complex and challenging planning task. The waste complex operating plan for the entire time span is periodically published as an SRS report. A computationally based integrated methodology has been developed that has streamlined the planning process while showing how to run the operations at economically and operationally optimal conditions. The integrated computational model replaced a host of disconnected spreadsheet calculations and the analysts' trial-and-error solutions using various scenario choices. This paper presents the important features of the integrated computational methodology and highlights the parameters that are core components of the planning process

  15. Evaluation methodology based on physical security assessment results: a utility theory approach

    International Nuclear Information System (INIS)

    Bennett, H.A.; Olascoaga, M.T.

    1978-03-01

    This report describes an evaluation methodology which aggregates physical security assessment results for nuclear facilities into an overall measure of adequacy. This methodology utilizes utility theory and conforms to a hierarchical structure developed by the NRC. Implementation of the methodology is illustrated by several examples. Recommendations for improvements in the evaluation process are given
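
    The aggregation idea can be sketched as a weighted utility roll-up over a small hierarchy; the area names, weights and element scores below are invented and do not reproduce the NRC hierarchical structure used in the report.

```python
# Each top-level area has a weight and a set of scored elements (0-1 utilities).
# Names, weights and scores are illustrative only.
hierarchy = {
    "physical barriers":    {"weight": 0.40, "scores": {"fences": 0.9, "vault doors": 0.8}},
    "detection/assessment": {"weight": 0.35, "scores": {"sensors": 0.7, "CCTV": 0.6}},
    "response":             {"weight": 0.25, "scores": {"guard force": 0.8, "procedures": 0.9}},
}

overall = 0.0
for area, node in hierarchy.items():
    # Utility of an area = mean utility of its elements (one simple choice).
    area_utility = sum(node["scores"].values()) / len(node["scores"])
    overall += node["weight"] * area_utility
    print(f"{area:22s} utility = {area_utility:.2f}")

print(f"overall adequacy measure = {overall:.2f}")
```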

  16. Development of a consistent Monte Carlo-deterministic transport methodology based on the method of characteristics and MCNP5

    International Nuclear Information System (INIS)

    Karriem, Z.; Ivanov, K.; Zamonsky, O.

    2011-01-01

    This paper presents work that has been performed to develop an integrated Monte Carlo-deterministic transport methodology in which the two methods make use of exactly the same general geometry and multigroup nuclear data. The envisioned application of this methodology is in reactor lattice physics methods development and shielding calculations. The methodology will be based on the Method of Long Characteristics (MOC) and the Monte Carlo N-Particle transport code MCNP5. Important initial developments pertaining to ray tracing and the development of an MOC flux solver for the proposed methodology are described. Results showing the viability of the methodology are presented for two 2-D general geometry transport problems. The essential developments presented are the use of MCNP as a geometry construction and ray tracing tool for the MOC, verification of the ray tracing indexing scheme that was developed to represent the MCNP geometry in the MOC, and verification of the prototype 2-D MOC flux solver. (author)

  17. An Evidence Based Methodology to Facilitate Public Library Non-fiction Collection Development

    Directory of Open Access Journals (Sweden)

    Matthew Kelly

    2015-12-01

    Full Text Available Objective – This research was designed as a pilot study to test a methodology for subject based collection analysis for public libraries. Methods – WorldCat collection data from eight Australian public libraries was extracted using the Collection Evaluation application. The data was aggregated and filtered to assess how the sample’s titles could be compared against the OCLC Conspectus subject categories. A hierarchy of emphasis emerged and this was divided into tiers ranging from 1% of the sample. These tiers were further analysed to quantify their representativeness against both the sample’s titles and the subject categories taken as a whole. The interpretive aspect of the study sought to understand the types of knowledge embedded in the tiers and was underpinned by hermeneutic phenomenology. Results – The study revealed that there was a marked tendency for a small percentage of subject categories to constitute a large proportion of the potential topicality that might have been represented in these types of collections. The study also found that distribution of the aggregated collection conformed to a Power Law distribution (80/20), so that approximately 80% of the collection was represented by 20% of the subject categories. The study also found that there were significant commonalities in the types of subject categories that were found in the designated tiers and that it may be possible to develop ontologies that correspond to the collection tiers. Conclusions – The evidence-based methodology developed in this pilot study has the potential for further development to help to improve the practice of collection development. The introduction of the concept of the epistemic role played by collection tiers is a promising aid to inform our understanding of knowledge organization for public libraries. The research shows a way forward to help to link subjective decision making with a scientifically based approach to managing knowledge
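
    The tiering step amounts to sorting subject categories by their share of holdings and reading off cumulative percentages; the sketch below uses invented Conspectus-style counts and example tier cut-offs.

```python
# Invented title counts per Conspectus-style subject category.
counts = {"Medicine": 5200, "Cooking": 4100, "Biography": 3900, "Gardening": 2500,
          "History of Europe": 900, "Philosophy": 400, "Chemistry": 150, "Logic": 40}

total = sum(counts.values())
cumulative = 0.0
print("category              share   cum.   tier")
for name, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    share = n / total
    cumulative += share
    tier = 1 if share >= 0.10 else 2 if share >= 0.01 else 3   # example tier cut-offs
    print(f"{name:20s} {share:6.1%} {cumulative:6.1%}   {tier}")
```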

  18. High order depletion sensitivity analysis

    International Nuclear Information System (INIS)

    Naguib, K.; Adib, M.; Morcos, H.N.

    2002-01-01

    A high-order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel due to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which all orders of perturbations were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for the sensitivity analysis. The results of the calculation show that, in the case of EK-10 fuel (low burn-up), the first-order sensitivity was found to be enough to achieve an accuracy of 1%, while in the case of MTR-20 (high burn-up) the fifth order was found to provide 3% accuracy. A computer code, SENS, was developed to provide the required calculations.

  19. Groundwater Depletion Embedded in International Food Trade

    Science.gov (United States)

    Dalin, Carole; Wada, Yoshihide; Kastner, Thomas; Puma, Michael J.

    2017-01-01

    Recent hydrological modeling and Earth observations have located and quantified alarming rates of groundwater depletion worldwide. This depletion is primarily due to water withdrawals for irrigation, but its connection with the main driver of irrigation, global food consumption, has not yet been explored. Here we show that approximately eleven per cent of non-renewable groundwater use for irrigation is embedded in international food trade, of which two-thirds are exported by Pakistan, the USA and India alone. Our quantification of groundwater depletion embedded in the world's food trade is based on a combination of global, crop-specific estimates of non-renewable groundwater abstraction and international food trade data. A vast majority of the world's population lives in countries sourcing nearly all their staple crop imports from partners who deplete groundwater to produce these crops, highlighting risks for global food and water security. Some countries, such as the USA, Mexico, Iran and China, are particularly exposed to these risks because they both produce and import food irrigated from rapidly depleting aquifers. Our results could help to improve the sustainability of global food production and groundwater resource management by identifying priority regions and agricultural products at risk as well as the end consumers of these products.
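
    The accounting step combines a per-country depletion intensity for each crop with a bilateral trade matrix; the sketch below shows the arithmetic for one crop and a few invented flows, not the study's crop-specific estimates.

```python
# Invented example: non-renewable groundwater used to grow wheat (km^3 per Mt).
depletion_intensity = {"CountryA": 0.30, "CountryB": 0.02}

# Bilateral wheat trade, exporter -> importer, in Mt (invented).
trade = {("CountryA", "CountryB"): 4.0,
         ("CountryA", "CountryC"): 1.5,
         ("CountryB", "CountryC"): 2.0}

embedded = {}
for (exporter, importer), tonnage in trade.items():
    # Depletion embedded in the flow = exporter's intensity x traded tonnage.
    embedded[(exporter, importer)] = depletion_intensity[exporter] * tonnage

for (exporter, importer), volume in embedded.items():
    print(f"{exporter} -> {importer}: {volume:.2f} km^3 of embedded depletion")
print(f"total embedded in trade: {sum(embedded.values()):.2f} km^3")
```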

  1. Towards A Model-based Prognostics Methodology for Electrolytic Capacitors: A Case Study Based on Electrical Overstress Accelerated Aging

    Directory of Open Access Journals (Sweden)

    Gautam Biswas

    2012-12-01

    Full Text Available This paper presents a model-driven methodology for predicting the remaining useful life of electrolytic capacitors. This methodology adopts a Kalman filter approach in conjunction with an empirical state-based degradation model to predict the degradation of capacitor parameters through the life of the capacitor. Electrolytic capacitors are important components of systems that range from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability and, given their critical role in the system, they are good candidates for component-level prognostics and health management. Prognostics provides a way to assess the remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. This paper proposes an empirical degradation model and discusses experimental results for an accelerated aging test performed on a set of identical capacitors subjected to electrical stress. The data form the basis for developing the Kalman-filter-based remaining life prediction algorithm.
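
    A compact sketch of the filtering idea under an assumed linear degradation model: the state holds the capacitance and its loss rate, the measurements are noisy capacitance readings, and the remaining useful life is read off by extrapolating the filtered state to an end-of-life threshold. The model structure and every number below are assumptions, not the paper's empirical model or test data.

```python
import numpy as np

rng = np.random.default_rng(5)

dt = 1.0                       # hours between capacitance measurements
C0, EOL = 100.0, 80.0          # initial capacitance (uF) and end-of-life threshold
true_rate = -0.05              # true degradation rate, uF per hour (synthetic)

# Linear state-space model: x = [capacitance, degradation rate].
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
H = np.array([[1.0, 0.0]])                  # we only measure capacitance
Q = np.diag([1e-4, 1e-6])                   # process noise covariance (assumed)
R = np.array([[0.25]])                      # measurement noise covariance (assumed)

x = np.array([C0, -0.01])                   # initial state guess
P = np.diag([1.0, 1e-2])                    # initial state covariance

for k in range(1, 201):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Measure (synthetic reading from the 'true' capacitor) and update
    z = C0 + true_rate * k * dt + rng.normal(0, 0.5)
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

rul = (EOL - x[0]) / x[1]                    # hours until threshold at the filtered rate
print(f"filtered capacitance {x[0]:.2f} uF, rate {x[1]:.4f} uF/h, RUL ~ {rul:.0f} h")
```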

  2. A G-function-based reliability-based design methodology applied to a cam roller system

    International Nuclear Information System (INIS)

    Wang, W.; Sui, P.; Wu, Y.T.

    1996-01-01

    Conventional reliability-based design optimization methods treat the reliability function as an ordinary function and apply existing mathematical programming techniques to solve the design problem. As a result, the conventional approach requires nested loops with respect to the g-function, and is very time consuming. A new reliability-based design method is proposed in this paper that deals with the g-function directly instead of the reliability function. This approach has the potential of significantly reducing the number of calls for g-function calculations since it requires only one full reliability analysis per design iteration. A cam roller system in a typical high-pressure fuel injection diesel engine is designed using both the proposed and the conventional approach. The proposed method is much more efficient for this application

  3. Construction Material-Based Methodology for Military Contingency Base Construction: Case Study of Maiduguri, Nigeria

    Science.gov (United States)

    2016-09-01

    FEMA: Federal Emergency Management Agency; FOB: Forward Operating Base; GIS: geographic information system; HSS: hollow structural section; HT: high...rods. These products include mild steel (MS) and high-tensile (HT) ribbed bars from billets, plain bars, and rebar (“Dangote...use producers are those who produce blocks strictly for private use. Some clients and contractors engage in the business of making blocks for private

  4. Methodology for reliability allocation based on fault tree analysis and dualistic contrast

    Institute of Scientific and Technical Information of China (English)

    TONG Lili; CAO Xuewu

    2008-01-01

    Reliability allocation is a difficult multi-objective optimization problem. This paper presents a methodology for reliability allocation that can be applied to determine the reliability characteristics of reactor systems or subsystems. The dualistic contrast, known as one of the most powerful tools for optimization problems, is applied to the reliability allocation model of a typical system in this article. The fault tree analysis, deemed to be one of the effective methods of reliability analysis, is also adopted. Thus a failure rate allocation model based on fault tree analysis and dualistic contrast is achieved. An application to the emergency diesel generator in a nuclear power plant is given to illustrate the proposed method.

  5. A Methodology to Detect and Update Active Deformation Areas Based on Sentinel-1 SAR Images

    Directory of Open Access Journals (Sweden)

    Anna Barra

    2017-09-01

    Full Text Available This work is focused on deformation activity mapping and monitoring using Sentinel-1 (S-1) data and the DInSAR (Differential Interferometric Synthetic Aperture Radar) technique. The main goal is to present a procedure to periodically update and assess the geohazard activity (volcanic activity, landslides and ground-subsidence) of a given area by exploiting the wide area coverage and the high coherence and temporal sampling (revisit time up to six days) provided by the S-1 satellites. The main products of the procedure are two updatable maps: the deformation activity map and the active deformation areas map. These maps present two different levels of information aimed at different levels of geohazard risk management, from a very simplified level of information to the classical deformation map based on SAR interferometry. The methodology has been successfully applied to La Gomera, Tenerife and Gran Canaria Islands (Canary Island archipelago). The main obtained results are discussed.

  6. Challenges in implementing a Planetary Boundaries based Life-Cycle Impact Assessment methodology

    DEFF Research Database (Denmark)

    Ryberg, Morten; Owsianiak, Mikolaj; Richardson, Katherine

    2016-01-01

    of resolving the challenges and developing such methodology is discussed. The challenges are related to technical issues, i.e., modelling and including the Earth System processes and their control variables as impact categories in Life-Cycle Impact Assessment and to theoretical considerations with respect...... to the interpretation and use of Life-Cycle Assessment results in accordance with the Planetary Boundary framework. The identified challenges require additional research before a Planetary Boundaries based Life-Cycle Impact Assessment method can be developed. Research on modelling the impacts on Earth System processes......Impacts on the environment from human activities are now threatening to exceed thresholds for central Earth System processes, potentially moving the Earth System out of the Holocene state. To avoid such consequences, the concept of Planetary Boundaries was defined in 2009, and updated in 2015...

  7. TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY.

    Science.gov (United States)

    Somogyi, Endre; Hagar, Amit; Glazier, James A

    2016-12-01

    Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: 1: Dynamic sets of objects participate simultaneously in multiple processes, 2: Processes may be either continuous or discrete, and their activity may be conditional, 3: Objects and processes form complex, heterogeneous relationships and structures, 4: Objects and processes may be hierarchically composed, 5: Processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models.

  8. A Probabilistic and Observation Based Methodology to Estimate Small Craft Harbor Vulnerability to Tsunami Events

    Science.gov (United States)

    Keen, A. S.; Lynett, P. J.; Ayca, A.

    2016-12-01

    Because of the damage resulting from the 2010 Chile and 2011 Japanese tele-tsunamis, the tsunami risk to the small craft marinas in California has become an important concern. The talk will outline an assessment tool which can be used to assess the tsunami hazard to small craft harbors. The methodology is based on the demand and structural capacity of the floating dock system, composed of floating docks/fingers and moored vessels. The structural demand is determined using a Monte Carlo methodology. The Monte Carlo methodology is a probabilistic computational tool in which the governing equations might be well known, but the independent variables of the input (demand) as well as the resisting structural components (capacity) may not be completely known. The Monte Carlo approach uses a distribution of each variable, and then uses that random variable within the described parameters to generate a single computation. The process then repeats hundreds or thousands of times. The numerical model "Method of Splitting Tsunamis" (MOST) has been used to determine the inputs for the small craft harbors within California. Hydrodynamic model results of current speed, direction and surface elevation were incorporated via the drag equations to provide the basis of the demand term. To determine the capacities, an inspection program was developed to identify common features of structural components. A total of six harbors have been inspected, ranging from Crescent City in Northern California to Oceanside Harbor in Southern California. Results from the inspection program were used to develop component capacity tables which incorporate the basic specifications of each component (e.g. bolt size and configuration) and a reduction factor (which accounts for the component's reduction in capacity with age) to estimate in situ capacities. Like the demand term, these capacities are added probabilistically into the model. To date the model has been applied to Santa Cruz Harbor as well as Noyo River. Once
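
    A stripped-down version of the demand-versus-capacity sampling: demand on a single mooring component is taken from the drag equation applied to a sampled current speed, and a failure is counted whenever demand exceeds the sampled capacity. The real model uses MOST current and elevation time series and inspected component capacities; the distributions and parameters below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 50_000

RHO = 1025.0                      # seawater density, kg/m^3
CD, AREA = 1.2, 8.0               # drag coefficient and exposed vessel area, m^2 (assumed)

# Sampled tsunami current speed at the berth (m/s) and cleat capacity (N).
speed = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=N)
capacity = rng.normal(loc=40_000.0, scale=6_000.0, size=N)   # includes an ageing reduction

# Demand from the drag equation: F = 0.5 * rho * Cd * A * U^2
demand = 0.5 * RHO * CD * AREA * speed ** 2

p_fail = np.mean(demand > capacity)
print(f"estimated probability of cleat failure: {p_fail:.3f}")
```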

  9. Optimization of β-cyclodextrin-based flavonol extraction from apple pomace using response surface methodology.

    Science.gov (United States)

    Parmar, Indu; Sharma, Sowmya; Rupasinghe, H P Vasantha

    2015-04-01

    The present study investigated five cyclodextrins (CDs) for the extraction of flavonols from apple pomace powder and optimized the β-CD-based extraction of total flavonols using response surface methodology. A 2(3) central composite design with β-CD concentration (0-5 g 100 mL(-1)), extraction temperature (20-72 °C) and extraction time (6-48 h), and a second-order quadratic model for the total flavonol yield (mg 100 g(-1) DM), were selected to generate the response surface curves. The optimal conditions obtained were: β-CD concentration, 2.8 g 100 mL(-1); extraction temperature, 45 °C; and extraction time, 25.6 h, which predicted the extraction of 166.6 mg total flavonols 100 g(-1) DM. The predicted amount was comparable to the experimental amount of 151.5 mg total flavonols 100 g(-1) DM obtained under the optimal β-CD-based parameters, thereby giving a low absolute error and confirming the adequacy of the fitted model. In addition, the results from the optimized extraction conditions showed values similar to those obtained through a previously established solvent-based, sonication-assisted flavonol extraction procedure. To the best of our knowledge, this is the first study to optimize aqueous β-CD-based flavonol extraction, which presents an environmentally safe method for value addition to under-utilized bioresources.
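
    A hedged sketch of the response-surface step: fit a full second-order model in the three factors to synthetic design points and locate its stationary point. The synthetic response is centred near the optimum reported above so the output is recognisable, but the data, design and code are illustrative rather than the study's actual central composite design.

```python
import numpy as np

rng = np.random.default_rng(7)

def quad_features(x):
    """Full second-order model in (beta-CD conc., temperature, time)."""
    c, T, t = x.T
    return np.column_stack([np.ones_like(c), c, T, t,
                            c * T, c * t, T * t, c ** 2, T ** 2, t ** 2])

# Synthetic design points spanning the experimental ranges of the study.
X = np.column_stack([rng.uniform(0, 5, 30),      # beta-CD, g/100 mL
                     rng.uniform(20, 72, 30),    # temperature, deg C
                     rng.uniform(6, 48, 30)])    # time, h

# Synthetic flavonol yields with a maximum near (2.8, 45, 26) -- invented response.
y = (165 - 8 * (X[:, 0] - 2.8) ** 2 - 0.02 * (X[:, 1] - 45) ** 2
     - 0.05 * (X[:, 2] - 26) ** 2 + rng.normal(0, 1.0, 30))

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# Stationary point of y = b0 + b.x + x'Bx  ->  x* = -0.5 * inv(B) @ b
b = beta[1:4]
B = np.array([[beta[7],     beta[4] / 2, beta[5] / 2],
              [beta[4] / 2, beta[8],     beta[6] / 2],
              [beta[5] / 2, beta[6] / 2, beta[9]]])
x_opt = -0.5 * np.linalg.solve(B, b)
print("predicted optimum (conc, temp, time):", np.round(x_opt, 1))
```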

  10. Improving patient care in cardiac surgery using Toyota production system based methodology.

    Science.gov (United States)

    Culig, Michael H; Kunkle, Richard F; Frndak, Diane C; Grunden, Naida; Maher, Thomas D; Magovern, George J

    2011-02-01

    A new cardiac surgery program was developed in a community hospital setting using the operational excellence (OE) method, which is based on the principles of the Toyota production system. The initial results of the first 409 heart operations, performed over the 28 months between March 1, 2008, and June 30, 2010, are presented. Operational excellence methodology was taught to the cardiac surgery team. Coaching started 2 months before the opening of the program and continued for 24 months. Of the 409 cases presented, 253 were isolated coronary artery bypass graft operations. One operative death occurred. According to the database maintained by The Society of Thoracic Surgeons, the risk-adjusted operative mortality rate was 61% lower than the regional rate. Likewise, the risk-adjusted rate of major complications was 57% lower than The Society of Thoracic Surgeons regional rate. Daily problem-solving to determine cause was attempted on 923 distinct perioperative problems by all team members. Using the cost of complications as described by Speir and coworkers, avoiding predicted complications resulted in a savings of at least $884,900 as compared with the regional average. By the systematic use of a real-time, highly formatted problem-solving methodology, processes of care improved daily. Using carefully disciplined teamwork, reliable implementation of evidence-based protocols was realized by empowering the front line to make improvements. Low rates of complications were observed, and a cost savings of $3,497 per case of isolated coronary artery bypass graft was realized. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  11. Methodological Bases for Describing Risks of the Enterprise Business Model in Integrated Reporting

    Directory of Open Access Journals (Sweden)

    Nesterenko Oksana O.

    2017-12-01

    Full Text Available The aim of the article is to substantiate the methodological bases for describing the business and accounting risks of an enterprise business model in integrated reporting for their timely detection and assessment, and to develop methods for leveling or minimizing them and, where possible, preventing them. It is proposed to consider risks in the process of forming integrated reporting from two sides: first, risks that arise in the business model of an organization and should be disclosed in its integrated report; second, accounting risks of integrated reporting, which should be taken into account by members of the cross-sectoral working group and management personnel in the process of forming and promulgating integrated reporting. To develop an adequate accounting and analytical tool for disclosure of information about the risks of the business model and integrated reporting, and for their leveling or minimization, a terminological analysis of the essence of entrepreneurial and accounting risks is carried out in the article. Entrepreneurial risk is defined as an objective-subjective economic category that characterizes the probability of negative or positive consequences of economic-social-ecological activity within the framework of the business model of an enterprise under uncertainty. Accounting risk is suggested to be understood as the probability of unfavorable consequences as a result of organizational and methodological errors in the integrated accounting system, which pose a threat to the quality, accuracy and reliability of the reporting information on economic, social and environmental activities in integrated reporting, as well as a threat of inappropriate decision-making by stakeholders based on the integrated report. For the timely identification of business risks and maximum leveling of the influence of accounting risks on the process of formation and publication of integrated reporting, the study considers the place of entrepreneurial and accounting risks in

  12. The activity-based methodology to assess ship emissions - A review.

    Science.gov (United States)

    Nunes, R A O; Alvim-Ferraz, M C M; Martins, F G; Sousa, S I V

    2017-12-01

    Several studies tried to estimate atmospheric emissions with origin in the maritime sector, concluding that it contributed to the global anthropogenic emissions through the emission of pollutants that have a strong impact on human health and also on climate change. Thus, this paper aimed to review published studies since 2010 that used activity-based methodology to estimate ship emissions, to provide a summary of the available input data. After exclusions, 26 articles were analysed and the main information was scanned and registered, namely technical information about ships, ship activity and movement information, engines, fuels, load and emission factors. Most of the studies calculating in-port ship emissions concluded that the majority was emitted during hotelling, and most of the authors allocating emissions by ship type concluded that containerships were the main pollutant emitters. To obtain technical information about ships, the combined use of data from Lloyd's Register of Shipping database with other sources such as port authorities' databases, engine manufacturers and ship-owners seemed the best approach. The use of AIS data has been growing in recent years and seems to be the best method to report activities and movements of ships. To predict ship powers, the Hollenbach (1998) method, which estimates propelling power as a function of instantaneous speed based on total resistance, and the use of load balancing schemes for multi-engine installations seemed to be the best practices for more accurate ship emission estimations. For emission factor improvement, new on-board measurement campaigns or studies should be undertaken. Regardless of the effort that has been performed in the last years to obtain more accurate shipping emission inventories, more precise input data (technical information about ships, engines, load and emission factors) should be obtained to improve the methodology to develop global and universally accepted emission inventories for an
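
    The activity-based estimate reviewed above reduces, for each activity phase, to emissions = installed power × load factor × operating time × emission factor. The sketch below illustrates that arithmetic; the ship particulars, load factor and emission factors are assumed placeholder values, not figures taken from the reviewed studies.

```python
# Illustrative activity-based emission estimate for one activity phase of one ship.
# All inputs are assumed placeholder values.

def activity_emissions(installed_kw, load_factor, hours, ef_g_per_kwh):
    """Return emissions in kg per pollutant: P x LF x t x EF."""
    energy_kwh = installed_kw * load_factor * hours
    return {pol: ef * energy_kwh / 1000.0 for pol, ef in ef_g_per_kwh.items()}

# Hotelling phase of a container ship at berth (hypothetical figures).
print(activity_emissions(installed_kw=8000, load_factor=0.4, hours=18,
                         ef_g_per_kwh={"NOx": 13.0, "SO2": 2.1, "PM": 0.4}))
```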

  13. Improving Efficiency Using Time-Driven Activity-Based Costing Methodology.

    Science.gov (United States)

    Tibor, Laura C; Schultz, Stacy R; Menaker, Ronald; Weber, Bradley D; Ness, Jay; Smith, Paula; Young, Phillip M

    2017-03-01

    The aim of this study was to increase efficiency in MR enterography using a time-driven activity-based costing methodology. In February 2015, a multidisciplinary team was formed to identify the personnel, equipment, space, and supply costs of providing outpatient MR enterography. The team mapped the current state, completed observations, performed timings, and calculated costs associated with each element of the process. The team used Pareto charts to understand the highest cost and most time-consuming activities, brainstormed opportunities, and assessed impact. Plan-do-study-act cycles were developed to test the changes, and run charts were used to monitor progress. The process changes consisted of revising the workflow associated with the preparation and administration of glucagon, with completed implementation in November 2015. The time-driven activity-based costing methodology allowed the radiology department to develop a process to more accurately identify the costs of providing MR enterography. The primary process modification was reassigning responsibility for the administration of glucagon from nurses to technologists. After implementation, the improvements demonstrated success by reducing non-value-added steps and cost by 13%, staff time by 16%, and patient process time by 17%. The saved process time was used to augment existing examination time slots to more accurately accommodate the entire enterographic examination. Anecdotal comments were captured to validate improved staff satisfaction within the multidisciplinary team. This process provided a successful outcome to address daily workflow frustrations that could not previously be improved. A multidisciplinary team was necessary to achieve success, in addition to the use of a structured problem-solving approach. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
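
    Time-driven activity-based costing rests on two quantities: a capacity cost rate (resource cost divided by practical capacity) and the time each process step consumes. The sketch below shows that calculation for a before/after workflow comparison; the rates, minutes, and the nurse-to-technologist task shift are assumptions for illustration only, not the study's actual data.

```python
# Minimal time-driven activity-based costing sketch (all rates and minutes are assumed,
# not taken from the study).

def capacity_cost_rate(total_cost, practical_capacity_minutes):
    """Cost per minute of making a resource available."""
    return total_cost / practical_capacity_minutes

def process_cost(steps):
    """steps: list of (minutes_consumed, cost_rate_per_minute)."""
    return sum(minutes * rate for minutes, rate in steps)

nurse_rate = capacity_cost_rate(total_cost=120_000, practical_capacity_minutes=90_000)
tech_rate  = capacity_cost_rate(total_cost=100_000, practical_capacity_minutes=95_000)

before = process_cost([(20, nurse_rate), (45, tech_rate)])   # nurse prepares/administers glucagon
after  = process_cost([(5, nurse_rate), (55, tech_rate)])    # task shifted to technologist
print(f"cost per exam: before={before:.2f}, after={after:.2f}")
```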

  14. Software development methodology for computer based I&C systems of prototype fast breeder reactor

    Energy Technology Data Exchange (ETDEWEB)

    Manimaran, M., E-mail: maran@igcar.gov.in; Shanmugam, A.; Parimalam, P.; Murali, N.; Satya Murty, S.A.V.

    2015-10-15

    Highlights: • Software development methodology adopted for computer based I&C systems of PFBR is detailed. • Constraints imposed as part of software requirements and coding phase are elaborated. • Compliance with safety and security requirements is described. • Usage of CASE (Computer Aided Software Engineering) tools during the software design, analysis and testing phases is explained. - Abstract: Prototype Fast Breeder Reactor (PFBR) is a sodium cooled reactor which is in the advanced stage of construction in Kalpakkam, India. Versa Module Europa bus based Real Time Computer (RTC) systems are deployed for Instrumentation & Control of PFBR. RTC systems have to perform safety functions within the stipulated time, which calls for highly dependable software. Hence, a well defined software development methodology is adopted for RTC systems starting from the requirement capture phase till the final validation of the software product. V-model is used for software development. IEC 60880 standard and AERB SG D-25 guideline are followed at each phase of software development. Requirements documents and design documents are prepared as per IEEE standards. Defensive programming strategies are followed for software development using C language. Verification and validation (V&V) of documents and software are carried out at each phase by an independent V&V committee. Computer aided software engineering tools are used for software modelling, checking for MISRA C compliance and to carry out static and dynamic analysis. Various software metrics such as cyclomatic complexity, nesting depth and comment-to-code ratio are checked. Test cases are generated using equivalence class partitioning, boundary value analysis and cause and effect graphing techniques. System integration testing is carried out wherein functional and performance requirements of the system are monitored.

  15. Software development methodology for computer based I&C systems of prototype fast breeder reactor

    International Nuclear Information System (INIS)

    Manimaran, M.; Shanmugam, A.; Parimalam, P.; Murali, N.; Satya Murty, S.A.V.

    2015-01-01

    Highlights: • Software development methodology adopted for computer based I&C systems of PFBR is detailed. • Constraints imposed as part of software requirements and coding phase are elaborated. • Compliance with safety and security requirements is described. • Usage of CASE (Computer Aided Software Engineering) tools during the software design, analysis and testing phases is explained. - Abstract: Prototype Fast Breeder Reactor (PFBR) is a sodium cooled reactor which is in the advanced stage of construction in Kalpakkam, India. Versa Module Europa bus based Real Time Computer (RTC) systems are deployed for Instrumentation & Control of PFBR. RTC systems have to perform safety functions within the stipulated time, which calls for highly dependable software. Hence, a well defined software development methodology is adopted for RTC systems starting from the requirement capture phase till the final validation of the software product. V-model is used for software development. IEC 60880 standard and AERB SG D-25 guideline are followed at each phase of software development. Requirements documents and design documents are prepared as per IEEE standards. Defensive programming strategies are followed for software development using C language. Verification and validation (V&V) of documents and software are carried out at each phase by an independent V&V committee. Computer aided software engineering tools are used for software modelling, checking for MISRA C compliance and to carry out static and dynamic analysis. Various software metrics such as cyclomatic complexity, nesting depth and comment-to-code ratio are checked. Test cases are generated using equivalence class partitioning, boundary value analysis and cause and effect graphing techniques. System integration testing is carried out wherein functional and performance requirements of the system are monitored.

  16. Research on Methodology to Prioritize Critical Digital Assets based on Nuclear Risk Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Wonjik; Kwon, Kookheui; Kim, Hyundoo [Korea Institute of Nuclear Nonproliferation and Control, Daejeon (Korea, Republic of)

    2016-10-15

    Digital systems are used in nuclear facilities to monitor and control various types of field devices, as well as to obtain and store vital information. It is therefore increasingly important for nuclear facilities to protect digital systems from cyber-attack, in terms of safe operation and public health, since cyber compromise of these systems could lead to unacceptable radiological consequences. Based on KINAC/RS-015, a cyber security regulatory standard, regulatory activities for cyber security at nuclear facilities generally focus on critical digital assets (CDAs), which are the safety, security, and emergency preparedness related digital assets. Critical digital assets are estimated to make up over 60% of all digital assets in a nuclear power plant. It was therefore necessary to prioritize critical digital assets to improve the efficiency of regulation and implementation. This paper introduces the status of research on a methodology for prioritizing critical digital assets based on nuclear risk assessment. To identify the digital assets that directly affect accidents, PRA results (event trees, fault trees, and minimal cut sets) are analyzed. According to the results of the analysis, the digital systems related to core damage are the ESF-CCS (safety-related component control system) and the Process-CCS (non-safety-related component control system), as well as the Engineered Safety Features Actuation System (ESFAS). These digital assets can be identified as Vital Digital Assets (VDAs). Hereafter, to develop a general methodology for identifying accident-related VDAs among CDAs, (1) a method using the minimal cut set results of the PRA model will be studied, and (2) a method quantifying the results of a digital I&C PRA, performed so that all digital cabinets related to the systems are reflected in the fault trees, will be studied.

  17. Research on Methodology to Prioritize Critical Digital Assets based on Nuclear Risk Assessment

    International Nuclear Information System (INIS)

    Kim, Wonjik; Kwon, Kookheui; Kim, Hyundoo

    2016-01-01

    Digital systems are used in nuclear facilities to monitor and control various types of field devices, as well as to obtain and store vital information. It is therefore increasingly important for nuclear facilities to protect digital systems from cyber-attack, in terms of safe operation and public health, since cyber compromise of these systems could lead to unacceptable radiological consequences. Based on KINAC/RS-015, a cyber security regulatory standard, regulatory activities for cyber security at nuclear facilities generally focus on critical digital assets (CDAs), which are the safety, security, and emergency preparedness related digital assets. Critical digital assets are estimated to make up over 60% of all digital assets in a nuclear power plant. It was therefore necessary to prioritize critical digital assets to improve the efficiency of regulation and implementation. This paper introduces the status of research on a methodology for prioritizing critical digital assets based on nuclear risk assessment. To identify the digital assets that directly affect accidents, PRA results (event trees, fault trees, and minimal cut sets) are analyzed. According to the results of the analysis, the digital systems related to core damage are the ESF-CCS (safety-related component control system) and the Process-CCS (non-safety-related component control system), as well as the Engineered Safety Features Actuation System (ESFAS). These digital assets can be identified as Vital Digital Assets (VDAs). Hereafter, to develop a general methodology for identifying accident-related VDAs among CDAs, (1) a method using the minimal cut set results of the PRA model will be studied, and (2) a method quantifying the results of a digital I&C PRA, performed so that all digital cabinets related to the systems are reflected in the fault trees, will be studied.

  18. The Toxicity of Depleted Uranium

    Directory of Open Access Journals (Sweden)

    Wayne Briner

    2010-01-01

    Full Text Available Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low dose exposure to depleted uranium may not produce a clear and defined set of symptoms. Chronic low-dose, or subacute, exposure to depleted uranium alters the appearance of milestones in developing organisms. Adult animals that were exposed to depleted uranium during development display persistent alterations in behavior, even after cessation of depleted uranium exposure. Adult animals exposed to depleted uranium demonstrate altered behaviors and a variety of alterations to brain chemistry. Despite its reduced level of radioactivity, evidence continues to accumulate that depleted uranium, if ingested, may pose a radiologic hazard. The current state of knowledge concerning DU is discussed.

  19. Ego depletion impairs implicit learning.

    Science.gov (United States)

    Thompson, Kelsey R; Sanchez, Daniel J; Wesley, Abigail H; Reber, Paul J

    2014-01-01

    Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning, however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of these results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent.

  20. A core-monitoring based methodology for predictions of graphite weight loss in AGR moderator bricks

    Energy Technology Data Exchange (ETDEWEB)

    McNally, K., E-mail: kevin.mcnally@hsl.gsi.gov.uk [Health and Safety Laboratory, Harpur Hill, Buxton, Derbyshire SK17 9JN (United Kingdom); Warren, N. [Health and Safety Laboratory, Harpur Hill, Buxton, Derbyshire SK17 9JN (United Kingdom); Fahad, M.; Hall, G.; Marsden, B.J. [Nuclear Graphite Research Group, School of MACE, University of Manchester, Manchester M13 9PL (United Kingdom)

    2017-04-01

    Highlights: • A statistically-based methodology for estimating graphite density is presented. • Graphite shrinkage is accounted for using a finite element model. • Differences in weight loss forecasts were found when compared to the existing model. - Abstract: Physically based models, resolved using the finite element (FE) method are often used to model changes in dimensions and the associated stress fields of graphite moderator bricks within a reactor. These models require inputs that describe the loading conditions (temperature, fluence and weight loss ‘field variables’), and coded relationships describing the behaviour of graphite under these conditions. The weight loss field variables are calculated using a reactor chemistry/physics code FEAT DIFFUSE. In this work the authors consider an alternative data source of weight loss: that from a longitudinal dataset of density measurements made on small samples trepanned from operating reactors during statutory outages. A nonlinear mixed-effect model is presented for modelling the age and depth-related trends in density. A correction that accounts for irradiation-induced dimensional changes (axial and radial shrinkage) is subsequently applied. The authors compare weight loss forecasts made using FEAT DIFFUSE with those based on an alternative statistical model for a layer four moderator brick for the Hinkley Point B, Reactor 3. The authors compare the two approaches for the weight loss distribution through the brick with a particular focus on the interstitial keyway, and for the average (over the volume of the brick) weight loss.

  1. FDTD-based optical simulations methodology for CMOS image sensors pixels architecture and process optimization

    Science.gov (United States)

    Hirigoyen, Flavien; Crocherie, Axel; Vaillant, Jérôme M.; Cazaux, Yvon

    2008-02-01

    This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors taking into account diffraction effects. Following market trends and industrialization constraints, CMOS image sensors must be easily embedded into ever smaller packages, which are now equipped with auto-focus and, in the near term, zoom systems. Due to miniaturization, the ray-tracing models used to evaluate pixel optical performance are no longer accurate enough to describe the light propagation inside the sensor, because of diffraction effects. Thus we adopt a more fundamental description to take into account these diffraction effects: we chose to use Maxwell-Boltzmann based modeling to compute the propagation of light, and to use a software with an FDTD-based (Finite Difference Time Domain) engine to solve this propagation. We present in this article the complete methodology of this modeling: on one hand incoherent plane waves are propagated to approximate a product-use diffuse-like source, on the other hand we use periodic conditions to limit the size of the simulated model and both memory and computation time. After having presented the correlation of the model with measurements we will illustrate its use in the case of the optimization of a 1.75μm pixel.
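
    As a reminder of what an FDTD engine does, the sketch below is a minimal one-dimensional finite-difference time-domain update loop in normalized units. It only illustrates the leapfrog E/H update scheme; the actual pixel simulations described above are three-dimensional, with realistic materials, sources and periodic boundary conditions.

```python
import numpy as np

# Minimal 1D FDTD (Yee) update loop illustrating the finite-difference time-domain idea;
# grid size, source and step count are arbitrary and unrelated to the pixel model above.
nz, nt = 200, 400
ez = np.zeros(nz)          # electric field samples
hy = np.zeros(nz - 1)      # magnetic field samples (staggered grid)

for t in range(nt):
    hy += np.diff(ez)                               # update H from the curl of E (normalized units)
    ez[1:-1] += np.diff(hy)                         # update E from the curl of H
    ez[nz // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source at the grid centre

print("peak |Ez| after propagation:", np.abs(ez).max())
```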

  2. An experimental strategy validated to design cost-effective culture media based on response surface methodology.

    Science.gov (United States)

    Navarrete-Bolaños, J L; Téllez-Martínez, M G; Miranda-López, R; Jiménez-Islas, H

    2017-07-03

    For any fermentation process, the production cost depends on several factors, such as the genetics of the microorganism, the process conditions, and the culture medium composition. In this work, a guideline for the design of cost-efficient culture media using a sequential approach based on response surface methodology is described. The procedure was applied to analyze and optimize a registered trademark culture medium and a base culture medium obtained as a result of the screening analysis of different culture media used to grow the same strain according to the literature. During the experiments, the procedure quantitatively identified an appropriate array of micronutrients to obtain a significant yield and find a minimum number of culture medium ingredients without limiting the process efficiency. The resultant culture medium showed an efficiency that compares favorably with the registered trademark medium at a 95% lower cost, and reduced the number of ingredients in the base culture medium by 60% without limiting the process efficiency. These results demonstrated that, aside from satisfying the qualitative requirements, an optimum quantity of each constituent is needed to obtain a cost-effective culture medium. Studying the process variables for the optimized culture medium and scaling up production at the optimal values are desirable next steps.
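
    The core of a response-surface step is fitting a second-order polynomial to the measured responses at coded factor levels and locating its stationary point. The sketch below illustrates this with synthetic data for two coded ingredient levels; it is not the authors' sequential procedure, only the underlying quadratic-model idea.

```python
import numpy as np

# Hedged sketch of the response-surface idea: fit y = b0 + b1*x1 + b2*x2 + b3*x1^2
# + b4*x2^2 + b5*x1*x2 to responses at coded medium-ingredient levels, then locate
# the stationary point. The data below are synthetic, not from the study.
rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)
y = 5 + 1.2*x1 + 0.8*x2 - 1.5*x1**2 - 1.0*x2**2 + 0.3*x1*x2 + rng.normal(0, 0.05, 30)

X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point of the fitted quadratic: solve grad(y) = 0
A = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
opt = np.linalg.solve(A, -b[1:3])
print("coded optimum (x1, x2):", opt)
```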

  3. Defatted flaxseed meal incorporated corn-rice flour blend based extruded product by response surface methodology.

    Science.gov (United States)

    Ganorkar, Pravin M; Patel, Jhanvi M; Shah, Vrushti; Rangrej, Vihang V

    2016-04-01

    Considering the evidence for the human health benefits of flaxseed and its defatted flaxseed meal (DFM), response surface methodology (RSM) based on a three-level, four-factor central composite rotatable design (CCRD) was employed for the development of a DFM-incorporated corn-rice flour blend based extruded snack. The effect of DFM fortification (7.5-20 %), moisture content of feed (14-20 %, wb), extruder barrel temperature (115-135 °C) and screw speed (300-330 RPM) on expansion ratio (ER), breaking strength (BS), overall acceptability (OAA) score and water solubility index (WSI) of extrudates was investigated. Significant regression models explained the effect of the considered variables on all responses. DFM incorporation level was found to be the most significant independent variable affecting extrudate characteristics, followed by extruder barrel temperature and then screw speed. Feed moisture content did not affect extrudate characteristics. As the DFM level increased (from 7.5 % to 20 %), ER and OAA values decreased. However, BS and WSI values were found to increase with increasing DFM level. Based on the defined criteria for numerical optimization, the combination for the production of a DFM incorporated extruded snack with the desired sensory attributes was achieved by incorporating 10 % DFM (replacing rice flour in the flour blend) and by keeping 20 % moisture content, 312 rpm screw speed and 125 °C barrel temperature.

  4. Grouting design based on characterization of the fractured rock. Presentation and demonstration of a methodology

    Energy Technology Data Exchange (ETDEWEB)

    Fransson, Aasa (SWECO Environment, Stockholm (Sweden); Chalmers Univ. of Technology, Goeteborg (Sweden))

    2008-12-15

    The design methodology presented in this document is based on an approach that considers the individual fractures. The observations and analyses made during production enable the design to adapt to the encountered conditions. The document is based on previously published material and overview flow charts are used to show the different steps. Parts of or the full methodology has been applied for a number of tunneling experiments and projects. SKB projects in the Aespoe tunnel include a pillar experiment and pre-grouting of a 70 meter long tunnel (TASQ). Further, for Hallandsas railway tunnel (Skaane south Sweden), a field pre-grouting experiment and design and post-grouting of a section of 133 meters have been made. For the Nygard railway tunnel (north of Goeteborg, Sweden), design and grouting of a section of 86 meters (pre-grouting) and 60 meters (post-grouting) have been performed. Finally, grouting work at the Tornskog tunnel (Stockholm, Sweden) included design and grouting along a 100 meter long section of one of the two tunnel tubes. Of importance to consider when doing a design and evaluating the result are: - The identification of the extent of the grouting needed based on inflow requirements and estimates of tunnel inflow before grouting. - The selection of grout and performance of grouting materials including penetration ability and length. The penetration length is important for the fan geometry design. - The ungrouted compared to the grouted and excavated rock mass conditions: estimates of tunnel inflow and (if available) measured inflows after grouting and excavation. Identify if possible explanations for deviations. For the Hallandsas, Nygard and Tornskog tunnel sections, the use of a Pareto distribution and the estimate of tunnel inflow identified a need for sealing small aperture fractures (< 50 - 100 μm) to meet the inflow requirements. The tunneling projects show that using the hydraulic aperture as a basis for selection of grout is a good

  5. Grouting design based on characterization of the fractured rock. Presentation and demonstration of a methodology

    International Nuclear Information System (INIS)

    Fransson, Aasa

    2008-12-01

    The design methodology presented in this document is based on an approach that considers the individual fractures. The observations and analyses made during production enable the design to adapt to the encountered conditions. The document is based on previously published material and overview flow charts are used to show the different steps. Parts of or the full methodology has been applied for a number of tunneling experiments and projects. SKB projects in the Aespoe tunnel include a pillar experiment and pre-grouting of a 70 meter long tunnel (TASQ). Further, for Hallandsas railway tunnel (Skaane south Sweden), a field pre-grouting experiment and design and post-grouting of a section of 133 meters have been made. For the Nygard railway tunnel (north of Goeteborg, Sweden), design and grouting of a section of 86 meters (pre-grouting) and 60 meters (post-grouting) have been performed. Finally, grouting work at the Tornskog tunnel (Stockholm, Sweden) included design and grouting along a 100 meter long section of one of the two tunnel tubes. Of importance to consider when doing a design and evaluating the result are: - The identification of the extent of the grouting needed based on inflow requirements and estimates of tunnel inflow before grouting. - The selection of grout and performance of grouting materials including penetration ability and length. The penetration length is important for the fan geometry design. - The ungrouted compared to the grouted and excavated rock mass conditions: estimates of tunnel inflow and (if available) measured inflows after grouting and excavation. Identify if possible explanations for deviations. For the Hallandsas, Nygard and Tornskog tunnel sections, the use of a Pareto distribution and the estimate of tunnel inflow identified a need for sealing small aperture fractures (< 50 - 100 μm) to meet the inflow requirements. The tunneling projects show that using the hydraulic aperture as a basis for selection of grout is a good

  6. The physical vulnerability of elements at risk: a methodology based on fluid and classical mechanics

    Science.gov (United States)

    Mazzorana, B.; Fuchs, S.; Levaggi, L.

    2012-04-01

    The impacts of the flood events that occurred in autumn 2011 in the Italian regions of Liguria and Tuscany revived the engagement of public decision makers to enhance, in synergy, flood control and land use planning. In this context, the design of efficient flood risk mitigation strategies and their subsequent implementation critically relies on a careful vulnerability analysis of both the immobile and mobile elements at risk potentially exposed to flood hazards. Based on notions from fluid and classical mechanics, we developed computation schemes enabling a dynamic vulnerability and risk analysis for a broad typological variety of elements at risk. The methodological skeleton consists of (1) hydrodynamic computation of the time-varying flood intensities resulting for each element at risk in a succession of loading configurations; (2) modelling the mechanical response of the impacted elements through static, elasto-static and dynamic analyses; (3) characterising the mechanical response through proper structural damage variables and (4) economic valuation of the expected losses as a function of the quantified damage variables. From a computational perspective we coupled the description of the hydrodynamic flow behaviour and the induced structural modifications of the elements at risk exposed. Valuation methods suitable to support a correct mapping from the value domains of the physical damage variables to the economic loss values are discussed. In this way we aim to complement, from a methodological perspective, the existing, mainly empirical, vulnerability and risk assessment approaches to refine the conceptual framework of the cost-benefit analysis. Moreover, we aim to support the design of effective flood risk mitigation strategies by diminishing the main criticalities within the systems prone to flood risk.
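
    A miniature version of steps (1) and (2) is a single loading configuration: estimate the hydrostatic plus hydrodynamic (drag-type) force on an exposed element and compare it with an assumed structural resistance. All coefficients, dimensions and the resistance value below are illustrative assumptions, not values from the paper.

```python
# Single loading configuration in miniature: flow force on a wall-like element
# compared against an assumed structural resistance. Purely illustrative numbers.

def flood_force_kN(depth_m, velocity_ms, width_m, rho=1000.0, c_d=2.0):
    """Hydrostatic + hydrodynamic force on the element, in kN."""
    hydrostatic = 0.5 * rho * 9.81 * depth_m**2 * width_m            # triangular pressure distribution
    hydrodynamic = 0.5 * rho * c_d * (depth_m * width_m) * velocity_ms**2
    return (hydrostatic + hydrodynamic) / 1000.0

resistance_kN = 180.0                      # assumed capacity of the impacted element
load = flood_force_kN(depth_m=1.2, velocity_ms=2.5, width_m=4.0)
print(f"load = {load:.1f} kN, damage expected: {load > resistance_kN}")
```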

  7. Miedema model based methodology to predict amorphous-forming-composition range in binary and ternary systems

    Energy Technology Data Exchange (ETDEWEB)

    Das, N., E-mail: nirupamd@barc.gov.in [Materials Science Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India); Mittra, J. [Materials Science Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India); Murty, B.S. [Department of Metallurgical and Materials Engineering, IIT Madras, Chennai 600 036 (India); Pabi, S.K. [Department of Metallurgical and Materials Engineering, IIT Kharagpur, Kharagpur 721 302 (India); Kulkarni, U.D.; Dey, G.K. [Materials Science Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India)

    2013-02-15

    Highlights: • A methodology was proposed to predict amorphous forming compositions (AFCs). • Chemical contribution to enthalpy of mixing ∝ enthalpy of amorphous for AFCs. • Accuracy in the prediction of AFC-range was noticed in Al-Ni-Ti system. • Mechanical alloying (MA) results of Al-Ni-Ti followed the predicted AFC-range. • Earlier MA results of Al-Ni-Ti also conformed to the predicted AFC-range. - Abstract: From the earlier works on the prediction of amorphous forming composition range (AFCR) using Miedema based model and also on mechanical alloying experiments, it has been observed that all amorphous forming compositions of a given alloy system fall within a linear band when the chemical contribution to the enthalpy of the solid solution (ΔH^ss) is plotted against the enthalpy of mixing in the amorphous phase (ΔH^amor). On the basis of this observation, a methodology has been proposed in this article to identify the AFCR of a ternary system that is likely to be more precise than what can be obtained using the ΔH^amor − ΔH^ss < 0 criterion. MA experiments on various compositions of the Al-Ni-Ti system, producing amorphous, crystalline, and mixtures of amorphous plus crystalline phases, have been carried out and the phases have been characterized using X-ray diffraction and transmission electron microscopy techniques. Data from the present MA experiments and, also, from the literature have been used to validate the proposed approach. Also, the proximity of compositions producing a mixture of amorphous and crystalline phases to the boundary of the AFCR in the Al-Ni-Ti ternary has been found useful to validate the effectiveness of the prediction.
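
    The simpler screening criterion mentioned above, ΔH^amor − ΔH^ss < 0, can be applied directly once the two Miedema-type enthalpies are known for each composition. The sketch below shows only that screening step with placeholder enthalpy values; it does not reproduce the proposed linear-band refinement or the Miedema enthalpy calculation itself.

```python
# Hedged sketch of the screening step: given Miedema-type enthalpies for a set of
# Al-Ni-Ti compositions (the values below are placeholders, not computed here), keep
# the compositions satisfying dH_amor - dH_ss < 0 as candidate amorphous formers.

compositions = {
    # composition: (dH_ss_chemical, dH_amor) in kJ/mol, assumed numbers
    "Al50Ni25Ti25": (-38.0, -42.0),
    "Al80Ni10Ti10": (-20.0, -18.0),
    "Al25Ni50Ti25": (-45.0, -52.0),
}

candidates = [c for c, (dh_ss, dh_am) in compositions.items() if dh_am - dh_ss < 0]
print("predicted amorphous-forming compositions:", candidates)
```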

  8. Towards A Model-based Prognostics Methodology for Electrolytic Capacitors: A Case Study Based on Electrical Overstress Accelerated Aging

    Data.gov (United States)

    National Aeronautics and Space Administration — A remaining useful life prediction methodology for elec- trolytic capacitors is presented. This methodology adopts a Kalman filter approach in conjunction with an...

  9. Towards A Model-Based Prognostics Methodology For Electrolytic Capacitors: A Case Study Based On Electrical Overstress Accelerated Aging

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a model-driven methodology for predict- ing the remaining useful life of electrolytic capacitors. This methodology adopts a Kalman filter...

  10. Best Practices for Mudweight Window Generation and Accuracy Assessment between Seismic Based Pore Pressure Prediction Methodologies for a Near-Salt Field in Mississippi Canyon, Gulf of Mexico

    Science.gov (United States)

    Mannon, Timothy Patrick, Jr.

    Improving well design has always been, and always will be, the primary goal in drilling operations in the oil and gas industry. Oil and gas plays are continuing to move into increasingly hostile drilling environments, including near and/or sub-salt proximities. The ability to reduce the risk and uncertainty involved in drilling operations in unconventional geologic settings starts with improving the techniques for mudweight window modeling. To address this issue, an analysis of wellbore stability and well design improvement has been conducted. This study will show a systematic approach to well design by focusing on best practices for mudweight window projection for a field in Mississippi Canyon, Gulf of Mexico. The field includes depleted reservoirs and is in close proximity to salt intrusions. Analysis of offset wells has been conducted in the interest of developing an accurate picture of the subsurface environment by making connections between depth, non-productive time (NPT) events, and mudweights used. Commonly practiced petrophysical methods of pore pressure, fracture pressure, and shear failure gradient prediction have been applied to key offset wells in order to enhance the well design for two proposed wells. For the first time in the literature, the commonly accepted seismic interval velocity based methodology and the relatively new seismic frequency based methodology for pore pressure prediction are qualitatively and quantitatively compared for accuracy. Accuracy standards will be based on the agreement of the seismic outputs with pressure data obtained while drilling and with petrophysically based pore pressure outputs for each well. The results will show significantly higher accuracy for the seismic frequency based approach in wells that were in near/sub-salt environments and higher overall accuracy for all of the wells in the study as a whole.

  11. Comprehensive Psychopathological Assessment Based on the Association for Methodology and Documentation in Psychiatry (AMDP) System: Development, Methodological Foundation, Application in Clinical Routine, and Research

    Science.gov (United States)

    Stieglitz, Rolf-Dieter; Haug, Achim; Fähndrich, Erdmann; Rösler, Michael; Trabert, Wolfgang

    2017-01-01

    The documentation of psychopathology is core to the clinical practice of the psychiatrist and clinical psychologist. However, both in initial as well as further training and specialization in their fields, this particular aspect of their work receives scanty attention only. Yet, for the past 50 years, the Association for Methodology and Documentation in Psychiatry (AMDP) System has been in existence and available as a tool to serve precisely the purpose of offering a systematic introduction to the terminology and documentation of psychopathology. The motivation for its development was based on the need for an assessment procedure for the reliable documentation of the effectiveness of newly developed psychopharmacological substances. Subsequently, the AMDP-System began to be applied in the context of investigations into a number of methodological issues in psychiatry (e.g., the frequency and specificity of particular symptoms, the comparison of rating scales). The System then became increasingly important also in clinical practice and, today, represents the most used instrument for the documentation of psychopathology in the German-speaking countries of Europe. This paper intends to offer an overview of the AMDP-System, its origins, design, and functionality. After an initial account of the history and development of the AMDP-System, the discussion will in turn focus on the System’s underlying methodological principles, the transfer of clinical skills and competencies in its practical application, and its use in research and clinical practice. Finally, potential future areas of development in relation to the AMDP-System are explored. PMID:28439242

  12. DEVELOPING FINAL COURSE MONOGRAPHS USING A TEAM-BASED LEARNING METHODOLOGY

    Directory of Open Access Journals (Sweden)

    Ani Mari Hartz

    2016-04-01

    Full Text Available This article describes an experience with the Team-Based Learning (TBL methodology in courses designed to support the planning and execution of final course monographs. It contains both professors’ and students’ perceptions, through observation and assessment. A qualitative approach using observation techniques and desk research was used in conjunction with a quantitative approach based on a questionnaire. The sample consisted of 49 students from a higher education institution, 27 of them in a Communication Course and the remaining 22 in a Business Administration course. Qualitative data analysis was performed through simple categorization with back-defined categories, while the quantitative data analysis employed descriptive statistics and cluster analysis using Minitab 17.1 software. The main findings include the identification of: three student profiles (designated as traditional, collaborative and practical; a preference for guidance and feedback from the professor rather than other students; and a need for a professor-led closing discussion when applying the TBL method. As regards the main benefits to students, they recognized that discussion in groups allowed them to realize how much they really know about the subject studied. Finally, most students seemed to like the TBL approach.

  13. Virtual Distances Methodology as Verification Technique for AACMMs with a Capacitive Sensor Based Indexed Metrology Platform

    Science.gov (United States)

    Acero, Raquel; Santolaria, Jorge; Brau, Agustin; Pueo, Marcos

    2016-01-01

    This paper presents a new verification procedure for articulated arm coordinate measuring machines (AACMMs) together with a capacitive sensor-based indexed metrology platform (IMP) based on the generation of virtual reference distances. The novelty of this procedure lies in the possibility of creating virtual points, virtual gauges and virtual distances through the indexed metrology platform’s mathematical model taking as a reference the measurements of a ball bar gauge located in a fixed position of the instrument’s working volume. The measurements are carried out with the AACMM assembled on the IMP from the six rotating positions of the platform. In this way, an unlimited number and types of reference distances could be created without the need of using a physical gauge, therefore optimizing the testing time, the number of gauge positions and the space needed in the calibration and verification procedures. Four evaluation methods are presented to assess the volumetric performance of the AACMM. The results obtained proved the suitability of the virtual distances methodology as an alternative procedure for verification of AACMMs using the indexed metrology platform. PMID:27869722
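
    A toy version of the virtual-distance idea: a reference point measured in one platform position is mapped through an ideal rotation of the indexed platform to create a virtual point, and the separation between the real and virtual points serves as a reference length without a physical gauge. The sketch below assumes a perfect 60° rotation about the platform axis; the real IMP mathematical model is considerably richer (capacitive sensor readings and calibrated homogeneous transformations).

```python
import numpy as np

# Toy virtual-distance construction: rotate a measured ball-bar sphere centre by the
# (assumed, ideal) platform indexing angle and use the real-to-virtual separation as a
# reference length. Coordinates and angle are illustrative.

def rot_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

p_measured = np.array([350.0, 120.0, 80.0])    # sphere centre seen from platform position 1 (mm)
p_virtual = rot_z(60.0) @ p_measured           # same physical point as seen from position 2
virtual_distance = np.linalg.norm(p_virtual - p_measured)
print(f"virtual reference distance: {virtual_distance:.3f} mm")
```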

  14. Methodology for selection of attributes and operating conditions for SVM-Based fault locator's

    Directory of Open Access Journals (Sweden)

    Debbie Johan Arredondo Arteaga

    2017-01-01

    Full Text Available Context: Energy distribution companies must employ strategies to meet their timely and high quality service obligations, and fault-locating techniques represent an agile alternative for restoring the electric service in power distribution, due to the size of distribution systems (generally large) and the usual interruptions in the service. However, these techniques are not robust enough and present some limitations in both computational cost and the mathematical description of the models they use. Method: This paper performs an analysis based on a Support Vector Machine for the evaluation of the proper conditions to adjust and validate a fault locator for distribution systems, so that it is possible to determine the minimum number of operating conditions that allow good performance to be achieved with low computational effort. Results: We tested the proposed methodology in a prototypical distribution circuit, located in a rural area of Colombia. This circuit has a voltage of 34.5 kV and is subdivided into 20 zones. Additionally, the characteristics of the circuit allowed us to obtain a database of 630,000 records of single-phase faults under different operating conditions. As a result, we could determine that the locator showed a performance above 98% with 200 suitably selected operating conditions. Conclusions: It is possible to improve the performance of fault locators based on Support Vector Machines. Specifically, these improvements are achieved by properly selecting optimal operating conditions and attributes, since they directly affect the performance in terms of efficiency and computational cost.
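
    The underlying classification task can be sketched as a support vector machine mapping measured electrical features to a faulted-zone label. The snippet below uses purely synthetic features and labels (so its accuracy is near chance); it only shows the model structure, not the paper's feature set or its selection of operating conditions.

```python
# Sketch of the zone-classification idea with a support vector machine; features and
# labels are synthetic stand-ins for the fault records described in the paper
# (e.g. substation voltage/current measurements -> faulted zone).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_zones = 3000, 20
X = rng.normal(size=(n, 6))                 # 6 assumed electrical features per fault record
y = rng.integers(0, n_zones, size=n)        # faulted-zone label (synthetic, no real signal)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("test accuracy on synthetic data:", clf.score(X_te, y_te))
```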

  15. Heuristic evaluation of paper-based Web pages: a simplified inspection usability methodology.

    Science.gov (United States)

    Allen, Mureen; Currie, Leanne M; Bakken, Suzanne; Patel, Vimla L; Cimino, James J

    2006-08-01

    Online medical information, when presented to clinicians, must be well-organized and intuitive to use, so that the clinicians can conduct their daily work efficiently and without error. It is essential to actively seek to produce good user interfaces that are acceptable to the user. This paper describes the methodology used to develop a simplified heuristic evaluation (HE) suitable for the evaluation of screen shots of Web pages, the development of an HE instrument used to conduct the evaluation, and the results of the evaluation of the aforementioned screen shots. In addition, this paper presents examples of the process of categorizing problems identified by the HE and the technological solutions identified to resolve these problems. Four usability experts reviewed 18 paper-based screen shots and made a total of 108 comments. Each expert completed the task in about an hour. We were able to implement solutions to approximately 70% of the violations. Our study found that a heuristic evaluation using paper-based screen shots of a user interface was expeditious, inexpensive, and straightforward to implement.

  16. Three-dimensional design methodologies for tree-based FPGA architecture

    CERN Document Server

    Pangracious, Vinod; Mehrez, Habib

    2015-01-01

    This book focuses on the development of 3D design and implementation methodologies for Tree-based FPGA architecture. It also stresses the needs for new and augmented 3D CAD tools to support designs such as, the design for 3D, to manufacture high performance 3D integrated circuits and reconfigurable FPGA-based systems. This book was written as a text that covers the foundations of 3D integrated system design and FPGA architecture design. It was written for the use in an elective or core course at the graduate level in field of Electrical Engineering, Computer Engineering and Doctoral Research programs. No previous background on 3D integration is required, nevertheless fundamental understanding of 2D CMOS VLSI design is required. It is assumed that reader has taken the core curriculum in Electrical Engineering or Computer Engineering, with courses like CMOS VLSI design, Digital System Design and Microelectronics Circuits being the most important. It is accessible for self-study by both senior students and profe...

  17. Temperature-based estimation of global solar radiation using soft computing methodologies

    Science.gov (United States)

    Mohammadi, Kasra; Shamshirband, Shahaboddin; Danesh, Amir Seyed; Abdullah, Mohd Shahidan; Zamani, Mazdak

    2016-07-01

    Precise knowledge of solar radiation is indeed essential in different technological and scientific applications of solar energy. Temperature-based estimation of global solar radiation would be appealing owing to the broad availability of measured air temperatures. In this study, the potentials of soft computing techniques are evaluated to estimate daily horizontal global solar radiation (DHGSR) from measured maximum, minimum, and average air temperatures (Tmax, Tmin, and Tavg) in an Iranian city. For this purpose, a comparative evaluation between three methodologies of adaptive neuro-fuzzy inference system (ANFIS), radial basis function support vector regression (SVR-rbf), and polynomial basis function support vector regression (SVR-poly) is performed. Five combinations of Tmax, Tmin, and Tavg are served as inputs to develop ANFIS, SVR-rbf, and SVR-poly models. The attained results show that all ANFIS, SVR-rbf, and SVR-poly models provide favorable accuracy. Based upon all techniques, the higher accuracies are achieved by models (5) using Tmax - Tmin and Tmax as inputs. According to the statistical results, SVR-rbf outperforms SVR-poly and ANFIS. For SVR-rbf (5), the mean absolute bias error, root mean square error, and correlation coefficient are 1.1931 MJ/m2, 2.0716 MJ/m2, and 0.9380, respectively. The survey results confirm that SVR-rbf can be used efficiently to estimate DHGSR from air temperatures.
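
    The best-performing model described above, SVR-rbf with Tmax - Tmin and Tmax as inputs, can be sketched as follows. The temperature and radiation arrays are placeholder values standing in for the measured station data, and the hyperparameters are arbitrary.

```python
# Hedged sketch of the SVR-rbf model (5): estimate daily global solar radiation from
# Tmax - Tmin and Tmax. Arrays below are placeholders for measured station data.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

t_max = np.array([31.0, 29.5, 35.2, 27.8, 33.1])
t_min = np.array([18.2, 17.9, 21.0, 16.5, 19.8])
dhgsr = np.array([24.1, 22.8, 27.3, 20.5, 25.9])      # MJ/m^2, assumed values

X = np.column_stack([t_max - t_min, t_max])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.2)).fit(X, dhgsr)
print(model.predict([[12.5, 32.0]]))                   # estimate for a new day
```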

  18. A new methodology based on functional principal component analysis to study postural stability post-stroke.

    Science.gov (United States)

    Sánchez-Sánchez, M Luz; Belda-Lois, Juan-Manuel; Mena-Del Horno, Silvia; Viosca-Herrero, Enrique; Igual-Camacho, Celedonia; Gisbert-Morant, Beatriz

    2018-05-05

    A major goal in stroke rehabilitation is the establishment of more effective physical therapy techniques to recover postural stability. Functional Principal Component Analysis provides greater insight into recovery trends. However, when missing values exist, obtaining functional data presents some difficulties. The purpose of this study was to reveal an alternative technique for obtaining the Functional Principal Components without requiring the conversion to functional data beforehand and to investigate this methodology to determine the effect of specific physical therapy techniques in balance recovery trends in elderly subjects with hemiplegia post-stroke. A randomized controlled pilot trial was developed. Thirty inpatients post-stroke were included. Control and target groups were treated with the same conventional physical therapy protocol based on functional criteria, but specific techniques were added to the target group depending on the subjects' functional level. Postural stability during standing was quantified by posturography. The assessments were performed once a month from the moment the participants were able to stand up to six months post-stroke. The target group showed a significant improvement in postural control recovery trend six months after stroke that was not present in the control group. Some of the assessed parameters revealed significant differences between treatment groups. The proposed approach allowed the Functional Principal Component Analysis to be performed when data is scarce. Moreover, it allowed the dynamics of recovery of two different treatment groups to be determined, showing that the techniques added in the target group increased postural stability compared to the base protocol. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. A Test Methodology for Determining Space-Readiness of Xilinx SRAM-Based FPGA Designs

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Graham, Paul S.; Morgan, Keith S.; Caffrey, Michael P.

    2008-01-01

    Using reconfigurable, static random-access memory (SRAM) based field-programmable gate arrays (FPGAs) for space-based computation has been an exciting area of research for the past decade. Since both the circuit and the circuit's state are stored in radiation-tolerant memory, both could be altered by the harsh space radiation environment. Both the circuit and the circuit's state can be protected by triple-modular redundancy (TMR), but applying TMR to FPGA user designs is often an error-prone process. Faulty application of TMR could cause the FPGA user circuit to output incorrect data. This paper will describe a three-tiered methodology for testing FPGA user designs for space-readiness. We will describe the standard approach to testing FPGA user designs using a particle accelerator, as well as two methods using fault injection and a modeling tool. While accelerator testing is the current 'gold standard' for pre-launch testing, we believe the use of fault injection and modeling tools allows for easy, cheap and uniform access for discovering errors early in the design process.
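
    Conceptually, TMR masks a single upset by voting across three copies of the protected logic. The sketch below models that bitwise 2-of-3 vote with one injected bit flip in Python; real TMR is of course inserted into the FPGA netlist (HDL), and the testing methodology above is about verifying that such insertion was done correctly.

```python
# Minimal model of triple-modular redundancy with fault injection: three copies of a
# module's output vote bitwise, so a single upset in one copy is outvoted. Purely
# illustrative; not the paper's fault-injection tooling.
import random

def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)     # bitwise 2-of-3 vote

golden = 0b1011_0110
copies = [golden, golden, golden]
copies[random.randrange(3)] ^= 1 << random.randrange(8)   # inject one bit flip (single-event upset)

voted = majority(*copies)
print("voter output correct:", voted == golden)
```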

  20. METHODOLOGICAL APPROACH TO ANALYSIS AND EVALUATION OF INFORMATION PROTECTION IN INFORMATION SYSTEMS BASED ON VULNERABILITY DANGER

    Directory of Open Access Journals (Sweden)

    Y. M. Krotiuk

    2008-01-01

    Full Text Available The paper considers a methodological approach to the analysis and estimation of information security in information systems which is based on the analysis of vulnerabilities and the extent of their hazard. By vulnerability hazard is meant the complexity of its operation as a part of an information system. The required and sufficient vulnerability operational conditions have been determined in the paper. The paper proposes a generalized model for attack realization which is used as a basis for construction of an attack realization model for the operation of a particular vulnerability. A criterion for estimation of information protection in information systems which is based on the estimation of vulnerability hazard is formulated in the paper. The proposed approach allows a quantitative estimation of the information system security to be obtained on the basis of the proposed schemes for realization of typical attacks for the distinguished classes of vulnerabilities. The methodological approach is used for choosing variants to be applied for realization of protection mechanisms in information systems as well as for estimation of information safety in operating information systems.

  1. Virtual Distances Methodology as Verification Technique for AACMMs with a Capacitive Sensor Based Indexed Metrology Platform.

    Science.gov (United States)

    Acero, Raquel; Santolaria, Jorge; Brau, Agustin; Pueo, Marcos

    2016-11-18

    This paper presents a new verification procedure for articulated arm coordinate measuring machines (AACMMs) together with a capacitive sensor-based indexed metrology platform (IMP) based on the generation of virtual reference distances. The novelty of this procedure lies in the possibility of creating virtual points, virtual gauges and virtual distances through the indexed metrology platform's mathematical model taking as a reference the measurements of a ball bar gauge located in a fixed position of the instrument's working volume. The measurements are carried out with the AACMM assembled on the IMP from the six rotating positions of the platform. In this way, an unlimited number and types of reference distances could be created without the need of using a physical gauge, therefore optimizing the testing time, the number of gauge positions and the space needed in the calibration and verification procedures. Four evaluation methods are presented to assess the volumetric performance of the AACMM. The results obtained proved the suitability of the virtual distances methodology as an alternative procedure for verification of AACMMs using the indexed metrology platform.

  2. A Platform-Based Methodology for System-Level Mixed-Signal Design

    Directory of Open Access Journals (Sweden)

    Alberto Sangiovanni-Vincentelli

    2010-01-01

    Full Text Available The complexity of today's embedded electronic systems as well as their demanding performance and reliability requirements are such that their design can no longer be tackled with ad hoc techniques while still meeting tight time to-market constraints. In this paper, we present a system level design approach for electronic circuits, utilizing the platform-based design (PBD paradigm as the natural framework for mixed-domain design formalization. In PBD, a meet-in-the-middle approach allows systematic exploration of the design space through a series of top-down mapping of system constraints onto component feasibility models in a platform library, which is based on bottom-up characterizations. In this framework, new designs can be assembled from the precharacterized library components, giving the highest priority to design reuse, correct assembly, and efficient design flow from specifications to implementation. We apply concepts from design centering to enforce robustness to modeling errors as well as process, voltage, and temperature variations, which are currently plaguing embedded system design in deep-submicron technologies. The effectiveness of our methodology is finally shown on the design of a pipeline A/D converter and two receiver front-ends for UMTS and UWB communications.

  3. 42 CFR 413.220 - Methodology for calculating the per-treatment base rate under the ESRD prospective payment system...

    Science.gov (United States)

    2010-10-01

    ... rate under the ESRD prospective payment system effective January 1, 2011. 413.220 Section 413.220...-treatment base rate under the ESRD prospective payment system effective January 1, 2011. (a) Data sources. The methodology for determining the per treatment base rate under the ESRD prospective payment system...

  4. Scalability of a Methodology for Generating Technical Trading Rules with GAPs Based on Risk-Return Adjustment and Incremental Training

    Science.gov (United States)

    de La Cal, E. A.; Fernández, E. M.; Quiroga, R.; Villar, J. R.; Sedano, J.

    In previous works a methodology was defined, based on the design of a genetic algorithm GAP and an incremental training technique adapted to the learning of series of stock market values. The GAP technique consists of a fusion of GP and GA. The GAP algorithm implements the automatic search for crisp trading rules, taking as objectives of the training both the optimization of the return obtained and the minimization of the assumed risk. Applying the proposed methodology, rules have been obtained for a period of eight years of the S&P500 index. The achieved adjustment of the return-risk relation has generated rules whose returns in the testing period are far superior to those obtained with the usual methodologies, and even clearly superior to Buy&Hold. This work proves that the proposed methodology is valid for different assets in a different market than in previous work.
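
    The two training objectives, return obtained and risk assumed, can be illustrated by evaluating a toy rule on a synthetic price series: the cumulative return of the rule's equity curve versus its maximum drawdown as a simple risk proxy. The rule, the series and the risk measure below are illustrative assumptions, not the GAP encoding or the risk adjustment actually used.

```python
# Sketch of a two-objective evaluation for a crisp trading rule: cumulative return
# versus an assumed risk proxy (maximum drawdown). Synthetic data and a toy rule.
import numpy as np

rng = np.random.default_rng(3)
prices = 100 * np.cumprod(1 + rng.normal(0.0003, 0.01, 2000))     # synthetic index-like series
signal = (prices > np.convolve(prices, np.ones(50) / 50, "same")).astype(int)  # toy moving-average rule

rets = np.diff(prices) / prices[:-1] * signal[:-1]                 # rule returns (long or flat)
equity = np.cumprod(1 + rets)
max_drawdown = np.max(1 - equity / np.maximum.accumulate(equity))
print(f"return={equity[-1] - 1:.2%}, max drawdown={max_drawdown:.2%}")
```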

  5. Building a Cultural Heritage Corridor Based on Geodesign Theory and Methodology

    Directory of Open Access Journals (Sweden)

    Yang Chen

    Full Text Available ABSTRACT: Geodesign is a type of methodology that integrates dynamic environment modeling based on GIS with planning and design in order to support relevant decision making. It has substantially changed the dominant ways of thinking in planning and design, and has solved spatial issues relating to cultural and natural resources from a new perspective. Taking the Qionglai section of the Southern Silk Road as an example, the present study implemented geodesign theory and methods to investigate the technical approach to building a cultural heritage corridor based on GIS spatial analysis and overlay analysis.Firstly, we analyzed the various data layers of the cultural and natural features in the planning region. We organized all the data based on the principle of classification, organizing it into categories such as natural, cultural, and recreational data. Therefore, we defined the theme of the Southern Silk Road as a historical cultural heritage corridor. Secondly, based on the background, the heritage corridor boundary was defined according to its natural, cultural, and administrative spatial characteristics, with the three thematic boundaries overlaid in order to define a boundary location area covering about 852 square kilometers. Next, we divided all of the resources into three categories: natural heritage resources, cultural heritage resources, and intangible heritage resources and recreational spaces. The elements which could be used to build up the cultural heritage corridor were selected by evaluation and spatial analysis. In this way, we obtained some conclusive spatial information, such as element structures, the heritage density distribution, and the heritage number distribution. Finally, within the heritage boundary, we connected the tangible and intangible heritage resources to form various kinds of linear spaces, with the aim of obtaining the spatial pattern of the heritage corridor. KEYWORDS: Geodesign, heritage corridor, heritage

  6. Progressive design methodology for complex engineering systems based on multiobjective genetic algorithms and linguistic decision making

    NARCIS (Netherlands)

    Kumar, P.; Bauer, P.

    2008-01-01

    This work focuses on a design methodology that aids in design and development of complex engineering systems. This design methodology consists of simulation, optimization and decision making. Within this work a framework is presented in which modelling, multi-objective optimization and multi

  7. A methodology for evaluation of a markup-based specification of clinical guidelines.

    Science.gov (United States)

    Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan

    2008-11-06

    We introduce a three-phase, nine-step methodology for specification of clinical guidelines (GLs) by expert physicians, clinical editors, and knowledge engineers, and for quantitative evaluation of the specification's quality. We applied this methodology to a particular framework for incremental GL structuring (mark-up) and to GLs in three clinical domains with encouraging results.

  8. A Model-Based Methodology for Simultaneous Design and Control of a Bioethanol Production Process

    DEFF Research Database (Denmark)

    Alvarado-Morales, Merlin; Abd.Hamid, Mohd-Kamaruddin; Sin, Gürkan

    2010-01-01

    The PGC methodology is used to generate more efficient separation designs in terms of energy consumption by targeting the separation task at the largest DF. Both methodologies are highlighted through the application of two case studies, a bioethanol production process and a succinic acid production...

  9. Information System Design Methodology Based on PERT/CPM Networking and Optimization Techniques.

    Science.gov (United States)

    Bose, Anindya

    The dissertation attempts to demonstrate that the program evaluation and review technique (PERT)/Critical Path Method (CPM) or some modified version thereof can be developed into an information system design methodology. The methodology utilizes PERT/CPM which isolates the basic functional units of a system and sets them in a dynamic time/cost…
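
    To make the PERT/CPM building block concrete, the following sketch performs the standard forward and backward passes over a small activity network and reports the project duration and critical path. The activity names, durations, and precedence relations are invented for illustration and are not taken from the dissertation.

```python
from collections import defaultdict

# Minimal CPM forward/backward pass on a toy activity network.
activities = {          # name: (duration, predecessors)
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

def critical_path(activities):
    # Earliest start/finish via the forward pass (dict is assumed topologically ordered).
    es, ef = {}, {}
    for name, (dur, preds) in activities.items():
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    project_end = max(ef.values())
    # Latest finish/start via the backward pass.
    successors = defaultdict(list)
    for name, (_, preds) in activities.items():
        for p in preds:
            successors[p].append(name)
    lf, ls = {}, {}
    for name in reversed(list(activities)):
        dur = activities[name][0]
        lf[name] = min((ls[s] for s in successors[name]), default=project_end)
        ls[name] = lf[name] - dur
    critical = [n for n in activities if es[n] == ls[n]]  # zero total float
    return project_end, critical

print(critical_path(activities))  # e.g. (8, ['A', 'C', 'D'])
```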

  10. RANS based CFD methodology for a real scale 217-pin wire-wrapped fuel assembly of KAERI PGSFR

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jae-Ho, E-mail: jhjeong@kaeri.re.kr [Korea Atomic Energy Research Institute, 989-111 Daedeok-daero, Yuseoung-gu, Daejeon (Korea, Republic of); Song, Min-Seop [Department of Nuclear Engineering, Seoul National University, 559 Gwanak-ro, Gwanak-gu, Seoul (Korea, Republic of); Lee, Kwi-Lim [Korea Atomic Energy Research Institute, 989-111 Daedeok-daero, Yuseoung-gu, Daejeon (Korea, Republic of)

    2017-03-15

    Highlights: • This paper presents a practical RANS based CFD methodology applicable to the real scale 217-pin wire-wrapped fuel assembly of the KAERI PGSFR. • A key point of differentiation of the RANS based CFD methodology in this study is the adoption of an innovative grid generation method using a Fortran-based in-house code with a GGI function in a general-purpose commercial CFD code, CFX. • The RANS based CFD methodology is implemented with a high resolution scheme and the SST turbulence model in the 7-pin, 37-pin, and 127-pin wire-wrapped fuel assemblies of PNC and JNC. Furthermore, the RANS based CFD methodology can be successfully extended to the real scale 217-pin wire-wrapped fuel bundles of the KAERI PGSFR. • Three-dimensional thermal-hydraulic characteristics have also been investigated briefly. - Abstract: This paper presents a practical RANS (Reynolds Averaged Navier-Stokes simulation) based CFD (Computational Fluid Dynamics) methodology applicable to the real scale 217-pin wire-wrapped fuel assembly of the KAERI (Korea Atomic Energy Research Institute) PGSFR (Prototype Gen-IV Sodium-cooled Fast Reactor). The main purpose of the current study is to support the licensing process for the KAERI PGSFR core safety and to elucidate thermal-hydraulic characteristics in a 217-pin wire-wrapped fuel assembly of the KAERI PGSFR. A key point of differentiation of the RANS based CFD methodology in this study is the adoption of an innovative grid generation method using a Fortran-based in-house code with a GGI (General Grid Interface) function in a general-purpose commercial CFD code, CFX. The innovative grid generation method with the GGI function makes it possible to simulate the real wire shape while minimizing cell skewness. The RANS based CFD methodology is implemented with a high resolution scheme for the convection term and the SST (Shear Stress Transport) turbulence model in the 7-pin, 37-pin, and 127-pin wire-wrapped fuel assemblies of PNC (Power reactor and Nuclear fuel

  11. A game-based decision support methodology for competitive systems design

    Science.gov (United States)

    Briceno, Simon Ignacio

    This dissertation describes the development of a game-based methodology that facilitates the exploration and selection of research and development (R&D) projects under uncertain competitive scenarios. The proposed method provides an approach that analyzes competitor positioning and formulates response strategies to forecast the impact of technical design choices on a project's market performance. A critical decision in the conceptual design phase of propulsion systems is the selection of the best architecture, centerline, core size, and technology portfolio. This selection can be challenging when considering evolving requirements from both the airframe manufacturing company and the airlines in the market. Furthermore, the exceedingly high cost of core architecture development and its associated risk makes this strategic architecture decision the most important one for an engine company. Traditional conceptual design processes emphasize performance and affordability as their main objectives. These areas alone, however, do not provide decision-makers with enough information as to how successful their engine will be in a competitive market. A key objective of this research is to examine how firm characteristics such as their relative differences in completing R&D projects, differences in the degree of substitutability between different project types, and first/second-mover advantages affect their product development strategies. Several quantitative methods are investigated that analyze business and engineering strategies concurrently. In particular, formulations based on the well-established mathematical field of game theory are introduced to obtain insights into the project selection problem. The use of game theory is explored in this research as a method to assist the selection process of R&D projects in the presence of imperfect market information. The proposed methodology focuses on two influential factors: the schedule uncertainty of project completion times and
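
    A minimal sketch of the game-theoretic ingredient mentioned above: two competing firms each choose one of three R&D project types, and pure-strategy Nash equilibria of the payoff matrices are enumerated. The two-firm framing and all payoff values are assumptions for illustration, not the dissertation's actual formulation.

```python
import numpy as np

# Toy 2-firm R&D project-selection game: rows are firm A's project choices,
# columns are firm B's; payoffs are illustrative market-share values.
payoff_a = np.array([[4, 1, 3],
                     [5, 2, 2],
                     [3, 0, 4]])
payoff_b = np.array([[4, 5, 2],
                     [1, 2, 3],
                     [3, 4, 4]])

def pure_nash_equilibria(pa, pb):
    """Strategy pairs where neither firm gains by unilaterally deviating."""
    equilibria = []
    for i in range(pa.shape[0]):
        for j in range(pa.shape[1]):
            best_for_a = pa[i, j] >= pa[:, j].max()   # A cannot improve given B's choice j
            best_for_b = pb[i, j] >= pb[i, :].max()   # B cannot improve given A's choice i
            if best_for_a and best_for_b:
                equilibria.append((i, j))
    return equilibria

print(pure_nash_equilibria(payoff_a, payoff_b))  # -> [(2, 2)] for these toy payoffs
```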

  12. A methodology for texture feature-based quality assessment in nucleus segmentation of histopathology image

    Directory of Open Access Journals (Sweden)

    Si Wen

    2017-01-01

    Context: Image segmentation pipelines often are sensitive to algorithm input parameters. Algorithm parameters optimized for a set of images do not necessarily produce good-quality segmentation results for other images. Even within an image, some regions may not be well segmented due to a number of factors, including multiple pieces of tissue with distinct characteristics, differences in staining of the tissue, normal versus tumor regions, and tumor heterogeneity. Evaluation of the quality of segmentation results is an important step in image analysis. It is very labor intensive to do quality assessment manually with large image datasets because a whole-slide tissue image may have hundreds of thousands of nuclei. Semi-automatic mechanisms are needed to assist researchers and application developers to detect image regions with bad segmentations efficiently. Aims: Our goal is to develop and evaluate a machine-learning-based semi-automated workflow to assess the quality of nucleus segmentation results in a large set of whole-slide tissue images. Methods: We propose a quality control methodology in which machine-learning algorithms are trained with image intensity and texture features to produce a classification model. This model is applied to image patches in a whole-slide tissue image to predict the quality of nucleus segmentation in each patch. The training step of our methodology involves the selection and labeling of regions by a pathologist in a set of images to create the training dataset. The image regions are partitioned into patches. A set of intensity and texture features is computed for each patch. A classifier is trained with the features and the labels assigned by the pathologist. At the end of this process, a classification model is generated. The classification step applies the classification model to unlabeled test images. Each test image is partitioned into patches. The classification model is applied to each patch to predict the patch
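
    The following sketch mirrors the training and classification steps described above with a generic classifier over per-patch features. The feature set, the random-forest choice, and the synthetic labels are assumptions made for illustration; the paper's actual feature extraction and model may differ.

```python
# Train a patch-level quality classifier on intensity/texture features, then
# score held-out patches, as a stand-in for pathologist-labeled training data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patches = 500
# Hypothetical per-patch features: mean intensity, intensity std, and two
# texture statistics (e.g., GLCM contrast and homogeneity).
X = rng.random((n_patches, 4))
y = (X[:, 2] + 0.3 * X[:, 0] > 0.8).astype(int)  # 1 = "bad segmentation" (synthetic label)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# At application time, every patch of a new whole-slide image would be
# featurized the same way and passed to clf.predict / clf.predict_proba.
```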

  13. Novel high-throughput cell-based hybridoma screening methodology using the Celigo Image Cytometer.

    Science.gov (United States)

    Zhang, Haohai; Chan, Leo Li-Ying; Rice, William; Kassam, Nasim; Longhi, Maria Serena; Zhao, Haitao; Robson, Simon C; Gao, Wenda; Wu, Yan

    2017-08-01

    Hybridoma screening is a critical step for antibody discovery, which necessitates prompt identification of potential clones from hundreds to thousands of hybridoma cultures against the desired immunogen. Technical issues associated with ELISA- and flow cytometry-based screening limit accuracy and diminish high-throughput capability, increasing time and cost. Conventional ELISA screening with coated antigen is also impractical for difficult-to-express hydrophobic membrane antigens or multi-chain protein complexes. Here, we demonstrate novel high-throughput screening methodology employing the Celigo Image Cytometer, which avoids nonspecific signals by contrasting antibody binding signals directly on living cells, with and without recombinant antigen expression. The image cytometry-based high-throughput screening method was optimized by detecting the binding of hybridoma supernatants to the recombinant antigen CD39 expressed on Chinese hamster ovary (CHO) cells. Next, the sensitivity of the image cytometer was demonstrated by serial dilution of purified CD39 antibody. Celigo was used to measure antibody affinities of commercial and in-house antibodies to membrane-bound CD39. This cell-based screening procedure can be completely accomplished within one day, significantly improving throughput and efficiency of hybridoma screening. Furthermore, measuring direct antibody binding to living cells eliminated both false positive and false negative hits. The image cytometry method was highly sensitive and versatile, and could detect positive antibody in supernatants at concentrations as low as ~5 ng/mL, with concurrent Kd binding affinity coefficient determination. We propose that this screening method will greatly facilitate antibody discovery and screening technologies. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Too Depleted to Try? Testing the Process Model of Ego Depletion in the Context of Unhealthy Snack Consumption.

    Science.gov (United States)

    Haynes, Ashleigh; Kemps, Eva; Moffitt, Robyn

    2016-11-01

    The process model proposes that the ego depletion effect is due to (a) an increase in motivation toward indulgence, and (b) a decrease in motivation to control behaviour following an initial act of self-control. In contrast, the reflective-impulsive model predicts that ego depletion results in behaviour that is more consistent with desires, and less consistent with motivations, rather than influencing the strength of desires and motivations. The current study sought to test these alternative accounts of the relationships between ego depletion, motivation, desire, and self-control. One hundred and fifty-six undergraduate women were randomised to complete a depleting e-crossing task or a non-depleting task, followed by a lab-based measure of snack intake, and self-report measures of motivation and desire strength. In partial support of the process model, ego depletion was related to higher intake, but only indirectly via the influence of lowered motivation. Motivation was more strongly predictive of intake for those in the non-depletion condition, providing partial support for the reflective-impulsive model. Ego depletion did not affect desire, nor did depletion moderate the effect of desire on intake, indicating that desire may be an appropriate target for reducing unhealthy behaviour across situations where self-control resources vary. © 2016 The International Association of Applied Psychology.

  15. Approaches for the Assessment of the Innovative Nuclear System of Ukraine on the Base of INPRO Methodology

    International Nuclear Information System (INIS)

    Afanas'ev, A.A.; Vlasenko, N.I.

    2007-01-01

    Approaches for the preliminary and comparative assessment of the Innovative Nuclear System (INS) of Ukraine using the INPRO methodology (IAEA TECDOC-1434), suggested for the period up to 2030, are presented in the paper; the INS must serve the comprehensive purpose of sustainable development, contribute to strengthening non-proliferation principles, and help solve energy supply problems at national and regional levels. Using the assessment results of the INS based on evolutionary designs will allow Ukraine to build the informative, methodological and technical basis for the choice of an INS based on innovative designs which could be offered for deployment in Ukraine after 2030

  16. A Novel Methodology for Improving Plant Pest Surveillance in Vineyards and Crops Using UAV-Based Hyperspectral and Spatial Data.

    Science.gov (United States)

    Vanegas, Fernando; Bratanov, Dmitry; Powell, Kevin; Weiss, John; Gonzalez, Felipe

    2018-01-17

    Recent advances in remote sensed imagery and geospatial image processing using unmanned aerial vehicles (UAVs) have enabled the rapid and ongoing development of monitoring tools for crop management and the detection/surveillance of insect pests. This paper describes a UAV remote sensing-based methodology to increase the efficiency of existing surveillance practices (human inspectors and insect traps) for detecting pest infestations (e.g., grape phylloxera in vineyards). The methodology uses a UAV integrated with advanced digital hyperspectral, multispectral, and RGB sensors. We implemented the methodology for the development of a predictive model for phylloxera detection. In this method, we explore the combination of airborne RGB, multispectral, and hyperspectral imagery with ground-based data at two separate time periods and under different levels of phylloxera infestation. We describe the technology used (the sensors, the UAV, and the flight operations), the processing workflow of the datasets from each imagery type, and the methods for combining multiple airborne with ground-based datasets. Finally, we present relevant results of correlation between the different processed datasets. The objective of this research is to develop a novel methodology for collecting, processing, analysing and integrating multispectral, hyperspectral, ground and spatial data to remote sense different variables in different applications, such as, in this case, plant pest surveillance. The development of such a methodology would provide researchers, agronomists, and UAV practitioners with reliable data collection protocols and methods to achieve faster processing techniques and integrate multiple sources of data in diverse remote sensing applications.

  17. A Novel Methodology for Improving Plant Pest Surveillance in Vineyards and Crops Using UAV-Based Hyperspectral and Spatial Data

    Science.gov (United States)

    Vanegas, Fernando; Weiss, John; Gonzalez, Felipe

    2018-01-01

    Recent advances in remote sensed imagery and geospatial image processing using unmanned aerial vehicles (UAVs) have enabled the rapid and ongoing development of monitoring tools for crop management and the detection/surveillance of insect pests. This paper describes a (UAV) remote sensing-based methodology to increase the efficiency of existing surveillance practices (human inspectors and insect traps) for detecting pest infestations (e.g., grape phylloxera in vineyards). The methodology uses a UAV integrated with advanced digital hyperspectral, multispectral, and RGB sensors. We implemented the methodology for the development of a predictive model for phylloxera detection. In this method, we explore the combination of airborne RGB, multispectral, and hyperspectral imagery with ground-based data at two separate time periods and under different levels of phylloxera infestation. We describe the technology used—the sensors, the UAV, and the flight operations—the processing workflow of the datasets from each imagery type, and the methods for combining multiple airborne with ground-based datasets. Finally, we present relevant results of correlation between the different processed datasets. The objective of this research is to develop a novel methodology for collecting, processing, analysing and integrating multispectral, hyperspectral, ground and spatial data to remote sense different variables in different applications, such as, in this case, plant pest surveillance. The development of such methodology would provide researchers, agronomists, and UAV practitioners reliable data collection protocols and methods to achieve faster processing techniques and integrate multiple sources of data in diverse remote sensing applications. PMID:29342101

  18. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. Large-scale simulations here mean simulations involving such scale variety and physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. Here, the analysis of uncertainty included in simulations is needed to reveal the sensitivity of uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to arrive at a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)' as a set of computing processes with deductive and inductive approaches, by reference to human reasoning processes. Our idea is to execute deductive and inductive simulations contrasted with deductive and inductive approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and obtaining a degree of certainty. (author)

  19. Fast Lemons and Sour Boulders: Testing Crossmodal Correspondences Using an Internet-Based Testing Methodology

    Directory of Open Access Journals (Sweden)

    Andy T. Woods

    2013-09-01

    According to a popular family of hypotheses, crossmodal matches between distinct features hold because they correspond to the same polarity on several conceptual dimensions (such as active–passive, good–bad, etc.) that can be identified using the semantic differential technique. The main problem here resides in turning this hypothesis into testable empirical predictions. In the present study, we outline a series of plausible consequences of the hypothesis and test a variety of well-established and previously untested crossmodal correspondences by means of a novel internet-based testing methodology. The results highlight that the semantic hypothesis cannot easily explain differences in the prevalence of crossmodal associations built on the same semantic pattern (fast lemons, slow prunes, sour boulders, heavy red); furthermore, the semantic hypothesis only minimally predicts what happens when the semantic dimensions and polarities that are supposed to drive such crossmodal associations are made more salient (e.g., by adding emotional cues that ought to make the good/bad dimension more salient); finally, the semantic hypothesis does not explain why reliable matches are no longer observed once intramodal dimensions with congruent connotations are presented (e.g., visually presented shapes and colour do not appear to correspond).

  20. Predicting losing and gaining river reaches in lowland New Zealand based on a statistical methodology

    Science.gov (United States)

    Yang, Jing; Zammit, Christian; Dudley, Bruce

    2017-04-01

    The phenomenon of losing and gaining in rivers normally takes place in lowland areas where there are often various, sometimes conflicting uses for water resources, e.g., agriculture, industry, recreation, and maintenance of ecosystem function. To better support water allocation decisions, it is crucial to understand the location and seasonal dynamics of these losses and gains. We present a statistical methodology to predict losing and gaining river reaches in New Zealand based on 1) information surveys with surface water and groundwater experts from regional government, 2) a collection of river/watershed characteristics, including climate, soil and hydrogeologic information, and 3) the random forests technique. The surveys on losing and gaining reaches were conducted face-to-face at 16 New Zealand regional government authorities, and climate, soil, river geometry, and hydrogeologic data from various sources were collected and compiled to represent river/watershed characteristics. The random forests technique was used to build the statistical relationship between river reach status (gain and loss) and river/watershed characteristics, and then to predict for river reaches at Strahler order one without prior losing and gaining information. Results show that the model has a classification error of around 10% for "gain" and "loss". The results will assist further research and water allocation decisions in lowland New Zealand.
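
    A brief sketch of the modelling step described above: a random-forest classifier relating reach status to watershed characteristics, evaluated with out-of-bag and cross-validation accuracy. The feature names and the synthetic data are placeholders, not the survey dataset used in the study.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
# Hypothetical river/watershed characteristics for surveyed reaches.
X = pd.DataFrame({
    "mean_annual_rainfall_mm": rng.uniform(400, 2000, n),
    "soil_drainage_class":     rng.integers(1, 6, n),
    "aquifer_transmissivity":  rng.lognormal(3, 1, n),
    "reach_slope":             rng.uniform(0.0001, 0.05, n),
})
# Synthetic target: losing reaches made more likely over transmissive aquifers.
y = np.where(X["aquifer_transmissivity"] > np.median(X["aquifer_transmissivity"]),
             "loss", "gain")

clf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=1)
clf.fit(X, y)
print("out-of-bag accuracy:", round(clf.oob_score_, 3))
print("cross-val accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
# clf.predict would then be applied to Strahler order-1 reaches lacking surveys.
```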

  1. Development of a risk monitoring system for nuclear power plants based on GO-FLOW methodology

    International Nuclear Information System (INIS)

    Yang, Jun; Yang, Ming; Yoshikawa, Hidekazu; Yang, Fangqing

    2014-01-01

    Highlights: • A method for developing Living PSA is proposed. • Living PSA is easy to update with online modification to system model file. • A risk monitoring system is designed and developed using the GO-FLOW. • The risk monitoring system is useful for plant daily operation risk management. - Abstract: The paper presents a risk monitoring system developed based on GO-FLOW methodology which is a success-oriented system reliability modeling technique for phased mission as well as time-dependent problems analysis. The risk monitoring system is designed to receive information on plant configuration changes either from equipment failures, operator interventions, or maintenance activities, then update the Living PSA model with online modification to the system GO-FLOW model file which contains all the functional modes of equipment represented by a proposed generalized GO-FLOW modeling structure, and display risk values graphically. The risk monitoring system can be used to assist safety engineers and plant operators in their maintenance management and daily operation risk management at NPPs

  2. Development of a skeletal multi-component fuel reaction mechanism based on decoupling methodology

    International Nuclear Information System (INIS)

    Mohan, Balaji; Tay, Kun Lin; Yang, Wenming; Chua, Kian Jon

    2015-01-01

    Highlights: • A compact multi-component skeletal reaction mechanism was developed. • A combined bio-diesel and PRF mechanism was proposed. • The mechanism consists of 68 species and 183 reactions. • Well validated against ignition delay times, flame speed and engine results. - Abstract: A new coupled bio-diesel surrogate and primary reference fuel (PRF) oxidation skeletal mechanism has been developed. The bio-diesel surrogate sub-mechanism consists of oxidation sub-mechanisms of methyl decanoate (MD), methyl 9-decenoate (MD9D) and n-heptane fuel components. MD and MD9D are chosen to represent the saturated and unsaturated methyl esters, respectively, in bio-diesel fuels. A reduced iso-octane oxidation sub-mechanism is then added to the bio-diesel surrogate sub-mechanism, and all the sub-mechanisms are integrated with a reduced C2-C3 mechanism, a detailed H2/CO/C1 mechanism and a reduced NOx mechanism based on the decoupling methodology. The final mechanism consists of 68 species and 183 reactions. The mechanism was well validated against shock-tube ignition delay times, laminar flame speeds and 3D engine simulations.

  3. A RISK BASED METHODOLOGY TO ASSESS THE ENERGY EFFICIENCY IMPROVEMENTS IN TRADITIONALLY CONSTRUCTED BUILDINGS

    Directory of Open Access Journals (Sweden)

    D. Herrera

    2013-07-01

    In order to achieve the CO2 reduction targets set by the Scottish government, it will be necessary to improve the energy efficiency of existing buildings. Within the total Scottish building stock, historic and traditionally constructed buildings form an important proportion, in the order of 19% (Curtis, 2010), and represent cultural, emotional and identity values that should be protected. However, retrofit interventions can be a complex operation because of the several aspects that are involved in the hygrothermal performance of traditional buildings. Moreover, all these factors interact with each other and therefore need to be analysed as a whole. Upgrading the envelope of traditional buildings may produce severe changes to moisture migration, leading to superficial or interstitial condensation and thus fabric decay and mould growth. Retrofit projects carried out in the past have failed because of misunderstanding, or the lack of expert prediction, of the potential consequences associated with the envelope's alteration. The evaluation of potential risks, prior to any alteration of the building physics in order to improve energy efficiency, is critical to avoid future damage to the wall's performance or to occupants' health and well-being. The aim of this PhD research project is to point out the most critical aspects related to the energy efficiency improvement of traditional buildings and to develop a risk-based methodology that helps owners and practitioners during the decision-making process.

  4. Development of a risk monitoring system for nuclear power plants based on GO-FLOW methodology

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jun, E-mail: youngjun51@hotmail.com [College of Nuclear Science and Technology, Harbin Engineering University, No. 145 Nantong Street, Nangang District, Harbin 150001 (China); Yang, Ming, E-mail: yangming@hrbeu.edu.cn [College of Nuclear Science and Technology, Harbin Engineering University, No. 145 Nantong Street, Nangang District, Harbin 150001 (China); Yoshikawa, Hidekazu, E-mail: yosikawa@kib.biglobe.ne.jp [Symbio Community Forum, Kyoto (Japan); Yang, Fangqing, E-mail: yfq613@163.com [China Nuclear Power Technology Research Institute, 518000 (China)

    2014-10-15

    Highlights: • A method for developing Living PSA is proposed. • Living PSA is easy to update with online modification to system model file. • A risk monitoring system is designed and developed using the GO-FLOW. • The risk monitoring system is useful for plant daily operation risk management. - Abstract: The paper presents a risk monitoring system developed based on GO-FLOW methodology which is a success-oriented system reliability modeling technique for phased mission as well as time-dependent problems analysis. The risk monitoring system is designed to receive information on plant configuration changes either from equipment failures, operator interventions, or maintenance activities, then update the Living PSA model with online modification to the system GO-FLOW model file which contains all the functional modes of equipment represented by a proposed generalized GO-FLOW modeling structure, and display risk values graphically. The risk monitoring system can be used to assist safety engineers and plant operators in their maintenance management and daily operation risk management at NPPs.

  5. Measuring resource inequalities. The concepts and methodology for an area-based Gini coefficient

    International Nuclear Information System (INIS)

    Druckman, A.; Jackson, T.

    2008-01-01

    Although inequalities in income and expenditure are relatively well researched, comparatively little attention has been paid, to date, to inequalities in resource use. This is clearly a shortcoming when it comes to developing informed policies for sustainable consumption and social justice. This paper describes an indicator of inequality in resource use called the AR-Gini. The AR-Gini is an area-based measure of resource inequality that estimates inequalities between neighbourhoods with regard to the consumption of specific consumer goods. It is also capable of estimating inequalities in the emissions resulting from resource use, such as carbon dioxide emissions from energy use, and solid waste arisings from material resource use. The indicator is designed to be used as a basis for broadening the discussion concerning 'food deserts' to inequalities in other types of resource use. By estimating the AR-Gini for a wide range of goods and services we aim to enhance our understanding of resource inequalities and their drivers, identify which resources have highest inequalities, and to explore trends in inequalities. The paper describes the concepts underlying the construction of the AR-Gini and its methodology. Its use is illustrated by pilot applications (specifically, men's and boys' clothing, carpets, refrigerators/freezers and clothes washer/driers). The results illustrate that different levels of inequality are associated with different commodities. The paper concludes with a brief discussion of some possible policy implications of the AR-Gini. (author)
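
    For readers unfamiliar with the underlying calculation, the sketch below computes a Gini coefficient over per-area consumption values, which is the arithmetic core of an AR-Gini style indicator. The neighbourhood values are invented; the actual AR-Gini is estimated from expenditure and emissions data as described above.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a 1-D array of non-negative quantities."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    # Standard formula: G = (2 * sum_i i*x_(i)) / (n * sum x) - (n + 1) / n
    return (2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum())) - (n + 1.0) / n

# Hypothetical per-household energy use (kWh) in eight neighbourhoods.
per_household_kwh = [3200, 4100, 2800, 5200, 6100, 3900, 2500, 7400]
print("AR-Gini-style coefficient:", round(gini(per_household_kwh), 3))
```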

  6. Performance-based, cost- and time-effective PCB analytical methodology

    International Nuclear Information System (INIS)

    Alvarado, J. S.

    1998-01-01

    Laboratory applications for the analysis of PCBs (polychlorinated biphenyls) in environmental matrices such as soil/sediment/sludge and oil/waste oil were evaluated for potential reduction in waste, source reduction, and alternative techniques for final determination. As a consequence, new procedures were studied for solvent substitution, miniaturization of extraction and cleanups, minimization of reagent consumption, reduction of cost per analysis, and reduction of time. These new procedures provide adequate data that meet all the performance requirements for the determination of PCBs. Use of the new procedures reduced costs for all sample preparation techniques. Time and cost were also reduced by combining the new sample preparation procedures with the power of fast gas chromatography. Separation of Aroclor 1254 was achieved in less than 6 min by using DB-1 and SPB-608 columns. With the greatly shortened run times, reproducibility can be tested quickly and consequently with low cost. With performance-based methodology, the applications presented here can be applied now, without waiting for regulatory approval

  7. Developing More Insights on Sustainable Consumption in China Based on Q Methodology

    Directory of Open Access Journals (Sweden)

    Ying Qu

    2015-10-01

    Being an important aspect of sustainable development, sustainable consumption has attracted great attention among Chinese politicians and academia, and Chinese governments have established policies that encourage sustainable consumption behaviors. However, unsustainable consumption behavior still remains predominant in China. This paper aims to classify consumers with similar traits, in terms of the characteristics of practicing sustainable consumption, into one group, so that their traits can be clearly understood and governments can establish targeted policies for different groups of consumers. Q methodology, generally used to reveal the subjectivity of human beings involved in any situation, is applied in this paper to classify Chinese consumers based on Q sample design and data collection and analysis. Next, the traits of each group are analyzed in detail and comparison analyses are conducted to identify the common and differentiating factors among the three groups. The results show that Chinese consumers can be classified into three groups: sustainable (Group 1), potential sustainable (Group 2) and unsustainable consumers (Group 3), according to their values and attitudes towards sustainable consumption. Group 1 cares for the environment and has strong environmental values; they understand sustainable consumption and its functions. Group 2 needs more enlightenment and external stimuli to motivate them to consume sustainably. Group 3 needs to be informed about and educated on sustainable consumption to enable them to change their consumption behavior from unsustainable to sustainable. Suggestions and implications for encouraging each group of consumers to engage in sustainable consumption are also provided.

  8. Measuring Effectiveness in Digital Game-Based Learning: A Methodological Review.

    Directory of Open Access Journals (Sweden)

    Anissa All

    2014-06-01

    In recent years, a growing number of studies have been conducted into the effectiveness of digital game-based learning (DGBL). Despite this growing interest, there is a lack of sound empirical evidence on the effectiveness of DGBL due to different outcome measures for assessing effectiveness, varying methods of data collection and inconclusive or difficult-to-interpret results. This has resulted in a need for an overarching methodology for assessing the effectiveness of DGBL. The present study took a first step in this direction by mapping current methods used for assessing the effectiveness of DGBL. Results showed that, currently, comparison of results across studies, and thus looking at the effectiveness of DGBL on a more general level, is problematic due to diversity in and suboptimal study designs. Variety in study design relates to three issues, namely different activities implemented in the control groups, different measures for assessing the effectiveness of DGBL, and the use of different statistical techniques for analyzing learning outcomes. Suboptimal study designs are the result of variables confounding study results. Possible confounds brought forward in this review are elements that are added to the game as part of the educational intervention (e.g., required reading, a debriefing session), instructor influences, and practice effects when using the same test pre- and post-intervention. Lastly, incomplete information on the study design impedes replication of studies and thus falsification of study results.

  9. Finite Element Based Response Surface Methodology to Optimize Segmental Tunnel Lining

    Directory of Open Access Journals (Sweden)

    A. Rastbood

    2017-04-01

    The main objective of this paper is to optimize the geometrical and engineering characteristics of concrete segments of tunnel lining using Finite Element (FE) based Response Surface Methodology (RSM). Input data for the RSM statistical analysis were obtained using FEM. In the RSM analysis, thickness (t) and elasticity modulus of concrete segments (E), tunnel height (H), horizontal to vertical stress ratio (K) and position of the key segment in the tunnel lining ring (θ) were considered as input independent variables. Maximum values of Mises and Tresca stresses and tunnel ring displacement (UMAX) were set as responses. Analysis of variance (ANOVA) was carried out to investigate the influence of each input variable on the responses. Second-order polynomial equations in terms of the influencing input variables were obtained for each response. It was found that the elasticity modulus and key segment position variables were not included in the yield stress and ring displacement equations, and only the tunnel height and stress ratio variables were included in the ring displacement equation. Finally, optimization analysis of the tunnel lining ring was performed. Due to the absence of the elasticity modulus and key segment position variables in the equations, their values were kept at the average level and the other variables were floated in the related ranges. Response parameters were set to minimum. It was concluded that, to obtain optimum values for the responses, the ring thickness and tunnel height must be near their maximum and minimum values, respectively, and the ground state must be similar to hydrostatic conditions.
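
    The sketch below illustrates the response-surface step described above: fitting a second-order polynomial to a set of design points and responses by least squares. The design variables, ranges, and the synthetic response standing in for the FE results are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
n_runs = 60
# Design variables (illustrative ranges): segment thickness t [m], tunnel
# height H [m], horizontal-to-vertical stress ratio K.
X = np.column_stack([
    rng.uniform(0.25, 0.50, n_runs),   # t
    rng.uniform(10.0, 40.0, n_runs),   # H
    rng.uniform(0.5, 2.0, n_runs),     # K
])
# Synthetic response standing in for the maximum ring displacement from FE runs.
u_max = 0.002 * X[:, 1] / X[:, 0] * (1 + (X[:, 2] - 1) ** 2) + rng.normal(0, 1e-3, n_runs)

# Second-order polynomial surface (includes interaction and squared terms).
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
rsm.fit(X, u_max)
print("R^2 of the quadratic surface:", round(rsm.score(X, u_max), 3))
# The fitted surface can then be minimized over the design ranges, e.g. on a grid.
```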

  10. METHODOLOGICAL BASES OF THE OPTIMIZATION OF ORGANIZATIONAL MANAGEMENT STRUCTURE AT IMPLEMENTING THE MAJOR CONSTRUCTION ENTERPRISE STRATEGY

    Directory of Open Access Journals (Sweden)

    Rodionova Svetlana Vladimirovna

    2015-09-01

    Planning and implementation of innovations at the microlevel of management and at higher levels is a process of implementing a portfolio of innovative projects. Project management is aimed at a goal; therefore, defining the mission and aims of implementation is of primary importance. These are part of the notion of the development strategy of an enterprise. Creating a strategy for big construction holding companies is complicated by the necessity to account for the different factors affecting each business block and subsidiary company. The authors specify an algorithm for the development and implementation of the activity strategy of a big construction enterprise. The particular importance of the correspondence of the organizational management structure to the implemented strategy is shown, and the innovative character of organizational structure change is justified. The authors offer methods to optimize the organizational management structure based on a communication approach with the use of elements of graph theory. The offered methodological provisions are tested on the example of the Russian JSC “RZhDstroy”.

  11. U.S. Natural Gas Storage Risk-Based Ranking Methodology and Results

    Energy Technology Data Exchange (ETDEWEB)

    Folga, Steve [Argonne National Lab. (ANL), Argonne, IL (United States); Portante, Edgar [Argonne National Lab. (ANL), Argonne, IL (United States); Shamsuddin, Shabbir [Argonne National Lab. (ANL), Argonne, IL (United States); Tompkins, Angeli [Argonne National Lab. (ANL), Argonne, IL (United States); Talaber, Leah [Argonne National Lab. (ANL), Argonne, IL (United States); McLamore, Mike [Argonne National Lab. (ANL), Argonne, IL (United States); Kavicky, Jim [Argonne National Lab. (ANL), Argonne, IL (United States); Conzelmann, Guenter [Argonne National Lab. (ANL), Argonne, IL (United States); Levin, Todd [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-10-01

    This report summarizes the methodology and models developed to assess the risk to energy delivery from the potential loss of underground gas storage (UGS) facilities located within the United States. The U.S. has a total of 418 existing storage fields, of which 390 are currently active. The models estimate the impacts of a disruption of each of the active UGS facilities on their owners/operators, including (1) local distribution companies (LDCs), (2) directly connected transporting pipelines and thus on the customers in downstream States, and (3) third-party entities and thus on contracted customers expecting the gas shipment. Impacts are measured across all natural gas customer classes. For the electric sector, impacts are quantified in terms of natural gas-fired electric generation capacity potentially affected from the loss of a UGS facility. For the purpose of calculating the overall supply risk, the overall consequence of the disruption of an UGS facility across all customer classes is expressed in terms of the number of expected equivalent residential customer outages per year, which combines the unit business interruption cost per customer class and the estimated number of affected natural gas customers with estimated probabilities of UGS disruptions. All models and analyses are based on publicly available data. The report presents a set of findings and recommendations in terms of data, further analyses, regulatory requirements and standards, and needs to improve gas/electric industry coordination for electric reliability.
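
    A simplified sketch of the consequence measure described above, expressing the expected impact of losing one storage facility as equivalent residential customer outages per year. All probabilities, customer counts, and equivalence weights are invented placeholders, not values from the Argonne analysis.

```python
# Hypothetical single-facility risk score: probability of disruption times the
# consequence expressed in equivalent residential customer outages.
annual_disruption_probability = 0.02   # assumed chance the facility is lost in a year

# Affected customers by class and an equivalence weight that converts each
# class's business-interruption cost into "equivalent residential customers".
affected = {"residential": 120_000, "commercial": 8_000, "industrial": 150, "electric_gen": 12}
equivalence_weight = {"residential": 1.0, "commercial": 6.0, "industrial": 250.0, "electric_gen": 4000.0}

equivalent_outages = sum(affected[c] * equivalence_weight[c] for c in affected)
expected_outages_per_year = annual_disruption_probability * equivalent_outages
print(round(expected_outages_per_year, 1))  # the score used to rank facilities
```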

  12. Application of backtracking algorithm to depletion calculations

    International Nuclear Information System (INIS)

    Wu Mingyu; Wang Shixi; Yang Yong; Zhang Qiang; Yang Jiayin

    2013-01-01

    Based on the theory of the linear chain method for analytical depletion calculations, the burnup matrix is decoupled by a divide-and-conquer strategy and linear chains with the Markov property are formed. The density, activity and decay heat of every nuclide in a chain can then be calculated from analytical solutions. Every possible reaction path of a nuclide must be considered during the linear chain construction process. To ensure calculation precision and efficiency, an algorithm is needed that can cover all the reaction paths and search the paths automatically according to the problem description and precision restrictions. Through analysis and comparison of several kinds of searching algorithms, the backtracking algorithm was selected to establish and calculate the linear chains, using the depth first search (DFS) method in the searching process, forming an algorithm which can solve the depletion problem adaptively and with high fidelity. The complexity of the solution space and time was analyzed by taking into account the depletion process and the characteristics of the backtracking algorithm. The newly developed depletion program was coupled with the Monte Carlo program MCMG-II to calculate the benchmark burnup problem of the first core of the China Experimental Fast Reactor (CEFR), and preliminary verification and validation of the program were performed. (authors)
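
    The following sketch shows the essential backtracking idea: a depth-first enumeration of linear transmutation chains, where each branch is explored, recorded when terminal or when its cumulative weight falls below a cutoff, and then unwound. The decay/reaction graph and branching fractions are toy values, not data from the paper or a nuclide library.

```python
# Toy transmutation graph: nuclide -> list of (daughter, branching fraction).
transitions = {
    "U238":  [("U239", 1.0)],
    "U239":  [("Np239", 1.0)],
    "Np239": [("Pu239", 1.0)],
    "Pu239": [("Pu240", 0.7), ("U235", 0.3)],
    "Pu240": [],
    "U235":  [],
}

def enumerate_chains(start, cutoff=1e-3):
    """Return every linear chain from `start` whose cumulative weight exceeds cutoff."""
    chains = []

    def backtrack(path, weight):
        nuclide = path[-1]
        children = transitions.get(nuclide, [])
        if not children or weight < cutoff:
            chains.append((list(path), weight))   # terminal chain: record and unwind
            return
        for daughter, fraction in children:       # try each branch, then backtrack
            path.append(daughter)
            backtrack(path, weight * fraction)
            path.pop()

    backtrack([start], 1.0)
    return chains

for chain, weight in enumerate_chains("U238"):
    print(" -> ".join(chain), f"(weight {weight:.3f})")
```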

  13. Ego Depletion Impairs Implicit Learning

    Science.gov (United States)

    Thompson, Kelsey R.; Sanchez, Daniel J.; Wesley, Abigail H.; Reber, Paul J.

    2014-01-01

    Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning, however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of these results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent. PMID:25275517

  14. Hsp90 depletion goes wild

    OpenAIRE

    Siegal, Mark L; Masel, Joanna

    2012-01-01

    Hsp90 reveals phenotypic variation in the laboratory, but is Hsp90 depletion important in the wild? Recent work from Chen and Wagner in BMC Evolutionary Biology has discovered a naturally occurring Drosophila allele that downregulates Hsp90, creating sensitivity to cryptic genetic variation. Laboratory studies suggest that the exact magnitude of Hsp90 downregulation is important. Extreme Hsp90 depletion might reactivate transposable elements and/or induce aneuploidy, in addition to r...

  15. Ego depletion impairs implicit learning.

    Directory of Open Access Journals (Sweden)

    Kelsey R Thompson

    Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning, however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of these results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent.

  16. "When the going gets tough, who keeps going?" Depletion sensitivity moderates the ego-depletion effect.

    Science.gov (United States)

    Salmon, Stefanie J; Adriaanse, Marieke A; De Vet, Emely; Fennis, Bob M; De Ridder, Denise T D

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion.

  17. Methodological Bases for Ranking the European Union Countries in Terms of Macroeconomic Security

    Directory of Open Access Journals (Sweden)

    Tymoshenko Olena V.

    2015-11-01

    The fundamental contradictions of existing methodical approaches to assessing the level of state economic security have been substantiated, and proposals for the introduction of a unified methodology for its assessment, which would be acceptable for use at the international level or for a specific cluster of countries, have been developed. Based on the conducted research it has been found that there are no unified criteria for such a classification of countries. To determine the most significant coefficients and critical values of the indicators of economic security, it is appropriate that the countries should be grouped in terms of the level of economic development proposed by the UN Commission and the IMF. Analysis of the economic security level has been conducted for the member countries of the European Union as a separate cluster of countries, using macroeconomic security indicators as an example. Based on the evaluation it has been found that the proposed list of indicators and their critical values is economically sound and built on the principles of adequacy, representativeness and comprehensiveness. In 2004 the most secure countries of the EU, corresponding to the macroeconomic security standards, were Austria, Denmark, Sweden and Finland, whereas in 2014 the percentage of absolutely secure countries decreased from 14.3 to 7.1%, and only Denmark and Sweden remained in the ranking. During the analyzed period Bulgaria and Croatia moved into the risk zone, while Estonia, Lithuania, Latvia and Romania were in the danger zone. In 2014 Ukraine, in terms of its macroeconomic security, was in a critical state, which testified to serious structural and systemic imbalances in its development.

  18. Sustainability of evidence-based healthcare: research agenda, methodological advances, and infrastructure support.

    Science.gov (United States)

    Proctor, Enola; Luke, Douglas; Calhoun, Annaliese; McMillen, Curtis; Brownson, Ross; McCrary, Stacey; Padek, Margaret

    2015-06-11

    Little is known about how well or under what conditions health innovations are sustained and their gains maintained once they are put into practice. Implementation science typically focuses on uptake by early adopters of one healthcare innovation at a time. The later-stage challenges of scaling up and sustaining evidence-supported interventions receive too little attention. This project identifies the challenges associated with sustainability research and generates recommendations for accelerating and strengthening this work. A multi-method, multi-stage approach was used: (1) identifying and recruiting experts in sustainability as participants, (2) conducting research on sustainability using concept mapping, (3) action planning during an intensive working conference of sustainability experts to expand the concept mapping quantitative results, and (4) consolidating results into a set of recommendations for research, methodological advances, and infrastructure building to advance understanding of sustainability. Participants comprised researchers, funders, and leaders in health, mental health, and public health with shared interest in the sustainability of evidence-based health care. Prompted to identify important issues for sustainability research, participants generated 91 distinct statements, for which a concept mapping process produced 11 conceptually distinct clusters. During the conference, participants built upon the concept mapping clusters to generate recommendations for sustainability research. The recommendations fell into three domains: (1) pursue high priority research questions as a unified agenda on sustainability; (2) advance methods for sustainability research; (3) advance infrastructure to support sustainability research. Implementation science needs to pursue later-stage translation research questions required for population impact. Priorities include conceptual consistency and operational clarity for measuring sustainability, developing evidence

  19. Equity portfolio optimization: A DEA based methodology applied to the Zagreb Stock Exchange

    Directory of Open Access Journals (Sweden)

    Margareta Gardijan

    2015-10-01

    Most portfolio selection strategies focus on utilizing solely market data and implicitly assume that stock markets communicate all relevant information to all market stakeholders and that these markets cannot be influenced by investor activities. However convenient, this is a limited approach, especially when applied to small and illiquid markets such as the Croatian market, where such assumptions are hardly realistic. Thus, there is a demand for including other sources of data, such as financial reports. This research poses the question of whether financial ratios, as criteria for stock selection, are of any use to Croatian investors. Financial and market data from selected publicly listed companies on the Croatian capital market are used. A two-stage portfolio selection strategy is applied, where the first stage involves selecting stocks based on their respective Data Envelopment Analysis (DEA) efficiency scores. DEA models are becoming popular in stock portfolio selection given that the methodology includes numerous models that provide great flexibility in selecting inputs and outputs, which in turn are considered as criteria for portfolio selection. Accordingly, there is much room for improvement of the currently proposed strategies for selecting portfolios. In the second stage, two portfolio-weighting strategies are applied, using equal proportions and score weighting. To show whether these strategies create outstanding out-of-sample portfolios over time, time-dependent DEA Window Analysis is applied using a reference time of one year, and portfolio returns are compared with the market portfolio for each period. It is found that the financial data are a significant indicator of the future performance of a stock and that a DEA-based portfolio strategy outperforms the market return.
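
    As a sketch of the two-stage strategy described above, the code below solves an input-oriented CCR DEA multiplier model per stock with a linear program and then forms score-proportional portfolio weights. The financial ratios used as inputs and outputs are invented; the paper's actual criteria and DEA variants may differ.

```python
import numpy as np
from scipy.optimize import linprog

# rows = stocks (DMUs); inputs to minimize (e.g., P/E, debt-to-equity),
# outputs to maximize (e.g., ROE, net margin). Values are illustrative.
inputs  = np.array([[12.0, 0.8], [25.0, 1.5], [9.0, 0.4], [18.0, 1.1]])
outputs = np.array([[0.15, 0.08], [0.10, 0.05], [0.22, 0.12], [0.13, 0.07]])

def ccr_efficiency(inputs, outputs, o):
    """Input-oriented CCR multiplier model for DMU o (maximize weighted outputs)."""
    n, m = inputs.shape
    s = outputs.shape[1]
    c = np.concatenate([-outputs[o], np.zeros(m)])             # minimize -u.y_o
    A_eq = np.concatenate([np.zeros(s), inputs[o]])[None, :]   # v.x_o = 1
    A_ub = np.hstack([outputs, -inputs])                       # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

scores = np.array([ccr_efficiency(inputs, outputs, o) for o in range(len(inputs))])
weights = scores / scores.sum()        # second stage: score-weighted portfolio
print("efficiency scores:", scores.round(3))
print("portfolio weights:", weights.round(3))
```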

  20. Dental Students' Perceived Clinical Competence in Prosthodontics: Comparison of Traditional and Problem-Based Learning Methodologies.

    Science.gov (United States)

    Montero, Javier; Dib, Abraham; Guadilla, Yasmina; Flores, Javier; Santos, Juan Antonio; Aguilar, Rosa Anaya; Gómez-Polo, Cristina

    2018-02-01

    The aim of this study was to compare the perceived competence for treating prosthodontic patients of two samples of fourth-year dental students: those educated using traditional methodologies and those educated using problem-based learning (PBL). Two cohorts of fourth-year dental students at a dental school in Spain were surveyed: the traditional methods cohort (n=46) was comprised of all students in academic years 2012 and 2013, and the PBL cohort (n=57) was comprised of all students in academic years 2014 and 2015. Students in both cohorts reported the number of prosthodontic treatments they carried out per year and their perceived level of competence in performing such treatments. The results showed that the average number of treatments performed was similar for the two cohorts, except the number of metal-based removable partial dentures was significantly higher for students in the traditional (0.8±1.0) than the PBL (0.4±0.6) cohort. The level of perceived competence to treat complete denture patients for the combined cohorts was significantly higher (7.3±1.1) than that for partial acrylic dentures (6.7±1.5) and combined dentures (5.7±1.3). Students' clinical competence in prosthodontics mainly depended on number of treatments performed as the operator as well as the assistant. Students in the traditional methods cohort considered themselves to be significantly more competent at treating patients for removable partial and fixed prostheses (7.8±1.1 and 7.6±1.1, respectively) than did students in the PBL cohort (6.4±1.5 and 6.6±1.5, respectively). Overall, however, the study found that practical experiences were more important than the teaching method used to achieve students' perceived competence.

  1. Combined methodology of optimization and life cycle inventory for a biomass gasification based BCHP system

    International Nuclear Information System (INIS)

    Wang, Jiang-Jiang; Yang, Kun; Xu, Zi-Long; Fu, Chao; Li, Li; Zhou, Zun-Kai

    2014-01-01

    Biomass gasification based building cooling, heating, and power (BCHP) systems are an effective distributed energy option to improve the utilization of biomass resources. This paper proposes a combined methodology of an optimization method and life cycle inventory (LCI) for the biomass gasification based BCHP system. The life cycle models, including biomass planting, biomass collection-storage-transportation, BCHP plant construction and operation, and BCHP plant demolition and recycling, are constructed to obtain the economic cost, energy consumption and CO2 emission over the whole service life. Then, the optimization model for the biomass BCHP system, including variables, objective function and solution method, is presented. Finally, a biomass BCHP case in Harbin, China, is optimized under different optimization objectives, the life-cycle performances including cost, energy and CO2 emission are obtained, and the grey incidence approach is employed to evaluate the comprehensive performance of the biomass BCHP schemes. The results indicate that the life-cycle cost, energy efficiency and CO2 emission of the biomass BCHP system are about 41.9 $/MWh, 41% and 59.60 kg/MWh, respectively. The optimized biomass BCHP configuration that minimizes the life-cycle cost is the best scheme for achieving comprehensive benefit including cost, energy consumption, renewable energy ratio, steel consumption, and CO2 emission. - Highlights: • Propose the combined method of optimization and LCI for biomass BCHP system. • Optimize the biomass BCHP system to minimize the life-cycle cost, energy and emission. • Obtain the optimized life-cycle cost, energy efficiency and CO2 emission. • Select the best biomass BCHP scheme using grey incidence approach
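
    As a rough illustration of how stage-wise life cycle inventory results of the kind described above can be rolled up into per-MWh indicators, the sketch below aggregates hypothetical stage data; the stage names follow the abstract, but every number is an invented placeholder, not data from the paper.

```python
stages = {
    # stage: (cost in $, primary energy use in MWh, CO2 in kg) -- hypothetical values
    "biomass planting":                      (150_000.0,  2_000.0,  40_000.0),
    "collection-storage-transportation":     ( 80_000.0,  1_500.0,  35_000.0),
    "BCHP plant construction and operation": (520_000.0, 39_000.0, 980_000.0),
    "demolition and recycling":              ( 30_000.0,  1_000.0,  15_000.0),
}
delivered_energy_mwh = 18_000.0   # cooling + heating + power delivered over the service life

total_cost, total_energy, total_co2 = map(sum, zip(*stages.values()))
print(f"life-cycle cost   : {total_cost / delivered_energy_mwh:.1f} $/MWh")
print(f"energy efficiency : {delivered_energy_mwh / total_energy:.1%}")
print(f"CO2 emission      : {total_co2 / delivered_energy_mwh:.2f} kg/MWh")
```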

  2. A Performance-Based Technology Assessment Methodology to Support DoD Acquisition

    National Research Council Canada - National Science Library

    Mahafza, Sherry; Componation, Paul; Tippett, Donald

    2005-01-01

    .... This methodology is referred to as Technology Performance Risk Index (TPRI). The TPRI can track technology readiness through a life cycle, or it can be used at a specific time to support a particular system milestone decision...

  3. A Capacitance-Based Methodology for the Estimation of Piezoelectric Coefficients of Poled Piezoelectric Materials

    KAUST Repository

    Al Ahmad, Mahmoud; Alshareef, Husam N.

    2010-01-01

    A methodology is proposed to estimate the piezoelectric coefficients of bulk piezoelectric materials using simple capacitance measurements. The extracted values of d33 and d31 from the capacitance measurements were 506 pC/N and 247 p

  4. Model-based Organization Manning, Strategy, and Structure Design via Team Optimal Design (TOD) Methodology

    National Research Council Canada - National Science Library

    Levchuk, Georgiy; Chopra, Kari; Paley, Michael; Levchuk, Yuri; Clark, David

    2005-01-01

    This paper describes a quantitative Team Optimal Design (TOD) methodology and its application to the design of optimized manning for E-10 Multi-sensor Command and Control Aircraft. The E-10 (USAF, 2002...

  5. Event based uncertainty assessment in urban drainage modelling, applying the GLUE methodology

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Beven, K.J.; Jensen, Jacob Birk

    2008-01-01

    In the present paper an uncertainty analysis of an application of the commercial urban drainage model MOUSE is conducted. Applying the Generalized Likelihood Uncertainty Estimation (GLUE) methodology, the model is conditioned on observation time series from two flow gauges as well as the occurrence of combined sewer overflow. The GLUE methodology is used to test different conceptual setups in order to determine if one model setup gives a better goodness of fit conditional on the observations than the other. Moreover, different methodological investigations of GLUE are conducted in order to test if the uncertainty analysis is unambiguous. It is shown that the GLUE methodology is very applicable in uncertainty analysis of this application of an urban drainage model, although it was shown to be quite difficult to get good fits of the whole time series.
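
    The sketch below illustrates the generic GLUE workflow referred to above: Monte Carlo sampling of parameters, a likelihood measure, a behavioural threshold, and prediction bounds. A toy stand-in model and the Nash-Sutcliffe efficiency are assumed; neither the MOUSE model nor the paper's actual likelihood measures are reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
t_axis = np.linspace(0, 6, 200)
observed = np.sin(t_axis) + 1.5               # stand-in for an observed flow series

def simulate(a, b):
    """Toy 'model': amplitude a and offset b play the role of the drainage-model parameters."""
    return a * np.sin(t_axis) + b

def nash_sutcliffe(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# 1) Monte Carlo sampling of the parameter space from uniform priors.
samples = rng.uniform([0.2, 0.5], [2.0, 2.5], size=(5000, 2))
likelihoods = np.array([nash_sutcliffe(simulate(a, b), observed) for a, b in samples])

# 2) Split into behavioural / non-behavioural parameter sets with a likelihood threshold.
behavioural = likelihoods > 0.7

# 3) Prediction bounds from the behavioural simulations (simple 5-95% percentiles here;
#    full GLUE uses likelihood-weighted quantiles).
sims = np.array([simulate(a, b) for a, b in samples[behavioural]])
lower, upper = np.percentile(sims, [5, 95], axis=0)
print(behavioural.sum(), "behavioural parameter sets;",
      f"mean width of the 5-95% band: {(upper - lower).mean():.3f}")
```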

  6. Generalized perturbation theory for LWR depletion analysis and core design applications

    International Nuclear Information System (INIS)

    White, J.R.; Frank, B.R.

    1986-01-01

    A comprehensive time-dependent perturbation theory formulation that includes macroscopic depletion, thermal-hydraulic and poison feedback effects, and a criticality reset mechanism is developed. The methodology is compatible with most current LWR design codes. This new development allows GPT/DPT methods to be used quantitatively in a variety of realistic LWR physics applications that were not possible prior to this work. A GPT-based optimization technique for in-core fuel management analyses is addressed as a promising application of the new formulation

  7. Application of a new methodology to evaluate Dnb limits based on statistical propagation of uncertainties

    International Nuclear Information System (INIS)

    Machado, Marcio Dornellas

    1998-09-01

    One of the most important thermal-hydraulic safety parameters is the DNBR (Departure from Nucleate Boiling Ratio). The current methodology in use at Eletronuclear to determine the DNBR is extremely conservative and may result in penalties to the reactor power due to an increased plugging level of steam generator tubes. This work uses a new methodology to evaluate the DNBR, named mini-RTDP. The standard methodology (STDP) currently in use establishes a design limit value which cannot be surpassed. This limit value is determined taking into account the uncertainties of the empirical correlation used in the COBRA IIIC/MIT code, modified for Angra 1 conditions. The correlation used is Westinghouse's W-3, and the minimum DNBR (MDNBR) value cannot be less than 1.3. The new methodology reduces the excessive level of conservatism associated with the parameters used in the DNBR calculation, which take their most unfavorable values in the STDP methodology, by using their best estimate values. The final goal is to obtain a new DNBR design limit which will provide a margin gain due to the more realistic parameter values used in the methodology. (author)
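
    A minimal sketch of the statistical-propagation idea behind a mini-RTDP-style methodology is given below: uncertain inputs are sampled at best-estimate distributions, propagated through a DNBR evaluation, and a design limit is read from the resulting distribution. The toy dnbr() response and all distributions are assumptions for illustration, not the W-3 correlation or the subchannel model used in the work.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 20_000

# Hypothetical uncertain inputs (relative to nominal), sampled at best-estimate distributions.
power      = rng.normal(1.00, 0.02, N)   # core power measurement uncertainty
flow       = rng.normal(1.00, 0.03, N)   # primary flow uncertainty
inlet_temp = rng.normal(1.00, 0.01, N)   # inlet temperature uncertainty
corr_error = rng.normal(1.00, 0.06, N)   # CHF correlation scatter

def dnbr(power, flow, inlet_temp, corr_error, nominal=2.0):
    """Toy response: DNBR falls with power and inlet temperature, rises with flow."""
    return nominal * corr_error * flow**0.8 / (power**1.2 * inlet_temp**2.0)

samples = dnbr(power, flow, inlet_temp, corr_error)

# 95%-type limit: the value exceeded in 95% of the sampled cases (a simple non-parametric
# quantile; a formal 95/95 tolerance-limit factor would be applied in practice).
design_limit = np.quantile(samples, 0.05)
print(f"statistical DNBR design limit ~ {design_limit:.3f} (deterministic limit: 1.30)")
```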

  8. A Methodology for Equitable Performance Assessment and Presentation of Wave Energy Converters Based on Sea Trials

    DEFF Research Database (Denmark)

    Kofoed, Jens Peter; Pecher, Arthur; Margheritini, Lucia

    2013-01-01

    This paper provides a methodology for the analysis and presentation of data obtained from sea trials of wave energy converters (WEC). The equitable aspect of this methodology lies in its wide application, as any WEC at any scale or stage of development can be considered as long as the tests are performed at sea. Sea trials, however, often lead to testing campaigns that are not as extensive as desired. Therefore, the performance analysis should be robust enough to allow for not fully complete sea trials and sub-optimal performance data; in other words, this methodology is focused on retrieving the maximum amount of useful information out of the available data. How different parameters influence the performance of the WEC can also be investigated using this methodology.

  9. Transient Treg depletion enhances therapeutic anti‐cancer vaccination

    Science.gov (United States)

    Aston, Wayne J.; Chee, Jonathan; Khong, Andrea; Cleaver, Amanda L.; Solin, Jessica N.; Ma, Shaokang; Lesterhuis, W. Joost; Dick, Ian; Holt, Robert A.; Creaney, Jenette; Boon, Louis; Robinson, Bruce; Lake, Richard A.

    2016-01-01

    Abstract Introduction Regulatory T cells (Treg) play an important role in suppressing anti-tumor immunity, and their depletion has been linked to improved outcomes. To better understand the role of Treg in limiting the efficacy of anti-cancer immunity, we used a diphtheria toxin (DTX) transgenic mouse model to specifically target and deplete Treg. Methods Tumor-bearing BALB/c FoxP3.dtr transgenic mice were subjected to different treatment protocols, with or without Treg depletion, and tumor growth and survival were monitored. Results DTX specifically depleted Treg in a transient, dose-dependent manner. Treg depletion correlated with delayed tumor growth, increased effector T cell (Teff) activation, and enhanced survival in a range of solid tumors. Tumor regression was dependent on Teffs, as depletion of both CD4 and CD8 T cells completely abrogated any survival benefit. Severe morbidity following Treg depletion was only observed when consecutive doses of DTX were given during peak CD8 T cell activation, demonstrating that Treg can be depleted on multiple occasions, but only when CD8 T cell activation has returned to baseline levels. Finally, we show that even minimal Treg depletion is sufficient to significantly improve the efficacy of tumor-peptide vaccination. Conclusions BALB/c.FoxP3.dtr mice are an ideal model to investigate the full therapeutic potential of Treg depletion to boost anti-tumor immunity. DTX-mediated Treg depletion is transient, dose-dependent, and leads to strong anti-tumor immunity and complete tumor regression at high doses, while enhancing the efficacy of tumor-specific vaccination at low doses. Together these data highlight the importance of Treg manipulation as a useful strategy for enhancing current and future cancer immunotherapies. PMID:28250921

  10. [Acute tryptophan depletion in eating disorders].

    Science.gov (United States)

    Díaz-Marsa, M; Lozano, C; Herranz, A S; Asensio-Vegas, M J; Martín, O; Revert, L; Saiz-Ruiz, J; Carrasco, J L

    2006-01-01

    This work describes the rational bases justifying the use of the acute tryptophan depletion technique in eating disorders (ED) and the methods and design used in our studies. The tryptophan depletion technique has been described and used safely in previous studies and makes it possible to evaluate brain serotonin activity. It is therefore used in the investigation of hypotheses on serotonergic deficiency in eating disorders. Furthermore, given the relationship of dysfunctions of serotonin activity with impulsive symptoms, the technique may be useful in the biological differentiation of the different ED subtypes, that is, restrictive and bulimic. Fifty-seven female patients with DSM-IV eating disorders and 20 female controls were investigated with the tryptophan depletion test. A tryptophan-free amino acid solution was administered orally after a two-day low-tryptophan diet to patients and controls. Free plasma tryptophan was measured at two and five hours following administration of the drink. Eating and emotional responses were measured with specific scales for five hours following the depletion. A study of the basic personality characteristics and impulsivity traits was also carried out. The relationship of the response to the test with the different clinical subtypes and with the temperamental and impulsive characteristics of the patients was studied. The test was effective in considerably reducing plasma tryptophan from baseline levels (76%) within five hours in the global sample. The test was well tolerated and no severe adverse effects were reported. Two patients withdrew from the test due to gastric intolerance. The tryptophan depletion test could be of value for studying the involvement of serotonin deficits in the symptomatology and pathophysiology of eating disorders.

  11. Ego Depletion in Real-Time: An Examination of the Sequential-Task Paradigm.

    Science.gov (United States)

    Arber, Madeleine M; Ireland, Michael J; Feger, Roy; Marrington, Jessica; Tehan, Joshua; Tehan, Gerald

    2017-01-01

    Research into self-control that is based on the sequential-task methodology is currently at an impasse. The sequential-task methodology involves completing a task that is designed to tax self-control resources, which in turn has carry-over effects on a second, unrelated task. The current impasse is in large part due to the lack of empirical research that tests explicit assumptions regarding the initial task. Five studies test one key, untested assumption underpinning strength (finite resource) models of self-regulation: performance will decline over time on a task that depletes self-regulatory resources. In the aftermath of high-profile replication failures using a popular letter-crossing task and subsequent criticisms of that task, the current studies examined whether depletion effects would occur in real time using letter-crossing tasks that did not invoke habit forming and breaking, and whether these effects were moderated by administration type (paper and pencil vs. computer administration). Sample makeup and sizes as well as response formats were also varied across the studies. The five studies yielded a clear and consistent pattern of increasing performance deficits (errors) as a function of time spent on task, with generally large effects, and in the fifth study the strength of negative transfer effects to a working memory task was related to individual differences in depletion. These results demonstrate that some form of depletion is occurring on letter-crossing tasks, though whether an internal regulatory resource reservoir or some other factor is changing across time remains an important question for future research.

  12. Ego Depletion in Real-Time: An Examination of the Sequential-Task Paradigm

    Directory of Open Access Journals (Sweden)

    Madeleine M. Arber

    2017-09-01

    Full Text Available Research into self-control that is based on the sequential task methodology is currently at an impasse. The sequential task methodology involves completing a task that is designed to tax self-control resources, which in turn has carry-over effects on a second, unrelated task. The current impasse is in large part due to the lack of empirical research that tests explicit assumptions regarding the initial task. Five studies test one key, untested assumption underpinning strength (finite resource) models of self-regulation: performance will decline over time on a task that depletes self-regulatory resources. In the aftermath of high-profile replication failures using a popular letter-crossing task and subsequent criticisms of that task, the current studies examined whether depletion effects would occur in real time using letter-crossing tasks that did not invoke habit forming and breaking, and whether these effects were moderated by administration type (paper and pencil vs. computer administration). Sample makeup and sizes as well as response formats were also varied across the studies. The five studies yielded a clear and consistent pattern of increasing performance deficits (errors) as a function of time spent on task, with generally large effects, and in the fifth study the strength of negative transfer effects to a working memory task was related to individual differences in depletion. These results demonstrate that some form of depletion is occurring on letter-crossing tasks, though whether an internal regulatory resource reservoir or some other factor is changing across time remains an important question for future research.

  13. The enhancements and testing for the MCNPX depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; Hendricks, J. S.; Anghaie, S.

    2008-01-01

    Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model true system physics and better track the evolution of temporal nuclide inventory by simulating the actual physical process. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte Carlo-linked depletion capability in a single Monte Carlo code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permits in order to achieve a highly accurate temporal nuclide inventory solution. We describe here the depletion methodology dating from the original linking of MONTEBURNS and MCNP to the first public release of the integrated capability (MCNPX 2.6.B, June 2006) that has been reported previously. Then we further detail the many new depletion capability enhancements since then leading to the present capability. The H.B. Robinson benchmark calculation results are also reported. The new MCNPX depletion capability enhancements include: (1) allowing the modeling of as large a system as computer memory capacity permits; (2) tracking every fission product available in ENDF/B-VII.0; (3) enabling depletion in repeated structures geometries such as repeated arrays of fuel pins; (4) including metastable isotopes in burnup; and (5) manually changing the concentrations of key isotopes during different time steps to simulate changing reactor control conditions such as dilution of poisons to maintain criticality during burnup. These enhancements allow better detail to model the true system physics and also improve the robustness of the capability. The H.B. Robinson benchmark calculation was completed in order to determine the accuracy of the depletion solution. Temporal nuclide computations of key actinide and fission products are compared to the results of other
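
    The point-depletion problem that such Monte Carlo-linked codes solve at every burnup step can be written as dN/dt = A N, with A collecting decay constants and flux-weighted one-group reaction rates. The sketch below solves a hypothetical three-nuclide chain with a matrix exponential; the chain and all numerical values are illustrative only and unrelated to the CINDER90 data.

```python
import numpy as np
from scipy.linalg import expm

SECONDS_PER_DAY = 86_400.0
phi = 3.0e13                    # one-group flux, n/cm^2/s (hypothetical)
barn = 1.0e-24                  # cm^2

# Toy chain: nuclide 0 captures into nuclide 1, which decays into stable nuclide 2.
sigma_c0 = 5.0 * barn                              # capture cross section of nuclide 0
lam1 = np.log(2) / (2.35 * SECONDS_PER_DAY)        # nuclide 1 half-life: 2.35 d

A = np.array([
    [-sigma_c0 * phi,   0.0, 0.0],
    [ sigma_c0 * phi, -lam1, 0.0],
    [ 0.0,             lam1, 0.0],
])

N0 = np.array([1.0e21, 0.0, 0.0])                  # initial number densities, 1/cm^3
for days in (10, 30, 100):
    N = expm(A * days * SECONDS_PER_DAY) @ N0      # matrix-exponential solution of dN/dt = A N
    print(f"t = {days:3d} d  ->  N = {N}")
```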

  14. Depleted uranium plasma reduction system study

    International Nuclear Information System (INIS)

    Rekemeyer, P.; Feizollahi, F.; Quapp, W.J.; Brown, B.W.

    1994-12-01

    A system life-cycle cost study was conducted of a preliminary design concept for a plasma reduction process for converting depleted uranium to uranium metal and anhydrous HF. The plasma-based process is expected to offer significant economic and environmental advantages over present technology. Depleted uranium is currently stored in the form of solid UF6, of which approximately 575,000 metric tons is stored at three locations in the U.S. The proposed system is preconceptual in nature, but includes all necessary processing equipment and facilities to perform the process. The study has identified a total processing cost of approximately $3.00/kg of UF6 processed. Based on the results of this study, the development of a laboratory-scale system (1 kg/h throughput of UF6) is warranted. Further scaling of the process to pilot scale will be determined after laboratory testing is complete

  15. ANL calculational methodologies for determining spent nuclear fuel source term

    International Nuclear Information System (INIS)

    McKnight, R. D.

    2000-01-01

    Over the last decade Argonne National Laboratory has developed reactor depletion methods and models to determine radionuclide inventories of irradiated EBR-II fuels. Predicted masses based on these calculational methodologies have been validated using available data from destructive measurements--first from measurements of lead EBR-II experimental test assemblies and later using data obtained from processing irradiated EBR-II fuel assemblies in the Fuel Conditioning Facility. Details of these generic methodologies are described herein. Validation results demonstrate these methods meet the FCF operations and material control and accountancy requirements

  16. An hybrid methodology for RL-based behavior coordination in a target following mission with an AUV

    OpenAIRE

    Carreras Pérez, Marc; Yuh, Junku; Batlle i Grabulosa, Joan

    2001-01-01

    Proposes a behavior-based scheme for high-level control of autonomous underwater vehicles (AUVs). Two main characteristics can be highlighted in the control scheme. Behavior coordination is done through a hybrid methodology, which takes advantage of the robustness and modularity of competitive approaches, as well as optimized trajectories

  17. Effective diagnosis of Alzheimer’s disease by means of large margin-based methodology

    Directory of Open Access Journals (Sweden)

    Chaves Rosa

    2012-07-01

    Full Text Available Abstract Background Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in the Alzheimer's Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. Methods A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the two latter also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. Results Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: (i) a linear transformation of the PLS- or PCA-reduced data, (ii) a feature reduction technique, and (iii) a classifier (with Euclidean, Mahalanobis or Energy-based metrics). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when an NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. Conclusions All the proposed methods turned out to be a valid solution for the presented problem. One of the advances is the robustness of the LMNN algorithm, which not only provides a higher separation rate between
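
    A simplified stand-in for the kind of pipeline described above is sketched below: feature reduction followed by an SVM classifier evaluated with k-fold cross-validation. scikit-learn's PCA and SVC are used here in place of the paper's NMSE feature selection and LMNN/PLS variants, and the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_voxels = 90, 500              # small sample size vs. high feature dimension
X = rng.normal(size=(n_subjects, n_voxels))
y = rng.integers(0, 2, n_subjects)          # 0 = control, 1 = AD (synthetic labels)
X[y == 1, :50] += 0.8                       # inject a weak group difference

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=10)  # k-fold cross-validation
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```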

  18. Innovative development methodology based on the Toyota Way; Innovative Entwicklungsmethodik basierend auf dem Toyota Way

    Energy Technology Data Exchange (ETDEWEB)

    Ueda, T. [Toyota Motor Corp., Aichi (Japan)

    2007-07-01

    Since its foundation, Toyota has been innovating in each process of automobile production based on the ideas shown in the Toyoda Precepts. These innovations are not only innovations in the methodology for various development processes, but also in company philosophy models cultivated by development engineers during their daily job. These are known as the "Toyota Way" and its elements, which include "Continuous Improvement", "Challenge", "Respect for People", "Genchi Genbutsu" and "Teamwork". Various technological innovations have already progressed in the process from engine development to production. Examples are the use of CAD (computer aided design) and structure analysis based on 3-D CAD data in hardware development, CFD (computational fluid dynamics) and visualization techniques in combustion system development, MBD (model based development), the related DOE (design of experiments) and new algorithms in software development, as well as new production strategies linked to the new techniques mentioned above. The development of Toyota's direct injection gasoline engine (Toyota D-4 engine) originated from a pre-combustion chamber system in the 1970s, with the aim of purifying the exhaust gases at a time when the future of catalyst technology was unpromising. This technology could not be mass-produced, but subsequently led on to the R and D of DISC (direct injection stratified charge). In 1996, the stratified charge lean combustion gasoline DI engine could then be launched as the first-generation Toyota D-4 engine. After the second-generation D-4 engine was developed, this technology has now led to the dual injection D-4 system (D-4S), which combines it with homogeneous charge stoichiometric combustion and port fuel injection. During this progress, innovative development methods have been used and improvements in production

  19. ANALYSIS OF EFFECTIVENESS OF METHODOLOGICAL SYSTEM FOR PROBABILITY AND STOCHASTIC PROCESSES COMPUTER-BASED LEARNING FOR PRE-SERVICE ENGINEERS

    Directory of Open Access Journals (Sweden)

    E. Chumak

    2015-04-01

    Full Text Available The author substantiates that only methodological training systems for mathematical disciplines that implement information and communication technologies (ICT) can meet the requirements of the modern educational paradigm and make it possible to increase educational efficiency. Due to this fact, the necessity of developing a methodology for computer-based learning of probability theory and stochastic processes for pre-service engineers is underlined in the paper. The results of the experimental study analysing the efficiency of this methodological system are shown. The analysis includes three main stages: ascertaining, searching and forming. The key criteria of the efficiency of the designed methodological system are the level of students' probabilistic and stochastic skills and their learning motivation. The effect of implementing the methodological system of probability theory and stochastic processes computer-based learning on the level of students' IT literacy is shown in the paper. The expanding of the range of objectives of ICT use by students is described by the author. The level of formation of students' learning motivation at the ascertaining and forming stages of the experiment is analyzed, and the level of intrinsic learning motivation for pre-service engineers is defined at these stages. For this purpose, the methodology for testing the students' learning motivation in the chosen specialty is presented in the paper. The increase in intrinsic learning motivation of the experimental group students (E group) relative to the control group students (C group) is demonstrated.

  20. Methodology of using CFD-based risk assessment in road tunnels

    Directory of Open Access Journals (Sweden)

    Vidmar Peter

    2007-01-01

    Full Text Available The definition of the deterministic approach in safety analyses comes from the need to understand the conditions that arise during a fire accident in a road tunnel. The key factor in tunnel operations during a fire is the ventilation, which during the initial phases of the fire strongly impacts the evacuation of people and, later, the access of the intervention units to the tunnel. The paper presents the use of a computational fluid dynamics model in the tunnel safety assessment process. The model is validated by comparing data with experiments and quantifying the differences. The set-up of the initial and boundary conditions and the requirement for grid density found during the validation tests are used to prepare three kinds of fire scenarios (20 MW, 50 MW, and 100 MW) with different ventilation conditions: natural, semi-transverse, full transverse, and longitudinal ventilation. The observed variables, soot density and temperature, are presented in one-minute time steps through the entire tunnel length. Comparing the obtained data in a table allows the analysis of the ventilation conditions for different heat releases from fires. The second step is to add additional criteria of human behaviour inside the tunnel (evacuation and human resistance to the elevated gas concentrations and temperature). What comes out is a fully deterministic risk matrix that is based on the calculated data, where the risk is ranked on five levels, from the lowest to a very dangerous level. The deterministic risk matrix represents an alternative to a probabilistic safety assessment methodology, in which the fire risk is represented in detail and the computational fluid dynamics model results are physically correct.
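
    The final step described above, combining CFD outputs with evacuation criteria into a five-level deterministic risk ranking, might be sketched as follows; the scenario values and tenability thresholds are illustrative placeholders, not those of the paper.

```python
def risk_level(peak_temp_c, peak_soot_g_m3, time_to_untenable_min, evac_time_min):
    """Return a deterministic risk rank from 1 (lowest) to 5 (very dangerous)."""
    level = 1
    if peak_temp_c > 60 or peak_soot_g_m3 > 0.1:
        level = 2
    if peak_temp_c > 120 or peak_soot_g_m3 > 0.5:
        level = 3
    if time_to_untenable_min < evac_time_min:        # occupants cannot escape in time
        level = max(level, 4)
    if time_to_untenable_min < 0.5 * evac_time_min:
        level = 5
    return level

scenarios = {  # (peak T [degC], peak soot [g/m3], time to untenable [min], evacuation [min])
    "20 MW, longitudinal ventilation": ( 70, 0.05, 25, 12),
    "50 MW, semi-transverse":          (140, 0.40, 14, 12),
    "100 MW, natural ventilation":     (300, 1.20,  5, 12),
}
for name, values in scenarios.items():
    print(f"{name:34s} -> risk level {risk_level(*values)}")
```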

  1. A human error probability estimate methodology based on fuzzy inference and expert judgment on nuclear plants

    International Nuclear Information System (INIS)

    Nascimento, C.S. do; Mesquita, R.N. de

    2009-01-01

    Recent studies point to human error as an important factor in many industrial and nuclear accidents: Three Mile Island (1979), Bhopal (1984), Chernobyl and Challenger (1986) are classical examples. The human contribution to these accidents may be better understood and analyzed by using Human Reliability Analysis (HRA), which has been taken as an essential part of the Probabilistic Safety Analysis (PSA) of nuclear plants. Both HRA and PSA depend on the Human Error Probability (HEP) for a quantitative analysis. These probabilities are strongly affected by the Performance Shaping Factors (PSF), which have a direct effect on human behavior and thus shape the HEP according to specific environment conditions and the personal individual characteristics responsible for these actions. This PSF dependence raises a great problem of data availability, as it makes the scarce existing databases either too generic or too specific. Besides this, most nuclear plants do not keep historical records of human error occurrences. Therefore, in order to overcome this occasional data shortage, a methodology based on fuzzy inference and expert judgment was employed in this paper to determine human error occurrence probabilities and to evaluate the PSFs of actions performed by operators in a nuclear power plant (the IEA-R1 nuclear reactor). The obtained HEP values were compared with reference tabulated data used in the current literature in order to show the coherence and validity of the approach. This comparison leads to the conclusion that the results of this work can be employed in both HRA and PSA, enabling efficient prospection of potential improvements in plant safety conditions, operational procedures and local working conditions (author)
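
    The sketch below illustrates the general Mamdani-style fuzzy inference idea referred to above: expert ratings of PSFs are fuzzified, a small rule base is aggregated, and a crisp HEP is obtained by centroid defuzzification. The membership functions, rules and HEP scale are assumptions for illustration, not those elicited in the study.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular/shoulder membership function with corners (a, b, c)."""
    x = np.asarray(x, dtype=float)
    left = np.where(b > a, (x - a) / (b - a + 1e-12), 1.0)
    right = np.where(c > b, (c - x) / (c - b + 1e-12), 1.0)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def infer_hep(time_pressure, training_quality):
    """PSFs rated 0-10 by experts; returns a crisp human error probability."""
    # Fuzzification of the two PSFs.
    tp_low, tp_high = tri(time_pressure, 0, 0, 5), tri(time_pressure, 5, 10, 10)
    tr_poor, tr_good = tri(training_quality, 0, 0, 5), tri(training_quality, 5, 10, 10)

    # Rule base (min as AND, max as OR):
    #   R1: high time pressure AND poor training -> high HEP
    #   R2: low time pressure AND good training  -> low HEP
    #   R3: mixed conditions                     -> medium HEP
    w_high = min(tp_high, tr_poor)
    w_low = min(tp_low, tr_good)
    w_med = max(min(tp_high, tr_good), min(tp_low, tr_poor))

    # Output memberships on a log10(HEP) universe from 1e-4 to 1e-1.
    u = np.linspace(-4.0, -1.0, 301)
    agg = np.maximum.reduce([
        np.minimum(w_low, tri(u, -4.0, -4.0, -2.5)),
        np.minimum(w_med, tri(u, -3.5, -2.5, -1.5)),
        np.minimum(w_high, tri(u, -2.5, -1.0, -1.0)),
    ])
    centroid = np.sum(u * agg) / (np.sum(agg) + 1e-12)   # centroid defuzzification
    return 10.0 ** centroid

print(f"estimated HEP: {infer_hep(time_pressure=8, training_quality=3):.2e}")
```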

  2. Study on dynamic team performance evaluation methodology based on team situation awareness model

    International Nuclear Information System (INIS)

    Kim, Suk Chul

    2005-02-01

    The purpose of this thesis is to provide a theoretical framework, and its evaluation methodology, for the dynamic task performance of an operating team at a nuclear power plant under a dynamic and tactical environment such as a radiological accident. This thesis suggests a team dynamic task performance evaluation model, the so-called team crystallization model, stemming from Endsley's situation awareness model and comprising four elements: state, information, organization, and orientation, together with quantification methods using a system dynamics approach and a communication process model based on a receding horizon control approach. The team crystallization model is a holistic approach for evaluating team dynamic task performance in conjunction with team situation awareness, considering physical system dynamics and team behavioral dynamics for a tactical and dynamic task at a nuclear power plant. This model provides a systematic measure to evaluate time-dependent team effectiveness or performance affected by multiple agents such as plant states, communication quality in terms of transferring situation-specific information and strategies for achieving the team task goal at a given time, and organizational factors. To demonstrate the applicability of the proposed model and its quantification method, a case study was carried out using data obtained from a full-scope power plant simulator for 1,000 MWe pressurized water reactors with four on-the-job operating groups and one expert group who knew the accident sequences. Simulated results of team dynamic task performance, referenced against key plant parameter behavior, the team-specific organizational center of gravity, and the cue-and-response matrix, showed good agreement with observed values. The team crystallization model will be a useful and effective tool for evaluating team effectiveness, in a cost-benefit manner, when recruiting new operating teams for new plants. Also, this model can be utilized as a systematic analysis tool for

  3. Theoretical and methodological bases of studying the symbolization of social and political reality in transit societies

    Directory of Open Access Journals (Sweden)

    O. V. Slavina

    2014-10-01

    Full Text Available This article is an attempt to form a methodological foundation for exploring the process of the symbolic construction of reality in political systems in a state of democratic transition. From the author's point of view, such transit systems are distinguished by the phenomenal features of a transitional type of sign-symbolic context. The most significant of these are: the confrontation between old and new symbols, and the formation of public anxiety due to the violation of established values (significant symbols). The result of these processes is the emergence of conditions for an increased capacity to perceive new symbols (re-symbolization), the transmigration of symbolic forms, and the appearance of spontaneous symbolic interactions in the community in the form of political protests, rallies, and panic. In this regard, it is necessary to understand the possibilities of productive management of the collective consciousness in the transit period to achieve the mental solidarity of a concrete society with democratic values. To perform this task, the author develops appropriate tools, which are based on phenomenological theory, Schutz's theory of the constitution of multiple realities, the philosophy of symbolic forms of E. Cassirer, the theory of social construction of P. Berger and T. Luckmann, as well as Lotman's semiotic concept. It is concluded that in the collision of alternative symbolic projects of social order it is advisable to resort to controlled symbolization (the production of special symbolic codes of political legitimation). At the same time it is important to understand the mechanisms of auto-symbolization of the society (changing of mass consciousness by virtue of the progressive development of the political culture of the people). Careless use of these technologies in countries with non-consolidated democracy may become a factor of destabilization and create conditions for an authoritarian rollback.

  4. Remedial action assessment system (RAAS) - A computer-based methodology for conducting feasibility studies

    International Nuclear Information System (INIS)

    Buelt, J.L.; Stottlemyre, J.A.; White, M.K.

    1991-01-01

    Because of the great complexity and number of potential waste sites facing the US Department of Energy (DOE) for potential cleanup, the DOE is supporting the development of a computer-based methodology to streamline the remedial investigation/feasibility study process required for DOE operable units. DOE operable units are generally more complex in nature because of the existence of multiple waste sites within many of the operable units and the presence of mixed radioactive and hazardous chemical wastes. Consequently, Pacific Northwest Laboratory (PNL) is developing the Remedial Action Assessment System (RAAS), which is aimed at screening, linking, and evaluating established technology process options in support of conducting feasibility studies under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). It is also intended to do the same in support of corrective measures studies required by the Resource Conservation and Recovery Act (RCRA). One of the greatest attributes of the RAAS project is that the computer interface with the user is being designed to be friendly, intuitive, and interactive. Consequently, the user interface employs menus, windows, help features, and graphical information while RAAS is in operation. During operation, each technology process option is represented by an "object" module. Object-oriented programming is then used to link these unit processes into remedial alternatives. In this way, various object modules representing technology process options can communicate so that a linked set of compatible processes forms an appropriate remedial alternative. Once the remedial alternatives are formed, they can be evaluated in terms of effectiveness, implementability, and cost

  5. INCREASING THE AIRCRAFT AIRWORTHINESS MAINTENANCE EFICIENCY BASED ON THE PROJECT MANAGEMENT METHODOLOGY

    Directory of Open Access Journals (Sweden)

    Alexander Abramovich Itskovich

    2017-01-01

    Full Text Available The interrelation between the aircraft airworthiness maintenance process (AAMP) and the application of project management methodology is demonstrated. A project portfolio can be formed based on the strategic objectives. The projects with the highest priority are carried out, including those which strive to improve the efficiency of the AAMP. The proposed approach allows the priorities of specific projects included in the airline project portfolio to be found. The project aimed at improving the efficiency of the AAMP of the AN-124-100 of "Volga-Dnepr Airlines" is presented as an example. The statistical analysis of failure data for the AN-124-100 fleet has demonstrated that wing components fail most frequently, especially spoiler sections, which are subject to mass exfoliation of the honeycomb skin and need to be modified. One of the expected project results should be a reduction of the К1000 of the wing spoilers by not less than 40% and, respectively, of the plane in total by not less than 4%. The work is executed in full compliance with the standards of project management. The passport of the project is given, which contains all the necessary information about the project: its goals, outcomes, results, timelines, action plan, budget and participants. Special attention is paid to the risks of the project, the assessment of their probability and the actions for overcoming possible consequences. It is shown that the implementation of the project "Introduction of aircraft AN-124-100 spoilers technology modification" allows a number of production and technical efficiency indicators to be improved, with optimization of material, financial and organizational resources.

  6. Study on dynamic team performance evaluation methodology based on team situation awareness model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Suk Chul

    2005-02-15

    The purpose of this thesis is to provide a theoretical framework, and its evaluation methodology, for the dynamic task performance of an operating team at a nuclear power plant under a dynamic and tactical environment such as a radiological accident. This thesis suggests a team dynamic task performance evaluation model, the so-called team crystallization model, stemming from Endsley's situation awareness model and comprising four elements: state, information, organization, and orientation, together with quantification methods using a system dynamics approach and a communication process model based on a receding horizon control approach. The team crystallization model is a holistic approach for evaluating team dynamic task performance in conjunction with team situation awareness, considering physical system dynamics and team behavioral dynamics for a tactical and dynamic task at a nuclear power plant. This model provides a systematic measure to evaluate time-dependent team effectiveness or performance affected by multiple agents such as plant states, communication quality in terms of transferring situation-specific information and strategies for achieving the team task goal at a given time, and organizational factors. To demonstrate the applicability of the proposed model and its quantification method, a case study was carried out using data obtained from a full-scope power plant simulator for 1,000 MWe pressurized water reactors with four on-the-job operating groups and one expert group who knew the accident sequences. Simulated results of team dynamic task performance, referenced against key plant parameter behavior, the team-specific organizational center of gravity, and the cue-and-response matrix, showed good agreement with observed values. The team crystallization model will be a useful and effective tool for evaluating team effectiveness, in a cost-benefit manner, when recruiting new operating teams for new plants. Also, this model can be utilized as a systematic analysis tool for

  7. Reliability of Soft Tissue Model Based Implant Surgical Guides; A Methodological Mistake.

    Science.gov (United States)

    Sabour, Siamak; Dastjerdi, Elahe Vahid

    2012-08-20

    Abstract We were interested to read the paper by Maney P and colleagues published in the July 2012 issue of J Oral Implantol. The authors, aiming to assess the reliability of soft tissue model based implant surgical guides, reported that the accuracy was evaluated using software.1 I found the manuscript title of Maney P, et al. incorrect and misleading. Moreover, they reported that twenty-two sites (46.81%) were considered accurate (13 of 24 maxillary and 9 of 23 mandibular sites). As the authors point out in their conclusion, soft tissue models do not always provide sufficient accuracy for implant surgical guide fabrication. Reliability (precision) and validity (accuracy) are two different methodological issues in research. Sensitivity, specificity, PPV, NPV, likelihood ratio positive (true positive/false negative) and likelihood ratio negative (false positive/true negative), as well as odds ratio (true results/false results - preferably more than 50), are among the tests used to evaluate the validity (accuracy) of a single test compared to a gold standard.2-4 It is not clear to which of the above-mentioned estimates for validity analysis the reported twenty-two sites (46.81%) considered accurate relate. Reliability (repeatability or reproducibility) is assessed by different statistical tests; Pearson r, least squares and paired t-test are all among common mistakes in reliability analysis.5 Briefly, for quantitative variables the Intra Class Correlation Coefficient (ICC) and for qualitative variables weighted kappa should be used, with caution, because kappa has its own limitations too. Regarding reliability or agreement, it is good to know that for computing the kappa value just concordant cells are considered, whereas discordant cells should also be taken into account in order to reach a correct estimation of agreement (weighted kappa).2-4 As a take-home message, for reliability and validity analysis, appropriate tests should be
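
    For reference, the sketch below computes the standard validity (accuracy) estimates mentioned in the letter from a 2x2 table of a test against a gold standard, using the usual textbook definitions; the counts are hypothetical.

```python
def validity_metrics(tp, fp, fn, tn):
    """Standard accuracy estimates of a test against a gold standard (2x2 table)."""
    sens = tp / (tp + fn)            # sensitivity
    spec = tn / (tn + fp)            # specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),       # positive predictive value
        "NPV": tn / (tn + fn),       # negative predictive value
        "LR+": sens / (1 - spec),    # positive likelihood ratio
        "LR-": (1 - sens) / spec,    # negative likelihood ratio
    }

# Hypothetical counts only, for illustration.
for name, value in validity_metrics(tp=40, fp=10, fn=5, tn=45).items():
    print(f"{name:12s}: {value:.2f}")
```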

  8. Design-based research as a “smart” methodology for studying learning in the context of work

    DEFF Research Database (Denmark)

    Kolbæk, Ditte

    Although Design-based Research (DBR) was developed for investigating classroom training, this paper discusses methodological issues that arise when DBR is employed for investigating learning in the context of work, as this is an authentic learning environment, a real-world setting for fostering learning and creating usable knowledge and knowing. The purpose of this paper is to provide new perspectives on DBR regarding how to conduct DBR for studying learning from experience in the context of work. The research question is: What should be considered to make DBR a smart methodology for exploring learning from experience...

  9. A grey-based group decision-making methodology for the selection of hydrogen technologiess in Life Cycle Sustainability perspective

    DEFF Research Database (Denmark)

    Manzardo, Alessandro; Ren, Jingzheng; Mazzi, Anna

    2012-01-01

    The objective of this research is to develop a grey-based group decision-making methodology for the selection of the best renewable energy technology (including hydrogen) using a life cycle sustainability perspective. The traditional grey relational analysis has been modified to better address the issue of uncertainty. The proposed methodology allows multiple persons to participate in the decision-making process and to give linguistic evaluations of the weights of the criteria and the performance of the alternative technologies. In this paper, twelve hydrogen production technologies have been assessed using the proposed methodology; electrolysis of water by hydropower has been considered to be the best technology for hydrogen production according to the decision-making group.
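
    A minimal sketch of the grey relational analysis step that such a methodology builds on is given below: criteria values are normalised, grey relational coefficients to the ideal reference sequence are computed, and weighted grades rank the alternatives. The data, weights and the 0.5 distinguishing coefficient are illustrative assumptions, not the paper's modified method or its twelve assessed technologies.

```python
import numpy as np

# Rows: alternative technologies (hypothetical); columns: criteria, all expressed
# as benefit values after a suitable transformation.
data = np.array([
    [0.80, 0.60, 0.90],   # e.g. electrolysis by hydropower
    [0.70, 0.85, 0.55],
    [0.60, 0.70, 0.65],
])
weights = np.array([0.4, 0.3, 0.3])
rho = 0.5                                                    # distinguishing coefficient

norm = (data - data.min(0)) / (data.max(0) - data.min(0))    # benefit normalisation
delta = np.abs(norm.max(0) - norm)                           # distance to the ideal sequence
coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grades = coeff @ weights                                     # weighted grey relational grades
print("grey relational grades:", np.round(grades, 3))
print("best alternative index:", int(np.argmax(grades)))
```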

  10. A k-mer-based barcode DNA classification methodology based on spectral representation and a neural gas network.

    Science.gov (United States)

    Fiannaca, Antonino; La Rosa, Massimo; Rizzo, Riccardo; Urso, Alfonso

    2015-07-01

    In this paper, an alignment-free method for DNA barcode classification that is based on both a spectral representation and a neural gas network for unsupervised clustering is proposed. In the proposed methodology, distinctive words are identified from a spectral representation of DNA sequences. A taxonomic classification of the DNA sequence is then performed using the sequence signature, i.e., the smallest set of k-mers that can assign a DNA sequence to its proper taxonomic category. Experiments were then performed to compare our method with other supervised machine learning classification algorithms, such as support vector machine, random forest, ripper, naïve Bayes, ridor, and classification tree, which also consider short DNA sequence fragments of 200 and 300 base pairs (bp). The experimental tests were conducted over 10 real barcode datasets belonging to different animal species, which were provided by the on-line resource "Barcode of Life Database". The experimental results showed that our k-mer-based approach is directly comparable, in terms of accuracy, recall and precision metrics, with the other classifiers when considering full-length sequences. In addition, we demonstrate the robustness of our method when a classification task is performed with a set of short DNA sequences that were randomly extracted from the original data. For example, the proposed method can reach an accuracy of 64.8% at the species level with 200-bp fragments. Under the same conditions, the best other classifier (random forest) reaches an accuracy of 20.9%. Our results indicate that we obtained a clear improvement over the other classifiers for the study of short DNA barcode sequence fragments. Copyright © 2015 Elsevier B.V. All rights reserved.
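
    The k-mer spectral representation underlying such methods can be illustrated as below: each DNA fragment is mapped to a normalised vector of k-mer frequencies, which can then feed clustering or classification. The sequence and the choice of k are arbitrary examples, not data from the study.

```python
from itertools import product

def kmer_spectrum(seq, k=4):
    """Return a normalised frequency vector over all 4**k possible k-mers."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {kmer: i for i, kmer in enumerate(kmers)}
    counts = [0] * len(kmers)
    for i in range(len(seq) - k + 1):
        word = seq[i:i + k]
        if word in index:                 # skip words containing N or other symbols
            counts[index[word]] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

fragment = "ACGTACGTTGCAACGTTAGCACGTACGGTTAACGT"   # stand-in for a 200-300 bp barcode fragment
spectrum = kmer_spectrum(fragment, k=3)
kmers = ["".join(p) for p in product("ACGT", repeat=3)]
top = sorted(range(len(spectrum)), key=lambda i: spectrum[i], reverse=True)[:3]
print("most frequent 3-mers:", [(kmers[i], round(spectrum[i], 3)) for i in top])
```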

  11. Project-Based Learning and Agile Methodologies in Electronic Courses: Effect of Student Population and Open Issues

    Directory of Open Access Journals (Sweden)

    Marina Zapater

    2013-12-01

    Full Text Available Project-Based Learning (PBL) and Agile methodologies have proven to be very interesting instructional strategies in Electronics and Engineering education, because they provide practical learning skills that help students understand the basis of electronics. In this paper we analyze two courses, one belonging to a Master in Electronic Engineering and one to a Bachelor in Telecommunication Engineering, that apply Agile-PBL methodologies, and compare the results obtained in both courses with a traditional laboratory course. Our results support previous work stating that Agile-PBL methodologies increase student satisfaction. However, we also highlight some open issues that negatively affect the implementation of these methodologies, such as planning overhead or accidental complexity. Moreover, we show how differences in the student population, mostly related to the time spent on campus, their commitment to the course or part-time dedication, have an impact on the benefits of Agile-PBL methods. In these cases, Agile-PBL methodologies by themselves are not enough and need to be combined with other techniques to increase student motivation.

  12. A methodology aimed at fostering and sustaining the development processes of an IE-based industry

    Science.gov (United States)

    Corallo, Angelo; Errico, Fabrizio; de Maggio, Marco; Giangreco, Enza

    In the current competitive scenario, where business relationships are fundamental in building successful business models and inter/intra organizational business processes are progressively digitalized, an end-to-end methodology is required that is capable of guiding business networks through the Internetworked Enterprise (IE) paradigm: a new and innovative organizational model able to leverage Internet technologies to perform real-time coordination of intra and inter-firm activities, to create value by offering innovative and personalized products/services and reduce transaction costs. This chapter presents the TEKNE project Methodology of change that guides business networks, by means of a modular and flexible approach, towards the IE techno-organizational paradigm, taking into account the competitive environment of the network and how this environment influences its strategic, organizational and technological levels. Contingency, the business model, enterprise architecture and performance metrics are the key concepts that form the cornerstone of this methodological framework.

  13. Safety assessment of a vault-based disposal facility using the ISAM methodology

    International Nuclear Information System (INIS)

    Kelly, E.; Kim, C.-L.; Lietava, P.; Little, R.; Simon, I.

    2002-01-01

    As part of the IAEA's Co-ordinated Research Project (CRP) on Improvement of Long-Term Safety Assessment Methodologies for Near Surface Waste Disposal Facilities (ISAM), three example cases were developed. The aim was to test the ISAM safety assessment methodology using data that were as realistic as possible. One of the Test Cases, the Vault Test Case (VTC), related to the disposal of low level radioactive waste (LLW) in a hypothetical facility comprising a set of above-surface vaults. This paper uses the various steps of the ISAM safety assessment methodology to describe the work undertaken by ISAM participants in developing the VTC and provides some general conclusions that can be drawn from the findings of their work. (author)

  14. Ensemble Empirical Mode Decomposition based methodology for ultrasonic testing of coarse grain austenitic stainless steels.

    Science.gov (United States)

    Sharma, Govind K; Kumar, Anish; Jayakumar, T; Purnachandra Rao, B; Mariyappa, N

    2015-03-01

    A signal processing methodology is proposed in this paper for effective reconstruction of ultrasonic signals in coarse grained, highly scattering austenitic stainless steel. The proposed methodology comprises Ensemble Empirical Mode Decomposition (EEMD) processing of the ultrasonic signals and application of a signal minimisation algorithm to selected Intrinsic Mode Functions (IMFs) obtained by EEMD. The methodology is applied to ultrasonic signals obtained from austenitic stainless steel specimens of different grain size, with and without defects. The influence of probe frequency and the data length of a signal on the EEMD decomposition is also investigated. For a particular sampling rate and probe frequency, the same range of IMFs can be used to reconstruct the ultrasonic signal, irrespective of the grain size in the range of 30-210 μm investigated in this study. This methodology is successfully employed for detection of defects in 50 mm thick coarse grain austenitic stainless steel specimens. A signal to noise ratio improvement of better than 15 dB is observed for the ultrasonic signal obtained from a 25 mm deep flat bottom hole in the 200 μm grain size specimen. For ultrasonic signals obtained from defects at different depths, a minimum of 7 dB extra enhancement in SNR is achieved compared to the sum-of-selected-IMFs approach. The application of the minimisation algorithm to the EEMD processed signal in the proposed methodology proves to be effective for adaptive signal reconstruction with improved signal to noise ratio. This methodology was further employed for successful imaging of defects in a B-scan. Copyright © 2014. Published by Elsevier B.V.
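
    The reconstruction idea can be sketched as below, assuming the third-party PyEMD package for the EEMD step; the simulated A-scan, the noise level and the choice of retained IMFs are illustrative only, and the paper's signal minimisation algorithm applied to the selected IMFs is not reproduced.

```python
import numpy as np
from PyEMD import EEMD   # assumption: the PyEMD (EMD-signal) package is installed

fs = 100e6                                        # 100 MHz sampling rate (hypothetical)
t = np.arange(0, 20e-6, 1 / fs)
echo = np.exp(-((t - 8e-6) ** 2) / (2 * (0.3e-6) ** 2)) * np.sin(2 * np.pi * 5e6 * t)
rng = np.random.default_rng(0)
signal = echo + 0.8 * rng.normal(size=t.size)     # grain noise masking the defect echo

eemd = EEMD(trials=50)
imfs = eemd.eemd(signal, t)                       # ensemble empirical mode decomposition

# Keep mid-order IMFs carrying the probe-frequency content (a simple proxy for the
# IMF-selection rule discussed in the paper).
reconstructed = imfs[2:5].sum(axis=0)

def snr_db(x, gate):
    peak = np.max(np.abs(x[gate]))
    noise = np.std(np.concatenate([x[: gate.start], x[gate.stop:]]))
    return 20 * np.log10(peak / noise)

gate = slice(int(7.5e-6 * fs), int(8.5e-6 * fs))  # time gate around the expected echo
print(f"SNR raw: {snr_db(signal, gate):.1f} dB, reconstructed: {snr_db(reconstructed, gate):.1f} dB")
```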

  15. Environmental external effects from wind power based on the EU ExternE methodology

    DEFF Research Database (Denmark)

    Ibsen, Liselotte Schleisner; Nielsen, Per Sieverts

    1998-01-01

    The European Commission has launched a major study project, ExternE, to develop a methodology to quantify externalities. A "National Implementation Phase" was started under the Joule II programme with the purpose of implementing the ExternE methodology in all member states. The main objective of the Danish part of the project is to implement the framework for externality evaluation for three different power plants located in Denmark. The paper will focus on the assessment of the impacts of the whole fuel cycles for wind, natural gas and biogas. Priority areas for environmental impact assessment...

  16. Crack Growth-Based Predictive Methodology for the Maintenance of the Structural Integrity of Repaired and Nonrepaired Aging Engine Stationary Components

    National Research Council Canada - National Science Library

    Barron, Michael

    1999-01-01

    .... Specifically, the FAA's goal was to develop "Crack Growth-Based Predictive Methodologies for the Maintenance of the Structural Integrity of Repaired and Nonrepaired Aging Engine Stationary Components...

  17. Hsp90 depletion goes wild

    Directory of Open Access Journals (Sweden)

    Siegal Mark L

    2012-02-01

    Full Text Available Abstract Hsp90 reveals phenotypic variation in the laboratory, but is Hsp90 depletion important in the wild? Recent work from Chen and Wagner in BMC Evolutionary Biology has discovered a naturally occurring Drosophila allele that downregulates Hsp90, creating sensitivity to cryptic genetic variation. Laboratory studies suggest that the exact magnitude of Hsp90 downregulation is important. Extreme Hsp90 depletion might reactivate transposable elements and/or induce aneuploidy, in addition to revealing cryptic genetic variation. See research article http://wwww.biomedcentral.com/1471-2148/12/25

  18. A density-based topology optimization methodology for thermoelectric energy conversion problems

    DEFF Research Database (Denmark)

    Lundgaard, Christian; Sigmund, Ole

    2018-01-01

    , temperature dependent material parameters and relevant objective functions. Comprehensive implementation details of the methodology are provided, and seven different design problems are solved and discussed to demonstrate that the approach is well-suited for optimizing TEGs and TECs. The study reveals new...

  19. Context Based Inferences in Research Methodology: The Role of Culture in Justifying Knowledge Claims

    Science.gov (United States)

    Evers, Colin W.; Mason, Mark

    2011-01-01

    Drawing on work in epistemology and the philosophy of science, this paper seeks to provide very general reasons why a comparative perspective needs to be applied to the inferential procedures of research methodologies where these concern the issue of justifying knowledge claims. In particular, the paper explores the role of culture on a number…

  20. A Neutronics Methodology for the NIST Research Reactor Based on MCNPX

    International Nuclear Information System (INIS)

    Hanson, A.; Diamond, D.

    2011-01-01

    A methodology for calculating inventories for the NBSR has been developed using the MCNPX computer code with the BURN option. A major advantage of the present methodology over the previous methodology, where MONTEBURNS and MCNP5 were used, is that more materials can be included in the model. The NBSR has 30 fuel elements each with a 17.8 cm (7 in) gap in the middle of the fuel. In the startup position, the shim control arms are partially inserted in the top half of the core. During the 38.5 day cycle, the shim arms are slowly removed to their withdrawn (horizontal) positions. This movement of shim arms causes asymmetries between the burnup of the fuel in the upper and lower halves and across the line of symmetry for the fuel loading. With the MONTEBURNS analyses there was a limitation to the number of materials that could be analyzed so 15 materials in the top half of the core and 15 materials in the bottom half of the core were used, and a half-core (east-west) symmetry was assumed. Since MCNPX allows more materials, this east-west symmetry was not necessary and the core was represented with 60 different materials. The methodology for developing the inventories is presented along with comparisons of neutronic parameters calculated with the previous and present sets of inventories.

  1. A neutronics methodology for the NIST research reactor based on MCNPX

    International Nuclear Information System (INIS)

    Hanson, Albert; Diamond, David

    2011-01-01

    A methodology for calculating inventories for the NBSR has been developed using the MCNPX computer code with the BURN option. A major advantage of the present methodology over the previous methodology, where MONTEBURNS and MCNP5 were used, is that more materials can be included in the model. The NBSR has 30 fuel elements each with a 17.8 cm (7 in) gap in the middle of the fuel. In the startup position, the shim control arms are partially inserted in the top half of the core. During the 38.5 day cycle, the shim arms are slowly removed to their withdrawn (horizontal) positions. This movement of shim arms causes asymmetries between the burnup of the fuel in the upper and lower halves and across the line of symmetry for the fuel loading. With the MONTEBURNS analyses there was a limitation to the number of materials that could be analyzed so 15 materials in the top half of the core and 15 materials in the bottom half of the core were used, and a half-core (east-west) symmetry was assumed. Since MCNPX allows more materials, this east-west symmetry was not necessary and the core was represented with 60 different materials. The methodology for developing the inventories is presented along with comparisons of neutronic parameters calculated with the previous and present sets of inventories. (author)

  2. Stability of Circulating Blood-Based MicroRNAs - Pre-Analytic Methodological Considerations

    DEFF Research Database (Denmark)

    Glinge, Charlotte; Clauss, Sebastian; Boddum, Kim

    2017-01-01

    BACKGROUND AND AIM: The potential of microRNAs (miRNA) as non-invasive diagnostic, prognostic, and predictive biomarkers, as well as therapeutic targets, has recently been recognized. Previous studies have highlighted the importance of consistency in the methodology used, but to our knowledge, no...

  3. Web-Based Collaborative Writing in L2 Contexts: Methodological Insights from Text Mining

    Science.gov (United States)

    Yim, Soobin; Warschauer, Mark

    2017-01-01

    The increasingly widespread use of social software (e.g., Wikis, Google Docs) in second language (L2) settings has brought a renewed attention to collaborative writing. Although the current methodological approaches to examining collaborative writing are valuable to understand L2 students' interactional patterns or perceived experiences, they can…

  4. A Computer-Based System Integrating Instruction and Information Retrieval: A Description of Some Methodological Considerations.

    Science.gov (United States)

    Selig, Judith A.; And Others

    This report, summarizing the activities of the Vision Information Center (VIC) in the field of computer-assisted instruction from December, 1966 to August, 1967, describes the methodology used to load a large body of information--a programed text on basic opthalmology--onto a computer for subsequent information retrieval and computer-assisted…

  5. Analysis of market competitive structure: The new methodological approach based in the using

    International Nuclear Information System (INIS)

    Romero de la Fuente, J.; Yague Guillen, M. J.

    2007-01-01

    This paper proposes a new methodological approach to identify market competitive structure, applying usage situation concept in positioning analysis. Dimensions used by consumer to classify products are identified using Correspondence Analysis and competitive groups are formed. Results are validated with Discriminant Analysis. (Author) 23 refs

  6. Methodological Flaws in Corpus-Based Studies on Malaysian ESL Textbooks

    Science.gov (United States)

    Zarifi, Abdolvahed; Mukundan, Jayakaran; Rezvani Kalajahi, Seyed Ali

    2014-01-01

    With the increasing interest among the pedagogy researchers in the use of corpus linguistics methodologies to study textbooks, there has emerged a similar enthusiasm among the materials developers to draw on empirical findings in the development of the state-of-the-art curricula and syllabi. In order for these research findings to have their…

  7. Methodological challenges of optical tweezers-based X-ray fluorescence imaging of biological model organisms at synchrotron facilities.

    Science.gov (United States)

    Vergucht, Eva; Brans, Toon; Beunis, Filip; Garrevoet, Jan; Bauters, Stephen; De Rijcke, Maarten; Deruytter, David; Janssen, Colin; Riekel, Christian; Burghammer, Manfred; Vincze, Laszlo

    2015-07-01

    Recently, a radically new synchrotron radiation-based elemental imaging approach for the analysis of biological model organisms and single cells in their natural in vivo state was introduced. The methodology combines optical tweezers (OT) technology for non-contact laser-based sample manipulation with synchrotron radiation confocal X-ray fluorescence (XRF) microimaging for the first time at ESRF-ID13. The optical manipulation possibilities and limitations of biological model organisms, the OT setup developments for XRF imaging and the confocal XRF-related challenges are reported. In general, the applicability of the OT-based setup is extended with the aim of introducing the OT XRF methodology in all research fields where highly sensitive in vivo multi-elemental analysis is of relevance at the (sub)micrometre spatial resolution level.

  8. Health and environmental impact of depleted uranium

    International Nuclear Information System (INIS)

    Furitsu, Katsumi

    2010-01-01

    Depleted Uranium (DU) is 'nuclear waste' produced from the enrichment process and is mostly made up of 238 U and is depleted in the fissionable isotope 235 U compared to natural uranium (NU). Depleted uranium has about 60% of the radioactivity of natural uranium. Depleted uranium and natural uranium are identical in terms of the chemical toxicity. Uranium's high density gives depleted uranium shells increased range and penetrative power. This density, combined with uranium's pyrophoric nature, results in a high-energy kinetic weapon that can punch and burn through armour plating. Striking a hard target, depleted uranium munitions create extremely high temperatures. The uranium immediately burns and vaporizes into an aerosol, which is easily diffused in the environment. People can inhale the micro-particles of uranium oxide in an aerosol and absorb them mainly from lung. Depleted uranium has both aspects of radiological toxicity and chemical toxicity. The possible synergistic effect of both kinds of toxicities is also pointed out. Animal and cellular studies have been reported the carcinogenic, neurotoxic, immuno-toxic and some other effects of depleted uranium including the damage on reproductive system and foetus. In addition, the health effects of micro/ nano-particles, similar in size of depleted uranium aerosols produced by uranium weapons, have been reported. Aerosolized DU dust can easily spread over the battlefield spreading over civilian areas, sometimes even crossing international borders. Therefore, not only the military personnel but also the civilians can be exposed. The contamination continues after the cessation of hostilities. Taking these aspects into account, DU weapon is illegal under international humanitarian laws and is considered as one of the inhumane weapons of 'indiscriminate destruction'. The international society is now discussing the prohibition of DU weapons based on 'precautionary principle'. The 1991 Gulf War is reportedly the first

  9. RANS Based Methodology for Predicting the Influence of Leading Edge Erosion on Airfoil Performance

    Energy Technology Data Exchange (ETDEWEB)

    Langel, Christopher M. [Univ. of California, Davis, CA (United States). Dept. of Mechanical and Aerospace Engineering; Chow, Raymond C. [Univ. of California, Davis, CA (United States). Dept. of Mechanical and Aerospace Engineering; van Dam, C. P. [Univ. of California, Davis, CA (United States). Dept. of Mechanical and Aerospace Engineering; Maniaci, David Charles [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Wind Energy Technologies Dept.

    2017-10-01

    The impact of surface roughness on flows over aerodynamically designed surfaces is of interest in a number of different fields. It has long been known that surface roughness will likely accelerate the laminar-turbulent transition process by creating additional disturbances in the boundary layer. However, there are very few tools available to predict the effects surface roughness will have on boundary layer flow. There are numerous implications of the premature appearance of a turbulent boundary layer. Increases in local skin friction, boundary layer thickness, and turbulent mixing can impact global flow properties, compounding the effects of surface roughness. With this motivation, an investigation into the effects of surface roughness on boundary layer transition has been conducted. The effort involved both an extensive experimental campaign and the development of a high-fidelity roughness model implemented in a RANS solver. Vast amounts of experimental data were generated at the Texas A&M Oran W. Nicks Low Speed Wind Tunnel for the calibration and validation of the roughness model described in this work, as well as future efforts. The present work focuses on the development of the computational model, including a description of the calibration process. The primary methodology presented introduces a scalar field variable and associated transport equation that interacts with a correlation-based transition model. The additional equation allows for non-local effects of surface roughness to be accounted for downstream of rough wall sections while maintaining a "local" formulation. The scalar field is determined through a boundary condition function that has been calibrated to flat plate cases with sand grain roughness. The model was initially tested on a NACA 0012 airfoil with roughness strips applied to the leading edge. Further calibration of the roughness model was performed using results from the companion experimental study on a NACA 633-418 airfoil

  10. Generation and validation of ORIGEN-S libraries for depletion and transmutation calculations based on JEF2.2 and EAF3 basic data

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E. [Delft Univ. of Technol. (Netherlands). Interfaculty Reactor Inst.; Kloosterman, J.L. [Netherlands Energy Research Foundation ECN, P.O. Box 1, 1755 ZG Petten (Netherlands)

    1997-07-01

    The data libraries for light elements, actinides and fission products of the ORIGEN-S code for depletion and transmutation calculations in the SCALE4.1 computer code system have been updated with respect to cross-section data, radioactive-decay data and fission-product yield data using JEF2.2 as the basic data source and EAF3 as an additional source. This required the fission-product library to be extended with 201 new fission-product nuclides or isomeric states. The effect of the update of different quantities involved is evaluated with a burn-up benchmark. When ORIGEN-S is used as a stand-alone code, i.e. without regular update of cross-sections of the major nuclides due to changes in the neutron spectrum during burn-up, the results show appreciable differences in actinide and fission-product densities due to the cross-section update. The effects of updates of decay data and fission-product yields are generally small, but with noticeable exceptions. The update of fission and capture reaction energies gives a small but systematic change in actinide and fission-product concentration. The new ORIGEN-S libraries have also been converted for use with the SCALE4.2 package. (orig.)

  11. Generation and validation of ORIGEN-S libraries for depletion and transmutation calculations based on JEF2.2 and EAF3 basic data

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    1997-01-01

    The data libraries for light elements, actinides and fission products of the ORIGEN-S code for depletion and transmutation calculations in the SCALE4.1 computer code system have been updated with respect to cross-section data, radioactive-decay data and fission-product yield data using JEF2.2 as the basic data source and EAF3 as an additional source. This required the fission-product library to be extended with 201 new fission-product nuclides or isomeric states. The effect of the update of different quantities involved is evaluated with a burn-up benchmark. When ORIGEN-S is used as a stand-alone code, i.e. without regular update of cross-sections of the major nuclides due to changes in the neutron spectrum during burn-up, the results show appreciable differences in actinide and fission-product densities due to the cross-section update. The effects of updates of decay data and fission-product yields are generally small, but with noticeable exceptions. The update of fission and capture reaction energies gives a small but systematic change in actinide and fission-product concentration. The new ORIGEN-S libraries have also been converted for use with the SCALE4.2 package. (orig.)

  12. Human podocyte depletion in association with older age and hypertension.

    Science.gov (United States)

    Puelles, Victor G; Cullen-McEwen, Luise A; Taylor, Georgina E; Li, Jinhua; Hughson, Michael D; Kerr, Peter G; Hoy, Wendy E; Bertram, John F

    2016-04-01

    Podocyte depletion plays a major role in the development and progression of glomerulosclerosis. Many kidney diseases are more common in older age and often coexist with hypertension. We hypothesized that podocyte depletion develops in association with older age and is exacerbated by hypertension. Kidneys from 19 adult Caucasian American males without overt renal disease were collected at autopsy in Mississippi. Demographic data were obtained from medical and autopsy records. Subjects were categorized by age and hypertension as potential independent and additive contributors to podocyte depletion. Design-based stereology was used to estimate individual glomerular volume and total podocyte number per glomerulus, which allowed the calculation of podocyte density (number per volume). Podocyte depletion was defined as a reduction in podocyte number (absolute depletion) or podocyte density (relative depletion). The cortical location of glomeruli (outer or inner cortex) and presence of parietal podocytes were also recorded. Older age was an independent contributor to both absolute and relative podocyte depletion, featuring glomerular hypertrophy, podocyte loss, and thus reduced podocyte density. Hypertension was an independent contributor to relative podocyte depletion by exacerbating glomerular hypertrophy, mostly in glomeruli from the inner cortex. However, hypertension was not associated with podocyte loss. Absolute and relative podocyte depletion were exacerbated by the combination of older age and hypertension. The proportion of glomeruli with parietal podocytes increased with age but not with hypertension alone. These findings demonstrate that older age and hypertension are independent and additive contributors to podocyte depletion in white American men without kidney disease. Copyright © 2016 the American Physiological Society.

  13. Review of Seismic Evaluation Methodologies for Nuclear Power Plants Based on a Benchmark Exercise

    International Nuclear Information System (INIS)

    2013-11-01

    quantification of the effect of different analytical approaches on the response of the piping system under single and multi-support input motions), the spent fuel pool (to estimate the sloshing frequencies, maximum wave height and spilled water amount, and predict free surface evolution), and the pure water tank (to predict the observed buckling modes of the pure water tank). Analyses of the main results include comparison between different computational models, variability of results among participants, and comparison of analysis results with recorded ones. This publication addresses state of the practice for seismic evaluation and margin assessment methodologies for SSCs in NPPs based on the KARISMA benchmark exercise. As such, it supports and complements other IAEA publications with respect to seismic safety of new and existing nuclear installations. It was developed within the framework of International Seismic Safety Centre activities. It provides detailed guidance on seismic analysis, seismic design and seismic safety re-evaluation of nuclear installations and will be of value to researchers, operating organizations, regulatory authorities, vendors and technical support organizations

  14. Development of the point-depletion code DEPTH

    International Nuclear Information System (INIS)

    She, Ding; Wang, Kan; Yu, Ganglin

    2013-01-01

    Highlights: ► The DEPTH code has been developed for the large-scale depletion system. ► DEPTH uses the data library which is convenient to couple with MC codes. ► TTA and matrix exponential methods are implemented and compared. ► DEPTH is able to calculate integral quantities based on the matrix inverse. ► Code-to-code comparisons prove the accuracy and efficiency of DEPTH. -- Abstract: The burnup analysis is an important aspect in reactor physics, which is generally done by coupling of transport calculations and point-depletion calculations. DEPTH is a newly-developed point-depletion code of handling large burnup depletion systems and detailed depletion chains. For better coupling with Monte Carlo transport codes, DEPTH uses data libraries based on the combination of ORIGEN-2 and ORIGEN-S and allows users to assign problem-dependent libraries for each depletion step. DEPTH implements various algorithms of treating the stiff depletion systems, including the Transmutation trajectory analysis (TTA), the Chebyshev Rational Approximation Method (CRAM), the Quadrature-based Rational Approximation Method (QRAM) and the Laguerre Polynomial Approximation Method (LPAM). Three different modes are supported by DEPTH to execute the decay, constant flux and constant power calculations. In addition to obtaining the instantaneous quantities of the radioactivity, decay heats and reaction rates, DEPTH is able to calculate the integral quantities by a time-integrated solver. Through calculations compared with ORIGEN-2, the validity of DEPTH in point-depletion calculations is proved. The accuracy and efficiency of depletion algorithms are also discussed. In addition, an actual pin-cell burnup case is calculated to illustrate the DEPTH code performance in coupling with the RMC Monte Carlo code
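
    Illustrative sketch (not part of the record above, and not the DEPTH code itself): the point-depletion problem the abstract refers to amounts to solving dN/dt = A·N over each step, for which methods such as TTA and CRAM approximate the matrix exponential. The minimal Python sketch below uses a dense SciPy matrix exponential on an invented three-nuclide chain; all nuclear data in it are made up.

```python
# Minimal sketch of a point-depletion step dN/dt = A N solved with a dense
# matrix exponential. Illustrative only: the three-nuclide chain, the decay
# constant, the one-group cross section and the flux below are invented
# numbers, not data from DEPTH, ORIGEN or any evaluated library.
import numpy as np
from scipy.linalg import expm

SECONDS_PER_DAY = 86400.0

def depletion_step(n0, A, dt):
    """Advance nuclide densities n0 over a time step dt using N(t) = expm(A t) N(0)."""
    return expm(A * dt) @ n0

# Hypothetical chain: nuclide 0 -(capture)-> nuclide 1 -(decay)-> nuclide 2
flux = 1.0e14                                   # n/cm^2/s, assumed constant over the step
sigma_c0 = 2.7e-24                              # capture cross section of nuclide 0, cm^2
lam1 = np.log(2.0) / (5.0 * SECONDS_PER_DAY)    # decay constant of nuclide 1 (5-day half-life)

A = np.array([
    [-sigma_c0 * flux, 0.0,   0.0],   # loss of nuclide 0 by neutron capture
    [ sigma_c0 * flux, -lam1, 0.0],   # production of nuclide 1 by capture, loss by decay
    [ 0.0,             lam1,  0.0],   # production of stable nuclide 2 by decay
])

n0 = np.array([1.0e22, 0.0, 0.0])    # initial number densities, atoms/cm^3
n_end = depletion_step(n0, A, 30.0 * SECONDS_PER_DAY)
print("densities after 30 days:", n_end)
print("atom balance preserved:", np.isclose(n0.sum(), n_end.sum()))
```

    Production depletion solvers replace the dense matrix exponential with methods such as TTA or CRAM precisely because realistic chains couple thousands of nuclides whose rate constants span many orders of magnitude.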

  15. Applying distance-to-target weighing methodology to evaluate the environmental performance of bio-based energy, fuels, and materials

    International Nuclear Information System (INIS)

    Weiss, Martin; Patel, Martin; Heilmeier, Hermann; Bringezu, Stefan

    2007-01-01

    The enhanced use of biomass for the production of energy, fuels, and materials is one of the key strategies towards sustainable production and consumption. Various life cycle assessment (LCA) studies demonstrate the great potential of bio-based products to reduce both the consumption of non-renewable energy resources and greenhouse gas emissions. However, the production of biomass requires agricultural land and is often associated with adverse environmental effects such as eutrophication of surface and ground water. Decision making in favor of or against bio-based and conventional fossil product alternatives therefore often requires weighing of environmental impacts. In this article, we apply distance-to-target weighing methodology to aggregate LCA results obtained in four different environmental impact categories (i.e., non-renewable energy consumption, global warming potential, eutrophication potential, and acidification potential) to one environmental index. We include 45 bio- and fossil-based product pairs in our analysis, which we conduct for Germany. The resulting environmental indices for all product pairs analyzed range from -19.7 to +0.2 with negative values indicating overall environmental benefits of bio-based products. Except for three options of packaging materials made from wheat and cornstarch, all bio-based products (including energy, fuels, and materials) score better than their fossil counterparts. Comparing the median values for the three options of biomass utilization reveals that bio-energy (-1.2) and bio-materials (-1.0) offer significantly higher environmental benefits than bio-fuels (-0.3). The results of this study reflect, however, subjective value judgments due to the weighing methodology applied. Given the uncertainties and controversies associated not only with distance-to-target methodologies in particular but also with weighing approaches in general, the authors strongly recommend using weighing for decision finding only as a
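
    Illustrative sketch (not from the record above): one simple way to realise a distance-to-target weighting is to weight each impact category by how far current performance is from its target and then sum the weighted relative differences between the bio-based and fossil alternatives. The category names below follow the abstract, but every number, the target ratios and the aggregation formula are assumptions of this sketch, not the authors' actual procedure.

```python
# Toy distance-to-target aggregation of LCA category results into one index.
# All numbers (impacts of a hypothetical bio-based vs. fossil product pair,
# normalisation targets) are invented for illustration.
impacts_bio    = {"energy_MJ": 35.0, "gwp_kgCO2e": 1.8, "eutroph_kgPO4e": 0.012, "acid_kgSO2e": 0.020}
impacts_fossil = {"energy_MJ": 80.0, "gwp_kgCO2e": 3.5, "eutroph_kgPO4e": 0.004, "acid_kgSO2e": 0.015}

# Distance-to-target weights: the further current aggregate emissions are from
# a policy target, the larger the weight given to that category.
target_ratios = {"energy_MJ": 0.8, "gwp_kgCO2e": 0.5, "eutroph_kgPO4e": 0.7, "acid_kgSO2e": 0.9}
weights = {cat: 1.0 / ratio for cat, ratio in target_ratios.items()}

def environmental_index(impacts_a, impacts_ref, weights):
    """Weighted sum of relative differences; negative values favour product A."""
    total = 0.0
    for cat, w in weights.items():
        ref = impacts_ref[cat]
        total += w * (impacts_a[cat] - ref) / ref
    return total

score = environmental_index(impacts_bio, impacts_fossil, weights)
print(f"environmental index (bio vs. fossil): {score:+.2f}")
```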

  16. The IAEA collaborating centre for neutron activation based methodologies of research reactors

    International Nuclear Information System (INIS)

    Bode, P.

    2010-01-01

    The Reactor Institute Delft of the Delft University of Technology houses the Netherlands' only academic nuclear research reactor, with associated instrumentation and laboratories, for scientific education and research with ionizing radiation. The Institute's swimming pool type research reactor reached first criticality in 1963 and is currently operated at 2 MW thermal power on a 100 h/week basis. The reactor is equipped with neutron mirror guides serving ultra-modern neutron beam physics instruments and with a very bright positron facility. Fully automated gamma-ray spectrometry systems are used by the laboratory for neutron activation analysis, providing large-scale services under an ISO/IEC 17025:2005 compliant management system, being (since 1993) the first accredited laboratory of its kind in the world. For several years already, this laboratory has sustained itself by rendering these services to both the public and the private sector. The prime user of the Institute's facilities is the scientific Research Department of Radiation, Radionuclide and Reactors of the Faculty of Applied Sciences, housed inside the building. All reactor facilities are also made available for use by, or for services to, external clients (industry, government, private sector, other (international) research institutes and universities). The Reactor Institute Delft was inaugurated in May 2009 as a new IAEA Collaborating Centre for Neutron Activation Based Methodologies of Research Reactors. The collaboration involves education, research and development in (I) production of reactor-produced, no-carrier-added radioisotopes of high specific activity via neutron activation; (II) neutron activation analysis with emphasis on automation as well as analysis of large samples, and radiotracer techniques; and, as a cross-cutting activity, (III) quality assurance and management in research and application of research reactor based techniques and in research reactor operations. This collaboration will

  17. A GIS-based methodology for the estimation of potential volcanic damage and its application to Tenerife Island, Spain

    Science.gov (United States)

    Scaini, C.; Felpeto, A.; Martí, J.; Carniel, R.

    2014-05-01

    This paper presents a GIS-based methodology to estimate damages produced by volcanic eruptions. The methodology is constituted by four parts: definition and simulation of eruptive scenarios, exposure analysis, vulnerability assessment and estimation of expected damages. Multi-hazard eruptive scenarios are defined for the Teide-Pico Viejo active volcanic complex, and simulated through the VORIS tool. The exposure analysis identifies the elements exposed to the hazard at stake and focuses on the relevant assets for the study area. The vulnerability analysis is based on previous studies on the built environment and complemented with the analysis of transportation and urban infrastructures. Damage assessment is performed associating a qualitative damage rating to each combination of hazard and vulnerability. This operation consists in a GIS-based overlap, performed for each hazardous phenomenon considered and for each element. The methodology is then automated into a GIS-based tool using an ArcGIS® program. Given the eruptive scenarios and the characteristics of the exposed elements, the tool produces expected damage maps. The tool is applied to the Icod Valley (North of Tenerife Island) which is likely to be affected by volcanic phenomena in case of eruption from both the Teide-Pico Viejo volcanic complex and North-West basaltic rift. Results are thematic maps of vulnerability and damage that can be displayed at different levels of detail, depending on the user preferences. The aim of the tool is to facilitate territorial planning and risk management in active volcanic areas.
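
    Illustrative sketch (not from the record above): the damage-assessment step described in the abstract boils down to a GIS overlay that maps each combination of hazard intensity and vulnerability class to a qualitative damage rating. The class labels and rating matrix below are invented; a real application would define them per volcanic phenomenon and per element at risk.

```python
# Sketch of the qualitative "hazard x vulnerability -> damage" overlay described
# in the abstract. The class labels and the rating matrix are invented for
# illustration only.
DAMAGE_MATRIX = {
    # (hazard intensity, vulnerability class) -> qualitative damage rating
    ("low",    "low"):  "slight",
    ("low",    "high"): "moderate",
    ("medium", "low"):  "moderate",
    ("medium", "high"): "heavy",
    ("high",   "low"):  "heavy",
    ("high",   "high"): "destroyed",
}

def expected_damage(elements, hazard_of):
    """Assign a damage rating to each exposed element.

    elements:  dict element_id -> vulnerability class ("low" / "high")
    hazard_of: dict element_id -> hazard intensity at that location
    """
    return {eid: DAMAGE_MATRIX[(hazard_of[eid], vuln)] for eid, vuln in elements.items()}

exposed = {"school_01": "high", "road_A": "low", "house_17": "high"}
hazard  = {"school_01": "medium", "road_A": "high", "house_17": "low"}
print(expected_damage(exposed, hazard))
```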

  18. Depletion field focusing in semiconductors

    NARCIS (Netherlands)

    Prins, M.W.J.; Gelder, Van A.P.

    1996-01-01

    We calculate the three-dimensional depletion field profile in a semiconductor, for a planar semiconductor material with a spatially varying potential upon the surface, and for a tip-shaped semiconductor with a constant surface potential. The nonuniform electric field gives rise to focusing or

  19. Depletion interactions in lyotropic nematics

    NARCIS (Netherlands)

    Schoot, van der P.P.A.M.

    2000-01-01

    A theoretical study of depletion interactions between pairs of small, globular colloids dispersed in a lyotropic nematic of hard, rodlike particles is presented. We find that both the strength and range of the interaction crucially depends on the configuration of the spheres relative to the nematic

  20. Depleted uranium: an explosive dossier

    International Nuclear Information System (INIS)

    Barrillot, B.

    2001-01-01

    This book relates the history of depleted uranium, contemporaneous with the nuclear bomb history. Initially used in nuclear weapons and in experiments linked with nuclear weapons development, this material has been used also in civil industry, in particular in aeronautics. However, its properties made it interesting for military applications all along the 'cold war'. (J.S.)

  1. Global depletion of groundwater resources

    NARCIS (Netherlands)

    Wada, Y.; Beek, L.P.H. van; van Kempen, C.M.; Reckman, J.W.T.M.; Vasak, S.; Bierkens, M.F.P.

    2010-01-01

    In regions with frequent water stress and large aquifer systems groundwater is often used as an additional water source. If groundwater abstraction exceeds the natural groundwater recharge for extensive areas and long times, overexploitation or persistent groundwater depletion occurs. Here we

  2. Impact of mineral resource depletion

    CSIR Research Space (South Africa)

    Brent, AC

    2006-09-01

    Full Text Available In a letter to the editor, the authors comment on BA Steen's article on "Abiotic Resource Depletion: different perceptions of the problem with mineral deposits" published in the special issue of the International Journal of Life Cycle Assessment...

  3. Vulnerability curves vs. vulnerability indicators: application of an indicator-based methodology for debris-flow hazards

    Science.gov (United States)

    Papathoma-Köhle, Maria

    2016-08-01

    The assessment of the physical vulnerability of elements at risk as part of the risk analysis is an essential aspect for the development of strategies and structural measures for risk reduction. Understanding, analysing and, if possible, quantifying physical vulnerability is a prerequisite for designing strategies and adopting tools for its reduction. The most common methods for assessing physical vulnerability are vulnerability matrices, vulnerability curves and vulnerability indicators; however, in most of the cases, these methods are used in a conflicting way rather than in combination. The article focuses on two of these methods: vulnerability curves and vulnerability indicators. Vulnerability curves express physical vulnerability as a function of the intensity of the process and the degree of loss, considering, in individual cases only, some structural characteristics of the affected buildings. However, a considerable amount of studies argue that vulnerability assessment should focus on the identification of these variables that influence the vulnerability of an element at risk (vulnerability indicators). In this study, an indicator-based methodology (IBM) for mountain hazards including debris flow (Kappes et al., 2012) is applied to a case study for debris flows in South Tyrol, where in the past a vulnerability curve has been developed. The relatively "new" indicator-based method is being scrutinised and recommendations for its improvement are outlined. The comparison of the two methodological approaches and their results is challenging since both methodological approaches deal with vulnerability in a different way. However, it is still possible to highlight their weaknesses and strengths, show clearly that both methodologies are necessary for the assessment of physical vulnerability and provide a preliminary "holistic methodological framework" for physical vulnerability assessment showing how the two approaches may be used in combination in the future.

  4. C-Based Design Methodology and Topological Change for an Indian Agricultural Tractor Component

    Science.gov (United States)

    Matta, Anil Kumar; Raju, D. Ranga; Suman, K. N. S.; Kranthi, A. S.

    2018-06-01

    The failure of tractor components and their replacement has now become very common in India because of re-cycling, re-sale, and duplication. To overcome the problem of failure, we propose a design methodology for topological change co-simulating with software. In the proposed design methodology, the designer checks Paxial, Pcr, Pfailure, τ by hand calculations, from which refined topological changes of the R.S.Arm are formed. We explain several techniques employed in the component for reduction and removal of rib material to change the center of gravity and centroid point, using system C for mixed-level simulation and faster topological changes. The design process in system C can be compiled and executed with the TURBO C7 software. The modified component is developed in proE and analyzed in ANSYS. The topologically changed component with a slot of 120 × 4.75 × 32.5 mm at the center showed greater effectiveness than the original component.

  5. The Researching on Evaluation of Automatic Voltage Control Based on Improved Zoning Methodology

    Science.gov (United States)

    Xiao-jun, ZHU; Ang, FU; Guang-de, DONG; Rui-miao, WANG; De-fen, ZHU

    2018-03-01

    Given the increasing size and structural complexity of modern power systems, hierarchically structured automatic voltage control (AVC) has become a research focus. In this paper, the reduced control model is built and an adaptive reduced control model is studied to improve the voltage control effect. The theories of HCSD, HCVS, SKC and FCM are introduced, and the effect of different zoning methodologies on coordinated voltage regulation is also investigated. A generic framework for evaluating the performance of coordinated voltage regulation is built. Finally, the IEEE-96 system is used to divide the network, and the 2383-bus Polish system is used to verify that the selection of a zoning methodology affects not only coordinated voltage regulation operation but also its robustness to erroneous data, supporting a comprehensive generic framework for evaluating its performance. The New England 39-bus network is used to verify the performance of the adaptive reduced control models.

  6. Business analysis methodology in telecommunication industry – the research based on the grounded theory

    Directory of Open Access Journals (Sweden)

    Hana Nenickova

    2013-10-01

    Full Text Available The objective of this article is to present grounded theory, used in qualitative research, as a basis for building a business analysis methodology for the implementation of information systems in telecommunication enterprises in the Czech Republic. In preparing the methodology, I drew on the current needs of telecommunications companies, which are characterized mainly by high dependence on information systems. Besides that, this industry is characterized by high flexibility, strong competition and a compressed corporate strategy timeline. The grounded theory of business analysis defines the specifics of the telecommunications industry, focusing on a very specific description of the procedure for collecting business requirements and following the business strategy.

  7. A simple nanoindentation-based methodology to assess the strength of brittle thin films

    International Nuclear Information System (INIS)

    Borrero-Lopez, Oscar; Hoffman, Mark; Bendavid, Avi; Martin, Phil J.

    2008-01-01

    In this work, we report a simple methodology to assess the mechanical strength of sub-micron brittle films. Nanoindentation of as-deposited tetrahedral amorphous carbon (ta-C) and Ti-Si-N nanocomposite films on silicon substrates followed by cross-sectional examination of the damage with a focused ion beam (FIB) miller allows the occurrence of cracking to be assessed in comparison with discontinuities (pop-ins) in the load-displacement curves. Strength is determined from the critical loads at which first cracking occurs using the theory of plates on a soft foundation. This methodology enables Weibull plots to be readily obtained, avoiding complex freestanding-film machining processes. This is of great relevance, since the mechanical strength of thin films ultimately controls their reliable use in a broad range of functional uses such as tribological coatings, magnetic drives, MEMS and biomedical applications

  8. The bases for the differences in the training methodology for male and female athletes

    Directory of Open Access Journals (Sweden)

    Владимир Платонов

    2017-09-01

    Full Text Available This analytical review article presents data reflecting the need for significant differentiation of the methodology of sports training for male and female athletes, which unfortunately is ignored in sports practice and is not adequately reflected in the vast majority of publications in the field of the theory and methodology of sports training. This differentiation can be attributed to the following main components: physique, strength qualities and flexibility; the energy systems; the peculiarities of the psyche and behavioral reactions; the menstrual cycle; the female athlete triad; hyperandrogenism; pregnancy and parturition; and the age dependence of sports performance. Clearly insufficient consideration of the peculiarities of the female body not only prevents full use of athletes' natural talent for achieving the highest attainable sports performance, but may also, with high probability, disturb normal age-related development and produce serious health problems in female athletes.

  9. C-Based Design Methodology and Topological Change for an Indian Agricultural Tractor Component

    Science.gov (United States)

    Matta, Anil Kumar; Raju, D. Ranga; Suman, K. N. S.; Kranthi, A. S.

    2018-02-01

    The failure of tractor components and their replacement has now become very common in India because of re-cycling, re-sale, and duplication. To overcome the problem of failure, we propose a design methodology for topological change co-simulating with software. In the proposed design methodology, the designer checks Paxial, Pcr, Pfailure, τ by hand calculations, from which refined topological changes of the R.S.Arm are formed. We explain several techniques employed in the component for reduction and removal of rib material to change the center of gravity and centroid point, using system C for mixed-level simulation and faster topological changes. The design process in system C can be compiled and executed with the TURBO C7 software. The modified component is developed in proE and analyzed in ANSYS. The topologically changed component with a slot of 120 × 4.75 × 32.5 mm at the center showed greater effectiveness than the original component.

  10. Multicriteria decision-making analysis based methodology for predicting carbonate rocks' uniaxial compressive strength

    Directory of Open Access Journals (Sweden)

    Ersoy Hakan

    2012-10-01

    Full Text Available Uniaxial compressive strength (UCS) describes a material's ability to withstand axially directed compressive forces and is considered one of rock materials' most important mechanical properties. However, the UCS test is expensive, very time-consuming to perform in the laboratory and requires high-quality core samples with regular geometry. Empirical equations have therefore been proposed for predicting UCS as a function of rocks' index properties. An analytical hierarchy process and multiple regression analysis based methodology was used (as opposed to traditional linear regression methods) on data sets obtained from carbonate rocks in NE Turkey. Limestone samples ranging from Devonian to late Cretaceous ages were chosen; travertine-onyx samples were selected from morphological environments considering their surface environmental conditions. Test results from experiments carried out on about 250 carbonate rock samples were used in deriving the model. While the hierarchy model focused on determining the most important index properties affecting UCS, regression analysis established meaningful relationships between UCS and index properties; positive correlation coefficients of 0.85 and 0.83 between the variables were determined by regression analysis. The methodology provided an appropriate alternative for the quantitative estimation of UCS and avoided the need for tedious and time-consuming laboratory testing.


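    Illustrative sketch (not from the record above): the regression component of the methodology can be pictured as an ordinary least-squares fit of UCS against a few index properties. The index properties chosen here (P-wave velocity, porosity) and all sample values are invented and do not reproduce the paper's data or coefficients.

```python
# Sketch of the regression step only: fit UCS as a linear function of two index
# properties. The chosen properties and all sample values are invented.
import numpy as np

# columns: P-wave velocity (km/s), porosity (%); rows: hypothetical carbonate samples
X = np.array([[5.1, 1.2], [4.6, 3.5], [5.8, 0.8], [4.1, 6.0], [5.4, 2.1], [3.9, 7.5]])
ucs = np.array([110.0, 82.0, 135.0, 55.0, 118.0, 48.0])   # MPa, invented

A = np.column_stack([np.ones(len(X)), X])                  # add intercept column
coef, *_ = np.linalg.lstsq(A, ucs, rcond=None)             # least-squares fit

pred = A @ coef
r = np.corrcoef(pred, ucs)[0, 1]
print("intercept and coefficients:", np.round(coef, 2))
print("correlation between predicted and measured UCS:", round(r, 2))
```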

  11. High-frequency measurements of aeolian saltation flux: Field-based methodology and applications

    Science.gov (United States)

    Martin, Raleigh L.; Kok, Jasper F.; Hugenholtz, Chris H.; Barchyn, Thomas E.; Chamecki, Marcelo; Ellis, Jean T.

    2018-02-01

    Aeolian transport of sand and dust is driven by turbulent winds that fluctuate over a broad range of temporal and spatial scales. However, commonly used aeolian transport models do not explicitly account for such fluctuations, likely contributing to substantial discrepancies between models and measurements. Underlying this problem is the absence of accurate sand flux measurements at the short time scales at which wind speed fluctuates. Here, we draw on extensive field measurements of aeolian saltation to develop a methodology for generating high-frequency (up to 25 Hz) time series of total (vertically-integrated) saltation flux, namely by calibrating high-frequency (HF) particle counts to low-frequency (LF) flux measurements. The methodology follows four steps: (1) fit exponential curves to vertical profiles of saltation flux from LF saltation traps, (2) determine empirical calibration factors through comparison of LF exponential fits to HF number counts over concurrent time intervals, (3) apply these calibration factors to subsamples of the saltation count time series to obtain HF height-specific saltation fluxes, and (4) aggregate the calibrated HF height-specific saltation fluxes into estimates of total saltation fluxes. When coupled to high-frequency measurements of wind velocity, this methodology offers new opportunities for understanding how aeolian saltation dynamics respond to variability in driving winds over time scales from tens of milliseconds to days.
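
    Illustrative sketch (not from the record above): the four calibration steps summarised in the abstract can be outlined in a few lines of Python. The trap heights, fluxes and count rates below are invented, and the specific exponential profile and its vertical integration are assumptions of this sketch.

```python
# Sketch of the four calibration steps: (1) fit an exponential profile to
# low-frequency (LF) trap fluxes, (2) derive a calibration factor relating
# high-frequency (HF) particle counts to flux at the sensor height,
# (3) calibrate an HF count series, (4) integrate over height to a total flux.
# All numbers below are invented.
import numpy as np
from scipy.optimize import curve_fit

def profile(z, q0, zbar):
    """Exponential flux profile q(z) = q0 * exp(-z / zbar)."""
    return q0 * np.exp(-z / zbar)

# (1) LF trap data: heights (m) and measured fluxes (g m^-2 s^-1)
z_traps = np.array([0.05, 0.10, 0.20, 0.40])
q_traps = np.array([12.0, 7.3, 2.9, 0.45])
(q0, zbar), _ = curve_fit(profile, z_traps, q_traps, p0=[10.0, 0.1])

# (2) calibration factor for one HF sensor at height z_sensor: flux per count,
# using the mean HF count rate over the same interval as the LF measurement
z_sensor = 0.08
mean_hf_counts = 340.0                              # counts/s over the LF interval
cal = profile(z_sensor, q0, zbar) / mean_hf_counts  # (g m^-2 s^-1) per (count/s)

# (3) + (4) calibrate a short HF count series and integrate the exponential
# profile over height: integral_0^inf q0*exp(-z/zbar) dz = q(z_sensor)*exp(z_sensor/zbar)*zbar
hf_counts = np.array([300.0, 420.0, 150.0, 510.0])  # counts/s at successive samples
q_sensor = cal * hf_counts                          # height-specific flux at z_sensor
total_flux = q_sensor * np.exp(z_sensor / zbar) * zbar   # total flux, g m^-1 s^-1
print("fitted q0, zbar:", round(q0, 2), round(zbar, 3))
print("total saltation flux series:", np.round(total_flux, 2))
```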

  12. A multicriteria-based methodology for site prioritisation in sediment management.

    Science.gov (United States)

    Alvarez-Guerra, Manuel; Viguri, Javier R; Voulvoulis, Nikolaos

    2009-08-01

    Decision-making for sediment management is a complex task that incorporates the selections of areas for remediation and the assessment of options for any mitigation required. The application of Multicriteria Analysis (MCA) to rank different areas, according to their need for sediment management, provides a great opportunity for prioritisation, a first step in an integrated methodology that finally aims to assess and select suitable alternatives for managing the identified priority sites. This paper develops a methodology that starts with the delimitation of management units within areas of study, followed by the application of MCA methods that allows ranking of these management units, according to their need for remediation. This proposed process considers not only scientific evidence on sediment quality, but also other relevant aspects such as social and economic criteria associated with such decisions. This methodology is illustrated with its application to the case study area of the Bay of Santander, in northern Spain, highlighting some of the implications of utilising different MCA methods in the process. It also uses site-specific data to assess the subjectivity in the decision-making process, mainly reflected through the assignment of the criteria weights and uncertainties in the criteria scores. Analysis of the sensitivity of the results to these factors is used as a way to assess the stability and robustness of the ranking as a first step of the sediment management decision-making process.

  13. Methodology based on genetic heuristics for in-vivo characterizing the patient-specific biomechanical behavior of the breast tissues.

    Science.gov (United States)

    Lago, M A; Rúperez, M J; Martínez-Martínez, F; Martínez-Sanchis, S; Bakic, P R; Monserrat, C

    2015-11-30

    This paper presents a novel methodology to estimate in vivo the elastic constants of a constitutive model proposed to characterize the mechanical behavior of breast tissues. An iterative search algorithm based on genetic heuristics was constructed to estimate these parameters in vivo using only medical images, thus avoiding invasive measurements of the mechanical response of the breast tissues. For the first time, a combination of overlap and distance coefficients was used to evaluate the similarity between a deformed MRI of the breast and a simulation of that deformation. The methodology was validated using breast software phantoms for virtual clinical trials, compressed to mimic MRI-guided biopsies. The biomechanical model chosen to characterize the breast tissues was an anisotropic neo-Hookean hyperelastic model. Results from this analysis showed that the algorithm is able to find the elastic constants of the constitutive equations of the proposed model with a mean relative error of about 10%. Furthermore, the overlap between the reference deformation and the simulated deformation was around 95%, showing the good performance of the proposed methodology. This methodology can be readily extended to characterize the real biomechanical behavior of breast tissues, which represents a significant novelty in the simulation of breast behavior for applications such as surgical planning, surgical guidance or cancer diagnosis, and underlines the impact and relevance of the presented work.
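
    Illustrative sketch (not from the record above): the search strategy described in the abstract is, at its core, a genetic loop that proposes candidate elastic constants, scores them against the observed deformation and keeps the fittest candidates. In the toy version below the finite-element simulation and the overlap/distance similarity measure are replaced by a placeholder cost function, and all bounds and target values are invented.

```python
# Bare-bones genetic search over elastic constants. The "cost" below is a
# placeholder for the real pipeline (simulate the compressed breast, compare
# the simulated deformation with the deformed MRI); bounds, population size
# and the hidden target constants are invented.
import random

TARGET = (4.5, 1.2)                      # hidden "true" elastic constants (kPa), invented
BOUNDS = [(0.5, 10.0), (0.1, 5.0)]       # search range per constant

def mismatch(params):
    """Placeholder cost: squared distance to the hidden target."""
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.3, scale=0.2):
    # Gaussian mutation of each gene with probability `rate`, clipped to bounds
    return [min(hi, max(lo, g + random.gauss(0.0, scale * (hi - lo)) if random.random() < rate else g))
            for g, (lo, hi) in zip(ind, BOUNDS)]

def crossover(a, b):
    # uniform crossover: pick each gene from one of the two parents
    return [random.choice(pair) for pair in zip(a, b)]

random.seed(1)
population = [random_individual() for _ in range(30)]
for generation in range(40):
    population.sort(key=mismatch)
    parents = population[:10]                            # keep the fittest third
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = min(population, key=mismatch)
print("estimated constants:", [round(g, 2) for g in best], "cost:", round(mismatch(best), 4))
```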

  14. EMD-Based Methodology for the Identification of a High-Speed Train Running in a Gear Operating State.

    Science.gov (United States)

    Bustos, Alejandro; Rubio, Higinio; Castejón, Cristina; García-Prada, Juan Carlos

    2018-03-06

    Efficient maintenance is a key consideration in railway transport systems, especially high-speed trains, in order to avoid accidents with catastrophic consequences. In this sense, having a method that allows for the early detection of defects in critical elements, such as the bogie mechanical components, is crucial for increasing the availability of rolling stock and reducing maintenance costs. The main contribution of this work is the proposal of a methodology that, based on classical signal processing techniques, provides a set of parameters for the fast identification of the operating state of a critical mechanical system. With this methodology, the vibratory behaviour of a very complex mechanical system is characterised through variable inputs, which allows for the detection of possible changes in the mechanical elements. This methodology is applied to a real high-speed train in commercial service, with the aim of studying the vibratory behaviour of the train (specifically, the bogie) before and after a maintenance operation. The results obtained with this methodology demonstrated the usefulness of the new procedure and revealed reductions of between 15% and 45% in the spectral power of selected Intrinsic Mode Functions (IMFs) after the maintenance operation.
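
    Illustrative sketch (not from the record above): the quantity compared in the abstract, the spectral power of a selected IMF before and after maintenance, can be computed with a plain FFT once the EMD decomposition is available. The two signals below are synthetic stand-ins for IMFs; no EMD step is performed here.

```python
# Sketch of the comparison step only: spectral power of a selected Intrinsic
# Mode Function (IMF) before vs. after maintenance. The two "IMFs" below are
# synthetic; in the paper they come from an EMD decomposition of measured
# bogie vibration signals, which is not reproduced here.
import numpy as np

fs = 1000.0                                   # sampling frequency, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)

rng = np.random.default_rng(0)
imf_before = 1.0 * np.sin(2 * np.pi * 120 * t) + 0.3 * rng.standard_normal(t.size)
imf_after  = 0.8 * np.sin(2 * np.pi * 120 * t) + 0.3 * rng.standard_normal(t.size)

def spectral_power(x):
    """Total power from the one-sided FFT magnitude spectrum."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    return spec.sum() / x.size

p_before = spectral_power(imf_before)
p_after = spectral_power(imf_after)
print(f"relative change in IMF spectral power: {100 * (p_after - p_before) / p_before:+.1f}%")
```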

  15. Twenty years of Internet-based research at SCiP: A discussion of surviving concepts and new methodologies.

    Science.gov (United States)

    Wolfe, Christopher R

    2017-10-01

    This discussion of the symposium 20 Years of Internet-Based Research at SCiP: Surviving Concepts, New Methodologies compares the issues faced by the pioneering Internet-based psychology researchers who presented at the first symposia on the topic, at the 1996 annual meeting of the Society for Computers in Psychology, to the issues facing researchers today. New methodologies unavailable in the early days of Web-based psychological research are discussed, with an emphasis on mobile computing with smartphones that is capitalizing on capabilities such as touch screens and gyro sensors. A persistent issue spanning the decades has been the challenge of conducting scientific research with consumer-grade electronics. In the 1996 symposia on Internet-based research, four advantages were identified: easy access to a geographically unlimited subject population, including subjects from very specific and previously inaccessible target populations; bringing the experiment to the subject; high statistical power through large sample size; and reduced cost. In retrospect, it appears that Internet-based research has largely lived up to this early promise-with the possible exception of sample size, since the public demand for controlled psychology experiments has not always been greater than the supply offered by researchers. There are many reasons for optimism about the future of Internet-based research. However, unless courses and textbooks on psychological research methods begin to give Web-based research the attention it deserves, the future of Internet-based psychological research will remain in doubt.

  16. An Indicator Based Assessment Methodology Proposal for the Identification of Domestic Systemically Important Banks within the Turkish Banking Sector

    OpenAIRE

    Ozge ULKUTAS SACCI; Guven SAYILGAN

    2014-01-01

    This study aims to identify domestic systemically important banks (D-SIB) operating within the Turkish Banking Sector. In this regard, adopting an indicator based assessment methodology together with the cluster analysis application, banks in the sample are classified in terms of their degree of systemic importance by using publicly available year-end data of 2012. The study has shown that a total of 7 banks with the highest systemic importance clustered away from the remaining 21 banks in th...
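
    Illustrative sketch (not from the record above): an indicator-based D-SIB assessment scores each bank on a handful of systemic-importance indicators and then clusters the scores. The bank names and indicator values below are invented, and k-means is used only as one possible clustering choice; the record does not specify the exact cluster algorithm.

```python
# Toy illustration of the indicator-based step: score banks on size,
# interconnectedness, substitutability and complexity, then cluster the scores.
# All bank names and indicator values are invented.
import numpy as np
from sklearn.cluster import KMeans

banks = ["bank_A", "bank_B", "bank_C", "bank_D", "bank_E", "bank_F"]
# columns: size, interconnectedness, substitutability, complexity (normalised shares)
scores = np.array([
    [0.21, 0.18, 0.25, 0.20],
    [0.18, 0.20, 0.19, 0.22],
    [0.15, 0.14, 0.16, 0.15],
    [0.04, 0.05, 0.03, 0.04],
    [0.03, 0.02, 0.04, 0.03],
    [0.02, 0.03, 0.02, 0.02],
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)
systemic_cluster = km.cluster_centers_.sum(axis=1).argmax()   # cluster with the highest mean score
for name, label in zip(banks, km.labels_):
    tag = "D-SIB candidate" if label == systemic_cluster else "other"
    print(f"{name}: {tag}")
```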

  17. Design-based research as a methodological approach to support participatory engagement of learners in the development of learning technologies

    OpenAIRE

    McDowell, James

    2015-01-01

    Following the origination of the design experiment as a mechanism to introduce learning interventions into the messy conditions of the classroom (Brown, 1992; Collins, 1992), design-based research (DBR) faced criticism from opposing paradigmatic camps before its acknowledgement as a promising methodology in which “formative evaluation plays a significant role” (Dede, Ketelhut, Whitehouse, Breit & McCloskey, 2009, p.16). This session presents a case study of a researcher-practitioner i...

  18. Post flood damage data collection and assessment in Albania based on DesInventar methodology

    Science.gov (United States)

    Toto, Emanuela; Massabo, Marco; Deda, Miranda; Rossello, Laura

    2015-04-01

    In 2013, a collection of disaster loss data based on DesInventar was implemented in Albania. The DesInventar system consists of a methodology and software tool that leads to the systematic collection, documentation and analysis of loss data on disasters. The main sources of information about disasters used for the Albanian database were the Albanian Ministry of Internal Affairs, the National Library and the State archive. Specifically for floods, the database created contains nearly 900 datasets, for a period of 148 years (from 1865 to 2013). The data are georeferenced on the administrative units of Albania: Regions, Provinces and Municipalities. The datasets describe the events by reporting the date of occurrence, the duration, the localization in administrative units and the cause. Additional information regards the effects and damage that the event caused on people (deaths, injured, missing, affected, relocated, evacuated, victims) and on houses (houses damaged or destroyed). Other quantitative indicators are the losses in local currency or US dollars, the damage to roads, the crops affected, the lost cattle and the involvement of social elements over the territory such as education and health centers. Qualitative indicators simply register the sectors (e.g. transportation, communications, relief, agriculture, water supply, sewerage, power and energy, industries, education, health sector, other sectors) that were affected. Through queries and analysis of the data collected it was possible to identify the most affected areas, the economic loss, the damage in agriculture, the houses and people affected and many other variables. The Regions most vulnerable to past floods in Albania were identified, as well as the rivers that cause the most damage in the country. Other analyses help to estimate the damage and losses during the main flood events of recent years, which occurred in 2010 and 2011, and to recognize the most affected sectors. The database was

  19. Associative Interactions in Crowded Solutions of Biopolymers Counteract Depletion Effects.

    Science.gov (United States)

    Groen, Joost; Foschepoth, David; te Brinke, Esra; Boersma, Arnold J; Imamura, Hiromi; Rivas, Germán; Heus, Hans A; Huck, Wilhelm T S

    2015-10-14

    The cytosol of Escherichia coli is an extremely crowded environment, containing high concentrations of biopolymers which occupy 20-30% of the available volume. Such conditions are expected to yield depletion forces, which strongly promote macromolecular complexation. However, crowded macromolecule solutions, like the cytosol, are very prone to nonspecific associative interactions that can potentially counteract depletion. It remains unclear how the cytosol balances these opposing interactions. We used a FRET-based probe to systematically study depletion in vitro in different crowded environments, including a cytosolic mimic, E. coli lysate. We also studied bundle formation of FtsZ protofilaments under identical crowded conditions as a probe for depletion interactions at much larger overlap volumes of the probe molecule. The FRET probe showed a more compact conformation in synthetic crowding agents, suggesting strong depletion interactions. However, depletion was completely negated in cell lysate and other protein crowding agents, where the FRET probe even occupied slightly more volume. In contrast, bundle formation of FtsZ protofilaments proceeded as readily in E. coli lysate and other protein solutions as in synthetic crowding agents. Our experimental results and model suggest that, in crowded biopolymer solutions, associative interactions counterbalance depletion forces for small macromolecules. Furthermore, the net effects of macromolecular crowding will be dependent on both the size of the macromolecule and its associative interactions with the crowded background.

  20. Depleted depletion drives polymer swelling in poor solvent mixtures.

    Science.gov (United States)

    Mukherji, Debashish; Marques, Carlos M; Stuehn, Torsten; Kremer, Kurt

    2017-11-09

    Establishing a link between macromolecular conformation and microscopic interaction is a key to understand properties of polymer solutions and for designing technologically relevant "smart" polymers. Here, polymer solvation in solvent mixtures strike as paradoxical phenomena. For example, when adding polymers to a solvent, such that all particle interactions are repulsive, polymer chains can collapse due to increased monomer-solvent repulsion. This depletion induced monomer-monomer attraction is well known from colloidal stability. A typical example is poly(methyl methacrylate) (PMMA) in water or small alcohols. While polymer collapse in a single poor solvent is well understood, the observed polymer swelling in mixtures of two repulsive solvents is surprising. By combining simulations and theoretical concepts known from polymer physics and colloidal science, we unveil the microscopic, generic origin of this collapse-swelling-collapse behavior. We show that this phenomenon naturally emerges at constant pressure when an appropriate balance of entropically driven depletion interactions is achieved.

  1. Phoenix – A model-based Human Reliability Analysis methodology: Qualitative Analysis Procedure

    International Nuclear Information System (INIS)

    Ekanem, Nsimah J.; Mosleh, Ali; Shen, Song-Hua

    2016-01-01

    Phoenix method is an attempt to address various issues in the field of Human Reliability Analysis (HRA). Built on a cognitive human response model, Phoenix incorporates strong elements of current HRA good practices, leverages lessons learned from empirical studies, and takes advantage of the best features of existing and emerging HRA methods. Its original framework was introduced in previous publications. This paper reports on the completed methodology, summarizing the steps and techniques of its qualitative analysis phase. The methodology introduces the “Crew Response Tree” which provides a structure for capturing the context associated with Human Failure Events (HFEs), including errors of omission and commission. It also uses a team-centered version of the Information, Decision and Action cognitive model and “macro-cognitive” abstractions of crew behavior, as well as relevant findings from cognitive psychology literature and operating experience, to identify potential causes of failures and influencing factors during procedure-driven and knowledge-supported crew-plant interactions. The result is the set of identified HFEs and likely scenarios leading to each. The methodology itself is generic in the sense that it is compatible with various quantification methods, and can be adapted for use across different environments including nuclear, oil and gas, aerospace, aviation, and healthcare. - Highlights: • Produces a detailed, consistent, traceable, reproducible and properly documented HRA. • Uses “Crew Response Tree” to capture context associated with Human Failure Events. • Models dependencies between Human Failure Events and influencing factors. • Provides a human performance model for relating context to performance. • Provides a framework for relating Crew Failure Modes to its influencing factors.

  2. Vulnerability classification in man-land territorial system of mining cities based on triangle methodology

    Energy Technology Data Exchange (ETDEWEB)

    Xin Xin; Ping-Yu Zhang [Chinese Academy of Sciences, Changchun (China). Northeast Institute of Geography and Agricultural Ecology

    2009-02-15

    The vulnerabilities of 40 mining cities were classified into 6 types by triangle methodology and corresponding sustainable development suggestions were proposed. Most mining cities in east China are: resource and environment and social (RS) type and economy, resource, environment and social (ERS) type, with low vulnerability. In central China, most mining cities are economy and resource and environment (ER), RS and ERS types. The vulnerability in most west mining cities is high. Coal mining cities are mostly ER, RS and ERS types, while metal mining cities are almost ERS type. 11 refs., 4 figs., 3 tabs.

  3. Methodology and Applications in Non-linear Model-based Geostatistics

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund

    that are approximately Gaussian. Parameter estimation and prediction for the transformed Gaussian model is studied. In some cases a transformation cannot possibly render the data Gaussian. A methodology for analysing such data was introduced by Diggle, Tawn and Moyeed (1998): The generalised linear spatial model...... priors for Bayesian inference is discussed. Procedures for parameter estimation and prediction are studied. Theoretical properties of Markov chain Monte Carlo algorithms are investigated, and different algorithms are compared. In addition, the thesis contains a manual for an R-package, geoRglmm, which...

  4. Process synthesis for natural products from plants based on PAT methodology

    DEFF Research Database (Denmark)

    Malwade, Chandrakant Ramkrishna; Qu, Haiyan; Rong, Ben-Guang

    2017-01-01

    (QbD) approach, has been included at various steps to obtain molecular level information of process streams and thereby, support the rational decision making. The formulated methodology has been used to isolate and purify artemisinin, an antimalarial drug, from dried leaves of the plant Artemisia...... generates different process flowsheet alternatives consisting of multiple separation techniques. Decision making is supported by heuristics as well as basic process information already available from previous studies. In addition, process analytical technology (PAT) framework, a part of Quality by Design...

  5. A problem-based approach to teaching research methodology to medical graduates in Iran

    Directory of Open Access Journals (Sweden)

    Mehrdad Jalalian Hosseini

    2009-08-01

    Full Text Available Physicians are reticent to participate in research projects for a variety of reasons. Facilitating the active involvement of doctors in research projects is a high priority for the Iranian Blood Transfusion Organization (IBTO). A one-month training course on research methodology was conducted for a group of physicians in Mashhad, in northeast Iran. The participants were divided into ten groups. They prepared a research proposal under the guidance of a workshop leader. The quality of the research proposals, which were prepared by all participants, went beyond our expectations. All of the research proposals were relevant to blood safety. In this brief report we describe our approach.

  6. Self-Regulatory Capacities Are Depleted in a Domain-Specific Manner.

    Science.gov (United States)

    Zhang, Rui; Stock, Ann-Kathrin; Rzepus, Anneka; Beste, Christian

    2017-01-01

    Performing an act of self-regulation such as making decisions has been suggested to deplete a common limited resource, which impairs all subsequent self-regulatory actions (ego depletion theory). It has, however, remained unclear whether self-referred decisions truly impair behavioral control even in seemingly unrelated cognitive domains, and which neurophysiological mechanisms are affected by these potential depletion effects. In the current study, we therefore used an inter-individual design to compare two kinds of depletion, namely a self-referred choice-based depletion and a categorization-based switching depletion, to a non-depleted control group. We used a backward inhibition (BI) paradigm to assess the effects of depletion on task switching and associated inhibition processes. It was combined with EEG and source localization techniques to assess both behavioral and neurophysiological depletion effects. The results challenge the ego depletion theory in its current form: Opposing the theory's prediction of a general limited resource, which should have yielded comparable effects in both depletion groups, or maybe even a larger depletion in the self-referred choice group, there were stronger performance impairments following a task domain-specific depletion (i.e., the switching-based depletion) than following a depletion based on self-referred choices. This suggests at least partly separate and independent resources for various cognitive control processes rather than just one joint resource for all self-regulation activities. The implications are crucial to consider for people making frequent far-reaching decisions, e.g., in law or economics.

  7. Quantitative Analysis of Survivin Protein Expression and Its Therapeutic Depletion by an Antisense Oligonucleotide in Human Lung Tumors

    Directory of Open Access Journals (Sweden)

    Anna L Olsen

    2012-01-01

    Full Text Available RNA-directed antisense and interference therapeutics are a promising treatment option for cancer. The demonstration of depletion of target proteins within human tumors in vivo using validated methodology will be a key to the application of this technology. Here, we present a flow cytometry-based approach to quantitatively determine protein levels in solid tumor material derived by fiber optic brushing (FOB) of non-small cell lung cancer (NSCLC) patients. Focusing upon the survivin protein, and its depletion by an antisense oligonucleotide (ASO, LY2181308), we show that we can robustly identify a subpopulation of survivin-positive tumor cells in FOB samples and, moreover, detect survivin depletion in tumor samples from a patient treated with LY2181308. Survivin depletion appears to be a result of treatment with this ASO, because a tumor treated with conventional cytotoxic chemotherapy did not exhibit a decreased percentage of survivin-positive cells. Our approach is likely to be broadly applicable to, and useful for, the quantification of protein levels in tumor samples obtained as part of clinical trials and studies, facilitating the proof-of-principle testing of novel targeted therapies.

  8. A METHODOLOGY BASED ON AN ECOLOGICAL ECONOMY APPROACH FOR THE INTEGRATING MANAGEMENT OF THE SULPHUROUS WATER IN AN OIL REFINERY

    Directory of Open Access Journals (Sweden)

    Gabriel Orlando Lobelles Sardiñas

    2016-10-01

    Full Text Available Despite the current highly stringent international standards regulating contaminating emissions to the environment, the oil refinery of Cienfuegos is still generating liquid and gaseous emissions that pollute the environment. The construction of new units as part of the refinery expansion leads to an increase of these emissions due to the lack of technologies for the reutilization of the sulphurous water. The objective of this paper is to propose a methodology for the integral management of the sulphurous residual water in the oil refining process, including the evaluation and selection of the most feasible technological variant to minimize the sulphur contamination of water and the resulting emissions during the process. The methodology is based on ecological economy tools, allowing a comprehensive evaluation of six technological variants at the refinery of Cienfuegos. Life Cycle Assessment (ACV by its Spanish acronym) was applied by means of the software SimaPro 7.1 and evaluated through the Eco Speed Method to minimize the possible uncertainty. An economic evaluation was performed, taking into account the external costs for a more comprehensive analysis, enabling, along with the ecological indicators, the selection of the best technological variant, thus achieving a methodology based on a comprehensive evaluation. As a positive impact of the implementation of the chosen variant (V5), 98.27% of the process water was recovered, as well as the sulphur, whose recovery increased from 94 to 99.8%, reducing the emissions from 12 200 to 120 mg/Nm3 as SO2.

  9. Assessment of current structural design methodology for high-temperature reactors based on failure tests

    International Nuclear Information System (INIS)

    Corum, J.M.; Sartory, W.K.

    1985-01-01

    A mature design methodology, consisting of inelastic analysis methods, provided in Department of Energy guidelines, and failure criteria, contained in ASME Code Case N-47, exists in the United States for high-temperature reactor components. The objective of this paper is to assess the adequacy of this overall methodology by comparing predicted inelastic deformations and lifetimes with observed results from structural failure tests and from an actual service failure. Comparisons are presented for three types of structural situations: (1) nozzle-to-spherical shell specimens, where stresses at structural discontinuities lead to cracking, (2) welded structures, where metallurgical discontinuities play a key role in failures, and (3) thermal shock loadings of cylinders and pipes, where thermal discontinuities can lead to failure. The comparisons between predicted and measured inelastic responses are generally reasonably good; quantities are sometimes somewhat overpredicted and sometimes underpredicted. However, even seemingly small discrepancies can have a significant effect on structural life, and lifetimes are not always as closely predicted. For a few cases, the lifetimes are substantially overpredicted, which raises questions regarding the adequacy of existing design margins

  10. A new methodology for the determination of enzyme activity based on carbon nanotubes and glucose oxidase.

    Science.gov (United States)

    Yeşiller, Gülden; Sezgintürk, Mustafa Kemal

    2015-11-10

    In this research, a novel enzyme activity analysis methodology is introduced as a new perspective for this area. The activity of the elastase enzyme, a digestive enzyme mostly found in the digestive system of vertebrates, was determined by an electrochemical device composed of carbon nanotubes and a second enzyme, glucose oxidase, which was used as a signal-generating enzyme. In this novel methodology, a complex bioactive layer was constructed by using carbon nanotubes, glucose oxidase and a supporting protein, gelatin, on a solid, conductive substrate. The activity of elastase was determined by monitoring the rate at which elastase hydrolyzes the bioactive layer. As a result of this hydrolysis, glucose oxidase dissociated from the bioactive layer, and the electrochemical signal due to glucose oxidase consequently decreased. The progressive elastase-catalyzed digestion of the bioactive layer containing glucose oxidase decreased the layer's enzymatic efficiency, resulting in a decrease of the glucose oxidation current as a function of the enzyme activity. The rate of the decrease was correlated with the elastase activity level. In this study, optimization experiments of the bioactive components and characterization of the resulting new electrochemical device were carried out. A linear calibration range from 0.0303 U/mL to 0.0729 U/mL of elastase was reported. Real sample analyses were also carried out with the new electrochemical device. Copyright © 2015 Elsevier B.V. All rights reserved.
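
    The read-out described above ultimately reduces to a linear calibration between the drop of the glucose-oxidation current and the elastase activity, which can then be inverted for unknown samples. The sketch below only illustrates that step; the calibration numbers are invented and are not data from the study.

      import numpy as np

      # Hypothetical calibration data: elastase activity (U/mL) vs. relative decrease of the
      # glucose-oxidation current (%). Values are invented for illustration only and merely
      # span the linear range reported above (~0.03-0.07 U/mL).
      activity = np.array([0.030, 0.040, 0.050, 0.060, 0.073])   # U/mL
      signal_drop = np.array([12.0, 18.5, 24.8, 31.2, 39.0])     # % current decrease

      # Least-squares fit of the calibration line: drop = slope * activity + intercept
      slope, intercept = np.polyfit(activity, signal_drop, 1)

      def estimate_activity(measured_drop):
          """Invert the calibration line to estimate activity from a measured current drop."""
          return (measured_drop - intercept) / slope

      print(f"slope = {slope:.1f} %/(U/mL), intercept = {intercept:.1f} %")
      print(f"sample with 27% signal drop -> {estimate_activity(27.0):.4f} U/mL")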

  11. Spintronic logic design methodology based on spin Hall effect–driven magnetic tunnel junctions

    International Nuclear Information System (INIS)

    Kang, Wang; Zhang, Youguang; Zhao, Weisheng; Wang, Zhaohao; Klein, Jacques-Olivier; Lv, Weifeng

    2016-01-01

    Conventional complementary metal-oxide-semiconductor (CMOS) technology is now approaching its physical scaling limits to enable Moore’s law to continue. Spintronic devices, as one of the potential alternatives, show great promise to replace CMOS technology for next-generation low-power integrated circuits in nanoscale technology nodes. Until now, spintronic memory has been successfully commercialized. However, spintronic logic still faces many critical challenges (e.g. direct cascading capability and small operation gain) before it can be practically applied. In this paper, we propose a standard complementary spintronic logic (CSL) design methodology to form a CMOS-like logic design paradigm. Using the spin Hall effect (SHE)-driven magnetic tunnel junction (MTJ) device as an example, we demonstrate CSL implementation, functionality and performance. This logic family provides a unified design methodology for spintronic logic circuits and partly solves the challenges of direct cascading capability and small operation gain in the previously proposed spintronic logic designs. By solving a modified Landau–Lifshitz–Gilbert equation, the magnetization dynamics in the free layer of the MTJ is theoretically described and a compact electrical model is developed. With this electrical model, numerical simulations have been performed to evaluate the functionality and performance of the proposed CSL design. Simulation results demonstrate that the proposed CSL design paradigm is rather promising for low-power logic computing. (paper)

  12. Preparation of guides, on mechanics of fluids, for the physics teaching based on the investigatory methodology

    Energy Technology Data Exchange (ETDEWEB)

    Munoz, Loreto Mora; Buzzo, Ricardo; Martinez-Mardones, Javier; Romero, Angel [Instituto de Fisica, Pontificia Universidad Catolica de Valparaiso Av. Brasil 2950, Valparaiso (Chile)], E-mail: jmartine@ucv.cl

    2008-11-01

    The challenges in the present educational reform emphasize the professional character of the teacher in the planning of classes, the execution of activities and the evaluation of learning. A set of planned activities is no more a science class than a pile of bricks is a house; that is, if the students' preconceptions, the daily realities they face and the expectations they bring to the science class are not taken into account, the proposed objectives cannot be achieved. The well-known investigatory method applied to science teaching approaches conceptual content through easily reproducible practical activities that are effectively guided towards the topics to be taught. The OPPS guides (Operation Primary Physics Science) of Louisiana University are excellent examples of the application of this methodology, both in the material intended for the students and in the material guiding the learning activities (the so-called class Leader). This international experience, used within the framework of the Plans and Programs of the Chilean Ministry of Education (MINEDUC), is the main axis of this work, which considers the preparation of guides with this methodology covering the content of one unit of the common physics curriculum for the third year of high school.

  13. ADM1-based methodology for the characterisation of the influent sludge in anaerobic reactors.

    Science.gov (United States)

    Huete, E; de Gracia, M; Ayesa, E; Garcia-Heras, J L

    2006-01-01

    This paper presents a systematic methodology to characterise the influent sludge in terms of the ADM1 components from the experimental measurements traditionally used in wastewater engineering. For this purpose, a complete characterisation of the model components in their elemental mass fractions and charge has been used, making a rigorous mass balance for all the process transformations and enabling the future connection with other unit-process models. It also makes possible the application of mathematical algorithms for the optimal characterisation of several components poorly defined in the ADM1 report. Additionally, decay and disintegration have been necessarily uncoupled so that the decay proceeds directly to hydrolysis instead of producing intermediate composites. The proposed methodology has been applied to the particular experimental work of a pilot-scale CSTR treating real sewage sludge, a mixture of primary and secondary sludge. The results obtained have shown a good characterisation of the influent reflected in good model predictions. However, its limitations for an appropriate prediction of alkalinity and carbon percentages in biogas suggest the convenience of including the elemental characterisation of the process in terms of carbon in the analytical program.

  14. Preparation of guides, on mechanics of fluids, for the physics teaching based on the investigatory methodology

    International Nuclear Information System (INIS)

    Munoz, Loreto Mora; Buzzo, Ricardo; Martinez-Mardones, Javier; Romero, Angel

    2008-01-01

    The challenges in the present educational reform emphasize the professional character of the teacher in the planning of classes, the execution of activities and the evaluation of learning. A set of planned activities is no more a science class than a pile of bricks is a house; that is, if the students' preconceptions, the daily realities they face and the expectations they bring to the science class are not taken into account, the proposed objectives cannot be achieved. The well-known investigatory method applied to science teaching approaches conceptual content through easily reproducible practical activities that are effectively guided towards the topics to be taught. The OPPS guides (Operation Primary Physics Science) of Louisiana University are excellent examples of the application of this methodology, both in the material intended for the students and in the material guiding the learning activities (the so-called class Leader). This international experience, used within the framework of the Plans and Programs of the Chilean Ministry of Education (MINEDUC), is the main axis of this work, which considers the preparation of guides with this methodology covering the content of one unit of the common physics curriculum for the third year of high school.

  15. A New Methodology Based on Imbalanced Classification for Predicting Outliers in Electricity Demand Time Series

    Directory of Open Access Journals (Sweden)

    Francisco Javier Duque-Pintor

    2016-09-01

    Full Text Available The occurrence of outliers in real-world phenomena is quite common. If these anomalous data are not properly treated, unreliable models can be generated. Many approaches in the literature are focused on a posteriori detection of outliers. However, a new methodology to a priori predict the occurrence of such data is proposed here. Thus, the main goal of this work is to predict the occurrence of outliers in time series by using, for the first time, imbalanced classification techniques. In this sense, the problem of forecasting outlying data has been transformed into a binary classification problem, in which the positive class represents the occurrence of outliers. Given that the number of outliers is much lower than the number of common values, the resultant classification problem is imbalanced. To create training and test sets, robust statistical methods have been used to detect outliers in both sets. Once the outliers have been detected, the instances of the dataset are labeled accordingly. Namely, if any of the samples composing the next instance are detected as an outlier, the label is set to one. As a case study, the methodology has been tested on electricity demand time series in the Spanish electricity market, in which most of the outliers were properly forecast.
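
    The labeling-plus-imbalanced-classification idea summarized above can be sketched in a few lines. The following Python fragment is an illustration only, not the authors' implementation: the synthetic demand series, the median/MAD outlier rule, the 24-value window and the balanced random forest are all assumptions.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)

      # Toy hourly demand series with a few injected spikes standing in for outliers.
      n = 2000
      demand = 100 + 10 * np.sin(np.arange(n) * 2 * np.pi / 24) + rng.normal(0, 2, n)
      spikes = rng.choice(n, size=20, replace=False)
      demand[spikes] += 40

      # Robust outlier rule (median +/- 3 scaled MADs) used to label the series.
      med = np.median(demand)
      mad = np.median(np.abs(demand - med))
      is_outlier = np.abs(demand - med) > 3 * 1.4826 * mad

      # Each instance: the previous 24 values; label = 1 if the next value is an outlier.
      window = 24
      X = np.array([demand[i - window:i] for i in range(window, n)])
      y = is_outlier[window:].astype(int)

      # Imbalance handled here via class weighting; resampling would be another option.
      split = int(0.7 * len(X))
      clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
      clf.fit(X[:split], y[:split])
      print("predicted outlier rate on test set:", clf.predict(X[split:]).mean())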

  16. A Methodology for Mapping Meanings in Text-Based Sustainability Communication

    Directory of Open Access Journals (Sweden)

    Mark Brown

    2013-06-01

    Full Text Available In moving society towards more sustainable forms of consumption and production, social learning must play an important role. Making the assumption that it occurs as a consequence of changes in understanding, this article presents a methodology for mapping meanings in sustainability communication texts. The methodology uses techniques from corpus linguistics and framing theory. Two large databases of text were constructed by copying material down from the websites of two different groups of social actors: (i) environmental NGOs and (ii) British green business, and saving it as .txt files. The findings on individual words show that the NGOs and business use them very differently. Focusing on words expressing concern for the natural environment, it is proposed that the two actors also conceptualize their concern differently. Green business’s cognitive system of concern has two well-developed frames: good intentions and risk management. However, three frames (concern for the natural environment, perception of the damage, and responsibility) are light on detail. In contrast, within the NGOs’ system of concern, the frames of concern for the natural environment, perception of the damage and responsibility contain words making detailed representations.
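
    The corpus-linguistic part of such a study boils down to comparing relative keyword frequencies across the two .txt corpora. A minimal sketch is shown below; the file names and the keyword list are assumptions, not the materials used in the article.

      from collections import Counter
      import re

      def relative_freq(path, keywords):
          """Per-million frequency of each keyword in a plain-text corpus file."""
          # Deliberately simple tokenisation; real corpus tools would do much more.
          with open(path, encoding="utf-8") as f:
              tokens = re.findall(r"[a-z']+", f.read().lower())
          counts = Counter(tokens)
          total = max(len(tokens), 1)
          return {w: 1e6 * counts[w] / total for w in keywords}

      concern_words = ["damage", "loss", "destruction", "risk", "responsibility"]
      for corpus in ("ngo_corpus.txt", "green_business_corpus.txt"):   # assumed file names
          print(corpus, relative_freq(corpus, concern_words))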

  17. PAGIS summary report of phase 1: a common methodological approach based on European data and models

    International Nuclear Information System (INIS)

    Cadelli, N.; Cottone, G.; Bertozzi, G.; Girardi, F.

    1984-01-01

    Since 1982 a joint study has been launched by the CEC with the participation of national institutions in the E.C., aiming at a Performance Assessment of Geological Isolation Systems (PAGIS) for HLW disposal. This document is a summary of the first phase of the study which was devoted to the collection of data and models and to the choice of an appropriate methodology. To this purpose, real or national sites have been chosen, which are representative of three types of continental geological formations in the E.C.: clay, granite and salt (although the choices imply no commitment of any kind about their final use). Moreover, sub-seabed areas have also been identified. The study covers the following items: - basic data on waste characteristics, site data and repository designs; - methodology, which allows sensitivity and uncertainty analyses to be performed, as well as the assessment of radiation doses to individuals and populations; - preliminary modelling of radionuclide release and migration through the geosphere (near- and far-field) and the biosphere following their various pathways to man; - selection of the most relevant radionuclide release scenarios and their probability of occurrence. Reference values have been selected for the basic data as well as variants covering the various options which are under consideration in the different Countries of the E.C.

  18. Application of a methodology based on the Theory of Constraints in the sector of tourism services

    Directory of Open Access Journals (Sweden)

    Reyner Pérez Campdesuñer

    2017-04-01

    Full Text Available Purpose: The objective of the research was to implement the theory of constraints under the operating conditions of a hotel, which differ in character from the traditional processes to which this method has been applied because of the great heterogeneity of resources needed to meet customer demand. Design/methodology/approach: To achieve this purpose, a method of generating conversion equations was developed that allowed all the resources of the organization under study to be expressed as a function of the number of customers to be served, facilitating comparison between the different resources and the demand estimated through traditional forecasting techniques; these features were integrated into the classical methodology of the theory of constraints. Findings: The application of the tools designed for hospitality organizations made it possible to demonstrate the applicability of the theory of constraints to entities under conditions different from the usual ones, to develop a set of conversion equations for the different resources facilitating comparison with demand, and consequently to improve the levels of efficiency and effectiveness of the organization. Originality/value: The originality of the research lies in the application of the theory of constraints under conditions very different from the usual ones, covering 100% of the processes and resources in hospitality organizations.

  19. A methodology based in particle swarm optimization algorithm for preventive maintenance focused in reliability and cost

    International Nuclear Information System (INIS)

    Luz, Andre Ferreira da

    2009-01-01

    In this work, a Particle Swarm Optimization (PSO) algorithm is developed for preventive maintenance optimization. The proposed methodology, which allows the use of flexible intervals between maintenance interventions instead of fixed periods (as usual), allows a better adaptation of the schedule to deal with the failure rates of aging components. Moreover, because of this flexibility, the planning of preventive maintenance becomes a difficult task. Motivated by the fact that PSO has proved to be very competitive compared to other optimization tools, this work investigates the use of PSO as an alternative optimization tool. Considering that PSO works in a real and continuous space, it is a challenge to use it for discrete optimization, in which the schedule may comprise a variable number of maintenance interventions. The PSO model developed in this work overcomes such difficulty. The proposed PSO searches for the best maintenance policy and considers several aspects, such as: the probability of needing repair (corrective maintenance), the cost of such repairs, typical outage times, the costs of preventive maintenance, the impact of maintenance on the reliability of the system as a whole, and the probability of imperfect maintenance. To evaluate the proposed methodology, we investigate the High Pressure Injection System (HPIS) of a PWR, an electro-mechanical system consisting of three pumps and four valves. Results show that PSO is quite efficient in finding optimum preventive maintenance policies for the HPIS. (author)
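
    The record does not give the authors' encoding or objective function, so the sketch below only illustrates the basic particle swarm update that such a methodology builds on; the cost function standing in for the maintenance/reliability model and all parameter values are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      def cost(x):
          # Stand-in maintenance-cost function of the intervention intervals x (days);
          # it penalises very short intervals (maintenance cost) and very long ones
          # (unreliability cost from aging components). Not the HPIS model.
          return np.sum(200.0 / x + 0.05 * x ** 1.5, axis=-1)

      n_particles, dim, iters = 30, 4, 200
      lo, hi = 10.0, 365.0                       # allowed interval range in days
      x = rng.uniform(lo, hi, (n_particles, dim))
      v = np.zeros_like(x)
      pbest, pbest_val = x.copy(), cost(x)
      gbest = pbest[np.argmin(pbest_val)]

      w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration coefficients
      for _ in range(iters):
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = np.clip(x + v, lo, hi)
          val = cost(x)
          improved = val < pbest_val
          pbest[improved], pbest_val[improved] = x[improved], val[improved]
          gbest = pbest[np.argmin(pbest_val)]

      print("best intervals (days):", np.round(gbest, 1), "cost:", cost(gbest).round(2))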

  20. Assertion based verification methodology for HDL designs of primary sodium pump speed and eddy current flow measurement systems of PFBR

    International Nuclear Information System (INIS)

    Misra, M.K.; Menon, Saritha P.; Thirugnana Murthy, D.

    2013-01-01

    With the growing complexity and size of digital designs, functional verification has become a huge challenge. The validation and testing process accounts for a significant percentage of the overall development effort and cost for electronic systems. Many studies have shown that up to 70% of the design development time and resources are spent on functional verification. Functional errors manifest themselves very early in the design flow, and unless they are detected upfront, they can result in severe consequences - both financially and from a safety viewpoint. This paper covers the various types of verification methodologies and focuses on Assertion Based Verification Methodology for HDL designs, taking as case studies the Primary Sodium Pump Speed and Eddy Current Flow Measurement Systems of PFBR. (author)

  1. Comparative Analysis of VERA Depletion Problems

    International Nuclear Information System (INIS)

    Park, Jinsu; Kim, Wonkyeong; Choi, Sooyoung; Lee, Hyunsuk; Lee, Deokjung

    2016-01-01

    Each code has its own depletion solver, which can produce different depletion calculation results. In order to produce reference solutions for depletion calculation comparison, sensitivity studies should first be performed for each depletion solver. Sensitivity tests for the burnup interval, the number of depletion zones, and the recoverable energy per fission (Q-value) were performed in this paper. For the comparison of depletion calculation results, the multiplication factors are usually compared as a function of burnup. In this study, new comparison methods have been introduced that use the number density of an isotope or element, and a cumulative flux instead of burnup. In this paper, optimum depletion calculation options are determined through the sensitivity study of the burnup intervals and the number of depletion intra-zones. Because depletion using the CRAM solver performs well for large burnup intervals, a smaller number of burnup steps can be used to produce converged solutions. It was noted that the depletion intra-zone sensitivity is only pin-type dependent. One and ten depletion intra-zones are required for the normal UO2 pin and the gadolinia rod, respectively, to obtain the reference solutions. When the optimized depletion calculation options are used, the differences in Q-values are found to be a main cause of the differences between solutions. In this paper, new comparison methods were introduced for consistent code-to-code comparisons even when different kappa libraries were used in the depletion calculations
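
    The burnup-interval sensitivity discussed above originates in how the Bateman equations dN/dt = A N are advanced; solvers such as CRAM approximate the matrix exponential exp(A dt). The sketch below uses an exact dense matrix exponential on an invented two-nuclide chain only to illustrate why an exponential-based step tolerates long intervals when A is held constant; in a real lattice calculation the flux and cross sections in A are re-evaluated at every step, which is where the interval sensitivity comes from.

      import numpy as np
      from scipy.linalg import expm

      # Invented two-nuclide chain for illustration: nuclide 0 is destroyed by
      # capture + decay and feeds nuclide 1, which only decays.
      sigma_phi = 1.0e-9                 # capture rate = cross section * flux  [1/s]
      lam0, lam1 = 5.0e-9, 2.0e-10       # decay constants                      [1/s]
      A = np.array([[-(sigma_phi + lam0), 0.0],
                    [  sigma_phi + lam0, -lam1]])

      N0 = np.array([1.0e24, 0.0])       # initial number densities [atoms/cm3]
      dt = 30 * 24 * 3600.0              # one 30-day burnup step   [s]

      # One long step with the matrix exponential...
      N_one_step = expm(A * dt) @ N0
      # ...matches thirty 1-day steps, because the exponential is exact for constant A.
      N_many = N0.copy()
      for _ in range(30):
          N_many = expm(A * dt / 30) @ N_many

      print(N_one_step)
      print(N_many)      # same result to round-off; step length only matters once A changes with burnup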

  2. Quantifying price risk of electricity retailer based on CAPM and RAROC methodology

    International Nuclear Information System (INIS)

    Karandikar, R.G.; Khaparde, S.A.; Kulkarni, S.V.

    2007-01-01

    In restructured electricity markets, electricity retailers set up contracts with generation companies (GENCOs) and with end users to meet their load requirements at an agreed-upon tariff. The retailers invest consumer payments as capital in the volatile competitive market. In this paper, a model for quantifying the price risk of an electricity retailer is proposed. An IEEE 30-bus test system is used to demonstrate the model. The Capital Asset Pricing Model (CAPM) is applied to determine the retail electricity price for the end users. The factor Risk-Adjusted Return on Capital (RAROC) is used to quantify the price risk involved. The methodology proposed in this paper can be used by a retailer when submitting an electricity tariff proposal to the regulatory authority. (author)
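
    The two building blocks named in the abstract, CAPM and RAROC, are standard finance formulas. The back-of-the-envelope sketch below uses invented numbers rather than the IEEE 30-bus case data, and the way the tariff incorporates the CAPM return is a simplification, not the paper's model.

      # CAPM expected return:  E[R] = R_f + beta * (E[R_m] - R_f)
      r_f, r_m, beta = 0.06, 0.12, 1.1           # risk-free rate, market return, retailer beta (assumed)
      expected_return = r_f + beta * (r_m - r_f)

      # Simplified retail tariff: costs marked up so that invested consumer payments earn the CAPM return.
      wholesale_cost = 40.0                      # $/MWh paid to GENCOs (assumed)
      other_costs = 5.0                          # $/MWh network and service costs (assumed)
      retail_price = (wholesale_cost + other_costs) * (1 + expected_return)

      # RAROC = risk-adjusted net revenue / economic capital at risk.
      expected_profit = 6.0e6                    # $ per year (assumed)
      expected_losses = 1.5e6                    # $ per year from price spikes (assumed)
      economic_capital = 20.0e6                  # $ capital covering worst-case exposure (assumed)
      raroc = (expected_profit - expected_losses) / economic_capital

      print(f"CAPM expected return: {expected_return:.1%}")
      print(f"retail price: {retail_price:.2f} $/MWh, RAROC: {raroc:.1%}")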

  3. The Case of Value Based Communication—Epistemological and Methodological Reflections from a System Theoretical Perspective

    Directory of Open Access Journals (Sweden)

    Victoria von Groddeck

    2010-09-01

    Full Text Available The aim of this paper is to reflect on the epistemological and methodological aspects of an empirical research study which analyzes the phenomenon of increased value communication within business organizations from a system theoretical perspective in the tradition of Niklas LUHMANN. Drawing on the theoretical term of observation, it shows how a research perspective can be developed which opens up the scope for an empirical analysis of communication practices. This analysis focuses on the reconstruction of these practices, first by understanding how these practices stabilize themselves and second by contrasting different practices to educe an understanding of different forms of observation of the relevant phenomenon and of the functions of these forms. Thus, this approach combines system theoretical epistemology, analytical research strategies, such as form and functional analysis, and qualitative research methods, such as narrative interviews, participant observation and document analysis. URN: urn:nbn:de:0114-fqs1003177

  4. On Competitiveness of Nearest-Neighbor-Based Music Classification: A Methodological Critique

    DEFF Research Database (Denmark)

    Pálmason, Haukur; Jónsson, Björn Thór; Amsaleg, Laurent

    2017-01-01

    The traditional role of nearest-neighbor classification in music classification research is that of a straw man opponent for the learning approach of the hour. Recent work in high-dimensional indexing has shown that approximate nearest-neighbor algorithms are extremely scalable, yielding results...... of reasonable quality from billions of high-dimensional features. With such efficient large-scale classifiers, the traditional music classification methodology of aggregating and compressing the audio features is incorrect; instead the approximate nearest-neighbor classifier should be given an extensive data...... collection to work with. We present a case study, using a well-known MIR classification benchmark with well-known music features, which shows that a simple nearest-neighbor classifier performs very competitively when given ample data. In this position paper, we therefore argue that nearest...

  5. Quantifying price risk of electricity retailer based on CAPM and RAROC methodology

    Energy Technology Data Exchange (ETDEWEB)

    Karandikar, R.G.; Khaparde, S.A.; Kulkarni, S.V. [Electrical Engineering Department, Indian Institute of Technology Bombay, Mumbai 400 076 (India)

    2007-12-15

    In restructured electricity markets, electricity retailers set up contracts with generation companies (GENCOs) and with end users to meet their load requirements at an agreed-upon tariff. The retailers invest consumer payments as capital in the volatile competitive market. In this paper, a model for quantifying the price risk of an electricity retailer is proposed. An IEEE 30-bus test system is used to demonstrate the model. The Capital Asset Pricing Model (CAPM) is applied to determine the retail electricity price for the end users. The factor Risk-Adjusted Return on Capital (RAROC) is used to quantify the price risk involved. The methodology proposed in this paper can be used by a retailer when submitting an electricity tariff proposal to the regulatory authority. (author)

  6. Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology

    Science.gov (United States)

    Kumar, Amit; Soota, Tarun; Kumar, Jitendra

    2018-03-01

    Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with Grey relational analysis has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on high speed steel (HSS) M2 grade workpiece material. The regression model of significant factors such as pulse-on time, pulse-off time, peak current, and wire feed is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal combination of machining parameters was obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters in optimising the Grey relational grade.
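
    Once the experimental responses are measured, computing the Grey relational grade used to rank the runs is mechanical. The sketch below is generic: the response values, the benefit/cost directions and the equal weighting are assumptions, not the HSS M2 data of the study.

      import numpy as np

      # Rows = experimental runs, columns = MRR, surface roughness, kerf width (invented values).
      responses = np.array([
          [12.4, 2.8, 0.32],
          [15.1, 3.1, 0.35],
          [10.8, 2.2, 0.30],
          [14.0, 2.5, 0.33],
      ])
      larger_is_better = np.array([True, False, False])   # MRR up, roughness and kerf down

      # Step 1: grey relational normalisation to [0, 1], direction-aware per response.
      x = np.where(
          larger_is_better,
          (responses - responses.min(0)) / (responses.max(0) - responses.min(0)),
          (responses.max(0) - responses) / (responses.max(0) - responses.min(0)),
      )

      # Step 2: grey relational coefficients with distinguishing coefficient zeta = 0.5.
      delta = np.abs(1.0 - x)
      zeta = 0.5
      coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

      # Step 3: grey relational grade = (equally weighted) mean coefficient per run.
      grade = coeff.mean(axis=1)
      print("grey relational grades:", np.round(grade, 3))
      print("best run:", int(np.argmax(grade)) + 1)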

  7. A Capacitance-Based Methodology for the Estimation of Piezoelectric Coefficients of Poled Piezoelectric Materials

    KAUST Repository

    Al Ahmad, Mahmoud

    2010-10-04

    A methodology is proposed to estimate the piezoelectric coefficients of bulk piezoelectric materials using simple capacitance measurements. The extracted values of d33 and d31 from the capacitance measurements were 506 pC/N and 247 pC/N, respectively. The d33 value is in agreement with that obtained from the Berlincourt method, which gave a d33 value of 500 pC/N. In addition, the d31 value is in agreement with the value obtained from the optical method, which gave a d31 value of 223 pC/V. These results suggest that the proposed method is a viable way to quickly estimate piezoelectric coefficients of bulk unclamped samples. © 2010 The Electrochemical Society.

  8. Delirium diagnosis methodology used in research: a survey-based study.

    Science.gov (United States)

    Neufeld, Karin J; Nelliot, Archana; Inouye, Sharon K; Ely, E Wesley; Bienvenu, O Joseph; Lee, Hochang Benjamin; Needham, Dale M

    2014-12-01

    To describe methodology used to diagnose delirium in research studies evaluating delirium detection tools. The authors used a survey to address reference rater methodology for delirium diagnosis, including rater characteristics, sources of patient information, and diagnostic process, completed via web or telephone interview according to respondent preference. Participants were authors of 39 studies included in three recent systematic reviews of delirium detection instruments in hospitalized patients. Authors from 85% (N = 33) of the 39 eligible studies responded to the survey. The median number of raters per study was 2.5 (interquartile range: 2-3); 79% were physicians. The raters' median duration of clinical experience with delirium diagnosis was 7 years (interquartile range: 4-10), with 5% having no prior clinical experience. Inter-rater reliability was evaluated in 70% of studies. Cognitive tests and delirium detection tools were used in the delirium reference rating process in 61% (N = 21) and 45% (N = 15) of studies, respectively, with 33% (N = 11) using both and 27% (N = 9) using neither. When patients were too drowsy or declined to participate in delirium evaluation, 70% of studies (N = 23) used all available information for delirium diagnosis, whereas 15% excluded such patients. Significant variability exists in reference standard methods for delirium diagnosis in published research. Increasing standardization by documenting inter-rater reliability, using standardized cognitive and delirium detection tools, incorporating diagnostic expert consensus panels, and using all available information in patients declining or unable to participate with formal testing may help advance delirium research by increasing consistency of case detection and improving generalizability of research results. Copyright © 2014 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  9. Export Potential of the Enterprise: Essence and Methodological Bases of the Analysis

    Directory of Open Access Journals (Sweden)

    Melnyk Olga G.

    2017-03-01

    Full Text Available The article considers theoretical and methodological aspects of the analysis of the enterprise’s export potential and the methodological basis for its measurement. By analyzing and summarizing scientific works on the problem, the article systematizes the views of researchers on the definition of the concept of “export potential of the enterprise”. The article considers the economic content of the enterprise’s export potential from the standpoint of the system-structural approach, defining it as a complex systemic formation of interrelated and interacting elements of economic and non-economic origin, internal and external action. It is found that in the international economic space the export potential of the enterprise acquires new qualitative features reflecting not just the resource potential of the national economic entity but also the needs and interests of foreign countries and their economic agents. It is established that the functional role of the export potential is to implement the targets of the foreign economic activity of the enterprise. The nature of these targets can differ and is formed on the principle of ensuring the needs of external markets. The level of satisfaction of these needs by an individual enterprise can be evaluated through such indicators as the volume of exports, the quality of exported products and the level of export diversification, which determine the result of the export activity and, in relation to its purpose, serve as a criterion of the efficiency of the enterprise’s export potential. As a result of the study, the components of the export potential of the enterprise are singled out, and a model of their interrelationships is presented. The prospects of the research are connected with branch-specific aspects of the formation of the enterprise’s export potential, allowing its structural elements and directions of development to be highlighted.

  10. Methodology for Assessing the Quality of Agribusiness Activity Based on the Environmentally Responsible Approach

    Directory of Open Access Journals (Sweden)

    Anna Antonovna Anfinogentova

    2017-06-01

    Full Text Available The article is devoted to the research and development of methods for evaluating the quality of agro-industrial enterprises’ activity in the regional economy using an ecological approach. The hypothesis of the study is that the activity of economic entities (including agribusiness) must be assessed not only in the context of economic efficiency and effectiveness, but also in the context of environmental ethics and environmental aggression. As the initial data, we have used the indicators of economic statistics of Russian agrarian-oriented regions, as well as the data received from management reporting on a sample of enterprises of three regions (the Belgorod and Moscow regions and Krasnodar Territory). The article offers an economic and mathematical approach for measuring the level of environmental responsibility of agro-industrial enterprises based on the basic formula of the Mandelbrot set and the Hurst statistical indicator. Our scientific contribution is the development of a modified methodology for assessing the quality of the activity of agro-industrial enterprises using a parameter characterizing the level of environmental ethics and environmental aggression of these entities. The main result of the study is the approbation of the method, which has shown its practical applicability and relative coherence with certain indicators of regional ecological statistics. The proposed method is characterized by the integration of different mathematical approaches and serves as an adaptive assessment tool that can be used to assess the quality of the activity of both agro-industrial enterprises and enterprises of other industries and fields of the economy. In further work, the authors plan to develop methodological approaches to the assessment of the quality of agro-industrial products. At the same time, the main attention will be paid to the ecological and social components of quality.

  11. Voltage regulation in MV networks with dispersed generations by a neural-based multiobjective methodology

    Energy Technology Data Exchange (ETDEWEB)

    Galdi, Vincenzo [Dipartimento di Ingegneria dell' Informazione e Ingegneria Elettrica, Universita degli studi di Salerno, Via Ponte Don Melillo 1, 84084 Fisciano (Italy); Vaccaro, Alfredo; Villacci, Domenico [Dipartimento di Ingegneria, Universita degli Studi del Sannio, Piazza Roma 21, 82100 Benevento (Italy)

    2008-05-15

    This paper puts forward the role of learning techniques in addressing the problem of an efficient and optimal centralized voltage control in distribution networks equipped with dispersed generation systems (DGSs). The proposed methodology employs a radial basis function network (RBFN) to identify the multidimensional nonlinear mapping between a vector of observable variables describing the network operating point and the optimal set points of the voltage regulating devices. The RBFN is trained by numerical data generated by solving the voltage regulation problem for a set of network operating points by a rigorous multiobjective solution methodology. The RBFN performance is continuously monitored by a supervisor process that signals when a more accurate solution of the voltage regulation problem is needed, i.e. if nonoptimal network operating conditions (ex post monitoring) or excessive distances between the actual network state and the neurons' centres (ex ante monitoring) are detected. A more rigorous problem solution, if required, can be obtained by solving the voltage regulation problem with a conventional multiobjective optimization technique. This new solution, in conjunction with the corresponding input vector, is then adopted as a new training data sample to adapt the RBFN. This online training process allows the RBFN to (i) adaptively learn the more representative domain space regions of the input/output mapping without needing prior knowledge of a complete and representative training set, and (ii) effectively manage any time-varying phenomena affecting this mapping. The results obtained by simulating the regulation policy in the case of a medium-voltage network are very promising. (author)
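
    A minimal sketch of the RBFN mapping described above is given below, using scipy's RBF interpolator as a stand-in for the trained network; the operating-point features, the set-point targets and the training data are all invented for illustration.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(3)

      # Training set produced offline by a rigorous multiobjective optimisation (here faked):
      # inputs  = observable operating point (6 bus-load / DG-injection features, assumed),
      # outputs = optimal set points of 2 voltage regulating devices (e.g. OLTC tap, DG reactive power).
      X_train = rng.uniform(0.0, 1.0, (200, 6))
      y_train = np.column_stack([
          1.0 + 0.05 * np.sin(X_train.sum(axis=1)),        # stand-in "optimal tap" rule
          0.3 * X_train[:, 0] - 0.1 * X_train[:, 3],        # stand-in "optimal Q set point" rule
      ])

      rbfn = RBFInterpolator(X_train, y_train, kernel="gaussian", epsilon=1.0, smoothing=1e-6)

      # On-line use: map the currently measured operating point to regulator set points,
      # and flag states that are far from every training centre (ex ante monitoring).
      x_now = rng.uniform(0.0, 1.0, (1, 6))
      set_points = rbfn(x_now)
      distance_to_nearest = np.min(np.linalg.norm(X_train - x_now, axis=1))
      print("set points:", np.round(set_points, 3), "nearest-centre distance:", round(distance_to_nearest, 3))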

  12. A methodology based on openEHR archetypes and software agents for developing e-health applications reusing legacy systems.

    Science.gov (United States)

    Cardoso de Moraes, João Luís; de Souza, Wanderley Lopes; Pires, Luís Ferreira; do Prado, Antonio Francisco

    2016-10-01

    In Pervasive Healthcare, novel information and communication technologies are applied to support the provision of health services anywhere, at anytime and to anyone. Since health systems may offer their health records in different electronic formats, the openEHR Foundation prescribes the use of archetypes for describing clinical knowledge in order to achieve semantic interoperability between these systems. Software agents have been applied to simulate human skills in some healthcare procedures. This paper presents a methodology, based on the use of openEHR archetypes and agent technology, which aims to overcome the weaknesses typically found in legacy healthcare systems, thereby adding value to the systems. This methodology was applied in the design of an agent-based system, which was used in a realistic healthcare scenario in which a medical staff meeting to prepare a cardiac surgery has been supported. We conducted experiments with this system in a distributed environment composed by three cardiology clinics and a center of cardiac surgery, all located in the city of Marília (São Paulo, Brazil). We evaluated this system according to the Technology Acceptance Model. The case study confirmed the acceptance of our agent-based system by healthcare professionals and patients, who reacted positively with respect to the usefulness of this system in particular, and with respect to task delegation to software agents in general. The case study also showed that a software agent-based interface and a tools-based alternative must be provided to the end users, which should allow them to perform the tasks themselves or to delegate these tasks to other people. A Pervasive Healthcare model requires efficient and secure information exchange between healthcare providers. The proposed methodology allows designers to build communication systems for the message exchange among heterogeneous healthcare systems, and to shift from systems that rely on informal communication of actors to

  13. Comparative evaluation of seven commercial products for human serum enrichment/depletion by shotgun proteomics.

    Science.gov (United States)

    Pisanu, Salvatore; Biosa, Grazia; Carcangiu, Laura; Uzzau, Sergio; Pagnozzi, Daniela

    2018-08-01

    Seven commercial products for human serum depletion/enrichment were tested and compared by shotgun proteomics. Methods were based on four different capturing agents: antibodies (Qproteome Albumin/IgG Depletion kit, ProteoPrep Immunoaffinity Albumin and IgG Depletion Kit, Top 2 Abundant Protein Depletion Spin Columns, and Top 12 Abundant Protein Depletion Spin Columns), specific ligands (Albumin/IgG Removal), mixture of antibodies and ligands (Albumin and IgG Depletion SpinTrap), and combinatorial peptide ligand libraries (ProteoMiner beads), respectively. All procedures, to a greater or lesser extent, allowed an increase of identified proteins. ProteoMiner beads provided the highest number of proteins; Albumin and IgG Depletion SpinTrap and ProteoPrep Immunoaffinity Albumin and IgG Depletion Kit resulted the most efficient in albumin removal; Top 2 and Top 12 Abundant Protein Depletion Spin Columns decreased the overall immunoglobulin levels more than other procedures, whereas specifically gamma immunoglobulins were mostly removed by Albumin and IgG Depletion SpinTrap, ProteoPrep Immunoaffinity Albumin and IgG Depletion Kit, and Top 2 Abundant Protein Depletion Spin Columns. Albumin/IgG Removal, a resin bound to a mixture of protein A and Cibacron Blue, behaved less efficiently than the other products. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Issues in Stratospheric Ozone Depletion.

    Science.gov (United States)

    Lloyd, Steven Andrew

    Following the announcement of the discovery of the Antarctic ozone hole in 1985 there have arisen a multitude of questions pertaining to the nature and consequences of polar ozone depletion. This thesis addresses several of these specific questions, using both computer models of chemical kinetics and the Earth's radiation field as well as laboratory kinetic experiments. A coupled chemical kinetic-radiative numerical model was developed to assist in the analysis of in situ field measurements of several radical and neutral species in the polar and mid-latitude lower stratosphere. Modeling was used in the analysis of enhanced polar ClO, mid-latitude diurnal variation of ClO, and simultaneous measurements of OH, HO2, H2O and O3. Most importantly, such modeling was instrumental in establishing the link between the observed ClO and BrO concentrations in the Antarctic polar vortex and the observed rate of ozone depletion. The principal medical concern of stratospheric ozone depletion is that ozone loss will lead to the enhancement of ground-level UV-B radiation. Global ozone climatology (40°S to 50°N latitude) was incorporated into a radiation field model to calculate the biologically accumulated dosage (BAD) of UV-B radiation, integrated over days, months, and years. The slope of the annual BAD as a function of latitude was found to correspond to epidemiological data for non-melanoma skin cancers for 30°-50°N. Various ozone loss scenarios were investigated. It was found that a small ozone loss in the tropics can provide as much additional biologically effective UV-B as a much larger ozone loss at higher latitudes. Also, for ozone depletions of > 5%, the BAD of UV-B increases exponentially with decreasing ozone levels. An important key player in determining whether polar ozone depletion can propagate into the populated mid-latitudes is chlorine nitrate, ClONO2. As yet this molecule is only indirectly accounted for in computer models and field

  15. Considerations of the Software Metric-based Methodology for Software Reliability Assessment in Digital I and C Systems

    International Nuclear Information System (INIS)

    Ha, J. H.; Kim, M. K.; Chung, B. S.; Oh, H. C.; Seo, M. R.

    2007-01-01

    Analog I and C systems have been replaced by digital I and C systems because the digital systems have many potential benefits to nuclear power plants in terms of operational and safety performance. For example, digital systems are essentially free of drifts, have higher data handling and storage capabilities, and provide improved performance through accuracy and computational capabilities. In addition, analog replacement parts become more difficult to obtain since they are obsolete and discontinued. There are, however, challenges to the introduction of digital technology into nuclear power plants because digital systems are more complex than analog systems and their operation and failure modes are different. In particular, software, which can be the core of functionality in digital systems, does not wear out physically like hardware, and its failure modes are not yet clearly defined. Thus, research to develop methodologies for software reliability assessment is still proceeding in safety-critical areas such as nuclear systems, aerospace and medical devices. Among them, a software metric-based methodology has been considered for the digital I and C systems of Korean nuclear power plants. Advantages and limitations of that methodology are identified, and requirements for its application to the digital I and C systems are considered in this study

  16. ICP-MS/MS-Based Ionomics: A Validated Methodology to Investigate the Biological Variability of the Human Ionome.

    Science.gov (United States)

    Konz, Tobias; Migliavacca, Eugenia; Dayon, Loïc; Bowman, Gene; Oikonomidi, Aikaterini; Popp, Julius; Rezzi, Serge

    2017-05-05

    We here describe the development, validation and application of a quantitative methodology for the simultaneous determination of 29 elements in human serum using state-of-the-art inductively coupled plasma triple quadrupole mass spectrometry (ICP-MS/MS). This new methodology offers high-throughput elemental profiling using simple dilution of minimal quantity of serum samples. We report the outcomes of the validation procedure including limits of detection/quantification, linearity of calibration curves, precision, recovery and measurement uncertainty. ICP-MS/MS-based ionomics was used to analyze human serum of 120 older adults. Following a metabolomic data mining approach, the generated ionome profiles were subjected to principal component analysis revealing gender and age-specific differences. The ionome of female individuals was marked by higher levels of calcium, phosphorus, copper and copper to zinc ratio, while iron concentration was lower with respect to male subjects. Age was associated with lower concentrations of zinc. These findings were complemented with additional readouts to interpret micronutrient status including ceruloplasmin, ferritin and inorganic phosphate. Our data supports a gender-specific compartmentalization of the ionome that may reflect different bone remodelling in female individuals. Our ICP-MS/MS methodology enriches the panel of validated "Omics" approaches to study molecular relationships between the exposome and the ionome in relation with nutrition and health.
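
    The multivariate step described above (principal component analysis of standardized element concentrations) can be sketched as follows; the ionome table and the sex labels below are synthetic stand-ins, not the study data.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(7)

      # Stand-in ionome table: 120 serum samples x 29 element concentrations
      # (synthetic numbers; the real study used ICP-MS/MS measurements).
      n_subjects, n_elements = 120, 29
      ionome = rng.lognormal(mean=0.0, sigma=0.3, size=(n_subjects, n_elements))
      sex = rng.integers(0, 2, n_subjects)               # 0 = male, 1 = female (synthetic labels)
      ionome[sex == 1, 0] *= 1.1                         # inject a small "calcium" shift for illustration

      # Standardise each element, then project onto the first two principal components.
      scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(ionome))

      # Compare group means along PC1 as a crude check for a sex-related separation.
      print("PC1 mean (male):  ", scores[sex == 0, 0].mean().round(3))
      print("PC1 mean (female):", scores[sex == 1, 0].mean().round(3))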

  17. Radiological Risk Assessment of Capstone Depleted Uranium Aerosols

    International Nuclear Information System (INIS)

    Hahn, Fletcher; Roszell, Laurie E.; Daxon, Eric G.; Guilmette, Ray A.; Parkhurst, MaryAnn

    2009-01-01

    Assessment of the health risk from exposure to aerosols of depleted uranium (DU) is an important outcome of the Capstone aerosol studies that established exposure ranges to personnel in armored combat vehicles perforated by DU munitions. Although the radiation exposure from DU is low, there is concern that DU deposited in the body may increase cancer rates. Radiation doses to various organs of the body resulting from the inhalation of DU aerosols measured in the Capstone studies were calculated using International Commission on Radiological Protection (ICRP) models. Organs and tissues with the highest calculated committed equivalent 50-yr doses were lung and extrathoracic tissues (nose and nasal passages, pharynx, larynx, mouth and thoracic lymph nodes). Doses to the bone surface and kidney were about 5 to 10% of the doses to the extrathoracic tissues. The methodologies of the ICRP International Steering Committee on Radiation Standards (ISCORS) were used for determining the whole body cancer risk. Organ-specific risks were estimated using ICRP and U.S. Environmental Protection Agency (EPA) methodologies. Risks for crew members and first responders were determined for selected scenarios based on the time interval of exposure and for vehicle and armor type. The lung was the organ with the highest cancer mortality risk, accounting for about 97% of the risks summed from all organs. The highest mean lifetime risk for lung cancer for the scenario with the longest exposure time interval (2 h) was 0.42%. This risk is low compared with the natural or background risk of 7.35%. These risks can be significantly reduced by using an existing ventilation system (if operable) and by reducing personnel time in the vehicle immediately after perforation

  18. Methodology for developing evidence-based clinical imaging guidelines: Joint recommendations by Korea society of radiology and national evidence-based healthcare collaborating agency

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Sol Ji; Jo, Ae Jeong; Choi, Jin A [Div. for Healthcare Technology Assessment Research, National Evidence-Based Healthcare Collaborating Agency, Seoul (Korea, Republic of); and others

    2017-01-15

    This paper is a summary of the methodology, including the protocol, used to develop evidence-based clinical imaging guidelines (CIGs) in Korea, led by the Korean Society of Radiology and the National Evidence-based Healthcare Collaborating Agency. This is the first protocol to reflect the process of developing diagnostic guidelines in Korea. The development protocol is largely divided into the following sections: set-up, process of adaptation, and finalization. The working group is composed of clinical imaging experts, and the developmental committee is composed of multidisciplinary experts to validate the methodology. The Korean CIGs will continue to be developed based on this protocol, and these guidelines will act as decision-support tools for clinicians as well as reduce medical radiation exposure.

  19. Methodology for developing evidence-based clinical imaging guidelines: Joint recommendations by Korea society of radiology and national evidence-based healthcare collaborating agency

    International Nuclear Information System (INIS)

    Choi, Sol Ji; Jo, Ae Jeong; Choi, Jin A

    2017-01-01

    This paper is a summary of the methodology, including the protocol, used to develop evidence-based clinical imaging guidelines (CIGs) in Korea, led by the Korean Society of Radiology and the National Evidence-based Healthcare Collaborating Agency. This is the first protocol to reflect the process of developing diagnostic guidelines in Korea. The development protocol is largely divided into the following sections: set-up, process of adaptation, and finalization. The working group is composed of clinical imaging experts, and the developmental committee is composed of multidisciplinary experts to validate the methodology. The Korean CIGs will continue to be developed based on this protocol, and these guidelines will act as decision-support tools for clinicians as well as reduce medical radiation exposure

  20. Exposure to nature counteracts aggression after depletion.

    Science.gov (United States)

    Wang, Yan; She, Yihan; Colarelli, Stephen M; Fang, Yuan; Meng, Hui; Chen, Qiuju; Zhang, Xin; Zhu, Hongwei

    2018-01-01

    Acts of self-control are more likely to fail after previous exertion of self-control, known as the ego depletion effect. Research has shown that depleted participants behave more aggressively than non-depleted participants, especially after being provoked. Although exposure to nature (e.g., a walk in the park) has been predicted to replenish resources common to executive functioning and self-control, the extent to which exposure to nature may counteract the depletion effect on aggression has yet to be determined. The present study investigated the effects of exposure to nature on aggression following depletion. Aggression was measured by the intensity of noise blasts participants delivered to an ostensible opponent in a competition reaction-time task. As predicted, an interaction occurred between depletion and environmental manipulations for provoked aggression. Specifically, depleted participants behaved more aggressively in response to provocation than non-depleted participants in the urban condition. However, provoked aggression did not differ between depleted and non-depleted participants in the natural condition. Moreover, within the depletion condition, participants in the natural condition had lower levels of provoked aggression than participants in the urban condition. This study suggests that a brief period of nature exposure may restore self-control and help depleted people regain control over aggressive urges. © 2017 Wiley Periodicals, Inc.