WorldWideScience

Sample records for methods modeling demonstrated

  1. Rapid Energy Modeling Workflow Demonstration

    Science.gov (United States)

    2013-10-31

    …trial at AutodeskVasari.com. Considered a lightweight version of Revit for energy modeling and analysis; many capabilities are in process of… Journal of Hospitality & Tourism Research 32(1):3-21. DOD (2005) Energy Managers Handbook. Retrieved from www.wbdg.org/ccb/DOD/DOD4/dodemhb.pdf

  2. Proceedings of the workshop on review of dose modeling methods for demonstration of compliance with the radiological criteria for license termination

    International Nuclear Information System (INIS)

    Nicholson, T.J.; Parrott, J.D.

    1998-05-01

    The workshop was one in a series to support NRC staff development of guidance for implementing the final rule on "Radiological Criteria for License Termination." The workshop topics included: dose models used for decommissioning reviews; identification of criteria for evaluating the acceptability of dose models; and selection of parameter values for demonstrating compliance with the final rule. The 2-day public workshop was jointly organized by RES and NMSS staff responsible for reviewing dose modeling methods used in decommissioning reviews and was noticed in the Federal Register (62 FR 51706). The workshop presenters included: NMSS and RES staff, who discussed dose modeling needs for licensing reviews and the development of guidance related to dose modeling and parameter selection; DOE national laboratory scientists, who responded to questions developed earlier by NRC staff and discussed their various federally sponsored dose models (i.e., the DandD, RESRAD, and MEPAS codes); and an EPA scientist, who presented details on the EPA dose assessment model (i.e., the PRESTO code). The workshop was formatted to give attendees opportunities to observe computer demonstrations of the dose codes presented. More than 120 attendees participated, including staff from NRC Headquarters and the Regions and from Agreement States; industry representatives and consultants; scientists from EPA, DOD, DNFSB, DOE, and the national laboratories; and interested members of the public. A complete transcript of the workshop, including viewgraphs and attendance lists, is available in the NRC Public Document Room. This NUREG/CP documents the formal presentations made during the workshop and provides a preface outlining the workshop's focus, objectives, background, and the topics and questions provided to the invited speakers and raised during the panel discussion. NUREG/CP-0163 also provides technical bases supporting the development of decommissioning

  3. Introduction to Methods Demonstrations for Authentication

    International Nuclear Information System (INIS)

    Kouzes, Richard T.; Hansen, Randy R.; Pitts, W. K.

    2002-01-01

    During the Trilateral Initiative Technical Workshop on Authentication and Certification, PNNL will demonstrate some authentication technologies. This paper briefly describes the motivation for these demonstrations and provides background on them.

  4. Modeling framework for representing long-term effectiveness of best management practices in addressing hydrology and water quality problems: Framework development and demonstration using a Bayesian method

    Science.gov (United States)

    Liu, Yaoze; Engel, Bernard A.; Flanagan, Dennis C.; Gitau, Margaret W.; McMillan, Sara K.; Chaubey, Indrajeet; Singh, Shweta

    2018-05-01

    Best management practices (BMPs) are popular approaches used to improve hydrology and water quality. Uncertainties in BMP effectiveness over time may result in overestimating long-term efficiency in watershed planning strategies. To represent varying long-term BMP effectiveness in hydrologic/water quality models, a high-level, forward-looking modeling framework was developed. The components of the framework consist of establishment period efficiency, starting efficiency, efficiency for each storm event, efficiency between maintenance, and efficiency over the life cycle. Combined, they represent long-term efficiency for a specific type of practice and specific environmental concern (runoff/pollutant). An approach for possible implementation of the framework was discussed. The long-term impacts of grass buffer strips (agricultural BMP) and bioretention systems (urban BMP) in reducing total phosphorus were simulated to demonstrate the framework. Data gaps were captured in estimating the long-term performance of the BMPs. A Bayesian method was used to match the simulated distribution of long-term BMP efficiencies with the observed distribution, under the assumption that the observed data represented long-term BMP efficiencies. The simulated distribution matched the observed distribution well, with only small total predictive uncertainties. With additional data, the same method can be used to further improve the simulation results. The modeling framework and results of this study, which can be adopted in hydrologic/water quality models to better represent long-term BMP effectiveness, can help improve decision support systems for creating long-term stormwater management strategies for watershed management projects.
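
    The Bayesian matching step described above can be sketched in a few lines: a simulated long-term efficiency distribution, governed here by a single hypothetical decay parameter, is calibrated against observed efficiencies with a random-walk Metropolis sampler. The decay model, prior, and data below are illustrative assumptions, not values from the study.

    ```python
    # Minimal sketch of Bayesian matching of a simulated BMP-efficiency
    # distribution to observations (illustrative only; the decay model,
    # prior, and data below are hypothetical, not from the study).
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical observed long-term total-phosphorus removal efficiencies (%)
    observed = np.array([42.0, 48.0, 39.0, 45.0, 51.0, 44.0, 40.0, 47.0])

    def simulate_efficiency(k, e0=60.0, years=20.0):
        """Toy long-term efficiency model: exponential decay from a
        starting efficiency e0 over the practice's life cycle."""
        return e0 * np.exp(-k * years)

    def log_posterior(k, sigma=4.0):
        if k <= 0.0:                       # flat prior on k > 0
            return -np.inf
        mu = simulate_efficiency(k)
        # Gaussian likelihood of the observed efficiencies around the simulation
        return -0.5 * np.sum(((observed - mu) / sigma) ** 2)

    # Random-walk Metropolis over the single uncertain parameter k
    k, lp = 0.02, log_posterior(0.02)
    samples = []
    for _ in range(20000):
        k_new = k + 0.005 * rng.normal()
        lp_new = log_posterior(k_new)
        if np.log(rng.uniform()) < lp_new - lp:
            k, lp = k_new, lp_new
        samples.append(k)

    post = np.array(samples[5000:])        # discard burn-in
    print(f"posterior k: {post.mean():.4f} +/- {post.std():.4f}")
    print(f"implied 20-year efficiency: {simulate_efficiency(post.mean()):.1f}%")
    ```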

  5. Background Model for the Majorana Demonstrator

    Science.gov (United States)

    Cuesta, C.; Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J.; Leviner, L. E.; Loach, J. C.; MacMullin, J.; MacMullin, S.; Martin, R. D.; Meijer, S.; Mertens, S.; Nomachi, M.; Orrell, J. L.; O'Shaughnessy, C.; Overman, N. R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Schubert, A. G.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Suriano, A. M.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.

    The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials and analytical methods for background rejection, for example powerful pulse-shape analysis techniques that take advantage of p-type point-contact HPGe detector technology. The effectiveness of these methods is assessed using simulations of the different background components, whose purity levels are constrained by radioassay measurements.

  6. Demonstration model of LEP bending magnet

    CERN Multimedia

    CERN PhotoLab

    1981-01-01

    To save iron and raise the flux density, the LEP bending magnet laminations were separated by spacers and the space between the laminations was filled with concrete. This is a demonstration model: one part has the spaced laminations only, the other part is filled with concrete.

  7. Three-dimensional one-way bubble tracking method for the prediction of developing bubble-slug flows in a vertical pipe. 1st report, models and demonstration

    International Nuclear Information System (INIS)

    Tamai, Hidesada; Tomiyama, Akio

    2004-01-01

    A three-dimensional one-way bubble tracking method is one of the most promising numerical methods for predicting a developing bubble flow in a vertical pipe, provided that several constitutive models are prepared. In this study, the bubble shape, the equation of bubble motion, the liquid velocity profile, the pressure field, turbulent fluctuation and bubble coalescence are modeled based on available knowledge of bubble dynamics. Bubble shapes are classified into four types in terms of bubble equivalent diameter. A wake velocity model is introduced to simulate the approach of bubbles toward one another due to wake entrainment. Bubble coalescence is treated as a stochastic phenomenon with the aid of coalescence probabilities that depend on the sizes of the two interacting bubbles. The proposed method can predict the spatiotemporal evolution of flow pattern in a developing bubble-slug flow. (author)
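
    A minimal sketch of the stochastic coalescence idea: bubbles rise, a trailing bubble accelerates in the wake of a leader, and coalescence occurs randomly with a probability that depends on the sizes of the two interacting bubbles. The rise-velocity correlation, wake model, and probability law below are hypothetical stand-ins for the paper's constitutive models.

    ```python
    # Minimal sketch of treating bubble coalescence as a stochastic event
    # (illustrative; the velocity correlation, wake model, and probability
    # law are hypothetical stand-ins for the paper's constitutive models).
    import numpy as np

    rng = np.random.default_rng(1)

    # Bubble state: axial position z (m) and equivalent diameter d (m)
    z = rng.uniform(0.0, 0.5, size=30)
    d = rng.uniform(0.002, 0.006, size=30)

    def rise_velocity(d):
        """Toy terminal-rise correlation (stand-in for the equation of motion)."""
        return 0.25 + 10.0 * d

    def coalescence_probability(d1, d2):
        """Hypothetical size-dependent coalescence probability: more likely
        for bubbles of similar size."""
        return 0.5 * np.exp(-abs(d1 - d2) / 0.002)

    dt, wake_range = 0.005, 0.01
    for _ in range(1000):
        v = rise_velocity(d)
        order = np.argsort(z)                    # bottom-to-top ordering
        for trail, lead in zip(order[:-1], order[1:]):
            gap = z[lead] - z[trail]
            if 0.0 < gap < wake_range:
                v[trail] *= 1.5                  # wake acceleration of trailing bubble
                if rng.uniform() < coalescence_probability(d[trail], d[lead]) * dt:
                    # Merge: conserve volume, keep the leading bubble
                    d[lead] = (d[lead] ** 3 + d[trail] ** 3) ** (1.0 / 3.0)
                    z[trail] = np.nan            # mark trailing bubble as absorbed
        keep = ~np.isnan(z)
        z = (z + v * dt)[keep]
        d = d[keep]

    print(f"{len(z)} bubbles remain after coalescence events")
    ```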

  8. Demonstration and evaluation of a method for assessing mediated moderation.

    Science.gov (United States)

    Morgan-Lopez, Antonio A; MacKinnon, David P

    2006-02-01

    Mediated moderation occurs when the interaction between two variables affects a mediator, which then affects a dependent variable. In this article, we describe the mediated moderation model and evaluate it with a statistical simulation using an adaptation of product-of-coefficients methods to assess mediation. We also demonstrate the use of this method with a substantive example from the adolescent tobacco literature. In the simulation, relative bias (RB) in point estimates and standard errors did not exceed problematic levels of ±10%, although systematic variability in RB was accounted for by parameter size, sample size, and nonzero direct effects. Power to detect mediated moderation effects appears to be severely compromised under one particular combination of conditions: when the component variables that make up the interaction terms are correlated and partial mediated moderation exists. Implications for the estimation of mediated moderation effects in experimental and nonexperimental research are discussed.
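
    The product-of-coefficients approach can be illustrated with a short simulation: regress the mediator on the treatment, moderator, and their interaction (path a, taken from the interaction term), regress the outcome on the mediator (path b), and form the mediated moderation estimate a*b with a first-order (Sobel-type) standard error. The data-generating values below are illustrative, not taken from the article.

    ```python
    # Minimal sketch of the product-of-coefficients approach to mediated
    # moderation (illustrative data; variable names are hypothetical).
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500

    x = rng.normal(size=n)                 # treatment
    z = rng.normal(size=n)                 # moderator
    xz = x * z                             # interaction term
    m = 0.4 * x + 0.3 * z + 0.5 * xz + rng.normal(size=n)   # mediator
    y = 0.6 * m + 0.2 * x + rng.normal(size=n)              # outcome

    def ols(X, y):
        """Return OLS coefficients and their standard errors."""
        X = np.column_stack([np.ones(len(y)), *X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (len(y) - X.shape[1])
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
        return beta, se

    # Path a: effect of the X*Z interaction on the mediator
    beta_m, se_m = ols([x, z, xz], m)
    a, se_a = beta_m[3], se_m[3]
    # Path b: effect of the mediator on the outcome, controlling for X and Z
    beta_y, se_y = ols([m, x, z], y)
    b, se_b = beta_y[1], se_y[1]

    ab = a * b                                         # mediated moderation effect
    se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)   # first-order (Sobel) SE
    print(f"a*b = {ab:.3f}, SE = {se_ab:.3f}, z = {ab / se_ab:.2f}")
    ```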

  9. Modeling Methods

    Science.gov (United States)

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics. Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e., groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
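
    The calibration idea in the last sentences can be sketched with a toy model: pick a recharge value, simulate heads, and adjust recharge until simulated and measured heads agree best. The one-dimensional Dupuit aquifer and all numbers below are illustrative assumptions.

    ```python
    # Minimal sketch of estimating recharge by model calibration: adjust the
    # recharge parameter of a toy groundwater-flow model until simulated
    # heads best match measured heads (all numbers are illustrative).
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy 1D unconfined aquifer, Dupuit solution: h^2 = hL^2 + (R/K) x (L - x)
    K, L, hL = 10.0, 1000.0, 20.0                # m/d, m, m (assumed properties)
    x_obs = np.array([200.0, 400.0, 600.0, 800.0])
    h_obs = np.array([21.8, 22.9, 23.0, 22.2])   # measured heads (m)

    def simulated_heads(R):
        return np.sqrt(hL**2 + (R / K) * x_obs * (L - x_obs))

    def misfit(R):
        return np.sum((simulated_heads(R) - h_obs) ** 2)

    # The best-fit recharge is the model-generated estimate of recharge
    result = minimize_scalar(misfit, bounds=(1e-6, 1e-2), method="bounded")
    print(f"calibrated recharge: {result.x * 1000:.3f} mm/d")
    ```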

  10. Demonstration of two-phase Direct Numerical Simulation (DNS) methods potentiality to give information to averaged models: application to bubbles column

    International Nuclear Information System (INIS)

    Magdeleine, S.

    2009-11-01

    This work is part of a long-term project that aims at using two-phase Direct Numerical Simulation (DNS) to give information to averaged models. For now, it is limited to isothermal bubbly flows with no phase change. It can be subdivided into two parts. First, theoretical developments are made in order to build an equivalent of Large Eddy Simulation (LES) for two-phase flows, called Interfaces and Sub-grid Scales (ISS). After the implementation of the ISS model in our code, called Trio_U, a set of various cases is used to validate the model. Then, special tests are made in order to optimize the model for our particular bubbly flows. We thus showed the capacity of the ISS model to produce a pertinent solution at low cost. Second, we use the ISS model to perform simulations of bubbly flows in a column. Results of these simulations are averaged to obtain the quantities that appear in the mass, momentum and interfacial area density balances. We thus proceeded to an a priori test of a complete one-dimensional averaged model. We showed that this model predicts the simplest flows (laminar and monodisperse) well. Moreover, the single-pressure hypothesis, which is often made in averaged models such as CATHARE, NEPTUNE and RELAP5, is satisfied in such flows. In contrast, without a polydisperse model, the drag is over-predicted and the uncorrelated A_i flux needs a closure law. Finally, we showed that in turbulent flows, fluctuations of velocity and pressure in the liquid phase are not represented by the tested averaged model. (author)

  11. Risk-Informed Monitoring, Verification and Accounting (RI-MVA). An NRAP White Paper Documenting Methods and a Demonstration Model for Risk-Informed MVA System Design and Operations in Geologic Carbon Sequestration

    Energy Technology Data Exchange (ETDEWEB)

    Unwin, Stephen D.; Sadovsky, Artyom; Sullivan, E. C.; Anderson, Richard M.

    2011-09-30

    This white paper accompanies a demonstration model that implements methods for the risk-informed design of monitoring, verification and accounting (RI-MVA) systems in geologic carbon sequestration projects. The intent is that this model will ultimately be integrated with, or interfaced with, the National Risk Assessment Partnership (NRAP) integrated assessment model (IAM). The RI-MVA methods described here apply optimization techniques in the analytical environment of NRAP risk profiles to allow systematic identification and comparison of the risk and cost attributes of MVA design options.

  12. Demonstrating sustainable energy: A review-based model of sustainable energy demonstration projects

    NARCIS (Netherlands)

    Bossink, Bart

    2017-01-01

    This article develops a model of sustainable energy demonstration projects, based on a review of 229 scientific publications on demonstrations in renewable and sustainable energy. The model addresses the basic organizational characteristics (aim, cooperative form, and physical location) and learning

  13. Demonstrations in Solute Transport Using Dyes: Part II. Modeling.

    Science.gov (United States)

    Butters, Greg; Bandaranayake, Wije

    1993-01-01

    A solution of the convection-dispersion equation is used to describe the solute breakthrough curves generated in the demonstrations in the companion paper. Estimation of the best fit model parameters (solute velocity, dispersion, and retardation) is illustrated using the method of moments for an example data set. (Author/MDH)
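
    The method of moments referred to here reduces, for a conservative pulse input, to two formulas: the first normalized temporal moment of the breakthrough curve gives t_mean = L/v, and the second central moment gives var_t = 2DL/v^3. A minimal sketch with synthetic data (assuming retardation R = 1; the column length and curve below are illustrative):

    ```python
    # Minimal sketch of the method of temporal moments for a breakthrough
    # curve from the convection-dispersion equation (illustrative data;
    # assumes a conservative pulse input so retardation R = 1).
    import numpy as np
    from scipy.integrate import trapezoid

    L = 0.30                                  # column length (m), assumed
    t = np.linspace(0.1, 10.0, 400)           # time (h)
    # Hypothetical measured concentrations (a noisy skewed pulse)
    rng = np.random.default_rng(3)
    c = np.exp(-(np.log(t) - np.log(3.0))**2 / 0.05) + 0.01 * rng.normal(size=t.size)

    m0 = trapezoid(c, t)                                 # zeroth moment (area)
    t_mean = trapezoid(t * c, t) / m0                    # first normalized moment
    var_t = trapezoid((t - t_mean)**2 * c, t) / m0       # second central moment

    v = L / t_mean                            # solute velocity: t_mean = L / v
    D = var_t * v**3 / (2.0 * L)              # dispersion: var_t = 2 D L / v^3
    print(f"v = {v:.4f} m/h, D = {D:.3e} m^2/h")
    ```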

  14. Buried Waste Integrated Demonstration stakeholder involvement model

    International Nuclear Information System (INIS)

    Kaupanger, R.M.; Kostelnik, K.M.; Milam, L.M.

    1994-04-01

    The Buried Waste Integrated Demonstration (BWID) is a program funded by the US Department of Energy (DOE) Office of Technology Development. BWID supports the applied research, development, demonstration, and evaluation of a suite of advanced technologies that together form a comprehensive remediation system for the effective and efficient remediation of buried waste. Stakeholder participation in the DOE Environmental Management decision-making process is critical to remediation efforts. Appropriate mechanisms for communication with the public, private sector, regulators, elected officials, and others are being aggressively pursued by BWID to permit informed participation. This document summarizes public outreach efforts during FY-93 and presents a strategy for expanded stakeholder involvement during FY-94

  15. Rapid Energy Modeling Workflow Demonstration Project

    Science.gov (United States)

    2014-01-01

    …app FormIt for conceptual modeling, with further refinement available in Revit or Vasari. Modeling can also be done in Revit (detailed and conceptual)… referenced building model while in the field. • Autodesk® Revit is a BIM software application with integrated energy and carbon analyses driven by Green… FormIt, Revit and Vasari, and (3) comparative analysis. The energy results of these building analyses are represented as annual energy use for natural…

  16. Facility Modeling Capability Demonstration Summary Report

    International Nuclear Information System (INIS)

    Key, Brian P.; Sadasivan, Pratap; Fallgren, Andrew James; Demuth, Scott Francis; Aleman, Sebastian E.; Almeida, Valmor F. de; Chiswell, Steven R.; Hamm, Larry; Tingey, Joel M.

    2017-01-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL), and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration's (NNSA's) Office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and the inverse problem can then ideally be solved through modeling and simulation to estimate characteristics of facility operation, such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focused on forward modeling of PUREX reprocessing facility operating conditions from fuel storage and chopping to effluent and emission measurements.

  17. Facility Modeling Capability Demonstration Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Key, Brian P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sadasivan, Pratap [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fallgren, Andrew James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Demuth, Scott Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Aleman, Sebastian E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); de Almeida, Valmor F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chiswell, Steven R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hamm, Larry [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-02-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL), and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration’s (NNSA’s) Office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and the inverse problem can then ideally be solved through modeling and simulation to estimate characteristics of facility operation, such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focused on forward modeling of PUREX reprocessing facility operating conditions from fuel storage and chopping to effluent and emission measurements.

  18. Pulsatile fluidic pump demonstration and predictive model application

    International Nuclear Information System (INIS)

    Morgan, J.G.; Holland, W.D.

    1986-04-01

    Pulsatile fluidic pumps were developed as a remotely controlled method of transferring or mixing feed solutions. A test in the Integrated Equipment Test facility demonstrated the performance of a critically safe geometry pump suitable for use in a 0.1-ton/d heavy metal (HM) fuel reprocessing plant. A predictive model was developed to calculate output flows under a wide range of external system conditions. Predictive and experimental flow rates are compared for both submerged and unsubmerged fluidic pump cases

  19. Demonstration test of in-service inspection methods

    International Nuclear Information System (INIS)

    Takumi, Kenji

    1987-01-01

    The major objectives of the project are: (1) to demonstrate the reliability of a manual ultrasonic flaw detector and techniques that are used in operating light water reactor plants and (2) to demonstrate the performance and reliability of an automatic ultrasonic flaw detector that is designed to shorten the time required for ISI work and reduce the exposure risk of inspection personnel. The test project consists of three stages. In the first stage, which ended in 1982, defects were added intentionally to a model structure of the same size as a typical 1.1 million kW BWR plant and manual ultrasonic flaw detection testing was performed. In the second stage, completed in 1984, automatic eddy-current flaw detection testing was carried out for defects in heat transfer piping of a PWR steam generator. In the third stage, which started in 1981 and ended in March 1987, a newly developed automatic ultrasonic flaw detector was applied to testing of defects used for the manual detector performance evaluation. Results have shown that the automatic eddy-current flaw detector under test has adequately stable performance for practical use, with very high reproducibility that permits close inspection of long-term deterioration in heat transfer pipes. It has also been revealed that both the manual and automatic ultrasonic flaw detectors under test can detect all defects that do not comply with the ASME standards. (Nogami, K.)

  20. Demonstration recommendations for accelerated testing of concrete decontamination methods

    Energy Technology Data Exchange (ETDEWEB)

    Dickerson, K.S.; Ally, M.R.; Brown, C.H.; Morris, M.I.; Wilson-Nichols, M.J.

    1995-12-01

    A large number of aging US Department of Energy (DOE) surplus facilities located throughout the US require deactivation, decontamination, and decommissioning. Although several technologies are available commercially for concrete decontamination, emerging technologies with potential to reduce secondary waste and minimize the impact and risk to workers and the environment are needed. In response to these needs, the Accelerated Testing of Concrete Decontamination Methods project team described the nature and extent of contaminated concrete within the DOE complex and identified applicable emerging technologies. Existing information used to describe the nature and extent of contaminated concrete indicates that the most frequently occurring radiological contaminants are ¹³⁷Cs, ²³⁸U (and its daughters), ⁶⁰Co, ⁹⁰Sr, and tritium. The total area of radionuclide-contaminated concrete within the DOE complex is estimated to be in the range of 7.9 × 10⁸ ft², or approximately 18,000 acres. Concrete decontamination problems were matched with emerging technologies to recommend demonstrations considered to provide the most benefit to decontamination of concrete within the DOE complex. The emerging technologies with the most potential benefit were biological decontamination, electro-hydraulic scabbling, electrokinetics, and microwave scabbling.

  1. Demonstration recommendations for accelerated testing of concrete decontamination methods

    International Nuclear Information System (INIS)

    Dickerson, K.S.; Ally, M.R.; Brown, C.H.; Morris, M.I.; Wilson-Nichols, M.J.

    1995-12-01

    A large number of aging US Department of Energy (DOE) surplus facilities located throughout the US require deactivation, decontamination, and decommissioning. Although several technologies are available commercially for concrete decontamination, emerging technologies with potential to reduce secondary waste and minimize the impact and risk to workers and the environment are needed. In response to these needs, the Accelerated Testing of Concrete Decontamination Methods project team described the nature and extent of contaminated concrete within the DOE complex and identified applicable emerging technologies. Existing information used to describe the nature and extent of contaminated concrete indicates that the most frequently occurring radiological contaminants are ¹³⁷Cs, ²³⁸U (and its daughters), ⁶⁰Co, ⁹⁰Sr, and tritium. The total area of radionuclide-contaminated concrete within the DOE complex is estimated to be in the range of 7.9 × 10⁸ ft², or approximately 18,000 acres. Concrete decontamination problems were matched with emerging technologies to recommend demonstrations considered to provide the most benefit to decontamination of concrete within the DOE complex. The emerging technologies with the most potential benefit were biological decontamination, electro-hydraulic scabbling, electrokinetics, and microwave scabbling.

  2. Demonstration of the gypsy moth energy budget microclimate model

    Science.gov (United States)

    D. E. Anderson; D. R. Miller; W. E. Wallner

    1991-01-01

    The use of a "User friendly" version of "GMMICRO" model to quantify the local environment and resulting core temperature of GM larvae under different conditions of canopy defoliation, different forest sites, and different weather conditions was demonstrated.

  3. Optics Demonstration with Student Eyeglasses Using the Inquiry Method

    Science.gov (United States)

    James, Mark C.

    2011-01-01

    A favorite qualitative optics demonstration I perform in introductory physics classes makes use of students' eyeglasses to introduce converging and diverging lenses. Taking on the persona of a magician, I walk to the back of the classroom and approach a student wearing glasses. The top part of Fig. 1 shows a glasses-wearing student who is…

  4. Childhood Obesity Research Demonstration project: Cross-site evaluation method

    Science.gov (United States)

    The Childhood Obesity Research Demonstration (CORD) project links public health and primary care interventions in three projects described in detail in accompanying articles in this issue of Childhood Obesity. This article describes a comprehensive evaluation plan to determine the extent to which th...

  5. DEMONSTRATION BULLETIN: COLLOID POLISHING FILTER METHOD - FILTER FLOW TECHNOLOGY, INC.

    Science.gov (United States)

    The Filter Flow Technology, Inc. (FFT) Colloid Polishing Filter Method (CPFM) was tested as a transportable, trailer mounted, system that uses sorption and chemical complexing phenomena to remove heavy metals and nontritium radionuclides from water. Contaminated waters can be pro...

  6. A demonstrated method for upgrading existing control room interiors

    International Nuclear Information System (INIS)

    Brice, R.M.; Terrill, D.; Brice, R.M.

    1991-01-01

    The main control room (MCR) of any nuclear power plant can justifiably be called the most important staffed area in the entire facility. The interior workstation configuration, equipment arrangement, and staff placement all affect the efficiency and habitability of the room. Many guidelines are available that describe various human factors principles to use when upgrading the environment of the MCR, involving anthropometric standards and rules for placement of peripheral equipment. Due to the variations in plant design, however, hard-and-fast rules have not been, and cannot be, standardized for retrofits in any significant way. How, then, does one develop criteria for the improvement of an MCR? The purpose of this paper is to discuss, from the designer's point of view, a method for the collection of information, development of criteria, and creation of a final design for an MCR upgrade. This method is best understood by describing its successful implementation at Tennessee Valley Authority's Sequoyah nuclear plant.

  7. Acting Locally: A Guide to Model, Community and Demonstration Forests.

    Science.gov (United States)

    Keen, Debbie Pella

    1993-01-01

    Describes Canada's efforts in sustainable forestry, which refers to management practices that ensure long-term health of forest ecosystems so that they can continue to provide environmental, social, and economic benefits. Describes model forests, community forests, and demonstration forests and lists contacts for each of the projects. (KS)

  8. Modeling of Solid State Transformer for the FREEDM System Demonstration

    Science.gov (United States)

    Jiang, Youyuan

    The Solid State Transformer (SST) is an essential component in the FREEDM system. This research focuses on the modeling of the SST and the controller hardware-in-the-loop (CHIL) implementation of the SST for the support of the FREEDM system demonstration. The energy-based control strategy for a three-stage SST is analyzed and applied. A simplified average model of the three-stage SST that is suitable for simulation in a real time digital simulator (RTDS) has been developed in this study. The model is also useful for general time-domain power system analysis and simulation. The proposed simplified average model has been validated in MATLAB and PLECS. The accuracy of the model has been verified through comparison with the cycle-by-cycle average (CCA) model and detailed switching model. These models are also implemented in PSCAD, and a special strategy to implement the phase shift modulation has been proposed to enable the switching model simulation in PSCAD. The implementation of the CHIL test environment of the SST in RTDS is described in this report. The parameter setup of the model is discussed in detail. One of the difficulties, examined in this paper, is the choice of the damping factor. The grounding of the system also has a large impact on the RTDS simulation. Another problem is that the performance of the system is highly dependent on switch parameters such as voltage and current ratings. Finally, the functionalities of the SST have been realized on the platform. The distributed energy storage interface, power injection, and reverse power flow have been validated. Some limitations are noticed and discussed through the simulation on RTDS.

  9. Error analysis in predictive modelling demonstrated on mould data.

    Science.gov (United States)

    Baranyi, József; Csernus, Olívia; Beczner, Judit

    2014-01-17

    The purpose of this paper was to develop a predictive model for the effect of temperature and water activity on the growth rate of Aspergillus niger and to determine the sources of the error when the model is used for prediction. Parallel mould growth curves, derived from the same spore batch, were generated and fitted to determine their growth rate. The variances of replicate ln(growth-rate) estimates were used to quantify the experimental variability inherent in the method of determining the growth rate. The environmental variability was quantified by the variance of the respective means of replicates. The idea is analogous to the "within group" and "between groups" variability concepts of ANOVA procedures. A (secondary) model, with temperature and water activity as explanatory variables, was fitted to the natural logarithm of the growth rates determined by the primary model. The model error and the experimental and environmental errors were ranked according to their contribution to the total error of prediction. Our method can readily be applied to analysing the error structure of predictive models of bacterial growth, too.
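
    The "within group" / "between groups" analogy can be made concrete with a short calculation: the pooled within-group variance of replicate ln(growth-rate) estimates quantifies the experimental error, and the variance of the group means quantifies the environmental error. The replicate values below are illustrative.

    ```python
    # Minimal sketch of separating experimental ("within group") from
    # environmental ("between groups") variability in ln(growth-rate)
    # estimates, in the spirit of the ANOVA analogy (data are illustrative).
    import numpy as np

    # ln(growth rate) of replicate mould growth curves at three
    # temperature / water-activity conditions (hypothetical values)
    groups = [
        np.array([-2.10, -2.05, -2.21]),
        np.array([-1.55, -1.49, -1.60, -1.52]),
        np.array([-1.90, -1.84, -1.95]),
    ]

    # Experimental variability: pooled within-group variance of replicates
    within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_within = sum(len(g) - 1 for g in groups)
    var_experimental = within / df_within

    # Environmental variability: variance of the group means
    means = np.array([g.mean() for g in groups])
    var_environmental = means.var(ddof=1)

    print(f"experimental variance: {var_experimental:.5f}")
    print(f"environmental variance: {var_environmental:.5f}")
    ```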

  10. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit… of the model correction factor method, is that in simpler form, not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods…

  11. Modelling, Construction, and Testing of a Simple HTS Machine Demonstrator

    DEFF Research Database (Denmark)

    Jensen, Bogi Bech; Abrahamsen, Asger Bech

    2011-01-01

    This paper describes the construction, modeling and experimental testing of a high temperature superconducting (HTS) machine prototype employing second generation (2G) coated conductors in the field winding. The prototype is constructed in a simple way, with the purpose of having an inexpensive way of validating finite element (FE) simulations and gaining a better understanding of HTS machines. 3D FE simulations of the machine are compared to measured current vs. voltage (IV) curves for the tape on its own. It is validated that this method can be used to predict the critical current of the HTS tape installed in the machine. The measured torque as a function of rotor position is also reproduced by the 3D FE model.

  12. TRAC methods and models

    International Nuclear Information System (INIS)

    Mahaffy, J.H.; Liles, D.R.; Bott, T.F.

    1981-01-01

    The numerical methods and physical models used in the Transient Reactor Analysis Code (TRAC) versions PD2 and PF1 are discussed. Particular emphasis is placed on TRAC-PF1, the version specifically designed to analyze small-break loss-of-coolant accidents

  13. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod M.C. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
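
    The model-selection logic can be sketched as follows: fit a single Gaussian by maximum likelihood, fit a two-component mixture with a hand-rolled EM loop, and compare AIC = 2k - 2 ln L values. This toy example is not Middleton's Class A model; the data and starting values are illustrative.

    ```python
    # Minimal sketch of using the AIC to choose between a single Gaussian
    # and a two-component Gaussian mixture fitted by EM (illustrative data;
    # not Middleton's Class A model itself).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 100)])

    # Model 1: single Gaussian (2 parameters), ML fit is just mean/std
    ll1 = norm.logpdf(x, x.mean(), x.std()).sum()
    aic1 = 2 * 2 - 2 * ll1

    # Model 2: two-component mixture (5 parameters) fitted by EM
    w, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for _ in range(200):
        # E-step: responsibilities of component 1 for each point
        p1 = w * norm.pdf(x, mu[0], sd[0])
        p2 = (1 - w) * norm.pdf(x, mu[1], sd[1])
        r = p1 / (p1 + p2)
        # M-step: weighted maximum-likelihood updates
        w = r.mean()
        mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
        sd = np.sqrt([np.average((x - mu[0])**2, weights=r),
                      np.average((x - mu[1])**2, weights=1 - r)])
    ll2 = np.log(w * norm.pdf(x, mu[0], sd[0])
                 + (1 - w) * norm.pdf(x, mu[1], sd[1])).sum()
    aic2 = 2 * 5 - 2 * ll2

    print(f"AIC single Gaussian: {aic1:.1f}, AIC mixture: {aic2:.1f}")
    print("mixture preferred" if aic2 < aic1 else "single Gaussian preferred")
    ```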

  14. A demonstration of adjoint methods for multi-dimensional remote sensing of the atmosphere and surface

    International Nuclear Information System (INIS)

    Martin, William G.K.; Hasekamp, Otto P.

    2018-01-01

    Highlights:
    • We demonstrate adjoint methods for atmospheric remote sensing in a two-dimensional setting.
    • Searchlight functions are used to handle the singularity of measurement response functions.
    • Adjoint methods require two radiative transfer calculations to evaluate the measurement misfit function and its derivatives with respect to all unknown parameters.
    • Synthetic retrieval studies show the scalability of adjoint methods to problems with thousands of measurements and unknown parameters.
    • Adjoint methods and the searchlight function technique are generalizable to 3D remote sensing.

    Abstract: In previous work, we derived the adjoint method as a computationally efficient path to three-dimensional (3D) retrievals of clouds and aerosols. In this paper we will demonstrate the use of adjoint methods for retrieving two-dimensional (2D) fields of cloud extinction. The demonstration uses a new 2D radiative transfer solver (FSDOM). This radiation code was augmented with adjoint methods to allow efficient derivative calculations needed to retrieve cloud and surface properties from multi-angle reflectance measurements. The code was then used in three synthetic retrieval studies. Our retrieval algorithm adjusts the cloud extinction field and surface albedo to minimize the measurement misfit function with a gradient-based, quasi-Newton approach. At each step we compute the value of the misfit function and its gradient with two calls to the solver FSDOM. First we solve the forward radiative transfer equation to compute the residual misfit with measurements, and second we solve the adjoint radiative transfer equation to compute the gradient of the misfit function with respect to all unknowns. The synthetic retrieval studies verify that adjoint methods are scalable to retrieval problems with many measurements and unknowns. We can retrieve the vertically-integrated optical depth of moderately thick clouds as a function of the horizontal coordinate. It is also

  15. A demonstration of adjoint methods for multi-dimensional remote sensing of the atmosphere and surface

    Science.gov (United States)

    Martin, William G. K.; Hasekamp, Otto P.

    2018-01-01

    In previous work, we derived the adjoint method as a computationally efficient path to three-dimensional (3D) retrievals of clouds and aerosols. In this paper we will demonstrate the use of adjoint methods for retrieving two-dimensional (2D) fields of cloud extinction. The demonstration uses a new 2D radiative transfer solver (FSDOM). This radiation code was augmented with adjoint methods to allow efficient derivative calculations needed to retrieve cloud and surface properties from multi-angle reflectance measurements. The code was then used in three synthetic retrieval studies. Our retrieval algorithm adjusts the cloud extinction field and surface albedo to minimize the measurement misfit function with a gradient-based, quasi-Newton approach. At each step we compute the value of the misfit function and its gradient with two calls to the solver FSDOM. First we solve the forward radiative transfer equation to compute the residual misfit with measurements, and second we solve the adjoint radiative transfer equation to compute the gradient of the misfit function with respect to all unknowns. The synthetic retrieval studies verify that adjoint methods are scalable to retrieval problems with many measurements and unknowns. We can retrieve the vertically-integrated optical depth of moderately thick clouds as a function of the horizontal coordinate. It is also possible to retrieve the vertical profile of clouds that are separated by clear regions. The vertical profile retrievals improve for smaller cloud fractions. This leads to the conclusion that cloud edges actually increase the amount of information that is available for retrieving the vertical profile of clouds. However, to exploit this information one must retrieve the horizontally heterogeneous cloud properties with a 2D (or 3D) model. This prototype shows that adjoint methods can efficiently compute the gradient of the misfit function. This work paves the way for the application of similar methods to 3D remote
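
    The two-call pattern at the heart of this approach, one forward solve for the residual and one adjoint solve for the gradient, can be sketched with a linear toy operator standing in for the radiative transfer solver; the quasi-Newton driver below is scipy's L-BFGS-B, used as an illustrative substitute for the authors' optimizer.

    ```python
    # Minimal sketch of the two-call pattern the abstract describes: each
    # evaluation of the misfit and its gradient costs one forward and one
    # adjoint application (a linear toy operator stands in for the
    # radiative transfer solver FSDOM; all sizes are illustrative).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    n_meas, n_unknown = 200, 50
    G = rng.normal(size=(n_meas, n_unknown))      # toy forward operator
    m_true = rng.normal(size=n_unknown)           # "true" extinction field
    d_obs = G @ m_true + 0.01 * rng.normal(size=n_meas)

    def misfit_and_gradient(m):
        residual = G @ m - d_obs          # call 1: forward solve
        gradient = G.T @ residual         # call 2: adjoint solve
        return 0.5 * residual @ residual, gradient

    # Gradient-based quasi-Newton minimization, as in the retrieval algorithm
    result = minimize(misfit_and_gradient, np.zeros(n_unknown),
                      jac=True, method="L-BFGS-B")
    print(f"misfit: {result.fun:.3e}, "
          f"max error: {np.abs(result.x - m_true).max():.3e}")
    ```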

  16. Modelling Technique for Demonstrating Gravity Collapse Structures in Jointed Rock.

    Science.gov (United States)

    Stimpson, B.

    1979-01-01

    Described is a base-friction modeling technique for studying the development of collapse structures in jointed rocks. A moving belt beneath weak material is designed to simulate gravity. A description is given of the model frame construction. (Author/SA)

  17. Demonstration of a forward iterative method to reconstruct brachytherapy seed configurations from x-ray projections

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Martin J; Todor, Dorin A [Department of Radiation Oncology, Virginia Commonwealth University, Richmond VA 23298 (United States)

    2005-06-07

    By monitoring brachytherapy seed placement and determining the actual configuration of the seeds in vivo, one can optimize the treatment plan during the process of implantation. Two or more radiographic images from different viewpoints can in principle allow one to reconstruct the configuration of implanted seeds uniquely. However, the reconstruction problem is complicated by several factors: (1) the seeds can overlap and cluster in the images; (2) the images can have distortion that varies with viewpoint when a C-arm fluoroscope is used; (3) there can be uncertainty in the imaging viewpoints; (4) the angular separation of the imaging viewpoints can be small owing to physical space constraints; (5) there can be inconsistency in the number of seeds detected in the images; and (6) the patient can move while being imaged. We propose and conceptually demonstrate a novel reconstruction method that handles all of these complications and uncertainties in a unified process. The method represents the three-dimensional seed and camera configurations as parametrized models that are adjusted iteratively to conform to the observed radiographic images. The morphed model seed configuration that best reproduces the appearance of the seeds in the radiographs is the best estimate of the actual seed configuration. All of the information needed to establish both the seed configuration and the camera model is derived from the seed images without resort to external calibration fixtures. Furthermore, by comparing overall image content rather than individual seed coordinates, the process avoids the need to establish correspondence between seed identities in the several images. The method has been shown to work robustly in simulation tests that simultaneously allow for unknown individual seed positions, uncertainties in the imaging viewpoints and variable image distortion.
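
    The forward-iterative idea can be sketched with a toy pinhole geometry: project a parametrized 3D seed configuration into two views and adjust the positions until the projections match the observed images. Unlike the authors' method, this sketch assumes known seed correspondence and known cameras; all geometry and values are illustrative.

    ```python
    # Minimal sketch of forward-iterative reconstruction: adjust a
    # parametrized 3D seed configuration until its projections reproduce
    # the observed 2D seed images (toy pinhole geometry with known
    # correspondence; all values are illustrative, not the authors' method).
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(6)
    n_seeds = 10
    seeds_true = rng.uniform(-2.0, 2.0, size=(n_seeds, 3))

    def project(seeds, angle, focal=50.0):
        """Project 3D seed positions onto a detector for a C-arm at `angle`."""
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        p = seeds @ rot.T
        depth = focal + p[:, 2]
        return focal * p[:, :2] / depth[:, None]   # perspective divide

    angles = [0.0, np.deg2rad(25.0)]               # two imaging viewpoints
    observed = [project(seeds_true, a) for a in angles]

    def residuals(flat):
        seeds = flat.reshape(n_seeds, 3)
        return np.concatenate([(project(seeds, a) - obs).ravel()
                               for a, obs in zip(angles, observed)])

    # Start from a perturbed configuration (playing the role of the plan)
    x0 = (seeds_true + 0.5 * rng.normal(size=seeds_true.shape)).ravel()
    fit = least_squares(residuals, x0)
    err = np.abs(fit.x.reshape(n_seeds, 3) - seeds_true).max()
    print(f"max seed-position error: {err:.2e}")
    ```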

  18. Cytolethal Distending Toxin Demonstrates Genotoxic Activity in a Yeast Model

    OpenAIRE

    Hassane, Duane C.; Lee, Robert B.; Mendenhall, Michael D.; Pickett, Carol L.

    2001-01-01

    Cytolethal distending toxins (CDTs) are multisubunit proteins produced by a variety of bacterial pathogens that cause enlargement, cell cycle arrest, and apoptosis in mammalian cells. While their function remains uncertain, recent studies suggest that they can act as intracellular DNases in mammalian cells. Here we establish a novel yeast model for understanding CDT-associated disease. Expression of the CdtB subunit in yeast causes a G2/M arrest, as seen in mammalian cells. CdtB toxicity is n...

  19. A demonstration of dose modeling at Yucca Mountain

    International Nuclear Information System (INIS)

    Miley, T.B.; Eslinger, P.W.

    1992-11-01

    The U.S. Environmental Protection Agency is currently revising the regulatory guidance for high-level nuclear waste disposal. In its draft form, the guidelines contain dose limits. Since this is likely to be the case in the final regulations, it is essential that the US Department of Energy be prepared to calculate site-specific doses for any potential repository location. This year, Pacific Northwest Laboratory (PNL) has made a first attempt to estimate doses for the potential geologic repository at Yucca Mountain, Nevada as part of a preliminary total-systems performance assessment. A set of transport scenarios was defined to assess the cumulative release of radionuclides over 10,000 years under undisturbed and disturbed conditions at Yucca Mountain. Dose estimates were provided for several of the transport scenarios modeled. The exposure scenarios used to estimate dose in this total-systems exercise should not, however, be considered a definitive set of scenarios for determining the risk of the potential repository. Exposure scenarios were defined for waterborne and surface contamination that result from both undisturbed and disturbed performance of the potential repository. The exposure scenarios used for this analysis were designed for the Hanford Site in Washington. The undisturbed performance scenarios for which exposures were modeled are gas-phase release of ¹⁴C to the surface and natural breakdown of the waste containers with waterborne release. The disturbed performance scenario for which doses were estimated is exploratory drilling. Both surface and waterborne contamination were considered for the drilling intrusion scenario.

  20. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling, which builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

  1. Demonstration of improved seismic source inversion method of tele-seismic body wave

    Science.gov (United States)

    Yagi, Y.; Okuwaki, R.

    2017-12-01

    Seismic rupture inversion of tele-seismic body waves has been widely applied to studies of large earthquakes. In general, tele-seismic body waves contain information on the overall rupture process of a large earthquake, but they have been considered inappropriate for analyzing the detailed rupture process of an M6-7 class earthquake. Recently, the quality and quantity of tele-seismic data and the inversion method have been greatly improved. The improved data and method enable us to study the detailed rupture process of an M6-7 class earthquake even using only tele-seismic body waves. In this study, we demonstrate the ability of the improved data and method through analyses of the 2016 Rieti, Italy earthquake (Mw 6.2) and the 2016 Kumamoto, Japan earthquake (Mw 7.0), which have been well investigated using InSAR data sets and field observations. We assumed rupture occurring on a single fault plane model inferred from the moment tensor solutions and the aftershock distribution. We constructed spatiotemporal discretized slip-rate functions with patches arranged as closely as possible. We performed inversions using several fault models and found that the spatiotemporal location of the large slip-rate area was robust. In the 2016 Kumamoto, Japan earthquake, the slip-rate distribution shows that the rupture propagated to the southwest during the first 5 s. At 5 s after the origin time, the main rupture started to propagate toward the northeast. The first and second episodes correspond to rupture propagation along the Hinagu fault and the Futagawa fault, respectively. In the 2016 Rieti, Italy earthquake, the slip-rate distribution shows that the rupture propagated in the up-dip direction during the first 2 s, and then toward the northwest. From both analyses, we propose that the spatiotemporal slip-rate distribution estimated by the improved inversion method of tele-seismic body waves has enough information to study the detailed rupture process of an M6-7 class earthquake.

  2. Development and demonstration of a validation methodology for vehicle lateral dynamics simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Kutluay, Emir

    2013-02-01

    In this thesis a validation methodology to be used in the assessment of vehicle dynamics simulation models is presented. Simulation of vehicle dynamics is used to estimate the dynamic responses of existing or proposed vehicles and has a wide array of applications in the development of vehicle technologies. Although simulation environments, measurement tools and mathematical theories on vehicle dynamics are well established, the methodical link between the experimental test data and validity analysis of the simulation model is still lacking. The developed validation paradigm has a top-down approach to the problem. It is ascertained that vehicle dynamics simulation models can only be validated using test maneuvers, although they are aimed at real-world maneuvers. Test maneuvers are determined according to the requirements of the real event at the start of the model development project, and data handling techniques, validation metrics and criteria are declared for each of the selected maneuvers. If the simulation results satisfy these criteria, the simulation is deemed "not invalid". If the simulation model fails to meet the criteria, the model is deemed invalid, and model iteration should be performed. The results are analyzed to determine whether they indicate a modeling error or a modeling inadequacy, and whether a conditional validity in terms of system variables can be defined. Three test cases are used to demonstrate the application of the methodology. The developed methodology successfully identified the shortcomings of the tested simulation model and defined the limits of application. The tested simulation model was found to be acceptable, but valid only in a certain dynamical range. Several insights into the deficiencies of the model are reported in the analysis, but the iteration step of the methodology is not demonstrated. Utilizing the proposed methodology will help to achieve more time- and cost-efficient simulation projects with

  3. Models and methods in thermoluminescence

    International Nuclear Information System (INIS)

    Furetta, C.

    2005-01-01

    This work presents a lecture covering the principles of luminescence phenomena and the mathematical treatment of thermoluminescent light emission, including the Randall-Wilkins model, the Garlick-Gibson model, the Adirovitch model, the May-Partridge model, the Braunlich-Scharman model and mixed first- and second-order kinetics, as well as methods for evaluating the kinetic parameters, such as the initial rise method, the various heating rates method, the isothermal decay method and methods based on analysis of the glow curve shape. (Author)

  4. Models and methods in thermoluminescence

    Energy Technology Data Exchange (ETDEWEB)

    Furetta, C. [ICN, UNAM, A.P. 70-543, Mexico D.F. (Mexico)

    2005-07-01

    This work presents a lecture covering the principles of luminescence phenomena and the mathematical treatment of thermoluminescent light emission, including the Randall-Wilkins model, the Garlick-Gibson model, the Adirovitch model, the May-Partridge model, the Braunlich-Scharman model and mixed first- and second-order kinetics, as well as methods for evaluating the kinetic parameters, such as the initial rise method, the various heating rates method, the isothermal decay method and methods based on analysis of the glow curve shape. (Author)
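
    Of the kinetic-parameter methods listed, the initial rise method is the simplest to demonstrate: on the low-temperature side of a first-order (Randall-Wilkins) glow peak, ln I is linear in 1/T with slope -E/k. A minimal sketch with a synthetic glow curve (all parameter values assumed):

    ```python
    # Minimal sketch of the initial rise method for a first-order
    # (Randall-Wilkins) glow curve: on the low-temperature rise,
    # ln I is linear in 1/T with slope -E/k (synthetic data below).
    import numpy as np

    k_B = 8.617e-5                     # Boltzmann constant (eV/K)
    E_true, s, beta = 1.0, 1e12, 1.0   # trap depth (eV), s (1/s), heating rate (K/s)

    T = np.linspace(300.0, 500.0, 2000)
    # Randall-Wilkins first-order glow peak (cumulative-sum quadrature)
    integral = np.cumsum(np.exp(-E_true / (k_B * T))) * (T[1] - T[0])
    I = s * np.exp(-E_true / (k_B * T)) * np.exp(-(s / beta) * integral)

    # Use only the initial rise: a few percent of peak intensity, before the peak
    peak = np.argmax(I)
    mask = (I < 0.05 * I[peak]) & (np.arange(I.size) < peak)
    slope = np.polyfit(1.0 / T[mask], np.log(I[mask]), 1)[0]
    print(f"estimated E = {-slope * k_B:.3f} eV (true {E_true} eV)")
    ```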

  5. Effectiveness of Video Demonstration over Conventional Methods in Teaching Osteology in Anatomy.

    Science.gov (United States)

    Viswasom, Angela A; Jobby, Abraham

    2017-02-01

    Technology and its applications are among the most rapidly developing things in the world, and so it is in the field of medical education. This study evaluated whether conventional methods can stand the test of technology: a comparative study of the traditional method of teaching osteology in human anatomy against an innovative visual-aided method. The study was conducted on 94 students admitted to the MBBS 2014-2015 batch of Travancore Medical College. The students were divided into two academically validated groups. They were taught using conventional and video demonstration techniques in a systematic manner, and post-evaluation tests were conducted. Analysis of the mark pattern revealed that the group taught using the traditional method scored better than the group taught with the visual-aided method. Feedback analysis showed that the students were able to identify bony features better, with clear visualisation and a three-dimensional view, when taught using the video demonstration method. The students identified the visual-aided method as the more interesting one for learning, which helped them in applying the knowledge gained. In most of the questions asked, the two methods of teaching were found to be comparable on the same scale. As the study ends, we find that no new technique can be substituted for time-tested techniques of teaching and learning; the ideal method would incorporate newer multimedia techniques into traditional classes.

  6. Effect of Demonstration Method of Teaching on Students' Achievement in Agricultural Science

    Science.gov (United States)

    Daluba, Noah Ekeyi

    2013-01-01

    The study investigated the effect of demonstration method of teaching on students' achievement in agricultural science in secondary school in Kogi East Education Zone of Kogi State. Two research questions and one hypothesis guided the study. The study employed a quasi-experimental research design. The population for the study was 18225 senior…

  7. Location, Location, Location! Demonstrating the Mnemonic Benefit of the Method of Loci

    Science.gov (United States)

    McCabe, Jennifer A.

    2015-01-01

    Classroom demonstrations of empirically supported learning and memory strategies have the potential to boost students' knowledge about their own memory and convince them to change the way they approach memory tasks in and beyond the classroom. Students in a "Human Learning and Memory" course learned about the "Method of Loci"…

  8. MANIKIN DEMONSTRATION IN TEACHING CONSERVATIVE MANAGEMENT OF POSTPARTUM HAEMORRHAGE: A COMPARISON WITH CONVENTIONAL METHODS

    Directory of Open Access Journals (Sweden)

    Sathi Mangalam Saraswathi

    2016-07-01

    BACKGROUND: Even though there are many innovative methods to make classes more interesting and effective, in my department topics are taught mainly by didactic lectures. This study attempts to compare the effectiveness of manikin demonstration and didactic lectures in teaching conservative management of postpartum haemorrhage. OBJECTIVE: To compare the effectiveness of manikin demonstration and didactic lectures in teaching conservative management of postpartum haemorrhage. MATERIALS AND METHODS: This is an observational study. Eighty-four ninth-semester MBBS students posted in the Department of Obstetrics and Gynaecology, Government Medical College, Kottayam were selected. They were divided into two groups by lottery method. A pre-test was conducted for both groups. Group A was taught by manikin demonstration; Group B was taught by didactic lecture. Feedback responses collected from the students after the demonstration class were analysed. A post-test was conducted for both groups after one week. Gain in knowledge for both groups was calculated from pre-test and post-test scores and compared by independent-sample t-test. RESULTS: The mean gain in knowledge in Group A was 6.4, compared to 4.3 in Group B, and the difference was found to be statistically significant. All of the students in Group A felt satisfied and more confident after the class and wanted more topics to be taken by demonstration. CONCLUSION: Manikin demonstration is more effective in teaching conservative management of postpartum haemorrhage, and this method can be adopted to teach similar topics in clinical subjects.

  9. SRC-I demonstration plant analytical laboratory methods manual. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    Klusaritz, M.L.; Tewari, K.C.; Tiedge, W.F.; Skinner, R.W.; Znaimer, S.

    1983-03-01

    This manual is a compilation of analytical procedures required for operation of a Solvent-Refined Coal (SRC-I) demonstration or commercial plant. Each method reproduced in full includes a detailed procedure, a list of equipment and reagents, safety precautions, and, where possible, a precision statement. Procedures for the laboratory's environmental and industrial hygiene modules are not included. Required American Society for Testing and Materials (ASTM) methods are cited, and ICRC's suggested modifications to these methods for handling coal-derived products are provided.

  10. [Method of immunocytochemical demonstration of cholinergic neurons in the central nervous system of laboratory animals].

    Science.gov (United States)

    Korzhevskiĭ, D E; Grigor'ev, I P; Kirik, O V; Zelenkova, N M; Sukhorukova, E G

    2013-01-01

A protocol for the immunocytochemical demonstration of choline acetyltransferase (ChAT), a key enzyme of acetylcholine synthesis, in paraffin sections of the brain of some laboratory animals is presented. The method is simple, gives fairly reproducible results, and allows demonstration of ChAT in neurons, nerve fibers, and terminals in preparations from at least three species of laboratory animals, including rat, rabbit, and cat. Different kinds of fixation (10% formalin, 4% paraformaldehyde, or zinc-ethanol-formaldehyde) were found suitable for immunocytochemical visualization of ChAT; however, optimal results were obtained with zinc-ethanol-formaldehyde.

  11. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimate for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a

  12. Modelling and Simulation of National Electronic Product Code Network Demonstrator Project

    Science.gov (United States)

    Mo, John P. T.

The National Electronic Product Code (EPC) Network Demonstrator Project (NDP) was the first large-scale consumer goods track-and-trace investigation in the world to use the full EPC protocol system for applying RFID technology in supply chains. The NDP demonstrated methods for sharing information securely using the EPC Network, providing authentication to interacting parties, and enhancing the ability to track and trace the movement of goods within the entire supply chain, involving transactions among multiple enterprises. Due to project constraints, the actual run of the NDP lasted only 3 months and could not be consolidated into quantitative results. This paper discusses the modelling and simulation of activities in the NDP in a discrete event simulation environment and provides an estimate of the potential benefits that could have been derived from the NDP had it continued for one whole year.
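As context for how such an estimate can be produced, a minimal discrete event simulation sketch in Python is given below; the arrival rate, scan delay, and one-year horizon are invented for illustration and are not the NDP model itself:

```python
import heapq
import random

def simulate(days=365, arrivals_per_day=200, scan_delay_h=0.5, seed=42):
    """Toy track-and-trace DES: pallets arrive, are scanned, then shipped."""
    random.seed(seed)
    shipped = 0
    events = []  # (time_h, kind) min-heap ordered by simulated time
    for d in range(days):
        for _ in range(random.randint(arrivals_per_day - 20, arrivals_per_day + 20)):
            heapq.heappush(events, (d * 24 + random.uniform(0, 24), "arrive"))
    last_t = 0.0
    while events:
        t, kind = heapq.heappop(events)
        last_t = t
        if kind == "arrive":
            # each arrival is scanned and released after a processing delay
            heapq.heappush(events, (t + scan_delay_h, "ship"))
        else:
            shipped += 1
    return shipped, last_t

shipped, horizon_h = simulate()
print(f"{shipped} pallets traced over {horizon_h / 24:.0f} days")
```

Extending the horizon from 3 months to a year in such a model is a one-argument change, which is the sense in which simulation can extrapolate beyond the project's actual run.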

  13. Graphite Isotope Ratio Method Development Report: Irradiation Test Demonstration of Uranium as a Low Fluence Indicator

    International Nuclear Information System (INIS)

    Reid, B.D.; Gerlach, D.C.; Love, E.F.; McNeece, J.P.; Livingston, J.V.; Greenwood, L.R.; Petersen, S.L.; Morgan, W.C.

    1999-01-01

This report describes an irradiation test designed to investigate the suitability of uranium as a graphite isotope ratio method (GIRM) low-fluence indicator. GIRM is a demonstrated concept that gives a graphite-moderated reactor's lifetime production based on measuring changes in the isotopic ratios of elements known to exist in trace quantities within reactor-grade graphite. Appendix I of this report provides a tutorial on the GIRM concept.

  14. A demonstration of mixed-methods research in the health sciences.

    Science.gov (United States)

    Katz, Janet; Vandermause, Roxanne; McPherson, Sterling; Barbosa-Leiker, Celestina

    2016-11-18

Background The growth of patient-, community- and population-centred nursing research is a rationale for the use of research methods that can examine complex healthcare issues not only from a biophysical perspective, but also from cultural, psychosocial and political viewpoints. This need for multiple perspectives requires mixed-methods research. Philosophy and practicality are needed to plan and conduct mixed-methods research and to make it more broadly accessible to the health sciences research community. The traditions of, and dichotomy between, qualitative and quantitative research make the application of mixed methods a challenge. Aim To propose an integrated model for a research project containing steps from start to finish, and to use the unique strengths brought by each approach to meet the health needs of patients and communities. Discussion Mixed-methods research is a practical approach to inquiry that focuses on asking questions and how best to answer them to improve the health of individuals, communities and populations. An integrated model of research begins with the research question(s) and moves in a continuum. The lines dividing methods do not dissolve, but become permeable boundaries where two or more methods can be used to answer research questions more completely. Rigorous and expert methodologists work together to solve common problems. Conclusion Mixed-methods research enables discussion among researchers from varied traditions. There is a plethora of methodological approaches available. Combining expertise by communicating across disciplines and professions is one way to tackle large and complex healthcare issues. Implications for practice The model presented in this paper exemplifies the integration of multiple approaches in a unified focus on identified phenomena. The dynamic nature of the model signals a need to be open to the data generated and the methodological directions implied by findings.

  15. Multivariate analysis: models and method

    International Nuclear Information System (INIS)

    Sanz Perucha, J.

    1990-01-01

Data treatment techniques are increasingly used as computer methods become more widely accessible. Multivariate analysis consists of a group of statistical methods that are applied to study objects or samples characterized by multiple values. The final goal is decision making. The paper describes the models and methods of multivariate analysis.

  16. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    Science.gov (United States)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but which, when combined, have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability, and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but that, when selected near a smooth local minimum, can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergistic effect is greater than the sum of their individual contributions. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades, at worst, to an ordinary evolutionary strategy if smoothness is not present. Calibration of 2D and 3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañga, I. Inza and E. Bengoetxea (Eds.). Towards a new evolutionary computation. Advances in estimation of
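To illustrate the core idea of surrogate-assisted screening, here is a minimal Python sketch; the k-nearest-neighbour surrogate, population sizes, and the sphere test function are illustrative assumptions, not the authors' modified CMAES:

```python
import numpy as np

def expensive(x):
    # stand-in for a costly watershed-model objective
    return float(np.sum(x ** 2))

def surrogate(x, archive):
    # cheap k-nearest-neighbour surrogate built from past true evaluations
    if len(archive) < 3:
        return 0.0
    X = np.array([a for a, _ in archive])
    f = np.array([b for _, b in archive])
    nearest = np.argsort(np.linalg.norm(X - x, axis=1))[:3]
    return float(f[nearest].mean())

def es_with_surrogate(dim=5, lam=12, mu=4, gens=60, seed=0):
    rng = np.random.default_rng(seed)
    mean, sigma = rng.normal(size=dim), 1.0
    archive, true_evals = [], 0
    for _ in range(gens):
        pop = list(mean + sigma * rng.normal(size=(lam, dim)))
        # pre-rank offspring on the surrogate; spend true evaluations
        # only on the most promising half
        pop.sort(key=lambda x: surrogate(x, archive))
        scored = [(x, expensive(x)) for x in pop[: lam // 2]]
        true_evals += len(scored)
        archive.extend(scored)
        best = sorted(scored, key=lambda t: t[1])[:mu]
        mean = np.mean([x for x, _ in best], axis=0)
        sigma *= 0.95  # crude step-size decay in place of full CMA adaptation
    return mean, true_evals

best_mean, evals = es_with_surrogate()
print(evals, expensive(best_mean))  # half the true evaluations of a plain ES
```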

  17. A miniature research vessel: A small-scale ocean-exploration demonstration of geophysical methods

    Science.gov (United States)

    Howell, S. M.; Boston, B.; Sleeper, J. D.; Cameron, M. E.; Togia, H.; Anderson, A.; Sigurdardottir, T. D.; Tree, J. P.

    2015-12-01

Graduate student members of the University of Hawaii Geophysical Society have designed a small-scale model research vessel (R/V) that uses sonar to create 3D maps of a model seafloor in real time. A pilot project was presented to the public at the School of Ocean and Earth Science and Technology's (SOEST) Biennial Open House weekend in 2013 and, with financial support from the Society of Exploration Geophysicists and the National Science Foundation, was developed into a full exhibit for the same event in 2015. Nearly 8,000 people attended the two-day event, including children and teachers from Hawaii's schools, home-school students, community groups, families, and science enthusiasts. Our exhibit demonstrates real-time sonar mapping of a cardboard volcano using a toy-sized research vessel on a programmable 2-dimensional model ship track suspended above a model seafloor. Ship waypoints were wirelessly sent from a Windows Surface tablet to a large touchscreen PC that controlled the exhibit. Sound wave travel times were recorded using an ultrasonic emitter/receiver attached to an Arduino microcontroller platform and streamed through a USB connection to the control PC running MATLAB, where a 3D model was updated as the ship collected data. Our exhibit demonstrates the practical use of complicated concepts, like wave physics, survey design, and data processing, in a way that even the youngest elementary students are able to understand. It provides an accessible avenue to learn about sonar mapping, and could easily be adapted to discuss bat and marine mammal echolocation by replacing the model ship and volcano. The exhibit received an overwhelmingly positive response from attendees and incited discussions that covered a broad range of earth science topics.
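The underlying computation is simple enough to sketch in Python; the sound speed, sensor height, and grid size below are assumed values, not those used in the exhibit:

```python
# speed of sound in air at roughly 20 C, since the exhibit's ultrasonic
# sensor pings through air rather than water
V_SOUND_M_S = 343.0

def echo_to_range_m(two_way_time_s):
    # one-way distance recovered from a two-way ultrasonic travel time
    return V_SOUND_M_S * two_way_time_s / 2.0

def update_bathymetry(grid, ix, iy, sensor_height_m, two_way_time_s):
    # "seafloor" elevation = sensor height minus measured range
    grid[iy][ix] = sensor_height_m - echo_to_range_m(two_way_time_s)

# toy usage: a 4x4 map filled in as the model ship passes overhead
bathy = [[None] * 4 for _ in range(4)]
update_bathymetry(bathy, ix=1, iy=2, sensor_height_m=0.60, two_way_time_s=0.0023)
print(bathy[2][1])  # about 0.21 m: a high point under the track
```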

  18. Hybrid model based unified scheme for endoscopic Cerenkov and radio-luminescence tomography: Simulation demonstration

    Science.gov (United States)

    Wang, Lin; Cao, Xin; Ren, Qingyun; Chen, Xueli; He, Xiaowei

    2018-05-01

Cerenkov luminescence imaging (CLI) is an imaging method that uses an optical imaging scheme to probe a radioactive tracer. The application of CLI with clinically approved radioactive tracers has opened an opportunity for translating optical imaging from preclinical to clinical applications. Such translation was further advanced by the development of an endoscopic CLI system. However, two-dimensional endoscopic imaging cannot identify accurate depth or obtain quantitative information. Here, we present an imaging scheme to retrieve depth and quantitative information from endoscopic Cerenkov luminescence tomography, which can also be applied to endoscopic radio-luminescence tomography. In the scheme, we first constructed a physical model for image collection, and then a mathematical model for characterizing the propagation of luminescent light from the tracer to the endoscopic detector. The mathematical model is a hybrid light transport model combining the third-order simplified spherical harmonics approximation, diffusion, and radiosity equations to warrant both accuracy and speed. The mathematical model integrates finite element discretization, regularization, and primal-dual interior-point optimization to retrieve the depth and quantitative information of the tracer. A numerical simulation based on a heterogeneous geometry was used to explore the feasibility of the unified scheme, which demonstrated that it can provide a satisfactory balance between imaging accuracy and computational burden.
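For orientation, the diffusion component of such hybrid light transport models commonly takes the steady-state form below; the notation is standard and assumed here, and the paper's exact formulation and coupling terms may differ:

\[
-\nabla \cdot \big( D(\mathbf{r}) \, \nabla \Phi(\mathbf{r}) \big) + \mu_a(\mathbf{r}) \, \Phi(\mathbf{r}) = S(\mathbf{r}), \qquad D = \frac{1}{3\,(\mu_a + \mu_s')},
\]

where \(\Phi\) is the photon fluence, \(\mu_a\) and \(\mu_s'\) are the absorption and reduced scattering coefficients, and \(S\) is the source term; the simplified spherical harmonics equations take over in regions where this diffusion approximation breaks down.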

  19. A comparison of methods for demonstrating artificial bone lesions; conventional versus computer tomography

    International Nuclear Information System (INIS)

    Heller, M.; Wenk, M.; Jend, H.H.

    1984-01-01

Conventional tomography (T) and computed tomography (CT) were used to examine 97 artificial bone lesions at various sites. The purpose of the study was to determine how far CT can replace T in the diagnosis of skeletal abnormalities. The results have shown that modern CT, particularly in its high-resolution form, equals T and provides additional information (the substrate of a lesion, its relationship to neighbouring tissues, simultaneous demonstration of soft tissue, etc.) that cannot be obtained by T. It follows that CT is indicated as the primary method of examination for lesions of the facial skeleton, skull base, spine, pelvis and, to some extent, the extremities. (orig.) [de]

  20. Demonstration of the improved PID method for the accurate temperature control of ADRs

    International Nuclear Information System (INIS)

    Shinozaki, K.; Hoshino, A.; Ishisaki, Y.; Mihara, T.

    2006-01-01

Microcalorimeters require extreme stability (∼10 μK) of the thermal bath at low temperature (∼100 mK). We have developed a portable adiabatic demagnetization refrigerator (ADR) system for ground experiments with TES microcalorimeters, in which we observed a residual temperature difference between the target and measured values when the magnet current was controlled with the standard Proportional, Integral, and Derivative (PID) control method. The difference increases in time as the magnet current decreases. This phenomenon can be explained by the theory of magnetic cooling, and we have introduced a new functional parameter to improve the PID method. With this improvement, long-term stability of the ADR temperature of about 10 μK rms is obtained over periods of up to ∼15 ks, down to almost zero magnet current. We briefly describe our ADR system and the principle of the improved PID method, showing the temperature control results. It is demonstrated that the time over which the target temperature can be held is about 30% longer than with the standard PID method in our system. The improved PID method is considered to be of great advantage especially in the range of small magnet current.
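For reference, a plain discrete-time PID loop looks like the Python sketch below. It is a generic implementation, not the authors' code, and the current-dependent gain in `scheduled_output` is a hypothetical stand-in for the functional parameter the paper introduces:

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def scheduled_output(pid_out, magnet_current_a, i_ref_a=1.0):
    # hypothetical compensation: boost control authority as the magnet
    # current (and with it the available dT/dI) shrinks toward zero
    return pid_out * (i_ref_a / max(magnet_current_a, 1e-3))
```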

  1. Graph modeling systems and methods

    Science.gov (United States)

    Neergaard, Mike

    2015-10-13

An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
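A minimal sketch of the idea in Python with networkx; the toy graph, the extra vertex `v1`, and the use of articulation points as the criticality test are illustrative assumptions, not the patented method itself:

```python
import networkx as nx

G = nx.Graph()
# terminating vertices: physical nodes of the network
G.add_edges_from([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")])
# non-terminating vertex: a non-nodal vulnerability along the C-D path
# (e.g., a shared trench), modeled by splitting the edge with a vertex
G.remove_edge("C", "D")
G.add_edges_from([("C", "v1"), ("v1", "D")])

# articulation points: vertices whose removal disconnects the graph,
# i.e., single points of failure
print(set(nx.articulation_points(G)))  # {'C', 'v1'}
```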

  2. A low-cost approach for rapidly creating demonstration models for hands-on learning

    Science.gov (United States)

    Kinzli, Kristoph-Dietrich; Kunberger, Tanya; O'Neill, Robert; Badir, Ashraf

    2018-01-01

Demonstration models allow students to readily grasp theory and relate difficult concepts and equations to real life. Drawbacks of using these demonstration models, however, are that they can be costly to purchase from vendors or take a significant amount of time to build. These two limiting factors can pose a significant obstacle to adding demonstrations to the curriculum. This article presents an assignment to overcome these obstacles, which has resulted in 36 demonstration models being added to the curriculum. The article also presents the results of student performance on course objectives as a result of the developed models being used in the classroom. Overall, significant improvement in student learning outcomes, due to the addition of demonstration models, has been observed.

  3. Comparison of marine spatial planning methods in Madagascar demonstrates value of alternative approaches.

    Directory of Open Access Journals (Sweden)

    Thomas F Allnutt

    Full Text Available The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method was drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value. The first method compares visual color classifications of primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method, and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost and persistence probability. All results included large areas in the north, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss than the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced Marxan targets to 16.3%, matching the representation level of the "strict protection" class of the categorical result, the methods show similar catch losses. The management category portfolio has complete coverage, and presents several management recommendations including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets, and minimize exposure probabilities for conservation features at low economic cost. We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative

  4. Numerical methods and modelling for engineering

    CERN Document Server

    Khoury, Richard

    2016-01-01

This textbook provides a step-by-step approach to numerical methods in engineering modelling. The authors provide a consistent treatment of the topic, from the ground up, to reinforce for students that numerical methods are a set of mathematical modelling tools which allow engineers to represent real-world systems and compute features of these systems with a predictable error rate. Each method presented addresses a specific type of problem, namely root-finding, optimization, integration, differentiation, initial value problems, or boundary value problems, and each one encompasses a set of algorithms to solve the problem given some information and to within a known error bound. The authors demonstrate that after developing a proper model and understanding of the engineering situation they are working on, engineers can break down a model into a set of specific mathematical problems, and then implement the appropriate numerical methods to solve these problems. Uses a “building-block” approach, starting with simpler mathemati...
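As a flavor of the first problem type named above, root-finding, here is a short Newton-Raphson sketch; it is a generic textbook example, not code from the book:

```python
def newton(f, dfdx, x0, tol=1e-10, max_iter=50):
    # Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# example: the square root of 2 as the positive root of x^2 - 2
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # 1.41421356...
```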

  5. ADOxx Modelling Method Conceptualization Environment

    Directory of Open Access Journals (Sweden)

    Nesat Efendioglu

    2017-04-01

Full Text Available The importance of Modelling Methods Engineering is rising along with the importance of domain-specific languages (DSL) and individual modelling approaches. In order to capture the relevant semantic primitives for a particular domain, it is necessary to involve both (a) domain experts, who identify relevant concepts, and (b) method engineers, who compose a valid and applicable modelling approach. This process consists of the conceptual design of a formal or semi-formal modelling method as well as the reliable, migratable, maintainable, and user-friendly software development of the resulting modelling tool. The Modelling Method Engineering cycle is often under-estimated, as the conceptual architecture requires formal verification and the tool implementation requires practical usability; hence we propose a guideline and corresponding tools to support actors with different backgrounds along this complex engineering process. Based on practical experience in business, more than twenty research projects within the EU framework programmes, and a number of bilateral research initiatives, this paper introduces the phases, a corresponding toolbox, and lessons learned, with the aim of supporting the engineering of a modelling method. The proposed approach is illustrated and validated within use cases from three different EU-funded research projects in the fields of (1) Industry 4.0, (2) e-learning, and (3) cloud computing. The paper discusses the approach, the evaluation results, and derived outlooks.

  6. Diverse methods for integrable models

    NARCIS (Netherlands)

    Fehér, G.

    2017-01-01

This thesis is centered around three topics sharing integrability as a common theme, and explores different methods in the field of integrable models. The first two chapters are about integrable lattice models in statistical physics. The last chapter describes an integrable quantum chain.

  7. Iterative method for Amado's model

    International Nuclear Information System (INIS)

    Tomio, L.

    1980-01-01

A recently proposed iterative method for solving scattering integral equations is applied to spin-doublet and spin-quartet neutron-deuteron scattering in the Amado model. The method is tested numerically in the calculation of scattering lengths and phase shifts, and the results are found to be better than those obtained using the conventional Padé technique. (Author) [pt]

  8. Calibration of complex models through Bayesian evidence synthesis: a demonstration and tutorial

    Science.gov (United States)

    Jackson, Christopher; Jit, Mark; Sharples, Linda; DeAngelis, Daniela

    2016-01-01

Summary Decision-analytic models must often be informed using data which are only indirectly related to the main model parameters. The authors outline how to implement a Bayesian synthesis of diverse sources of evidence to calibrate the parameters of a complex model. A graphical model is built to represent how observed data are generated from statistical models with unknown parameters, and how those parameters are related to quantities of interest for decision-making. This forms the basis of an algorithm to estimate a posterior probability distribution, which represents the updated state of evidence for all unknowns given all data and prior beliefs. This process calibrates the quantities of interest against data and, at the same time, propagates all parameter uncertainties to the results used for decision-making. To illustrate these methods, the authors demonstrate how a previously developed Markov model for the progression of human papillomavirus (HPV16) infection was rebuilt in a Bayesian framework. Transition probabilities between states of disease severity are inferred indirectly from cross-sectional observations of the prevalence of HPV16 and HPV16-related disease by age, cervical cancer incidence, and other published information. Previously, a discrete collection of plausible scenarios was identified, but with no further indication of which of these are more plausible. Instead, the authors derive a Bayesian posterior distribution, in which scenarios are implicitly weighted according to how well they are supported by the data. In particular, the authors emphasise the appropriate choice of prior distributions and the checking and comparison of fitted models. PMID:23886677
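A toy version of such a calibration can be sketched in a few lines of Python; the Poisson likelihood, normal prior, and random-walk Metropolis sampler below are illustrative assumptions, far simpler than the paper's graphical model:

```python
import numpy as np

rng = np.random.default_rng(1)

# indirect data: observed counts whose rate is a function of the parameter
obs = rng.poisson(lam=np.exp(1.3), size=20)

def log_post(theta):
    lam = np.exp(theta)                        # model maps parameter to rate
    log_lik = np.sum(obs * np.log(lam) - lam)  # Poisson log-likelihood (up to a constant)
    log_prior = -0.5 * theta ** 2              # standard normal prior
    return log_lik + log_prior

# random-walk Metropolis: parameter values are implicitly weighted by
# how well they are supported by the data, as in the paper's scenarios
theta, samples = 0.0, []
for _ in range(5000):
    prop = theta + 0.1 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

print(np.mean(samples[1000:]))  # posterior mean, roughly the true 1.3
```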

  9. Alternative normalization methods demonstrate widespread cortical hypometabolism in untreated de novo Parkinson's disease

    DEFF Research Database (Denmark)

    Berti, Valentina; Polito, C; Borghammer, Per

    2012-01-01

…recent studies suggested that conventional data normalization procedures may not always be valid, and demonstrated that alternative normalization strategies better allow detection of low-magnitude changes. We hypothesized that these alternative normalization procedures would disclose more widespread metabolic alterations in de novo PD. METHODS: [18F]FDG PET scans of 26 untreated de novo PD patients (Hoehn & Yahr stage I-II) and 21 age-matched controls were compared using voxel-based analysis. Normalization was performed using gray matter (GM) and white matter (WM) reference regions, and Yakushev normalization. RESULTS: Compared to GM normalization, the WM and Yakushev normalization procedures disclosed much larger cortical regions of relative hypometabolism in the PD group, with extensive involvement of frontal and parieto-temporal-occipital cortices and several subcortical structures. Furthermore…

  10. Variational methods in molecular modeling

    CERN Document Server

    2017-01-01

    This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems ranging from inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes, microemulsions, magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical unders...

  11. animation : An R Package for Creating Animations and Demonstrating Statistical Methods

    Directory of Open Access Journals (Sweden)

    Yihui Xie

    2013-04-01

Full Text Available Animated graphs that demonstrate statistical ideas and methods can both attract interest and assist understanding. In this paper we first discuss how animations can be related to statistical topics such as iterative algorithms, random simulations, (re)sampling methods, and dynamic trends; then we describe the approaches that may be used to create animations, and give an overview of the R package animation, including its design, usage, and the statistical topics covered in the package. With the animation package, we can export the animations produced by R into a variety of formats, such as a web page, a GIF animation, a Flash movie, a PDF document, or an MP4/AVI video, so that users can publish the animations fairly easily. The design of this package is flexible enough to be readily incorporated into web applications; e.g., we can generate animations online with Rweb, which means we do not even need R to be installed locally to create animations. We show examples of the use of animations in teaching statistics and in the presentation of statistical reports using Sweave or knitr. In fact, this paper itself was written with the knitr and animation packages, and the animations are embedded in the PDF document, so that readers can watch the animations in real time as they read the paper (Adobe Reader is required). Animations can add insight and interest to traditional static approaches to teaching statistics and reporting, making statistics a more interesting and appealing subject.
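The same idea carries over to other tool stacks; as a rough analog, written in Python with matplotlib rather than the R package described above, an animation of the central limit theorem emerging frame by frame might look like this:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(0)
data = rng.exponential(size=(200, 30))  # 200 frames, one n=30 sample each

fig, ax = plt.subplots()

def draw(frame):
    # growing histogram of sample means: the CLT emerging frame by frame
    ax.clear()
    means = data[: frame + 1].mean(axis=1)
    ax.hist(means, bins=20, range=(0.5, 1.5), color="steelblue")
    ax.set_title(f"means of {frame + 1} samples of size 30")

anim = FuncAnimation(fig, draw, frames=len(data), interval=50)
anim.save("clt.gif", writer="pillow")  # export, analogous to saveGIF() in R
```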

  12. Computer programs of information processing of nuclear physical methods as a demonstration material in studying nuclear physics and numerical methods

    Science.gov (United States)

    Bateev, A. B.; Filippov, V. P.

    2017-01-01

The possibility, in principle, of using the computer program Univem MS for Mössbauer spectrum fitting as demonstration material when students study disciplines such as atomic and nuclear physics and numerical methods is shown in the article. The program works with nuclear-physical parameters such as the isomer (or chemical) shift of the nuclear energy level, the interaction of the nuclear quadrupole moment with the electric field, and that of the magnetic moment with the surrounding magnetic field. The basic processing algorithm in such programs is the least squares method. The deviation of experimental points on the spectra from the theoretical dependence is examined using concrete examples; in numerical methods this quantity is characterized as the mean-square deviation. The shape of the theoretical lines in the program is defined by Gaussian and Lorentzian distributions. The visualization of material studied in atomic and nuclear physics can be improved by similar programs for Mössbauer spectroscopy, X-ray fluorescence analysis, or X-ray diffraction analysis.
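The fitting step such programs perform can be sketched with a least-squares fit of a single Lorentzian absorption line; the synthetic spectrum and starting values below are invented for illustration and are unrelated to Univem MS internals:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(v, depth, v0, gamma, baseline):
    # absorption dip of half-width gamma centred at source velocity v0
    return baseline - depth * gamma ** 2 / ((v - v0) ** 2 + gamma ** 2)

# synthetic "spectrum": counts versus source velocity, with Poisson noise
rng = np.random.default_rng(0)
v = np.linspace(-4, 4, 256)
counts = rng.poisson(lorentzian(v, depth=900, v0=0.35, gamma=0.25, baseline=10000))

popt, pcov = curve_fit(lorentzian, v, counts, p0=(500, 0.0, 0.3, 9500))
residual = counts - lorentzian(v, *popt)
print("fitted line centre:", popt[1])            # isomer-shift analogue
print("mean-square deviation:", np.mean(residual ** 2))
```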

  13. Derivation and experimental demonstration of the perturbed reactivity method for the determination of subcriticality

    International Nuclear Information System (INIS)

    Kwok, K.S.; Bernard, J.A.; Lanning, D.D.

    1992-01-01

The perturbed reactivity method is a general technique for the estimation of reactivity. It is particularly suited to the determination of a reactor's initial degree of subcriticality and was developed to facilitate the automated startup of both spacecraft and multi-modular reactors using model-based control laws. It entails perturbing a shutdown reactor by the insertion of reactivity at a known rate and then estimating the initial degree of subcriticality from observation of the resulting reactor period. While similar to inverse kinetics, the perturbed reactivity method differs in that the net reactivity present in the core is treated as two separate entities. The first is that associated with the known perturbation. This quantity, together with the observed period and the reactor's describing parameters, are the inputs to the method's implementing algorithm. The second entity, which is the algorithm's output, is the sum of all other reactivities, including those resulting from inherent feedback and the initial degree of subcriticality. During an automated startup, feedback effects will be minimal. Hence, when applied to a shutdown reactor, the output of the perturbed reactivity method will be a constant that is equal to the initial degree of subcriticality. This is a major advantage because repeated estimates can be made of this one quantity and signal smoothing techniques can be applied to enhance accuracy. In addition to describing the theoretical basis for the perturbed reactivity method, factors involved in its implementation, such as the movement of control devices other than those used to create the perturbation, source estimation, and techniques for data smoothing, are presented.
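For context, the standard inhour relation connects an observed stable period \(T\) to reactivity; the notation here is the generic textbook one and is not taken from the paper:

\[
\rho = \frac{\Lambda}{T} + \sum_{i=1}^{6} \frac{\beta_i}{1 + \lambda_i T},
\]

where \(\Lambda\) is the prompt neutron generation time and \(\beta_i\), \(\lambda_i\) are the delayed-neutron group fractions and decay constants; relations of this kind allow a measured period to be converted into an estimate of the net reactivity.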

  14. MMB-GUI: a fast morphing method demonstrates a possible ribosomal tRNA translocation trajectory.

    Science.gov (United States)

    Tek, Alex; Korostelev, Andrei A; Flores, Samuel Coulbourn

    2016-01-08

    Easy-to-use macromolecular viewers, such as UCSF Chimera, are a standard tool in structural biology. They allow rendering and performing geometric operations on large complexes, such as viruses and ribosomes. Dynamical simulation codes enable modeling of conformational changes, but may require considerable time and many CPUs. There is an unmet demand from structural and molecular biologists for software in the middle ground, which would allow visualization combined with quick and interactive modeling of conformational changes, even of large complexes. This motivates MMB-GUI. MMB uses an internal-coordinate, multiscale approach, yielding as much as a 2000-fold speedup over conventional simulation methods. We use Chimera as an interactive graphical interface to control MMB. We show how this can be used for morphing of macromolecules that can be heterogeneous in biopolymer type, sequence, and chain count, accurately recapitulating structural intermediates. We use MMB-GUI to create a possible trajectory of EF-G mediated gate-passing translocation in the ribosome, with all-atom structures. This shows that the GUI makes modeling of large macromolecules accessible to a wide audience. The morph highlights similarities in tRNA conformational changes as tRNA translocates from A to P and from P to E sites and suggests that tRNA flexibility is critical for translocation completion. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1993-01-01

This report documents progress to date under a three-year contract for developing "Methods for Testing Transport Models." The work described includes (1) choice of best methods for producing "code emulators" for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information due to previous empirical and/or theoretical analyses, (4) extension of code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases.

  16. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.; Dean, D.J.; Langanke, K.

    1997-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)
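The central reduction can be written compactly in the standard auxiliary-field (Hubbard-Stratonovich) form; the notation is generic and not copied from this review:

\[
e^{-\beta \hat{H}} = \left( e^{-\Delta\beta \hat{H}} \right)^{N_t}, \qquad
e^{-\Delta\beta \hat{H}} \approx \int d\sigma \, G(\sigma) \, e^{-\Delta\beta \, \hat{h}(\sigma)},
\]

where \(\hat{h}(\sigma)\) is a one-body Hamiltonian in the fluctuating auxiliary fields \(\sigma\), \(G(\sigma)\) is a Gaussian weight, and the integral over field configurations is evaluated by Monte Carlo sampling.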

  17. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  18. A mechanistic model for electricity consumption on dairy farms: Definition, validation, and demonstration

    NARCIS (Netherlands)

    Upton, J.R.; Murphy, M.; Shallo, L.; Groot Koerkamp, P.W.G.; Boer, de I.J.M.

    2014-01-01

    Our objective was to define and demonstrate a mechanistic model that enables dairy farmers to explore the impact of a technical or managerial innovation on electricity consumption, associated CO2 emissions, and electricity costs. We, therefore, (1) defined a model for electricity consumption on

  19. The development and demonstration of integrated models for the evaluation of severe accident management strategies - SAMEM

    International Nuclear Information System (INIS)

    Ang, M.L.; Peers, K.; Kersting, E.; Fassmann, W.; Tuomisto, H.; Lundstroem, P.; Helle, M.; Gustavsson, V.; Jacobsson, P.

    2001-01-01

This study is concerned with the further development of integrated models for the assessment of existing and potential severe accident management (SAM) measures. This paper provides a brief summary of these models, based on Probabilistic Safety Assessment (PSA) methods and the Risk Oriented Accident Analysis Methodology (ROAAM) approach, and their application to a number of case studies spanning both the preventive and mitigative accident management regimes. In the course of this study it became evident that the starting point guiding the selection of methodology, and any further improvement, is the intended application. Accordingly, such features as the type and area of application and the confidence requirement are addressed in this project. The application of an integrated ROAAM approach led to the implementation, at the Loviisa NPP, of a hydrogen mitigation strategy that requires substantial plant modifications. A revised level 2 PSA model was applied to the Sizewell B NPP to assess the feasibility of the in-vessel retention strategy. Similarly, the application of PSA-based models was extended to the Barseback and Ringhals 2 NPPs to improve the emergency operating procedures, notably actions related to manual operations. A human reliability analysis based on the Human Cognitive Reliability (HCR) and Technique for Human Error Rate Prediction (THERP) models was applied to a case study addressing secondary and primary bleed-and-feed procedures. Some aspects pertinent to the quantification of severe accident phenomena were further examined in this project. A comparison of the applications of the PSA-based approach and ROAAM to two severe accident issues, viz. hydrogen combustion and in-vessel retention, was made. A general conclusion is that there is no requirement for further major development of the PSA and ROAAM methodologies in the modelling of SAM strategies for a variety of applications as far as the technical aspects are concerned. As is demonstrated in this project, the

  20. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reproduce a remarkably wide variety of tokamak data reasonably well.

  1. Effectiveness of Demonstration and Lecture Methods in Learning Concept in Economics among Secondary School Students in Borno State, Nigeria

    Science.gov (United States)

    Muhammad, Amin Umar; Bala, Dauda; Ladu, Kolomi Mutah

    2016-01-01

This study investigated the effectiveness of demonstration and lecture methods in learning concepts in economics among secondary school students in Borno State, Nigeria. Five objectives guided the study: to determine the effectiveness of the demonstration method in learning economics concepts among secondary school students in Borno State, to determine the effectiveness…

  2. 40 CFR 63.9915 - What test methods and other procedures must I use to demonstrate initial compliance with dioxin...

    Science.gov (United States)

    2010-07-01

... must I use to demonstrate initial compliance with dioxin/furan emission limits? 63.9915 Section 63.9915 ... What test methods and other procedures must I use to demonstrate initial compliance with dioxin/furan emission limits? ... limit for dioxins/furans in Table 1 to this subpart, you must follow the test methods and procedures...

  3. Advanced Instrumentation and Control Methods for Small and Medium Reactors with IRIS Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    J. Wesley Hines; Belle R. Upadhyaya; J. Michael Doster; Robert M. Edwards; Kenneth D. Lewis; Paul Turinsky; Jamie Coble

    2011-05-31

Development and deployment of small-scale nuclear power reactors and their maintenance, monitoring, and control are part of the mission under the Small Modular Reactor (SMR) program. The objectives of this NERI-consortium research project are to investigate, develop, and validate advanced methods for sensing, controlling, monitoring, diagnosis, and prognosis of these reactors, and to demonstrate the methods with application to one of the proposed integral pressurized water reactors (IPWR). For this project, the IPWR design by Westinghouse, the International Reactor Innovative and Secure (IRIS), has been used to demonstrate the techniques developed under this project. The research focuses on three topical areas with the following objectives. Objective 1 - Develop and apply simulation capabilities and sensitivity/uncertainty analysis methods to address sensor deployment analysis and small grid stability issues. Objective 2 - Develop and test an autonomous and fault-tolerant control architecture and apply it to the IRIS system and an experimental flow control loop, with extensions to multiple reactor modules, nuclear desalination, and an optimal sensor placement strategy. Objective 3 - Develop and test an integrated monitoring, diagnosis, and prognosis system for SMRs using the IRIS as a test platform, and integrate process and equipment monitoring (PEM) and process and equipment prognostics (PEP) toolboxes. The research tasks are focused on meeting the unique needs of reactors that may be deployed to remote locations or to developing countries with limited support infrastructure. These applications will require smaller, robust reactor designs with advanced technologies for sensors, instrumentation, and control. An excellent overview of SMRs is given in an article by Ingersoll (2009), which refers to these as deliberately small reactors. Most of these have modular characteristics, with multiple units deployed at the same plant site. Additionally, the topics focus

  4. Network modelling methods for FMRI.

    Science.gov (United States)

    Smith, Stephen M; Miller, Karla L; Salimi-Khorshidi, Gholamreza; Webster, Matthew; Beckmann, Christian F; Nichols, Thomas E; Ramsey, Joseph D; Woolrich, Mark W

    2011-01-15

    There is great interest in estimating brain "networks" from FMRI data. This is often attempted by identifying a set of functional "nodes" (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. Copyright © 2010 Elsevier Inc. All rights reserved.
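Of the well-performing approaches named above, partial correlation is easy to state concretely: normalise the off-diagonal entries of the inverse covariance (precision) matrix of the node timeseries. A small self-contained Python sketch (toy data, not FMRI):

```python
import numpy as np

def partial_correlation(ts):
    # ts: (timepoints, nodes); partial correlation from the precision matrix:
    # pcorr_ij = -P_ij / sqrt(P_ii * P_jj)
    prec = np.linalg.inv(np.cov(ts, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# chain network 0 -> 1 -> 2: full correlation shows a spurious 0-2 link,
# which partial correlation suppresses
rng = np.random.default_rng(0)
n0 = rng.normal(size=2000)
n1 = 0.8 * n0 + 0.6 * rng.normal(size=2000)
n2 = 0.8 * n1 + 0.6 * rng.normal(size=2000)
ts = np.column_stack([n0, n1, n2])

print(np.corrcoef(ts, rowvar=False)[0, 2])  # sizeable indirect correlation (~0.64)
print(partial_correlation(ts)[0, 2])        # near zero: no direct 0-2 edge
```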

  5. Demonstration Exercise of a Validated Sample Collection Method for Powders Suspected of Being Biological Agents in Georgia 2006

    International Nuclear Information System (INIS)

    Marsh, B.

    2007-01-01

August 7, 2006, the state of Georgia conducted a collaborative sampling exercise between the Georgia National Guard 4th Civil Support Team Weapons of Mass Destruction (CST-WMD) and the Georgia Department of Human Resources Division of Public Health, demonstrating a recently validated bulk powder sampling method. The exercise was hosted at the Federal Law Enforcement Training Center (FLETC) at Glynn County, Georgia and involved the participation of the Georgia Emergency Management Agency (GEMA), Georgia National Guard, Georgia Public Health Laboratories, the Federal Bureau of Investigation Atlanta Office, Georgia Coastal Health District, and the Glynn County Fire Department. The purpose of the exercise was to demonstrate a recently validated national sampling standard developed by ASTM International (formerly the American Society for Testing and Materials): ASTM E2458, "Standard Practice for Bulk Sample Collection and Swab Sample Collection of Visible Powders Suspected of Being Biological Agents from Nonporous Surfaces". The intent of the exercise was not to endorse the sampling method, but to develop a model for exercising new sampling methods in the context of existing standard operating procedures (SOPs) while strengthening operational relationships between response teams and analytical laboratories. The exercise required a sampling team to respond in real time to an incident across the state involving a clandestine bio-terrorism production lab found within a recreational vehicle (RV). Sample targets consisted of non-viable gamma-irradiated B. anthracis Sterne spores prepared by Dugway Proving Ground. Various spore concentration levels were collected by the ASTM method, followed by on- and off-scene analysis utilizing Centers for Disease Control and Prevention (CDC) Laboratory Response Network (LRN) and National Guard Bureau (NGB) CST mobile Analytical Laboratory Suite (ALS) protocols. Analytical results were compared and detailed surveys of participant evaluation comments were examined. I will

  6. A mechanistic model for electricity consumption on dairy farms: Definition, validation, and demonstration

    OpenAIRE

    Upton, J.R.; Murphy, M.; Shallo, L.; Groot Koerkamp, P.W.G.; Boer, de, I.J.M.

    2014-01-01

    Our objective was to define and demonstrate a mechanistic model that enables dairy farmers to explore the impact of a technical or managerial innovation on electricity consumption, associated CO2 emissions, and electricity costs. We, therefore, (1) defined a model for electricity consumption on dairy farms (MECD) capable of simulating total electricity consumption along with related CO2 emissions and electricity costs on dairy farms on a monthly basis; (2) validated the MECD using empirical d...

  7. Demonstration of finite element simulations in MOOSE using crystallographic models of irradiation hardening and plastic deformation

    Energy Technology Data Exchange (ETDEWEB)

    Patra, Anirban [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wen, Wei [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Martinez Saez, Enrique [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tome, Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-31

This report describes the implementation of a crystal plasticity framework (VPSC) for irradiation hardening and plastic deformation in the finite element code MOOSE. Constitutive models for irradiation hardening and the crystal plasticity framework are described in a previous report [1]. Here we describe these models briefly and then describe an algorithm for interfacing VPSC with finite elements. Example applications of tensile deformation of a dog-bone specimen and a 3D pre-irradiated bar specimen performed using MOOSE are demonstrated.

  8. Development of Demonstrably Predictive Models for Emissions from Alternative Fuels Based Aircraft Engines

    Science.gov (United States)

    2017-05-01

FINAL REPORT: Development of Demonstrably Predictive Models for Emissions from Alternative Fuels Based Aircraft Engines, SERDP Project WP-2151

  9. A Bayesian statistical method for quantifying model form uncertainty and two model combination methods

    International Nuclear Information System (INIS)

    Park, Inseok; Grandhi, Ramana V.

    2014-01-01

Apart from parametric uncertainty, model form uncertainty as well as prediction error may be involved in the analysis of an engineering system. Model form uncertainty, which is inherent in selecting the best approximation from a model set, cannot be ignored, especially when the predictions of competing models show significant differences. In this research, a methodology based on maximum likelihood estimation is presented to quantify model form uncertainty using the measured differences between experimental and model outcomes, and is compared with a fully Bayesian estimation to demonstrate its effectiveness. While a method called the adjustment factor approach is utilized to propagate model form uncertainty alone into the prediction of a system response, a method called model averaging is utilized to incorporate both model form uncertainty and prediction error into it. A numerical problem of concrete creep is used to demonstrate the processes for quantifying model form uncertainty and implementing the adjustment factor approach and model averaging. Finally, the presented methodology is applied to characterize the engineering benefits of a laser peening process.
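A minimal sketch of likelihood-weighted model averaging in Python; the two toy models, the error standard deviation, and the data are invented, and this is a simplification of, not a substitute for, the paper's adjustment-factor and averaging formulations:

```python
import numpy as np

# competing models' predictions at validation points, plus observations
obs = np.array([1.02, 1.21, 1.38, 1.60])
pred = {"linear":    np.array([1.00, 1.20, 1.40, 1.60]),
        "quadratic": np.array([1.05, 1.15, 1.35, 1.65])}
sigma = 0.05  # assumed measurement/prediction error standard deviation

def log_lik(residuals):
    # Gaussian log-likelihood up to an additive constant
    return float(-0.5 * np.sum((residuals / sigma) ** 2))

ll = {m: log_lik(obs - p) for m, p in pred.items()}
mx = max(ll.values())
w = {m: np.exp(v - mx) for m, v in ll.items()}   # unnormalised weights
z = sum(w.values())
w = {m: v / z for m, v in w.items()}             # model probabilities

# model-averaged prediction at a new point
new_point = {"linear": 1.80, "quadratic": 1.90}
print(w, sum(w[m] * new_point[m] for m in w))
```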

  10. Bayesian maximum entropy integration of ozone observations and model predictions: an application for attainment demonstration in North Carolina.

    Science.gov (United States)

    de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L

    2010-08-01

States in the USA are required to demonstrate future compliance with criteria air pollutant standards using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim to rely heavily on measured values, due to their perceived objectivity and enforceable quality. The weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.
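BME itself is beyond a few lines, but the flavor of performance-weighted data fusion can be sketched with inverse-variance weighting; the numbers are invented, and this simplified combination is a stand-in, not the BME estimator:

```python
def fuse(measured_ppb, modeled_ppb, var_meas, var_model):
    # inverse-variance weighting: observations dominate wherever a monitor
    # exists (small var_meas); the model fills in elsewhere
    w = (1.0 / var_meas) / (1.0 / var_meas + 1.0 / var_model)
    return w * measured_ppb + (1.0 - w) * modeled_ppb

print(fuse(72.0, 68.0, var_meas=4.0, var_model=25.0))  # ~71.4, near the monitor
```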

  11. Exploration of a Method to Assess Children's Understandings of a Phenomenon after Viewing a Demonstration Show

    Science.gov (United States)

    DeKorver, Brittland K.; Choi, Mark; Towns, Marcy

    2017-01-01

    Chemical demonstration shows are a popular form of informal science education (ISE), employed by schools, museums, and other institutions in order to improve the public's understanding of science. Just as teachers employ formative and summative assessments in the science classroom to evaluate the impacts of their efforts, it is important to assess…

  12. Laboratory Demonstration of Low-Cost Method for Producing Thin Film on Nonconductors.

    Science.gov (United States)

    Ebong, A. U.; And Others

    1991-01-01

    A low-cost procedure for metallizing a silicon p-n junction diode by electroless nickel plating is reported. The procedure demonstrates that expensive salts can be excluded without affecting the results. The experimental procedure, measurement, results, and discussion are included. (Author/KR)

  13. Demonstration of a collimated in situ method for determining depth distributions using gamma-ray spectrometry

    CERN Document Server

    Benke, R R

    2002-01-01

In situ gamma-ray spectrometry uses a portable detector to quantify radionuclides in materials. The main shortcoming of in situ gamma-ray spectrometry has been its inability to determine radionuclide depth distributions. Novel collimator designs were paired with a commercial in situ gamma-ray spectrometry system to overcome this limitation for large area sources. Positioned with their axes normal to the material surface, the cylindrically symmetric collimators limited the detection of unattenuated gamma-rays to a selected range of polar angles (measured off the detector axis). Although this approach does not alleviate the need for some knowledge of the gamma-ray attenuation characteristics of the materials being measured, the collimation method presented in this paper represents an absolute method that determines the depth distribution as a histogram, while other in situ methods require a priori knowledge of the depth distribution shape. Other advantages over previous in situ methods are that this method d...

  14. Rationale, Design, and Methods for Process Evaluation in the Childhood Obesity Research Demonstration Project.

    Science.gov (United States)

    Joseph, Sitara; Stevens, Andria M; Ledoux, Tracey; O'Connor, Teresia M; O'Connor, Daniel P; Thompson, Debbe

    2015-01-01

The cross-site process evaluation plan for the Childhood Obesity Research Demonstration (CORD) project is described here. The CORD project comprises 3 unique demonstration projects designed to integrate multi-level, multi-setting health care and public health interventions over a 4-year funding period. The settings are three different communities in California, Massachusetts, and Texas. All CORD demonstration projects targeted 2- to 12-year-old children whose families are eligible for benefits under Title XXI (CHIP) or Title XIX (Medicaid). The CORD projects were developed independently and consisted of evidence-based interventions that aim to prevent childhood obesity. The interventions promote healthy behaviors in children by applying strategies in 4 key settings (primary care clinics, early care and education centers, public schools, and community institutions). The CORD process evaluation outlined 3 main outcome measures: reach, dose, and fidelity, on 2 levels (researcher to provider, and provider to participant). The plan described here provides insight into the complex nature of process evaluation for consortia of independently designed multi-level, multi-setting intervention studies. The process evaluation results will provide contextual information about intervention implementation and delivery with which to interpret other aspects of the program. Copyright © 2015 Society for Nutrition Education and Behavior. All rights reserved.

  15. Experimental modeling methods in Industrial Engineering

    Directory of Open Access Journals (Sweden)

    Peter Trebuňa

    2009-03-01

    Full Text Available Dynamic approaches to management in present-day industrial practice force businesses to address the continuous in-house improvement of production and non-production processes. Experience has repeatedly demonstrated the need for a systems approach not only in analysis but also in the planning and actual implementation of these processes. This contribution therefore describes modeling in industrial practice from a systems perspective, with the aim of avoiding erroneous decisions at the implementation phase and thus preventing prolonged reliance on "trial and error" methods.

  16. CT-Sellink - a new method for demonstrating the gut wall

    International Nuclear Information System (INIS)

    Thiele, J.; Kloeppel, R.; Schulz, H.G.

    1993-01-01

    34 patients were examined by CT following a modified enema (CT-Sellink) in order to demonstrate the gut. By introducing a 'gut index' it is possible to define the tone of the gut, provided its folds remain constant. By means of a radial density profile the gut wall can be defined objectively and in numerical terms. Gut wall thickness in the small bowel averaged 1.2 mm with a density of 51 HU, and gut wall thickness in the colon averaged 2 mm with a density of 59 HU. (orig.) [de
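
    The radial density profile mentioned above can be pictured as averaging Hounsfield units in concentric annuli around the lumen centre; a schematic sketch of that idea (the function and parameter names are illustrative, not taken from the paper):

```python
import numpy as np

def radial_density_profile(ct_slice, center, bin_mm=0.5, pixel_mm=0.7):
    """Mean CT density (HU) in concentric annuli around a lumen centre.

    ct_slice: 2-D array of Hounsfield units; center: (row, col) index.
    Returns one mean HU value per radial bin of width bin_mm.
    """
    rows, cols = np.indices(ct_slice.shape)
    r_mm = np.hypot(rows - center[0], cols - center[1]) * pixel_mm
    bins = (r_mm / bin_mm).astype(int)
    sums = np.bincount(bins.ravel(), weights=ct_slice.ravel())
    counts = np.bincount(bins.ravel())
    return sums / np.maximum(counts, 1)

# The gut wall then appears as a band of elevated density (roughly
# 51-59 HU here) between the lumen contents and the surrounding fat.
```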

  17. A new teaching model for demonstrating the movement of the extraocular muscles.

    Science.gov (United States)

    Iwanaga, Joe; Refsland, Jason; Iovino, Lee; Holley, Gary; Laws, Tyler; Oskouian, Rod J; Tubbs, R Shane

    2017-09-01

    The extraocular muscles consist of the superior, inferior, lateral, and medial rectus muscles and the superior and inferior oblique muscles. This study aimed to create a new teaching model for demonstrating the function of the extraocular muscles. A coronal section of the head was prepared and sutures attached to the levator palpebrae superioris muscle and six extraocular muscles. Tension was placed on each muscle from a posterior approach and movement of the eye documented from an anterior view. All movements were clearly seen except that of the inferior rectus muscle. To our knowledge, this is the first cadaveric teaching model for demonstrating the movements of the extraocular muscles. Clin. Anat. 30:733-735, 2017. © 2017 Wiley Periodicals, Inc.

  18. A Simple Method To Demonstrate the Enzymatic Production of Hydrogen from Sugar

    Science.gov (United States)

    Hershlag, Natalie; Hurley, Ian; Woodward, Jonathan

    1998-10-01

    There is current interest in and concern for the development of environmentally friendly bioprocesses whereby biomass and the biodegradable content of municipal wastes can be converted to useful forms of energy. For example, cellulose, a glucose polymer that is the principal component of biomass and paper waste, can be enzymatically degraded to glucose, which can subsequently be converted by fermentation or further enzymatic reaction to fuels such as ethanol or hydrogen. These products represent alternative energy sources to fossil fuels such as oil. Demonstration of the relevant reactions in high-school and undergraduate college laboratories would have value not only in illustrating environmentally friendly biotechnology for the utilization of renewable energy sources, such as cellulosic wastes, but could also be used to teach the principles of enzyme-catalyzed reactions. In the experimental protocol described here, it has been demonstrated that the common sugar glucose can be used to produce hydrogen using two enzymes, glucose dehydrogenase and hydrogenase. No sophisticated or expensive hydrogen detection equipment is required: only a redox dye, benzyl viologen, which turns purple when it is reduced. The color can be detected by a simple colorimeter. Furthermore, it is shown that the renewable resource cellulose, in its soluble derivative form, carboxymethylcellulose, as well as aspen-wood waste, is also a source of hydrogen if the enzyme cellulase is included in the reaction mixture.
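
    The coupled enzyme reactions can be sketched as follows; this is a plausible scheme consistent with the abstract (the exact cofactor and mediator stoichiometry depends on the enzyme preparations used):

```latex
\text{glucose} + \mathrm{NADP^{+}}
  \;\xrightarrow{\text{glucose dehydrogenase}}\;
  \text{gluconolactone} + \mathrm{NADPH} + \mathrm{H^{+}},
\qquad
\mathrm{NADPH} + \mathrm{H^{+}}
  \;\xrightarrow{\text{hydrogenase}}\;
  \mathrm{NADP^{+}} + \mathrm{H_{2}}
```

    The benzyl viologen indicator intercepts part of this electron flow (BV2+ is reduced to the purple BV radical cation), so color development tracks the reducing equivalents available for hydrogen production.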

  19. Presentation on the Modeling and Educational Demonstrations Laboratory Curriculum Materials Center (MEDL-CMC): A Working Model and Progress Report

    Science.gov (United States)

    Glesener, G. B.; Vican, L.

    2015-12-01

    Physical analog models and demonstrations can be effective educational tools for helping instructors teach abstract concepts in the Earth, planetary, and space sciences. Reducing the learning challenges for students using physical analog models and demonstrations, however, can often increase instructors' workload and budget because the cost and time needed to produce and maintain such curriculum materials is substantial. First, this presentation describes a working model for the Modeling and Educational Demonstrations Laboratory Curriculum Materials Center (MEDL-CMC) to support instructors' use of physical analog models and demonstrations in the science classroom. The working model is based on a combination of instructional resource models developed by the Association of College & Research Libraries and by the Physics Instructional Resource Association. The MEDL-CMC aims to make the curriculum materials available for all science courses and outreach programs within the institution where the MEDL-CMC resides. The sustainability and value of the MEDL-CMC comes from its ability to provide and maintain a variety of physical analog models and demonstrations in a wide range of science disciplines. Second, the presentation then reports on the development, progress, and future of the MEDL-CMC at the University of California Los Angeles (UCLA). Development of the UCLA MEDL-CMC was funded by a grant from UCLA's Office of Instructional Development and is supported by the Department of Earth, Planetary, and Space Sciences. Other UCLA science departments have recently shown interest in the UCLA MEDL-CMC services, and therefore, preparations are currently underway to increase our capacity for providing interdepartmental service. The presentation concludes with recommendations and suggestions for other institutions that wish to start their own MEDL-CMC in order to increase educational effectiveness and decrease instructor workload. We welcome an interuniversity collaboration to

  20. Smart grid demonstrators and experiments in France: Economic assessments of smart grids. Challenges, methods, progress status and demonstrators; Contribution of 'smart grid' demonstrators to electricity transport and market architectures; Challenges and contributions of smart grid demonstrators to the distribution network. Focus on the integration of decentralised production; Challenges and contributions of smart grid demonstrators to the evolution of providing-related professions and to consumption practices

    International Nuclear Information System (INIS)

    Sudret, Thierry; Belhomme, Regine; Nekrassov, Andrei; Chartres, Sophie; Chiappini, Florent; Drouineau, Mathilde; Hadjsaid, Nouredine; Leonard, Cedric; Bena, Michel; Buhagiar, Thierry; Lemaitre, Christian; Janssen, Tanguy; Guedou, Benjamin; Viana, Maria Sebastian; Malarange, Gilles; Hadjsaid, Nouredine; Petit, Marc; Lehec, Guillaume; Jahn, Rafael; Gehain, Etienne

    2015-01-01

    This publication proposes a set of four articles which give an overview of the challenges and contributions of smart grid demonstrators for the French electricity system from different perspectives and for different stakeholders. These articles present the first lessons learned from these demonstrators in terms of technical and technological innovations, of business and regulation models, and of customer behaviour and acceptance. More precisely, the authors discuss economic assessments of smart grids with an overview of challenges, methods, progress status and existing smart grid programs around the world, comment on the importance of introducing intelligence at the hardware, software and market levels, highlight the challenges and contributions of smart grids for the integration of decentralised production, and discuss how smart grid demonstrators affect providing-related professions and customer consumption practices

  1. Research and Demonstration of 'Double-chain' Eco-agricultural Model Standardization and Industrialization

    Directory of Open Access Journals (Sweden)

    ZHANG Jia-hong

    2015-04-01

    Full Text Available Based on the agricultural resource endowment of Jiangsu Province, this paper created several 'double-chain' eco-agricultural models and integrated supporting systems built around 'waterfowl, marine lives, aquatic vegetables and paddy rice', 'special food and economic crops with livestock', and 'special food and economic crops with livestock and marine lives', which are suitable for extension and application in Jiangsu Province. Besides, it set 12 provincial standards and established a preliminary technical standard system for the 'double-chain' eco-agricultural model. In addition, it explored 'leading agricultural enterprises (agricultural co-operatives or family farms) + demonstration zones + farmer households' as the operating mechanism for industrialization of the eco-agricultural model, which pushed forward the rapid development of standardization and industrialization of the 'double-chain' eco-agricultural model.

  2. Biases of chamber methods for measuring soil CO2 efflux demonstrated with a laboratory apparatus.

    Science.gov (United States)

    S. Mark Nay; Kim G. Mattson; Bernard T. Bormann

    1994-01-01

    Investigators have historically measured soil CO2 efflux as an indicator of soil microbial and root activity and more recently in calculations of carbon budgets. The most common methods estimate CO2 efflux by placing a chamber over the soil surface and quantifying the amount of CO2 entering the...

  3. To Demonstrate the Specificity of an Enzymatic Method for Plasma Paracetamol Estimation.

    Science.gov (United States)

    O'Mullane, John A.

    1987-01-01

    Describes an experiment designed to introduce biochemistry students to the specificity of an analytical method which uses an enzyme to quantitate its substrate. Includes the use of toxicity charts together with the concept of the biological half-life of a drug. (TW)

  4. Demonstrating the Effectiveness of an Integrated and Intensive Research Methods and Statistics Course Sequence

    Science.gov (United States)

    Pliske, Rebecca M.; Caldwell, Tracy L.; Calin-Jageman, Robert J.; Taylor-Ritzler, Tina

    2015-01-01

    We developed a two-semester series of intensive (six-contact hours per week) behavioral research methods courses with an integrated statistics curriculum. Our approach includes the use of team-based learning, authentic projects, and Excel and SPSS. We assessed the effectiveness of our approach by examining our students' content area scores on the…

  5. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    Energy Technology Data Exchange (ETDEWEB)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia; Grelle, Austin

    2015-04-26

    Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.
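
    At its core, a RISMC-style assessment compares a sampled "load" against a "capacity" and reads the failure probability off the margin distribution. A deliberately simplified sketch of that idea (the surrogate temperature relation and all numbers below are invented for illustration; the actual study propagates uncertainties through RELAP5-3D, not a closed-form surrogate):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000  # sampled uncertainty realizations

# Hypothetical uncertain inputs for a station-blackout transient.
decay_heat = rng.normal(1.00, 0.05, n)   # normalized decay-heat level
rccs_eff = rng.lognormal(0.0, 0.15, n)   # cavity-cooling effectiveness

# Stand-in for the system code: peak temperature rises with decay heat
# and falls as the passive cavity cooling becomes more effective.
peak_temp_c = 550.0 + 120.0 * decay_heat / rccs_eff

capacity_c = 700.0  # illustrative temperature limit ("capacity")
margin = capacity_c - peak_temp_c
print(f"estimated P(functional failure) = {np.mean(margin < 0):.4f}")
```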

  6. Demonstration uncertainty/sensitivity analysis using the health and economic consequence model CRAC2

    International Nuclear Information System (INIS)

    Alpert, D.J.; Iman, R.L.; Johnson, J.D.; Helton, J.C.

    1985-01-01

    This paper summarizes a demonstration uncertainty/sensitivity analysis performed on the reactor accident consequence model CRAC2. The study was performed with uncertainty/sensitivity analysis techniques compiled as part of the MELCOR program. The principal objectives of the study were: 1) to demonstrate the use of the uncertainty/sensitivity analysis techniques on a health and economic consequence model, 2) to test the computer models which implement the techniques, 3) to identify possible difficulties in performing such an analysis, and 4) to explore alternative means of analyzing, displaying, and describing the results. Demonstration of the applicability of the techniques was the motivation for performing this study; thus, the results should not be taken as a definitive uncertainty analysis of health and economic consequences. Nevertheless, significant insights on health and economic consequence analysis can be drawn from the results of this type of study. Latin hypercube sampling (LHS), a modified Monte Carlo technique, was used in this study. LHS generates a multivariate input structure in which all the variables of interest are varied simultaneously and desired correlations between variables are preserved. LHS has been shown to produce estimates of output distribution functions that are comparable with results of larger random samples
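
    For readers unfamiliar with LHS, the stratified sampling step can be written compactly; a minimal sketch on the unit hypercube (not the CRAC2/MELCOR implementation, and without the separate rank-correlation step used to preserve desired correlations between variables):

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, seed=0):
    """Latin hypercube sample on [0, 1)^n_vars.

    Each variable's range is cut into n_samples equal strata; exactly one
    point is drawn per stratum, and the strata are shuffled independently
    per variable so all variables are varied simultaneously.
    """
    rng = np.random.default_rng(seed)
    jitter = rng.random((n_samples, n_vars))
    strata = np.column_stack(
        [rng.permutation(n_samples) for _ in range(n_vars)]
    )
    return (strata + jitter) / n_samples

# e.g., a design of 50 runs over three uncertain inputs
design = latin_hypercube(50, 3)
```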

  7. Operational method for demonstrating fuel loading integrity in a reactor having accessible 235U fuel

    International Nuclear Information System (INIS)

    Ward, D.R.

    1979-07-01

    The Health Physics Research Reactor is a small pulse reactor at the Oak Ridge National Laboratory. It is desirable for the operator to be able to demonstrate on a routine basis that all the fuel pieces are present in the reactor core. Accordingly, a technique has been devised wherein the control rod readings are recorded with the reactor at delayed critical and corrections are made to compensate for the effects of variations in reactor height above the floor, reactor power, core temperature, and the presence of any massive neutron reflectors. The operator then compares these readings with the values expected based on previous operating experience. If this routine operational check suggests that the core fuel loading might be deficient, a more rigorous follow-up may be made

  8. The systems prioritization method (SPM) CD-ROM demonstration for Waste Management '96

    International Nuclear Information System (INIS)

    Harris, C.L.; Boak, D.M.; Prindle, N.H.; Beyeler, W.

    1996-01-01

    In March 1994, the Department of Energy Carlsbad Area Office (DOE/CAO) implemented a performance-based planning method to assist in prioritization within the Waste Isolation Pilot Plant (WIPP). Probabilistic performance calculations were required for the Systems Prioritization Method (SPM), and roughly 46,700 combinations of activities were analyzed, generating a large volume of information to be documented, analyzed, and communicated. A self-contained information management system consisting of a relational database on a 600-megabyte CD-ROM was built to meet this need. The CD-ROM was used to store performance assessment results, data analysis and visualization tools, information about the activities, electronic copies of 40 CFR 191 and 40 CFR 268, technical reference papers, and the final SPM report. Copies of the CD-ROM were distributed to interested members of the public, WIPP participants, and the Environmental Protection Agency (EPA)

  9. Theory, Demonstration and Methods: Research on Social Security of Migrant Workers by Domestic Scholar

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Social security for migrant workers has been significant in dissolving social contradictions and achieving economic and social development in China during the transitional period. The research of domestic scholars on the social security of migrant workers can be classified into three categories. Firstly, theoretical analyses of migrant workers' social security, including research on the appeal of social security and misunderstandings in its recognition, theory-construction of rural-worker social security, and policy defects and equity construction in the social security system for migrant workers. Secondly, empirical studies of migrant workers' social security, including research on the order of demand and the influencing factors of migrant workers' social security, as well as the intrinsic motivations forming perspectives on social security. Lastly, explorations of roads to establishing a social security system, including research on the multi-level development of the rural-worker social security system, comparison of the 'Double-low method' of Zhejiang Province with the 'Guangdong Method' and the 'Shanghai Method' of migrant workers' social security, and establishing a multi-level social security system according to the hierarchy formed by internal differentiation.

  10. Mathematical modeling of the drying of extruded fish feed and its experimental demonstration

    DEFF Research Database (Denmark)

    Haubjerg, Anders Fjeldbo; Simonsen, B.; Løvgreen, S.

    This paper presents a mathematical model for the drying of extruded fish feed pellets. The model relies on conservation balances for moisture and energy. Sorption isotherms from the literature are used together with diffusion and transfer coefficients obtained from dual-parameter regression analysis against experimental data. The lumped capacitance method is used for the estimation of the heat transfer coefficient. The model performs well at temperatures within ±5 °C of the sorption isotherm's specified temperature, and for different pellet sizes. There is a slight under-estimation of the surface temperature of denser feed...

  11. A Pattern-Oriented Approach to a Methodical Evaluation of Modeling Methods

    Directory of Open Access Journals (Sweden)

    Michael Amberg

    1996-11-01

    Full Text Available The paper describes a pattern-oriented approach to evaluate modeling methods and to compare various methods with each other from a methodical viewpoint. A specific set of principles (the patterns) is defined by investigating the notations and the documentation of comparable modeling methods. Each principle helps to examine some parts of the methods from a specific point of view. All principles together lead to an overall picture of the method under examination. First the core ("method-neutral") meaning of each principle is described. Then the methods are examined regarding the principle. Afterwards the method-specific interpretations are compared with each other and with the core meaning of the principle. By this procedure, the strengths and weaknesses of modeling methods regarding methodical aspects are identified. The principles are described uniformly using a principle description template, analogous to descriptions of object-oriented design patterns. The approach is demonstrated by evaluating a business process modeling method.

  12. Demonstration of a Sensitive Method to Measure Nuclear-Spin-Dependent Parity Violation

    Science.gov (United States)

    Altuntaş, Emine; Ammon, Jeffrey; Cahn, Sidney B.; DeMille, David

    2018-04-01

    Nuclear-spin-dependent parity violation (NSD-PV) effects in atoms and molecules arise from Z0 boson exchange between electrons and the nucleus, and from the magnetic interaction between electrons and the parity-violating nuclear anapole moment. We demonstrate measurements of NSD-PV that use an enhancement of the effect in diatomic molecules, here using the test system 138Ba19F. Our sensitivity surpasses that of any previous atomic parity violation measurement. We show that systematic errors can be suppressed to at least the level of the present statistical sensitivity. We measure the matrix element W of the NSD-PV interaction with total uncertainty δW/(2π) < 0.7 Hz, for each of two configurations where W must have different signs. This sensitivity would be sufficient to measure NSD-PV effects of the size anticipated across a wide range of nuclei, including 137Ba in 137BaF, where |W|/(2π) ≈ 5 Hz is expected.

  13. Development of a Terrestrial Modeling System: The China-wide Demonstration

    Science.gov (United States)

    Duan, Q.; Dai, Y.; Zheng, X.; Ye, A.; Chen, Z.; Shangguang, W.

    2010-12-01

    A terrestrial modeling system (TMS) is being developed at Beijing Normal University. The purposes of TMS are (1) to provide a land surface parameterization scheme fully capable of being coupled with climate and Earth system models of different scales; (2) to provide a standalone platform for simulation and prediction of land surface processes; and (3) to provide a platform for studying human-Earth system interactions. This system will build on and extend existing capabilities at BNU, including the Common Land Model (CoLM) system, high-resolution atmospheric forcing data sets, high-resolution soil and vegetation data sets, and high-performance computing facilities and software. This presentation describes the system design and demonstrates the initial capabilities of TMS in simulating water and energy fluxes over continental China for a multi-year period.

  14. ACTIVE AND PARTICIPATORY METHODS IN BIOLOGY: MODELING

    Directory of Open Access Journals (Sweden)

    Brînduşa-Antonela SBÎRCEA

    2011-01-01

    Full Text Available By using active and participatory methods it is hoped that pupils will not only come to a deeper understanding of the issues involved, but also that their motivation will be heightened. Pupil involvement in their learning is essential. Moreover, by using a variety of teaching techniques, we can help students make sense of the world in different ways, increasing the likelihood that they will develop a conceptual understanding. The teacher must be a good facilitator, monitoring and supporting group dynamics. Modeling is an instructional strategy in which the teacher demonstrates a new concept or approach to learning and pupils learn by observing. In the teaching of biology the didactic materials are fundamental tools in the teaching-learning process. Reading about scientific concepts or having a teacher explain them is not enough. Research has shown that modeling can be used across disciplines and in all grade and ability level classrooms. Using this type of instruction, teachers encourage learning.

  15. Demonstration of retrieval methods for Westinghouse Hanford Corporation October 20, 1995

    International Nuclear Information System (INIS)

    1996-10-01

    Westinghouse Hanford Corporation has been pursuing strategies to break up and retrieve the radioactive waste material in single shell storage tanks at the Hanford Nuclear Reservation, by working with non-radioactive "saltcake" and sludge material that simulate the actual waste. It has been suggested that the use of higher volumes of water than used in the past (10 gpm nozzles at 10,000 psi) might be successful in breaking down the hard waste simulants. Additionally, these higher volumes of water might successfully be applied through commercially available tooling, using methods similar to those used in the deslagging of large utility boilers. NMW Industrial Services, Inc., has proposed a trial consisting of three approaches each to dislodging both the solid (saltcake) simulant and the sludge simulant

  16. Demonstration of retrieval methods for Westinghouse Hanford Corporation October 20, 1995

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-10-01

    Westinghouse Hanford Corporation has been pursuing strategies to break up and retrieve the radioactive waste material in single shell storage tanks at the Hanford Nuclear Reservation, by working with non-radioactive "saltcake" and sludge material that simulate the actual waste. It has been suggested that the use of higher volumes of water than used in the past (10 gpm nozzles at 10,000 psi) might be successful in breaking down the hard waste simulants. Additionally, these higher volumes of water might successfully be applied through commercially available tooling, using methods similar to those used in the deslagging of large utility boilers. NMW Industrial Services, Inc., has proposed a trial consisting of three approaches each to dislodging both the solid (saltcake) simulant and the sludge simulant.

  17. Analytical methods used at model facility

    International Nuclear Information System (INIS)

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  18. Project to develop and demonstrate methods to eliminate frozen coal handling problems. Status report I

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-09-15

    For too many years, problems associated with frozen coal have plagued the companies who mine it, the companies who handle it in transit and the utilities and other industrial concerns that finally burn it. But never before has the magnitude of the frozen coal problem been as great as it is today because of two primary factors, i.e. (1) the majority of coal currently transported and used has been ground to a very fine mesh that absorbs water readily, thus providing more surface area for freezing, and (2) the substantially increased importance of coal, indeed, the now critical necessity for more coal to be used in displacing dangerously uncertain foreign oil supplies that currently account for 50 percent of our daily domestic oil consumption. Frozen coal problems can and do have a devastating effect upon the ability to provide energy from coal during harsh winter months when it is most needed. The majority of these problems have involved removing frozen coal from rail cars. To allay the problem, numerous techniques have been tried, all with some measure of success. As an example, certain chemicals have been sprayed on the coal; another common treatment has been widespread use of thaw sheds, which, whether electrically or gas-fired, are all energy intensive, time consuming, hard on rail equipment and expensive to operate over long periods of time. From sledge hammers and crow bars to gas-fired jets and electric thaw sheds, available mechanical de-icing methods often damage coal handling equipment, are time consuming and, therefore, very expensive when demurrage losses must be added to significant investment and/or operating costs.

  19. A mechanistic model for electricity consumption on dairy farms: definition, validation, and demonstration.

    Science.gov (United States)

    Upton, J; Murphy, M; Shalloo, L; Groot Koerkamp, P W G; De Boer, I J M

    2014-01-01

    Our objective was to define and demonstrate a mechanistic model that enables dairy farmers to explore the impact of a technical or managerial innovation on electricity consumption, associated CO2 emissions, and electricity costs. We, therefore, (1) defined a model for electricity consumption on dairy farms (MECD) capable of simulating total electricity consumption along with related CO2 emissions and electricity costs on dairy farms on a monthly basis; (2) validated the MECD using empirical data of 1 yr on commercial spring-calving, grass-based dairy farms with 45, 88, and 195 milking cows; and (3) demonstrated the functionality of the model by applying 2 electricity tariffs to the electricity consumption data and examining the effect on total dairy farm electricity costs. The MECD was developed using a mechanistic modeling approach and required the key inputs of milk production, cow number, and details relating to the milk-cooling system, milking machine system, water-heating system, lighting systems, water pump systems, and the winter housing facilities, as well as details relating to the management of the farm (e.g., season of calving). Model validation showed an overall relative prediction error (RPE) of less than 10% for total electricity consumption. More than 87% of the mean square prediction error of total electricity consumption was accounted for by random variation. The RPE values of the milk-cooling systems, water-heating systems, and milking machine systems were less than 20%. The RPE values for automatic scraper systems, lighting systems, and water pump systems varied from 18 to 113%, indicating a poor prediction for these metrics. However, automatic scrapers, lighting, and water pumps made up only 14% of total electricity consumption across all farms, reducing the overall impact of these poor predictions. Demonstration of the model showed that total farm electricity costs increased by between 29 and 38% by moving from a day and night tariff to a flat
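
    The relative prediction error statistic referred to above is conventionally computed as the root mean square prediction error expressed as a percentage of the observed mean; a standard formulation (the paper's exact notation may differ slightly):

```latex
\mathrm{MSPE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2},
\qquad
\mathrm{RPE} = \frac{\sqrt{\mathrm{MSPE}}}{\bar{y}}\times 100\%
```

    An RPE below 10% for total consumption therefore means the typical prediction error is less than one tenth of the average observed electricity use.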

  20. Modelling Plane Geometry: the connection between Geometrical Visualization and Algebraic Demonstration

    Science.gov (United States)

    Pereira, L. R.; Jardim, D. F.; da Silva, J. M.

    2017-12-01

    The teaching and learning of Mathematics contents have been challenging throughout the history of education, both for the teacher, in his dedicated task of teaching, and for the student, in his arduous and constant task of learning. One of the most discussed topics in these contents is the difference between the concepts of proof and demonstration. This work presents an interesting discussion of these concepts, considering the use of the mathematical modeling approach to teaching, applied to some examples developed in the classroom with a group of students enrolled in the Geometry discipline of the Mathematics course at UFVJM.

  1. Energy models: methods and trends

    Energy Technology Data Exchange (ETDEWEB)

    Reuter, A [Division of Energy Management and Planning, Verbundplan, Klagenfurt (Austria); Kuehner, R [IER Institute for Energy Economics and the Rational Use of Energy, University of Stuttgart, Stuttgart (Germany); Wohlgemuth, N [Department of Economy, University of Klagenfurt, Klagenfurt (Austria)

    1997-12-31

    Energy, environmental and economic systems do not allow for experimentation, since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling will be pointed out. We distinguish between three reasons for new developments: progress in computer technology, methodological progress, and novel tasks of energy system analysis and planning. 2 figs., 19 refs.

  2. Energy models: methods and trends

    International Nuclear Information System (INIS)

    Reuter, A.; Kuehner, R.; Wohlgemuth, N.

    1996-01-01

    Energy, environmental and economic systems do not allow for experimentation, since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling will be pointed out. We distinguish between three reasons for new developments: progress in computer technology, methodological progress, and novel tasks of energy system analysis and planning

  3. Single-shot spiral imaging enabled by an expanded encoding model: Demonstration in diffusion MRI.

    Science.gov (United States)

    Wilm, Bertram J; Barmet, Christoph; Gross, Simon; Kasper, Lars; Vannesjo, S Johanna; Haeberlin, Max; Dietrich, Benjamin E; Brunner, David O; Schmid, Thomas; Pruessmann, Klaas P

    2017-01-01

    The purpose of this work was to improve the quality of single-shot spiral MRI and demonstrate its application for diffusion-weighted imaging. Image formation is based on an expanded encoding model that accounts for dynamic magnetic fields up to third order in space, nonuniform static B0, and coil sensitivity encoding. The encoding model is determined by B0 mapping, sensitivity mapping, and concurrent field monitoring. Reconstruction is performed by iterative inversion of the expanded signal equations. Diffusion-tensor imaging with single-shot spiral readouts is performed in a phantom and in vivo, using a clinical 3T instrument. Image quality is assessed in terms of artefact levels, image congruence, and the influence of the different encoding factors. Using the full encoding model, diffusion-weighted single-shot spiral imaging of high quality is accomplished both in vitro and in vivo. Accounting for actual field dynamics, including higher orders, is found to be critical to suppress blurring, aliasing, and distortion. Enhanced image congruence permitted data fusion and diffusion tensor analysis without coregistration. Use of an expanded signal model largely overcomes the traditional vulnerability of spiral imaging with long readouts. It renders single-shot spirals competitive with echo-planar readouts and thus deploys shorter echo times and superior readout efficiency for diffusion imaging and further prospective applications. Magn Reson Med 77:83-91, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
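
    In simplified notation, an expanded encoding model of this kind takes the form below, where m is the transverse magnetization, c_gamma the sensitivity of coil gamma, b_l(r) the spatial (spherical-harmonic) basis functions up to third order, k_l(t) the monitored phase coefficients, and Delta-omega_0(r) the off-resonance from the static B0 map (a hedged restatement, not the paper's exact notation):

```latex
s_{\gamma}(t) \;=\; \int m(\mathbf{r})\, c_{\gamma}(\mathbf{r})\,
\exp\!\Big( i \sum_{l} k_{l}(t)\, b_{l}(\mathbf{r})
\;+\; i\, \Delta\omega_{0}(\mathbf{r})\, t \Big)\, d\mathbf{r}
```

    Reconstruction then amounts to discretizing this integral and solving the resulting linear system iteratively (e.g., by conjugate gradients) for m.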

  4. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project called "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  5. Heterogeneity and contaminant transport modeling for the Savannah River integrated demonstration site

    International Nuclear Information System (INIS)

    Chesnut, D.A.

    1992-11-01

    The effectiveness of remediating aquifers and vadose zone sediments is frequently controlled by spatial heterogeneities. A continuing and long-recognized problem in selecting, planning, implementing, and operating remediation projects is the development of methods for quantitatively describing heterogeneity and predicting its effects on process performance. The similarity to and differences from modeling oil recovery processes in the petroleum industry are illustrated by the extension to contaminant extraction processes of an analytic model originally developed for waterflooding petroleum reservoirs. The resulting equations incorporate the effects of heterogeneity through a single parameter, σ. Fitting this model to the Savannah River in situ Air Stripping test data suggests that the injection of air into a horizontal well below the water table may have improved performance by changing the flow pattern in the vadose zone. This change increased the capture volume, and consequently the contaminant mass inventory, of the horizontal injection well completed in the vadose zone. The apparent increases (compared to extraction only from the horizontal well) are from 10,200 to 21,000 pounds for TCE and from 3,600 pounds to 59,800 pounds for PCE. The predominance of PCE in this calculated increase suggests that redistribution of flow paths in the vadose zone, rather than in-situ stripping, may provide most of the improvement. Although this preliminary conclusion remains to be reinforced by more sophisticated modeling currently in progress, there appears to be a definite improvement, which is attributable to air injection, over conventional remediation methods

  6. Methods for histochemical demonstration of vascular structures at the muscle-bone interface from cryostate sections of demineralized tissue

    DEFF Research Database (Denmark)

    Kirkeby, S

    1981-01-01

    In tissue decalcified with MgNa2EDTA at a neutral pH, activity for ATPase can be used for demonstration of the vascular structures at the muscle-bone interface. The GOMORI method for alkaline phosphatase is only of value when fresh unfixed tissue is to be examined. The azo-dye method for alkaline phosphatase failed to give satisfactory results, and so did the alpha-amylase PAS method. 5'-nucleotidase activity is present in both capillaries and in cells lining the surfaces of bones, while larger blood vessels are poorly stained.

  7. A High-Resolution Terrestrial Modeling System (TMS): A Demonstration in China

    Science.gov (United States)

    Duan, Q.; Dai, Y.; Zheng, X.; Ye, A.; Ji, D.; Chen, Z.

    2013-12-01

    This presentation describes a terrestrial modeling system (TMS) developed at Beijing Normal University. The TMS is designed to be driven by multi-sensor meteorological and land surface observations, including those from satellites and land based observing stations. The purposes of the TMS are (1) to provide a land surface parameterization scheme fully capable of being coupled with the Earth system models; (2) to provide a standalone platform for retrospective historical simulation and for forecasting of future land surface processes at different space and time scales; and (3) to provide a platform for studying human-Earth system interactions and for understanding climate change impacts. This system is built on capabilities among several groups at BNU, including the Common Land Model (CoLM) system, high-resolution atmospheric forcing data sets, high resolution land surface characteristics data sets, data assimilation and uncertainty analysis platforms, ensemble prediction platform, and high-performance computing facilities. This presentation intends to describe the system design and demonstrate the capabilities of TMS with results from a China-wide application.

  8. Near-point string: Simple method to demonstrate anticipated near point for multifocal and accommodating intraocular lenses.

    Science.gov (United States)

    George, Monica C; Lazer, Zane P; George, David S

    2016-05-01

    We present a technique that uses a near-point string to demonstrate the anticipated near point of multifocal and accommodating intraocular lenses (IOLs). Beads are placed on the string at distances corresponding to the near points for diffractive and accommodating IOLs. The string is held up to the patient's eye to demonstrate where each of the IOLs is likely to provide the best near vision. None of the authors has a financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
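
    The bead positions follow from simple vergence arithmetic: the anticipated near point (in meters) is roughly the reciprocal of the lens's effective near addition (in diopters). An illustrative calculation (the add powers shown are examples, not values from the paper):

```latex
d_{\text{near}} \approx \frac{1}{A_{\text{eff}}}\,,\qquad
A_{\text{eff}} = +3.00\ \mathrm{D} \;\Rightarrow\; d_{\text{near}} \approx 0.33\ \mathrm{m},\qquad
A_{\text{eff}} = +2.50\ \mathrm{D} \;\Rightarrow\; d_{\text{near}} \approx 0.40\ \mathrm{m}
```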

  9. USING DEMONSTRATION COMPUTER MODELS WHILE SOLVING CYLINDER SECTION CONSTRUCTION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Inna O. Gulivata

    2010-10-01

    Full Text Available The relevance of the material presented in the article lies in the use of effective methods of illustrating geometric material in order to develop students' spatial imagination. As one way to improve problem solving, we propose illustrating the investigated objects with demonstration computer models (DCM) created in the PowerPoint software environment. The technique of applying DCM while solving problems on constructing a section of a cylinder allows an effective learning process to be built and promotes the formation of students' spatial representations, taking into account their individual characteristics and the principles of differentiated instruction.

  10. Experiment and Modeling of ITER Demonstration Discharges in the DIII-D Tokamak

    International Nuclear Information System (INIS)

    Park, Jin Myung; Doyle, E. J.; Ferron, J.R.; Holcomb, C.T.; Jackson, G.L.; Lao, L.L.; Luce, T.C.; Owen, Larry W.; Murakami, Masanori; Osborne, T.H.; Politzer, P.A.; Prater, R.; Snyder, P.B.

    2011-01-01

    DIII-D is providing experimental evaluation of 4 leading ITER operational scenarios: the baseline scenario in ELMing H-mode, the advanced inductive scenario, the hybrid scenario, and the steady state scenario. The anticipated ITER shape, aspect ratio and value of I/aB were reproduced, with the size reduced by a factor of 3.7, while matching key performance targets for βN and H98. Since 2008, substantial experimental progress was made to improve the match to other expected ITER parameters for the baseline scenario. A lower density baseline discharge was developed with improved stationarity and density control to match the expected ITER edge pedestal collisionality (ν*e ∼ 0.1). Target values for βN and H98 were maintained at lower collisionality (lower density) operation without loss in fusion performance but with significant change in ELM characteristics. The effects of lower plasma rotation were investigated by adding counter-neutral beam power, resulting in only a modest reduction in confinement. Robust preemptive stabilization of 2/1 NTMs was demonstrated for the first time using ECCD under ITER-like conditions. Data from these experiments were used extensively to test and develop theory and modeling for realistic ITER projection and for further development of its optimum scenarios in DIII-D. Theory-based modeling of core transport (TGLF) with an edge pedestal boundary condition provided by the EPED1 model reproduces Te and Ti profiles reasonably well for the 4 ITER scenarios developed in DIII-D. Modeling of the baseline scenario for low and high rotation discharges indicates that a modest performance increase of ∼ 15% is needed to compensate for the expected lower rotation of ITER. Modeling of the steady-state scenario reproduces a strong dependence of confinement, stability, and noninductive fraction (fNI) on q95, as found in the experimental Ip scan, indicating that optimization of the q profile is critical to simultaneously achieving the

  11. Competitive exclusion: an ecological model demonstrates how research metrics can drive women out of science

    Science.gov (United States)

    O'Brien, K.; Hapgood, K.

    2012-12-01

    While universities are often perceived within the wider population as a flexible family-friendly work environment, continuous full-time employment remains the norm in tenure track roles. This traditional career path is strongly reinforced by research metrics, which typically measure accumulated historical performance. There is a strong feedback between historical and future research output, and there is a minimum threshold of research output below which it becomes very difficult to attract funding, high quality students and collaborators. The competing timescales of female fertility and establishment of a research career mean that many women do not exceed this threshold before having children. Using a mathematical model taken from an ecological analogy, we demonstrate how these mechanisms create substantial barriers to pursuing a research career while working part-time or returning from extended parental leave. The model highlights a conundrum for research managers: metrics can promote research productivity and excellence within an organisation, but can classify highly capable scientists as poor performers simply because they have not followed the traditional career path of continuous full-time employment. Based on this analysis, we make concrete recommendations for researchers and managers seeking to retain the skills and training invested in female scientists. We also provide survival tactics for women and men who wish to pursue a career in science while also spending substantial time and energy raising their family.
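
    A toy re-implementation of the feedback described here makes the threshold effect concrete; the functional forms and numbers below are invented for illustration and are not the authors' published model:

```python
def career_output(initial, time_fraction, years=20, feedback=0.3,
                  decay=0.2, threshold=1.0):
    """Toy model: research output attracts resources in proportion to
    accumulated output (the metric feedback), but only above a minimum
    visibility threshold; output otherwise decays."""
    output, history = initial, []
    for _ in range(years):
        resources = feedback * output if output >= threshold else 0.0
        output = max(output + time_fraction * resources - decay * output, 0.0)
        history.append(output)
    return history

full_time = career_output(2.0, time_fraction=1.0)
part_time = career_output(2.0, time_fraction=0.5)  # e.g., return from leave
print(f"after 20 y: full-time {full_time[-1]:.2f}, part-time {part_time[-1]:.2f}")
```

    Even a modest reduction in working-time fraction turns the metric feedback from compounding growth into slow decline, and crossing the visibility threshold makes the decline essentially irreversible; that is the competitive-exclusion mechanism in miniature.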

  12. User Delay Cost Model and Facilities Maintenance Cost Model for a Terminal Control Area : Volume 1. Model Formulation and Demonstration

    Science.gov (United States)

    1978-05-01

    The User Delay Cost Model (UDCM) is a Monte Carlo computer simulation of essential aspects of Terminal Control Area (TCA) air traffic movements that would be affected by facility outages. The model can also evaluate delay effects due to other factors...

  13. Structural modeling techniques by finite element method

    International Nuclear Information System (INIS)

    Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong

    1991-01-01

    This book includes: Chapter 1, Finite element idealization: introduction, summary of the finite element method, equilibrium and compatibility in the finite element solution, degrees of freedom, symmetry and anti-symmetry, modeling guidelines, local analysis, example, references. Chapter 2, Static analysis: structural geometry, finite element models, analysis procedure, modeling guidelines, references. Chapter 3, Dynamic analysis: models for dynamic analysis, dynamic analysis procedures, modeling guidelines.

  14. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  15. A business case method for business models

    OpenAIRE

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model alternatives and choose the best one. In this article, we develop a business case method to objectively compare business models. It is an eight-step method, starting with business drivers and ending wit...

  16. Development of a Pharmacoeconomic Model to Demonstrate the Effect of Clinical Pharmacist Involvement in Diabetes Management.

    Science.gov (United States)

    Ourth, Heather; Nelson, Jordan; Spoutz, Patrick; Morreale, Anthony P

    2018-05-01

    A data collection tool was developed and nationally deployed to clinical pharmacists (CPs) working in advanced practice provider roles within the Department of Veterans Affairs to document interventions and associated clinical outcomes. Intervention and short-term clinical outcome data derived from the tool were used to populate a validated clinical outcomes modeling program to predict long-term clinical and economic effects. To predict the long-term effect of CP-provided pharmacotherapy management on outcomes and costs for patients with type 2 diabetes. Baseline patient demographics and biomarkers were extracted for type 2 diabetic patients having > 1 encounter with a CP using the tool between January 5, 2013, and November 20, 2014. Treatment biomarker values were extracted 12 months after the patient's initial visit with the CP. The number of visits with the CP was extracted from the electronic medical record, and duration of visit time was quantified by Current Procedural Terminology codes. Simulation modeling was performed on 3 patient cohorts-those with a baseline hemoglobin A1c of 8% to < 9%, 9% to < 10%, and ≥ 10%-to estimate long-term cost and clinical outcomes using modeling based on pivotal trial data (the Archimedes Model). A sensitivity analysis was conducted to assess the extent to which our results were dependent on assumptions related to program effectiveness and costs. A total of 7,310 patients were included in the analysis. Analysis of costs and events on 2-, 3-, 5-, and 10-year time horizons demonstrated significant reductions in major adverse cardiovascular events (MACEs), myocardial infarctions (MIs), episodes of acute heart failure, foot ulcers, and foot amputations in comparison with a control group receiving usual guideline-directed medical care. In the cohort with a baseline A1c of ≥ 10%, the absolute risk reduction was 1.82% for MACE, 1.73% for MI, 2.43% for acute heart failure, 5.38% for foot ulcers, and 2.03% for foot amputations. The
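
    The absolute risk reductions quoted for the baseline A1c >= 10% cohort translate directly into approximate numbers needed to treat (NNT = 1/ARR); a quick check:

```python
# Absolute risk reductions reported above for the baseline A1c >= 10%
# cohort, expressed as fractions.
arr = {
    "MACE": 0.0182,
    "MI": 0.0173,
    "acute heart failure": 0.0243,
    "foot ulcers": 0.0538,
    "foot amputations": 0.0203,
}
for outcome, reduction in arr.items():
    # NNT: patients receiving pharmacist-led management per event avoided.
    print(f"{outcome}: NNT ~ {1 / reduction:.0f}")
```

    For example, roughly 55 patients managed by a clinical pharmacist correspond to one MACE avoided over the modeled horizon, and about 19 to one foot ulcer avoided.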

  17. Demonstration uncertainty/sensitivity analysis using the health and economic consequence model CRAC2

    International Nuclear Information System (INIS)

    Alpert, D.J.; Iman, R.L.; Johnson, J.D.; Helton, J.C.

    1984-12-01

    The techniques for performing uncertainty/sensitivity analyses compiled as part of the MELCOR program appear to be well suited for use with a health and economic consequence model. Two replicate samples of size 50 gave essentially identical results, indicating that for this case, a Latin hypercube sample of size 50 seems adequate to represent the distribution of results. Though the intent of this study was a demonstration of uncertainty/sensitivity analysis techniques, a number of insights relevant to health and economic consequence modeling can be gleaned: uncertainties in early deaths are significantly greater than uncertainties in latent cancer deaths; though the magnitude of the source term is the largest source of variation in estimated distributions of early deaths, a number of additional parameters are also important; even with the release fractions for a full SST1, one quarter of the CRAC2 runs gave no early deaths; and comparison of the estimates of mean early deaths for a full SST1 release in this study with those of recent point estimates for similar conditions indicates that the recent estimates may be significant overestimations of early deaths. Estimates of latent cancer deaths, however, are roughly comparable. An analysis of the type described here can provide insights in a number of areas. First, the variability in the results gives an indication of the potential uncertainty associated with the calculations. Second, the sensitivity of the results to assumptions about the input variables can be determined. Research efforts can then be concentrated on reducing the uncertainty in the variables which are the largest contributors to uncertainty in results

  18. Tested Demonstrations.

    Science.gov (United States)

    Gilbert, George L., Ed.

    1987-01-01

    Describes two demonstrations to illustrate characteristics of substances. Outlines a method to detect the changes in pH levels during the electrolysis of water. Uses water pistols, one filled with methane gas and the other filled with water, to illustrate the differences in these two substances. (TW)

  19. Mechatronic Systems Design Methods, Models, Concepts

    CERN Document Server

    Janschek, Klaus

    2012-01-01

    In this textbook, fundamental methods for model-based design of mechatronic systems are presented in a systematic, comprehensive form. The method framework presented here comprises domain-neutral methods for modeling and performance analysis: multi-domain modeling (energy/port/signal-based), simulation (ODE/DAE/hybrid systems), robust control methods, stochastic dynamic analysis, and quantitative evaluation of designs using system budgets. The model framework is composed of analytical dynamic models for important physical and technical domains of realization of mechatronic functions, such as multibody dynamics, digital information processing and electromechanical transducers. Building on the modeling concept of a technology-independent generic mechatronic transducer, concrete formulations for electrostatic, piezoelectric, electromagnetic, and electrodynamic transducers are presented. More than 50 fully worked out design examples clearly illustrate these methods and concepts and enable independent study of th...

  20. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software.

    Science.gov (United States)

    Dols, W Stuart; Persily, Andrew K; Morrow, Jayne B; Matzke, Brett D; Sego, Landon H; Nuffer, Lisa L; Pulsipher, Brent A

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones.

  1. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    Full Text Available Abstract The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease, as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  2. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the geometric-mean method suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
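
    A minimal sketch of the thermodynamic (path-sampling) idea is shown below for a toy normal-mean model: Metropolis chains target the power posterior prior × likelihood^β for a ladder of heating coefficients β between zero and one, and the log marginal likelihood is recovered by integrating the mean log-likelihood over β. The model, ladder spacing, and tuning values are illustrative assumptions, not the groundwater application from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: unknown mean, known unit variance, standard normal prior.
data = rng.normal(0.5, 1.0, size=20)

def log_like(theta):
    return -0.5 * np.sum((data - theta) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

def log_prior(theta):
    return -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)

def mcmc(beta, n=20000, step=0.5):
    """Metropolis sampling of the power posterior prior * likelihood**beta;
    returns the log-likelihood trace after burn-in."""
    theta, ll = 0.0, log_like(0.0)
    out = np.empty(n)
    for i in range(n):
        prop = theta + step * rng.normal()
        ll_prop = log_like(prop)
        log_alpha = beta * (ll_prop - ll) + log_prior(prop) - log_prior(theta)
        if np.log(rng.uniform()) < log_alpha:
            theta, ll = prop, ll_prop
        out[i] = ll
    return out[n // 2:]  # discard burn-in

# Temperature ladder from prior (beta=0) to posterior (beta=1),
# concentrated near zero where the integrand changes fastest.
betas = np.linspace(0.0, 1.0, 11) ** 3
means = [mcmc(b).mean() for b in betas]

# Thermodynamic integration: log m(y) = integral over beta of E_beta[log L].
log_evidence = np.trapz(means, betas)
print(f"estimated log marginal likelihood: {log_evidence:.2f}")
```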

  3. Utility of a mouse model of osteoarthritis to demonstrate cartilage protection by IFNγ-primed equine mesenchymal stem cells

    Directory of Open Access Journals (Sweden)

    Marie Maumus

    2016-09-01

    Full Text Available Objective. Mesenchymal stem cells isolated from adipose tissue (ASC) have been shown to influence the course of osteoarthritis (OA) in different animal models and are promising in veterinary medicine for horses involved in competitive sport. The aim of this study was to characterize equine ASCs (eASC) and investigate the role of interferon-gamma (IFNγ) priming on their therapeutic effect in a murine model of OA, which could be relevant to equine OA. Methods. eASCs were isolated from subcutaneous fat. Expression of specific markers was tested by cytometry and RT-qPCR. Differentiation potential was evaluated by histology and RT-qPCR. For functional assays, naïve or IFNγ-primed eASCs were cocultured with PBMCs or articular cartilage explants. Finally, the therapeutic effect of eASCs was tested in the model of collagenase-induced OA in mice (CIOA). Results. The immunosuppressive function of eASCs on equine T cell proliferation and their chondroprotective effect on equine cartilage explants were demonstrated in vitro. Both cartilage degradation and T cell activation were reduced by naïve and IFNγ-primed eASCs, but IFNγ-priming enhanced these functions. In CIOA, intra-articular injection of eASCs protected articular cartilage from degradation, and IFNγ-primed eASCs were more potent than naïve cells. This effect was related to the modulation of the eASC secretome by IFNγ-priming. Conclusion. IFNγ-priming of eASCs potentiated their antiproliferative and chondroprotective functions. We demonstrated that the immunocompetent mouse model of CIOA was relevant to test the therapeutic efficacy of xenogeneic eASCs for OA and confirmed that IFNγ-primed eASCs may have a therapeutic value for musculoskeletal diseases in veterinary medicine.

  4. Explicit demonstration of the convergence of the close-coupling method for a Coulomb three-body problem

    International Nuclear Information System (INIS)

    Bray, I.; Stelbovics, A.T.

    1992-01-01

    Convergence as a function of the number of states is studied and demonstrated for the Poet-Temkin model of electron-hydrogen scattering. In this Coulomb three-body problem only the l=0 partial waves are treated. By taking as many as thirty target states, obtained by diagonalizing the target Hamiltonian in a Laguerre basis, complete agreement with the smooth results of Poet is obtained at all energies. We show that the often-encountered pseudoresonance features in the cross sections are simply an indication of an inadequate target-state representation.

  5. Methods for demonstration of enzyme activity in muscle fibres at the muscle/bone interface in demineralized tissue

    DEFF Research Database (Denmark)

    Kirkeby, S; Vilmann, H

    1981-01-01

    A method for demonstration of activity for ATPase and various oxidative enzymes (succinic dehydrogenase, alpha-glycerophosphate dehydrogenase, and lactic dehydrogenase) in muscle/bone sections of fixed and demineralized tissue has been developed. It was found that it is possible to preserve considerable amounts of the above-mentioned enzymes in the muscle fibres at the muscle/bone interfaces. The best results were obtained after 20 min fixation and 2-3 weeks of storage in MgNa2EDTA-containing media. As the same technique previously has been used to describe patterns of resorption and deposition...

  6. A volcanic event forecasting model for multiple tephra records, demonstrated on Mt. Taranaki, New Zealand

    Science.gov (United States)

    Damaschke, Magret; Cronin, Shane J.; Bebbington, Mark S.

    2018-01-01

    Robust time-varying volcanic hazard assessments are difficult to develop, because they depend upon having a complete and extensive eruptive activity record. Missing events in eruption records are endemic, due to poor preservation or erosion of tephra and other volcanic deposits. Even with many stratigraphic studies, underestimation or overestimation of eruption numbers is possible due to mis-matching tephras with similar chemical compositions or problematic age models. It is also common to have gaps in event coverage due to sedimentary records not being available in all directions from the volcano, especially downwind. Here, we examine the sensitivity of probabilistic hazard estimates using a suite of four new and two existing high-resolution tephra records located around Mt. Taranaki, New Zealand. Previous estimates were made using only single, or two correlated, tephra records. In this study, tephra data from six individual sites in lake and peat bogs covering an arc of 120° downwind of the volcano provided an excellent temporal high-resolution event record. The new data confirm a previously identified semi-regular pattern of variable eruption frequency at Mt. Taranaki. Eruption intervals exhibit a bimodal distribution, with eruptions being an average of 65 years apart, and in 2% of cases, centuries separate eruptions. The long intervals are less common than seen in earlier studies, but they have not disappeared with the inclusion of our comprehensive new dataset. Hence, the latest long interval of quiescence, since AD 1800, is unusual, but not out of character with the volcano. The new data also suggest that one of the tephra records (Lake Rotokare) used in earlier work had an old carbon effect on age determinations. This shifted ages of the affected tephras so that they were not correlated to other sites, leading to an artificially high eruption frequency in the previous combined record. New modelled time-varying frequency estimates suggest a 33

  7. A demonstration of the improved efficiency of the canonical coordinates method using nonlinear combined heat and power economic dispatch problems

    Science.gov (United States)

    Chang, Hung-Chieh; Lin, Pei-Chun

    2014-02-01

    Economic dispatch is the short-term determination of the optimal output from a number of electricity generation facilities to meet the system load at the lowest possible cost. As such, it represents one of the main optimization problems in the operation of electrical power systems. This article presents techniques to substantially improve the efficiency of the canonical coordinates method (CCM) algorithm when applied to nonlinear combined heat and power economic dispatch (CHPED) problems. The improvement is to eliminate the need to solve a system of nonlinear differential equations, which appears in the line search process of the CCM algorithm. The modified algorithm was tested and the analytical solution was verified using nonlinear CHPED optimization problems, thereby demonstrating the effectiveness of the algorithm. The CCM methods proved numerically stable and, in the case of nonlinear programs, produced solutions with unprecedented accuracy within a reasonable time.
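
    For readers unfamiliar with the underlying optimization, the sketch below solves a plain three-unit economic dispatch (quadratic fuel costs, one power-balance constraint) with a general-purpose solver; the unit data are invented, and this is not the CCM algorithm or the combined heat and power formulation studied in the article.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical three-unit dispatch with quadratic fuel costs
# a + b*P + c*P^2 ($/h); demand and limits are illustrative.
a = np.array([100.0, 120.0, 90.0])
b = np.array([20.0, 18.0, 22.0])
c = np.array([0.04, 0.06, 0.05])
p_min = np.array([10.0, 10.0, 10.0])
p_max = np.array([150.0, 120.0, 100.0])
demand = 250.0

cost = lambda p: np.sum(a + b * p + c * p**2)

res = minimize(
    cost,
    x0=np.full(3, demand / 3),                  # feasible starting point
    bounds=list(zip(p_min, p_max)),             # unit output limits
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - demand}],
)
print("dispatch (MW):", np.round(res.x, 1), " cost ($/h):", round(res.fun, 1))
```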

  8. The time has come for new models in febrile neutropenia: a practical demonstration of the inadequacy of the MASCC score.

    Science.gov (United States)

    Carmona-Bayonas, A; Jiménez-Fonseca, P; Virizuela Echaburu, J; Sánchez Cánovas, M; Ayala de la Peña, F

    2017-09-01

    Since its publication more than 15 years ago, the MASCC score has been internationally validated numerous times and recommended by most clinical practice guidelines for the management of febrile neutropenia (FN) around the world. We have used an empirical, data-supported simulated scenario to demonstrate that, despite this, the MASCC score is impractical as a basis for decision-making. A detailed analysis of the reasons supporting the clinical irrelevance of this model is performed. First, seven of its eight variables are "innocent bystanders" that contribute little to selecting low-risk candidates for ambulatory management. Second, the training series was hardly representative of outpatients with solid tumors and low-risk FN. Finally, the simultaneous inclusion of key variables in both the model and the outcome explains its successful validation in various series of patients. Alternative methods of prognostic classification, such as the Clinical Index of Stable Febrile Neutropenia, have been specifically validated for patients with solid tumors and should replace the MASCC model in situations of clinical uncertainty.

  9. The demonstration of a nonlinear analytic model for the strain field induced by thermal copper filled TSVs (through silicon vias)

    Directory of Open Access Journals (Sweden)

    M. H. Liao

    2013-08-01

    Full Text Available The thermo-elastic strain is induced by through silicon vias (TSV) due to the difference in thermal expansion coefficients between copper (∼18 ppm/°C) and silicon (∼2.8 ppm/°C) when the structure is exposed to a thermal ramp budget in the three-dimensional integrated circuit (3DIC) process. These thermal expansion stresses are high enough to introduce delamination at the interfaces between the copper, silicon, and isolating dielectric. A compact analytic model for the strain field induced by different layouts of thermal copper filled TSVs based on the linear superposition principle is found to have large errors due to the strong stress interaction between TSVs. In this work, a nonlinear stress analytic model for different TSV layouts is demonstrated using the finite element method and analysis of Mohr's circle. The stress characteristics are also measured by the atomic force microscope–Raman technique with nanometer-level spatial resolution. The change in electron mobility obtained with this nonlinear stress model, which accounts for the strong interactions between TSVs, is ∼2–6% smaller than that obtained with the linear stress superposition principle alone.

  10. Coherence method of identifying signal noise model

    International Nuclear Information System (INIS)

    Vavrin, J.

    1981-01-01

    The noise analysis method is discussed as a means of identifying perturbance models and their parameters through stochastic analysis of the noise of variables measured on a reactor. The correlation analysis is made in the frequency domain using coherence analysis methods. In identifying an actual specific perturbance, its model should be determined and recognized within a compound model of the perturbance system using the results of observation. The determination of the optimum estimate of the perturbance-system model is based on estimates of the related spectral densities, which are determined from the spectral density matrix of the measured variables. Partial and multiple coherence, partial transfers, and the power spectral densities of the input and output variables of the noise model are determined from the related spectral densities. The possibilities of applying the coherence identification methods were tested on a simple case of a simulated stochastic system. Good agreement was found between the initial analytic frequency filters and the identified transfers. (B.S.)
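
    A minimal sketch of the coherence idea: two measured variables share a common 5 Hz disturbance buried in independent noise, and the magnitude-squared coherence picks out the frequency band where a linear relation exists. The signal parameters are invented for illustration.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
fs = 100.0                       # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)

# Hypothetical "perturbance": a 5 Hz disturbance driving both channels,
# buried in independent measurement noise.
drive = np.sin(2 * np.pi * 5 * t)
x = drive + 0.8 * rng.normal(size=t.size)         # input variable
y = 0.5 * drive + 0.8 * rng.normal(size=t.size)   # output variable

# Magnitude-squared coherence: near 1 where a linear relation exists.
f, cxy = signal.coherence(x, y, fs=fs, nperseg=1024)
print(f"coherence at ~5 Hz: {cxy[np.argmin(np.abs(f - 5.0))]:.2f}")
```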

  11. Model Uncertainty Quantification Methods In Data Assimilation

    Science.gov (United States)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging; from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high-dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real-world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.
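
    As a concrete, if simplistic, illustration of feeding assumed model-error statistics into an assimilation scheme, the sketch below runs a stochastic ensemble Kalman filter on a scalar AR(1) system with additive Gaussian model error; all parameters are invented, and the methods described in the abstract address precisely the situations (partial observation, unknown or non-Gaussian errors) that this toy setup assumes away.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal stochastic ensemble Kalman filter for a scalar AR(1) "model",
# with additive noise standing in for (poorly known) model error.
n_ens, n_steps = 50, 100
phi, q_true, r = 0.9, 0.5, 0.4     # dynamics, true model-error var, obs var
q_assumed = 0.5                    # model-error variance assumed by the filter

truth, ens = 0.0, rng.normal(0, 1, n_ens)
for _ in range(n_steps):
    truth = phi * truth + rng.normal(0, np.sqrt(q_true))
    obs = truth + rng.normal(0, np.sqrt(r))

    # Forecast: propagate each member and add sampled model error.
    ens = phi * ens + rng.normal(0, np.sqrt(q_assumed), n_ens)

    # Analysis: Kalman update with perturbed observations.
    var_f = ens.var(ddof=1)
    gain = var_f / (var_f + r)
    ens = ens + gain * (obs + rng.normal(0, np.sqrt(r), n_ens) - ens)

print(f"final analysis mean {ens.mean():+.2f} vs truth {truth:+.2f}")
```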

  12. Engineering Model Propellant Feed System Development for an Iodine Hall Thruster Demonstration Mission

    Science.gov (United States)

    Polzin, Kurt A.

    2016-01-01

    CubeSats are relatively new spacecraft platforms that are typically deployed from a launch vehicle as a secondary payload, providing low-cost access to space for a wide range of end-users. These satellites are comprised of building blocks having dimensions of 10 × 10 × 10 cm and a mass of 1.33 kg (a 1-U size). While providing low-cost access to space, a major operational limitation is the lack of a propulsion system that can fit within a CubeSat and is capable of executing high Δv maneuvers. This makes it difficult to use CubeSats on missions requiring certain types of maneuvers (i.e., formation flying, spacecraft rendezvous). Recently, work has been performed investigating the use of iodine as a propellant for Hall-effect thrusters (HETs) [2] that could subsequently be used to provide a high-specific-impulse path to CubeSat propulsion [3, 4]. Iodine stores as a dense solid at very low pressures, making it acceptable as a propellant on a secondary payload. It has an exceptionally high ρIsp (density times specific impulse), making it an enabling technology for small satellite near-term applications and providing the potential for systems-level advantages over mid-term high-power electric propulsion options. Iodine flow can also be thermally regulated, subliming at relatively low temperature. This paper describes an engineering model propellant feed system for iSAT (see Fig. 1). The feed system is based around an iodine propellant reservoir and two proportional flow control valves (PFCVs) that meter the iodine flow to the cathode and anode. The flow is split upstream of the PFCVs so that both components can be fed from a common reservoir. Testing of the reservoir is reported to demonstrate that the design is capable of delivering the required propellant flow rates to operate the thruster. The tubing and reservoir are fabricated from Hastelloy to resist corrosion by the heated gaseous iodine propellant. The reservoir, tubing, and PFCVs are heated to ensure the sublimed propellant will not re

  13. 75 FR 14582 - Office of Special Education and Rehabilitative Services-Special Demonstration Programs-Model...

    Science.gov (United States)

    2010-03-26

    ... DEPARTMENT OF EDUCATION Office of Special Education and Rehabilitative Services--Special... of Special Education and Rehabilitative Services, Department of Education. ACTION: Notice of proposed... for Special Education and Rehabilitative Services proposes a priority under the Special Demonstration...

  14. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions.

  15. Design of demand side response model in energy internet demonstration park

    Science.gov (United States)

    Zhang, Q.; Liu, D. N.

    2017-08-01

    The implementation of demand side response can bring a lot of benefits to the power system, users and society, but there are still many problems in the actual operation. Firstly, this paper analyses the current situation and problems of demand side response. On this basis, this paper analyses the advantages of implementing demand side response in the energy Internet demonstration park. Finally, the paper designs three kinds of feasible demand side response modes in the energy Internet demonstration park.

  16. PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Robin Ivey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Balestra, Paolo [Univ. of Rome (Italy); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-05-01

    A collaborative effort between the Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL), as part of the Civil Nuclear Energy Working Group, is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, was used to model the transient. The focus of this report is to summarize the changes made to the PHISICS/RELAP5-3D code to implement, for the first time, an adaptive time-step methodology, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available, based on flux or power convergence criteria, that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results, as well as the University of Rome subcontractor report documenting the adaptive time-step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested, using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required, without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8-group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code, which completed the same transient 3-8 times slower than real-time. A few combinations of the available methodologies and tolerance settings did, however, result in unacceptably high errors or insignificant gains in simulation time. The study concludes with recommendations on which methods to use for this HTTR model. An important caveat is that these findings
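
    A generic adaptive time-step controller is sketched below to illustrate the accept/reject-and-grow logic; the convergence criterion, forward-Euler integrator, and tolerances are illustrative assumptions and do not represent the flux/power criteria actually implemented in PHISICS/RELAP5-3D.

```python
def adaptive_steps(f, y0, t_end, dt0, rel_tol=1e-3, dt_max=10.0):
    """Generic adaptive time stepping: accept a step while the relative
    change stays below rel_tol, halve and retry otherwise, and grow dt
    after comfortable steps. Illustrative only; not the actual
    PHISICS/RELAP5-3D convergence criteria."""
    t, y, dt = 0.0, y0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)                 # do not overshoot the end
        y_new = y + dt * f(t, y)                # forward Euler stand-in
        change = abs(y_new - y) / max(abs(y), 1e-12)
        if change > rel_tol and dt > 1e-9:
            dt *= 0.5                           # reject and retry smaller
            continue
        t, y = t + dt, y_new
        if change < 0.5 * rel_tol:
            dt = min(2.0 * dt, dt_max)          # grow after an easy step
    return y

# Decaying power transient y' = -0.05*y over 20 "hours".
print(adaptive_steps(lambda t, y: -0.05 * y, 100.0, 20.0, 0.1))
```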

  17. Polymeric Materials Models in the Warrior Injury Assessment Manikin (WIAMan) Anthropomorphic Test Device (ATD) Tech Demonstrator

    Science.gov (United States)

    2017-01-01

    analytical model currently used by military vehicle analysts has been continuously updated to address the model's inherent deficiencies and make the... model is a hyperelastic polymer model based upon statistical mechanics and the finite extensibility of a polymer chain. Its rheological... (ARL-TR-7927, US Army Research Laboratory, January 2017)

  18. Nonuniform grid implicit spatial finite difference method for acoustic wave modeling in tilted transversely isotropic media

    KAUST Repository

    Chu, Chunlei; Stoffa, Paul L.

    2012-01-01

    sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced
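
    The nonuniform-grid idea can be illustrated with an explicit three-point stencil (the paper itself concerns implicit spatial operators): the second-derivative weights below follow directly from Taylor expansions with unequal spacings h1 and h2, and the grid and test function are invented for illustration.

```python
import numpy as np

# Second derivative on a vertically nonuniform grid using the standard
# three-point stencil; the refinement near z=0 mimics finer sampling of
# a shallow model region (values illustrative).
z = np.unique(np.concatenate([np.linspace(0, 1, 40), np.linspace(0, 0.2, 40)]))
f = np.sin(2 * np.pi * z)

d2f = np.zeros_like(f)
for i in range(1, len(z) - 1):
    h1, h2 = z[i] - z[i - 1], z[i + 1] - z[i]
    # Derived by eliminating f' from the two one-sided Taylor expansions.
    d2f[i] = 2 * (h1 * f[i + 1] - (h1 + h2) * f[i] + h2 * f[i - 1]) / (
        h1 * h2 * (h1 + h2)
    )

exact = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * z)
print(f"max interior error: {np.abs(d2f - exact)[1:-1].max():.3e}")
```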

  19. Application of the simplex method of linear programming model to ...

    African Journals Online (AJOL)

    This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It equally elucidated the effect that variation in the optimal result obtained from the linear programming model will have on any given firm. It was demonstrated ...
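
    As a small worked illustration of the technique (with invented figures, not Saclux Paint Company's data), the sketch below maximizes a two-product profit subject to two resource constraints using a standard LP solver; simplex-style methods are what such solvers implement under the hood.

```python
from scipy.optimize import linprog

# Hypothetical paint-mix profit maximization (figures invented):
# maximize 5*x1 + 4*x2 subject to resource limits.
# linprog minimizes, so the profit coefficients are negated.
res = linprog(
    c=[-5.0, -4.0],
    A_ub=[[6.0, 4.0],   # raw material (units per batch)
          [1.0, 2.0]],  # labour (hours per batch)
    b_ub=[24.0, 6.0],
    bounds=[(0, None), (0, None)],
)
print("batches:", res.x.round(2), " max profit:", round(-res.fun, 2))
```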

  20. X-231B technology demonstration for in situ treatment of contaminated soil: Contaminant characterization and three dimensional spatial modeling

    International Nuclear Information System (INIS)

    West, O.R.; Siegrist, R.L.; Mitchell, T.J.; Pickering, D.A.; Muhr, C.A.; Greene, D.W.; Jenkins, R.A.

    1993-11-01

    Fine-textured soils and sediments contaminated by trichloroethylene (TCE) and other chlorinated organics present a serious environmental restoration challenge at US Department of Energy (DOE) sites. DOE and Martin Marietta Energy Systems, Inc. initiated a research and demonstration project at Oak Ridge National Laboratory. The goal of the project was to demonstrate a process for closure and environmental restoration of the X-231B Solid Waste Management Unit at the DOE Portsmouth Gaseous Diffusion Plant. The X-231B Unit was used from 1976 to 1983 as a land disposal site for waste oils and solvents. Silt and clay deposits beneath the unit were contaminated with volatile organic compounds and low levels of radioactive substances. The shallow groundwater was also contaminated, and some contaminants were at levels well above drinking water standards. This document begins with a summary of the subsurface physical and contaminant characteristics obtained from investigative studies conducted at the X-231B Unit prior to January 1992 (Sect. 2). This is then followed by a description of the sample collection and analysis methods used during the baseline sampling conducted in January 1992 (Sect. 3). The results of this sampling event were used to develop spatial models for VOC contaminant distribution within the X-231B Unit

  1. A reliable and controllable graphene doping method compatible with current CMOS technology and the demonstration of its device applications

    Science.gov (United States)

    Kim, Seonyeong; Shin, Somyeong; Kim, Taekwang; Du, Hyewon; Song, Minho; Kim, Ki Soo; Cho, Seungmin; Lee, Sang Wook; Seo, Sunae

    2017-04-01

    The modulation of charge carrier concentration allows us to tune the Fermi level (EF) of graphene, thanks to the low electronic density of states near EF. Metal oxide thin films, introduced together with a modified transfer process, can precisely control the charge carrier concentration in graphene. Self-encapsulation provides a solution to the stability issues of metal oxide hole dopants. We have fabricated systematic graphene p-n junction structures using doping methods compatible with current semiconductor process technology, suitable for electronic and photonic applications. We have demonstrated the anticipated transport properties in the designed heterojunction devices with non-destructive doping methods. This mitigates the device architecture limitations imposed by previously known doping methods. Furthermore, we employed EF-modulated graphene source/drain (S/D) electrodes in a low-dimensional transition metal dichalcogenide field effect transistor (TMDFET). We have succeeded in realizing n-type, ambipolar, or p-type field effect transistors (FETs) by tuning only the graphene work function. In addition, the graphene/transition metal dichalcogenide (TMD) junction in both p- and n-type transistors reveals linear voltage dependence with enhanced contact resistance. We accomplished the complete conversion of p-/n-channel transistors with S/D tunable electrodes. EF modulation using metal oxides enables graphene to access state-of-the-art complementary metal-oxide-semiconductor (CMOS) technology.

  2. Modeling U-Shaped Exposure-Response Relationships for Agents that Demonstrate Toxicity Due to Both Excess and Deficiency.

    Science.gov (United States)

    Milton, Brittany; Farrell, Patrick J; Birkett, Nicholas; Krewski, Daniel

    2017-02-01

    Essential elements such as copper and manganese may demonstrate U-shaped exposure-response relationships due to toxic responses occurring as a result of both excess and deficiency. Previous work on a copper toxicity database employed CatReg, a software program for categorical regression developed by the U.S. Environmental Protection Agency, to model copper excess and deficiency exposure-response relationships separately. This analysis involved the use of a severity scoring system to place diverse toxic responses on a common severity scale, thereby allowing their inclusion in the same CatReg model. In this article, we present methods for simultaneously fitting excess and deficiency data in the form of a single U-shaped exposure-response curve, the minimum of which occurs at the exposure level that minimizes the probability of an adverse outcome due to either excess or deficiency (or both). We also present a closed-form expression for the point at which the exposure-response curves for excess and deficiency cross, corresponding to the exposure level at which the risk of an adverse outcome due to excess is equal to that for deficiency. The application of these methods is illustrated using the same copper toxicity database noted above. The use of these methods permits the analysis of all available exposure-response data from multiple studies expressing multiple endpoints due to both excess and deficiency. The exposure level corresponding to the minimum of this U-shaped curve, and the confidence limits around this exposure level, may be useful in establishing an acceptable range of exposures that minimize the overall risk associated with the agent of interest. © 2016 Society for Risk Analysis.
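
    A minimal sketch of the U-shaped construction: deficiency and excess risks are modeled as opposing logistic curves in log dose, combined as the probability of either adverse outcome, and the closed-form crossing point falls where the two logits coincide. All parameter values below are invented, not estimates from the copper database.

```python
import numpy as np

# Illustrative U-shaped exposure-response built from two logistic curves
# in log10(dose): deficiency risk falls, excess risk rises (parameters
# invented for illustration).
a_def, b_def = 1.0, 3.0   # deficiency: logit = a_def - b_def * x
a_exc, b_exc = -8.0, 2.5  # excess:     logit = a_exc + b_exc * x

expit = lambda z: 1.0 / (1.0 + np.exp(-z))
x = np.linspace(-1, 4, 2001)           # log10 dose
p_def = expit(a_def - b_def * x)
p_exc = expit(a_exc + b_exc * x)
p_any = p_def + p_exc - p_def * p_exc  # risk of either adverse outcome

# Closed-form crossing point: the two risks are equal where the logits
# coincide, i.e. a_def - b_def*x = a_exc + b_exc*x.
x_cross = (a_def - a_exc) / (b_def + b_exc)
x_min = x[np.argmin(p_any)]
print(f"risks cross at log10(dose) = {x_cross:.2f}")
print(f"combined risk minimized at log10(dose) = {x_min:.2f}")
```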

  3. Review of various dynamic modeling methods and development of an intuitive modeling method for dynamic systems

    International Nuclear Information System (INIS)

    Shin, Seung Ki; Seong, Poong Hyun

    2008-01-01

    Conventional static reliability analysis methods are inadequate for modeling dynamic interactions between components of a system. Various techniques such as dynamic fault tree, dynamic Bayesian networks, and dynamic reliability block diagrams have been proposed for modeling dynamic systems based on improvement of the conventional modeling methods. In this paper, we review these methods briefly and introduce dynamic nodes to the existing Reliability Graph with General Gates (RGGG) as an intuitive modeling method to model dynamic systems. For a quantitative analysis, we use a discrete-time method to convert an RGGG to an equivalent Bayesian network and develop a software tool for generation of probability tables

  4. Demonstration of Advanced EMI Models for Live-Site UXO Discrimination at Waikoloa, Hawaii

    Science.gov (United States)

    2015-12-01

    Author: Dr. Fridon Shubitidze (Thayer...). ...UXO demonstration study at the former Waikoloa Maneuver Area (WMA) in Waikoloa, Hawaii, under ESTCP Munitions Response Project MR-201227.

  5. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...

  6. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

  7. Cache memory modelling method and system

    OpenAIRE

    Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis

    2011-01-01

    The invention relates to a method for modelling a data cache memory of a destination processor, in order to simulate the behaviour of said data cache memory during the execution of a software code on a platform comprising said destination processor. According to the invention, the simulation is performed on a native platform having a processor different from the destination processor comprising the aforementioned data cache memory to be modelled, said modelling being performed by means of the...

  8. A survey of real face modeling methods

    Science.gov (United States)

    Liu, Xiaoyue; Dai, Yugang; He, Xiangzhen; Wan, Fucheng

    2017-09-01

    Face modeling has always been a research challenge in computer graphics, as it involves the coordination of multiple facial organs. This article explains two kinds of face modeling methods, one data-driven and one based on parameter control; analyzes their content and background; summarizes their advantages and disadvantages; and concludes that the muscle model, being grounded in anatomical principles, has higher veracity and is easier to drive.

  9. A Simple Model to Demonstrate the Balance of Forces at Functional Residual Capacity

    Science.gov (United States)

    Kanthakumar, Praghalathan; Oommen, Vinay

    2012-01-01

    Numerous models have been constructed to aid teaching respiratory mechanics. A simple model using a syringe and a water-filled bottle has been described by Thomas Sherman to explain inspiration and expiration. The elastic recoil of the chest wall and lungs has been described using a coat hanger or by using rods and rubber bands. A more complex…

  10. A Functional Model of the Digital Extensor Mechanism: Demonstrating Biomechanics with Hair Bands

    Science.gov (United States)

    Cloud, Beth A.; Youdas, James W.; Hellyer, Nathan J.; Krause, David A.

    2010-01-01

    The action of muscles about joints can be explained through analysis of their spatial relationship. A functional model of these relationships can be valuable in learning and understanding the muscular action about a joint. A model can be particularly helpful when examining complex actions across multiple joints such as in the digital extensor…

  11. Landscape-based population viability models demonstrate importance of strategic conservation planning for birds

    Science.gov (United States)

    Thomas W. Bonnot; Frank R. Thompson; Joshua J. Millspaugh; D. Todd. Jones-Farland

    2013-01-01

    Efforts to conserve regional biodiversity in the face of global climate change, habitat loss and fragmentation will depend on approaches that consider population processes at multiple scales. By combining habitat and demographic modeling, landscape-based population viability models effectively relate small-scale habitat and landscape patterns to regional population...

  12. An investigation into electromagnetic force models: differences in global and local effects demonstrated by selected problems

    Science.gov (United States)

    Reich, Felix A.; Rickert, Wilhelm; Müller, Wolfgang H.

    2018-03-01

    This study investigates the implications of various electromagnetic force models in macroscopic situations. There is an ongoing academic discussion which model is "correct," i.e., generally applicable. Often, gedankenexperiments with light waves or photons are used in order to motivate certain models. In this work, three problems with bodies at the macroscopic scale are used for computing theoretical model-dependent predictions. Two aspects are considered, total forces between bodies and local deformations. By comparing with experimental data, insight is gained regarding the applicability of the models. First, the total force between two cylindrical magnets is computed. Then a spherical magnetostriction problem is considered to show different deformation predictions. As a third example focusing on local deformations, a droplet of silicone oil in castor oil is considered, placed in a homogeneous electric field. By using experimental data, some conclusions are drawn and further work is motivated.

  13. Human In Silico Drug Trials Demonstrate Higher Accuracy than Animal Models in Predicting Clinical Pro-Arrhythmic Cardiotoxicity

    Directory of Open Access Journals (Sweden)

    Elisa Passini

    2017-09-01

    Full Text Available Early prediction of cardiotoxicity is critical for drug development. Current animal models raise ethical and translational questions, and have limited accuracy in clinical risk prediction. Human-based computer models constitute a fast, cheap and potentially effective alternative to experimental assays, also facilitating translation to human. Key challenges include consideration of inter-cellular variability in drug responses and integration of computational and experimental methods in safety pharmacology. Our aim is to evaluate the ability of in silico drug trials in populations of human action potential (AP) models to predict clinical risk of drug-induced arrhythmias based on ion channel information, and to compare simulation results against experimental assays commonly used for drug testing. A control population of 1,213 human ventricular AP models in agreement with experimental recordings was constructed. In silico drug trials were performed for 62 reference compounds at multiple concentrations, using pore-block drug models (IC50/Hill coefficient). Drug-induced changes in AP biomarkers were quantified, together with occurrence of repolarization/depolarization abnormalities. Simulation results were used to predict clinical risk based on reports of Torsade de Pointes arrhythmias, and further evaluated in a subset of compounds through comparison with electrocardiograms from rabbit wedge preparations and Ca2+-transient recordings in human induced pluripotent stem cell-derived cardiomyocytes (hiPS-CMs). Drug-induced changes in silico vary in magnitude depending on the specific ionic profile of each model in the population, thus allowing the identification of cell sub-populations at higher risk of developing abnormal AP phenotypes. Models with low repolarization reserve (increased Ca2+/late Na+ currents and Na+/Ca2+-exchanger, reduced Na+/K+-pump) are highly vulnerable to drug-induced repolarization abnormalities, while those with reduced inward current density

  14. Human In Silico Drug Trials Demonstrate Higher Accuracy than Animal Models in Predicting Clinical Pro-Arrhythmic Cardiotoxicity.

    Science.gov (United States)

    Passini, Elisa; Britton, Oliver J; Lu, Hua Rong; Rohrbacher, Jutta; Hermans, An N; Gallacher, David J; Greig, Robert J H; Bueno-Orovio, Alfonso; Rodriguez, Blanca

    2017-01-01

    Early prediction of cardiotoxicity is critical for drug development. Current animal models raise ethical and translational questions, and have limited accuracy in clinical risk prediction. Human-based computer models constitute a fast, cheap and potentially effective alternative to experimental assays, also facilitating translation to human. Key challenges include consideration of inter-cellular variability in drug responses and integration of computational and experimental methods in safety pharmacology. Our aim is to evaluate the ability of in silico drug trials in populations of human action potential (AP) models to predict clinical risk of drug-induced arrhythmias based on ion channel information, and to compare simulation results against experimental assays commonly used for drug testing. A control population of 1,213 human ventricular AP models in agreement with experimental recordings was constructed. In silico drug trials were performed for 62 reference compounds at multiple concentrations, using pore-block drug models (IC50/Hill coefficient). Drug-induced changes in AP biomarkers were quantified, together with occurrence of repolarization/depolarization abnormalities. Simulation results were used to predict clinical risk based on reports of Torsade de Pointes arrhythmias, and further evaluated in a subset of compounds through comparison with electrocardiograms from rabbit wedge preparations and Ca2+-transient recordings in human induced pluripotent stem cell-derived cardiomyocytes (hiPS-CMs). Drug-induced changes in silico vary in magnitude depending on the specific ionic profile of each model in the population, thus allowing the identification of cell sub-populations at higher risk of developing abnormal AP phenotypes. Models with low repolarization reserve (increased Ca2+/late Na+ currents and Na+/Ca2+-exchanger, reduced Na+/K+-pump) are highly vulnerable to drug-induced repolarization abnormalities, while those with reduced inward current density

  15. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particularly understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
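
    The variance segregation at one node of a BMA tree reduces to the law of total variance, as the sketch below shows for a single uncertain model component with invented probabilities and predictions; full HBMA applies the same decomposition recursively across the hierarchy of uncertain components.

```python
import numpy as np

# Toy BMA variance decomposition over one uncertain model component
# (e.g., two candidate geological architectures); numbers are invented.
p = np.array([0.7, 0.3])        # posterior model probabilities
mean = np.array([12.0, 15.0])   # each model's predicted head (m)
var = np.array([1.0, 2.5])      # each model's within-model variance

bma_mean = np.sum(p * mean)
within = np.sum(p * var)                      # E[Var | model]
between = np.sum(p * (mean - bma_mean) ** 2)  # Var[E | model]
total = within + between                      # law of total variance

print(f"BMA mean = {bma_mean:.2f}")
print(f"within-model var = {within:.2f}, between-model var = {between:.2f}")
print(f"total variance = {total:.2f}")
```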

  16. Demonstration of Linked UAV Observations and Atmospheric Model Predictions in Chem/Bio Attack Response

    National Research Council Canada - National Science Library

    Davidson, Kenneth

    2003-01-01

    ... meteorological data, and the means for linking the UAV data to real-time dispersion prediction. The primary modeling effort focused on an adaptation of the 'Wind On Constant Streamline Surfaces...

  17. Accurate Modeling Method for Cu Interconnect

    Science.gov (United States)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section, in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for the extraction of the model parameters, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it was verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  18. Modelled female sale options demonstrate improved profitability in northern beef herds.

    Science.gov (United States)

    Niethe, G E; Holmes, W E

    2008-12-01

    To examine the impact of improving the average value of cows sold, the risk of decreasing the number weaned, and total sales on the profitability of northern Australian cattle breeding properties. Gather, model and interpret breeder herd performances and production parameters on properties from six beef-producing regions in northern Australia. Production parameters, prices, costs and herd structure were entered into a herd simulation model for six northern Australian breeding properties that spay females to enhance their marketing options. After the data were validated by management, alternative management strategies were modelled using current market prices and most likely herd outcomes. The model predicted a close relationship between the average sale value of cows, the total herd sales and the gross margin/adult equivalent. Keeping breeders out of the herd to fatten generally improves their sale value, and this can be cost-effective, despite the lower number of progeny produced and the subsequent reduction in total herd sales. Furthermore, if the price of culled cows exceeds the price of culled heifers, provided there are sufficient replacement pregnant heifers available to maintain the breeder herd nucleus, substantial gains in profitability can be obtained by decreasing the age at which cows are culled from the herd. Generalised recommendations on improving reproductive performance are not necessarily the most cost-effective strategy to improve breeder herd profitability. Judicious use of simulation models is essential to help develop the best turnoff strategies for females and to improve station profitability.

  19. Solar wind stream evolution at large heliocentric distances - Experimental demonstration and the test of a model

    Science.gov (United States)

    Gosling, J. T.; Hundhausen, A. J.; Bame, S. J.

    1976-01-01

    A stream propagation model which neglects all dissipation effects except those occurring at shock interfaces, was used to compare Pioneer-10 solar wind speed observations, during the time when Pioneer 10, the earth, and the sun were coaligned, with near-earth Imp-7 observations of the solar wind structure, and with the theoretical predictions of the solar wind structure at Pioneer 10 derived from the Imp-7 measurements, using the model. The comparison provides a graphic illustration of the phenomenon of stream steepening in the solar wind with the attendant formation of forward-reverse shock pairs and the gradual decay of stream amplitudes with increasing heliocentric distance. The comparison also provides a qualitative test of the stream propagation model.

  20. Applicability and perspectives of natural analogues as ''demonstration'' of PAGIS models

    International Nuclear Information System (INIS)

    Girardi, F.; D'Alessandro, M.

    1989-01-01

    In the PAGIS Project the safety of the geological disposal system is based on the multibarrier concept, which is reflected in the calculation approach, where for all options the behaviour of each barrier is modelled. In the present scheme, all the models used for the performance assessment of the disposal options have been considered as a chain of codes describing the behaviour of the different barriers. For each of these, one or more possibilities of verification by natural analogues are presented. A set of tables has been prepared which shows the sequence of phenomena considered for each disposal option. A review of the natural analogues so far studied or simply recognized allowed a check to be made on the possibility of verifying the barrier models against the ''long term experiments'' offered by the geological evidence

  1. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  2. A Laboratory Exercise Using a Physical Model for Demonstrating Countercurrent Heat Exchange

    Science.gov (United States)

    Loudon, Catherine; Davis-Berg, Elizabeth C.; Botz, Jason T.

    2012-01-01

    A physical model was used in a laboratory exercise to teach students about countercurrent exchange mechanisms. Countercurrent exchange is the transport of heat or chemicals between fluids moving in opposite directions separated by a permeable barrier (such as blood within adjacent blood vessels flowing in opposite directions). Greater exchange of…

  3. Underwater wireless optical communications: From system-level demonstrations to channel modelling

    KAUST Repository

    Oubei, Hassan M.

    2018-01-09

    In this paper, we discuss recent experimental advances in underwater wireless optical communications (UWOC) over various underwater channel water types using different modulation schemes, as well as modelling and describing the statistical properties of turbulence-induced fading in underwater wireless optical channels using measurements of laser-beam intensity fluctuations.

  4. Underestimation of nuclear fuel burnup – theory, demonstration and solution in numerical models

    Directory of Open Access Journals (Sweden)

    Gajda Paweł

    2016-01-01

    Full Text Available Monte Carlo methodology provides the reference statistical solution of neutron transport criticality problems of nuclear systems. Estimated reaction rates can be applied as an input to the Bateman equations that govern the isotopic evolution of reactor materials. Because the statistical solution of the Boltzmann equation is computationally expensive, it is in practice applied over time steps of limited length. In this paper we show that the simple staircase step model leads to underprediction of numerical fuel burnup (Fissions per Initial Metal Atom – FIMA). Theoretical considerations indicate that this error is inversely proportional to the length of the time step and originates from the variation of heating per source neutron. The bias can be diminished by application of a predictor-corrector step model. A set of burnup simulations with various step lengths and coupling schemes has been performed. SERPENT code version 1.17 was applied to the model of a typical fuel assembly from a Pressurized Water Reactor. In the reference case FIMA reaches 6.24%, which is equivalent to about 60 GWD/tHM of industrial burnup. Discrepancies up to 1% were observed depending on the time step model, and the theoretical predictions are consistent with the numerical results. The conclusions presented in this paper are important for research and development concerning the nuclear fuel cycle, also in the context of Gen4 systems.
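
    The staircase bias is easy to reproduce in a toy setting. The sketch below depletes a single nuclide at constant power (so that the exact burnup is linear in time) and compares a staircase scheme, which freezes the reaction rate at the beginning of each step, against a predictor-corrector scheme; the constants are illustrative, not the SERPENT PWR assembly case.

```python
import numpy as np

# Toy single-nuclide depletion at constant power: dN/dt = -sigma*phi*N
# with phi proportional to 1/N (power normalization), so the exact
# burnup is linear in time. Numbers are illustrative only.
N0, C_sigma = 1.0, 0.02     # initial atoms; sigma*C (per unit time)
t_end, n_steps = 10.0, 4
dt = t_end / n_steps

def step_rate(N):            # sigma * phi(N) = C_sigma / N
    return C_sigma / N

# Staircase: reaction rate frozen at beginning-of-step for the whole step.
N = N0
for _ in range(n_steps):
    N *= np.exp(-step_rate(N) * dt)
fima_stair = 1 - N / N0

# Predictor-corrector: average beginning- and predicted end-of-step rates.
N = N0
for _ in range(n_steps):
    r0 = step_rate(N)
    N_pred = N * np.exp(-r0 * dt)          # predictor
    N *= np.exp(-0.5 * (r0 + step_rate(N_pred)) * dt)  # corrector
fima_pc = 1 - N / N0

fima_exact = C_sigma * t_end / N0
print(f"exact FIMA {fima_exact:.4f}  staircase {fima_stair:.4f}  "
      f"pred-corr {fima_pc:.4f}")
```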

  5. The Development and Demonstration of Multiple Regression Models for Operant Conditioning Questions.

    Science.gov (United States)

    Fanning, Fred; Newman, Isadore

    Based on the assumption that inferential statistics can make the operant conditioner more sensitive to possible significant relationships, regression models were developed to test the statistical significance of differences between the slopes and Y intercepts of the experimental and control group subjects. These results were then compared to the traditional operant…

  6. A Three Dimension Model to Demonstrate Head and Tail Fold Formation in Mammalian Embryos

    Science.gov (United States)

    Bressler, Robert S.

    1977-01-01

    Many students have difficulty visualizing the delineation of the embryonic body from the flat germ disc. An easily-constructed model is described that has been used successfully to convey the dynamics of embryological events at Mount Sinai School of Medicine. (LBH)

  7. Mixing Interviews and Rasch Modeling: Demonstrating a Procedure Used to Develop an Instrument That Measures Trust

    Science.gov (United States)

    David, Shannon L.; Hitchcock, John H.; Ragan, Brian; Brooks, Gordon; Starkey, Chad

    2018-01-01

    Developing psychometrically sound instruments can be difficult, especially if little is known about the constructs of interest. When constructs of interest are unclear, a mixed methods approach can be useful. Qualitative inquiry can be used to explore a construct's meaning in a way that informs item writing and allows the strengths of one analysis…

  8. An In Vitro Chicken Gut Model Demonstrates Transfer of a Multidrug Resistance Plasmid from Salmonella to Commensal Escherichia coli.

    Science.gov (United States)

    Card, Roderick M; Cawthraw, Shaun A; Nunez-Garcia, Javier; Ellis, Richard J; Kay, Gemma; Pallen, Mark J; Woodward, Martin J; Anjum, Muna F

    2017-07-18

    The chicken gastrointestinal tract is richly populated by commensal bacteria that fulfill various beneficial roles for the host, including helping to resist colonization by pathogens. It can also facilitate the conjugative transfer of multidrug resistance (MDR) plasmids between commensal and pathogenic bacteria, which is a significant public and animal health concern as it may affect our ability to treat bacterial infections. We used an in vitro chemostat system to approximate the chicken cecal microbiota, simulate colonization by an MDR Salmonella pathogen, and examine the dynamics of transfer of its MDR plasmid harboring several genes, including the extended-spectrum beta-lactamase gene blaCTX-M1. We also evaluated the impact of cefotaxime administration on plasmid transfer and microbial diversity. Bacterial community profiles obtained by culture-independent methods showed that Salmonella inoculation resulted in no significant changes to bacterial community alpha diversity and beta diversity, whereas administration of cefotaxime caused significant alterations to both measures of diversity, which largely recovered. MDR plasmid transfer from Salmonella to commensal Escherichia coli was demonstrated by PCR and whole-genome sequencing of isolates purified from agar plates containing cefotaxime. Transfer occurred to seven E. coli sequence types at high rates, even in the absence of cefotaxime, with resistant strains isolated within 3 days. Our chemostat system provides a good representation of bacterial interactions, including antibiotic resistance transfer, in vivo. It can be used as an ethical and relatively inexpensive approach to model dissemination of antibiotic resistance within the gut of any animal or human and to refine interventions that mitigate its spread before employing in vivo studies. IMPORTANCE The spread of antimicrobial resistance presents a grave threat to public health and animal health and is affecting our ability to respond to bacterial infections

  9. MATHEMATICAL MODEL OF THE RHEOLOGICAL BEHAVIOR OF VISCOPLASTIC FLUID, WHICH DEMONSTRATES THE EFFECT OF “SOLIDIFICATION”

    Directory of Open Access Journals (Sweden)

    V. N. Kolodezhnov

    2014-01-01

    Full Text Available Summary. The irregular behavior of some kinds of suspensions based on polymeric compositions and fine-dispersed fractions is characterized. In a simple, one-dimensional, shearing, viscometric flow such materials demonstrate the following mechanical behavior. There is no deformation if the shear stress does not exceed a certain critical value. If this critical value is exceeded, the flow begins. This behavior is well known and corresponds to rheological models of viscoplastic fluids. However, a further increase in the shear rate results in "solidification". A rheological model of such viscoplastic fluids, whose mechanical behavior demonstrates the "solidification" effect, is offered. This model contains four empirical parameters. The impact of the exponent on the dependence of the shear stress and effective viscosity on the shear rate in the rheological model is presented graphically. An extrapolation of the rheological model to three-dimensional flow is proposed.
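
    Since the abstract does not reproduce the constitutive law itself, the sketch below evaluates a hypothetical four-parameter form with a yield stress and a critical shear rate at which the stress diverges, mimicking the described "solidification"; the functional form and parameter values are assumptions for illustration only, not the model from the article.

```python
import numpy as np

def shear_stress(gamma, tau0=10.0, k=0.5, gamma_c=100.0, n=2.0):
    """Hypothetical four-parameter law with yield stress tau0 and a
    'solidification' shear rate gamma_c where the stress diverges.
    Illustrative form only; below tau0 the material does not flow."""
    gamma = np.asarray(gamma, dtype=float)
    return tau0 + k * gamma / (1.0 - (gamma / gamma_c) ** n)

rates = np.array([1.0, 10.0, 50.0, 90.0, 99.0])   # shear rates < gamma_c
for g, tau in zip(rates, shear_stress(rates)):
    print(f"shear rate {g:6.1f} 1/s -> stress {tau:9.2f} Pa, "
          f"eff. viscosity {tau / g:8.2f} Pa*s")
```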

  10. Global Optimization Ensemble Model for Classification Methods

    Science.gov (United States)

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382
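
    The ensemble idea itself (not the GMC algorithm, whose details the abstract does not specify) can be illustrated with a generic soft-voting combination of dissimilar base classifiers:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# A generic soft-voting ensemble over dissimilar base classifiers,
# shown only to illustrate the ensemble idea; it is not GMC itself.
X, y = load_breast_cancer(return_X_y=True)
members = [
    ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("nb", GaussianNB()),
    ("dt", DecisionTreeClassifier(random_state=0)),
]
ensemble = VotingClassifier(estimators=members, voting="soft")

for name, clf in members + [("ensemble", ensemble)]:
    print(f"{name:8s} mean CV accuracy: "
          f"{cross_val_score(clf, X, y, cv=5).mean():.3f}")
```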

  11. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity.

  12. Demonstration of clonable alloreactive host T cells in a primate model for bone marrow transplantation

    International Nuclear Information System (INIS)

    Reisner, Y.; Ben-Bassat, I.; Douer, D.; Kaploon, A.; Schwartz, E.; Ramot, B.

    1986-01-01

    The phenomenon of marrow rejection following supralethal radiochemotherapy was explained in the past mainly by non-T-cell mechanisms known to be resistant to high-dose irradiation. In the present study a low but significant number of radiochemoresistant-clonable T cells was found in the peripheral blood and spleen of Rhesus monkeys following the cytoreductive protocol used for treatment of leukemia patients prior to bone marrow transplantation. More than 95% of the clonable cells are concentrated in the spleen 5 days after transplant. The cells possess immune memory as demonstrated by the generation of alloreactive-specific cytotoxicity. The present findings suggest that host-versus-graft activity may be mediated by alloreactive T cells. It is hoped that elimination of such cells prior to bone marrow transplantation will increase the engraftment rate of HLA-nonidentical marrow in leukemia patients

  13. The Waswanipi Cree Model Forest: Demonstrating Aboriginal leadership in sustainable forest management

    Energy Technology Data Exchange (ETDEWEB)

    Jolly, A.

    1999-09-01

Experiences of the Waswanipi Cree community in being partners in sustainable forest management are discussed. The Waswanipi Cree Model Forest was designated as such in 1997. Since then, it has come to be seen as a forum for the community to express its needs, goals and objectives for the future, and as the first opportunity for the Cree community to exercise leadership and decision-making authority related to land management issues. The Waswanipi land is situated on the southernmost tip of eastern James Bay. It extends to some 35,000 sq km, divided into 52 family hunting territories, called traplines. Each trapline has a designated custodian, who is responsible for ensuring that wildlife is harvested in a sustainable manner. Community life is organized around the traplines, although families will sometimes temporarily relocate close to paid employment opportunities. Nevertheless, the purpose of employment is always to return to the bush, with sufficient materials and supplies to last the hunting and trapping season. Prior to the designation of the Model Forest, the major problems had been the rate and extent of forestry activities on Cree land by outside timber companies, the absence of opportunities for the Cree to have a meaningful role in decisions that impacted their future, and the difficulties of convincing government experts and forestry companies to allow the Cree to bring their experience-based knowledge to bear on forest resource management issues. The manner in which the new partnership resulting from the designation of the Model Forest is opening the way to better understanding, to mitigation of the negative effects of forestry operations on traplines, and to mediation of conflicts between trappers and forestry companies with timber licences on Waswanipi land is described as one of the major achievements of the Model Forest Program. The rate and extent of cutting continues to be a problem, however, there are signs of a growing understanding among the timber

  14. Unbiased proteomics analysis demonstrates significant variability in mucosal immune factor expression depending on the site and method of collection.

    Directory of Open Access Journals (Sweden)

    Kenzie M Birse

Full Text Available Female genital tract secretions are commonly sampled by lavage of the ectocervix and vaginal vault or via a sponge inserted into the endocervix for evaluating inflammation status and immune factors critical for HIV microbicide and vaccine studies. This study uses a proteomics approach to comprehensively compare the efficacy of these methods, which sample from different compartments of the female genital tract, for the collection of immune factors. Matching sponge and lavage samples were collected from 10 healthy women and were analyzed by tandem mass spectrometry. Data were analyzed by a combination of differential protein expression analysis, hierarchical clustering and pathway analysis. Of the 385 proteins identified, endocervical sponge samples collected nearly twice as many unique proteins as cervicovaginal lavage (111 vs. 61), with 55% of proteins common to both (213). Each method/site identified 73 unique proteins that have roles in host immunity according to their gene ontology. Sponge samples enriched for specific inflammation pathways, including acute phase response proteins (p = 3.37×10⁻²⁴) and LXR/RXR immune activation pathways (p = 8.82×10⁻²²), while the role of IL-17A in psoriasis pathway (p = 5.98×10⁻⁴) and the complement system pathway (p = 3.91×10⁻³) were enriched in lavage samples. Many host defense factors were differentially enriched (p<0.05) between sites, including known/potential antimicrobial factors (n = 21), S100 proteins (n = 9), and immune regulatory factors such as serpins (n = 7). Immunoglobulins (n = 6) were collected at comparable levels in abundance at each site, although 25% of those identified were unique to sponge samples. This study demonstrates significant differences in types and quantities of immune factors and inflammation pathways collected by each sampling technique. Therefore, clinical studies that measure mucosal immune activation or factors assessing HIV transmission should utilize

  15. Dynamic Modeling and Validation of a Biomass Hydrothermal Pretreatment Process - A Demonstration Scale Study

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail; Blanke, Mogens; Jakobsen, Jon Geest

    2015-01-01

Hydrothermal pretreatment of lignocellulosic biomass is a cost effective technology for second generation biorefineries. The process occurs in large horizontal and pressurized thermal reactors where the biomatrix is opened under the action of steam pressure and temperature to expose cellulose for the enzymatic hydrolysis process. Several by-products are also formed, which disturb and act as inhibitors downstream. The objective of this study is to formulate and validate a large scale hydrothermal pretreatment dynamic model based on mass and energy balances, together with a complex conversion mechanism and kinetics. The study includes a comprehensive sensitivity and uncertainty analysis, with parameter estimation from real data in the 178-185°C range. To highlight the application utility of the model, a state estimator for biomass composition is developed. The predictions capture well the dynamic trends.

  16. Runway exit designs for capacity improvement demonstrations. Phase 2: Computer model development

    Science.gov (United States)

    Trani, A. A.; Hobeika, A. G.; Kim, B. J.; Nunna, V.; Zhong, C.

    1992-01-01

The development is described of a computer simulation/optimization model to: (1) estimate the optimal locations of existing and proposed runway turnoffs; and (2) estimate the geometric design requirements associated with newly developed high speed turnoffs. The model described, named REDIM 2.0, represents a stand-alone application to be used by airport planners, designers, and researchers alike to estimate optimal turnoff locations. The main procedures implemented in the software package are described in detail, and possible applications are illustrated using six major runway scenarios. The main output of the computer program is the estimation of the weighted average runway occupancy time for a user-defined aircraft population. Also, the location and geometric characteristics of each turnoff are provided to the user.
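As a small illustration of the headline output, the sketch below computes a weighted-average runway occupancy time for an assumed aircraft mix; the class fractions and per-class occupancy times are invented for the example and are not REDIM data.

```python
# Weighted-average runway occupancy time (ROT) for an assumed aircraft mix.
fleet = {
    # aircraft class: (fraction of operations, ROT in seconds to best exit)
    "small": (0.30, 38.0),
    "large": (0.55, 47.0),
    "heavy": (0.15, 55.0),
}

assert abs(sum(frac for frac, _ in fleet.values()) - 1.0) < 1e-9

weighted_rot = sum(frac * rot for frac, rot in fleet.values())
print(f"weighted average ROT: {weighted_rot:.1f} s")   # -> 45.5 s
```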

  17. Demonstration of a computer model for residual radioactive material guidelines, RESRAD

    International Nuclear Information System (INIS)

    Yu, C.; Yuan, Y.C.; Zielen, A.J.; Wallo, A. III

    1989-01-01

A computer model was developed to calculate residual radioactive material guidelines for the US Department of Energy (DOE). This model, called RESRAD, can be run on IBM or IBM-compatible microcomputers. Seven potential exposure pathways from contaminated soil are analyzed, including external radiation exposure and internal radiation exposure from inhalation and food ingestion. The RESRAD code has been applied to several DOE sites to derive soil cleanup guidelines. The experience gained indicates that a comprehensive set of site-specific hydrogeologic and geochemical input parameters must be used for a realistic pathway analysis. The RESRAD code is a useful tool; it is easy to run and very user-friendly. 6 refs., 12 figs

  18. Modelling methods for milk intake measurements

    International Nuclear Information System (INIS)

    Coward, W.A.

    1999-01-01

    One component of the first Research Coordination Programme was a tutorial session on modelling in in-vivo tracer kinetic methods. This section describes the principles that are involved and how these can be translated into spreadsheets using Microsoft Excel and the SOLVER function to fit the model to the data. The purpose of this section is to describe the system developed within the RCM, and how it is used
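At its core, the spreadsheet-and-SOLVER workflow described here is least-squares fitting of a kinetic model to tracer data. A rough Python equivalent is sketched below, assuming a mono-exponential elimination model and synthetic observations; the actual RCM model and data are not reproduced in the record.

```python
import numpy as np
from scipy.optimize import curve_fit

# Mono-exponential tracer elimination: c(t) = c0 * exp(-k t).
def model(t, c0, k):
    return c0 * np.exp(-k * t)

t = np.array([0.0, 1.0, 2.0, 4.0, 7.0, 10.0, 14.0])    # days
obs = model(t, 120.0, 0.12) + np.random.default_rng(1).normal(0.0, 2.0, t.size)

# Least-squares fit, playing the role of Excel's SOLVER.
(c0, k), _ = curve_fit(model, t, obs, p0=(100.0, 0.1))
print(f"fitted c0 = {c0:.1f}, k = {k:.3f} per day")
```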

  19. Demonstration of a geostatistical approach to physically consistent downscaling of climate modeling simulations

    KAUST Repository

    Jha, Sanjeev Kumar; Mariethoz, Gregoire; Evans, Jason P.; McCabe, Matthew

    2013-01-01

A downscaling approach based on multiple-point geostatistics (MPS) is presented. The key concept underlying MPS is to sample spatial patterns from within training images, which can then be used in characterizing the relationship between different variables across multiple scales. The approach is used here to downscale climate variables including skin surface temperature (TSK), soil moisture (SMOIS), and latent heat flux (LH). The performance of the approach is assessed by applying it to data derived from a regional climate model of the Murray-Darling basin in southeast Australia, using model outputs at two spatial resolutions of 50 and 10 km. The data used in this study cover the period from 1985 to 2006, with 1985 to 2005 used for generating the training images that define the relationships of the variables across the different spatial scales. Subsequently, the spatial distributions for the variables in the year 2006 are determined at 10 km resolution using the 50 km resolution data as input. The MPS geostatistical downscaling approach reproduces the spatial distribution of TSK, SMOIS, and LH at 10 km resolution with the correct spatial patterns over different seasons, while providing uncertainty estimates through the use of multiple realizations. The technique has the potential not only to bridge issues of spatial resolution in regional and global climate model simulations but also to aid in feature sharpening in remote sensing applications through image fusion, in filling gaps in spatial data, in evaluating downscaled variables with available remote sensing images, and in aggregating/disaggregating hydrological and groundwater variables for catchment studies.

  20. A review on regional convection-permitting climate modeling: Demonstrations, prospects, and challenges.

    Science.gov (United States)

    Prein, Andreas F; Langhans, Wolfgang; Fosser, Giorgia; Ferrone, Andrew; Ban, Nikolina; Goergen, Klaus; Keller, Michael; Tölle, Merja; Gutjahr, Oliver; Feser, Frauke; Brisson, Erwan; Kollet, Stefan; Schmidli, Juerg; van Lipzig, Nicole P M; Leung, Ruby

    2015-06-01

Regional climate modeling using convection-permitting models (CPMs; horizontal grid spacing <4 km) emerges as a promising framework to provide more reliable climate information on regional to local scales compared to traditionally used large-scale models (LSMs; horizontal grid spacing >10 km). CPMs no longer rely on convection parameterization schemes, which had been identified as a major source of errors and uncertainties in LSMs. Moreover, CPMs allow for a more accurate representation of surface and orography fields. The drawback of CPMs is the high demand on computational resources. For this reason, the first CPM climate simulations only appeared a decade ago. In this study, we aim to provide a common basis for CPM climate simulations by giving a holistic review of the topic. The most important components in CPMs, such as physical parameterizations and dynamical formulations, are discussed critically. An overview of weaknesses and an outlook on required future developments is provided. Most importantly, this review presents the consolidated outcome of studies that addressed the added value of CPM climate simulations compared to LSMs. Improvements are evident mostly for climate statistics related to deep convection, mountainous regions, or extreme events. The climate change signals of CPM simulations suggest an increase in flash floods, changes in hail storm characteristics, and reductions in the snowpack over mountains. In conclusion, CPMs are a very promising tool for future climate research. However, coordinated modeling programs are crucially needed to advance parameterizations of unresolved physics and to assess the full potential of CPMs.

  1. Diffuse interface methods for multiphase flow modeling

    International Nuclear Information System (INIS)

    Jamet, D.

    2004-01-01

Full text of publication follows: Nuclear reactor safety programs need a better description of some stages of identified incident or accident scenarios. For some of them, such as the reflooding of the core or the dryout of fuel rods, the heat, momentum and mass transfers taking place at the scale of droplets or bubbles are among the key physical phenomena for which a better description is needed. Experiments are difficult to perform at these very small scales, and direct numerical simulation is viewed as a promising way to give new insight into these complex two-phase flows. This type of simulation requires numerical methods that are accurate, efficient and easy to run in three space dimensions and on parallel computers. Despite many years of development, direct numerical simulation of two-phase flows is still very challenging, mostly because it requires solving moving boundary problems. To avoid this major difficulty, a new class of numerical methods is arising, called diffuse interface methods. These methods are based on physical theories dating back to van der Waals and are mostly used in materials science. In these methods, interfaces separating two phases are modeled as continuous transition zones instead of surfaces of discontinuity. Since all the physical variables encounter possibly strong but nevertheless always continuous variations across the interfacial zones, these methods virtually eliminate the difficult moving boundary problem. We show that these methods lead to a single-phase-like system of equations, which makes it easier to code in 3D and to parallelize compared with more classical methods. The first method presented is dedicated to liquid-vapor flows with phase change. It is based on van der Waals' theory of capillarity. This method has been used to study nucleate boiling of a pure fluid and of dilute binary mixtures. We discuss the importance of the choice and the meaning of the order parameter, i.e. a scalar which discriminates one

  2. A Methodological Demonstration of Set-theoretical Approach to Social Media Maturity Models Using Necessary Condition Analysis

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

Despite being widely accepted and applied across research domains, maturity models have been criticized for lacking academic rigor; in particular, methodologically rigorous and empirically grounded or tested maturity models are quite rare. Attempting to close this gap, we adopt a set-theoretic approach by applying the Necessary Condition Analysis (NCA) technique to derive maturity stages and stage boundary conditions. The ontology is to view stages (boundaries) in maturity models as a collection of necessary conditions. Using social media maturity data, we demonstrate the strength of our approach and evaluate some of the arguments presented by previous conceptually focused social media maturity models.

  3. Novel extrapolation method in the Monte Carlo shell model

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2010-01-01

We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of 56Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g9/2-shell calculation of 64Ge.
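The extrapolation idea can be sketched schematically (this is not the paper's data): compute approximate energies and their energy variances for a sequence of improving wave functions, fit the energy against the variance, and read off the intercept at zero variance. A linear fit and synthetic numbers are assumed below.

```python
import numpy as np

# Synthetic (E, dE2) pairs for a sequence of improving wave functions.
dE2 = np.array([4.0, 2.5, 1.4, 0.8, 0.45])                # energy variances
E = np.array([-200.1, -201.0, -201.7, -202.1, -202.3])    # approximate energies

# Linear fit E(dE2) and extrapolation to zero variance.
slope, intercept = np.polyfit(dE2, E, 1)
print(f"extrapolated E(dE2 -> 0) = {intercept:.2f}")
```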

  4. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    Science.gov (United States)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences the system veracity. However, current IR smoke models cannot provide high veracity, because certain physical characteristics are frequently ignored in fluid simulation, such as treating the discrete phase as a continuous phase and neglecting the spinning of the IR decoy missile body. To address this defect, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on an enhanced DPM is built and a dynamic computing fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher veracity.

  5. DEMONSTRATION OF EQUIVALENCY OF CANE AND SOFTWOOD BASED CELOTEX FOR MODEL 9975 SHIPPING PACKAGES

    International Nuclear Information System (INIS)

    Watkins, R; Jason Varble, J

    2008-01-01

Cane-based Celotex™ has been used extensively in various Department of Energy (DOE) packages as a thermal insulator and impact absorber. Cane-based Celotex™ fiberboard was only manufactured by Knight-Celotex Fiberboard at their Marrero Plant in Louisiana. However, Knight-Celotex Fiberboard shut down their Marrero Plant in early 2007 due to impacts from Hurricane Katrina and other economic factors. Therefore, cane-based Celotex™ fiberboard is no longer available for use in the manufacture of new shipping packages requiring the material as a component. Current consolidation plans for the DOE Complex require the procurement of several thousand new Model 9975 shipping packages requiring cane-based Celotex™ fiberboard. Therefore, an alternative to cane-based Celotex™ fiberboard is needed. Knight-Celotex currently manufactures Celotex™ fiberboard from other cellulosic materials, such as hardwood and softwood. A review of the relevant literature has shown that softwood-based Celotex™ meets all parameters important to the Model 9975 shipping package

  6. A pure Hubbard model with demonstrable pairing adjacent to the Mott-insulating phase

    International Nuclear Information System (INIS)

    Champion, J D; Long, M W

    2003-01-01

We introduce a Hubbard model on a particular class of geometries, and consider the effect of doping the highly spin-degenerate Mott-insulating state with a microscopic number of holes in the extreme strong-coupling limit. The geometry is quite general, with pairs of atomic sites at each superlattice vertex, and a highly frustrated inter-atomic connectivity: the one-dimensional realization is a chain of edge-sharing tetrahedra. The sole model parameter is the ratio of intra-pair to inter-pair hopping matrix elements. If the intra-pair hopping is negligible then introducing a microscopic number of holes results in a ferromagnetic Nagaoka groundstate. Conversely, if the intra-pair hopping is comparable with the inter-pair hopping then the groundstate is low spin with short-ranged spin correlations. We exactly solve the correlated motion of a pair of holes in such a state and find that, in 1d and 2d, they form a bound pair on a length scale that increases with diminishing binding energy. This result is pertinent to the long-standing problem of hole motion in the CuO₂ planes of the high-temperature superconductors: we have rigorously shown that, on our frustrated geometry, the holes pair up and a short-ranged low-spin state is generated by hole motion alone

  7. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, from which the method based on Bayesian networks is most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts from a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).
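The ARR concept admits a compact illustration. In the sketch below, which uses invented sensors, relations and a threshold, each redundancy relation's residual is checked and the sensor sets of violated relations are intersected to isolate a suspect; the paper's actual algorithm is more systematic than this toy version.

```python
# Synthetic readings with an injected fault on sensor s2.
readings = {"s1": 10.0, "s2": 7.0, "s3": 15.0, "s4": 4.8}

# Each ARR lists the sensors it ties together and returns a residual that
# should be near zero when those sensors are healthy.
arrs = {
    "r1": (("s1", "s2", "s3"), lambda v: v["s1"] + v["s2"] - v["s3"]),
    "r2": (("s2", "s4"),       lambda v: v["s2"] - v["s4"]),
}

THRESHOLD = 0.5
violated = [set(sensors) for sensors, res in arrs.values()
            if abs(res(readings)) > THRESHOLD]

if violated:
    # A sensor present in every violated relation is a minimal explanation.
    suspects = set.intersection(*violated)
    print("suspect sensors:", suspects or "no single-sensor explanation")
else:
    print("all relations satisfied")
```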

  8. Construction and modelling of a thermoelectric oxide module (TOM) as a demonstrator - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Tomes, P.; Weidenkaff, A.

    2010-08-15

The project aims at the development of better thermoelectric materials for the direct conversion of solar heat into electricity. The maximum output power P_max and the conversion efficiency η were measured on a series of four-leg thermoelectric oxide modules (TOM). The modules were constructed by combining two p-type (La1.98Sr0.02CuO4) and two n-type (CaMn0.98Nb0.02O3) thermoelements connected electrically in series and thermally in parallel. The temperature gradient ΔT was provided by a High-Flux Solar Simulator (HFSS) source which generates a spectrum similar to solar radiation. This project was intended to be a feasibility study for the utilization of high-temperature solar heat, which could not previously be demonstrated due to the low temperature stability of conventional materials. The direct conversion was proven by this study. The measurements show an almost linear temperature profile along the thermoelectric legs. However, the maximum output power was 88.8 mW for a TOM with a leg length of 5 mm at ΔT = 622 K, and has yet to be optimized by improving the converter design and the applied materials. The highest conversion efficiency η was found for a heat flux of 4 to 8 W cm⁻². The dependence of η on the leg length was studied, as well as the influence of a graphite coating on the hot Al2O3 surface on ΔT, P_max and η. (authors)

  9. Developing a TQM quality management method model

    NARCIS (Netherlands)

    Zhang, Zhihai

    1997-01-01

    From an extensive review of total quality management literature, the external and internal environment affecting an organization's quality performance and the eleven primary elements of TQM are identified. Based on the primary TQM elements, a TQM quality management method model is developed. This

  10. Advanced Instrumentation and Control Methods for Small and Medium Reactors with IRIS Demonstration. Final Report. Volume 1

    International Nuclear Information System (INIS)

    Hines, J. Wesley; Upadhyaya, Belle R.; Doster, J. Michael; Edwards, Robert M.; Lewis, Kenneth D.; Turinsky, Paul; Coble, Jamie

    2011-01-01

    Development and deployment of small-scale nuclear power reactors and their maintenance, monitoring, and control are part of the mission under the Small Modular Reactor (SMR) program. The objectives of this NERI-consortium research project are to investigate, develop, and validate advanced methods for sensing, controlling, monitoring, diagnosis, and prognosis of these reactors, and to demonstrate the methods with application to one of the proposed integral pressurized water reactors (IPWR). For this project, the IPWR design by Westinghouse, the International Reactor Secure and Innovative (IRIS), has been used to demonstrate the techniques developed under this project. The research focuses on three topical areas with the following objectives. Objective 1 - Develop and apply simulation capabilities and sensitivity/uncertainty analysis methods to address sensor deployment analysis and small grid stability issues. Objective 2 - Develop and test an autonomous and fault-tolerant control architecture and apply to the IRIS system and an experimental flow control loop, with extensions to multiple reactor modules, nuclear desalination, and optimal sensor placement strategy. Objective 3 - Develop and test an integrated monitoring, diagnosis, and prognosis system for SMRs using the IRIS as a test platform, and integrate process and equipment monitoring (PEM) and process and equipment prognostics (PEP) toolboxes. The research tasks are focused on meeting the unique needs of reactors that may be deployed to remote locations or to developing countries with limited support infrastructure. These applications will require smaller, robust reactor designs with advanced technologies for sensors, instrumentation, and control. An excellent overview of SMRs is described in an article by Ingersoll (2009). The article refers to these as deliberately small reactors. Most of these have modular characteristics, with multiple units deployed at the same plant site. Additionally, the topics focus

  11. Galleria mellonella infection model demonstrates high lethality of ST69 and ST127 uropathogenic E. coli.

    Directory of Open Access Journals (Sweden)

    Majed F Alghoribi

Full Text Available Galleria mellonella larvae are an alternative in vivo model for investigating bacterial pathogenicity. Here, we examined the pathogenicity of 71 isolates from five leading uropathogenic E. coli (UPEC) lineages using G. mellonella larvae. Larvae were challenged with a range of inoculum doses to determine the 50% lethal dose (LD50) and for analysis of survival outcome using Kaplan-Meier plots. Virulence was correlated with carriage of a panel of 29 virulence factors (VF). Larvae inoculated with ST69 and ST127 isolates (10⁴ colony-forming units/larva) showed significantly higher mortality rates than those infected with ST73, ST95 and ST131 isolates, killing 50% of the larvae within 24 hours. Interestingly, ST131 isolates were the least virulent. We observed that ST127 isolates are significantly associated with a higher VF score than isolates of all other STs tested (P≤0.0001), including ST69 (P<0.02), but one ST127 isolate (strain EC18) was avirulent. Comparative genomic analyses with virulent ST127 strains revealed an IS1-mediated deletion in the O-antigen cluster in strain EC18, which is likely to explain the lack of virulence in the larvae infection model. Virulence in the larvae was not correlated with serotype or phylogenetic group. This study illustrates that G. mellonella larvae are an excellent tool for investigation of the virulence of UPEC strains. The findings also support our suggestion that the incidence of ST127 strains should be monitored, as these isolates have not yet been widely reported, but they clearly have a pathogenic potential greater than that of more widely recognised clones, including ST73, ST95 or ST131.

  12. Modelling viscoacoustic wave propagation with the lattice Boltzmann method.

    Science.gov (United States)

    Xia, Muming; Wang, Shucheng; Zhou, Hui; Shan, Xiaowen; Chen, Hanming; Li, Qingqing; Zhang, Qingchen

    2017-08-31

In this paper, the lattice Boltzmann method (LBM) is employed to simulate wave propagation in viscous media. LBM is a kind of microscopic method for modelling waves through tracking the evolution states of a large number of discrete particles. By choosing different relaxation times in LBM experiments and using the spectrum ratio method, we can reveal the relationship between the quality factor Q and the parameter τ in LBM. A two-dimensional (2D) homogeneous model and a two-layered model are tested in the numerical experiments, and the LBM results are compared against the reference solution of the viscoacoustic equations based on the Kelvin-Voigt model, calculated by the finite difference method (FDM). The wavefields and amplitude spectra obtained by LBM coincide with those by FDM, which demonstrates the capability of the LBM with one relaxation time. The new scheme is relatively simple and efficient to implement compared with the traditional lattice methods. In addition, through a large number of experiments, we find that the relaxation time of LBM has a quantitative relationship with Q. Such a novel scheme offers an alternative forward modelling kernel for seismic inversion and a new model to describe the underground media.

  13. Acceleration methods and models in Sn calculations

    International Nuclear Information System (INIS)

    Sbaffoni, M.M.; Abbate, M.J.

    1984-01-01

In some neutron transport problems solved by the discrete ordinates method, it is relatively common to observe some peculiarities such as the generation of negative fluxes, slow and unreliable convergence, and solution instabilities. The commonly used models for neutron flux calculation and the acceleration methods included in the most widely used codes were analyzed with respect to their use in problems characterized by a strong upscattering effect. Some special conclusions derived from this analysis are presented, as well as a new method to perform the upscattering scaling for solving the aforementioned problems in such cases. This method has been included in the DOT3.5 code (two-dimensional discrete ordinates radiation transport code), generating a new version of wider application. (Author)

  14. Acoustic 3D modeling by the method of integral equations

    Science.gov (United States)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2018-02-01

This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. A tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations, and also for parallelizing across multiple sources. Practical examples and efficiency tests are presented as well.
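The FFT-based matrix-vector product at the heart of such solvers can be illustrated in one dimension: when the kernel depends only on |i - j|, the dense product is a convolution and can be applied in O(N log N) via circulant embedding. The kernel below is synthetic and the example is independent of the paper's code.

```python
import numpy as np

n = 256
kernel = 1.0 / (1.0 + np.arange(n))       # synthetic g(|i - j|) kernel
x = np.random.default_rng(0).standard_normal(n)

# Dense reference: A[i, j] = kernel[|i - j|], a symmetric Toeplitz matrix.
A = kernel[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]
y_dense = A @ x

# FFT version: embed the Toeplitz matrix in a 2n circulant, multiply the
# spectra, and keep the first n entries -- O(N log N) instead of O(N^2).
c = np.concatenate([kernel, [0.0], kernel[:0:-1]])   # first circulant column
y_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, c.size)).real[:n]

print(np.allclose(y_dense, y_fft))         # -> True
```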

  15. Alternative methods of modeling wind generation using production costing models

    International Nuclear Information System (INIS)

    Milligan, M.R.; Pang, C.K.

    1996-08-01

    This paper examines the methods of incorporating wind generation in two production costing models: one is a load duration curve (LDC) based model and the other is a chronological-based model. These two models were used to evaluate the impacts of wind generation on two utility systems using actual collected wind data at two locations with high potential for wind generation. The results are sensitive to the selected wind data and the level of benefits of wind generation is sensitive to the load forecast. The total production cost over a year obtained by the chronological approach does not differ significantly from that of the LDC approach, though the chronological commitment of units is more realistic and more accurate. Chronological models provide the capability of answering important questions about wind resources which are difficult or impossible to address with LDC models

  16. Mathematical methods and models in composites

    CERN Document Server

    Mantic, Vladislav

    2014-01-01

    This book provides a representative selection of the most relevant, innovative, and useful mathematical methods and models applied to the analysis and characterization of composites and their behaviour on micro-, meso-, and macroscale. It establishes the fundamentals for meaningful and accurate theoretical and computer modelling of these materials in the future. Although the book is primarily concerned with fibre-reinforced composites, which have ever-increasing applications in fields such as aerospace, many of the results presented can be applied to other kinds of composites. The topics cover

  17. Intelligent structural optimization: Concept, Model and Methods

    International Nuclear Information System (INIS)

    Lu, Dagang; Wang, Guangyuan; Peng, Zhang

    2002-01-01

Structural optimization has many characteristics of Soft Design, and so it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems in large-scale and complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now developing in the direction of intelligent optimization. In this paper, a concept of Intelligent Structural Optimization (ISO) is proposed. Then, a design process model of ISO is put forward, in which each design sub-process model is discussed. Finally, the design methods of ISO are presented

  18. Electromagnetic modeling method for eddy current signal analysis

    International Nuclear Information System (INIS)

    Lee, D. H.; Jung, H. K.; Cheong, Y. M.; Lee, Y. S.; Huh, H.; Yang, D. J.

    2004-10-01

An electromagnetic modeling method for eddy current signal analysis is necessary before an experiment is performed. Electromagnetic modeling methods consist of analytical methods and numerical methods. The numerical methods can be divided into the Finite Element Method (FEM), the Boundary Element Method (BEM) and the Volume Integral Method (VIM). Each modeling method has its own merits and demerits; therefore, a suitable modeling method can be chosen by considering the characteristics of each. This report explains the principle and application of each modeling method and compares the modeling programs

  19. 40 CFR 63.7732 - What test methods and other procedures must I use to demonstrate initial compliance with the...

    Science.gov (United States)

    2010-07-01

    ...) of this section. (i) Method 1 or 1A to select sampling port locations and the number of traverse...) Method 3, 3A, or 3B to determine the dry molecular weight of the stack gas. (iv) Method 4 to determine... of exhaust gas, dry standard cubic feet per minute (dscfm); Mcharge = Mass of metal charged during...

  20. 40 CFR 63.7322 - What test methods and other procedures must I use to demonstrate initial compliance with the...

    Science.gov (United States)

    2010-07-01

    ... appendix A to 40 CFR part 60. (i) Method 1 to select sampling port locations and the number of traverse...) Method 3, 3A, or 3B to determine the dry molecular weight of the stack gas. (iv) Method 4 to determine.... Collect a minimum sample volume of 30 dry standard cubic feet of gas during each test run. Three valid...

  1. Mathematical Models and Methods for Living Systems

    CERN Document Server

    Chaplain, Mark; Pugliese, Andrea

    2016-01-01

    The aim of these lecture notes is to give an introduction to several mathematical models and methods that can be used to describe the behaviour of living systems. This emerging field of application intrinsically requires the handling of phenomena occurring at different spatial scales and hence the use of multiscale methods. Modelling and simulating the mechanisms that cells use to move, self-organise and develop in tissues is not only fundamental to an understanding of embryonic development, but is also relevant in tissue engineering and in other environmental and industrial processes involving the growth and homeostasis of biological systems. Growth and organization processes are also important in many tissue degeneration and regeneration processes, such as tumour growth, tissue vascularization, heart and muscle functionality, and cardio-vascular diseases.

  2. New method dynamically models hydrocarbon fractionation

    Energy Technology Data Exchange (ETDEWEB)

    Kesler, M.G.; Weissbrod, J.M.; Sheth, B.V. [Kesler Engineering, East Brunswick, NJ (United States)

    1995-10-01

A new method for calculating distillation column dynamics can be used to model time-dependent effects of independent disturbances for a range of hydrocarbon fractionation. It can model crude atmospheric and vacuum columns, with relatively few equilibrium stages and a large number of components, to C₃ splitters, with few components and up to 300 equilibrium stages. Simulation results are useful for operations analysis, process-control applications and closed-loop control in petroleum, petrochemical and gas processing plants. The method is based on an implicit approach, where the time-dependent variations of inventory, temperatures, liquid and vapor flows and compositions are superimposed at each time step on the steady-state solution. Newton-Raphson (N-R) techniques are then used to simultaneously solve the resulting finite-difference equations of material, equilibrium and enthalpy balances that characterize distillation dynamics. The important innovation is component-aggregation and tray-aggregation to contract the equations without compromising accuracy. This contraction increases the N-R calculations' stability. It also significantly increases calculational speed, which is particularly important in dynamic simulations. This method provides a sound basis for closed-loop, supervisory control of distillation--directly or via multivariable controllers--based on a rigorous, phenomenological column model.

  3. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first
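A minimal sketch of how such a model might be encoded is given below; the connection types, the compatibility rule and the proximity threshold are all hypothetical stand-ins for whatever the method actually prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class Connector:
    kind: str        # hypothetical connection type, e.g. "stud" or "tube"
    pos: tuple       # world coordinates of the connection element

@dataclass
class Element:
    name: str
    connectors: list = field(default_factory=list)

COMPATIBLE = {("stud", "tube"), ("tube", "stud")}   # assumed type pairing
PROXIMITY = 0.1                                     # assumed threshold

def dist(p, q):
    return sum((u - v) ** 2 for u, v in zip(p, q)) ** 0.5

def connections(a, b):
    """Connector pairs of a and b that are compatible and close enough."""
    return [(ca, cb) for ca in a.connectors for cb in b.connectors
            if (ca.kind, cb.kind) in COMPATIBLE
            and dist(ca.pos, cb.pos) < PROXIMITY]

brick1 = Element("brick1", [Connector("stud", (0.0, 0.0, 1.0))])
brick2 = Element("brick2", [Connector("tube", (0.0, 0.0, 1.05))])
print(len(connections(brick1, brick2)))   # -> 1
```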

  4. a Modeling Method of Fluttering Leaves Based on Point Cloud

    Science.gov (United States)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time needs in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
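The three named trajectories can be sketched as simple parametric paths, as below; in the actual method they are driven by leaf shape, weight and wind speed, which are abstracted here into assumed amplitude, frequency and fall-speed parameters.

```python
import numpy as np

def trajectory(kind, t, fall_speed=1.0, amp=0.5, freq=2.0):
    """Return x, y, z along one of the three basic fall trajectories."""
    z = -fall_speed * t
    if kind == "rotation":                 # planar side-to-side swing
        return amp * np.sin(freq * t), np.zeros_like(t), z
    if kind == "roll":                     # circular drift while falling
        return amp * np.cos(freq * t), amp * np.sin(freq * t), z
    if kind == "screw":                    # spiral that tightens over time
        r = amp * np.exp(-0.1 * t)
        return r * np.cos(freq * t), r * np.sin(freq * t), z
    raise ValueError(kind)

t = np.linspace(0.0, 5.0, 200)
for kind in ("rotation", "roll", "screw"):
    x, y, z = trajectory(kind, t)
    print(kind, round(float(x[-1]), 3), round(float(y[-1]), 3),
          round(float(z[-1]), 3))
```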

  5. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

Full Text Available Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time needs in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  6. Developing energy forecasting model using hybrid artificial intelligence method

    Institute of Scientific and Technical Information of China (English)

    Shahram Mollaiy-Berneti

    2015-01-01

An important problem in demand planning for energy consumption is developing an accurate energy forecasting model. In fact, it is not possible to allocate energy resources in an optimal manner without an accurate demand value. A new energy forecasting model is proposed based on a back-propagation (BP) neural network and the imperialist competitive algorithm. The proposed method offers the advantage of the local search ability of the BP technique and the global search ability of the imperialist competitive algorithm. Two types of empirical data, regarding energy demand (gross domestic product (GDP), population, import, export and energy demand) in Turkey from 1979 to 2005 and electricity demand (population, GDP, total revenue from exporting industrial products and electricity consumption) in Thailand from 1986 to 2010, were investigated to demonstrate the applicability and merits of the present method. The performance of the proposed model is found to be better than that of a conventional back-propagation neural network, with low mean absolute error.
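The hybrid scheme can be caricatured as a global search over initial weights followed by local gradient refinement. In the sketch below, plain random sampling stands in for the imperialist competitive algorithm's global phase and numerical gradient descent for the BP phase; network size, data and hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) with a single-hidden-layer net.
X = np.linspace(-3.0, 3.0, 64)[:, None]
y = np.sin(X)

def mse(w):
    # Unpack 24 parameters: 1x8 input weights, 8 biases, 8x1 output weights.
    w1, b1, w2 = w[:8].reshape(1, 8), w[8:16], w[16:24].reshape(8, 1)
    return float(np.mean((np.tanh(X @ w1 + b1) @ w2 - y) ** 2))

def grad(w, eps=1e-5):
    # Central-difference gradient (kept numerical for brevity).
    g = np.zeros_like(w)
    for i in range(w.size):
        d = np.zeros_like(w)
        d[i] = eps
        g[i] = (mse(w + d) - mse(w - d)) / (2.0 * eps)
    return g

# Global phase: keep the best of many random weight vectors.
best = min((rng.standard_normal(24) for _ in range(200)), key=mse)
# Local phase: refine the winner by gradient descent (the "BP" role).
for _ in range(500):
    best = best - 0.2 * grad(best)
print(f"final training MSE: {mse(best):.4f}")
```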

  7. A Versatile Nonlinear Method for Predictive Modeling

    Science.gov (United States)

    Liou, Meng-Sing; Yao, Weigang

    2015-01-01

As computational fluid dynamics techniques and tools become widely accepted for real-world practice today, it is intriguing to ask in what areas they can be utilized to their potential in the future. Some promising areas include design optimization and exploration of fluid dynamics phenomena (the concept of a numerical wind tunnel), both of which share the common feature that some parameters are varied repeatedly and the computation can be costly. We are especially interested in the need for an accurate and efficient approach for handling these applications: (1) capturing complex nonlinear dynamics inherent in a system under consideration and (2) versatility (robustness) to encompass a range of parametric variations. In our previous paper, we proposed to use first-order Taylor expansions collected at numerous sampling points along a trajectory and assembled together via nonlinear weighting functions. The validity and performance of this approach were demonstrated for a number of problems with vastly different input functions. In this study, we are especially interested in enhancing the method's accuracy; we extend it to include the second-order Taylor expansion, which however requires a complicated evaluation of Hessian matrices for a system of equations, as in fluid dynamics. We propose a method to avoid these Hessian matrices while maintaining the accuracy. Results based on the method are presented to confirm its validity.

  8. Engineering design of systems models and methods

    CERN Document Server

    Buede, Dennis M

    2009-01-01

    The ideal introduction to the engineering design of systems-now in a new edition. The Engineering Design of Systems, Second Edition compiles a wealth of information from diverse sources to provide a unique, one-stop reference to current methods for systems engineering. It takes a model-based approach to key systems engineering design activities and introduces methods and models used in the real world. Features new to this edition include: * The addition of Systems Modeling Language (SysML) to several of the chapters, as well as the introduction of new terminology * Additional material on partitioning functions and components * More descriptive material on usage scenarios based on literature from use case development * Updated homework assignments * The software product CORE (from Vitech Corporation) is used to generate the traditional SE figures and the software product MagicDraw UML with SysML plugins (from No Magic, Inc.) is used for the SysML figures This book is designed to be an introductory reference ...

  9. An alternative method for centrifugal compressor loading factor modelling

    Science.gov (United States)

    Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.

    2017-08-01

The loading factor at the design point is calculated by one or another empirical formula in classical design methods; performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus the flow coefficient at the impeller exit has a linear character, independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The proper formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova had proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. The equations are proposed with universal empirical coefficients. The calculation error lies in the range of ±1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.
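The two-point construction reduces to elementary arithmetic, sketched below with illustrative numbers (psi_0, the design-point pair, and the resulting slope are assumptions, not values from the paper).

```python
# Two assumed anchor points fully define the straight line.
psi_0 = 0.70                       # loading factor at zero flow rate
phi_des, psi_des = 0.065, 0.52     # design-point flow coefficient / loading

slope = (psi_des - psi_0) / phi_des

def loading_factor(phi):
    """Linear loading factor vs. flow coefficient at the impeller exit."""
    return psi_0 + slope * phi

for phi in (0.0, 0.03, 0.065, 0.08):
    print(f"phi = {phi:.3f} -> psi = {loading_factor(phi):.3f}")
```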

  10. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software

    OpenAIRE

    Dols, W. Stuart; Persily, Andrew K.; Morrow, Jayne B.; Matzke, Brett D.; Sego, Landon H.; Nuffer, Lisa L.; Pulsipher, Brent A.

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by vir...

  11. Storm surge model based on variational data assimilation method

    Directory of Open Access Journals (Sweden)

    Shi-li Huang

    2010-06-01

Full Text Available By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution, with the aim of improving the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variational model was developed and validated through data assimilation tests in an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. Then, the actual storm surge induced by Typhoon 0515 was forecast by the developed model, and the results demonstrate its efficiency in practical application.
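The variational idea can be reduced to a one-parameter sketch: pick the drag coefficient Cd minimizing a misfit cost between modelled and observed surge. The forward operator below is a deliberately crude surrogate (surge proportional to Cd*U^2, with synthetic data); the paper uses the full unstructured-grid surge model instead.

```python
import numpy as np
from scipy.optimize import minimize_scalar

wind_speed = np.array([18.0, 22.0, 25.0, 21.0])   # m/s, synthetic
observed = np.array([0.62, 0.93, 1.20, 0.85])     # surge in m, synthetic

def forward(cd):
    # Toy forward operator: surge height proportional to Cd * U^2.
    return cd * wind_speed**2

def cost(cd):
    # Variational cost: squared misfit between model and observations.
    return float(np.sum((forward(cd) - observed) ** 2))

res = minimize_scalar(cost, bounds=(5e-4, 5e-3), method="bounded")
print(f"identified Cd = {res.x:.2e}, J(Cd) = {res.fun:.4f}")
```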

  12. Railway Track Allocation: Models and Methods

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    2011-01-01

Efficiently coordinating the movement of trains on a railway network is a central part of the planning process for a railway company. This paper reviews models and methods that have been proposed in the literature to assist planners in finding train routes. Since the problem of routing trains on a railway network entails allocating the track capacity of the network (or part thereof) over time in a conflict-free manner, all studies that model railway track allocation in some capacity are considered relevant. We hence survey work on the train timetabling, train dispatching, train platforming, and train routing problems, group them by railway network type, and discuss track allocation from a strategic, tactical, and operational level.

  13. Railway Track Allocation: Models and Methods

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

Efficiently coordinating the movement of trains on a railway network is a central part of the planning process for a railway company. This paper reviews models and methods that have been proposed in the literature to assist planners in finding train routes. Since the problem of routing trains on a railway network entails allocating the track capacity of the network (or part thereof) over time in a conflict-free manner, all studies that model railway track allocation in some capacity are considered relevant. We hence survey work on the train timetabling, train dispatching, train platforming, and train routing problems, group them by railway network type, and discuss track allocation from a strategic, tactical, and operational level.

  14. Boundary element method for modelling creep behaviour

    International Nuclear Information System (INIS)

    Zarina Masood; Shah Nor Basri; Abdel Majid Hamouda; Prithvi Raj Arora

    2002-01-01

A two-dimensional initial strain direct boundary element method is proposed to numerically model creep behaviour. The boundary of the body is discretized into quadratic elements and the domain into quadratic quadrilaterals. The variables are also assumed to have a quadratic variation over the elements. The boundary integral equation is solved for each boundary node and assembled into a matrix. This matrix is solved by Gauss elimination with partial pivoting to obtain the variables on the boundary and in the interior. Due to the time-dependent nature of creep, the solution has to be derived over increments of time. An automatic time incrementation technique and the backward Euler method for updating the variables are implemented to assure stability and accuracy of the results. A flowchart of the solution strategy is also presented. (Author)

  15. Surface physics theoretical models and experimental methods

    CERN Document Server

    Mamonova, Marina V; Prudnikova, I A

    2016-01-01

The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids, termed adhesion, depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...

  16. Mechanics, Models and Methods in Civil Engineering

    CERN Document Server

    Maceri, Franco

    2012-01-01

"Mechanics, Models and Methods in Civil Engineering" collects leading papers dealing with actual Civil Engineering problems. The approach is in the line of the Italian-French school and therefore deeply couples mechanics and mathematics, creating new predictive theories, enhancing clarity in understanding, and improving effectiveness in applications. The authors of the contributions collected here belong to the Lagrange Laboratory, a European Research Network active for many years. This book will be of major interest to readers aware of modern Civil Engineering.

  17. The forward tracking, an optical model method

    CERN Document Server

    Benayoun, M

    2002-01-01

This Note describes the so-called Forward Tracking, and the underlying optical model, developed in the context of LHCb-Light studies. Starting from Velo tracks, cheated or found by real pattern recognition, the tracks are found in the ST1-3 chambers after the magnet. The main ingredient of the method is a parameterisation of the track in the ST1-3 region, based on the Velo track parameters and an X seed in one ST station. Performance with the LHCb-Minus and LHCb-Light setups is given.

  18. Statistical Models and Methods for Lifetime Data

    CERN Document Server

    Lawless, Jerald F

    2011-01-01

    Praise for the First Edition"An indispensable addition to any serious collection on lifetime data analysis and . . . a valuable contribution to the statistical literature. Highly recommended . . ."-Choice"This is an important book, which will appeal to statisticians working on survival analysis problems."-Biometrics"A thorough, unified treatment of statistical models and methods used in the analysis of lifetime data . . . this is a highly competent and agreeable statistical textbook."-Statistics in MedicineThe statistical analysis of lifetime or response time data is a key tool in engineering,

  19. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)

    2017-06-15

For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using the information on the underlying distribution, the Sequential statistical modeling (SSM) approach, and the Kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations, using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data points describing the fatigue strength coefficient of SAE 950X is used for demonstrating the performance of the obtained statistical models that use a pre-determined number of experimental data in predicting the probability of failure for a target fatigue life.
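One way to realize such a convergence check is sketched below: as samples accumulate, compare the empirical CDFs before and after each new batch and watch the area between them shrink. The Gaussian data, grid and normalization are assumptions for illustration, not the SAE 950X data or the paper's exact metric.

```python
import numpy as np

# As data accumulate, compare the empirical CDF of the first n samples with
# that of the first n + step samples; the (normalized) area between the two
# CDFs shrinks as the sample grows, and a tolerance on it can serve as the
# stopping criterion. All data here are synthetic.
rng = np.random.default_rng(3)
data = rng.normal(950.0, 40.0, 200)          # stand-in "experimental" values

def area_metric(a, b, grid=np.linspace(800.0, 1100.0, 1000)):
    ecdf = lambda s, x: np.searchsorted(np.sort(s), x) / s.size
    gap = np.abs(ecdf(a, grid) - ecdf(b, grid))
    return np.trapz(gap, grid) / (grid[-1] - grid[0])   # mean CDF gap

step = 10
for n in range(20, data.size - step + 1, 40):
    print(n, round(area_metric(data[:n], data[:n + step]), 4))
```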

  20. Demonstration of a modelling-based multi-criteria decision analysis procedure for prioritisation of occupational risks from manufactured nanomaterials.

    Science.gov (United States)

    Hristozov, Danail; Zabeo, Alex; Alstrup Jensen, Keld; Gottardo, Stefania; Isigonis, Panagiotis; Maccalman, Laura; Critto, Andrea; Marcomini, Antonio

    2016-11-01

    Several tools to facilitate the risk assessment and management of manufactured nanomaterials (MN) have been developed. Most of them require input data on physicochemical properties, toxicity and scenario-specific exposure information. However, such data are not yet readily available, and tools that can handle data gaps in a structured way to ensure transparent risk analysis for industrial and regulatory decision making are needed. This paper proposes such a quantitative risk prioritisation tool, based on a multi-criteria decision analysis algorithm, which combines advanced exposure and dose-response modelling to calculate margins of exposure (MoE) for a number of MN in order to rank their occupational risks. We demonstrated the tool in a number of workplace exposure scenarios (ES) involving the production and handling of nanoscale titanium dioxide, zinc oxide (ZnO), silver and multi-walled carbon nanotubes. The results of this application demonstrated that bag/bin filling, manual un/loading and dumping of large amounts of dry powders led to high emissions, which resulted in high risk associated with these ES. The ZnO MN revealed considerable hazard potential in vivo, which significantly influenced the risk prioritisation results. In order to study how variations in the input data affect our results, we performed probabilistic Monte Carlo sensitivity/uncertainty analysis, which demonstrated that the performance of the proposed model is stable against changes in the exposure and hazard input variables.

  1. Evaluation and demonstration of methods for improved fuel utilization. First semi-annual progress report, September 1979-March 1980

    International Nuclear Information System (INIS)

    Decher, U.

    1980-01-01

    Demonstrations of improved fuel management and burnup are being performed in the Fort Calhoun reactor. More efficient fuel management will be achieved through the implementation of a low leakage concept called SAVFUEL (Shimmed And Very Flexible Uranium Element Loading), which is expected to reduce uranium requirements by 2 to 4%. The burnup will be increased sufficiently to reduce uranium requirements by 5 to 15%. Four fuel assemblies scheduled to demonstrate the SAVFUEL duty cycle, loaded into the core in December 1978, were inspected visually prior to their second exposure cycle. In addition, seventeen fuel assemblies were inspected after their fourth exposure cycle, having achieved assembly-average burnups of up to 36 GWD/T. One assembly has been reinserted into Cycle 6 for a fifth exposure cycle. The preliminary results of all visual fuel inspections, which appear to show excellent fuel rod performance, are presented in this report. This report also contains the results of a licensing activity which was performed to allow insertion of a highly burned assembly into the reactor for a fifth irradiation cycle

  2. 40 CFR 63.8802 - What methods must I use to demonstrate compliance with the emission limitation for loop slitter...

    Science.gov (United States)

    2010-07-01

    ... Hazardous Air Pollutants: Flexible Polyurethane Foam Fabrication Operations Testing and Initial Compliance... for each material used in your foam fabrication operations, you must use one of the options in... CFR part 63). You may use Method 311 for determining the mass fraction of HAP. Use the procedures...

  3. 40 CFR 63.7822 - What test methods and other procedures must I use to demonstrate initial compliance with the...

    Science.gov (United States)

    2010-07-01

    ... select sampling port locations and the number of traverse points. Sampling ports must be located at the... determine the volumetric flow rate of the stack gas. (iii) Method 3, 3A, or 3B to determine the dry... filterable catch only). (2) Collect a minimum sample volume of 60 dry standard cubic feet (dscf) of gas...

  4. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method for bidirectional conversion between CAD models and primitive solids. • The method improves on an earlier conversion method between CAD models and half-spaces. • The method was tested on the ITER model, validating its correctness and efficiency. • The method is integrated in SuperMC and can build models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time consuming and error prone to describe a primitive solid geometry manually, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD tools, an automatic method for accurate, prompt conversion between CAD models and primitive solids is needed. Such an automatic modeling method was developed for Monte Carlo geometry described by primitive solids, converting in both directions between CAD models and primitive solid representations. When converting from a CAD model to a primitive solid model, the CAD model is decomposed into several convex solid sets, and the corresponding primitive solids are generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids are created and the related operations are performed. The method was integrated in SuperMC and benchmarked with the ITER benchmark model. Its correctness and efficiency were demonstrated.

  5. Personality assessment and model comparison with behavioral data: A statistical framework and empirical demonstration with bonobos (Pan paniscus).

    Science.gov (United States)

    Martin, Jordan S; Suarez, Scott A

    2017-08-01

    Interest in quantifying consistent among-individual variation in primate behavior, also known as personality, has grown rapidly in recent decades. Although behavioral coding is the most frequently utilized method for assessing primate personality, limitations in current statistical practice prevent researchers from utilizing the full potential of their coding datasets. These limitations include the use of extensive data aggregation, not modeling biologically relevant sources of individual variance during repeatability estimation, not partitioning between-individual (co)variance prior to modeling personality structure, the misuse of principal component analysis, and an over-reliance upon exploratory statistical techniques to compare personality models across populations, species, and data collection methods. In this paper, we propose a statistical framework for primate personality research designed to address these limitations. Our framework synthesizes recently developed mixed-effects modeling approaches for quantifying behavioral variation with an information-theoretic model selection paradigm for confirmatory personality research. After detailing a multi-step analytic procedure for personality assessment and model comparison, we employ this framework to evaluate seven models of personality structure in zoo-housed bonobos (Pan paniscus). We find that differences between sexes, ages, zoos, time of observation, and social group composition contributed to significant behavioral variance. Independently of these factors, however, personality nonetheless accounted for a moderate to high proportion of variance in average behavior across observational periods. A personality structure derived from past rating research receives the strongest support relative to our model set. This model suggests that personality variation across the measured behavioral traits is best described by two correlated but distinct dimensions reflecting individual differences in affiliation and

  6. Manufacture and demonstration of organic photovoltaic-powered electrochromic displays using roll coating methods and printable electrolytes

    DEFF Research Database (Denmark)

    Jensen, Jacob; Dam, Henrik Friis; Reynolds, John R.

    2012-01-01

    active material (ECP-Magenta) and poly(N-octadecyl-(propylene-1,3-dioxy)-3,4-pyrrole-2,5-diyl) as a minimally colored, charge balancing material (MCCP). Two electrolyte systems were compared to allow development of fully printable and laminated devices on flexible substrates. Devices of various sizes, up...... to 7 × 8 cm2, are demonstrated with pixelated devices containing pixel sizes of 4 × 4 mm2 or 13 × 13 mm2. The transmission contrast exhibited by the devices, when switched between the fully bleached and fully colored state, was 58% at a visible wavelength of 550 nm, and the devices exhibited switching...... times of photovoltaic devices (with or without the use of a lithium-polymer battery) to power the devices between the colored and bleached state, illustrating a self-powered ECD. © 2012 Wiley Periodicals, Inc. J Polym Sci Part B...

  7. Demonstration of multi-generational growth of tungsten nanoparticles in hydrogen plasma using in situ laser extinction method

    Science.gov (United States)

    Ouaras, K.; Lombardi, G.; Hassouni, K.

    2018-03-01

    For the first time, we demonstrate that tungsten (W) nanoparticles (NPs) are created when a tungsten target is exposed to a low-pressure, high-density hydrogen plasma. The plasma was generated using a novel dual plasma system combining a microwave discharge and a pulsed direct-current (DC) discharge. A significant population of 30-70 nm diameter particles forms in multiple generations at the tungsten surface when the W cathode is biased at ~ -1 kV and subjected to an H+/H2+/H3+ ion flux of ~10^20 m^-2 s^-1. The evidenced NP formation should be taken into account as one of the consequences of plasma-surface interaction, especially for fusion applications.

  8. Effect of defuzzification method of fuzzy modeling

    Science.gov (United States)

    Lapohos, Tibor; Buchal, Ralph O.

    1994-10-01

    Imprecision can arise in fuzzy relational modeling as a result of fuzzification, inference and defuzzification. These three sources of imprecision are difficult to separate. We have determined through numerical studies that an important source of imprecision is the defuzzification stage. This imprecision adversely affects the quality of the model output. The most widely used defuzzification algorithm is known by the name of `center of area' (COA) or `center of gravity' (COG). In this paper, we show that this algorithm not only maps the near limit values of the variables improperly but also introduces errors for middle domain values of the same variables. Furthermore, the behavior of this algorithm is a function of the shape of the reference sets. We compare the COA method to the weighted average of cluster centers (WACC) procedure in which the transformation is carried out based on the values of the cluster centers belonging to each of the reference membership functions instead of using the functions themselves. We show that this procedure is more effective and computationally much faster than the COA. The method is tested for a family of reference sets satisfying certain constraints, that is, for any support value the sum of reference membership function values equals one and the peak values of the two marginal membership functions project to the boundaries of the universe of discourse. For all the member sets of this family of reference sets the defuzzification errors do not get bigger as the linguistic variables tend to their extreme values. In addition, the more reference sets that are defined for a certain linguistic variable, the less the average defuzzification error becomes. In case of triangle shaped reference sets there is no defuzzification error at all. Finally, an alternative solution is provided that improves the performance of the COA method.
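
    The contrast between the two defuzzification schemes can be illustrated with a small Python sketch (our own toy example; the membership functions and degrees are assumed): COA takes the centroid of the clipped, aggregated output sets, while WACC averages only the peak (cluster-centre) values weighted by their membership degrees.

        import numpy as np

        x = np.linspace(0.0, 10.0, 1001)                  # universe of discourse
        centers = np.array([0.0, 2.5, 5.0, 7.5, 10.0])    # peaks of the reference sets

        def tri(x, a, b, c):
            # triangular membership function with feet a, c and peak b
            return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                         (c - x) / (c - b + 1e-12)), 0.0)

        mu = np.array([0.0, 0.1, 0.7, 0.3, 0.0])          # inferred membership degrees

        # COA: clip each reference set at its degree, aggregate, take the centroid
        sets = [tri(x, c - 2.5, c, c + 2.5) for c in centers]
        agg = np.max([np.minimum(m, s) for m, s in zip(mu, sets)], axis=0)
        coa = np.trapz(agg * x, x) / np.trapz(agg, x)

        # WACC: weighted average of the cluster (peak) centres only
        wacc = np.sum(mu * centers) / np.sum(mu)

        print(f"COA = {coa:.3f}, WACC = {wacc:.3f}")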

  9. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid the statistical inference problems that can arise from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss of efficiency in standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
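
    As a rough illustration of why the error distribution matters (a maximum-likelihood analogue of the Bayesian approach above, not the paper's SAS MCMC code), the Python sketch below fits a linear growth curve under normal and Student-t error models to simulated heavy-tailed data; all values are illustrative.

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(1)
        t = np.tile(np.arange(5), 50)                     # 50 subjects, 5 waves each
        y = 2.0 + 1.5 * t + stats.t(df=3).rvs(size=t.size, random_state=rng)

        def negloglik(params, dist):
            b0, b1, log_scale = params
            resid = (y - b0 - b1 * t) / np.exp(log_scale)
            # log-density of y includes the Jacobian term -log(scale)
            return -(dist.logpdf(resid) - log_scale).sum()

        for name, dist in [("normal errors", stats.norm), ("t(3) errors", stats.t(df=3))]:
            fit = optimize.minimize(negloglik, x0=[0.0, 1.0, 0.0], args=(dist,))
            print(name, "-> intercept, slope =", np.round(fit.x[:2], 3))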

  10. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, current use is limited, and answers to questions such as what methods to use, and when, remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them, or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix form to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.

  11. A Method of Upgrading a Hydrostatic Model to a Nonhydrostatic Model

    Directory of Open Access Journals (Sweden)

    Chi-Sann Liou

    2009-01-01

    Full Text Available As the sigma-p coordinate under the hydrostatic approximation can be interpreted as a mass coordinate without the hydrostatic approximation, we propose a method that upgrades a hydrostatic model to a nonhydrostatic model with relatively little effort. The method adds to the primitive equations the extra terms omitted by the hydrostatic approximation and two prognostic equations, for the vertical speed w and the nonhydrostatic part of the pressure p'. With properly formulated governing equations, at each time step the dynamic part of the model is first integrated as in the original hydrostatic model, and then the nonhydrostatic contributions are added as corrections to the hydrostatic solutions. In applying physical parameterizations after the dynamic integration, all physics packages of the original hydrostatic model can be used directly in the nonhydrostatic model, since the upgraded nonhydrostatic model shares the same vertical coordinates with the original hydrostatic model. In this way, the majority of the code of the nonhydrostatic model comes from the original hydrostatic model; extra code is only needed for the calculations additional to the primitive equations. In order to handle sound waves, we use smaller time steps in the nonhydrostatic dynamic time integration, with a split-explicit scheme for horizontal momentum and temperature and a semi-implicit scheme for w and p'. Simulations of 2-dimensional mountain waves and of density flows associated with a cold bubble have been used to test the method. The idealized test cases demonstrate that the proposed method realistically simulates the nonhydrostatic effects on different atmospheric circulations that are revealed in theoretical solutions and in simulations from other nonhydrostatic models. This method can be used to upgrade any global or mesoscale model from hydrostatic to nonhydrostatic.
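
    In schematic form (generic textbook notation, not the paper's exact equation set), the splitting described above can be summarised as:

        % Schematic hydrostatic/nonhydrostatic splitting; the dots stand for
        % advection and other standard terms omitted here for brevity.
        \begin{align*}
          p &= p_h + p', \qquad \frac{\partial p_h}{\partial z} = -\rho g
            && \text{hydrostatic part plus nonhydrostatic correction}\\
          \frac{\partial w}{\partial t} &= \dots
            - \frac{1}{\rho}\,\frac{\partial p'}{\partial z} + g\,\frac{\theta'}{\theta}
            && \text{prognostic vertical speed } w\\
          \frac{\partial p'}{\partial t} &= \dots - \rho\,c_s^{2}\,\nabla\cdot\mathbf{v}
            && \text{prognostic nonhydrostatic pressure } p'
        \end{align*}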

  12. Rewards of bridging the divide between measurement and clinical theory: demonstration of a bifactor model for the Brief Symptom Inventory.

    Science.gov (United States)

    Thomas, Michael L

    2012-03-01

    There is growing evidence that psychiatric disorders maintain hierarchical associations where general and domain-specific factors play prominent roles (see D. Watson, 2005). Standard, unidimensional measurement models can fail to capture the meaningful nuances of such complex latent variable structures. The present study examined the ability of the multidimensional item response theory bifactor model (see R. D. Gibbons & D. R. Hedeker, 1992) to improve construct validity by serving as a bridge between measurement and clinical theories. Archival data consisting of 688 outpatients' psychiatric diagnoses and item-level responses to the Brief Symptom Inventory (BSI; L. R. Derogatis, 1993) were extracted from files at a university mental health clinic. The bifactor model demonstrated superior fit for the internal structure of the BSI and improved overall diagnostic accuracy in the sample (73%) compared with unidimensional (61%) and oblique simple structure (65%) models. Consistent with clinical theory, multiple sources of item variance were drawn from individual test items. Test developers and clinical researchers are encouraged to consider model-based measurement in the assessment of psychiatric distress.

  13. A Demonstration using Low-kt Fatigue Specimens of a Method for Predicting the Fatigue Behaviour of Corroded Aircraft Components

    Science.gov (United States)

    2013-03-01

    [Fragmentary abstract; only excerpts of the report text are recoverable: the NASGRO dataset produced predictions of infinite life (runouts) and was therefore not used in the Criticality Model; fracture surfaces were removed using an abrasive cut-off wheel, cleaned with water and analytical-grade solvent, and examined on a JSM-6490 SEM at DSTO; tabulated examples include pitting at a bolthole in NASA Space Shuttle wheels (7075-T6), EDM-notched low-kt fatigue specimens, and Wei [133] (2024-T3, thickness not stated, 500 h in 0.5M).]

  14. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

    International Nuclear Information System (INIS)

    Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie; Mandelli, Diego; Smith, Curtis Lee

    2015-01-01

    The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS 'pathways,' or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.

  16. Implementation of a Sage-Based Stirling Model Into a System-Level Numerical Model of the Fission Power System Technology Demonstration Unit

    Science.gov (United States)

    Briggs, Maxwell H.

    2011-01-01

    The Fission Power System (FPS) project is developing a Technology Demonstration Unit (TDU) to verify the performance and functionality of a subscale version of the FPS reference concept in a relevant environment, and to verify component and system models. As hardware is developed for the TDU, component and system models must be refined to include the details of specific component designs. This paper describes the development of a Sage-based pseudo-steady-state Stirling convertor model and its implementation into a system-level model of the TDU.

  17. Mathematical models and methods for planet Earth

    CERN Document Server

    Locatelli, Ugo; Ruggeri, Tommaso; Strickland, Elisabetta

    2014-01-01

    In 2013 several scientific activities were devoted to mathematical research on the study of planet Earth. The current volume presents a selection of the highly topical issues presented at the workshop “Mathematical Models and Methods for Planet Earth”, held in Rome, Italy, in May 2013. The fields of interest range from the impact hazard of dangerous asteroids to protection from space debris, from climate change to the monitoring of geological events, and from the study of tumor growth to sociological problems. In all these fields mathematical studies play a relevant role, as a tool for the analysis of specific topics and as an ingredient of multidisciplinary problems. To investigate these problems we will see many different mathematical tools at work: to mention a few, stochastic processes, PDEs, normal forms, and chaos theory.

  18. Determining uranium speciation in contaminated soils by molecular spectroscopic methods: Examples from the Uranium in Soils Integrated Demonstration

    International Nuclear Information System (INIS)

    Allen, P.G.; Berg, J.M.; Chisholm-Brause, C.J.; Conradson, S.D.; Donohoe, R.J.; Morris, D.E.; Musgrave, J.A.; Tait, C.D.

    1994-01-01

    The US Department of Energy's former uranium production facility located at Fernald, OH (18 mi NW of Cincinnati) is the host site for an Integrated Demonstration for remediation of uranium-contaminated soils. A wide variety of source terms for uranium contamination have been identified, reflecting the diversity of operations at the facility. Most of the uranium contamination is contained in the top ∼1/2 m of soil, but uranium has been found in perched waters, indicating substantial migration. In support of the development of remediation technologies and risk assessment, we are conducting uranium speciation studies on untreated and treated soils using molecular spectroscopies. Untreated soils from five discrete sites have been analyzed. We have found that ∼80–90% of the uranium exists as hexavalent UO₂²⁺ species even though many source terms consisted of tetravalent uranium species such as UO₂. Much of the uranium exists as microcrystalline precipitates (secondary minerals). There is also clear evidence for variations in uranium species from the microscopic to the macroscopic scale. However, similarities in speciation at sites having different source terms suggest that soil and groundwater chemistry may be as important as source term in defining the uranium speciation in these soils. Characterization of treated soils has focused on materials from two sites that have undergone leaching using conventional extractants (e.g., carbonate, citrate) or novel chelators such as Tiron. Redox reagents have also been used to facilitate the leaching process. Three different classes of treated soils have been identified based on the speciation of uranium remaining in the soils. In general, the effective treatments decrease the total uranium while increasing the ratio of U(IV) to U(VI) species

  19. Full-scale demonstration of EBS construction technology I. Block, pellet and in-situ compaction method

    International Nuclear Information System (INIS)

    Toguri, Satohito; Asano, Hidekazu; Takao, Hajime; Matsuda, Takeshi; Amemiya, Kiyoshi

    2008-01-01

    (i) Bentonite block: The applicability of the buffer-material manufacturing technology was verified by manufacturing a full-scale bentonite ring consisting of one-eighth (1/8) dividing blocks (outside diameter (OD) 2,220 mm; height 300 mm). Density characteristics, dimensions and scale effects, considering the tunnel environment during transportation, were evaluated. Vacuum suction was selected as the handling technology for the ring. The hoisting characteristics of the vacuum suction technology were established through evaluation of the mechanical properties of the buffer material, the friction between blocks, etc., using a full-scale bentonite ring (OD 2,200 mm; height 300 mm). Designs for the bentonite block and the emplacement equipment were presented in consideration of the manufacturability of the block, the stability of handling and the improvement of emplacement efficiency. (ii) Bentonite pellet filling: Basic characteristics such as water penetration, swelling and thermal conductivity of various kinds of bentonite pellet were collected in laboratory-scale tests. The applicability of the pellet filling technology was evaluated by a horizontal filling test using a simulated full-scale drift tunnel (OD 2,200 mm; length 6 m). Filling density, grain size distribution, etc. were also measured. (iii) In-situ compaction of bentonite: The dynamic compaction method (heavy weight fall method) was selected as the in-situ compaction technology. A compaction test using a full-scale disposal pit (OD 2,360 mm) was carried out. The basic specification of the compacting equipment and the applicability of the in-situ compaction technology were presented. The density and density distribution of the buffer material, and the energy acting on the wall of the pit, were also measured. (author)

  20. FDTD method and models in optical education

    Science.gov (United States)

    Lin, Xiaogang; Wan, Nan; Weng, Lingdong; Zhu, Hao; Du, Jihe

    2017-08-01

    In this paper, the finite-difference time-domain (FDTD) method is proposed as a pedagogical tool in optical education. FDTD Solutions, a simulation software package based on the FDTD algorithm, is presented as a new tool that helps beginners build optical models and analyze optical problems. The core of the FDTD algorithm is that the time-dependent Maxwell's equations are discretized in their space and time partial derivatives, so as to simulate the response of the interaction between an electromagnetic pulse and an ideal conductor or a semiconductor. Because the electromagnetic field is solved in the time domain, memory usage is reduced and broadband results can be obtained easily. Promoting the FDTD algorithm in optical education is therefore practical and efficient. FDTD enables us to design, analyze and test modern passive and nonlinear photonic components (such as bio-particles, nanoparticles and so on) for wave propagation, scattering, reflection, diffraction, polarization and nonlinear phenomena. The different FDTD models can help teachers and students solve almost all of the optical problems encountered in optical education. Additionally, the GUI of FDTD Solutions is friendly enough that beginners can master it quickly.
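
    The core update loop is simple enough to show in a few lines; the following self-contained Python sketch of a 1-D Yee-scheme FDTD (our illustration, with arbitrary parameters) propagates a Gaussian pulse in free space:

        import numpy as np

        nx, nt = 400, 800
        ez = np.zeros(nx)        # electric field
        hy = np.zeros(nx - 1)    # magnetic field, staggered half a cell
        c = 0.5                  # Courant number c*dt/dx, <= 1 for stability

        for n in range(nt):
            hy += c * np.diff(ez)                 # update H from the curl of E
            ez[1:-1] += c * np.diff(hy)           # update E from the curl of H
            ez[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

        print("peak |Ez| after propagation:", np.abs(ez).max())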

  1. A near-real-time material accountancy model and its preliminary demonstration in the Tokai reprocessing plant

    International Nuclear Information System (INIS)

    Ikawa, K.; Ihara, H.; Nishimura, H.; Tsutsumi, M.; Sawahata, T.

    1983-01-01

    The study of a near-real-time (n.r.t.) material accountancy system as applied to small or medium-sized spent fuel reprocessing facilities has been carried out since 1978 under the TASTEX programme. In this study, a model of the n.r.t. accountancy system, called the ten-day-detection-time model, was developed and demonstrated in an actual operating plant. The programme closed in May 1981, but the study has been extended. The effectiveness of the proposed n.r.t. accountancy model was evaluated by means of simulation techniques. The results showed that weekly material balances covering the entire process MBA could provide sufficient information to satisfy the IAEA guidelines for small or medium-sized facilities. The applicability of the model to the actual plant has been evaluated by a series of field tests covering four campaigns. In addition to the material accountancy data, many valuable operational data have been obtained, for example on additional locations for an in-process inventory and the time needed for an in-process inventory. A CUMUF (cumulative MUF) chart of the resulting MUF data in the C-1 and C-2 campaigns clearly showed that there had been a measurement bias across the process MBA. This chart gave a dramatic picture of the power of the n.r.t. accountancy concept by showing the nature of this bias, which was not clearly shown in the conventional material accountancy data. (author)
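
    The CUMUF idea is easy to illustrate: accumulating the per-period material balances turns a small constant measurement bias into a visible linear trend. The Python sketch below uses simulated numbers (not plant data; all constants are assumptions) to show the effect:

        import numpy as np

        rng = np.random.default_rng(2)
        weeks = 30
        bias = 0.4                                   # assumed systematic bias per week
        # weekly MUF = inputs - outputs - change in inventory (simulated here)
        muf = rng.normal(0.0, 1.5, weeks) + bias
        cumuf = np.cumsum(muf)                       # the quantity plotted on a CUMUF chart

        slope = np.polyfit(np.arange(weeks), cumuf, 1)[0]
        print(f"estimated bias from CUMUF slope: {slope:.2f} units/week")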

  2. Tested Demonstrations.

    Science.gov (United States)

    Gilbert, George L.

    1983-01-01

    An apparatus is described in which the effects of pressure, volume, and temperature changes on a gas can be observed simultaneously. Includes use of the apparatus in demonstrating Boyle's, Gay-Lussac's, and Charles' Laws, attractive forces, and Dalton's Law of Partial Pressures, and in illustrating measurable vapor pressures of liquids and some solids.

  3. Demonstration of a multiscale modeling technique: prediction of the stress–strain response of light activated shape memory polymers

    International Nuclear Information System (INIS)

    Beblo, Richard V; Weiland, Lisa Mauck

    2010-01-01

    Presented is a multiscale modeling method applied to light activated shape memory polymers (LASMPs). LASMPs are a new class of shape memory polymer (SMPs) being developed for adaptive structures applications where a thermal stimulus is undesirable. LASMP developmental emphasis is placed on optical manipulation of Young's modulus. A multiscale modeling approach is employed to anticipate the soft and hard state moduli solely on the basis of a proposed molecular formulation. Employing such a model shows promise for expediting down-selection of favorable formulations for synthesis and testing, and subsequently accelerating LASMP development. An empirical adaptation of the model is also presented which has applications in system design once a formulation has been identified. The approach employs rotational isomeric state theory to build a molecular scale model of the polymer chain yielding a list of distances between the predicted crosslink locations, or r-values. The r-values are then fitted with Johnson probability density functions and used with Boltzmann statistical mechanics to predict stress as a function of the strain of the phantom polymer network. Empirical adaptation for design adds junction constraint theory to the modeling process. Junction constraint theory includes the effects of neighboring chain interactions. Empirical fitting results in numerically accurate Young's modulus predictions. The system is modular in nature and thus lends itself well to being adapted to other polymer systems and development applications

  4. Method for customizing an organic Rankine cycle to a complex heat source for efficient energy conversion, demonstrated on a Fischer Tropsch plant

    International Nuclear Information System (INIS)

    DiGenova, Kevin J.; Botros, Barbara B.; Brisson, J.G.

    2013-01-01

    Highlights: ► Methods for customizing organic Rankine cycles are proposed. ► A set of cycle modifications help to target available heat sources. ► Heat sources with complex temperature–enthalpy profiles can be matched. ► Significant efficiency improvements can be achieved over basic ORC’s. -- Abstract: Organic Rankine cycles (ORCs) provide an alternative to traditional steam Rankine cycles for the conversion of low grade heat sources into power, where conventional steam power cycles are known to be inefficient. A large processing plant often has multiple low temperature waste heat streams available for conversion to electricity by a low temperature cycle, resulting in a composite heat source with a complex temperature–enthalpy profile. This work presents a set of ORC design concepts: reheat stages, multiple pressure levels, and balanced recuperators; and demonstrates the use of these design concepts as building blocks to create a customized cycle that matches an available heat source. Organic fluids are modeled using a pure substance database. The pinch analysis technique of forming composite curves is applied to analyze the effect of each building block on the temperature–enthalpy profile of the ORC heat requirement. The customized cycle is demonstrated on a heat source derived from a Fischer Tropsch reactor and its associated processes. Analysis shows a steam Rankine cycle can achieve a 20.6% conversion efficiency for this heat source, whereas a simple organic Rankine cycle using hexane as the working fluid can achieve a 20.9% conversion efficiency. If the ORC building blocks are combined into a cycle targeted to match the temperature–enthalpy profile of the heat source, this customized ORC can achieve 28.5% conversion efficiency.

  5. Character expansion methods for matrix models of dually weighted graphs

    International Nuclear Information System (INIS)

    Kazakov, V.A.; Staudacher, M.; Wynter, T.

    1996-01-01

    We consider generalized one-matrix models in which external fields allow control over the coordination numbers on both the original and dual lattices. We rederive in a simple fashion a character expansion formula for these models originally due to Itzykson and Di Francesco, and then demonstrate how to take the large N limit of this expansion. The relationship to the usual matrix model resolvent is elucidated. Our methods give as a by-product an extremely simple derivation of the Migdal integral equation describing the large N limit of the Itzykson-Zuber formula. We illustrate and check our methods by analysing a number of models solvable by traditional means. We then proceed to solve a new model: a sum over planar graphs possessing even coordination numbers on both the original and the dual lattice. We conclude by formulating equations for the case of arbitrary sets of even, self-dual coupling constants. This opens the way for studying the deep problem of phase transitions from random to flat lattices. (orig.). With 4 figs

  6. INEL cold test pit demonstration of improvements in information derived from non-intrusive geophysical methods over buried waste sites

    International Nuclear Information System (INIS)

    1994-01-01

    Under contract between US DOE Idaho National Engineering Laboratory (INEL) and the Blackhawk Geosciences Division of Coleman Research Corporation (BGD-CRC), geophysical investigations were conducted to improve the detection of buried wastes. Over the Cold Test Pit (CTP) at INEL, data were acquired with multiple sensors on a dense grid. Over the CTP the interpretations inferred from geophysical data are compared with the known placement of various waste forms in the pit. The geophysical sensors employed were magnetics, frequency and time domain electromagnetics, and ground penetrating radar. Also, because of the high data density acquired, filtering and other data processing and imaging techniques were tested. After completion and analysis of the survey and interpretation over the CTP, the second phase of investigation consisted of testing geophysical methods over the Idaho Chemical Processing Plant (ICPP). The sections of the ICPP surveyed are underlain by a complex network of buried utility lines of different dimensions and composition, and with placement at various depths up to 13 ft. Further complications included many metallic objects at the surface, such as buildings, reinforced concrete pads, and debris. Although the multiple geophysical sensor approach mapped many buried utilities, they mapped far from all utilities shown on the facility drawings. This report consists of data collected from these geophysical surveys over the ICPP

  7. INEL cold test pit demonstration of improvements in information derived from non-intrusive geophysical methods over buried waste sites

    International Nuclear Information System (INIS)

    1993-01-01

    The objectives of this research project were to lay the foundation for further improvement in the use of geophysical methods for detection of buried wastes, and to increase the information content derived from surveys. Also, an important goal was to move from mere detection to characterization of buried wastes. The technical approach to achieve these objectives consisted of: (1) collect a data set of high spatial density; (2) acquire data with multiple sensors and integrate the interpretations inferred from the various sensors; (3) test a simplified time domain electromagnetic system; and (4) develop imaging and display formats of geophysical data readily understood by environmental scientists and engineers. The breadth of application of this work is far reaching. Not only are uncontrolled waste pits and trenches, abandoned underground storage tanks, and pipelines found throughout most US DOE facilities, but also at military installations and industrial facilities. Moreover, controlled land disposal sites may contain "hot spots" where drums and hazardous material may have been buried. The technologies addressed by the R&D will benefit all of these activities

  8. ADvanced IMage Algebra (ADIMA): a novel method for depicting multiple sclerosis lesion heterogeneity, as demonstrated by quantitative MRI.

    Science.gov (United States)

    Yiannakas, Marios C; Tozer, Daniel J; Schmierer, Klaus; Chard, Declan T; Anderson, Valerie M; Altmann, Daniel R; Miller, David H; Wheeler-Kingshott, Claudia A M

    2013-05-01

    There are modest correlations between multiple sclerosis (MS) disability and white matter lesion (WML) volumes, as measured by T2-weighted (T2w) magnetic resonance imaging (MRI) scans (T2-WML). This may partly reflect pathological heterogeneity in WMLs, which is not apparent on T2w scans. To determine if ADvanced IMage Algebra (ADIMA), a novel MRI post-processing method, can reveal WML heterogeneity from proton-density weighted (PDw) and T2w images. We obtained conventional PDw and T2w images from 10 patients with relapsing-remitting MS (RRMS) and ADIMA images were calculated from these. We classified all WML into bright (ADIMA-b) and dark (ADIMA-d) sub-regions, which were segmented. We obtained conventional T2-WML and T1-WML volumes for comparison, as well as the following quantitative magnetic resonance parameters: magnetisation transfer ratio (MTR), T1 and T2. Also, we assessed the reproducibility of the segmentation for ADIMA-b, ADIMA-d and T2-WML. Our study's ADIMA-derived volumes correlated with conventional lesion volumes (p < 0.05). ADIMA-b exhibited higher T1 and T2, and lower MTR than the T2-WML (p < 0.001). Despite the similarity in T1 values between ADIMA-b and T1-WML, these regions were only partly overlapping with each other. ADIMA-d exhibited quantitative characteristics similar to T2-WML; however, they were only partly overlapping. Mean intra- and inter-observer coefficients of variation for ADIMA-b, ADIMA-d and T2-WML volumes were all < 6 % and < 10 %, respectively. ADIMA enabled the simple classification of WML into two groups having different quantitative magnetic resonance properties, which can be reproducibly distinguished.

  9. R and D on automatic modeling methods for Monte Carlo codes FLUKA

    International Nuclear Information System (INIS)

    Wang Dianxi; Hu Liqin; Wang Guozhong; Zhao Zijia; Nie Fanzhi; Wu Yican; Long Pengcheng

    2013-01-01

    FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create the geometry models before calculation. However, it is time-consuming and error-prone to describe the geometry models manually. This study developed an automatic modeling method which could automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into the CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)

  10. A model for the training effects in swimming demonstrates a strong relationship between parasympathetic activity, performance and index of fatigue.

    Directory of Open Access Journals (Sweden)

    Sébastien Chalencon

    Full Text Available Competitive swimming as a physical activity results in changes to the activity level of the autonomic nervous system (ANS). However, the precise relationship between ANS activity, fatigue and sports performance remains contentious. To address this problem and build a model to support a consistent relationship, data were gathered from national and regional swimmers during two 30-consecutive-week training periods. Nocturnal ANS activity was measured weekly and quantified through wavelet transform analysis of the recorded heart rate variability. Performance was then measured through a subsequent morning 400 meters freestyle time-trial. A model was proposed in which indices of fatigue were computed using Banister's two antagonistic component model of fatigue and adaptation, applied to both the ANS activity and the performance. This demonstrated that a logarithmic relationship existed between performance and ANS activity for each subject. There was a high degree of model fit between the measured and calculated performance (R² = 0.84 ± 0.14, p < 0.01) and between the measured and calculated high-frequency (HF) power of the ANS activity (R² = 0.79 ± 0.07, p < 0.01). During the taper periods, improvements in measured performance and measured HF power were strongly related. In the model, variations in performance were related to significant reductions in the level of 'Negative Influences' rather than increases in 'Positive Influences'. Furthermore, the delay needed to return to the initial performance level was highly correlated with the delay required to return to the initial HF power level (p < 0.01), and the delay required to reach peak performance was highly correlated with the delay required to reach the maximal level of HF power (p = 0.02). Building the ANS/performance identity of a subject, including the time to peak HF, may help predict the maximal performance that could be obtained at a given time.
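
    For readers unfamiliar with Banister's two antagonistic component model referred to above, the following Python sketch (with made-up constants, not the study's fitted values) shows its usual impulse-response form and the supercompensation that appears during a taper because the "negative" component decays faster:

        import numpy as np

        def banister(loads, p0=400.0, k1=1.0, k2=2.0, tau1=45.0, tau2=15.0):
            # performance = baseline + k1*fitness - k2*fatigue, each component an
            # exponentially weighted sum of past training loads (illustrative constants)
            fitness = fatigue = 0.0
            perf = []
            for w in loads:
                fitness = fitness * np.exp(-1.0 / tau1) + w   # slow "positive" influence
                fatigue = fatigue * np.exp(-1.0 / tau2) + w   # fast "negative" influence
                perf.append(p0 + k1 * fitness - k2 * fatigue)
            return np.array(perf)

        loads = np.concatenate([np.full(150, 5.0), np.zeros(30)])  # training block, then taper
        p = banister(loads)
        print("performance at start of taper:", round(p[149], 1),
              "| peak during taper:", round(p[150:].max(), 1))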

  11. Free wake models for vortex methods

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, K. [Technical Univ. Berlin, Aerospace Inst. (Germany)

    1997-08-01

    The blade element method is fast and works well. For some problems (rotor shapes or flow conditions) it can be better to use vortex methods. Different methods for calculating a wake geometry will be presented. (au)

  12. 3D virtual human rapid modeling method based on top-down modeling mechanism

    Directory of Open Access Journals (Sweden)

    LI Taotao

    2017-01-01

    Full Text Available Aiming to satisfy the vast demand for custom-made 3D virtual human characters and for rapid modeling in the field of 3D virtual reality, a new top-down rapid modeling method for virtual humans is put forward in this paper, based on a systematic analysis of the current state and shortcomings of virtual human modeling technology. After the top-level design of the virtual human hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved gradually downwards, level by level. While the relationships of connectors and mapping constraints among different modules are established, the definition of the size and texture parameters is also completed. A standardized process is produced to support and adapt the practical operation of virtual human top-down rapid modeling. Finally, a modeling application, which takes a Chinese captain character as an example, is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides a new concept for 3D virtual human geometric modeling and texture modeling.

  13. Statistical methods for mechanistic model validation: Salt Repository Project

    International Nuclear Information System (INIS)

    Eggett, D.L.

    1988-07-01

    As part of the Department of Energy's Salt Repository Program, Pacific Northwest Laboratory (PNL) is studying the emplacement of nuclear waste containers in a salt repository. One objective of the SRP program is to develop an overall waste package component model which adequately describes such phenomena as container corrosion, waste form leaching, spent fuel degradation, etc., which are possible in the salt repository environment. The form of this model will be proposed, based on scientific principles and relevant salt repository conditions with supporting data. The model will be used to predict the future characteristics of the near field environment. This involves several different submodels such as the amount of time it takes a brine solution to contact a canister in the repository, how long it takes a canister to corrode and expose its contents to the brine, the leach rate of the contents of the canister, etc. These submodels are often tested in a laboratory and should be statistically validated (in this context, validate means to demonstrate that the model adequately describes the data) before they can be incorporated into the waste package component model. This report describes statistical methods for validating these models. 13 refs., 1 fig., 3 tabs

  14. Modern Methods for Modeling Change in Obesity Research in Nursing.

    Science.gov (United States)

    Sereika, Susan M; Zheng, Yaguang; Hu, Lu; Burke, Lora E

    2017-08-01

    Persons receiving treatment for weight loss often demonstrate heterogeneity in lifestyle behaviors and health outcomes over time. Traditional repeated measures approaches focus on the estimation and testing of an average temporal pattern, ignoring the interindividual variability about the trajectory. An alternate person-centered approach, group-based trajectory modeling, can be used to identify distinct latent classes of individuals following similar trajectories of behavior or outcome change as a function of age or time and can be expanded to include time-invariant and time-dependent covariates and outcomes. Another latent class method, growth mixture modeling, builds on group-based trajectory modeling to investigate heterogeneity within the distinct trajectory classes. In this applied methodologic study, group-based trajectory modeling for analyzing changes in behaviors or outcomes is described and contrasted with growth mixture modeling. An illustration of group-based trajectory modeling is provided using calorie intake data from a single-group, single-center prospective study for weight loss in adults who are either overweight or obese.
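
    As a rough, simplified stand-in for group-based trajectory modeling (the real method fits a latent-class mixture over whole trajectories by maximum likelihood), the Python sketch below summarises each person's simulated calorie-intake series by polynomial growth coefficients and clusters those coefficients with a Gaussian mixture; the data and class structure are invented for illustration.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        t = np.arange(12)                                  # 12 weekly measurements
        groups = [(-2.0, 2200.0), (0.0, 1800.0), (-8.0, 2000.0)]  # (slope, intercept)
        Y = np.vstack([b * t + a + rng.normal(0, 40, t.size)
                       for _ in range(40) for b, a in [groups[rng.integers(3)]]])

        coefs = np.polyfit(t, Y.T, deg=1).T                # per-person slope & intercept
        gm = GaussianMixture(n_components=3, random_state=0).fit(coefs)
        print("trajectory class sizes:", np.bincount(gm.predict(coefs)))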

  15. The Quadrotor Dynamic Modeling and Indoor Target Tracking Control Method

    Directory of Open Access Journals (Sweden)

    Dewei Zhang

    2014-01-01

    Full Text Available A reliable nonlinear dynamic model of the quadrotor is presented. The nonlinear dynamic model includes actuator dynamics and aerodynamic effects. Since the rotors run near a constant hovering speed, the dynamic model is simplified at the hovering operating point. Based on the simplified nonlinear dynamic model, PID controllers with feedback linearization and feedforward control are proposed using the backstepping method. These controllers are used to control both the attitude and the position of the quadrotor. A fully custom quadrotor was developed to verify the correctness of the dynamic model and the control algorithms. The attitude of the quadrotor is measured by an inertial measurement unit (IMU). The position of the quadrotor in a GPS-denied environment, especially an indoor environment, is estimated from downward camera and ultrasonic sensor measurements. The validity and effectiveness of the proposed dynamic model and control algorithms are demonstrated by experimental results. It is shown that the vehicle achieves robust vision-based hovering and moving-target tracking control.
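
    A single-axis toy version of the attitude loop conveys the idea (made-up gains and inertia; not the paper's controller, which adds feedback linearization and feedforward terms derived by backstepping):

        import numpy as np

        I = 0.02                       # assumed roll inertia [kg m^2]
        dt, T = 0.002, 3.0             # time step and horizon [s]
        kp, ki, kd = 6.0, 0.5, 1.2     # illustrative PID gains

        phi = phid = integ = 0.0
        target = np.radians(10.0)      # 10-degree roll setpoint
        for _ in range(int(T / dt)):
            err = target - phi
            integ += err * dt
            torque = kp * err + ki * integ - kd * phid   # PID on angle, rate damping
            phidd = torque / I                           # linearised rigid-body dynamics
            phid += phidd * dt
            phi += phid * dt

        print("final roll error [deg]:", round(np.degrees(target - phi), 3))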

  16. Modelling the decadal trend of ecosystem carbon fluxes demonstrates the important role of functional changes in a temperate deciduous forest

    DEFF Research Database (Denmark)

    Wu, Jian; Jansson, P.E.; van der Linden, Leon

    2013-01-01

    Temperate forests are globally important carbon sinks and stocks. Trends in net ecosystem exchange have been observed in a Danish beech forest and this trend cannot be entirely attributed to changing climatic drivers. This study sought to clarify the mechanisms responsible for the observed trend...... for nitrogen demand during mast years is supported by the inter-annual variability in the estimated parameters. The inter-annual variability of photosynthesis parameters was fundamental to the simulation of the trend in carbon fluxes in the investigated beech forest and this demonstrates the importance......, the latent and sensible heat fluxes and the CO2 fluxes decreased the parameter uncertainty considerably compared to using CO2 fluxes as validation data alone. The fitted model was able to simulate the observed carbon fluxes well (R² = 0.8, mean error = 0.1 g C m⁻² d⁻¹) but did not reproduce the decadal (1997

  17. Biophysical modeling of high field diffusion MRI demonstrates micro-structural aberration in chronic mild stress rat brain

    DEFF Research Database (Denmark)

    Khan, Ahmad Raza; Chuhutin, Andrey; Wiborg, Ove

    2016-01-01

    anhedonia is considered to be a realistic model of depression in studies of animal subjects. Stereological and neuronal tracing techniques have demonstrated persistent remodeling of microstructure in hippocampus, prefrontal cortex and amygdala of CMS brains. Recent developments in diffusion MRI (d...... microstructure in the hippocampus, prefrontal cortex, caudate putamen and amygdala regions of CMS rat brains by comparison to brains from normal controls. To validate findings of CMS-induced microstructural alteration, histology was performed to determine neurite, nuclear and astrocyte density. d-MRI based...... neurite density and tensor-based mean kurtosis (MKT) were significantly higher, while mean diffusivity (MD), extracellular diffusivity (Deff) and intra-neurite diffusivity (DL) were significantly lower in the amygdala of CMS rat brains. Deff was also significantly lower in the hippocampus and caudate...

  18. Variations in virulence of avian pathogenic Escherichia coli demonstrated by the use of a new in vivo infection model

    DEFF Research Database (Denmark)

    Pors, Susanne Elisabeth; Olsen, Rikke Heidemann; Christensen, Jens Peter

    2014-01-01

    , E. coli was found in pure culture from one or more positions in the oviduct and the liver. Birds receiving sterile broth did not culture positive and demonstrated no gross lesions. Subsequently, 19 birds were inoculated with an isolate of E. coli ST95 and 20 birds with an isolate of E. coli ST141....... Major variation in virulence was observed between the two isolates used in relation to clinical signs, gross lesions and histopathology. In contrast to E. coli ST141, E. coli ST95 caused severe clinical signs, epithelial necrosis of the oviduct and purulent salpingitis. The results of the study show...... the potential of the model in studies of the pathogenesis of infections and virulence of bacteria of the oviduct....

  19. Demonstrating the Uneven Importance of Fine-Scale Forest Structure on Snow Distributions using High Resolution Modeling

    Science.gov (United States)

    Broxton, P. D.; Harpold, A. A.; van Leeuwen, W.; Biederman, J. A.

    2016-12-01

    Quantifying the amount of snow in forested mountainous environments, as well as how it may change due to warming and forest disturbance, is critical given its importance for water supply and ecosystem health. Forest canopies affect snow accumulation and ablation in ways that are difficult to observe and model. Furthermore, fine-scale forest structure can accentuate or diminish the effects of forest-snow interactions. Despite decades of research demonstrating the importance of fine-scale forest structure (e.g. canopy edges and gaps) on snow, we still lack a comprehensive understanding of where and when forest structure has the largest impact on snowpack mass and energy budgets. Here, we use a hyper-resolution (1 meter spatial resolution) mass and energy balance snow model called the Snow Physics and Laser Mapping (SnowPALM) model along with LIDAR-derived forest structure to determine where spatial variability of fine-scale forest structure has the largest influence on large scale mass and energy budgets. SnowPALM was set up and calibrated at sites representing diverse climates in New Mexico, Arizona, and California. Then, we compared simulations at different model resolutions (i.e. 1, 10, and 100 m) to elucidate the effects of including versus not including information about fine scale canopy structure. These experiments were repeated for different prescribed topographies (i.e. flat, 30% slope north, and south-facing) at each site. Higher resolution simulations had more snow at lower canopy cover, with the opposite being true at high canopy cover. Furthermore, there is considerable scatter, indicating that different canopy arrangements can lead to different amounts of snow, even when the overall canopy coverage is the same. This modeling is contributing to the development of a high resolution machine learning algorithm called the Snow Water Artificial Network (SWANN) model to generate predictions of snow distributions over much larger domains, which has implications

  20. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

Full Text Available This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image, which requires finding corresponding points between the image and the point cloud. Before the search for corresponding points, a quasi image of the point cloud is generated. The SIFT algorithm is then applied to the quasi image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is construction of the vector object model. Vectorization is performed by an operator in an interactive mode using a single image; spatial coordinates of the model are calculated automatically from the point cloud. In addition, automatic edge detection with interactive editing is available: edges are detected on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
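
    A minimal OpenCV sketch of the matching step described above: SIFT keypoints are detected on the quasi image and the real image, filtered with Lowe's ratio test, and passed to a RANSAC PnP solve for the exterior orientation. The quasi-image rendering, the pixel_to_xyz lookup (mapping quasi-image pixels back to scanned 3D points) and the camera matrix K are assumed inputs here, not part of the published method.

```python
import cv2
import numpy as np

def exterior_orientation(quasi_img, real_img, pixel_to_xyz, K):
    """Estimate the pose of real_img against the scanned point cloud.

    quasi_img    -- intensity rendering of the point cloud (assumed given)
    real_img     -- the photograph to orient
    pixel_to_xyz -- dict: rounded quasi-image pixel -> 3D point (assumed given)
    K            -- 3x3 camera matrix of the photograph
    """
    sift = cv2.SIFT_create()
    kp_q, des_q = sift.detectAndCompute(quasi_img, None)
    kp_r, des_r = sift.detectAndCompute(real_img, None)

    # Lowe's ratio test keeps only unambiguous correspondences.
    matches = cv2.BFMatcher().knnMatch(des_q, des_r, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Every quasi-image keypoint is backed by a 3D point from the laser scan.
    obj_pts, img_pts = [], []
    for m in good:
        px = tuple(int(round(c)) for c in kp_q[m.queryIdx].pt)
        if px in pixel_to_xyz:
            obj_pts.append(pixel_to_xyz[px])
            img_pts.append(kp_r[m.trainIdx].pt)

    # RANSAC PnP yields the exterior orientation (rotation + translation).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(obj_pts), np.float32(img_pts), K, None)
    return rvec, tvec
```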

  1. A transgenic Drosophila model demonstrates that the Helicobacter pylori CagA protein functions as a eukaryotic Gab adaptor.

    Directory of Open Access Journals (Sweden)

    Crystal M Botham

    2008-05-01

    Full Text Available Infection with the human gastric pathogen Helicobacter pylori is associated with a spectrum of diseases including gastritis, peptic ulcers, gastric adenocarcinoma, and gastric mucosa-associated lymphoid tissue lymphoma. The cytotoxin-associated gene A (CagA protein of H. pylori, which is translocated into host cells via a type IV secretion system, is a major risk factor for disease development. Experiments in gastric tissue culture cells have shown that once translocated, CagA activates the phosphatase SHP-2, which is a component of receptor tyrosine kinase (RTK pathways whose over-activation is associated with cancer formation. Based on CagA's ability to activate SHP-2, it has been proposed that CagA functions as a prokaryotic mimic of the eukaryotic Grb2-associated binder (Gab adaptor protein, which normally activates SHP-2. We have developed a transgenic Drosophila model to test this hypothesis by investigating whether CagA can function in a well-characterized Gab-dependent process: the specification of photoreceptors cells in the Drosophila eye. We demonstrate that CagA expression is sufficient to rescue photoreceptor development in the absence of the Drosophila Gab homologue, Daughter of Sevenless (DOS. Furthermore, CagA's ability to promote photoreceptor development requires the SHP-2 phosphatase Corkscrew (CSW. These results provide the first demonstration that CagA functions as a Gab protein within the tissue of an organism and provide insight into CagA's oncogenic potential. Since many translocated bacterial proteins target highly conserved eukaryotic cellular processes, such as the RTK signaling pathway, the transgenic Drosophila model should be of general use for testing the in vivo function of bacterial effector proteins and for identifying the host genes through which they function.

  2. Adult Brtl/+ mouse model of osteogenesis imperfecta demonstrates anabolic response to sclerostin antibody treatment with increased bone mass and strength.

    Science.gov (United States)

    Sinder, B P; White, L E; Salemi, J D; Ominsky, M S; Caird, M S; Marini, J C; Kozloff, K M

    2014-08-01

Treatments to reduce fracture rates in adults with osteogenesis imperfecta are limited. Sclerostin antibody, developed for treating osteoporosis, has not been explored in adults with OI. This study demonstrates that adult OI mice respond favorably to sclerostin antibody therapy despite retention of the OI-causing defect. Osteogenesis imperfecta (OI) is a heritable collagen-related bone dysplasia, characterized by brittle bones with increased fracture risk. Although OI fracture risk is greatest before puberty, adults with OI remain at risk of fracture. Antiresorptive bisphosphonates are commonly used to treat adult OI, but have shown mixed efficacy. New treatments which consistently improve bone mass throughout the skeleton may improve patient outcomes. Neutralizing antibodies to sclerostin (Scl-Ab) are a novel anabolic therapy that has shown efficacy in preclinical studies by stimulating bone formation via the canonical Wnt signaling pathway. The purpose of this study was to evaluate Scl-Ab in an adult, 6-month-old Brtl/+ model of OI that harbors a typical heterozygous OI-causing Gly > Cys substitution in Col1a1. Six-month-old WT and Brtl/+ mice were treated with Scl-Ab (25 mg/kg, 2×/week) or vehicle for 5 weeks. OCN and TRACP5b serum assays, dynamic histomorphometry, microCT and mechanical testing were performed. Adult Brtl/+ mice demonstrated a strong anabolic response to Scl-Ab with increased serum osteocalcin and bone formation rate. This anabolic response led to improved trabecular and cortical bone mass in the femur. Mechanical testing revealed that Scl-Ab increased Brtl/+ femoral stiffness and strength. Scl-Ab was successfully anabolic in an adult Brtl/+ model of OI.

  3. Huffman and linear scanning methods with statistical language models.

    Science.gov (United States)

    Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris

    2015-03-01

    Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
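
    The core idea, building a Huffman code from a language model's next-character probabilities so that likely characters need fewer switch activations, can be sketched in a few lines of Python. The probabilities below are invented unigram values; a real system would query its statistical language model at each keystroke.

```python
import heapq
import itertools

def huffman_codes(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> bit string."""
    counter = itertools.count()  # tie-breaker so heapq never compares dicts
    heap = [(p, next(counter), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # least probable pair merges first
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

# The scanner highlights the "0" group first; a switch hit selects it,
# otherwise scanning descends the "1" side of the tree.
codes = huffman_codes({"e": 0.4, "t": 0.3, "a": 0.2, "q": 0.1})
print(codes)  # {'e': '0', 't': '10', 'a': '110', 'q': '111'}
```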

  4. Statistical Method to Overcome Overfitting Issue in Rational Function Models

    Science.gov (United States)

    Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.

    2017-09-01

Rational function models (RFMs) are among the most appealing models and are extensively applied in the geometric correction of satellite images and map production. Overfitting is a common issue for terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, which results from the large number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study, a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used remedy for RFM overfitting. In the proposed method, a statistical significance test is applied to search for the RFM parameters that are resistant to the overfitting issue. The performance of the proposed method was evaluated on two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy; indeed, the technique shows an improvement of 50-80% over TR.
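
    A hedged sketch of the statistical idea: fit the linearized RFM coefficients by least squares, score each coefficient with a t-statistic, and refit using only the terms that pass the significance test, rather than damping all terms as Tikhonov regularization does. The design matrix A of RFM monomial terms and the observations y are assumed given; this is a generic significance-test selection, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def significant_fit(A, y, alpha=0.05):
    """Least-squares fit keeping only coefficients that pass a t-test.

    A -- (n, p) design matrix of RFM monomial terms (assumed given)
    y -- (n,) observed image/object-space offsets
    """
    n, p = A.shape
    coef = np.linalg.lstsq(A, y, rcond=None)[0]
    resid = y - A @ coef
    sigma2 = resid @ resid / (n - p)          # residual variance
    cov = sigma2 * np.linalg.pinv(A.T @ A)    # coefficient covariance
    t = coef / np.sqrt(np.diag(cov))          # t-statistic per parameter
    keep = np.abs(t) > stats.t.ppf(1 - alpha / 2, n - p)

    # Refit with the significant terms only; the rest stay at zero.
    coef_sig = np.zeros(p)
    coef_sig[keep] = np.linalg.lstsq(A[:, keep], y, rcond=None)[0]
    return coef_sig, keep
```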

  5. Dynamic metabolism modelling of urban water services--demonstrating effectiveness as a decision-support tool for Oslo, Norway.

    Science.gov (United States)

    Venkatesh, G; Sægrov, Sveinung; Brattebø, Helge

    2014-09-15

    Urban water services are challenged from many perspectives and different stakeholders demand performance improvements along economic, social and environmental dimensions of sustainability. In response, urban water utilities systematically give more attention to criteria such as water safety, climate change adaptation and mitigation, environmental life cycle assessment (LCA), total cost efficiency, and on how to improve their operations within the water-energy-carbon nexus. The authors of this paper collaborated in the development of a 'Dynamic Metabolism Model' (DMM). The model is developed for generic use in the sustainability assessment of urban water services, and it has been initially tested for the city of Oslo, Norway. The purpose has been to adopt a holistic systemic perspective to the analysis of metabolism and environmental impacts of resource flows in urban water and wastewater systems, in order to offer a tool for the examination of future strategies and intervention options in such systems. This paper describes the model and its application to the city of Oslo for the analysis time period 2013-2040. The external factors impacting decision-making and interventions are introduced along with realistic scenarios developed for the testing, after consultation with officials at the Oslo Water and Wastewater Works (Norway). Possible interventions that the utility intends to set in motion are defined and numerically interpreted for incorporation into the model, and changes in the indicator values over the time period are determined. This paper aims to demonstrate the effectiveness and usefulness of the DMM, as a decision-support tool for water-wastewater utilities. The scenarios considered and interventions identified do not include all possible scenarios and interventions that can be relevant for water-wastewater utilities. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Box-Counting Method of 2D Neuronal Image: Method Modification and Quantitative Analysis Demonstrated on Images from the Monkey and Human Brain

    Directory of Open Access Journals (Sweden)

    Nemanja Rajković

    2017-01-01

Full Text Available This study calls attention to the difference between the traditional box-counting method and its modification. The appropriate scaling factor, the influence of image size, resolution and rotation, and different image presentations are shown using a sample of asymmetrical neurons from the monkey dentate nucleus. The standard BC method and its modification were evaluated on a sample of 2D neuronal images from the human neostriatum. In addition, three box dimensions (which estimate the space-filling property, the shape, complexity, and the irregularity of the dendritic tree) were used to evaluate differences in the morphology of type III aspiny neurons between two parts of the neostriatum.

  7. Box-Counting Method of 2D Neuronal Image: Method Modification and Quantitative Analysis Demonstrated on Images from the Monkey and Human Brain.

    Science.gov (United States)

    Rajković, Nemanja; Krstonošić, Bojana; Milošević, Nebojša

    2017-01-01

This study calls attention to the difference between the traditional box-counting method and its modification. The appropriate scaling factor, the influence of image size, resolution and rotation, and different image presentations are shown using a sample of asymmetrical neurons from the monkey dentate nucleus. The standard BC method and its modification were evaluated on a sample of 2D neuronal images from the human neostriatum. In addition, three box dimensions (which estimate the space-filling property, the shape, complexity, and the irregularity of the dendritic tree) were used to evaluate differences in the morphology of type III aspiny neurons between two parts of the neostriatum.
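
    For illustration, a standard (unmodified) box-counting estimate of the box dimension of a binarized 2D neuron image can be written as follows; the authors' modified method differs in details such as the choice of scaling factor, which this sketch does not reproduce.

```python
import numpy as np

def box_dimension(img, sizes=(2, 4, 8, 16, 32, 64)):
    """img: 2D boolean array (True = neuron pixels). Returns box dimension."""
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly into s x s boxes, then count the
        # boxes containing at least one foreground pixel.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    # Box dimension = negative slope of log N(s) against log s.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```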

  8. A novel human model of the neurodegenerative disease GM1 gangliosidosis using induced pluripotent stem cells demonstrates inflammasome activation.

    Science.gov (United States)

    Son, Mi-Young; Kwak, Jae Eun; Seol, Binna; Lee, Da Yong; Jeon, Hyejin; Cho, Yee Sook

    2015-09-01

    GM1 gangliosidosis (GM1) is an inherited neurodegenerative disorder caused by mutations in the lysosomal β-galactosidase (β-gal) gene. Insufficient β-gal activity leads to abnormal accumulation of GM1 gangliosides in tissues, particularly in the central nervous system, resulting in progressive neurodegeneration. Here, we report an in vitro human GM1 model, based on induced pluripotent stem cell (iPSC) technology. Neural progenitor cells differentiated from GM1 patient-derived iPSCs (GM1-NPCs) recapitulated the biochemical and molecular phenotypes of GM1, including defective β-gal activity and increased lysosomes. Importantly, the characterization of GM1-NPCs established that GM1 is significantly associated with the activation of inflammasomes, which play a critical role in the pathogenesis of various neurodegenerative diseases. Specific inflammasome inhibitors potently alleviated the disease-related phenotypes of GM1-NPCs in vitro and in vivo. Our data demonstrate that GM1-NPCs are a valuable in vitro human GM1 model and suggest that inflammasome activation is a novel target pathway for GM1 drug development. Copyright © 2015 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.

  9. Systems-level computational modeling demonstrates fuel selection switching in high capacity running and low capacity running rats

    Science.gov (United States)

    Qi, Nathan R.

    2018-01-01

    High capacity and low capacity running rats, HCR and LCR respectively, have been bred to represent two extremes of running endurance and have recently demonstrated disparities in fuel usage during transient aerobic exercise. HCR rats can maintain fatty acid (FA) utilization throughout the course of transient aerobic exercise whereas LCR rats rely predominantly on glucose utilization. We hypothesized that the difference between HCR and LCR fuel utilization could be explained by a difference in mitochondrial density. To test this hypothesis and to investigate mechanisms of fuel selection, we used a constraint-based kinetic analysis of whole-body metabolism to analyze transient exercise data from these rats. Our model analysis used a thermodynamically constrained kinetic framework that accounts for glycolysis, the TCA cycle, and mitochondrial FA transport and oxidation. The model can effectively match the observed relative rates of oxidation of glucose versus FA, as a function of ATP demand. In searching for the minimal differences required to explain metabolic function in HCR versus LCR rats, it was determined that the whole-body metabolic phenotype of LCR, compared to the HCR, could be explained by a ~50% reduction in total mitochondrial activity with an additional 5-fold reduction in mitochondrial FA transport activity. Finally, we postulate that over sustained periods of exercise that LCR can partly overcome the initial deficit in FA catabolic activity by upregulating FA transport and/or oxidation processes. PMID:29474500

  10. On Lack of Robustness in Hydrological Model Development Due to Absence of Guidelines for Selecting Calibration and Evaluation Data: Demonstration for Data-Driven Models

    Science.gov (United States)

    Zheng, Feifei; Maier, Holger R.; Wu, Wenyan; Dandy, Graeme C.; Gupta, Hoshin V.; Zhang, Tuqiao

    2018-02-01

    Hydrological models are used for a wide variety of engineering purposes, including streamflow forecasting and flood-risk estimation. To develop such models, it is common to allocate the available data to calibration and evaluation data subsets. Surprisingly, the issue of how this allocation can affect model evaluation performance has been largely ignored in the research literature. This paper discusses the evaluation performance bias that can arise from how available data are allocated to calibration and evaluation subsets. As a first step to assessing this issue in a statistically rigorous fashion, we present a comprehensive investigation of the influence of data allocation on the development of data-driven artificial neural network (ANN) models of streamflow. Four well-known formal data splitting methods are applied to 754 catchments from Australia and the U.S. to develop 902,483 ANN models. Results clearly show that the choice of the method used for data allocation has a significant impact on model performance, particularly for runoff data that are more highly skewed, highlighting the importance of considering the impact of data splitting when developing hydrological models. The statistical behavior of the data splitting methods investigated is discussed and guidance is offered on the selection of the most appropriate data splitting methods to achieve representative evaluation performance for streamflow data with different statistical properties. Although our results are obtained for data-driven models, they highlight the fact that this issue is likely to have a significant impact on all types of hydrological models, especially conceptual rainfall-runoff models.
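
    The allocation effect is easy to reproduce on synthetic data: with a heavily skewed "runoff" series, a purely random calibration/evaluation split can misrepresent the flow regime, while a simple rank-stratified split keeps the two subsets statistically similar. This toy sketch is not one of the four formal splitting methods evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
runoff = rng.lognormal(mean=0.0, sigma=1.5, size=1000)  # heavy right tail

# Random allocation: the two subsets can differ badly in their extremes.
idx = rng.permutation(runoff.size)
cal_r, ev_r = runoff[idx[:700]], runoff[idx[700:]]

# Rank-stratified allocation: sort, then deal every 3rd value to evaluation,
# so both subsets span the full flow-duration curve.
order = np.argsort(runoff)
ev_mask = np.zeros(runoff.size, dtype=bool)
ev_mask[order[::3]] = True
cal_s, ev_s = runoff[~ev_mask], runoff[ev_mask]

for name, cal, ev in [("random", cal_r, ev_r), ("stratified", cal_s, ev_s)]:
    print(f"{name:>10}: cal mean {cal.mean():.2f} / max {cal.max():.1f}, "
          f"eval mean {ev.mean():.2f} / max {ev.max():.1f}")
```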

  11. Industry Application ECCS / LOCA Integrated Cladding/Emergency Core Cooling System Performance: Demonstration of LOTUS-Baseline Coupled Analysis of the South Texas Plant Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Hongbin [Idaho National Lab. (INL), Idaho Falls, ID (United States); Szilard, Ronaldo [Idaho National Lab. (INL), Idaho Falls, ID (United States); Epiney, Aaron [Idaho National Lab. (INL), Idaho Falls, ID (United States); Parisi, Carlo [Idaho National Lab. (INL), Idaho Falls, ID (United States); Vaghetto, Rodolfo [Texas A & M Univ., College Station, TX (United States); Vanni, Alessandro [Texas A & M Univ., College Station, TX (United States); Neptune, Kaleb [Texas A & M Univ., College Station, TX (United States)

    2017-06-01

Under the auspices of the DOE LWRS Program RISMC Industry Application ECCS/LOCA, INL has engaged staff from both the South Texas Project (STP) and Texas A&M University (TAMU) to produce a generic pressurized water reactor (PWR) model, including reactor core, clad/fuel design and systems thermal hydraulics, based on the STP nuclear power plant, a 4-loop Westinghouse PWR. A RISMC toolkit, named LOCA Toolkit for the U.S. (LOTUS), has been developed for use in this generic PWR plant model to assess safety margins under the proposed NRC 10 CFR 50.46c rule on Emergency Core Cooling System (ECCS) performance during LOCA. This demonstration includes coupled analysis of core design, fuel design, thermal hydraulics and systems analysis, using advanced risk analysis tools and methods to investigate a wide range of results. Within this context, a multi-physics best estimate plus uncertainty (MPBEPU) methodology framework is proposed.

  12. Model reduction methods for vector autoregressive processes

    CERN Document Server

    Brüggemann, Ralf

    2004-01-01

1.1 Objective of the Study. Vector autoregressive (VAR) models have become one of the dominant research tools in the analysis of macroeconomic time series during the last two decades. The great success of this modeling class started with Sims' (1980) critique of the traditional simultaneous equation models (SEM). Sims criticized the use of 'too many incredible restrictions' based on 'supposed a priori knowledge' in large scale macroeconometric models which were popular at that time. Therefore, he advocated largely unrestricted reduced form multivariate time series models, unrestricted VAR models in particular. Ever since his influential paper these models have been employed extensively to characterize the underlying dynamics in systems of time series. In particular, tools to summarize the dynamic interaction between the system variables, such as impulse response analysis or forecast error variance decompositions, have been developed over the years. The econometrics of VAR models and related quantities i...
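
    As a concrete illustration of the tools described above, an unrestricted VAR can be fitted in a few lines with statsmodels, including lag-order selection, impulse responses and forecast error variance decompositions; the two-variable data set here is synthetic.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic two-variable system generated from a known, stable VAR(1).
rng = np.random.default_rng(1)
A = np.array([[0.5, 0.2],
              [-0.3, 0.4]])
y = np.zeros((200, 2))
for t in range(1, 200):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)
data = pd.DataFrame(y, columns=["y1", "y2"])

fit = VAR(data).fit(maxlags=4, ic="aic")  # lag order selected by AIC
irf = fit.irf(10)                         # impulse response functions
fevd = fit.fevd(10)                       # forecast error variance decomposition
print(fit.summary())
```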

  13. A business case method for business models

    NARCIS (Netherlands)

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model

  14. A pilot Virtual Observatory (pVO) for integrated catchment science - Demonstration of national scale modelling of hydrology and biogeochemistry (Invited)

    Science.gov (United States)

    Freer, J. E.; Bloomfield, J. P.; Johnes, P. J.; MacLeod, C.; Reaney, S.

    2010-12-01

    There are many challenges in developing effective and integrated catchment management solutions for hydrology and water quality issues. Such solutions should ideally build on current scientific evidence to inform policy makers and regulators and additionally allow stakeholders to take ownership of local and/or national issues, in effect bringing together ‘communities of practice’. A strategy being piloted in the UK as the Pilot Virtual Observatory (pVO), funded by NERC, is to demonstrate the use of cyber-infrastructure and cloud computing resources to investigate better methods of linking data and models and to demonstrate scenario analysis for research, policy and operational needs. The research will provide new ways the scientific and stakeholder communities come together to exploit current environmental information, knowledge and experience in an open framework. This poster presents the project scope and methodologies for the pVO work dealing with national modelling of hydrology and macro-nutrient biogeochemistry. We evaluate the strategies needed to robustly benchmark our current predictive capability of these resources through ensemble modelling. We explore the use of catchment similarity concepts to understand if national monitoring programs can inform us about the behaviour of catchments. We discuss the challenges to applying these strategies in an open access and integrated framework and finally we consider the future for such virtual observatory platforms for improving the way we iteratively improve our understanding of catchment science.

  15. A deformable-model approach to semi-automatic segmentation of CT images demonstrated by application to the spinal canal

    International Nuclear Information System (INIS)

    Burnett, Stuart S.C.; Starkschall, George; Stevens, Craig W.; Liao Zhongxing

    2004-01-01

    Because of the importance of accurately defining the target in radiation treatment planning, we have developed a deformable-template algorithm for the semi-automatic delineation of normal tissue structures on computed tomography (CT) images. We illustrate the method by applying it to the spinal canal. Segmentation is performed in three steps: (a) partial delineation of the anatomic structure is obtained by wavelet-based edge detection; (b) a deformable-model template is fitted to the edge set by chamfer matching; and (c) the template is relaxed away from its original shape into its final position. Appropriately chosen ranges for the model parameters limit the deformations of the template, accounting for interpatient variability. Our approach differs from those used in other deformable models in that it does not inherently require the modeling of forces. Instead, the spinal canal was modeled using Fourier descriptors derived from four sets of manually drawn contours. Segmentation was carried out, without manual intervention, on five CT data sets and the algorithm's performance was judged subjectively by two radiation oncologists. Two assessments were considered: in the first, segmentation on a random selection of 100 axial CT images was compared with the corresponding contours drawn manually by one of six dosimetrists, also chosen randomly; in the second assessment, the segmentation of each image in the five evaluable CT sets (a total of 557 axial images) was rated as either successful, unsuccessful, or requiring further editing. Contours generated by the algorithm were more likely than manually drawn contours to be considered acceptable by the oncologists. The mean proportions of acceptable contours were 93% (automatic) and 69% (manual). Automatic delineation of the spinal canal was deemed to be successful on 91% of the images, unsuccessful on 2% of the images, and requiring further editing on 7% of the images. Our deformable template algorithm thus gives a robust
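
    The chamfer-matching step (b) can be sketched with a distance transform: the score of a candidate template placement is the mean distance from template points to the nearest detected edge, and the best translation minimizes that score. The wavelet edge detection (a) and template relaxation (c) are omitted; edge_map and template_pts are assumed inputs.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def best_translation(edge_map, template_pts, search=10):
    """edge_map: 2D bool array, True at detected edges.
    template_pts: (N, 2) array of row/col template coordinates."""
    dist = distance_transform_edt(~edge_map)  # distance to nearest edge pixel
    h, w = dist.shape
    best = (np.inf, (0, 0))
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            pts = np.round(template_pts + [dr, dc]).astype(int)
            pts[:, 0] = np.clip(pts[:, 0], 0, h - 1)
            pts[:, 1] = np.clip(pts[:, 1], 0, w - 1)
            score = dist[pts[:, 0], pts[:, 1]].mean()  # mean chamfer distance
            if score < best[0]:
                best = (score, (dr, dc))
    return best  # (best score, (row shift, col shift))
```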

  16. Innovation in health economic modelling of service improvements for longer-term depression: demonstration in a local health community.

    Science.gov (United States)

    Tosh, Jonathan; Kearns, Ben; Brennan, Alan; Parry, Glenys; Ricketts, Thomas; Saxon, David; Kilgarriff-Foster, Alexis; Thake, Anna; Chambers, Eleni; Hutten, Rebecca

    2013-04-26

The purpose of the analysis was to develop a health economic model to estimate the costs and health benefits of alternative National Health Service (NHS) service configurations for people with longer-term depression. Modelling methods were used to develop a conceptual and health economic model of the current configuration of services in Sheffield, England, for people with longer-term depression. Data and assumptions were synthesised to estimate the cost per Quality-Adjusted Life Year (QALY). Three service changes were developed; each resulted in increased QALYs at increased cost. Versus current care, the incremental cost-effectiveness ratio (ICER) for a self-referral service was £11,378 per QALY. The ICER was £2,227 per QALY for the dropout reduction service and £223 per QALY for an increase in non-therapy services. These results were robust when compared to current cost-effectiveness thresholds and when accounting for uncertainty. Cost-effective service improvements for longer-term depression have been identified, as have limitations of the current evidence for the long-term impact of services.
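
    The ICERs quoted above are simply incremental cost divided by incremental QALYs. A one-line check with hypothetical numbers (the model's actual cost and QALY totals are not given here) reproduces the scale of the self-referral figure:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical totals: a service costing £56,890 more across the cohort
# while adding 5 QALYs gives the quoted £11,378 per QALY.
print(icer(cost_new=156_890, qaly_new=105, cost_old=100_000, qaly_old=100))
# -> 11378.0
```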

  17. Demonstration of the Recent Additions in Modeling Capabilities for the WEC-Sim Wave Energy Converter Design Tool: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tom, N.; Lawson, M.; Yu, Y. H.

    2015-03-01

WEC-Sim is a mid-fidelity numerical tool for modeling wave energy conversion (WEC) devices. The code uses the MATLAB SimMechanics package to solve the multi-body dynamics and models the wave interactions using hydrodynamic coefficients derived from frequency domain boundary element methods. In this paper, the new modeling features introduced in the latest release of WEC-Sim will be presented. The first feature discussed is the conversion of the fluid memory kernel to a state-space approximation that provides significant gains in computational speed. The benefit of the state-space calculation becomes even greater after the hydrodynamic body-to-body coefficients are introduced, as the number of interactions increases exponentially with the number of floating bodies. The final feature discussed is the capability to add Morison elements to provide additional hydrodynamic damping and inertia. This is generally used as a tuning feature, because performance is highly dependent on the chosen coefficients. In this paper, a review of the hydrodynamic theory for each of the features is provided and successful implementation is verified using test cases.

  18. How Qualitative Methods Can be Used to Inform Model Development.

    Science.gov (United States)

    Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna

    2017-06-01

    Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.

  19. Creating a testing field where delta technology and water innovations are tested and demonstrated with the help of citizen science methods

    Science.gov (United States)

    de Vries, Sandra; Rutten, Martine; de Vries, Liselotte; Anema, Kim; Klop, Tanja; Kaspersma, Judith

    2017-04-01

In highly populated deltas, much work is to be done. Complex problems ask for new and knowledge-driven solutions. Innovations in delta technology and water can bring relief to managing the water-rich urban areas. Testing fields form a fundamental part of the knowledge valorisation for such innovations. In such testing fields, product development by start-ups is coupled with researchers, thus supplying new scientific insights. With the help of tests, demonstrations and large-scale applications by the end-users, these innovations find their way to the daily practices of delta management. More and more cities embrace the concept of Smart Cities to tackle the ongoing complexity of urban problems and to manage the city's assets - such as its water supply networks and other water management infrastructure. Through the use of new technologies and innovative systems, data are collected from and with citizens and devices - then processed and analysed. The information and knowledge gathered are keys to enabling a better quality of life. By testing water innovations together with citizens in order to find solutions for water management problems, not only are highly spatial amounts of data provided by and/or about these innovations, the innovations are also improved and demonstrated to the public. A consortium consisting of a water authority, a science centre, a valorisation program and two universities have joined forces to create a testing field for delta technology and water innovations using citizen science methods. In this testing field, the use of citizen science for water technologies is researched and validated by facilitating pilot projects. In these projects, researchers, start-ups and citizens work together to find the answer to present-day water management problems. The above mentioned testing field tests the use of crowd-sourced data as, for example, hydrological model inputs, to validate remote sensing applications, or to improve water management decisions. Currently the

  20. Organic Tank Safety Project: development of a method to measure the equilibrium water content of Hanford organic tank wastes and demonstration of method on actual waste

    International Nuclear Information System (INIS)

    Scheele, R.D.; Bredt, P.R.; Sell, R.L.

    1996-09-01

Some of Hanford's underground waste storage tanks contain organic-bearing high level wastes that are high priority safety issues because of potentially hazardous chemical reactions of organics with inorganic oxidants in these wastes such as nitrates and nitrites. To ensure continued safe storage of these wastes, Westinghouse Hanford Company has placed affected tanks on the Organic Watch List and manages them under special rules. Because water content has been identified as the most efficient agent for preventing a propagating reaction and is an integral part of the criteria developed to ensure continued safe storage of Hanford's organic-bearing radioactive tank wastes, as part of the Organic Tank Safety Program the Pacific Northwest National Laboratory developed and demonstrated a simple and easily implemented procedure to determine the equilibrium water content of these potentially reactive wastes exposed to the range of water vapor pressures that might be experienced during the wastes' future storage. This work focused on the equilibrium water content and did not investigate the various factors such as ventilation, tank surface area, and waste porosity that control the rate at which the waste would come into equilibrium with either the average Hanford water partial pressure (5.5 torr) or other possible water partial pressures.

  1. Organic Tank Safety Project: development of a method to measure the equilibrium water content of Hanford organic tank wastes and demonstration of method on actual waste

    Energy Technology Data Exchange (ETDEWEB)

    Scheele, R.D.; Bredt, P.R.; Sell, R.L.

    1996-09-01

Some of Hanford's underground waste storage tanks contain organic-bearing high level wastes that are high priority safety issues because of potentially hazardous chemical reactions of organics with inorganic oxidants in these wastes such as nitrates and nitrites. To ensure continued safe storage of these wastes, Westinghouse Hanford Company has placed affected tanks on the Organic Watch List and manages them under special rules. Because water content has been identified as the most efficient agent for preventing a propagating reaction and is an integral part of the criteria developed to ensure continued safe storage of Hanford's organic-bearing radioactive tank wastes, as part of the Organic Tank Safety Program the Pacific Northwest National Laboratory developed and demonstrated a simple and easily implemented procedure to determine the equilibrium water content of these potentially reactive wastes exposed to the range of water vapor pressures that might be experienced during the wastes' future storage. This work focused on the equilibrium water content and did not investigate the various factors such as ventilation, tank surface area, and waste porosity that control the rate at which the waste would come into equilibrium with either the average Hanford water partial pressure (5.5 torr) or other possible water partial pressures.

  2. Dynamic spatial panels : models, methods, and inferences

    NARCIS (Netherlands)

    Elhorst, J. Paul

    This paper provides a survey of the existing literature on the specification and estimation of dynamic spatial panel data models, a collection of models for spatial panels extended to include one or more of the following variables and/or error terms: a dependent variable lagged in time, a dependent

  3. Methods of Medical Guidelines Modelling in GLIF.

    Czech Academy of Sciences Publication Activity Database

    Buchtela, David; Anger, Z.; Peleška, Jan (ed.); Tomečková, Marie; Veselý, Arnošt; Zvárová, Jana

    2005-01-01

Roč. 11, - (2005), s. 1529-1532 ISSN 1727-1983. [EMBEC'05. European Medical and Biomedical Conference /3./. Prague, 20.11.2005-25.11.2005] Institutional research plan: CEZ:AV0Z10300504 Keywords: medical guidelines * knowledge modelling * GLIF model Subject RIV: BD - Theory of Information

  4. Report for the ASC CSSE L2 Milestone (4873) - Demonstration of Local Failure Local Recovery Resilient Programming Model.

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen; Teranishi, Keita

    2014-06-01

Recovery from process loss during the execution of a distributed memory parallel application is presently achieved by restarting the program, typically from a checkpoint file. Future computer system trends indicate that the size of data to checkpoint, the lack of improvement in parallel file system performance and the increase in process failure rates will lead to situations where checkpoint restart becomes infeasible. In this report we describe and prototype the use of a new application level resilient computing model that manages persistent storage of local state for each process such that, if a process fails, recovery can be performed locally without requiring access to a global checkpoint file. LFLR provides application developers with an ability to recover locally and continue application execution when a process is lost. This report discusses what features are required from the hardware, OS and runtime layers, and what approaches application developers might use in the design of future codes, including a demonstration of the LFLR-enabled MiniFE code from the Mantevo mini-application suite.

  5. A new assessment method for demonstrating the sufficiency of the safety assessment and the safety margins of the geological disposal system

    International Nuclear Information System (INIS)

    Ohi, Takao; Kawasaki, Daisuke; Chiba, Tamotsu; Takase, Toshio; Hane, Koji

    2013-01-01

    A new method for demonstrating the sufficiency of the safety assessment and safety margins of the geological disposal system has been developed. The method is based on an existing comprehensive sensitivity analysis method and can systematically identify the successful conditions, under which the dose rate does not exceed specified safety criteria, using analytical solutions for nuclide migration and the results of a statistical analysis. The successful conditions were identified using three major variables. Furthermore, the successful conditions at the level of factors or parameters were obtained using relational equations between the variables and the factors or parameters making up these variables. In this study, the method was applied to the safety assessment of the geological disposal of transuranic waste in Japan. Based on the system response characteristics obtained from analytical solutions and on the successful conditions, the classification of the analytical conditions, the sufficiency of the safety assessment and the safety margins of the disposal system were then demonstrated. A new assessment procedure incorporating this method into the existing safety assessment approach is proposed in this study. Using this procedure, it is possible to conduct a series of safety assessment activities in a logical manner. (author)

  6. Fluid Methods for Modeling Large, Heterogeneous Networks

    National Research Council Canada - National Science Library

    Towsley, Don; Gong, Weibo; Hollot, Kris; Liu, Yong; Misra, Vishal

    2005-01-01

    .... The resulting fluid models were used to develop novel active queue management mechanisms resulting in more stable TCP performance and novel rate controllers for the purpose of providing minimum rate...

  7. Combining static and dynamic modelling methods: a comparison of four methods

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1995-01-01

A conceptual model of a system is an explicit description of the behaviour required of the system. Methods for conceptual modelling include entity-relationship (ER) modelling, data flow modelling, Jackson System Development (JSD) and several object-oriented analysis methods. Given the current

  8. Method of computer generation and projection recording of microholograms for holographic memory systems: mathematical modelling and experimental implementation

    International Nuclear Information System (INIS)

    Betin, A Yu; Bobrinev, V I; Evtikhiev, N N; Zherdev, A Yu; Zlokazov, E Yu; Lushnikov, D S; Markin, V V; Odinokov, S B; Starikov, S N; Starikov, R S

    2013-01-01

    A method of computer generation and projection recording of microholograms for holographic memory systems is presented; the results of mathematical modelling and experimental implementation of the method are demonstrated. (holographic memory)

  9. Modeling Nanoscale FinFET Performance by a Neural Network Method

    Directory of Open Access Journals (Sweden)

    Jin He

    2017-07-01

Full Text Available This paper presents a neural network method to model nanometer FinFET performance. The principle of the method is first introduced, and its application to modeling the DC and conductance characteristics of a nanoscale FinFET transistor is demonstrated in detail. It is shown that the method needs no parameter extraction routine, while its prediction of the transistor performance has a small relative error, within 1% of measured data; the new method is thus as accurate as the physics-based surface potential model.
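
    A hedged sketch of the approach: a small feedforward network fitted to (Vgs, Vds) -> Ids samples stands in for the paper's network. The "measured" data below is a synthetic surrogate, and scikit-learn replaces whatever framework the authors used.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
vgs = rng.uniform(0.0, 1.0, 2000)
vds = rng.uniform(0.0, 1.0, 2000)
# Toy square-law surrogate standing in for measured drain current.
ids = np.where(vds < vgs, (vgs - 0.2) * vds - 0.5 * vds**2,
               0.5 * (vgs - 0.2) ** 2)
ids = np.clip(ids, 0.0, None) + rng.normal(scale=1e-3, size=ids.size)

X = StandardScaler().fit_transform(np.column_stack([vgs, vds]))
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X[:1500], ids[:1500])          # train on 1500 bias points

pred = net.predict(X[1500:])           # evaluate on held-out bias points
rel_err = np.abs(pred - ids[1500:]).mean() / np.abs(ids[1500:]).mean()
print(f"mean relative error on held-out bias points: {rel_err:.3%}")
```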

  10. Accurate Electromagnetic Modeling Methods for Integrated Circuits

    NARCIS (Netherlands)

    Sheng, Z.

    2010-01-01

    The present development of modern integrated circuits (IC’s) is characterized by a number of critical factors that make their design and verification considerably more difficult than before. This dissertation addresses the important questions of modeling all electromagnetic behavior of features on

  11. Reduced Order Modeling Methods for Turbomachinery Design

    Science.gov (United States)

    2009-03-01


  12. Introduction to mathematical models and methods

    Energy Technology Data Exchange (ETDEWEB)

    Siddiqi, A. H.; Manchanda, P. [Gautam Budha University, Gautam Budh Nagar-201310 (India); Department of Mathematics, Guru Nanak Dev University, Amritsar (India)

    2012-07-17

Some well-known mathematical models in the form of partial differential equations representing real-world systems are introduced along with fundamental concepts of Image Processing. Notions such as seismic texture, seismic attributes, core data, well logging, seismic tomography and reservoir simulation are discussed.

  13. A catalog of automated analysis methods for enterprise models.

    Science.gov (United States)

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, this process can be complicated and omissions or miscalculations are very likely. This situation has fostered research on automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the work of compiling (literature review), classifying, structuring, and characterizing automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.

  14. Computational Methods for Physical Model Information Management: Opening the Aperture

    International Nuclear Information System (INIS)

    Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.

    2015-01-01

The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel to the growth in data, technologies have been developed to efficiently process and store information, and to effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (Safeguards' standard reference for the Nuclear Fuel Cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information with the appropriate process step in the Physical Model. The paper will detail how this capability will consume open source and controlled information sources, be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission focused applications. (author)

  15. Predicting Rehabilitation Success Rate Trends among Ethnic Minorities Served by State Vocational Rehabilitation Agencies: A National Time Series Forecast Model Demonstration Study

    Science.gov (United States)

    Moore, Corey L.; Wang, Ningning; Washington, Janique Tynez

    2017-01-01

    Purpose: This study assessed and demonstrated the efficacy of two select empirical forecast models (i.e., autoregressive integrated moving average [ARIMA] model vs. grey model [GM]) in accurately predicting state vocational rehabilitation agency (SVRA) rehabilitation success rate trends across six different racial and ethnic population cohorts…
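
    The ARIMA side of such a comparison can be sketched with statsmodels; the annual success-rate series below is invented for illustration only.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Invented annual rehabilitation success rates (percent).
rates = pd.Series(
    [52.1, 53.4, 51.8, 54.0, 55.2, 54.7, 56.1, 57.0, 56.4, 58.2],
    index=pd.period_range("2005", periods=10, freq="Y"),
)

fit = ARIMA(rates, order=(1, 1, 0)).fit()  # AR(1) on first differences
print(fit.forecast(steps=3))               # predicted rates, next three years
```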

  16. Modeling Storm Surges Using Discontinuous Galerkin Methods

    Science.gov (United States)

    2016-06-01

layer non-reflecting boundary condition (NRBC) on the right wall of the model. An NRBC introduces an artificial boundary, B, which truncates the ... closer to the shoreline. In our simulation, we also learned of the effects spurious waves can have on the results. Due to boundary conditions, a

  17. A modeling method of semiconductor fabrication flows with extended knowledge hybrid Petri nets

    Institute of Scientific and Technical Information of China (English)

    Zhou Binghai; Jiang Shuyu; Wang Shijin; Wu bin

    2008-01-01

A modeling method of extended knowledge hybrid Petri nets (EKHPNs), incorporating object-oriented methods into hybrid Petri nets (HPNs), was presented and used for the representation and modeling of semiconductor wafer fabrication flows. To model the discrete and continuous parts of a complex semiconductor wafer fabrication flow, HPNs were introduced into the EKHPNs. Object-oriented methods were incorporated into the EKHPNs to cope with the complexity of the fabrication flow, and knowledge annotations were introduced to resolve input and output conflicts in the EKHPNs. Finally, to demonstrate the validity of the EKHPN method, a real semiconductor wafer fabrication case was used to illustrate the modeling procedure. The modeling results indicate that the proposed method can be used to model a complex semiconductor wafer fabrication flow expediently.

  18. Strategy Guideline: Demonstration Home

    Energy Technology Data Exchange (ETDEWEB)

    Savage, C.; Hunt, A.

    2012-12-01

This guideline will provide a general overview of the different kinds of demonstration home projects, a basic understanding of the different roles and responsibilities involved in the successful completion of a demonstration home, and an introduction to some of the lessons learned from actual demonstration home projects. This guideline will also look specifically at the communication methods employed during demonstration home projects. Lastly, we will focus on how best to create a communication plan for including an energy efficiency message in a demonstration home project and carrying that message through to successful completion.

  19. Strategy Guideline. Demonstration Home

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, A.; Savage, C.

    2012-12-01

This guideline will provide a general overview of the different kinds of demonstration home projects, a basic understanding of the different roles and responsibilities involved in the successful completion of a demonstration home, and an introduction to some of the lessons learned from actual demonstration home projects. This guideline will also look specifically at the communication methods employed during demonstration home projects. Lastly, we will focus on how best to create a communication plan for including an energy efficiency message in a demonstration home project and carrying that message through to successful completion.

  20. Diffusion in condensed matter methods, materials, models

    CERN Document Server

    Kärger, Jörg

    2005-01-01

Diffusion as the process of particle transport due to stochastic movement is a phenomenon of crucial relevance for a large variety of processes and materials. This comprehensive, handbook-style survey of diffusion in condensed matter gives detailed insight into diffusion as the process of particle transport due to stochastic movement. Leading experts in the field describe in 23 chapters the different aspects of diffusion, covering microscopic and macroscopic experimental techniques and exemplary results for various classes of solids, liquids and interfaces as well as several theoretical concepts and models. Students and scientists in physics, chemistry, materials science, and biology will benefit from this detailed compilation.

  1. Continual integration method in the polaron model

    International Nuclear Information System (INIS)

    Kochetov, E.A.; Kuleshov, S.P.; Smondyrev, M.A.

    1981-01-01

The article is devoted to the investigation of a polaron system on the basis of a variational approach formulated in the language of continuum integration. A variational method generalizing the Feynman one to the case of nonzero total momentum of the system has been formulated. The polaron state has been investigated at zero temperature. The problem of the bound state of two polarons exchanging quanta of a scalar field, as well as the problem of polaron scattering by an external field in the Born approximation, have been considered. The thermodynamics of the polaron system has been investigated; namely, high-temperature expansions for the mean energy and the effective polaron mass have been studied [ru]

  2. A Comprehensive Method for Comparing Mental Models of Dynamic Systems

    OpenAIRE

    Schaffernicht, Martin; Grösser, Stefan N.

    2011-01-01

    Mental models are the basis on which managers make decisions even though external decision support systems may provide help. Research has demonstrated that more comprehensive and dynamic mental models seem to be at the foundation for improved policies and decisions. Eliciting and comparing such models can systematically explicate key variables and their main underlying structures. In addition, superior dynamic mental models can be identified. This paper reviews existing studies which measure ...

  3. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    Energy Technology Data Exchange (ETDEWEB)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  4. Quantification of histochemical stains using whole slide imaging: development of a method and demonstration of its usefulness in laboratory quality control.

    Science.gov (United States)

    Gray, Allan; Wright, Alex; Jackson, Pete; Hale, Mike; Treanor, Darren

    2015-03-01

    Histochemical staining of tissue is a fundamental technique in tissue diagnosis and research, but it suffers from significant variability. Efforts to address this include laboratory quality controls and quality assurance schemes, but these rely on subjective interpretation of stain quality, are laborious and have low reproducibility. We aimed (1) to develop a method for histochemical stain quantification using whole slide imaging and image analysis and (2) to demonstrate its usefulness in measuring staining variation. A method to quantify the individual stain components of histochemical stains on virtual slides was developed. It was evaluated for repeatability and reproducibility, then applied to control sections of an appendix to quantify H&E staining (H/E intensities and H:E ratio) between automated staining machines and to measure differences between six regional diagnostic laboratories. The method was validated with laboratories from 0.57 to 0.89. A simple method using whole slide imaging can be used to quantify and compare histochemical staining. This method could be deployed in routine quality assurance and quality control. Work is needed on whole slide imaging devices to improve reproducibility. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
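
    One plausible building block for this kind of stain quantification is colour deconvolution of an H&E tile into haematoxylin and eosin optical densities, from which mean intensities and an H:E ratio follow. The sketch below uses scikit-image's Ruifrok-Johnston deconvolution and is a generic approach, not the authors' exact pipeline.

```python
import numpy as np
from skimage.color import rgb2hed

def he_metrics(rgb_tile):
    """rgb_tile: (H, W, 3) RGB image tile from a whole slide image."""
    hed = rgb2hed(rgb_tile)   # Ruifrok-Johnston colour deconvolution
    h_od = hed[..., 0]        # haematoxylin optical density channel
    e_od = hed[..., 1]        # eosin optical density channel
    h_mean, e_mean = float(h_od.mean()), float(e_od.mean())
    return {"H": h_mean, "E": e_mean, "H:E": h_mean / max(e_mean, 1e-9)}
```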

  5. Business process modeling for the Virginia Department of Transportation : a demonstration with the integrated six-year improvement program and the statewide transportation improvement program.

    Science.gov (United States)

    2005-01-01

    This effort demonstrates business process modeling to describe the integration of particular planning and programming activities of a state highway agency. The motivations to document planning and programming activities are that: (i) resources for co...

  6. Business process modeling for the Virginia Department of Transportation : a demonstration with the integrated six-year improvement program and the statewide transportation improvement program : executive summary.

    Science.gov (United States)

    2005-01-01

    This effort demonstrates business process modeling to describe the integration of particular planning and programming activities of a state highway agency. The motivations to document planning and programming activities are that: (i) resources for co...

  7. Development and Modeling of Angled Effusion Cooling for the BR715 Low Emission Staged Combustor Core Demonstrator

    National Research Council Canada - National Science Library

    Gerendas, M

    2003-01-01

    .... The combustor cooling concept chosen was of the angled effusion type. Development of adequate modeling techniques and steady-state and transient rig tests to calibrate the thermal models was the key factor for the success...

  8. Using a CBL Unit, a Temperature Sensor, and a Graphing Calculator to Model the Kinetics of Consecutive First-Order Reactions as Safe In-Class Demonstrations

    Science.gov (United States)

    Moore-Russo, Deborah A.; Cortes-Figueroa, Jose E.; Schuman, Michael J.

    2006-01-01

The use of Calculator-Based Laboratory (CBL) technology, the graphing calculator, and the cooling and heating of water to model the behavior of consecutive first-order reactions (B → I → P, where B is the reactant, I the intermediate, and P the product) is presented as an in-class demonstration. The activity demonstrates the spontaneous and consecutive…
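
    The kinetics being modelled is B -> I -> P with rate constants k1 and k2; the closed-form solution below is the standard one (valid for k1 != k2), with arbitrary rate constants standing in for fitted values.

```python
import numpy as np

def consecutive_first_order(t, B0=1.0, k1=0.30, k2=0.12):
    """Closed-form concentrations for B -> I -> P (requires k1 != k2)."""
    B = B0 * np.exp(-k1 * t)
    I = B0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    P = B0 - B - I     # mass balance
    return B, I, P

t = np.linspace(0.0, 40.0, 9)
B, I, P = consecutive_first_order(t)
for ti, bi, ii, pi in zip(t, B, I, P):
    print(f"t={ti:5.1f}  B={bi:.3f}  I={ii:.3f}  P={pi:.3f}")
```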

  9. Data Modeling, Feature Extraction, and Classification of Magnetic and EMI Data, ESTCP Discrimination Study, Camp Sibert, AL. Demonstration Report

    Science.gov (United States)

    2008-09-01

    Figure 19. Misfit versus depth curve for the EM63 Pasion-Oldenburg model fit to anomaly 649. Two cases are considered: (i) using all the data which...selection of optimal models; c) Fitting of 2- and 3-dipole Pasion-Oldenburg models to the EM63 cued-interrogation data and selection of optimal models...Hart et al., 2001; Collins et al., 2001; Pasion & Oldenburg, 2001; Zhang et al., 2003a, 2003b; Billings, 2004). The most promising discrimination

  10. Development and Field Testing of a Model to Simulate a Demonstration of Le Chatelier's Principle Using the Wheatstone Bridge Circuit.

    Science.gov (United States)

    Vickner, Edward Henry, Jr.

    An electronic simulation model was designed, constructed, and then field tested to determine student opinion of its effectiveness as an instructional aid. The model was designated as the Equilibrium System Simulator (ESS). The model was built on the principle of electrical symmetry applied to the Wheatstone bridge and was constructed from readily…

  11. "Method, system and storage medium for generating virtual brick models"

    DEFF Research Database (Denmark)

    2009-01-01

    An exemplary embodiment is a method for generating a virtual brick model. The virtual brick models are generated by users and uploaded to a centralized host system. Users can build virtual models themselves or download and edit another user's virtual brick models while retaining the identity...

  12. A Systematic Identification Method for Thermodynamic Property Modelling

    DEFF Research Database (Denmark)

    Ana Perederic, Olivia; Cunico, Larissa; Sarup, Bent

    2017-01-01

    In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group contribution based property prediction models. The method is applied to lipid systems where the Original UNIFAC...... model is used. Using the proposed method for estimating the interaction parameters using only VLE data, a better phase equilibria prediction for both VLE and SLE was obtained. The results were validated and compared with the original model performance...

  13. Laser filamentation mathematical methods and models

    CERN Document Server

    Lorin, Emmanuel; Moloney, Jerome

    2016-01-01

    This book is focused on the nonlinear theoretical and mathematical problems associated with ultrafast intense laser pulse propagation in gases and in particular, in air. With the aim of understanding the physics of filamentation in gases, solids, the atmosphere, and even biological tissue, specialists in nonlinear optics and filamentation from both physics and mathematics attempt to rigorously derive and analyze relevant non-perturbative models. Modern laser technology allows the generation of ultrafast (few cycle) laser pulses, with intensities exceeding the internal electric field in atoms and molecules (E=5x109 V/cm or intensity I = 3.5 x 1016 Watts/cm2 ). The interaction of such pulses with atoms and molecules leads to new, highly nonlinear nonperturbative regimes, where new physical phenomena, such as High Harmonic Generation (HHG), occur, and from which the shortest (attosecond - the natural time scale of the electron) pulses have been created. One of the major experimental discoveries in this nonlinear...

  14. Models and methods of emotional concordance.

    Science.gov (United States)

    Hollenstein, Tom; Lanteigne, Dianna

    2014-04-01

    Theories of emotion generally posit the synchronized, coordinated, and/or emergent combination of psychophysiological, cognitive, and behavioral components of the emotion system--emotional concordance--as a functional definition of emotion. However, the empirical support for this claim has been weak or inconsistent. As an introduction to this special issue on emotional concordance, we consider three domains of explanations as to why this theory-data gap might exist. First, theory may need to be revised to more accurately reflect past research. Second, there may be moderating factors such as emotion regulation, context, or individual differences that have obscured concordance. Finally, the methods typically used to test theory may be inadequate. In particular, we review a variety of potential issues: intensity of emotions elicited in the laboratory, nonlinearity, between- versus within-subject associations, the relative timing of components, bivariate versus multivariate approaches, and diversity of physiological processes. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Theoretical methods and models for mechanical properties of soft biomaterials

    Directory of Open Access Journals (Sweden)

    Zhonggang Feng

    2017-06-01

    Full Text Available We review the most commonly used theoretical methods and models for the mechanical properties of soft biomaterials, which include phenomenological hyperelastic and viscoelastic models, structural biphasic and network models, and the structural alteration theory. We emphasize basic concepts and recent developments. In consideration of the current progress and needs of mechanobiology, we introduce methods and models for tackling micromechanical problems and their applications to cell biology. Finally, the challenges and perspectives in this field are discussed.

  16. Application of the dual reciprocity boundary element method for numerical modelling of solidification process

    Directory of Open Access Journals (Sweden)

    E. Majchrzak

    2008-12-01

    Full Text Available The dual reciprocity boundary element method is applied for numerical modelling of solidification process. This variant of the BEM is connected with the transformation of the domain integral to the boundary integrals. In the paper the details of the dual reciprocity boundary element method are presented and the usefulness of this approach to solidification process modelling is demonstrated. In the final part of the paper the examples of computations are shown.

  17. METHODICAL MODEL FOR TEACHING BASIC SKI TURN

    Directory of Open Access Journals (Sweden)

    Danijela Kuna

    2013-07-01

    Full Text Available With the aim of forming an expert model of the most important operators for basic ski turn teaching in ski schools, an experiment was conducted on a sample of 20 ski experts from different countries (Croatia, Bosnia and Herzegovina and Slovenia. From the group of the most commonly used operators for teaching basic ski turn the experts picked the 6 most important: uphill turn and jumping into snowplough, basic turn with hand sideways, basic turn with clapping, ski poles in front, ski poles on neck, uphill turn with active ski guiding. Afterwards, ranking and selection of the most efficient operators was carried out. Due to the set aim of research, a Chi square test was used, as well as the differences between frequencies of chosen operators, differences between values of the most important operators and differences between experts due to their nationality. Statistically significant differences were noticed between frequencies of chosen operators (c2= 24.61; p=0.01, while differences between values of the most important operators were not obvious (c2= 1.94; p=0.91. Meanwhile, the differences between experts concerning thier nationality were only noticeable in the expert evaluation of ski poles on neck operator (c2=7.83; p=0.02. Results of current research are reflected in obtaining useful information about methodological priciples of learning basic ski turn organization in ski schools.

  18. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    Science.gov (United States)

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to separately fit models to each species and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions because of outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance- based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. ?? 2008 Springer Science+Business Media, LLC.
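
    A minimal sketch of the grouping idea: compute a generalized Mahalanobis distance between fitted coefficient vectors (using the sum of their covariance matrices) and feed the distance matrix to standard hierarchical clustering. The toy coefficients below are invented; real inputs would come from per-species habitat GLMs.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def mahalanobis2(beta_i, beta_j, cov_i, cov_j):
    """Squared generalized Mahalanobis distance between coefficient vectors."""
    d = beta_i - beta_j
    return float(d @ np.linalg.solve(cov_i + cov_j, d))

# Toy stand-ins for per-species logistic-regression coefficients and covariances.
rng = np.random.default_rng(0)
betas = [rng.normal(loc, 0.1, size=3) for loc in (0.0, 0.1, 2.0, 2.1)]
covs = [0.05 * np.eye(3) for _ in betas]

n = len(betas)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = np.sqrt(mahalanobis2(betas[i], betas[j], covs[i], covs[j]))

# Cluster species from the distance matrix (average linkage, two groups).
groups = fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust")
print(groups)   # e.g. [1 1 2 2]: species with similar habitat coefficients group together
```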

  19. The Bruton Tyrosine Kinase (BTK) Inhibitor Acalabrutinib Demonstrates Potent On-Target Effects and Efficacy in Two Mouse Models of Chronic Lymphocytic Leukemia

    DEFF Research Database (Denmark)

    Herman, Sarah E M; Montraveta, Arnau; Niemann, Carsten U

    2017-01-01

    into the drinking water. Results: Utilizing biochemical assays, we demonstrate that acalabrutinib is a highly selective BTK inhibitor as compared with ibrutinib. In the human CLL NSG xenograft model, treatment with acalabrutinib demonstrated on-target effects, including decreased phosphorylation of PLCγ2, ERK......). In two complementary mouse models of CLL, acalabrutinib significantly reduced tumor burden and increased survival compared with vehicle treatment. Overall, acalabrutinib showed increased BTK selectivity compared with ibrutinib while demonstrating significant antitumor efficacy in vivo on par...... with ibrutinib. Clin Cancer Res; 23(11); 2831-41. ©2016 AACR....

  20. Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling

    Science.gov (United States)

    Wilson, William; Atkinson, Gary

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first order model) and two second order matrix methods: the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.

  1. Modeling shallow water flows using the discontinuous Galerkin method

    CERN Document Server

    Khan, Abdul A

    2014-01-01

    Replacing the Traditional Physical Model Approach Computational models offer promise in improving the modeling of shallow water flows. As new techniques are considered, the process continues to change and evolve. Modeling Shallow Water Flows Using the Discontinuous Galerkin Method examines a technique that focuses on hyperbolic conservation laws and includes one-dimensional and two-dimensional shallow water flows and pollutant transports. Combines the Advantages of Finite Volume and Finite Element Methods This book explores the discontinuous Galerkin (DG) method, also known as the discontinuous finite element method, in depth. It introduces the DG method and its application to shallow water flows, as well as background information for implementing and applying this method for natural rivers. It considers dam-break problems, shock wave problems, and flows in different regimes (subcritical, supercritical, and transcritical). Readily Adaptable to the Real World While the DG method has been widely used in the fie...

  2. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite bus system and compared with a method where both states and parameters are estimated using an EKF method. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
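
    The sketch below illustrates the two-step iteration on a deliberately simplified linear-Gaussian system, where the filter is an ordinary Kalman filter and the M-step has a closed form; the paper's EKF-based treatment of a full synchronous machine model is far richer, and all numbers here are illustrative.

```python
import numpy as np

def em_calibrate(y, a0, q=0.1, r=0.1, iters=20):
    """Toy linear-Gaussian analogue of the filter/MLE iteration:
    x[t+1] = a*x[t] + w, y[t] = x[t] + v; estimate a by EM."""
    a, T = a0, len(y)
    for _ in range(iters):
        # E-step, part 1: Kalman filter.
        xf, Pf = np.zeros(T), np.zeros(T)
        xp, Pp = 0.0, 1.0
        for t in range(T):
            k = Pp / (Pp + r)
            xf[t] = xp + k * (y[t] - xp)
            Pf[t] = (1.0 - k) * Pp
            xp, Pp = a * xf[t], a * a * Pf[t] + q
        # E-step, part 2: RTS smoother with lag-one cross-covariances.
        xs, Ps = xf.copy(), Pf.copy()
        Pcross = np.zeros(T - 1)
        for t in range(T - 2, -1, -1):
            Ppred = a * a * Pf[t] + q
            J = a * Pf[t] / Ppred
            xs[t] = xf[t] + J * (xs[t + 1] - a * xf[t])
            Ps[t] = Pf[t] + J * J * (Ps[t + 1] - Ppred)
            Pcross[t] = J * Ps[t + 1]
        # M-step: closed-form MLE for a given the smoothed moments.
        a = np.sum(xs[1:] * xs[:-1] + Pcross) / np.sum(xs[:-1] ** 2 + Ps[:-1])
    return a

rng = np.random.default_rng(1)
x = np.zeros(200)
for t in range(199):
    x[t + 1] = 0.9 * x[t] + rng.normal(0.0, 0.1 ** 0.5)
y = x + rng.normal(0.0, 0.1 ** 0.5, 200)
print(em_calibrate(y, a0=0.5))   # converges toward the true value a = 0.9
```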

  3. On Angular Sampling Methods for 3-D Spatial Channel Models

    DEFF Research Database (Denmark)

    Fan, Wei; Jämsä, Tommi; Nielsen, Jesper Ødum

    2015-01-01

    This paper discusses generating three dimensional (3D) spatial channel models with emphasis on the angular sampling methods. Three angular sampling methods, i.e. modified uniform power sampling, modified uniform angular sampling, and random pairing methods are proposed and investigated in detail....... The random pairing method, which uses only twenty sinusoids in the ray-based model for generating the channels, presents good results if the spatial channel cluster is with a small elevation angle spread. For spatial clusters with large elevation angle spreads, however, the random pairing method would fail...... and the other two methods should be considered....
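
    A hedged sketch of the random pairing idea in a ray-based sum-of-sinusoids model: azimuth and elevation angles are drawn independently from assumed Laplacian-shaped spectra and paired at random across the twenty sinusoids. The spectra, Doppler value, and motion direction below are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20          # twenty sinusoids, as in the paper's random-pairing setup
fd = 100.0      # maximum Doppler shift [Hz] (illustrative value)

# Sample azimuth and elevation independently from assumed Laplacian-like
# power angular spectra, then pair them randomly (the "random pairing" idea).
azimuth = rng.laplace(0.0, np.deg2rad(30), N)
elevation = rng.laplace(np.deg2rad(10), np.deg2rad(5), N)
rng.shuffle(elevation)                       # random pairing of the two angle lists
phases = rng.uniform(0, 2 * np.pi, N)

def channel(t):
    """Ray-based sum-of-sinusoids channel coefficient at time t [s]."""
    # Doppler of each ray for a receiver moving along the x axis (an assumption).
    doppler = fd * np.cos(azimuth) * np.cos(elevation)
    return np.sum(np.exp(1j * (2 * np.pi * doppler * t + phases))) / np.sqrt(N)

t = np.linspace(0, 0.1, 1000)
h = np.array([channel(ti) for ti in t])
print(np.mean(np.abs(h) ** 2))   # average power is ~1 for the normalized model
```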

  4. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    Science.gov (United States)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
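
    A minimal sketch of the MRR idea: a parametric fit is augmented by a lambda-weighted kernel smooth of its residuals. The polynomial degree, Gaussian kernel, bandwidth, and mixing weight below are illustrative choices, not the calibration settings used by the authors.

```python
import numpy as np

def mrr_fit(x, y, degree=1, lam=0.5, bandwidth=0.1):
    """Model Robust Regression sketch: parametric fit plus a lambda-weighted
    kernel smooth of its residuals (lam=0: purely parametric; lam=1: full correction)."""
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)

    def predict(x_new):
        # Nadaraya-Watson smooth of the residuals (the "locally parametric" part).
        w = np.exp(-0.5 * ((x_new[:, None] - x[None, :]) / bandwidth) ** 2)
        smooth = (w @ resid) / w.sum(axis=1)
        return np.polyval(coef, x_new) + lam * smooth

    return predict

# Calibration-style demo: a nearly linear sensor with a mild nonlinearity.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 60)
y = 2.0 * x + 0.1 * np.sin(6 * x) + rng.normal(0, 0.01, x.size)
predict = mrr_fit(x, y, degree=1, lam=0.7)
print(predict(np.array([0.25, 0.5, 0.75])))
```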

  5. Development of a membrane-assisted fluidized bed reactor - 2 - Experimental demonstration and modeling for the partial oxidation of methanol

    NARCIS (Netherlands)

    Deshmukh, S.A.R.K.; Laverman, J.A.; van Sint Annaland, M.; Kuipers, J.A.M.

    2005-01-01

    A small laboratory-scale membrane-assisted fluidized bed reactor (MAFBR) was constructed in order to experimentally demonstrate the reactor concept for the partial oxidation of methanol to formaldehyde. Methanol conversion and product selectivities were measured at various overall fluidization

  6. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
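
    One way to make the contrast concrete: the toy below selects a polynomial order for a small dataset, once by AIC (a classical, data-only criterion) and once by an asymmetric, use-weighted loss standing in for a utility function. The 5x penalty and the data are invented for illustration; the report's decision-theoretic formulation is more general.

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.sort(rng.uniform(0, 1, 15))              # deliberately small sample
y = 1.0 + 2.0 * x ** 2 + rng.normal(0, 0.1, x.size)

for order in (1, 2, 3):
    coef = np.polyfit(x, y, order)
    resid = y - np.polyval(coef, x)
    n, k = x.size, order + 1
    aic = n * np.log(np.mean(resid ** 2)) + 2 * k   # classical criterion
    # Use-weighted loss: under-prediction (resid > 0) is 5x worse than over,
    # a stand-in for a utility reflecting the model's intended use.
    loss = np.mean(np.where(resid > 0, 5.0, 1.0) * resid ** 2)
    print(f"order {order}: AIC={aic:.1f}, use-weighted loss={loss:.4f}")
```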

  7. Background field method for nonlinear σ-model in stochastic quantization

    International Nuclear Information System (INIS)

    Nakazawa, Naohito; Ennyu, Daiji

    1988-01-01

    We formulate the background field method for the nonlinear σ-model in stochastic quantization. We demonstrate a one-loop calculation for a two-dimensional non-linear σ-model on a general riemannian manifold based on our formulation. The formulation is consistent with the known results in ordinary quantization. As a simple application, we also analyse the multiplicative renormalization of the O(N) nonlinear σ-model. (orig.)

  8. SELECT NUMERICAL METHODS FOR MODELING THE DYNAMICS SYSTEMS

    Directory of Open Access Journals (Sweden)

    Tetiana D. Panchenko

    2016-07-01

    Full Text Available The article deals with the creation of methodical support for mathematical modeling of dynamic processes in elements of systems and complexes. Ordinary differential equations are used as the mathematical models; the coefficients of the model equations can be nonlinear functions of the process. The projection-grid method is used as the main tool. Iterative algorithms are described that take into account an approximate solution prior to the first iteration, and adaptive control of the computing process is proposed. An original method for estimating the error of the computed solutions is offered, together with a way of configuring the adaptive solver parameters for a given error level. The proposed method can be used for distributed computing.

  9. Modifying conjoint methods to model managers' reactions to business environmental trends : an application to modeling retailer reactions to sales trends

    NARCIS (Netherlands)

    Oppewal, H.; Louviere, J.J.; Timmermans, H.J.P.

    2000-01-01

    This article proposes and demonstrates how conjoint methods can be adapted to allow the modeling of managerial reactions to various changes in economic and competitive environments and their effects on observed sales levels. Because in general micro-level data on strategic decision making over time

  10. A gas radiation property model applicable to general combustion CFD and its demonstration in oxy-fuel combustion simulation

    DEFF Research Database (Denmark)

    Yin, Chungen; Singh, Shashank; Romero, Sergio Sanchez

    2017-01-01

    As a good compromise between computational efficiency and accuracy, the weighted-sum-of-gray-gases model (WSGGM) is often used in computational fluid dynamics (CFD) modeling of combustion processes for evaluating gas radiative properties. However, the WSGGMs still have practical limitations (e...

  11. Deterministic Method for Obtaining Nominal and Uncertainty Models of CD Drives

    DEFF Research Database (Denmark)

    Vidal, Enrique Sanchez; Stoustrup, Jakob; Andersen, Palle

    2002-01-01

    In this paper a deterministic method for obtaining the nominal and uncertainty models of the focus loop in a CD-player is presented based on parameter identification and measurements in the focus loop of 12 actual CD drives that differ by having worst-case behaviors with respect to various...... properties. The method provides a systematic way to derive a nominal average model as well as a structured multiplicative input uncertainty model, and it is demonstrated how to apply mu-theory to design a controller based on the models obtained that meets certain robust performance criteria....
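
    A sketch of the uncertainty-modeling step under stated assumptions: given frequency responses of several drives (synthetic second-order responses here, not measured CD-drive data), take the mean as the nominal model and bound the relative deviation to obtain a multiplicative uncertainty profile that a weight W(s) would then overbound.

```python
import numpy as np

w = np.logspace(1, 4, 200)                    # frequency grid [rad/s]
rng = np.random.default_rng(3)

def frf(gain, wn, zeta):
    """Second-order frequency response -- a stand-in for a measured drive."""
    s = 1j * w
    return gain * wn ** 2 / (s ** 2 + 2 * zeta * wn * s + wn ** 2)

# Twelve "drives" with perturbed gain, resonance, and damping (illustrative).
drives = [frf(1.0 + 0.1 * rng.standard_normal(),
              2e3 * (1 + 0.05 * rng.standard_normal()),
              0.1 + 0.02 * rng.random()) for _ in range(12)]

G_nom = np.mean(drives, axis=0)               # nominal average model
# Multiplicative uncertainty: G_i = G_nom * (1 + delta_i), so bound |delta_i|.
delta = np.max([np.abs((G - G_nom) / G_nom) for G in drives], axis=0)
print(delta.max())   # peak relative spread; a weight W(s) would overbound this curve
```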

  12. Comparative analysis of various methods for modelling permanent magnet machines

    NARCIS (Netherlands)

    Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.

    2017-01-01

    In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air

  13. Quantitative aspects of the cytochemical demonstration of glucose-6-phosphate dehydrogenase with tetrazolium salts studied in a model system of polyacrylamide films

    NARCIS (Netherlands)

    van Noorden, C. J.; Tas, J.; Sanders, J. A.

    1981-01-01

    The enzyme cytochemical demonstration of glucose-6-phosphate dehydrogenase (G6PDH) with several tetrazolium salts has been studied with an artificial model of polyacrylamide films incorporated with the enzyme, which enabled the correlation of cytochemical and biochemical data. In the model films no

  14. Advanced methods of solid oxide fuel cell modeling

    CERN Document Server

    Milewski, Jaroslaw; Santarelli, Massimo; Leone, Pierluigi

    2011-01-01

    Fuel cells are widely regarded as the future of the power and transportation industries. Intensive research in this area now requires new methods of fuel cell operation modeling and cell design. Typical mathematical models are based on the physical process description of fuel cells and require a detailed knowledge of the microscopic properties that govern both chemical and electrochemical reactions. ""Advanced Methods of Solid Oxide Fuel Cell Modeling"" proposes the alternative methodology of generalized artificial neural networks (ANN) solid oxide fuel cell (SOFC) modeling. ""Advanced Methods

  15. Extending product modeling methods for integrated product development

    DEFF Research Database (Denmark)

    Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný

    2013-01-01

    Despite great efforts within the modeling domain, the majority of methods often address the uncommon design situation of an original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products...... and PVM methods, in a presented Product Requirement Development model some of the individual drawbacks of each method could be overcome. Based on the UML standard, the model enables the representation of complex hierarchical relationships in a generic product model. At the same time it uses matrix....... Updated design requirements have then to be made explicit and mapped against the existing product architecture. In this paper, existing methods are adapted and extended through linking updated requirements to suitable product models. By combining several established modeling techniques, such as the DSM...

  16. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear, which is the appropriate method to choose. To this end, three approaches to estimation in the theta...... logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden...... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
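
    A compact sketch of the first (HMM) approach for the theta-logistic model: discretize the log-abundance state onto a grid, build a transition matrix from the process-noise kernel, and run the standard forward filter. Parameter values and grid limits are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Theta-logistic state-space model on log abundance:
#   x[t+1] = x[t] + r0*(1 - (exp(x[t])/K)**theta) + e,   y[t] = x[t] + u
r0, K, theta = 0.3, 100.0, 1.0
sig_e, sig_u = 0.1, 0.2

grid = np.linspace(np.log(5), np.log(200), 150)     # discretized states
step = lambda x: x + r0 * (1 - (np.exp(x) / K) ** theta)

# Transition matrix: Gaussian process-noise kernel around the deterministic map.
P = norm.pdf(grid[None, :], loc=step(grid)[:, None], scale=sig_e)
P /= P.sum(axis=1, keepdims=True)

def hmm_filter(y):
    p = np.full(grid.size, 1.0 / grid.size)         # flat prior over the grid
    estimates = []
    for yt in y:
        p = p @ P                                   # predict
        p *= norm.pdf(yt, loc=grid, scale=sig_u)    # update with the observation
        p /= p.sum()
        estimates.append(np.exp(grid) @ p)          # posterior-mean abundance
    return np.array(estimates)

rng = np.random.default_rng(4)
x = [np.log(20.0)]
for _ in range(49):
    x.append(step(x[-1]) + rng.normal(0, sig_e))
y = np.array(x) + rng.normal(0, sig_u, 50)
print(hmm_filter(y)[-5:])                           # tracks abundance near K
```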

  17. Demonstration of different endocervical staining methods and their usefulness in the diagnosis of the chlamydial infection in exfoliated cells advantages and disadvantages.

    Science.gov (United States)

    Mahmutović, Sabina; Beslagić, Edina; Hamzić, Sadeta; Aljicević, Mufida

    2004-02-01

    Microscopic demonstration of chlamydial inclusions within cells offered the first laboratory procedure supporting the clinical diagnosis of chlamydial infection. Our aim is to evaluate the usefulness of different endocervical staining methods in the diagnosis of Chlamydia trachomatis (CT) infection within exfoliated cells of the endocervix. The cytological test for the detection of chlamydial inclusions in genital tract infection, though not as sensitive and specific as isolation in cell culture monolayers, is still of diagnostic value. The present study discusses the collection of clinical smears for microscopic examination, and their preparation, fixation and staining by a variety of staining methods that have been used to detect Chlamydia in clinical smears and biopsies. Most of these methods, such as Giemsa, Papanicolaou, iodine, and immunofluorescence (IF) staining using monoclonal antibodies, are based on combinations of dyes designed to obtain optimum differentiation of the various structures. Different endocervical smear stains, used together with clinical information, can identify women at high risk for CT infection.

  18. Nonperturbative stochastic method for driven spin-boson model

    Science.gov (United States)

    Orth, Peter P.; Imambekov, Adilet; Le Hur, Karyn

    2013-01-01

    We introduce and apply a numerically exact method for investigating the real-time dissipative dynamics of quantum impurities embedded in a macroscopic environment beyond the weak-coupling limit. We focus on the spin-boson Hamiltonian that describes a two-level system interacting with a bosonic bath of harmonic oscillators. This model is archetypal for investigating dissipation in quantum systems, and tunable experimental realizations exist in mesoscopic and cold-atom systems. It finds abundant applications in physics ranging from the study of decoherence in quantum computing and quantum optics to extended dynamical mean-field theory. Starting from the real-time Feynman-Vernon path integral, we derive an exact stochastic Schrödinger equation that allows us to compute the full spin density matrix and spin-spin correlation functions beyond weak coupling. We greatly extend our earlier work [P. P. Orth, A. Imambekov, and K. Le Hur, Phys. Rev. A 82, 032118 (2010)] by fleshing out the core concepts of the method and by presenting a number of interesting applications. Methodologically, we present an analogy between the dissipative dynamics of a quantum spin and that of a classical spin in a random magnetic field. This analogy is used to recover the well-known noninteracting-blip approximation in the weak-coupling limit. We explain in detail how to compute spin-spin autocorrelation functions. As interesting applications of our method, we explore the non-Markovian effects of the initial spin-bath preparation on the dynamics of the coherence σx(t) and of σz(t) under a Landau-Zener sweep of the bias field. We also compute to a high precision the asymptotic long-time dynamics of σz(t) without bias and demonstrate the wide applicability of our approach by calculating the spin dynamics at nonzero bias and different temperatures.

  19. Architecture oriented modeling and simulation method for combat mission profile

    Directory of Open Access Journals (Sweden)

    CHEN Xia

    2017-05-01

    Full Text Available In order to effectively analyze the system behavior and system performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definitions from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form the mission profile executable model. Finally, taking the air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. It provides method guidance for combat mission profile design.

  20. Modelling a coal subcrop using the impedance method

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G.A.; Thiel, D.V.; O' Keefe, S.G. [Griffith University, Nathan, Qld. (Australia). School of Microelectronic Engineering

    2000-07-01

    An impedance model was generated for two coal subcrops in the Biloela and Middlemount areas (Queensland, Australia). The model results were compared with actual surface impedance data. It was concluded that the impedance method satisfactorily modelled the surface response of the coal subcrops in two dimensions. There were some discrepancies between the field data and the model results, due to factors such as the method of discretization of the solution space in the impedance model and the lack of consideration of the three-dimensional nature of the coal outcrops. 10 refs., 8 figs.

  1. Systems and methods for modeling and analyzing networks

    Science.gov (United States)

    Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W

    2013-10-29

    The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models from data and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites to phenotypic readouts of efficacy and toxicity.

  2. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...
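
    Of the methods listed, the multilevel Monte Carlo idea fits in a few lines: estimate the coarsest-level payoff with many cheap paths and the level-to-level corrections with progressively fewer coupled paths. The sketch below prices a European call under geometric Brownian motion with Euler stepping; sample sizes are hand-picked rather than chosen by the usual variance-based rule.

```python
import numpy as np

# Multilevel Monte Carlo for E[max(S_T - K, 0)] under geometric Brownian motion.
S0, Kstrike, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
rng = np.random.default_rng(5)

def level_estimator(l, n, M=4):
    """Mean of P_l - P_{l-1} over n coupled paths (M: grid refinement factor)."""
    nf = M ** l                        # number of fine time steps
    dt = T / nf
    dW = rng.normal(0, np.sqrt(dt), (n, nf))
    Sf = np.full(n, S0)
    for i in range(nf):                # fine Euler path
        Sf = Sf * (1 + r * dt + sigma * dW[:, i])
    Pf = np.exp(-r * T) * np.maximum(Sf - Kstrike, 0)
    if l == 0:
        return Pf.mean()
    Sc = np.full(n, S0)                # coarse path driven by summed increments
    dWc = dW.reshape(n, nf // M, M).sum(axis=2)
    for i in range(nf // M):
        Sc = Sc * (1 + r * M * dt + sigma * dWc[:, i])
    Pc = np.exp(-r * T) * np.maximum(Sc - Kstrike, 0)
    return (Pf - Pc).mean()

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
price = sum(level_estimator(l, n) for l, n in enumerate([100000, 20000, 5000, 1200]))
print(price)   # close to the Black-Scholes value (~10.45) for these parameters
```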

  3. Two Undergraduate Process Modeling Courses Taught Using Inductive Learning Methods

    Science.gov (United States)

    Soroush, Masoud; Weinberger, Charles B.

    2010-01-01

    This manuscript presents a successful application of inductive learning in process modeling. It describes two process modeling courses that use inductive learning methods such as inquiry learning and problem-based learning, among others. The courses include a novel collection of multi-disciplinary complementary process modeling examples. They were…

  4. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-11-01

    simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
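
    The general loop the record describes — run the model, score it against observations, and let an optimizer propose new parameter values — can be shown with a stand-in model of two parameters (in practice each cost evaluation is a full model run):

```python
import numpy as np
from scipy.optimize import minimize

# Pseudo-observations; a real application would use measured climatology.
obs_x = np.linspace(0, 1, 50)
obs_y = 2.5 * np.exp(-3.0 * obs_x)

def model_run(params):
    """Stand-in 'model' with two tunable parameters (not a GCM)."""
    amplitude, decay = params
    return amplitude * np.exp(-decay * obs_x)

def cost(params):
    """Skill metric to minimize: RMSE against the observations."""
    return np.sqrt(np.mean((model_run(params) - obs_y) ** 2))

result = minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x)   # recovers amplitude ~2.5 and decay ~3.0
```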

  5. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    Directed graphical models present data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models h...

  6. Solving the nuclear shell model with an algebraic method

    International Nuclear Information System (INIS)

    Feng, D.H.; Pan, X.W.; Guidry, M.

    1997-01-01

    We illustrate algebraic methods in the nuclear shell model through a concrete example, the fermion dynamical symmetry model (FDSM). We use this model to introduce important concepts such as dynamical symmetry, symmetry breaking, effective symmetry, and diagonalization within a higher-symmetry basis. (orig.)

  7. Modeling of Landslides with the Material Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2008-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  8. Modelling of Landslides with the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  9. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    Full Text Available implementations of the DLM are however not very versatile in terms of geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...

  10. Design of nuclear power generation plants adopting model engineering method

    International Nuclear Information System (INIS)

    Waki, Masato

    1983-01-01

    The utilization of model engineering as a method of design began about ten years ago in nuclear power generation plants. By this method, the result of design can be confirmed three-dimensionally before actual production, and it is a quick and sure way to meet various design needs promptly. The adoption of models aims mainly at improving the quality of design, since high safety is required for nuclear power plants in spite of their complex structure. The layout of nuclear power plants and piping design require model engineering to arrange rationally an enormous quantity of components in a limited period. As methods of model engineering, there are the use of check models and of design models; recently, the latter method has mainly been taken. The procedure of manufacturing models and engineering is explained. After model engineering has been completed, the model information must be expressed in drawings, and the automation of this process has been attempted by various methods. The computer processing of design is in progress, and its role is explained (CAD system). (Kako, I.)

  11. Method of modeling the cognitive radio using Opnet Modeler

    OpenAIRE

    Yakovenko, I. V.; Poshtarenko, V. M.; Kostenko, R. V.

    2012-01-01

    This article is a review of the first wireless standard based on cognitive radio networks. It discusses the necessity of wireless networks based on cognitive radio technology and gives an example of the use of the IEEE 802.22 standard in a WiMAX network, implemented in the Opnet Modeler simulation environment. Graphs check the performance of the HTTP and FTP protocols in the CR network. Simulation results justify the use of the IEEE 802.22 standard in wireless networks.

  12. Demonstration of GaAsSb/InAs nanowire backward diodes grown using position-controlled vapor-liquid-solid method

    Science.gov (United States)

    Kawaguchi, Kenichi; Takahashi, Tsuyoshi; Okamoto, Naoya; Sato, Masaru

    2018-02-01

    p-GaAsSb/n-InAs type-II nanowire (NW) diodes were fabricated using the position-controlled vapor-liquid-solid growth method. InAs and GaAsSb NW segments were grown vertically on GaAs(111)B substrates with the assistance of Au catalysts. Transmission electron microscopy-energy-dispersive X-ray spectroscopy analysis revealed that the GaAsSb segments have an Sb content of 40%, which is sufficient to form a tunnel heterostructure. Scanning capacitance microscope images clearly indicated the formation of a p-n junction in the NWs. Backward diode characteristics, that is, current flow toward negative bias originating from a tunnel current and current suppression toward positive bias by a heterobarrier, were demonstrated.

  13. Implementation of the k0-standardization Method for an Instrumental Neutron Activation Analysis: Use-k0-IAEA Software as a Demonstration

    International Nuclear Information System (INIS)

    Chung, Yong Sam; Moon, Jong Hwa; Kim, Sun Ha; Kim, Hark Rho; Ho, Manh Dung

    2006-03-01

    Under the RCA post-doctoral program, from May 2005 through February 2006, there was an opportunity to review the present work being carried out in the Neutron Activation Analysis Laboratory, HANARO Center, KAERI. The scope of this research included: a calibration of the counting system, a characterization of the irradiation facility, and a validation of the established k0-NAA procedure. The k0-standardization method for Neutron Activation Analysis (k0-NAA), which is becoming increasingly popular and widespread, is an absolute calibration technique where the nuclear data are replaced by compound nuclear constants which are experimentally determined. The k0-IAEA software distributed by the IAEA in 2005 was used as a demonstration for this work. The NAA no. 3 irradiation hole in the HANARO research reactor and the gamma-ray spectrometers No. 1 and 5 in the NAA Laboratory were used.

  14. Implementation of the k{sub 0}-standardization Method for an Instrumental Neutron Activation Analysis: Use-k{sub 0}-IAEA Software as a Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Yong Sam; Moon, Jong Hwa; Kim, Sun Ha; Kim, Hark Rho [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Ho, Manh Dung [Nuclear Research Institute, Dalat (Viet Nam)

    2006-03-15

    Under the RCA post-doctoral program, from May 2005 through February 2006, there was an opportunity to review the present work being carried out in the Neutron Activation Analysis Laboratory, HANARO Center, KAERI. The scope of this research included: a calibration of the counting system, a characterization of the irradiation facility, and a validation of the established k{sub o}-NAA procedure. The k{sub o}-standardization method for Neutron Activation Analysis (k{sub o}-NAA), which is becoming increasingly popular and widespread, is an absolute calibration technique where the nuclear data are replaced by compound nuclear constants which are experimentally determined. The k{sub o}-IAEA software distributed by the IAEA in 2005 was used as a demonstration for this work. The NAA no. 3 irradiation hole in the HANARO research reactor and the gamma-ray spectrometers No. 1 and 5 in the NAA Laboratory were used.

  15. a Tool for Crowdsourced Building Information Modeling Through Low-Cost Range Camera: Preliminary Demonstration and Potential

    Science.gov (United States)

    Capocchiano, F.; Ravanelli, R.; Crespi, M.

    2017-11-01

    Within the construction sector, Building Information Models (BIMs) are increasingly used thanks to the several benefits they offer in the design of new buildings and the management of existing ones. Frequently, however, BIMs are not available for already built constructions, while at the same time range camera technology nowadays provides a cheap, intuitive and effective tool for automatically collecting the 3D geometry of indoor environments. It is thus essential to find new strategies able to perform the first step of the scan-to-BIM process by extracting the geometrical information contained in the 3D models that are so easily collected through range cameras. In this work, a new algorithm to extract planimetries from the 3D models of rooms acquired by means of a range camera is therefore presented. The algorithm was tested on two rooms, characterized by different shapes and dimensions, whose 3D models were captured with the Occipital Structure Sensor™. The preliminary results are promising: the developed algorithm is able to model effectively the 2D shape of the investigated rooms, with an accuracy level in the range of 5-10 cm. It can potentially be used by non-expert users in the first step of BIM generation, when the building geometry is reconstructed, for collecting crowdsourced indoor information in the frame of Volunteered Geographic Information (VGI) generation of BIMs.

  16. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    Full Text Available This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC. It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of model and the Distance from Model to point Cloud (DistMC are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted distance of the distances between points sampled from model and point cloud. Similarly, Distance from point Cloud to Model (DistCM is defined as the average distance of the distances between points in point cloud and model. In order to reduce huge computational burdens brought by calculation of DistCM in some traditional methods, we define SimMC as the ratio of weighted surface area of model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing distance-weighted strategy. Moreover, our method is able to be faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and laser scanning data.
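
    A minimal sketch of the measurements involved, assuming points pre-sampled from the model with per-point weights: DistMC is the weighted mean nearest-neighbour distance to the cloud (a k-d tree keeps it fast), and SimMC divides the surface area by it.

```python
import numpy as np
from scipy.spatial import cKDTree

def dist_mc(model_points, weights, cloud):
    """DistMC: weighted mean distance from points sampled on the model to the cloud."""
    d, _ = cKDTree(cloud).query(model_points)
    return np.average(d, weights=weights)

def sim_mc(model_points, weights, surface_area, cloud):
    """SimMC as the ratio of model surface area to DistMC (larger = more similar)."""
    return surface_area / dist_mc(model_points, weights, cloud)

# Toy demo: a unit-square "model" versus a noisy scan covering half of it.
rng = np.random.default_rng(6)
gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
pts = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])
w = np.ones(len(pts))
scan = pts[: len(pts) // 2] + rng.normal(0, 0.01, (len(pts) // 2, 3))
print(sim_mc(pts, w, 1.0, scan))
```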

  17. Sensitivities of crop models to extreme weather conditions during flowering period demonstrated for maize and winter wheat in Austria

    Czech Academy of Sciences Publication Activity Database

    Eitzinger, Josef; Thaler, S.; Schmid, E.; Strauss, F.; Ferrise, R.; Moriondo, M.; Bindi, M.; Palosuo, T.; Rötter, R.; Kersebaum, K. C.; Olesen, J. E.; Patil, R. H.; Saylan, L.; Çaldag, B.; Caylak, O.

    2013-01-01

    Roč. 151, č. 6 (2013), s. 813-835 ISSN 0021-8596 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073 Institutional support: RVO:67179843 Keywords : crop models * weather conditions * winter wheat * Austria Subject RIV: EH - Ecology, Behaviour Impact factor: 2.891, year: 2013

  18. Explicitly represented polygon wall boundary model for the explicit MPS method

    Science.gov (United States)

    Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori

    2015-05-01

    This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong form partial differential equations. The ERP model expresses wall boundaries as polygons, which are explicitly represented without using the distance function. These are derived so that for viscous fluids, and with less computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, results obtained by other models, and experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the E-MPS method to the ERP model.

  19. Dependence of QSAR models on the selection of trial descriptor sets: a demonstration using nanotoxicity endpoints of decorated nanotubes.

    Science.gov (United States)

    Shao, Chi-Yu; Chen, Sing-Zuo; Su, Bo-Han; Tseng, Yufeng J; Esposito, Emilio Xavier; Hopfinger, Anton J

    2013-01-28

    Little attention has been given to the selection of trial descriptor sets when designing a QSAR analysis even though a great number of descriptor classes, and often a greater number of descriptors within a given class, are now available. This paper reports an effort to explore interrelationships between QSAR models and descriptor sets. Zhou and co-workers (Zhou et al., Nano Lett. 2008, 8 (3), 859-865) designed, synthesized, and tested a combinatorial library of 80 surface modified, that is decorated, multi-walled carbon nanotubes for their composite nanotoxicity using six endpoints all based on a common 0 to 100 activity scale. Each of the six endpoints for the 29 most nanotoxic decorated nanotubes were incorporated as the training set for this study. The study reported here includes trial descriptor sets for all possible combinations of MOE, VolSurf, and 4D-fingerprints (FP) descriptor classes, as well as including and excluding explicit spatial contributions from the nanotube. Optimized QSAR models were constructed from these multiple trial descriptor sets. It was found that (a) both the form and quality of the best QSAR models for each of the endpoints are distinct and (b) some endpoints are quite dependent upon 4D-FP descriptors of the entire nanotube-decorator complex. However, other endpoints yielded equally good models only using decorator descriptors with and without the decorator-only 4D-FP descriptors. Lastly, and most importantly, the quality, significance, and interpretation of a QSAR model were found to be critically dependent on the trial descriptor sets used within a given QSAR endpoint study.
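
    The search over trial descriptor sets can be expressed directly: enumerate the combinations of descriptor blocks, fit a model per combination, and compare cross-validated scores. Everything below (block names, random matrices, ridge regression, q² via 5-fold CV) is a stand-in for the paper's actual descriptors and modeling protocol.

```python
import itertools
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 29                                    # training-set size used in the study
# Placeholder descriptor blocks, not the real MOE/VolSurf/4D-FP matrices.
blocks = {"MOE": rng.normal(size=(n, 5)),
          "VolSurf": rng.normal(size=(n, 4)),
          "4D-FP": rng.normal(size=(n, 6))}
y = blocks["4D-FP"][:, 0] + 0.1 * rng.normal(size=n)   # toy endpoint

# One model per combination of descriptor classes; compare cross-validated q^2.
for k in range(1, len(blocks) + 1):
    for combo in itertools.combinations(blocks, k):
        X = np.hstack([blocks[name] for name in combo])
        q2 = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean()
        print("+".join(combo), round(q2, 3))
```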

  20. A RECREATION OPTIMIZATION MODEL BASED ON THE TRAVEL COST METHOD

    OpenAIRE

    Hof, John G.; Loomis, John B.

    1983-01-01

    A recreation allocation model is developed which efficiently selects recreation areas and degree of development from an array of proposed and existing sites. The model does this by maximizing the difference between gross recreation benefits and travel, investment, management, and site-opportunity costs. The model presented uses the Travel Cost Method for estimating recreation benefits within an operations research framework. The model is applied to selection of potential wilderness areas in C...
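
    A zonal Travel Cost Method sketch under simple assumptions (linear demand, invented zone data): fit visits against travel cost, integrate the demand curve above the current cost for consumer surplus, and compare the summed benefit with site costs, as the allocation model would.

```python
import numpy as np

costs = np.array([5.0, 10.0, 20.0, 40.0])      # travel cost per zone ($)
visits = np.array([80.0, 60.0, 30.0, 5.0])     # visits per 1000 residents
b, a = np.polyfit(costs, visits, 1)            # linear demand: visits = a + b*cost, b < 0

# Consumer surplus per zone: area under the demand curve above the current cost.
surplus = -(a + b * costs) ** 2 / (2.0 * b)
gross_benefit = surplus.sum()
site_costs = 60.0                              # travel + investment + management (toy)
print(gross_benefit, gross_benefit - site_costs)   # develop the site if net > 0
```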

  1. Continuum methods of physical modeling continuum mechanics, dimensional analysis, turbulence

    CERN Document Server

    Hutter, Kolumban

    2004-01-01

    The book unifies classical continuum mechanics and turbulence modeling, i.e. the same fundamental concepts are used to derive model equations for material behaviour and turbulence closure and complements these with methods of dimensional analysis. The intention is to equip the reader with the ability to understand the complex nonlinear modeling in material behaviour and turbulence closure as well as to derive or invent his own models. Examples are mostly taken from environmental physics and geophysics.

  2. Numerical methods for modeling photonic-crystal VCSELs

    DEFF Research Database (Denmark)

    Dems, Maciej; Chung, Il-Sug; Nyakas, Peter

    2010-01-01

    We show a comparison of four different numerical methods for simulating Photonic-Crystal (PC) VCSELs. We present the theoretical basis behind each method and analyze the differences by studying a benchmark VCSEL structure, where the PC structure penetrates all VCSEL layers, the entire top-mirror DBR...... to the effective index method. The simulation results elucidate the strengths and weaknesses of the analyzed methods and outline the limits of applicability of the different models....

  3. A Model-Driven Development Method for Management Information Systems

    Science.gov (United States)

    Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki

    Traditionally, a Management Information System (MIS) has been developed without using formal methods. By the informal methods, the MIS is developed over its lifecycle without having any models. It causes many problems such as lack of reliability of system design specifications. In order to overcome these problems, a model theory approach was proposed. The approach is based on an idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly correspond to changes of business logics or implementing technologies. In the model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems applying the model-driven development method to a component of the model theory approach. The experiment has shown that the reduction in effort amounts to more than 30% of the total effort.

  4. Extension of local front reconstruction method with controlled coalescence model

    Science.gov (United States)

    Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.

    2018-02-01

    The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulate such flows with standard fixed grid methods due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with a film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, the LFRM method is better in predicting the droplet collisions, especially at high velocity in comparison with other fixed grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method). When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that the LFRM coupled with film drainage models is much better in predicting the collision dynamics than the traditional methods.

  5. Prospective Mathematics Teachers' Opinions about Mathematical Modeling Method and Applicability of This Method

    Science.gov (United States)

    Akgün, Levent

    2015-01-01

    The aim of this study is to identify prospective secondary mathematics teachers' opinions about the mathematical modeling method and the applicability of this method in high schools. The case study design, which is among the qualitative research methods, was used in the study. The study was conducted with six prospective secondary mathematics…

  6. A rapid method of accurate detection and differentiation of Newcastle disease virus pathotypes by demonstrating multiple bands in degenerate primer based nested RT-PCR.

    Science.gov (United States)

    Desingu, P A; Singh, S D; Dhama, K; Kumar, O R Vinodh; Singh, R; Singh, R K

    2015-02-01

    A rapid and accurate method of detection and differentiation of virulent and avirulent Newcastle disease virus (NDV) pathotypes was developed. The NDV detection was carried out for different domestic avian field isolates and pigeon paramyxovirus-1 (25 field isolates and 9 vaccine strains) by using an APMV-1 "fusion" (F) gene Class II specific external primer pair A and B (535 bp) and internal primer pair C and D (238 bp) based nested reverse transcriptase PCR (RT-PCR). The internal degenerate reverse primer D is specific for the F gene cleavage position of virulent strains of NDV. The nested RT-PCR products of avirulent strains showed two bands (535 bp and 424 bp) while virulent strains showed four bands (535 bp, 424 bp, 349 bp and 238 bp) on agarose gel electrophoresis. This is the first report regarding development and use of a degenerate primer based nested RT-PCR for accurate detection and differentiation of NDV pathotypes by demonstrating multiple PCR band patterns. Being a rapid, simple, and economical test, the developed method could serve as a valuable alternate diagnostic tool for characterizing NDV isolates and carrying out molecular epidemiological surveillance studies for this important pathogen of poultry. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. A Comparison of Surface Acoustic Wave Modeling Methods

    Science.gov (United States)

    Wilson, W. c.; Atkinson, G. M.

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method, a first-order model, and two second-order matrix methods: the conventional matrix approach and a modified matrix approach that is extended to include internal finger reflections. The second-order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented alongside measured data from devices.
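
    For orientation, the first-order impulse-response idea can be sketched in a few lines: a uniform interdigital transducer's magnitude response is approximately a sinc centred on the synchronous frequency. The values of f0 and Np below are illustrative, not taken from the paper:

```python
import numpy as np

f0 = 100e6   # synchronous frequency (Hz), hypothetical
Np = 50      # number of finger pairs, hypothetical

f = np.linspace(0.9 * f0, 1.1 * f0, 2001)
x = Np * np.pi * (f - f0) / f0
H = np.abs(np.sinc(x / np.pi))   # np.sinc(t) = sin(pi*t)/(pi*t), so this is |sin(x)/x|

# crude 3 dB bandwidth estimate from the main lobe
print("3 dB points near:", f[H >= H.max() / np.sqrt(2)][[0, -1]] / 1e6, "MHz")
```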

  8. Object Oriented Modeling : A method for combining model and software development

    NARCIS (Netherlands)

    Van Lelyveld, W.

    2010-01-01

    When requirements for a new model cannot be met by available modeling software, new software can be developed for a specific model. Methods for the development of both model and software exist, but a method for combined development has not been found. A compatible way of thinking is required to

  9. Method for modeling social care processes for national information exchange.

    Science.gov (United States)

    Miettinen, Aki; Mykkänen, Juha; Laaksonen, Maarit

    2012-01-01

    Finnish social services include 21 service commissions of social welfare including Adoption counselling, Income support, Child welfare, Services for immigrants and Substance abuse care. This paper describes the method used for process modeling in the National project for IT in Social Services in Finland (Tikesos). The process modeling in the project aimed to support common national target state processes from the perspective of national electronic archive, increased interoperability between systems and electronic client documents. The process steps and other aspects of the method are presented. The method was developed, used and refined during the three years of process modeling in the national project.

  10. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    Science.gov (United States)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

    The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well-posed if the three following conditions hold: 1) there exists a solution, 2) the solution is unique and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well-posed problem, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary
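
    The regularization step described above can be illustrated on a generic linear inverse problem. This is a hedged sketch with a synthetic ill-conditioned model, not the DALEC system:

```python
import numpy as np

# Ill-posed linear inverse problem h(x) = y, regularised with a Tikhonov penalty:
#   x_reg = argmin ||Hx - y||^2 + alpha^2 ||x||^2
rng = np.random.default_rng(0)
n = 20
H = np.vander(np.linspace(0, 1, n), n, increasing=True)  # badly conditioned model
x_true = rng.normal(size=n)
y = H @ x_true + 1e-6 * rng.normal(size=n)               # noisy observations

alpha = 1e-3
x_naive = np.linalg.solve(H, y)  # unregularised: typically unstable
x_reg = np.linalg.solve(H.T @ H + alpha**2 * np.eye(n), H.T @ y)

print("naive error:      ", np.linalg.norm(x_naive - x_true))
print("regularised error:", np.linalg.norm(x_reg - x_true))
```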

  11. [A new method of fabricating photoelastic model by rapid prototyping].

    Science.gov (United States)

    Fan, Li; Huang, Qing-feng; Zhang, Fu-qiang; Xia, Yin-pei

    2011-10-01

    To explore a novel method of fabricating photoelastic models using the rapid prototyping technique. A mandible model was made by rapid prototyping with computerized three-dimensional reconstruction; the photoelastic model with teeth was then fabricated by traditional impression duplicating and mould casting. The photoelastic model of the mandible with teeth, fabricated indirectly by rapid prototyping, was very similar to the prototype in geometry and physical parameters. The model was of high optical sensitivity and met the experimental requirements. A photoelastic model of the mandible with teeth indirectly fabricated by rapid prototyping meets the photoelastic experimental requirements well.

  12. Normal and Fibrotic Rat Livers Demonstrate Shear Strain Softening and Compression Stiffening: A Model for Soft Tissue Mechanics.

    Directory of Open Access Journals (Sweden)

    Maryna Perepelyuk

    Tissues including liver stiffen and acquire more extracellular matrix with fibrosis. The relationship between matrix content and stiffness, however, is non-linear, and stiffness is only one component of tissue mechanics. The mechanical response of tissues such as liver to physiological stresses is not well described, and models of tissue mechanics are limited. To better understand the mechanics of the normal and fibrotic rat liver, we carried out a series of studies using parallel plate rheometry, measuring the response to compressive, extensional, and shear strains. We found that the shear storage and loss moduli G' and G" and the apparent Young's moduli measured by uniaxial strain orthogonal to the shear direction increased markedly with both progressive fibrosis and increasing compression, that livers shear strain softened, and that significant increases in shear modulus with compressional stress occurred within a range consistent with increased sinusoidal pressures in liver disease. Proteoglycan content and integrin-matrix interactions were significant determinants of liver mechanics, particularly in compression. We propose a new non-linear constitutive model of the liver. A key feature of this model is that, while it assumes overall liver incompressibility, it takes into account water flow and solid phase compressibility. In sum, we report a detailed study of non-linear liver mechanics under physiological strains in the normal state, early fibrosis, and late fibrosis. We propose a constitutive model that captures compression stiffening, tension softening, and shear softening, and can be understood in terms of the cellular and matrix components of the liver.

  13. High-resolution modeling of the western North American power system demonstrates low-cost and low-carbon futures

    International Nuclear Information System (INIS)

    Nelson, James; Johnston, Josiah; Mileva, Ana; Fripp, Matthias; Hoffman, Ian; Petros-Good, Autumn; Blanco, Christian; Kammen, Daniel M.

    2012-01-01

    Decarbonizing electricity production is central to reducing greenhouse gas emissions. Exploiting intermittent renewable energy resources demands power system planning models with high temporal and spatial resolution. We use a mixed-integer linear programming model – SWITCH – to analyze least-cost generation, storage, and transmission capacity expansion for western North America under various policy and cost scenarios. Current renewable portfolio standards are shown to be insufficient to meet emission reduction targets by 2030 without new policy. With stronger carbon policy consistent with a 450 ppm climate stabilization scenario, power sector emissions can be reduced to 54% of 1990 levels by 2030 using different portfolios of existing generation technologies. Under a range of resource cost scenarios, most coal power plants would be replaced by solar, wind, gas, and/or nuclear generation, with intermittent renewable sources providing at least 17% and as much as 29% of total power by 2030. The carbon price to induce these deep carbon emission reductions is high, but, assuming carbon price revenues are reinvested in the power sector, the cost of power is found to increase by at most 20% relative to business-as-usual projections. - Highlights: ► Intermittent generation necessitates high-resolution electric power system models. ► We apply the SWITCH planning model to the western North American grid. ► We explore carbon policy and resource cost scenarios through 2030. ► As the carbon price rises, coal generation is replaced with solar, wind, gas and/or nuclear generation. ► A 450 ppm climate stabilization target can be met at a 20% or lower cost increase.
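
    A toy version of the least-cost expansion logic, with illustrative numbers and none of SWITCH's temporal, storage, transmission or unit-commitment detail, can be written as a small linear program:

```python
from scipy.optimize import linprog

# Choose GW of gas, wind and solar to serve peak demand under an emissions
# cap, minimising cost. All coefficients are invented for illustration.
#            gas   wind  solar
cost =      [1.0,  1.5,  1.3]   # cost proxy per GW built
credit =    [1.0,  0.35, 0.25]  # capacity credit at peak
emissions = [0.4,  0.0,  0.0]   # emissions proxy per GW of gas run

demand = 100.0   # GW of peak demand to cover
cap = 10.0       # emissions budget

# linprog minimises c @ x subject to A_ub @ x <= b_ub
A_ub = [[-c for c in credit],   # sum(credit * x) >= demand
        emissions]              # emissions cap
b_ub = [-demand, cap]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3,
              method="highs")
print(dict(zip(["gas", "wind", "solar"], res.x.round(1))))
```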

  14. A case study of bats and white-nose syndrome demonstrating how to model population viability with evolutionary effects.

    Science.gov (United States)

    Maslo, Brooke; Fefferman, Nina H

    2015-08-01

    Ecological factors generally affect population viability on rapid time scales. Traditional population viability analyses (PVA) therefore focus on alleviating ecological pressures, discounting potential evolutionary impacts on individual phenotypes. Recent studies of evolutionary rescue (ER) focus on cases in which severe, environmentally induced population bottlenecks trigger a rapid evolutionary response that can potentially reverse demographic threats. ER models have focused on shifting genetics and resulting population recovery, but no one has explored how to incorporate those findings into PVA. We integrated ER into PVA to identify the critical decision interval for evolutionary rescue (DIER) under which targeted conservation action should be applied to buffer populations undergoing ER against extinction from stochastic events and to determine the most appropriate vital rate to target to promote population recovery. We applied this model to little brown bats (Myotis lucifugus) affected by white-nose syndrome (WNS), a fungal disease causing massive declines in several North American bat populations. Under the ER scenario, the model predicted that the DIER period for little brown bats was within 11 years of initial WNS emergence, after which they stabilized at a positive growth rate (λ = 1.05). By comparing our model results with population trajectories of multiple infected hibernacula across the WNS range, we concluded that ER is a potential explanation of observed little brown bat population trajectories across multiple hibernacula within the affected range. Our approach provides a tool that can be used by all managers to provide testable hypotheses regarding the occurrence of ER in declining populations, suggest empirical studies to better parameterize the population genetics and conservation-relevant vital rates, and identify the DIER period during which management strategies will be most effective for species conservation.

  15. Demonstration of a Model-Based Technology for Monitoring Water Quality and Corrosion in Water-Distribution systems

    Science.gov (United States)

    2016-12-01

    that Fort Drum uses water from two sources: (1) treated groundwater from its on-post wells and (2) treated surface water supplied by the Development... Cost items cited include: complete replacement of distribution system piping, $21 million (Year 10 and Year 30); leak repair, $40,000 (annual); bottled water for drinking, $20,000 per... about effects of the installation's dual water supplies on operation of the water-distribution system.

  16. An in silico agent-based model demonstrates Reelin function in directing lamination of neurons during cortical development.

    Science.gov (United States)

    Caffrey, James R; Hughes, Barry D; Britto, Joanne M; Landman, Kerry A

    2014-01-01

    The characteristic six-layered appearance of the neocortex arises from the correct positioning of pyramidal neurons during development and alterations in this process can cause intellectual disabilities and developmental delay. Malformations in cortical development arise when neurons either fail to migrate properly from the germinal zones or fail to cease migration in the correct laminar position within the cortical plate. The Reelin signalling pathway is vital for correct neuronal positioning as loss of Reelin leads to a partially inverted cortex. The precise biological function of Reelin remains controversial and debate surrounds its role as a chemoattractant or stop signal for migrating neurons. To investigate this further we developed an in silico agent-based model of cortical layer formation. Using this model we tested four biologically plausible hypotheses for neuron motility and four biologically plausible hypotheses for the loss of neuron motility (conversion from migration). A matrix of 16 combinations of motility and conversion rules was applied against the known structure of mouse cortical layers in the wild-type cortex, the Reelin-null mutant, the Dab1-null mutant and a conditional Dab1 mutant. Using this approach, many combinations of motility and conversion mechanisms can be rejected. For example, the model does not support Reelin acting as a repelling or as a stopping signal. In contrast, the study lends very strong support to the notion that the glycoprotein Reelin acts as a chemoattractant for neurons. Furthermore, the most viable proposition for the conversion mechanism is one in which conversion is triggered when a motile neuron senses, in its near vicinity, neurons that have already converted. Therefore, this model helps elucidate the function of Reelin during neuronal migration and cortical development.
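
    One motility/conversion rule pair from the 4x4 matrix can be caricatured in one dimension. This is an illustrative cartoon, not the authors' model; all parameters are invented:

```python
import numpy as np

# Neurons climb toward a Reelin source at `top` (chemoattraction rule) and
# convert, i.e. stop migrating, when they sense an already-converted neuron
# within a sensing radius (conversion rule).
rng = np.random.default_rng(1)
top, radius, steps = 100.0, 2.0, 400
pos = np.zeros(50)                  # neurons start at the ventricular zone
converted = np.zeros(50, dtype=bool)
converted[0] = True                 # a seed "predecessor" neuron
pos[0] = top

for _ in range(steps):
    movers = ~converted
    # chemoattraction: biased random walk toward the Reelin source
    pos[movers] += 1.0 + 0.5 * rng.normal(size=movers.sum())
    pos = np.clip(pos, 0.0, top)
    # conversion: stop when close to any already-converted neuron
    for i in np.where(movers)[0]:
        if np.any(np.abs(pos[i] - pos[converted]) < radius):
            converted[i] = True

print("converted:", converted.sum(), "mean position:", pos[converted].mean().round(1))
```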

  17. Dabigatran – an exemplar case history demonstrating the need for comprehensive models to optimise the use of new drugs

    Directory of Open Access Journals (Sweden)

    Brian eGodman

    2014-06-01

    Background: There are potential conflicts between authorities and companies to fund new premium priced drugs, especially where there are effectiveness, safety and/or budget concerns. Dabigatran, a new oral anticoagulant for the prevention of stroke in patients with non-valvular atrial fibrillation (AF), exemplifies this issue. Whilst new effective treatments are needed, there are issues in the elderly with dabigatran due to variable drug concentrations, no known antidote and dependence on renal elimination. Published studies showed dabigatran to be cost-effective but there are budget concerns given the prevalence of AF. These concerns resulted in extensive activities pre- to post-launch to manage its introduction. Objective: To (i) review authority activities across countries, (ii) use the findings to develop new models to better manage the entry of new drugs, and (iii) review the implications based on post-launch activities. Methodology: (i) Descriptive review and appraisal of activities regarding dabigatran, (ii) development of guidance for key stakeholder groups through an iterative process, (iii) refining guidance following post-launch studies. Results: A plethora of activities to manage dabigatran, including extensive pre-launch activities, risk sharing arrangements, prescribing restrictions and monitoring of prescribing post launch. Reimbursement has been denied in some countries due to concerns with its budget impact and/or excessive bleeding. Development of a new model and future guidance is proposed to better manage the entry of new drugs, centring on three pillars of pre-, peri- and post-launch activities. Post-launch activities include increasing use of patient registries to monitor the safety and effectiveness of new drugs in clinical practice. Conclusion: Models for introducing new drugs are essential to optimise their prescribing, especially where there are concerns. Without such models, new drugs may be withdrawn prematurely and/or struggle for

  19. Stencil method: a Markov model for transport in porous media

    Science.gov (United States)

    Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.

    2016-12-01

    In porous media the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov Chain Monte Carlo. In previous works, multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both of these choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore, there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot be readily applied to model transport in unstructured networks. In this work we propose a novel approach for constructing a Markov model in time (the stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal Markov models and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to model dispersion both on structured and unstructured networks.
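
    The core of a temporal Markov model for a discrete velocity process is easy to sketch. The transition matrix and velocity states below are invented, not the paper's calibrated stencil:

```python
import numpy as np

rng = np.random.default_rng(2)
v_states = np.array([0.1, 1.0, 3.0])      # slow / medium / fast velocities
P = np.array([[0.90, 0.08, 0.02],         # row-stochastic transition matrix
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

n, steps, dt = 5000, 200, 1.0
state = rng.integers(0, 3, size=n)
x = np.zeros(n)
for _ in range(steps):
    x += v_states[state] * dt
    # sample each particle's next state from its current transition row
    u = rng.random(n)
    state = (u[:, None] > np.cumsum(P[state], axis=1)).sum(axis=1)

print("mean displacement:", x.mean().round(1), "spread (std):", x.std().round(1))
```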

  20. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for researchers…

  1. A numerical method for a transient two-fluid model

    International Nuclear Information System (INIS)

    Le Coq, G.; Libmann, M.

    1978-01-01

    The transient boiling two-phase flow is studied. In nuclear reactors, the driving conditions for transient boiling are a pump power decay and/or an increase in heating power. The physical model adopted for the two-phase flow is the two-fluid model, with the assumption that the vapor remains at saturation. The numerical method for solving the thermohydraulic problems is a shooting method, which is highly implicit. A particular problem exists at the boiling and condensation fronts. A computer code using this numerical method allows the calculation of a transient boiling initiated from a steady state for a PWR or for an LMFBR.
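
    The shooting idea itself is generic and can be illustrated on a simple two-point boundary value problem. This is a hedged sketch of the technique, not the reactor code:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Solve y'' = -y with y(0) = 0, y(1) = 1 by guessing the initial slope s
# and driving the mismatch at the far boundary to zero.
def shoot(s):
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, 1.0), [0.0, s],
                    rtol=1e-10, atol=1e-10)
    return sol.y[0, -1] - 1.0   # terminal boundary mismatch

s_star = brentq(shoot, 0.1, 5.0)
print("initial slope:", round(s_star, 6),
      "(exact: 1/sin(1) =", round(1.0 / np.sin(1.0), 6), ")")
```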

  2. RDandD Programme 2010. Programme for research, development and demonstration of methods for the management and disposal of nuclear waste; Fud-program 2010. Program foer forskning, utveckling och demonstration av metoder foer hantering och slutfoervaring av kaernavfall

    Energy Technology Data Exchange (ETDEWEB)

    2010-09-15

    The RD&D Programme 2010 gives an account of SKB's plans for research, development and demonstration during the period 2011-2016. SKB's activities are divided into two main areas - the programme for Low and Intermediate Level Waste (the Loma programme) and the Nuclear Fuel Programme. The RD&D Programme 2010 consists of five parts: Part I: Overall Plan, Part II: Loma programme, Part III: Nuclear Fuel Programme, Part IV: Research on analysis of long-term safety, Part V: Social Science Research. The 2007 RD&D programme focused primarily on technology development to realize the final repository for spent nuclear fuel. The actions described were aimed at increasing knowledge of long-term safety and at obtaining technical data for the application under the Nuclear Activities Act for the final repository for spent fuel and under the Environmental Code for the repository system. Many important results from these efforts are reported in this programme. An overall account of the results will be given in the licensing application in early 2011. The authorities' review of the RD&D programme 2007, and its completion, called for clarification of the plans and programmes for the final repository for short-lived radioactive waste, SFR, and the final repository for waste, SFL. This RD&D programme describes these plans in more detail.

  3. A qualitative model construction method of nuclear power plants for effective diagnostic knowledge generation

    International Nuclear Information System (INIS)

    Yoshikawa, Shinji; Endou, Akira; Kitamura, Yoshinobu; Sasajima, Munehiko; Ikeda, Mitsuru; Mizoguchi, Riichiro.

    1994-01-01

    This paper discusses a method to construct a qualitative model of a nuclear power plant, in order to generate effective diagnostic knowledge. The proposed method is to prepare deep knowledge to be provided to a knowledge compiler based upon qualitative reasoning (QR). The necessity of knowledge compilation for nuclear plant diagnosis is explained first; conventionally experienced problems in qualitative reasoning and a proposed method to overcome them are shown next; then a sample procedure to build a qualitative nuclear plant model is demonstrated. (author)

  4. Application of model-based and knowledge-based measuring methods as analytical redundancy

    International Nuclear Information System (INIS)

    Hampel, R.; Kaestner, W.; Chaker, N.; Vandreier, B.

    1997-01-01

    The safe operation of nuclear power plants requires the application of modern and intelligent methods of signal processing, for normal operation as well as for the management of accident conditions. Such modern and intelligent methods are model-based and knowledge-based ones, founded on analytical knowledge (mathematical models) as well as experience (fuzzy information). In addition to the existing hardware redundancies, analytical redundancies will be established with the help of these modern methods. These analytical redundancies support the operating staff during decision-making. The design of a hybrid model-based and knowledge-based measuring method will be demonstrated by the example of a fuzzy-supported observer, in which a classical linear observer is combined with a fuzzy-supported adaptation of the model matrices of the observer model. This application is realized for the estimation of non-measurable variables such as steam content and mixture level within pressure vessels containing a water-steam mixture during accidental depressurizations. For this example the existing non-linearities will be classified and the verification of the model will be explained. The advantages of the hybrid method in comparison to classical model-based measuring methods will be demonstrated by the estimation results. The consideration of the parameters which have an important influence on the non-linearities requires the inclusion of high-dimensional structures of fuzzy logic within the model-based measuring methods. Therefore methods will be presented which allow the conversion of these high-dimensional structures to two-dimensional structures of fuzzy logic. As an efficient solution of this problem, a method based on cascaded fuzzy controllers will be presented. (author). 2 refs, 12 figs, 5 tabs
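
    The classical core that the fuzzy adaptation builds on is a linear (Luenberger) observer. A bare-bones sketch with an illustrative plant and gain, not the pressure-vessel model:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # plant dynamics (illustrative)
C = np.array([[1.0, 0.0]])                 # only the first state is measured
L = np.array([[2.0], [3.0]])               # observer gain (assumed given)

dt, steps = 0.01, 1000
x = np.array([1.0, 0.0])    # true state, partly non-measurable
xh = np.zeros(2)            # observer estimate
for _ in range(steps):
    y = C @ x                                            # measurable output
    x = x + dt * (A @ x)                                 # plant update
    xh = xh + dt * (A @ xh + (L @ (y - C @ xh)).ravel()) # correction term

print("estimation error:", np.abs(x - xh).round(4))
```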

  5. PBTK modeling demonstrates contribution of dermal and inhalation exposure components to end-exhaled breath concentrations of naphthalene.

    Science.gov (United States)

    Kim, David; Andersen, Melvin E; Chao, Yi-Chun E; Egeghy, Peter P; Rappaport, Stephen M; Nylander-French, Leena A

    2007-06-01

    Dermal and inhalation exposure to jet propulsion fuel 8 (JP-8) have been measured in a few occupational exposure studies. However, a quantitative understanding of the relationship between external exposures and end-exhaled air concentrations has not been described for occupational and environmental exposure scenarios. Our goal was to construct a physiologically based toxicokinetic (PBTK) model that quantitatively describes the relative contribution of dermal and inhalation exposures to the end-exhaled air concentrations of naphthalene among U.S. Air Force personnel. The PBTK model comprised five compartments representing the stratum corneum, viable epidermis, blood, fat, and other tissues. The parameters were optimized using exclusively human exposure and biological monitoring data. The optimized values of parameters for naphthalene were a) permeability coefficient for the stratum corneum 6.8 × 10⁻⁵ cm/hr, b) permeability coefficient for the viable epidermis 3.0 × 10⁻³ cm/hr, c) fat:blood partition coefficient 25.6, and d) other tissue:blood partition coefficient 5.2. The skin permeability coefficient was comparable to the values estimated from in vitro studies. Based on simulations of workers' exposures to JP-8 during aircraft fuel-cell maintenance operations, the median relative contribution of dermal exposure to the end-exhaled breath concentration of naphthalene was 4% (10th percentile 1% and 90th percentile 11%). PBTK modeling allowed contributions of the end-exhaled air concentration of naphthalene to be partitioned between dermal and inhalation routes of exposure. Further study of inter- and intraindividual variations in exposure assessment is required to better characterize the toxicokinetic behavior of JP-8 components after occupational and/or environmental exposures.
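
    For orientation, the structure of such a compartment model can be caricatured with two compartments (the paper uses five; every parameter below is invented):

```python
from scipy.integrate import solve_ivp

# Skin absorbs at a permeability-limited rate; blood clears first order.
kp = 3.0e-3      # skin uptake rate proxy (1/h), hypothetical
kclr = 0.5       # blood clearance (1/h), hypothetical
c_surface = 1.0  # dermal surface concentration, arbitrary units

def rhs(t, y):
    skin, blood = y
    uptake = kp * (c_surface - skin)   # permeability-limited absorption
    transfer = 0.2 * skin              # skin -> blood transfer, hypothetical
    return [uptake - transfer, transfer - kclr * blood]

sol = solve_ivp(rhs, (0.0, 8.0), [0.0, 0.0])
print("blood level after an 8 h shift:", sol.y[1, -1])
```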

  6. Physical Model Method for Seismic Study of Concrete Dams

    Directory of Open Access Journals (Sweden)

    Bogdan Roşca

    2008-01-01

    The study of the dynamic behaviour of concrete dams by means of the physical model method is very useful for understanding the failure mechanism of these structures under the action of strong earthquakes. The physical model method consists of two main processes. First, a study model must be designed by a physical modeling process using dynamic modeling theory; the result is a system of equations for dimensioning the physical model. After the construction and instrumentation of the scale physical model, a structural analysis based on experimental means is performed. The experimental results are gathered and are available to be analysed. Depending on the aim of the research, an elastic or a failure physical model may be designed. The requirements for constructing an elastic model are easier to satisfy than those for a failure model, but the results obtained provide narrower information. In order to study the behaviour of concrete dams under strong seismic action, failure physical models are required that can accurately simulate the possible opening of joints, sliding between concrete blocks and the cracking of concrete. The design relations for both elastic and failure physical models are based on dimensional analysis and consist of similitude relations among the physical quantities involved in the phenomenon. The use of physical models of large or medium dimensions, as well as their instrumentation, offers great advantages, but this operation involves a large amount of financial, logistic and time resources.
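
    The similitude step can be illustrated for a dynamic elastic model, where chosen length, density and modulus scale factors fix every other scale. The ratios below are illustrative, not from the paper:

```python
import math

# model:prototype ratios, hypothetical
L_r, rho_r, E_r = 1 / 50, 1.0, 1 / 4

v_r = math.sqrt(E_r / rho_r)   # wave-speed (velocity) scale
t_r = L_r / v_r                # time scale
a_r = E_r / (rho_r * L_r)      # acceleration scale
print(f"time scale 1:{1 / t_r:.0f}, acceleration scale {a_r:.1f}x")
```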

  7. Chemical changes demonstrated in cartilage by synchrotron infrared microspectroscopy in an antibody-induced murine model of rheumatoid arthritis

    Science.gov (United States)

    Croxford, Allyson M.; Selva Nandakumar, Kutty; Holmdahl, Rikard; Tobin, Mark J.; McNaughton, Don; Rowley, Merrill J.

    2011-06-01

    Collagen antibody-induced arthritis develops in mice following passive transfer of monoclonal antibodies (mAbs) to type II collagen (CII) and is attributed to effects of proinflammatory immune complexes, but transferred mAbs may react directly and damagingly with CII. To determine whether such mAbs cause cartilage damage in vivo in the absence of inflammation, mice lacking complement factor 5 that do not develop joint inflammation were injected intravenously with two arthritogenic mAbs to CII, M2139 and CIIC1. Paws were collected at day 3, decalcified, paraffin embedded, and 5-μm sections were examined using standard histology and synchrotron Fourier-transform infrared microspectroscopy (FTIRM). None of the mice injected with mAb showed visual or histological evidence of inflammation but there were histological changes in the articular cartilage including loss of proteoglycan and altered chondrocyte morphology. Findings using FTIRM at high lateral resolution revealed loss of collagen and the appearance of a new peak at 1635 cm⁻¹ at the surface of the cartilage interpreted as cellular activation. Thus, we demonstrate the utility of synchrotron FTIRM for examining chemical changes in diseased cartilage at the microscopic level and establish that arthritogenic mAbs to CII do cause cartilage damage in vivo in the absence of inflammation.

  8. A simple flow-concentration modelling method for integrating water ...

    African Journals Online (AJOL)

    A simple flow-concentration modelling method for integrating water quality and ... flow requirements are assessed for maintenance low flow, drought low flow ... the instream concentrations of chemical constituents that will arise from different ...

  9. Comparison of surrogate models with different methods in ...

    Indian Academy of Sciences (India)

    In this article, polynomial regression (PR), radial basis function artificial neural network (RBFANN), and kriging … 10 kriging models with different parameters were also obtained. … shapes using stochastic optimization methods and com…

  10. Method and apparatus for modeling, visualization and analysis of materials

    KAUST Repository

    Aboulhassan, Amal; Hadwiger, Markus

    2016-01-01

    processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling

  11. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream

  12. Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science

    Science.gov (United States)

    Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)

    2001-01-01

    Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each method's strengths are utilized. The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are

  13. Rapid and effective decontamination of chlorophenol-contaminated soil by sorption into commercial polymers: concept demonstration and process modeling.

    Science.gov (United States)

    Tomei, M Concetta; Mosca Angelucci, Domenica; Ademollo, Nicoletta; Daugulis, Andrew J

    2015-03-01

    Solid phase extraction performed with commercial polymer beads to treat soil contaminated by chlorophenols (4-chlorophenol, 2,4-dichlorophenol and pentachlorophenol) as single compounds and in a mixture has been investigated in this study. Soil-water-polymer partition tests were conducted to determine the relative affinities of single compounds in soil-water and polymer-water pairs. Subsequent soil extraction tests were performed with Hytrel 8206, the polymer showing the highest affinity for the tested chlorophenols. Factors that were examined were polymer type, moisture content, and contamination level. Increased moisture content (up to 100%) improved the extraction efficiency for all three compounds. Extraction tests at this upper level of moisture content showed removal efficiencies ≥70% for all the compounds and their ternary mixture, for 24 h of contact time, in contrast to the weeks or months normally required for conventional ex situ remediation processes. A dynamic model characterizing the rate and extent of decontamination was also formulated, calibrated and validated with the experimental data. The proposed model, based on the simplified approach of "lumped parameters" for the mass transfer coefficients, provided very good predictions of the experimental data for the absorptive removal of contaminants from soil at different individual solute levels. Parameters evaluated from calibration by fitting of single-compound data have been successfully applied to predict mixture data, with differences between experimental and predicted data in all cases being ≤3%.
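
    The "lumped parameters" idea can be sketched as first-order exchanges relaxing toward partition-defined equilibria. All coefficients and partition ratios below are invented, not the calibrated values:

```python
from scipy.integrate import solve_ivp

k_sw, k_wp = 0.5, 1.0    # lumped mass-transfer coefficients (1/h)
K_sw, K_pw = 2.0, 50.0   # soil/water and polymer/water partition ratios

def rhs(t, y):
    c_soil, c_water, c_poly = y
    j_sw = k_sw * (c_soil / K_sw - c_water)  # soil -> water flux
    j_wp = k_wp * (c_water - c_poly / K_pw)  # water -> polymer flux
    return [-j_sw, j_sw - j_wp, j_wp]        # mass-conserving exchange

sol = solve_ivp(rhs, (0.0, 24.0), [100.0, 0.0, 0.0])
print(f"fraction extracted into polymer after 24 h: {sol.y[2, -1] / 100.0:.2f}")
```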

  14. Prototype Demonstration of Gamma- Blind Tensioned Metastable Fluid Neutron/Multiplicity/Alpha Detector – Real Time Methods for Advanced Fuel Cycle Applications

    Energy Technology Data Exchange (ETDEWEB)

    McDeavitt, Sean M. [Texas A & M Univ., College Station, TX (United States)

    2016-12-20

    The content of this report summarizes a multi-year effort to develop prototype detection equipment using the Tensioned Metastable Fluid Detector (TMFD) technology developed by Taleyarkhan [1]. The context of this development effort was to create new methods for evaluating and developing advanced methods for safeguarding nuclear materials along with instrumentation in various stages of the fuel cycle, especially in material balance areas (MBAs) and during reprocessing of used nuclear fuel. One of the challenges related to the implementation of any type of MBA and/or reprocessing technology (e.g., PUREX or UREX) is the real-time quantification and control of the transuranic (TRU) isotopes as they move through the process. Monitoring of higher actinides from their neutron emission (including multiplicity) and alpha signatures during transit in MBAs and in aqueous separations is a critical research area. By providing on-line real-time materials accountability, diversion of the materials becomes much more difficult. The Tensioned Metastable Fluid Detector (TMFD) is a transformational technology that is uniquely capable of both alpha and neutron spectroscopy while being “blind” to the intense gamma field that typically accompanies used fuel – simultaneously with the ability to provide multiplicity information as well [1-3]. The TMFD technology was proven (lab-scale) as part of a 2008 NERI-C program [1-7]. The bulk of this report describes the advancements and demonstrations made in TMFD technology. One final point to present before turning to the TMFD demonstrations is the context for discussing real-time monitoring of SNM. It is useful to review the spectrum of isotopes generated within nuclear fuel during reactor operations. Used nuclear fuel (UNF) from a light water reactor (LWR) contains fission products as well as TRU elements formed through neutron absorption/decay chains. The majority of the fission products are gamma and beta emitters and they represent the

  15. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusion) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response of these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
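
    The flavour of the NSFD approach can be shown on the simplest PK building block, first-order elimination, where a Mickens-type denominator function makes the scheme exact. This is a generic illustration, not the paper's full PK-PD system:

```python
import math

# y' = -k*y. Replacing the raw step h by phi = (1 - exp(-k*h))/k makes the
# explicit finite-difference scheme reproduce the exact solution.
k, h, y0 = 0.8, 0.5, 10.0
phi = (1.0 - math.exp(-k * h)) / k

y_std, y_nsfd = y0, y0
for n in range(1, 11):
    y_std = y_std * (1.0 - k * h)      # standard explicit Euler
    y_nsfd = y_nsfd * (1.0 - k * phi)  # nonstandard scheme
exact = y0 * math.exp(-k * h * 10)

print(f"after 10 steps: Euler {y_std:.4f}, NSFD {y_nsfd:.4f}, exact {exact:.4f}")
```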

  16. Modeling of proton-induced radioactivation background in hard X-ray telescopes: Geant4-based simulation and its demonstration by Hitomi's measurement in a low Earth orbit

    Science.gov (United States)

    Odaka, Hirokazu; Asai, Makoto; Hagino, Kouichi; Koi, Tatsumi; Madejski, Greg; Mizuno, Tsunefumi; Ohno, Masanori; Saito, Shinya; Sato, Tamotsu; Wright, Dennis H.; Enoto, Teruaki; Fukazawa, Yasushi; Hayashi, Katsuhiro; Kataoka, Jun; Katsuta, Junichiro; Kawaharada, Madoka; Kobayashi, Shogo B.; Kokubun, Motohide; Laurent, Philippe; Lebrun, Francois; Limousin, Olivier; Maier, Daniel; Makishima, Kazuo; Mimura, Taketo; Miyake, Katsuma; Mori, Kunishiro; Murakami, Hiroaki; Nakamori, Takeshi; Nakano, Toshio; Nakazawa, Kazuhiro; Noda, Hirofumi; Ohta, Masayuki; Ozaki, Masanobu; Sato, Goro; Sato, Rie; Tajima, Hiroyasu; Takahashi, Hiromitsu; Takahashi, Tadayuki; Takeda, Shin'ichiro; Tanaka, Takaaki; Tanaka, Yasuyuki; Terada, Yukikatsu; Uchiyama, Hideki; Uchiyama, Yasunobu; Watanabe, Shin; Yamaoka, Kazutaka; Yasuda, Tetsuya; Yatsu, Yoichi; Yuasa, Takayuki; Zoglauer, Andreas

    2018-05-01

    Hard X-ray astronomical observatories in orbit suffer from a significant amount of background due to radioactivation induced by cosmic-ray protons and/or geomagnetically trapped protons. Within the framework of a full Monte Carlo simulation, we present modeling of in-orbit instrumental background which is dominated by radioactivation. To reduce the computation time required by straightforward simulations of delayed emissions from activated isotopes, we insert a semi-analytical calculation that converts production probabilities of radioactive isotopes by interaction of the primary protons into decay rates at measurement time of all secondary isotopes. Therefore, our simulation method is separated into three steps: (1) simulation of isotope production, (2) semi-analytical conversion to decay rates, and (3) simulation of decays of the isotopes at measurement time. This method is verified by a simple setup that has a CdTe semiconductor detector, and shows a 100-fold improvement in efficiency over the straightforward simulation. To demonstrate its experimental performance, the simulation framework was tested against data measured with a CdTe sensor in the Hard X-ray Imager onboard the Hitomi X-ray Astronomy Satellite, which was put into a low Earth orbit with an altitude of 570 km and an inclination of 31°, and thus experienced a large amount of irradiation from geomagnetically trapped protons during its passages through the South Atlantic Anomaly. The simulation is able to treat full histories of the proton irradiation and multiple measurement windows. The simulation results agree very well with the measured data, showing that the measured background is well described by the combination of proton-induced radioactivation of the CdTe detector itself and thick Bi4Ge3O12 scintillator shields, leakage of cosmic X-ray background and albedo gamma-ray radiation, and emissions from naturally contaminated isotopes in the detector system.
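
    Step (2) of the scheme, the semi-analytical conversion, rests on the standard activation-decay relation A = P·(1 − e^(−λT))·e^(−λt): a constant production rate during irradiation is converted directly into a decay rate at measurement time. A miniature version with invented numbers (not Hitomi parameters):

```python
import numpy as np

def activity(prod_rate, half_life, t_irr, t_cool):
    """Decays/s at measurement, after irradiating t_irr and cooling t_cool (s)."""
    lam = np.log(2.0) / half_life
    return prod_rate * (1.0 - np.exp(-lam * t_irr)) * np.exp(-lam * t_cool)

# e.g. an isotope produced at 5 atoms/s with a 2 h half-life, activated during
# a 10-minute SAA passage and measured 30 minutes later:
print(activity(prod_rate=5.0, half_life=7200.0, t_irr=600.0, t_cool=1800.0))
```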

  17. Development and Demonstration of a Modeling Framework for Assessing the Efficacy of Using Mine Water for Thermoelectric Power Generation

    Energy Technology Data Exchange (ETDEWEB)

    None

    2010-03-01

    Thermoelectric power plants use large volumes of water for condenser cooling and other plant operations. Traditionally, this water has been withdrawn from the cleanest water available in streams and rivers. However, as demand for electrical power increases, it places increasing demands on freshwater resources, resulting in conflicts with other off-stream water users. In July 2002, NETL and the Governor of Pennsylvania called for the use of water from abandoned mines to replace our reliance on the diminishing and sometimes over-allocated surface water resources. In previous studies the National Mine Land Reclamation Center (NMLRC) at West Virginia University has demonstrated that mine water has the potential to reduce the capital cost of acquiring cooling water while at the same time improving the efficiency of the cooling process due to the constant water temperatures associated with deep mine discharges. The objectives of this project were to develop and demonstrate a user-friendly computer based design aid for assessing the costs, technical and regulatory aspects and potential environmental benefits of using mine water for thermoelectric generation. The framework provides a systematic process for evaluating the hydrologic, chemical, engineering and environmental factors to be considered in using mine water as an alternative to traditional freshwater supply. A field investigation and case study was conducted for the proposed 300 MW Beech Hollow Power Plant located in Champion, Pennsylvania. The field study, based on previous research conducted by NMLRC, identified mine water sources sufficient to reliably supply the 2,000-3,000 gpm water supply requirement of Beech Hollow. A water collection, transportation and treatment system was designed around this facility. Using this case study, a computer based design aid applicable to large industrial water users was developed utilizing water collection and handling principles derived in the field investigation and during previous

  18. 3D Face modeling using the multi-deformable method.

    Science.gov (United States)

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-09-25

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning that is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: accuracy test and robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D face for individuals at the end of the paper.

  19. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression model can then inversely infer the associated intrinsic state from a superficial state observed at a power plant. The diagnosis method proposed herein comprises three processes: 1) simulations for degradation conditions to obtain measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs including various root causes and/or boundary conditions, whereas the inverse what-if method calculates, via the inverse matrix, the component degradation modes behind the given superficial states.
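
    A toy version of the what-if / inverse what-if pair, with a synthetic linear response standing in for the turbine-cycle simulator:

```python
import numpy as np

rng = np.random.default_rng(3)
M_true = np.array([[1.5, 0.2], [0.3, 2.0], [0.7, 0.9]])  # hidden response y = M x

X = rng.uniform(0, 1, size=(50, 2))                  # sampled degradation states
Y = X @ M_true.T + 0.01 * rng.normal(size=(50, 3))   # "simulated" outputs

B, *_ = np.linalg.lstsq(X, Y, rcond=None)            # what-if fit: Y ~ X B, B = M^T
y_obs = np.array([1.0, 1.1, 0.9])                    # superficial states at the plant
x_hat, *_ = np.linalg.lstsq(B.T, y_obs, rcond=None)  # inverse what-if: solve M x = y
print("inferred degradation state:", x_hat.round(3))
```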

  20. Dynamic model based on Bayesian method for energy security assessment

    International Nuclear Information System (INIS)

    Augutis, Juozas; Krikštolaitis, Ričardas; Pečiulytė, Sigita; Žutautaitė, Inga

    2015-01-01

    Highlights: • Methodology for dynamic indicator model construction and forecasting of indicators. • Application of the dynamic indicator model to energy system development scenarios. • Expert judgement involvement using the Bayesian method. - Abstract: The methodology for dynamic indicator model construction and forecasting of indicators for the assessment of the energy security level is presented in this article. An indicator is a special index which provides numerical values for factors important to the investigated area. In real life, models of different processes take into account various factors that are time-dependent and dependent on each other. Thus, it is advisable to construct a dynamic model in order to describe these dependences. The energy security indicators are used as factors in the dynamic model. Usually, the values of indicators are obtained from statistical data. The developed dynamic model enables forecasting of indicator variation taking into account changes in system configuration. Energy system development is usually based on the construction of a new object. Since the parameters of changes of the new system are not exactly known, information about their influence on indicators cannot be incorporated into the model by deterministic methods. Thus, the dynamic indicator model based on historical data is adjusted by a probabilistic model for the influence of new factors on indicators, using the Bayesian method
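
    The Bayesian adjustment step can be caricatured as a conjugate-normal combination of a data-driven forecast with an expert prior. The numbers below are illustrative:

```python
def combine(mu_data, var_data, mu_expert, var_expert):
    """Precision-weighted posterior mean/variance of an indicator."""
    w = 1.0 / var_data + 1.0 / var_expert
    mu = (mu_data / var_data + mu_expert / var_expert) / w
    return mu, 1.0 / w

# statistical forecast 0.62 +/- 0.1^2, expert judgement 0.45 +/- 0.2^2
print(combine(mu_data=0.62, var_data=0.01, mu_expert=0.45, var_expert=0.04))
```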

  1. Spatial and temporal changes in the structure of groundwater nitrate concentration time series (1935-1999) as demonstrated by autoregressive modelling

    Science.gov (United States)

    Jones, A. L.; Smart, P. L.

    2005-08-01

    Autoregressive modelling is used to investigate the internal structure of long-term (1935-1999) records of nitrate concentration for five karst springs in the Mendip Hills. There is a significant short term (1-2 months) positive autocorrelation at three of the five springs due to the availability of sufficient nitrate within the soil store to maintain concentrations in winter recharge for several months. The absence of short term (1-2 months) positive autocorrelation in the other two springs is due to the marked contrast in land use between the limestone and swallet parts of the catchment, rapid concentrated recharge from the latter causing short term switching in the dominant water source at the spring and thus fluctuating nitrate concentrations. Significant negative autocorrelation is evident at lags varying from 4 to 7 months through to 14-22 months for individual springs, with positive autocorrelation at 19-20 months at one site. This variable timing is explained by moderation of the exhaustion effect in the soil by groundwater storage, which gives longer residence times in large catchments and those with a dominance of diffuse flow. The lags derived from autoregressive modelling may therefore provide an indication of average groundwater residence times. Significant differences in the structure of the autocorrelation function for successive 10-year periods are evident at Cheddar Spring, and are explained by the effect that the ploughing up of grasslands during the Second World War and increased fertiliser usage had on available nitrogen in the soil store. This effect is moderated by the influence of summer temperatures on rates of mineralization, and of both summer and winter rainfall on the timing and magnitude of nitrate leaching. The pattern of nitrate leaching also appears to have been perturbed by the 1976 drought.
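
    The autoregressive fit itself is straightforward. A sketch on synthetic monthly data (the real inputs are the 1935-1999 spring records), fitting AR coefficients by least squares:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(600)
series = np.sin(2 * np.pi * t / 12) + 0.5 * rng.normal(size=t.size)  # seasonal + noise

p = 14  # lags spanning just over one year
# design matrix: row i holds the p most recent values before time i
rows = np.array([series[i - p:i][::-1] for i in range(p, series.size)])
coef, *_ = np.linalg.lstsq(rows, series[p:], rcond=None)
for lag, c in enumerate(coef, start=1):
    if abs(c) > 0.15:  # crude screen for notable lags
        print(f"lag {lag:2d} months: coefficient {c:+.2f}")
```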

  2. Two updating methods for dissipative models with non symmetric matrices

    International Nuclear Information System (INIS)

    Billet, L.; Moine, P.; Aubry, D.

    1997-01-01

    In this paper the feasibility of extending two updating methods to rotating machinery models is considered; the particularity of rotating machinery models is that they use non-symmetric stiffness and damping matrices. It is shown that the two methods described here, the inverse eigensensitivity method and the error in constitutive relation method, can be adapted to such models given some modification. As far as the inverse sensitivity method is concerned, an error function based on the difference between calculated and measured right-hand eigenmode shapes and calculated and measured eigenvalues is used. Concerning the error in constitutive relation method, the equation which defines the error has to be modified because the stiffness matrix is not positive definite. The advantage of this modification is that, in some cases, it is possible to focus the updating process on some specific model parameters. Both methods were validated on a simple test model consisting of a two-bearing and disc rotor system. (author)

  3. New method for studying the microscopic foundations of the interacting boson model

    International Nuclear Information System (INIS)

    Klein, A.; Vallieres, M.

    1981-01-01

    We describe (i) a mapping, using a multishell seniority basis, from a prescribed subspace of a shell model space to an associated boson space. (ii) A new dynamical procedure for selecting the collective variables within the boson space, based on the invariance of the trace. (iii) A comparison with exact calculations for a multi-level pairing model, to demonstrate that the method works. (orig.)

  4. Extrapolation method in the Monte Carlo Shell Model and its applications

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2011-01-01

    We demonstrate how the energy-variance extrapolation method works using the sequence of approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking 56Ni in the pf shell as an example. The extrapolation method is shown to work well even in cases where the MCSM shows slow convergence, such as 72Ge in the f5pg9 shell. The structure of 72Se is also studied, including a discussion of the shape-coexistence phenomenon.
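
    The mechanics of the extrapolation are simple to illustrate: the energies of a sequence of improving approximate wave functions are fitted as a function of the energy variance and extrapolated to zero variance, where the exact energy is recovered. The numbers below are invented purely for illustration, not MCSM output.

    ```python
    import numpy as np

    # Energy-variance extrapolation: for approximate wave functions of increasing
    # quality, plot the energy E against the energy variance <H^2> - <H>^2 and
    # extrapolate to zero variance. Hypothetical values for illustration only.
    variance = np.array([0.90, 0.55, 0.30, 0.18, 0.10])            # illustrative
    energy = np.array([-203.1, -203.9, -204.5, -204.8, -205.0])    # illustrative (MeV)

    # First-order fit E(v) ~ E_exact + c*v; a second-order fit is also common
    coeffs = np.polyfit(variance, energy, deg=1)
    print(f"extrapolated energy at zero variance: {np.polyval(coeffs, 0.0):.2f} MeV")
    ```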

  5. A sediment graph model based on SCS-CN method

    Science.gov (United States)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    This paper proposes new conceptual sediment graph models based on coupling of popular and extensively used methods, viz., the Nash-model-based instantaneous unit sediment graph (IUSG), the Soil Conservation Service curve number (SCS-CN) method, and the power law. These models vary in their complexity, and this paper tests their performance using data from the Nagwan watershed (area = 92.46 km²) in India. The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distributions) as well as total sediment yield.
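
    The SCS-CN component that these sediment graph models couple to reduces to a closed-form runoff equation: Q = (P − Ia)² / (P − Ia + S) for P > Ia, with potential retention S = 25400/CN − 254 (in mm) and initial abstraction Ia = λS (λ = 0.2 is customary). A minimal sketch with an illustrative curve number and storm depth, not the Nagwan data:

    ```python
    def scs_cn_runoff(p_mm, cn, lam=0.2):
        """Direct runoff depth Q (mm) from rainfall P (mm) via the SCS-CN method."""
        s = 25400.0 / cn - 254.0        # potential maximum retention (mm)
        ia = lam * s                    # initial abstraction
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    # Example: a 60 mm storm on a watershed with CN = 75 yields about 14.5 mm of runoff
    print(f"Q = {scs_cn_runoff(60.0, 75):.1f} mm")
    ```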

  6. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is

  7. Fuzzy Clustering Methods and their Application to Fuzzy Modeling

    DEFF Research Database (Denmark)

    Kroszynski, Uri; Zhou, Jianjun

    1999-01-01

    Fuzzy modeling techniques based upon the analysis of measured input/output data sets result in a set of rules that allow to predict system outputs from given inputs. Fuzzy clustering methods for system modeling and identification result in relatively small rule-bases, allowing fast, yet accurate....... An illustrative synthetic example is analyzed, and prediction accuracy measures are compared between the different variants...

  9. Attitude Research in Science Education: Contemporary Models and Methods.

    Science.gov (United States)

    Crawley, Frank E.; Kobala, Thomas R., Jr.

    1994-01-01

    Presents a summary of models and methods of attitude research which are embedded in the theoretical tenets of social psychology and in the broader framework of constructivism. Focuses on the construction of social reality rather than the construction of physical reality. Models include theory of reasoned action, theory of planned behavior, and…

  10. Approximating methods for intractable probabilistic models: Applications in neuroscience

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro

    2002-01-01

    This thesis investigates various methods for carrying out approximate inference in intractable probabilistic models. By capturing the relationships between random variables, the framework of graphical models hints at which sets of random variables pose a problem to the inferential step. The appro...

  11. Hierarchical modelling for the environmental sciences statistical methods and applications

    CERN Document Server

    Clark, James S

    2006-01-01

    New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.

  12. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  13. A blended continuous–discontinuous finite element method for solving the multi-fluid plasma model

    Energy Technology Data Exchange (ETDEWEB)

    Sousa, E.M., E-mail: sousae@uw.edu; Shumlak, U., E-mail: shumlak@uw.edu

    2016-12-01

    The multi-fluid plasma model represents electrons, multiple ion species, and multiple neutral species as separate fluids that interact through short-range collisions and long-range electromagnetic fields. The model spans a large range of temporal and spatial scales, which renders the model stiff and presents numerical challenges. To address the large range of timescales, a blended continuous and discontinuous Galerkin method is proposed, where the massive ion and neutral species are modeled using an explicit discontinuous Galerkin method while the electrons and electromagnetic fields are modeled using an implicit continuous Galerkin method. This approach is able to capture large-gradient ion and neutral physics like shock formation, while resolving high-frequency electron dynamics in a computationally efficient manner. The details of the Blended Finite Element Method (BFEM) are presented. The numerical method is benchmarked for accuracy and tested using a two-fluid one-dimensional soliton problem and an electromagnetic shock problem. The results are compared to conventional finite volume and finite element methods, and demonstrate that the BFEM is particularly effective in resolving physics in stiff problems involving realistic physical parameters, including realistic electron mass and speed of light. The benefit is illustrated by computing a three-fluid plasma application that demonstrates species separation in multi-component plasmas.

  14. Characterization of Mycobacterium paratuberculosis by gas-liquid and thin-layer chromatography and rapid demonstration of mycobactin dependence using radiometric methods

    International Nuclear Information System (INIS)

    Damato, J.J.; Knisley, C.; Collins, M.T.

    1987-01-01

    Thirty-six Mycobacterium paratuberculosis isolates of bovine, caprine, and ovine origins were evaluated by using gas-liquid chromatography (GLC), thin-layer chromatography (TLC), and BACTEC 7H12 Middlebrook TB medium in an effort to more rapidly differentiate this group of organisms from other mycobacteria. Bacterial suspensions (0.1 ml) were inoculated by syringe into 7H12 broth containing 2 micrograms of mycobactin P per ml and control broth without mycobactin P. Cultures were incubated at 37 °C and read daily with a BACTEC Model 301. After 8 days of incubation, the growth index readings for the test broths containing mycobactin P were twice those of the control broths without mycobactin P. Sixty-five isolates of mycobacteria other than M. paratuberculosis were also examined. No difference was noted between the growth index readings of control and mycobactin-containing broths. Except for Mycobacterium avium-Mycobacterium intracellulare, TLC studies differentiated M. paratuberculosis from the other mycobacterial species tested. The GLC data reveal that all M. paratuberculosis isolates had a distinctive peak (14A) which was not found among M. avium-M. intracellulare complex organisms. These data indicate that 7H12 radiometric broth was able to rapidly demonstrate the mycobactin dependence of M. paratuberculosis, and that GLC and TLC procedures were capable of rapidly differentiating this organism from the other mycobacteria studied.

  15. Test results of full-scale high temperature superconductors cable models destined for a 36 kV, 2 kA(rms) utility demonstration

    DEFF Research Database (Denmark)

    Daumling, M.; Rasmussen, C.N.; Hansen, F.

    2001-01-01

    Power cable systems using high temperature superconductors (HTS) are nearing technical feasibility. This presentation summarises the advancements and status of a project aimed at demonstrating a 36 kV, 2 kA(rms) AC cable system by installing a 30 m long full-scale functional model in a power...

  16. An in vitro model demonstrates the potential of neoplastic human germ cells to influence the tumour microenvironment.

    Science.gov (United States)

    Klein, B; Schuppe, H-C; Bergmann, M; Hedger, M P; Loveland, B E; Loveland, K L

    2017-07-01

    Testicular germ cell tumours (TGCT) typically contain high numbers of infiltrating immune cells, yet the functional nature and consequences of interactions between GCNIS (germ cell neoplasia in situ) or seminoma cells and immune cells remain unknown. A co-culture model using the seminoma-derived TCam-2 cell line and peripheral blood mononuclear cells (PBMC, n = 7 healthy donors) was established to investigate how tumour and immune cells each contribute to the cytokine microenvironment associated with TGCT. Three different co-culture approaches were employed: direct contact during culture to simulate in situ cellular interactions occurring within seminomas (n = 9); indirect contact using well inserts to mimic GCNIS, in which a basement membrane separates the neoplastic germ cells and immune cells (n = 3); and PBMC stimulation prior to direct contact during culture to overcome the potential lack of immune cell activation (n = 3). Transcript levels for key cytokines in PBMC and TCam-2 cell fractions were determined using RT-qPCR. TCam-2 cell fractions showed an immediate increase (within 24 h) in several cytokine mRNAs after direct contact with PBMC, whereas immune cell fractions did not. The high levels of interleukin-6 (IL6) mRNA and protein associated with TCam-2 cells implicate this cytokine as important to seminoma physiology. Use of PBMCs from different donors revealed a robust, repeatable pattern of changes in TCam-2 and PBMC cytokine mRNAs, independent of potential inter-donor variation in immune cell responsiveness. This in vitro model recapitulated previous data from clinical TGCT biopsies, revealing similar cytokine expression profiles and indicating its suitability for exploring the in vivo circumstances of TGCT. Despite the limitations of using a cell line to mimic in vivo events, these results indicate how neoplastic germ cells can directly shape the surrounding tumour microenvironment, including by influencing local immune responses. IL6

  17. Vortex Tube Modeling Using the System Identification Method

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jaeyoung; Jeong, Jiwoong; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of); Im, Seokyeon [Tongmyong Univ., Busan (Korea, Republic of)

    2017-05-15

    In this study, a vortex tube system model is developed to predict the temperatures of the hot and cold sides. The vortex tube model is developed based on the system identification method; the model utilized in this work is of the ARX type (AutoRegressive with eXogenous inputs). The derived polynomial model is validated against experimental data to verify the overall model accuracy. It is also shown that the derived model passes the stability test. The derived model closely mimics the physical behavior of the vortex tube in both static and dynamic numerical experiments in which the angle of the low-temperature-side throttle valve is changed, clearly showing temperature separation. These results imply that system-identification-based modeling can be a promising approach for the prediction of complex physical systems, including the vortex tube.
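
    An ARX structure of this kind can be fitted by ordinary least squares on lagged outputs and inputs. The sketch below shows the mechanics on synthetic data; the valve-angle input and temperature output are stand-ins, not the paper's measurements.

    ```python
    import numpy as np

    def fit_arx(y, u, na, nb):
        """Least-squares fit of the ARX model
        y[k] + a1*y[k-1] + ... + a_na*y[k-na] = b1*u[k-1] + ... + b_nb*u[k-nb]."""
        n0 = max(na, nb)
        phi = []
        for k in range(n0, len(y)):
            phi.append([-y[k - i] for i in range(1, na + 1)] +
                       [u[k - j] for j in range(1, nb + 1)])
        theta, *_ = np.linalg.lstsq(np.array(phi), y[n0:], rcond=None)
        return theta[:na], theta[na:]       # a-coefficients, b-coefficients

    rng = np.random.default_rng(1)
    u = rng.uniform(-1.0, 1.0, 500)         # stand-in for the valve-angle input
    y = np.zeros(500)
    for k in range(2, 500):                 # "true" system, used only to make data
        y[k] = 1.2 * y[k - 1] - 0.4 * y[k - 2] + 0.5 * u[k - 1] + rng.normal(0, 0.01)

    a, b = fit_arx(y, u, na=2, nb=1)
    print("a:", np.round(a, 3), "b:", np.round(b, 3))   # expect a ~ [-1.2, 0.4], b ~ [0.5]
    ```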

  18. Large-signal modeling method for power FETs and diodes

    Energy Technology Data Exchange (ETDEWEB)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping, E-mail: sunlu_1019@126.co [School of Electromechanical Engineering, Xidian University, Xi'an 710071 (China)

    2009-06-01

    Under a large-signal drive level, a frequency domain black-box model based on the nonlinear scattering function is introduced for power FETs and diodes. A time domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models can reflect the real electrical performance of semiconductor devices and provide a new large-signal model for the design of microwave semiconductor circuits.

  20. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    OpenAIRE

    J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao

    2017-01-01

    Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. In this paper we propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined, which ar...

  1. Optimization Models and Methods Developed at the Energy Systems Institute

    OpenAIRE

    N.I. Voropai; V.I. Zorkaltsev

    2013-01-01

    The paper briefly presents some optimization models of energy system operation and expansion that have been created at the Energy Systems Institute of the Siberian Branch of the Russian Academy of Sciences. Consideration is given to optimization models of energy development in Russia, a software package intended for analysis of power system reliability, and a model of flow distribution in hydraulic systems. A general idea of the optimization methods developed at the Energy Systems Institute...

  2. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    OpenAIRE

    Frantisek Jelenciak; Michael Gerke; Ulrich Borgolte

    2015-01-01

    This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. With the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a...

  3. A discontinuous Galerkin method on kinetic flocking models

    OpenAIRE

    Tan, Changhui

    2014-01-01

    We study kinetic representations of flocking models. They arise from agent-based models for self-organized dynamics, such as the Cucker-Smale and Motsch-Tadmor models. We prove flocking behavior for the kinetic descriptions of flocking systems, which indicates a concentration in the velocity variable in infinite time. We propose a discontinuous Galerkin method to treat the asymptotic $\delta$-singularity, and construct a high-order positivity-preserving scheme to solve kinetic flocking systems.
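
    The agent-based dynamics underlying these kinetic descriptions are easy to state: each agent relaxes its velocity toward those of its neighbours, weighted by a communication kernel such as ψ(r) = (1 + r²)^(−β). A minimal sketch of the Cucker-Smale particle system follows (not the paper's discontinuous Galerkin scheme, which discretizes the kinetic equation itself):

    ```python
    import numpy as np

    def cucker_smale_step(x, v, dt=0.05, beta=0.5):
        """One explicit Euler step of the Cucker-Smale alignment dynamics."""
        n = len(x)
        dist2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
        psi = 1.0 / (1.0 + dist2) ** beta            # communication weights
        # dv_i = (1/n) * sum_j psi_ij * (v_j - v_i)
        dv = (psi[:, :, None] * (v[None, :, :] - v[:, None, :])).sum(axis=1) / n
        return x + dt * v, v + dt * dv

    rng = np.random.default_rng(2)
    x = rng.uniform(-1, 1, (50, 2))                  # positions of 50 agents
    v = rng.normal(0, 1, (50, 2))                    # initial velocities
    for _ in range(400):
        x, v = cucker_smale_step(x, v)
    print("velocity spread after alignment:", np.std(v, axis=0))
    ```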

  4. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on... several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  5. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)
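
    A toy rendition of the two-stage procedure, with invented one-parameter scalar models and a grid search standing in for the experiment-design step, shows the overall flow:

    ```python
    import numpy as np

    # Stage 1 picks, for each model pair, the input that best separates their
    # (nominal) predictions; stage 2 fits parameters for every model on the
    # collected data and selects the model with the smallest residual.
    g = [lambda u: u, lambda u: u ** 2, lambda u: np.sqrt(u)]   # candidate models y = theta*g_i(u)
    u_grid = np.linspace(0.01, 1.0, 100)

    def best_experiment(i, j):
        """Input that maximizes the discrepancy between models i and j."""
        return u_grid[np.argmax(np.abs(g[i](u_grid) - g[j](u_grid)))]

    rng = np.random.default_rng(3)
    truth = lambda u: 0.8 * u ** 2 + rng.normal(0, 0.01)        # "true" system: model 1

    experiments = []                                            # N(N-1)/2 experiments
    for i in range(3):
        for j in range(i + 1, 3):
            u_star = best_experiment(i, j)
            experiments.append((u_star, truth(u_star)))

    def residual(i):
        us = np.array([u for u, _ in experiments])
        ys = np.array([y for _, y in experiments])
        basis = g[i](us)
        theta = basis @ ys / (basis @ basis)                    # 1-D least squares
        return np.sum((ys - theta * basis) ** 2)

    print("selected model:", min(range(3), key=residual))       # expect 1
    ```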

  6. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  7. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  8. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  9. Generalized framework for context-specific metabolic model extraction methods

    Directory of Open Access Journals (Sweden)

    Semidán Robaina Estévez

    2014-09-01

    Genome-scale metabolic models are increasingly applied to investigate the physiology not only of simple prokaryotes, but also of eukaryotes such as plants, characterized by compartmentalized cells of multiple types. While genome-scale models aim at including the entirety of known metabolic reactions, mounting evidence has indicated that only a subset of these reactions is active in a given context, including developmental stage, cell type, or environment. As a result, several methods have been proposed to reconstruct context-specific models from existing genome-scale models by integrating various types of high-throughput data. Here we present a mathematical framework that puts all existing methods under one umbrella and provides the means to better understand their functioning, highlight similarities and differences, and help users select the most suitable method for an application.

  10. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  11. Method and apparatus for modeling, visualization and analysis of materials

    KAUST Repository

    Aboulhassan, Amal

    2016-08-25

    A method, apparatus, and computer readable medium are provided for modeling of materials and visualization of properties of the materials. An example method includes receiving data describing a set of properties of a material, and computing, by a processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling, by the processor, the material using the geometric features and the extracted particle paths. The example method further includes generating, by the processor and based on the geometric modeling of the material, one or more visualizations regarding the material, and causing display, by a user interface, of the one or more visualizations.

  12. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work... of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted.

  13. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    Science.gov (United States)

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide enough accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established by considering this paradigm principle. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space, and to predict potential human perceptual knowledge. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that the proposed model and method can enhance IEC search significantly. PMID:25879050

  14. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
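
    A minimal sketch of the idea, assuming a quadratic QH curve scaled by the standard affinity laws (the coefficients and names below are illustrative, not from the paper): given the converter's speed estimate and a head estimate, the flow rate follows by inverting the scaled curve.

    ```python
    import numpy as np

    n0 = 1450.0                       # nominal speed (rpm); values are illustrative
    h0, h1, h2 = 32.0, -0.02, -4e-4   # QH-curve coefficients at nominal speed

    def estimate_flow(head_m, speed_rpm):
        """Estimate flow (m3/h) by inverting the affinity-scaled QH curve
        H(Q, n) = (n/n0)^2*h0 + (n/n0)*h1*Q + h2*Q^2."""
        r = speed_rpm / n0
        roots = np.roots([h2, r * h1, r ** 2 * h0 - head_m])
        feasible = [z.real for z in roots if abs(z.imag) < 1e-9 and z.real > 0]
        return max(feasible) if feasible else float("nan")

    # Example: converter-estimated head of 24 m at 1300 rpm
    print(f"estimated flow: {estimate_flow(24.0, 1300.0):.1f} m3/h")
    ```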

  15. A robust method for detecting nuclear materials when the underlying model is inexact

    International Nuclear Information System (INIS)

    Kump, Paul; Bai, Er-Wei; Chan, Kung-sik; Eichinger, William

    2013-01-01

    This paper is concerned with the detection and identification of nuclides from weak and poorly resolved gamma-ray energy spectra when the underlying model is not known exactly. The algorithm proposed and tested here pairs an exciting and relatively new model selection algorithm with the method of total least squares. Gamma-ray counts are modeled as Poisson processes where the average part is taken to be the model and the difference between the observed gamma-ray counts and the model is considered random noise. Physics provides a template for the model, but we add uncertainty to this template to simulate real life conditions. Unlike most model selection algorithms whose utilities are demonstrated asymptotically, our method emphasizes selection when data is fixed and finite (after all, detector data is undoubtedly finite). Simulation examples provided here demonstrate the proposed algorithm performs well. -- Highlights: • Identification of nuclides in the presence of large noise/uncertainty. • Algorithm is based on a Poisson model. • Key idea is the regularized total least squares. • Algorithms are tested and compared with existing methods

  16. Comparison of methods for the analysis of relatively simple mediation models.

    Science.gov (United States)

    Rijnhart, Judith J M; Twisk, Jos W R; Chinapaw, Mai J M; de Boer, Michiel R; Heymans, Martijn W

    2017-09-01

    Statistical mediation analysis is an often used method in trials, to unravel the pathways underlying the effect of an intervention on a particular outcome variable. Throughout the years, several methods have been proposed, such as ordinary least square (OLS) regression, structural equation modeling (SEM), and the potential outcomes framework. Most applied researchers do not know that these methods are mathematically equivalent when applied to mediation models with a continuous mediator and outcome variable. Therefore, the aim of this paper was to demonstrate the similarities between OLS regression, SEM, and the potential outcomes framework in three mediation models: 1) a crude model, 2) a confounder-adjusted model, and 3) a model with an interaction term for exposure-mediator interaction. Secondary data analysis of a randomized controlled trial that included 546 schoolchildren. In our data example, the mediator and outcome variable were both continuous. We compared the estimates of the total, direct and indirect effects, proportion mediated, and 95% confidence intervals (CIs) for the indirect effect across OLS regression, SEM, and the potential outcomes framework. OLS regression, SEM, and the potential outcomes framework yielded the same effect estimates in the crude mediation model, the confounder-adjusted mediation model, and the mediation model with an interaction term for exposure-mediator interaction. Since OLS regression, SEM, and the potential outcomes framework yield the same results in three mediation models with a continuous mediator and outcome variable, researchers can continue using the method that is most convenient to them.
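
    For the crude model with a continuous mediator and outcome, the OLS route is two regressions: the mediator on the exposure (coefficient a) and the outcome on exposure and mediator (coefficients c′ and b). The indirect effect is ab, and in this linear case the decomposition c = c′ + ab holds exactly. A sketch on synthetic data (not the trial data):

    ```python
    import numpy as np

    def ols(y, X):
        """OLS with intercept; returns the slope estimates only."""
        Xd = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        return beta[1:]

    rng = np.random.default_rng(4)
    n = 546                                   # sample size borrowed from the trial
    x = rng.integers(0, 2, n).astype(float)   # randomized exposure
    m = 0.5 * x + rng.normal(0, 1, n)         # continuous mediator
    y = 0.3 * m + 0.2 * x + rng.normal(0, 1, n)

    a = ols(m, x)[0]                          # exposure -> mediator
    cp, b = ols(y, np.column_stack([x, m]))   # direct effect c' and mediator effect b
    c = ols(y, x)[0]                          # total effect
    print(f"indirect a*b = {a*b:.3f}, direct c' = {cp:.3f}, total c = {c:.3f}")
    print(f"decomposition check: c' + a*b = {cp + a*b:.3f}")
    print(f"proportion mediated = {a*b/c:.2f}")
    ```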

  17. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  18. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  19. Methods improvements incorporated into the SAPHIRE ASP models

    International Nuclear Information System (INIS)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.

    1995-01-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements

  20. Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models

    Science.gov (United States)

    Marquette, Michele L.; Sognier, Marguerite A.

    2013-01-01

    An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.

  1. Methods and models in mathematical biology deterministic and stochastic approaches

    CERN Document Server

    Müller, Johannes

    2015-01-01

    This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and  branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.

  2. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    Science.gov (United States)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the improved VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of a high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the already required information on the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
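
    A stripped-down variable-fidelity surrogate in the spirit of hierarchical kriging, with a polynomial-scaled low-fidelity trend plus a Gaussian process on the discrepancy (scikit-learn assumed; this omits the adaptive sampling loop and is not the ASM-IHK algorithm itself):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    f_low = lambda x: np.sin(8 * x)                         # cheap low-fidelity model
    f_high = lambda x: 1.2 * np.sin(8 * x) + 0.3 * x - 0.1  # expensive high-fidelity model

    x_hf = np.linspace(0.0, 1.0, 8)                         # few expensive samples
    y_hf = f_high(x_hf)

    # Degree-1 polynomial scaling of the low-fidelity response as the trend
    A = np.column_stack([np.ones_like(x_hf), f_low(x_hf)])
    coef, *_ = np.linalg.lstsq(A, y_hf, rcond=None)
    trend = lambda x: coef[0] + coef[1] * f_low(x)

    # Gaussian process on the remaining discrepancy
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-8)
    gp.fit(x_hf.reshape(-1, 1), y_hf - trend(x_hf))

    x_test = np.linspace(0.0, 1.0, 5)
    pred = trend(x_test) + gp.predict(x_test.reshape(-1, 1))
    print("error vs high-fidelity:", np.round(pred - f_high(x_test), 4))
    ```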

  3. RD and D-Programme 2001. Programme for research, development and demonstration of methods for the management and disposal of nuclear waste

    International Nuclear Information System (INIS)

    2001-09-01

    heterogeneity of the rock in the best manner and to serve as a basis for selecting suitable rock for location of deposition tunnels and holes. The chemistry of the groundwater at repository depth has been thoroughly studied, and SKB has developed practical methods for investigating a future site. Microbial processes are a relatively new field that we are continuing to investigate. Matrix diffusion describes how dissolved radionuclides penetrate into the micro fractures in the rock, thereby retarding the migration of radionuclides along large water-bearing fractures in the rock. Diffusion in micro fractures has been thoroughly investigated, but its relationship with the water flow and the geometry of the system of major fractures in the rock needs to be further studied. Postglacial land uplift and climatic variations are examples of processes that influence the evolution of the biosphere in the long term. Knowledge of the mechanisms for transfer of radionuclides between different parts of the biosphere has improved, and the models will be based on this new knowledge. Examples of ecosystems that are studied and dealt with in the models are forest, mireland and sediment. Permafrost is an example of a climatic state that will be more closely investigated, and a project aimed at this is currently planned. Another example is the glacial state. Material analogues occupy a prominent position in the upcoming programme. Typical material analogues are natural deposits of copper or bentonite, but concrete will also be investigated. Specimens of old concrete will be examined, as well as sites where cement minerals occur naturally. The different steps in building the repository, emplacing the canisters and the buffer, and backfilling and closure are tested on Aespoe. Site investigation methods are also developed there. Full-scale tests are being performed at the Aespoe HRL of drilling of deposition holes, emplacement of canisters and bentonite, and backfilling. The Prototype Repository in

  4. Evaluation and demonstration of methods for improved fuel utilization. Second semi-annual progress report, April 1, 1980-September 30, 1980

    International Nuclear Information System (INIS)

    1981-01-01

    Demonstrations are being performed in the Fort Calhoun reactor. The current program consists of two parts, one to demonstrate low leakage fuel management (SAVFUEL - Shimmed And Very Flexible Uranium Element Loading) and the other to demonstrate high burnup. The first part will demonstrate that the power duty cycle which is characteristic of SAVFUEL does not have a deleterious effect on fuel performance, while the second part will demonstrate that the peak rod average burnup of the current 14 x 14 fuel design can be increased to 45 GWD/T. A visual examination conducted at poolside was completed on four fuel assemblies which are scheduled to demonstrate the SAVFUEL power cycle and seventeen fuel assemblies which are scheduled to provide high burnup fuel performance data. Results of visual examinations, shoulder gap closure, fuel assembly growth, and fuel rod channel width measurements are reported which show excellent fuel performance for the high burnup demonstration assemblies after four exposure cycles. These results support an additional exposure cycle for the high burnup demonstration assemblies, which currently have an assembly average burnup up to 37 GWD/T.

  5. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

    A new method for model selection in prediction of time series is proposed. Apart from the conventional criterion of minimizing the RMS error, the method also minimizes the error on the distribution of singularities, evaluated through local Hölder estimates and their probability density spectrum. Predictions of two simulated and one real time series have been made using kernel principal component regression (KPCR), and the model parameters of KPCR have been selected employing the proposed as well as the conventional method. The results obtained demonstrate that the proposed method takes into account sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of unseen test data. (letter to the editor)
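
    The conventional half of the criterion, choosing model parameters by validation RMS error, is easy to sketch; the singularity-spectrum term is beyond a short example. In the sketch below, kernel ridge regression stands in for KPCR on a synthetic series:

    ```python
    import numpy as np

    def gauss_kernel(A, B, gamma):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-gamma * d2)

    def embed(s, d=3):
        """Delay embedding: predict s[k] from the d previous values."""
        X = np.column_stack([s[i:len(s) - d + i] for i in range(d)])
        return X, s[d:]

    rng = np.random.default_rng(5)
    t = np.arange(400, dtype=float)
    s = np.sin(0.3 * t) + 0.5 * np.sin(0.11 * t) + rng.normal(0, 0.05, len(t))
    X, y = embed(s)
    X_tr, y_tr, X_va, y_va = X[:300], y[:300], X[300:], y[300:]

    best = None
    for gamma in (0.1, 1.0, 10.0):                  # grid over model parameters
        for lam in (1e-4, 1e-2, 1.0):
            K = gauss_kernel(X_tr, X_tr, gamma)
            alpha = np.linalg.solve(K + lam * np.eye(len(X_tr)), y_tr)
            pred = gauss_kernel(X_va, X_tr, gamma) @ alpha
            rmse = np.sqrt(np.mean((pred - y_va) ** 2))
            if best is None or rmse < best[0]:
                best = (rmse, gamma, lam)
    print("best (RMSE, gamma, lambda):", best)
    ```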

  6. Evaluation and demonstration of methods for improved fuel utilization. Third semi-annual progress report, October 1, 1980-March 31, 1981

    International Nuclear Information System (INIS)

    1981-06-01

    The demonstrations are being performed in the Fort Calhoun reactor. The current program consists of two parts, one to demonstrate low leakage fuel management (SAVFUEL - Shimmed And Very Flexible Uranium Element Loading) and the other to demonstrate high burnup. During this period the four SAVFUEL demonstration assemblies were undergoing their second exposure cycle, simulating the SAVFUEL power cycle. In addition, one high burnup demonstration assembly, which is being irradiated for a fifth exposure cycle has achieved a peak rod average burnup of 45 GWD/T which is the burnup originally targeted for this program. This assembly is projected to achieve a peak rod average burnup of 49 GWD/T at the end of its fifth exposure cycle. During this period analyses were performed to determine the sensitivity of the economics to cycle lengths chosen for Fort Calhoun. Cost savings for 18 month cycles relative to 12 month cycles are reported

  7. Construction of dynamic model of CANDU-SCWR using moving boundary method

    International Nuclear Information System (INIS)

    Sun Peiwei; Jiang Jin; Shan Jianqiang

    2011-01-01

    Highlights: → A dynamic model of a CANDU-SCWR is developed. → The advantages of the moving boundary method are demonstrated. → The dynamic behaviours of the CANDU-SCWR are obtained by simulation. → The model can predict the dynamic behaviours of the CANDU-SCWR. → Linear dynamic models for CANDU-SCWR are derived by system identification techniques. - Abstract: CANDU-SCWR (Supercritical Water-Cooled Reactor) is one type of Generation IV reactor being developed in Canada. Its dynamic characteristics are different from those of existing CANDU reactors due to the supercritical conditions of the coolant. To study the behaviours of such reactors under disturbances and to design adequate control systems, it is essential to have an accurate dynamic model to describe such a reactor. One dynamic model is developed for CANDU-SCWR in this paper. In the model construction process, three regions have been considered: Liquid Region I, Liquid Region II and Vapour Region, depending on the bulk and wall temperatures being higher or lower than the pseudo-critical temperature. A moving boundary method is used to describe the movement of boundaries across these regions. Some benefits of adopting the moving boundary method are illustrated by comparison with the fixed boundary method. The results of the steady-state simulation based on the developed model agree well with the design parameters. The transient simulations demonstrate that the model can predict the dynamic behaviours of CANDU-SCWR. Furthermore, to investigate the responses of the reactor to small amplitude perturbations and to facilitate control system designs, a least-squares based system identification technique is used to obtain a set of linear dynamic models around the design point. The responses based on the linear dynamic models are validated with simulation results from the nonlinear CANDU-SCWR dynamic model.

  8. A method for physically based model analysis of conjunctive use in response to potential climate changes

    Science.gov (United States)

    Hanson, R.T.; Flint, L.E.; Flint, A.L.; Dettinger, M.D.; Faunt, C.C.; Cayan, D.; Schmid, W.

    2012-01-01

    Potential climate change effects on aspects of conjunctive management of water resources can be evaluated by linking climate models with fully integrated groundwater-surface water models. The objective of this study is to develop a modeling system that links global climate models with regional hydrologic models, using the California Central Valley as a case study. The new method is a supply and demand modeling framework that can be used to simulate and analyze potential climate change and conjunctive use. Supply-constrained and demand-driven linkages in the water system in the Central Valley are represented with the linked climate models, precipitation-runoff models, agricultural and native vegetation water use, and hydrologic flow models to demonstrate the feasibility of this method. Simulated precipitation and temperature were used from the GFDL-A2 climate change scenario through the 21st century to drive a regional water balance mountain hydrologic watershed model (MHWM) for the surrounding watersheds in combination with a regional integrated hydrologic model of the Central Valley (CVHM). Application of this method demonstrates the potential transition from predominantly surface water to groundwater supply for agriculture with secondary effects that may limit this transition of conjunctive use. The particular scenario considered includes intermittent climatic droughts in the first half of the 21st century followed by severe persistent droughts in the second half of the 21st century. These climatic droughts do not yield a valley-wide operational drought but do cause reduced surface water deliveries and increased groundwater abstractions that may cause additional land subsidence, reduced water for riparian habitat, or changes in flows at the Sacramento-San Joaquin River Delta. The method developed here can be used to explore conjunctive use adaptation options and hydrologic risk assessments in regional hydrologic systems throughout the world.

  10. A Pansharpening Method Based on HCT and Joint Sparse Model

    Directory of Open Access Journals (Sweden)

    XU Ning

    2016-04-01

    A novel fusion method based on the hyperspherical color transformation (HCT) and a joint sparsity model is proposed for further decreasing the spectral distortion of the fused image. In the method, an intensity component and the angles of each band of the multispectral image are first obtained by the HCT, and then the intensity component is fused with the panchromatic image through the wavelet transform and the joint sparsity model. In the joint sparsity model, the redundant and complementary information of the different images can be efficiently extracted and employed to yield high quality results. Finally, the fused multispectral image is obtained by inverse wavelet and HCT transforms on the new low-frequency image and the angle components, respectively. Experimental results on images from the Pleiades-1 and WorldView-2 satellites indicate that the proposed method achieves remarkable results.

  11. Continuum-Kinetic Models and Numerical Methods for Multiphase Applications

    Science.gov (United States)

    Nault, Isaac Michael

    This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or meso-scopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation could be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elastoplastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.

  12. Carbon dioxide dangers demonstration model

    Science.gov (United States)

    Venezky, Dina; Wessells, Stephen

    2010-01-01

    Carbon dioxide is a dangerous volcanic gas. When carbon dioxide seeps from the ground, it normally mixes with the air and dissipates rapidly. However, because carbon dioxide gas is heavier than air, it can collect in snowbanks, depressions, and poorly ventilated enclosures posing a potential danger to people and other living things. In this experiment we show how carbon dioxide gas displaces oxygen as it collects in low-lying areas. When carbon dioxide, created by mixing vinegar and baking soda, is added to a bowl with candles of different heights, the flames are extinguished as if by magic.

  13. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.
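
    The paper's own implementation is not shown in the record; as a hedged illustration of the Bayesian nonparametric idea, the sketch below clusters a toy attributed graph with scikit-learn's BayesianGaussianMixture under a Dirichlet-process prior, so the effective number of clusters is inferred rather than fixed. Concatenating a spectral embedding of the adjacency matrix with the node attributes is one simple (assumed, not the authors') way to make the clustering attribute-aware.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Toy attributed graph: two communities, denser inside than across,
# with 2-d node attributes that also differ between the communities.
n = 60
labels_true = np.repeat([0, 1], n // 2)
p = np.where(labels_true[:, None] == labels_true[None, :], 0.3, 0.02)
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops
attrs = labels_true[:, None] + 0.3 * rng.standard_normal((n, 2))

# Features = spectral embedding of the adjacency + raw attributes.
emb = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(A)
X = np.hstack([emb, attrs])

# Dirichlet-process mixture: start with a generous n_components and let
# the posterior shrink the weights of unused components toward zero.
bgm = BayesianGaussianMixture(
    n_components=10, weight_concentration_prior_type="dirichlet_process",
    random_state=0).fit(X)
print("effective clusters:", int(np.sum(bgm.weights_ > 0.05)))
```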

  14. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke

    2017-01-01

Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  15. Statistical learning modeling method for space debris photometric measurement

    Science.gov (United States)

    Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen

    2016-03-01

Photometric measurement is an important way to identify space debris, but existing photometric measurement methods impose many constraints on the star image and require complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, and the statistical information of star images is used to eliminate measurement noise. First, the known stars in the star image are divided into training stars and testing stars. Then, the training stars are used to fit the parameters of the photometric measurement model by least squares, and the testing stars are used to calculate the measurement accuracy of the model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitude.
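
    A minimal sketch of the train/test star scheme described above, assuming a simple affine measurement model m_cat ≈ a·m_inst + b (an assumed form; the record does not specify the actual model): the training stars fix the least-squares parameters and the testing stars report the accuracy. All magnitudes below are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated catalog magnitudes of known stars, and noisy instrumental
# magnitudes with an unknown zero point and scale (hypothetical numbers).
m_cat = rng.uniform(6.0, 12.0, 80)
m_inst = 1.02 * m_cat + 3.5 + 0.05 * rng.standard_normal(80)

# Divide the known stars into training stars and testing stars.
idx = rng.permutation(80)
train, test = idx[:60], idx[60:]

# Least-squares fit of m_cat ~ a*m_inst + b on the training stars only.
Atr = np.column_stack([m_inst[train], np.ones(train.size)])
(a, b), *_ = np.linalg.lstsq(Atr, m_cat[train], rcond=None)

# The testing stars estimate the accuracy of the fitted model.
resid = m_cat[test] - (a * m_inst[test] + b)
print(f"a={a:.3f}, b={b:.3f}, RMS error={np.sqrt(np.mean(resid**2)):.3f} mag")
```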

  16. Efficient model learning methods for actor-critic control.

    Science.gov (United States)

    Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik

    2012-06-01

    We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
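
    The sketch below illustrates only the model-learning ingredient, local linear regression: raw (state, action, next state) samples are memorized, and each prediction fits a linear model to the k nearest stored samples. The dynamics and parameters are hypothetical stand-ins, not the paper's pendulum setup.

```python
import numpy as np

class LLRModel:
    """Memory-based local linear regression: keep raw samples and, per
    query, fit an affine model to the k nearest stored samples."""

    def __init__(self, k=10):
        self.k, self.X, self.Y = k, [], []

    def add(self, x, y):
        self.X.append(np.asarray(x, float))
        self.Y.append(np.asarray(y, float))

    def predict(self, x):
        X, Y = np.array(self.X), np.array(self.Y)
        x = np.asarray(x, float)
        near = np.argsort(np.linalg.norm(X - x, axis=1))[: self.k]
        A = np.hstack([X[near], np.ones((near.size, 1))])   # affine features
        W, *_ = np.linalg.lstsq(A, Y[near], rcond=None)
        return np.append(x, 1.0) @ W

# Learn a one-step process model x' = f(state, action) from random samples.
rng = np.random.default_rng(2)
model = LLRModel(k=12)
for _ in range(500):
    s = rng.uniform(-np.pi, np.pi, 2)        # [angle, angular velocity]
    u = rng.uniform(-1, 1)                   # torque
    s_next = s + 0.05 * np.array([s[1], -9.8 * np.sin(s[0]) + 5 * u])
    model.add(np.append(s, u), s_next)

print(model.predict([0.3, 0.0, 0.5]))        # predicted next state
```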

  17. Methods of mathematical modelling continuous systems and differential equations

    CERN Document Server

    Witelski, Thomas

    2015-01-01

    This book presents mathematical modelling and the integrated process of formulating sets of equations to describe real-world problems. It describes methods for obtaining solutions of challenging differential equations stemming from problems in areas such as chemical reactions, population dynamics, mechanical systems, and fluid mechanics. Chapters 1 to 4 cover essential topics in ordinary differential equations, transport equations and the calculus of variations that are important for formulating models. Chapters 5 to 11 then develop more advanced techniques including similarity solutions, matched asymptotic expansions, multiple scale analysis, long-wave models, and fast/slow dynamical systems. Methods of Mathematical Modelling will be useful for advanced undergraduate or beginning graduate students in applied mathematics, engineering and other applied sciences.

  18. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)

    2014-10-24

This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
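
    A hedged sketch of the reported best-performing fit, a two-term Gaussian, on hypothetical solar radiation data (the UTP measurements are not available here), including the RMSE and R² goodness-of-fit statistics the paper uses:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2(x, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian model, the form reported as best-fitting."""
    return a1 * np.exp(-((x - b1) / c1) ** 2) + a2 * np.exp(-((x - b2) / c2) ** 2)

# Hypothetical half-hourly global solar radiation over one day (W/m^2).
rng = np.random.default_rng(3)
hours = np.arange(7.0, 19.5, 0.5)
rad = (850 * np.exp(-((hours - 12.5) / 2.5) ** 2)
       + 150 * np.exp(-((hours - 15.0) / 4.0) ** 2)
       + 20 * rng.standard_normal(hours.size))

popt, _ = curve_fit(gauss2, hours, rad, p0=[800, 12, 3, 100, 15, 3], maxfev=10000)

fit = gauss2(hours, *popt)
rmse = np.sqrt(np.mean((rad - fit) ** 2))
r2 = 1 - np.sum((rad - fit) ** 2) / np.sum((rad - rad.mean()) ** 2)
print(f"RMSE = {rmse:.1f} W/m^2, R^2 = {r2:.3f}")
```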

  19. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  20. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  1. Discrete gradient methods for solving variational image regularisation models

    International Nuclear Information System (INIS)

    Grimm, V; McLachlan, Robert I; McLaren, David I; Quispel, G R W; Schönlieb, C-B

    2017-01-01

Discrete gradient methods are well-known methods of geometric numerical integration, which preserve the dissipation of gradient systems. In this paper we show that this property of discrete gradient methods can be interesting in the context of variational models for image processing, that is where the processed image is computed as a minimiser of an energy functional. Numerical schemes for computing minimisers of such energies are desired to inherit the dissipative property of the gradient system associated to the energy and consequently guarantee a monotonic decrease of the energy along iterations, avoiding situations in which more computational work might lead to less optimal solutions. Under appropriate smoothness assumptions on the energy functional we prove that discrete gradient methods guarantee a monotonic decrease of the energy towards stationary states, and we promote their use in image processing by exhibiting experiments with convex and non-convex variational models for image deblurring, denoising, and inpainting.
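
    For a quadratic (hence smooth and convex) denoising energy, the midpoint discrete gradient admits a closed-form update, and the guaranteed energy decrease is easy to verify numerically. The sketch below is a simplified illustration of the dissipation-preserving idea, not the paper's schemes for non-convex models.

```python
import numpy as np

n, lam, tau = 100, 10.0, 1.0
rng = np.random.default_rng(4)
f = np.sign(np.sin(np.linspace(0, 4 * np.pi, n))) + 0.3 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)        # forward-difference operator
H = np.eye(n) + lam * D.T @ D         # grad E(u) = H u - f for the energy below

def energy(u):
    """E(u) = 1/2 ||u - f||^2 + lam/2 ||Du||^2 (quadratic, smooth)."""
    return 0.5 * np.sum((u - f) ** 2) + 0.5 * lam * np.sum((D @ u) ** 2)

# Midpoint discrete gradient: dgrad(u, v) = grad E((u+v)/2) for quadratic E.
# The implicit update u+ = u - tau*dgrad(u, u+) becomes a linear solve and
# satisfies E(u+) - E(u) = -||u+ - u||^2 / tau <= 0 for ANY step tau > 0.
A = np.eye(n) + 0.5 * tau * H
B = np.eye(n) - 0.5 * tau * H
u = f.copy()
for _ in range(20):
    e_before = energy(u)
    u = np.linalg.solve(A, B @ u + tau * f)
    assert energy(u) <= e_before + 1e-10   # monotone decrease, by construction

print("final energy:", energy(u))
```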

  2. A meshless method for modeling convective heat transfer

    Energy Technology Data Exchange (ETDEWEB)

    Carrington, David B [Los Alamos National Laboratory

    2010-01-01

A meshless method is used in a projection-based approach to solve the primitive equations for fluid flow with heat transfer. The method is easy to implement in MATLAB. Radial basis functions are used to solve two benchmark test cases: natural convection in a square enclosure and flow with forced convection over a backward-facing step. The results are compared with two popular and widely used commercial codes: COMSOL, a finite element code, and FLUENT, a finite volume code.
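
    A minimal meshless example in the same spirit (though not the paper's projection-based flow solver): Gaussian radial basis functions collocated on a 1D Poisson problem, where derivatives of the RBFs are available analytically, so no mesh is needed. Node count and shape parameter are arbitrary choices.

```python
import numpy as np

# Meshless RBF collocation for u''(x) = f(x) on (0,1), u(0) = u(1) = 0,
# with Gaussian RBFs phi(x) = exp(-eps^2 (x - xj)^2), whose second
# derivative is (4 eps^4 (x-xj)^2 - 2 eps^2) * phi.
eps = 7.0
nodes = np.linspace(0.0, 1.0, 20)            # scattered nodes would work too
f = lambda x: -np.pi**2 * np.sin(np.pi * x)  # manufactured right-hand side
exact = lambda x: np.sin(np.pi * x)

dx = nodes[:, None] - nodes[None, :]
phi = np.exp(-(eps * dx) ** 2)
phi_xx = (4 * eps**4 * dx**2 - 2 * eps**2) * phi

A = phi_xx.copy()
A[0, :], A[-1, :] = phi[0, :], phi[-1, :]    # Dirichlet rows at both ends
b = f(nodes)
b[0] = b[-1] = 0.0

c = np.linalg.solve(A, b)                    # RBF coefficients
print("max error:", np.abs(phi @ c - exact(nodes)).max())
```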

  3. Spectral-element Method for 3D Marine Controlled-source EM Modeling

    Science.gov (United States)

    Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.

    2017-12-01

As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite element method with the high accuracy of the spectral method. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As high-order complete orthogonal polynomials, the GLC polynomials have the characteristic of exponential convergence. This helps derive the matrix elements analytically and improves the modeling accuracy. Numerical 1D models using SEM with different orders show that the SEM delivers accurate results; with increasing SEM order, the modeling accuracy improves markedly. Further, we compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that the SEM is more effective than the FD method: only when the mesh is fine enough can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling that has significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).
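
    The curl-conforming 3D GLC discretization is beyond a short sketch, but the exponential convergence that motivates high-order spectral bases is easy to demonstrate in 1D with Chebyshev spectral collocation (an assumed stand-in, not the authors' basis):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and nodes (after Trefethen's cheb)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    Dm = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    return Dm - np.diag(Dm.sum(axis=1)), x

# Solve u'' - u = f on [-1,1] with u(+-1) = 0 and exact solution sin(pi x).
for N in (4, 8, 16, 24):
    Dm, x = cheb(N)
    L = Dm @ Dm - np.eye(N + 1)
    f = -(np.pi**2 + 1.0) * np.sin(np.pi * x)
    u = np.zeros(N + 1)
    u[1:-1] = np.linalg.solve(L[1:-1, 1:-1], f[1:-1])  # BCs imposed by deletion
    print(f"N={N:2d}  max error = {np.abs(u - np.sin(np.pi * x)).max():.2e}")
```

    Raising the polynomial order N drives the error down far faster than refining a fixed-order mesh, which is the advantage the abstract reports over finite differences.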

  4. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research.

5. Evaluation of radiological processes in the Ternopil region by the box model method

    Directory of Open Access Journals (Sweden)

    І.В. Матвєєва

    2006-02-01

Full Text Available Flows of the radionuclide Sr-90 in the ecosystem of Kotsubinchiky village in Ternopil oblast were analyzed. A block scheme of the ecosystem and its mathematical model were constructed using the box model method. This allowed us to evaluate how internal irradiation dose loads form for different population groups – workers, retirees, children – and to predict the dynamics of these loads over the years following the Chernobyl accident.
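
    A hedged sketch of the box model idea: compartments coupled by first-order transfer coefficients plus radioactive decay, integrated as a linear ODE system. The three boxes and all rate constants below are hypothetical, not the values of the Kotsubinchiky study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-box model for Sr-90: soil -> vegetation -> population, first-order
# transfer coefficients (1/yr, hypothetical) plus radioactive decay
# (Sr-90 half-life about 28.8 yr) in every box.
lam = np.log(2) / 28.8
k_sv, k_vs = 0.05, 0.5     # soil <-> vegetation exchange
k_vp, k_p = 0.01, 0.2      # vegetation -> population intake, excretion

def rhs(t, y):
    soil, veg, pop = y
    return [-(k_sv + lam) * soil + k_vs * veg,
            k_sv * soil - (k_vs + k_vp + lam) * veg,
            k_vp * veg - (k_p + lam) * pop]

# Unit activity deposited in the soil box at t = 0.
sol = solve_ivp(rhs, (0, 50), [1.0, 0.0, 0.0], dense_output=True)
for ti in (0, 10, 20, 30, 40, 50):
    s, v, p = sol.sol(ti)
    print(f"t={ti:2d} yr  soil={s:.3f}  vegetation={v:.3f}  population={p:.4f}")
```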

  6. The Langevin method and Hubbard-like models

    International Nuclear Information System (INIS)

    Gross, M.; Hamber, H.

    1989-01-01

    The authors reexamine the difficulties associated with application of the Langevin method to numerical simulation of models with non-positive definite statistical weights, including the Hubbard model. They show how to avoid the violent crossing of the zeroes of the weight and how to move those nodes away from the real axis. However, it still appears necessary to keep track of the sign (or phase) of the weight
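
    The Hubbard model itself is beyond a short example, but the core difficulty and the complexified Langevin remedy can be shown on a toy weight exp(-x²/2 - iλx), which is not positive definite yet has the exact expectation ⟨x⟩ = -iλ. The variable is complexified while the noise stays real; step size and run length are arbitrary.

```python
import numpy as np

# Toy weight w(x) = exp(-S(x)) with S(x) = x^2/2 + i*lam*x: not positive
# definite, exact expectation <x> = -i*lam. Complex Langevin complexifies
# the variable z while keeping the noise real: dz = -S'(z) dt + dW.
rng = np.random.default_rng(5)
lam, dt, nsteps, nburn = 1.0, 1e-3, 200_000, 20_000

z, acc = 0.0 + 0.0j, 0.0 + 0.0j
for step in range(nsteps):
    z += -(z + 1j * lam) * dt + np.sqrt(2 * dt) * rng.standard_normal()
    if step >= nburn:
        acc += z

zbar = acc / (nsteps - nburn)
print(f"<z> = {zbar:.3f}   exact: {-1j * lam}")
```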

  7. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs. The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  8. An automatic rat brain extraction method based on a deformable surface model.

    Science.gov (United States)

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic image processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
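
    A minimal sketch of the validation idea on an assumed toy pair of models: an individual-based survival process (the "original" stochastic model) versus its deterministic exponential-decay approximation, with the approximation error expressed in units of the original model's stochastic variability.

```python
import numpy as np

rng = np.random.default_rng(6)
n0, k, dt, steps, runs = 200, 0.3, 0.1, 40, 500

# "Original" model: individual-based survival; each individual survives a
# time step with probability exp(-k*dt), so trajectories are stochastic.
traj = np.empty((runs, steps + 1), dtype=int)
traj[:, 0] = n0
for s in range(steps):
    traj[:, s + 1] = rng.binomial(traj[:, s], np.exp(-k * dt))

# Analytical approximation of the same process: N(t) = n0 * exp(-k t).
t = dt * np.arange(steps + 1)
approx = n0 * np.exp(-k * t)

# Validation score: approximation error in units of the original model's
# own stochastic variability; |z| >> 1 would invalidate the approximation.
mu, sd = traj.mean(axis=0), traj.std(axis=0)
z = (approx[1:] - mu[1:]) / sd[1:]
print("max |z| over time:", np.abs(z).max())
```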

10. Research on Multi-Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

This paper mainly studies a multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems that can provide multi-angle, multi-level and multi-stage descriptions of general aerospace embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.
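
    A hedged sketch of the persistence round-trip described above, using Python's pickle as the (assumed) serializer: the in-memory object model is converted to a binary stream and back. The miniature Diagram/Element classes are hypothetical stand-ins for the MDDT integrated model.

```python
import pickle
from dataclasses import dataclass, field
from typing import List

# Hypothetical miniature "integrated model": a diagram of typed, linked
# elements, standing in for the MDDT modeling graphics.
@dataclass
class Element:
    kind: str
    name: str
    links: List[str] = field(default_factory=list)

@dataclass
class Diagram:
    title: str
    elements: List[Element] = field(default_factory=list)

# Data model (objects in memory) -> storage model (binary stream)...
diagram = Diagram("fuel-control", [Element("task", "sensor_poll", ["bus0"])])
stream: bytes = pickle.dumps(diagram)

# ...and back. In a multi-user tool, this stream is what gets written to
# shared storage and synchronized between modelers.
restored = pickle.loads(stream)
assert restored == diagram
print(len(stream), "bytes;", restored.title)
```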

  11. Annular dispersed flow analysis model by Lagrangian method and liquid film cell method

    International Nuclear Information System (INIS)

    Matsuura, K.; Kuchinishi, M.; Kataoka, I.; Serizawa, A.

    2003-01-01

A new annular dispersed flow analysis model was developed. In this model, both droplet behavior and liquid film behavior are analyzed simultaneously. Droplet behavior in turbulent flow is analyzed by the Lagrangian method with a refined stochastic model, while liquid film behavior is simulated by the boundary condition of a moving rough wall and a liquid film cell model, which is used to estimate the liquid film flow rate. The height of the moving rough wall is estimated by a disturbance wave height correlation. In each liquid film cell, the liquid film flow rate is calculated by considering the droplet deposition and entrainment flow rates. The droplet deposition flow rate is calculated by the Lagrangian method and the entrainment flow rate by an entrainment correlation. For the verification of the moving rough wall model, turbulent flow analysis results under annular flow conditions were compared with experimental data, and the agreement was fairly good. Furthermore, annular dispersed flow experiments were analyzed in order to verify the droplet behavior model and the liquid film cell model. The experimental radial distributions of droplet mass flux were compared with the analysis results. The agreement was good under low liquid flow rate conditions but poor under high liquid flow rate conditions; however, by modifying the entrainment rate correlation, the agreement became good even at high liquid flow rates. This means that the basic analysis method for droplet and liquid film behavior is sound. In future work, verification calculations should be carried out under different experimental conditions, and the entrainment rate correlation should also be refined.
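
    The sketch below illustrates only the Lagrangian droplet-tracking ingredient with a stochastic, eddy-interaction style turbulence model: the droplet relaxes toward the instantaneous gas velocity, whose fluctuating part is redrawn once per eddy lifetime. All physical parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# One droplet tracked through a uniform gas stream with isotropic
# turbulence: inside each eddy the fluctuating gas velocity is a fixed
# Gaussian draw; the droplet relaxes toward the instantaneous gas
# velocity with its response time tau_p (linear drag).
u_gas, u_rms = 10.0, 1.2      # mean gas velocity and rms fluctuation (m/s)
tau_p = 5e-3                  # droplet response time (s)
t_eddy = 2e-3                 # eddy interaction time (s)
dt, t_end = 1e-4, 0.2

t, x, v = 0.0, 0.0, 0.0
next_eddy, u_fluct = 0.0, 0.0
while t < t_end:
    if t >= next_eddy:        # sample a new eddy fluctuation
        u_fluct = u_rms * rng.standard_normal()
        next_eddy += t_eddy
    v += dt / tau_p * (u_gas + u_fluct - v)   # drag relaxation
    x += v * dt
    t += dt

print(f"droplet position {x:.3f} m, velocity {v:.2f} m/s")
```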

  12. A New Method to Retrieve the Data Requirements of the Remote Sensing Community – Exemplarily Demonstrated for Hyperspectral User NEEDS

    Directory of Open Access Journals (Sweden)

    Ils Reusen

    2007-08-01

Full Text Available User-driven requirements for remote sensing data are difficult to define, especially details on geometric, spectral and radiometric parameters. Even more difficult is a decent assessment of the required degrees of processing and corresponding data quality. It is therefore a real challenge to appropriately assess data costs and services to be provided. In 2006, the HYRESSA project was initiated within the Framework 6 Programme of the European Commission to analyze the user needs of the hyperspectral remote sensing community. Special focus was given to finding an answer to the key question, “What are the individual user requirements for hyperspectral imagery and its related data products?”. A Value-Benefit Analysis (VBA) was performed to retrieve user needs and address open items accordingly. The VBA is an established tool for systematic problem solving by supporting the possibility of comparing competing projects or solutions. It enables evaluation on the basis of a multidimensional objective model and can be augmented with experts' preferences. After undergoing a VBA, the scaling method (e.g., Law of Comparative Judgment) was applied for achieving the desired ranking judgments. The result, which is the relative value of projects with respect to a well-defined main objective, can therefore be produced analytically using a VBA. A multidimensional objective model adhering to the VBA methodology was established. Thereafter, end users and experts were requested to fill out a Questionnaire of User Needs (QUN) at the highest level of detail - the value indicator level. The end user was additionally requested to report personal preferences for his particular research field. In the end, results from the experts' evaluation and results from a sensor data survey can be compared in order to understand user needs and the drawbacks of existing data products. The investigation – focusing on the needs of the hyperspectral user
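
    A minimal sketch of the value-benefit scoring step: alternatives are rated on weighted criteria and ranked by weighted sum. The criteria, weights and scores below are invented for illustration and are not HYRESSA results.

```python
import numpy as np

criteria = ["spectral range", "radiometric quality", "processing level", "cost"]
weights = np.array([0.35, 0.30, 0.20, 0.15])   # expert preferences, sum to 1

alternatives = {                                # scores on a 1-10 scale
    "sensor A": [8, 6, 7, 4],
    "sensor B": [6, 9, 5, 7],
    "sensor C": [7, 7, 8, 6],
}

# Relative value of each alternative with respect to the main objective.
values = {name: float(weights @ np.array(s)) for name, s in alternatives.items()}
for name, v in sorted(values.items(), key=lambda kv: -kv[1]):
    print(f"{name}: value-benefit score {v:.2f}")
```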

  13. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to the understanding of network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as the power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and the satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
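
    As a hedged illustration of one generator family mentioned above, the sketch below builds a stochastic Kronecker graph: a small initiator probability matrix is Kronecker-powered, and edges are then sampled independently. The initiator parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)

# Stochastic Kronecker graph: Kronecker-power a small initiator probability
# matrix, then sample each (directed) edge independently. The result
# inherits heavy-tailed degrees and self-similar structure.
P1 = np.array([[0.9, 0.5],
               [0.5, 0.2]])         # 2x2 initiator (arbitrary parameters)

P = P1.copy()
for _ in range(7):                  # 2^8 = 256 nodes
    P = np.kron(P, P1)

A = (rng.random(P.shape) < P).astype(int)
print("nodes:", A.shape[0], "directed edges:", int(A.sum()),
      "max out-degree:", int(A.sum(axis=1).max()))
```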

  14. Parents as Teachers Health Literacy Demonstration project: integrating an empowerment model of health literacy promotion into home-based parent education.

    Science.gov (United States)

    Carroll, Lauren N; Smith, Sandra A; Thomson, Nicole R

    2015-03-01

    The Parents as Teachers (PAT) Health Literacy Demonstration project assessed the impact of integrating data-driven reflective practices into the PAT home visitation model to promote maternal health literacy. PAT is a federally approved Maternal, Infant, Early Childhood Home Visiting program with the goal of promoting school readiness and healthy child development. This 2-year demonstration project used an open-cohort longitudinal design to promote parents' interactive and reflective skills, enhance health education, and provide direct assistance to personalize and act on information by integrating an empowerment paradigm into PAT's parent education model. Eight parent educators used the Life Skills Progression instrument to tailor the intervention to each of 103 parent-child dyads. Repeated-measures analysis of variance, paired t tests, and logistic regression combined with qualitative data demonstrated that mothers achieved overall significant improvements in health literacy, and that home visitors are important catalysts for these improvements. These findings support the use of an empowerment model of health education, skill building, and direct information support to enable parents to better manage personal and child health and health care. © 2014 Society for Public Health Education.

  15. A fuzzy inventory model with acceptable shortage using graded mean integration value method

    Science.gov (United States)

    Saranya, R.; Varadarajan, R.

    2018-04-01

In many inventory models uncertainty is due to fuzziness, and fuzziness is the closest possible approach to reality. In this paper, we propose a fuzzy inventory model with acceptable shortage which is completely backlogged. We fuzzify the carrying cost, backorder cost and ordering cost using triangular and trapezoidal fuzzy numbers to obtain the fuzzy total cost. The purpose of our study is to defuzzify the total cost function by the graded mean integration value method. Furthermore, a numerical example is given to demonstrate the developed crisp and fuzzy models.
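
    The graded mean integration value has a simple closed form (due to Chen and Hsieh): (a + 4b + c)/6 for a triangular fuzzy number (a, b, c), and (a + 2b + 2c + d)/6 for a trapezoidal one (a, b, c, d). The sketch below defuzzifies hypothetical fuzzy cost components; the numbers are not from the paper.

```python
def gmiv_triangular(a, b, c):
    """Graded mean integration value of a triangular fuzzy number:
    P(A) = (a + 4b + c) / 6."""
    return (a + 4 * b + c) / 6.0

def gmiv_trapezoidal(a, b, c, d):
    """Graded mean integration value of a trapezoidal fuzzy number:
    P(A) = (a + 2b + 2c + d) / 6."""
    return (a + 2 * b + 2 * c + d) / 6.0

# Hypothetical fuzzy cost components of an inventory model, as
# triangular fuzzy numbers (lower, modal, upper).
ordering = (90.0, 100.0, 115.0)
carrying = (4.0, 5.0, 5.5)
backorder = (18.0, 20.0, 24.0)

crisp = [gmiv_triangular(*fz) for fz in (ordering, carrying, backorder)]
print("defuzzified costs:", [round(v, 2) for v in crisp])
print("defuzzified total:", round(sum(crisp), 2))
```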

  16. Robust Control Mixer Method for Reconfigurable Control Design Using Model Matching Strategy

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Blanke, Mogens; Verhagen, Michel

    2007-01-01

A novel control mixer method for reconfigurable control designs is developed. The proposed method extends the matrix-form of the conventional control mixer concept into an LTI dynamic system-form. The H_inf control technique is employed for these dynamic module designs after an augmented control...... system is constructed through a model-matching strategy. The stability, performance and robustness of the reconfigured system can be guaranteed when some conditions are satisfied. To illustrate the effectiveness of the proposed method, a robot system subjected to failures is used to demonstrate

  17. Evaluation of methods and tools to develop safety concepts and to demonstrate safety for an HLW repository in salt. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Bollingerfehr, W.; Buhmann, D.; Doerr, S.; and others

    2017-03-15

Salt formations have been the preferred option as host rocks for the disposal of high level radioactive waste in Germany for more than 40 years. During this period comprehensive geological investigations have been carried out together with a broad spectrum of concept and safety related R and D work. The behaviour of an HLW repository in salt formations, particularly in salt domes, has been analysed in terms of assessment of the total system performance. This was first carried out for concepts of generic waste repositories in salt and, since 1998, for a repository concept with specific boundary conditions, taking the geology of the Gorleben salt dome as an example. Suitable repository concepts and designs were developed, the technical feasibility has been proven and operational and long-term safety evaluated. Numerical modelling is an important input into the development of a comprehensive safety case for a waste repository. Significant progress in the development of numerical tools and their application for long-term safety assessment has been made in the last two decades. An integrated approach has been used in which the repository concept and relevant scientific and engineering data are combined with the results from iterative safety assessments to increase the clarity and the traceability of the evaluation. A safety concept that takes full credit of the favourable properties of salt formations was developed in the course of the R and D project ISIBEL, which started in 2005. This concept is based on the safe containment of radioactive waste in a specific part of the host rock formation, termed the containment providing rock zone, which comprises the geological barrier, the geotechnical barriers and the compacted backfill. The future evolution of the repository system will be analysed using a catalogue of Features, Events and Processes (FEP), scenario development and numerical analysis, all of which are adapted to suit the safety concept. Key elements of the

  18. Evaluation of methods and tools to develop safety concepts and to demonstrate safety for an HLW repository in salt. Final report

    International Nuclear Information System (INIS)

    Bollingerfehr, W.; Buhmann, D.; Doerr, S.

    2017-03-01

Salt formations have been the preferred option as host rocks for the disposal of high level radioactive waste in Germany for more than 40 years. During this period comprehensive geological investigations have been carried out together with a broad spectrum of concept and safety related R and D work. The behaviour of an HLW repository in salt formations, particularly in salt domes, has been analysed in terms of assessment of the total system performance. This was first carried out for concepts of generic waste repositories in salt and, since 1998, for a repository concept with specific boundary conditions, taking the geology of the Gorleben salt dome as an example. Suitable repository concepts and designs were developed, the technical feasibility has been proven and operational and long-term safety evaluated. Numerical modelling is an important input into the development of a comprehensive safety case for a waste repository. Significant progress in the development of numerical tools and their application for long-term safety assessment has been made in the last two decades. An integrated approach has been used in which the repository concept and relevant scientific and engineering data are combined with the results from iterative safety assessments to increase the clarity and the traceability of the evaluation. A safety concept that takes full credit of the favourable properties of salt formations was developed in the course of the R and D project ISIBEL, which started in 2005. This concept is based on the safe containment of radioactive waste in a specific part of the host rock formation, termed the containment providing rock zone, which comprises the geological barrier, the geotechnical barriers and the compacted backfill. The future evolution of the repository system will be analysed using a catalogue of Features, Events and Processes (FEP), scenario development and numerical analysis, all of which are adapted to suit the safety concept. Key elements of the

  19. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

Comparative risk studies make use of a large number of methods and models based upon a set of incompletely formulated assumptions or value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and Bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; and decision theory in the presence of uncertainty and multiple objectives. The purpose and prospects of comparative studies are assessed in view of probable diminishing returns for large generic comparisons.

  20. Toric Lego: A method for modular model building

    CERN Document Server

    Balasubramanian, Vijay; García-Etxebarria, Iñaki

    2010-01-01

    Within the context of local type IIB models arising from branes at toric Calabi-Yau singularities, we present a systematic way of joining any number of desired sectors into a consistent theory. The different sectors interact via massive messengers with masses controlled by tunable parameters. We apply this method to a toy model of the minimal supersymmetric standard model (MSSM) interacting via gauge mediation with a metastable supersymmetry breaking sector and an interacting dark matter sector. We discuss how a mirror procedure can be applied in the type IIA case, allowing us to join certain intersecting brane configurations through massive mediators.