WorldWideScience

Sample records for methods modeling demonstrated

  1. Demonstrating sustainable energy: A review-based model of sustainable energy demonstration projects

    NARCIS (Netherlands)

    Bossink, Bart

    2017-01-01

    This article develops a model of sustainable energy demonstration projects, based on a review of 229 scientific publications on demonstrations in renewable and sustainable energy. The model addresses the basic organizational characteristics (aim, cooperative form, and physical location) and learning

  2. Demonstrations in Solute Transport Using Dyes: Part II. Modeling.

    Science.gov (United States)

    Butters, Greg; Bandaranayake, Wije

    1993-01-01

    A solution of the convection-dispersion equation is used to describe the solute breakthrough curves generated in the demonstrations in the companion paper. Estimation of the best fit model parameters (solute velocity, dispersion, and retardation) is illustrated using the method of moments for an example data set. (Author/MDH)
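
    To make the temporal-moment relations concrete, the following minimal Python sketch recovers the convection-dispersion equation (CDE) parameters from a breakthrough curve. It is not the authors' code; the synthetic data and all names are hypothetical, and the standard CDE moment relations (mean arrival time L/v, temporal variance 2DL/v^3) are assumed.

        import numpy as np
        from scipy.integrate import trapezoid

        def moments_cde(t, c, L):
            """Estimate solute velocity v and dispersion coefficient D from a
            breakthrough curve c(t) observed at travel distance L, using the
            CDE moment relations: mean time = L/v, variance = 2*D*L/v**3."""
            m0 = trapezoid(c, t)                        # zeroth moment (recovered mass)
            t1 = trapezoid(t * c, t) / m0               # mean arrival time
            s2 = trapezoid((t - t1) ** 2 * c, t) / m0   # temporal variance
            v = L / t1
            D = s2 * v ** 3 / (2 * L)
            return v, D

        # Synthetic Gaussian breakthrough curve consistent with v=2, D=0.5, L=5
        t = np.linspace(0.1, 10.0, 400)
        v0, D0, L0 = 2.0, 0.5, 5.0
        c = np.exp(-(t - L0 / v0) ** 2 / (2 * (2 * D0 * L0 / v0 ** 3)))
        print(moments_cde(t, c, L0))   # approximately (2.0, 0.5)

    For a retarded solute, the retardation factor would follow as the ratio of the tracer's mean arrival time to that of water.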

  3. A Bayesian statistical method for quantifying model form uncertainty and two model combination methods

    International Nuclear Information System (INIS)

    Park, Inseok; Grandhi, Ramana V.

    2014-01-01

    Apart from parametric uncertainty, model form uncertainty as well as prediction error may be involved in the analysis of engineering systems. Model form uncertainty, inherently present in selecting the best approximation from a model set, cannot be ignored, especially when the predictions by competing models show significant differences. In this research, a methodology based on maximum likelihood estimation is presented to quantify model form uncertainty using the measured differences between experimental and model outcomes, and is compared with a fully Bayesian estimation to demonstrate its effectiveness. While a method called the adjustment factor approach is utilized to propagate model form uncertainty alone into the prediction of a system response, a method called model averaging is utilized to incorporate both model form uncertainty and prediction error into it. A numerical problem of concrete creep is used to demonstrate the processes for quantifying model form uncertainty and implementing the adjustment factor approach and model averaging. Finally, the presented methodology is applied to characterize the engineering benefits of a laser peening process
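
    As a concrete illustration of likelihood-weighted model averaging, here is a minimal Python sketch. It is not the paper's implementation; the measurement error, model predictions, and equal-prior weighting are all illustrative assumptions.

        import numpy as np

        y_obs = np.array([1.02, 1.10, 1.21])       # experimental outcomes
        preds = np.array([[1.00, 1.08, 1.18],      # model 1 predictions
                          [1.05, 1.15, 1.30],      # model 2 predictions
                          [0.90, 0.95, 1.00]])     # model 3 predictions
        sigma = 0.05                               # assumed prediction error (std)

        # Likelihood-based weights (equal priors), computed from the measured
        # differences between experimental and model outcomes
        loglik = -0.5 * np.sum((preds - y_obs) ** 2, axis=1) / sigma ** 2
        w = np.exp(loglik - loglik.max())
        w /= w.sum()

        # Model-averaged prediction at a new condition: the variance combines
        # prediction error (within-model) and model form uncertainty (between-model)
        y_new = np.array([1.30, 1.45, 1.05])
        mean = w @ y_new
        var = sigma ** 2 + w @ (y_new - mean) ** 2
        print(mean, var ** 0.5)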

  4. Pulsatile fluidic pump demonstration and predictive model application

    International Nuclear Information System (INIS)

    Morgan, J.G.; Holland, W.D.

    1986-04-01

    Pulsatile fluidic pumps were developed as a remotely controlled method of transferring or mixing feed solutions. A test in the Integrated Equipment Test facility demonstrated the performance of a critically safe geometry pump suitable for use in a 0.1-ton/d heavy metal (HM) fuel reprocessing plant. A predictive model was developed to calculate output flows under a wide range of external system conditions. Predictive and experimental flow rates are compared for both submerged and unsubmerged fluidic pump cases

  5. Background Model for the Majorana Demonstrator

    Science.gov (United States)

    Cuesta, C.; Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J.; Leviner, L. E.; Loach, J. C.; MacMullin, J.; MacMullin, S.; Martin, R. D.; Meijer, S.; Mertens, S.; Nomachi, M.; Orrell, J. L.; O'Shaughnessy, C.; Overman, N. R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Schubert, A. G.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Suriano, A. M.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.

    The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detector technology. The effectiveness of these methods is assessed using simulations of the different background components whose purity levels are constrained from radioassay measurements.

  6. Demonstration and evaluation of a method for assessing mediated moderation.

    Science.gov (United States)

    Morgan-Lopez, Antonio A; MacKinnon, David P

    2006-02-01

    Mediated moderation occurs when the interaction between two variables affects a mediator, which then affects a dependent variable. In this article, we describe the mediated moderation model and evaluate it with a statistical simulation using an adaptation of product-of-coefficients methods to assess mediation. We also demonstrate the use of this method with a substantive example from the adolescent tobacco literature. In the simulation, relative bias (RB) in point estimates and standard errors did not exceed problematic levels of ±10%, although systematic variability in RB was accounted for by parameter size, sample size, and nonzero direct effects. Power to detect mediated moderation effects appears to be severely compromised under one particular combination of conditions: when the component variables that make up the interaction terms are correlated and partial mediated moderation exists. Implications for the estimation of mediated moderation effects in experimental and nonexperimental research are discussed.
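
    The product-of-coefficients logic is easy to state in code. The sketch below, which is illustrative rather than the authors' simulation, estimates the mediated moderation effect as the product of the XZ-to-mediator path (a) and the mediator-to-outcome path (b), with a first-order (Sobel-type) standard error.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        x, z = rng.normal(size=n), rng.normal(size=n)
        xz = x * z                                               # moderation term
        m = 0.4 * x + 0.3 * z + 0.5 * xz + rng.normal(size=n)    # mediator
        y = 0.2 * x + 0.6 * m + rng.normal(size=n)               # outcome

        # Path a: effect of the interaction XZ on the mediator M
        fit_a = sm.OLS(m, sm.add_constant(np.column_stack([x, z, xz]))).fit()
        a, sa = fit_a.params[3], fit_a.bse[3]

        # Path b: effect of M on Y, controlling for X, Z, and XZ
        fit_b = sm.OLS(y, sm.add_constant(np.column_stack([x, z, xz, m]))).fit()
        b, sb = fit_b.params[4], fit_b.bse[4]

        ab = a * b                                         # mediated moderation effect
        se = np.sqrt(a ** 2 * sb ** 2 + b ** 2 * sa ** 2)  # Sobel standard error
        print(ab, ab / se)                                 # estimate and z-statistic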

  7. Proceedings of the workshop on review of dose modeling methods for demonstration of compliance with the radiological criteria for license termination

    International Nuclear Information System (INIS)

    Nicholson, T.J.; Parrott, J.D.

    1998-05-01

    The workshop was one in a series to support NRC staff development of guidance for implementing the final rule on "Radiological Criteria for License Termination." The workshop topics included discussion of: dose models used for decommissioning reviews; identification of criteria for evaluating the acceptability of dose models; and selection of parameter values for demonstrating compliance with the final rule. The 2-day public workshop was jointly organized by RES and NMSS staff responsible for reviewing dose modeling methods used in decommissioning reviews. The workshop was noticed in the Federal Register (62 FR 51706). The workshop presenters included: NMSS and RES staff, who discussed both dose modeling needs for licensing reviews and the development of guidance related to dose modeling and parameter selection; DOE national laboratory scientists, who responded to questions developed earlier by NRC staff and discussed their various Federally sponsored dose models (i.e., the DandD, RESRAD, and MEPAS codes); and an EPA scientist, who presented details on the EPA dose assessment model (i.e., the PRESTO code). The workshop was formatted to provide opportunities for the attendees to observe computer demonstrations of the dose codes presented. More than 120 attendees participated, including staff from NRC Headquarters and the Regions and from Agreement States; industry representatives and consultants; scientists from EPA, DOD, DNFSB, DOE, and the national laboratories; and interested members of the public. A complete transcript of the workshop, including viewgraphs and attendance lists, is available in the NRC Public Document Room. This NUREG/CP documents the formal presentations made during the workshop, and provides a preface outlining the workshop's focus, objectives, background, topics and questions provided to the invited speakers, and those raised during the panel discussion. NUREG/CP-0163 also provides technical bases supporting the development of decommissioning

  8. A Pattern-Oriented Approach to a Methodical Evaluation of Modeling Methods

    Directory of Open Access Journals (Sweden)

    Michael Amberg

    1996-11-01

    The paper describes a pattern-oriented approach to evaluate modeling methods and to compare various methods with each other from a methodical viewpoint. A specific set of principles (the patterns) is defined by investigating the notations and the documentation of comparable modeling methods. Each principle helps to examine some parts of the methods from a specific point of view. All principles together lead to an overall picture of the method under examination. First the core ("method neutral") meaning of each principle is described. Then the methods are examined regarding the principle. Afterwards the method-specific interpretations are compared with each other and with the core meaning of the principle. By this procedure, the strengths and weaknesses of modeling methods regarding methodical aspects are identified. The principles are described uniformly using a principle description template, following the descriptions of object-oriented design patterns. The approach is demonstrated by evaluating a business process modeling method.

  9. Demonstration model of LEP bending magnet

    CERN Multimedia

    CERN PhotoLab

    1981-01-01

    To save iron and raise the flux density, the LEP bending magnet laminations were separated by spacers and the space between the laminations was filled with concrete. This is a demonstration model, part of it with the spaced laminations only, the other part filled with concrete.

  10. Risk-Informed Monitoring, Verification and Accounting (RI-MVA). An NRAP White Paper Documenting Methods and a Demonstration Model for Risk-Informed MVA System Design and Operations in Geologic Carbon Sequestration

    Energy Technology Data Exchange (ETDEWEB)

    Unwin, Stephen D.; Sadovsky, Artyom; Sullivan, E. C.; Anderson, Richard M.

    2011-09-30

    This white paper accompanies a demonstration model that implements methods for the risk-informed design of monitoring, verification and accounting (RI-MVA) systems in geologic carbon sequestration projects. The intent is that this model will ultimately be integrated with, or interfaced with, the National Risk Assessment Partnership (NRAP) integrated assessment model (IAM). The RI-MVA methods described here apply optimization techniques in the analytical environment of NRAP risk profiles to allow systematic identification and comparison of the risk and cost attributes of MVA design options.

  11. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus. Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimation for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a

  12. Numerical methods and modelling for engineering

    CERN Document Server

    Khoury, Richard

    2016-01-01

    This textbook provides a step-by-step approach to numerical methods in engineering modelling. The authors provide a consistent treatment of the topic, from the ground up, to reinforce for students that numerical methods are a set of mathematical modelling tools which allow engineers to represent real-world systems and compute features of these systems with a predictable error rate. Each method presented addresses a specific type of problem, namely root-finding, optimization, integral, derivative, initial value problem, or boundary value problem, and each one encompasses a set of algorithms to solve the problem given some information and to a known error bound. The authors demonstrate that after developing a proper model and understanding of the engineering situation they are working on, engineers can break down a model into a set of specific mathematical problems, and then implement the appropriate numerical methods to solve these problems. Uses a “building-block” approach, starting with simpler mathemati...

  13. Modelling and Simulation of National Electronic Product Code Network Demonstrator Project

    Science.gov (United States)

    Mo, John P. T.

    The National Electronic Product Code (EPC) Network Demonstrator Project (NDP) was the first large-scale consumer goods track and trace investigation in the world using the full EPC protocol system for applying RFID technology in supply chains. The NDP demonstrated the methods of sharing information securely using the EPC Network, providing authentication to interacting parties, and enhancing the ability to track and trace movement of goods within the entire supply chain involving transactions among multiple enterprises. Due to project constraints, the actual run of the NDP lasted only 3 months and could not produce consolidated quantitative results. This paper discusses the modelling and simulation of activities in the NDP in a discrete event simulation environment and provides an estimation of the potential benefits that could be derived from the NDP if it were continued for one whole year.

  14. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    Science.gov (United States)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences system veracity. However, current IR smoke models cannot provide high veracity, because certain physical characteristics are frequently ignored in fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on an enhanced DPM is built and a dynamic computing fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher veracity.

  15. Modeling Nanoscale FinFET Performance by a Neural Network Method

    Directory of Open Access Journals (Sweden)

    Jin He

    2017-07-01

    This paper presents a neural network method to model nanometer FinFET performance. The principle of the method is first introduced, and its application in modeling the DC and conductance characteristics of a nanoscale FinFET transistor is demonstrated in detail. It is shown that this method needs no parameter extraction routine, while its prediction of transistor performance has a small relative error, within 1% of measured data; thus the new method is as accurate as the physics-based surface potential model.
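
    A minimal sketch of the idea, fitting a small neural network to bias-to-current data; the device surface here is a synthetic stand-in, not measured FinFET data, and the network size is an arbitrary choice.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        X = rng.uniform(0.0, 1.0, size=(2000, 2))        # (Vgs, Vds) bias pairs

        def surface(X):                                  # hypothetical I-V surface
            return np.maximum(X[:, 0] - 0.3, 0.0) ** 2 * np.tanh(5.0 * X[:, 1])

        # The network maps bias directly to current: no parameter extraction step
        net = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(32, 32),
                                         max_iter=5000, random_state=0))
        net.fit(X, surface(X))

        X_test = rng.uniform(0.0, 1.0, size=(200, 2))
        truth = surface(X_test)
        rel_err = np.abs(net.predict(X_test) - truth) / (np.abs(truth) + 1e-9)
        print("median relative error:", np.median(rel_err))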

  16. A low-cost approach for rapidly creating demonstration models for hands-on learning

    Science.gov (United States)

    Kinzli, Kristoph-Dietrich; Kunberger, Tanya; O'Neill, Robert; Badir, Ashraf

    2018-01-01

    Demonstration models allow students to readily grasp theory and relate difficult concepts and equations to real life. However, drawbacks of these demonstration models are that they can be costly to purchase from vendors or can take a significant amount of time to build. These two limiting factors can pose a significant obstacle for adding demonstrations to the curriculum. This article presents an assignment to overcome these obstacles, which has resulted in 36 demonstration models being added to the curriculum. The article also presents the results of student performance on course objectives as a result of the developed models being used in the classroom. Overall, significant improvement in student learning outcomes, due to the addition of demonstration models, has been observed.

  17. Introduction to Methods Demonstrations for Authentication

    International Nuclear Information System (INIS)

    Kouzes, Richard T.; Hansen, Randy R.; Pitts, W. K.

    2002-01-01

    During the Trilateral Initiative Technical Workshop on Authentication and Certification, PNNL will demonstrate some authentication technologies. This paper briefly describes the motivation for these demonstrations and provides background on them

  18. A demonstration of mixed-methods research in the health sciences.

    Science.gov (United States)

    Katz, Janet; Vandermause, Roxanne; McPherson, Sterling; Barbosa-Leiker, Celestina

    2016-11-18

    Background The growth of patient, community and population-centred nursing research is a rationale for the use of research methods that can examine complex healthcare issues not only from a biophysical perspective, but also from cultural, psychosocial and political viewpoints. This need for multiple perspectives requires mixed-methods research. Philosophy and practicality are needed to plan and conduct mixed-methods research and to make it more broadly accessible to the health sciences research community. The traditions of, and dichotomy between, qualitative and quantitative research make the application of mixed methods a challenge. Aim To propose an integrated model for a research project containing steps from start to finish, and to use the unique strengths brought by each approach to meet the health needs of patients and communities. Discussion Mixed-methods research is a practical approach to inquiry that focuses on asking questions and how best to answer them to improve the health of individuals, communities and populations. An integrated model of research begins with the research question(s) and moves in a continuum. The lines dividing methods do not dissolve, but become permeable boundaries where two or more methods can be used to answer research questions more completely. Rigorous and expert methodologists work together to solve common problems. Conclusion Mixed-methods research enables discussion among researchers from varied traditions. There is a plethora of methodological approaches available. Combining expertise by communicating across disciplines and professions is one way to tackle large and complex healthcare issues. Implications for practice The model presented in this paper exemplifies the integration of multiple approaches in a unified focus on identified phenomena. The dynamic nature of the model signals a need to be open to the data generated and the methodological directions implied by findings.

  19. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
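
    The AIC comparison itself is straightforward to reproduce with a standard EM implementation; the sketch below uses scikit-learn's GaussianMixture and the ordinary AIC rather than the authors' modified criterion, so it only illustrates the discrimination step.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(42)
        # Sample from a two-component mixture, then compare candidate models
        x = np.concatenate([rng.normal(0.0, 1.0, 700),
                            rng.normal(4.0, 0.5, 300)])[:, None]

        g1 = GaussianMixture(n_components=1, random_state=0).fit(x)  # single Gaussian
        g2 = GaussianMixture(n_components=2, random_state=0).fit(x)  # EM-fitted mixture

        # AIC = 2k - 2 ln L; the smaller value indicates the preferred model
        print("AIC, single Gaussian:", g1.aic(x))
        print("AIC, mixture        :", g2.aic(x))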

  20. Bayesian maximum entropy integration of ozone observations and model predictions: an application for attainment demonstration in North Carolina.

    Science.gov (United States)

    de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L

    2010-08-01

    States in the USA are required to demonstrate future compliance with criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. The weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.
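
    The following toy Python sketch conveys the core intuition only: hard (monitor) data keep a small variance, while model output enters with a variance inflated according to model performance, so the fused estimate gives precedence to measurements. It is a drastic simplification of the BME framework, with all numbers hypothetical.

        def fuse(z_obs, var_obs, z_model, var_model):
            """Precision-weighted combination of an observation and a model
            output; var_model would be set from model performance statistics."""
            w_obs, w_mod = 1.0 / var_obs, 1.0 / var_model
            mean = (w_obs * z_obs + w_mod * z_model) / (w_obs + w_mod)
            return mean, 1.0 / (w_obs + w_mod)

        # Monitored site: the low-variance measurement dominates the estimate;
        # at unmonitored locations only the model term would remain
        print(fuse(z_obs=78.0, var_obs=4.0, z_model=85.0, var_model=36.0))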

  1. A Method of Upgrading a Hydrostatic Model to a Nonhydrostatic Model

    Directory of Open Access Journals (Sweden)

    Chi-Sann Liou

    2009-01-01

    As the sigma-p coordinate under the hydrostatic approximation can be interpreted as the mass coordinate without the hydrostatic approximation, we propose a method that upgrades a hydrostatic model to a nonhydrostatic model with relatively little effort. The method adds to the primitive equations the extra terms omitted by the hydrostatic approximation and two prognostic equations for vertical speed w and nonhydrostatic part pressure p'. With properly formulated governing equations, at each time step, the dynamic part of the model is first integrated as in the original hydrostatic model, and then nonhydrostatic contributions are added as corrections to the hydrostatic solutions. In applying physical parameterizations after the dynamic part integration, all physics packages of the original hydrostatic model can be directly used in the nonhydrostatic model, since the upgraded nonhydrostatic model shares the same vertical coordinates with the original hydrostatic model. In this way, the majority of the nonhydrostatic model code comes from the original hydrostatic model; extra code is needed only for the calculations additional to the primitive equations. In order to handle sound waves, we use smaller time steps in the nonhydrostatic part dynamic time integration, with a split-explicit scheme for horizontal momentum and temperature and a semi-implicit scheme for w and p'. Simulations of 2-dimensional mountain waves and density flows associated with a cold bubble have been used to test the method. The idealized case tests demonstrate that the proposed method realistically simulates the nonhydrostatic effects on different atmospheric circulations that are revealed in theoretical solutions and simulations from other nonhydrostatic models. This method can be used to upgrade any global or mesoscale model from hydrostatic to nonhydrostatic.

  2. A demonstration of adjoint methods for multi-dimensional remote sensing of the atmosphere and surface

    International Nuclear Information System (INIS)

    Martin, William G.K.; Hasekamp, Otto P.

    2018-01-01

    Highlights:
    • We demonstrate adjoint methods for atmospheric remote sensing in a two-dimensional setting.
    • Searchlight functions are used to handle the singularity of measurement response functions.
    • Adjoint methods require two radiative transfer calculations to evaluate the measurement misfit function and its derivatives with respect to all unknown parameters.
    • Synthetic retrieval studies show the scalability of adjoint methods to problems with thousands of measurements and unknown parameters.
    • Adjoint methods and the searchlight function technique are generalizable to 3D remote sensing.
    Abstract: In previous work, we derived the adjoint method as a computationally efficient path to three-dimensional (3D) retrievals of clouds and aerosols. In this paper we will demonstrate the use of adjoint methods for retrieving two-dimensional (2D) fields of cloud extinction. The demonstration uses a new 2D radiative transfer solver (FSDOM). This radiation code was augmented with adjoint methods to allow efficient derivative calculations needed to retrieve cloud and surface properties from multi-angle reflectance measurements. The code was then used in three synthetic retrieval studies. Our retrieval algorithm adjusts the cloud extinction field and surface albedo to minimize the measurement misfit function with a gradient-based, quasi-Newton approach. At each step we compute the value of the misfit function and its gradient with two calls to the solver FSDOM. First we solve the forward radiative transfer equation to compute the residual misfit with measurements, and second we solve the adjoint radiative transfer equation to compute the gradient of the misfit function with respect to all unknowns. The synthetic retrieval studies verify that adjoint methods are scalable to retrieval problems with many measurements and unknowns. We can retrieve the vertically-integrated optical depth of moderately thick clouds as a function of the horizontal coordinate. It is also
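
    The two-solves-per-iteration structure maps directly onto a quasi-Newton optimizer. In the sketch below a linear toy operator A stands in for the solver FSDOM, so the "forward" and "adjoint" solves are plain matrix products; everything else mirrors the misfit-plus-gradient loop the abstract describes, with all names and data hypothetical.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n_meas, n_unknown = 300, 100
        A = rng.normal(size=(n_meas, n_unknown))     # toy measurement operator
        x_true = np.abs(rng.normal(size=n_unknown))  # "true" extinction field
        y_meas = A @ x_true + 0.01 * rng.normal(size=n_meas)

        def misfit_and_gradient(x):
            r = A @ x - y_meas              # one "forward" solve: residual misfit
            return 0.5 * r @ r, A.T @ r     # one "adjoint" solve: full gradient

        # Gradient-based, quasi-Newton retrieval; the cost per iteration is two
        # solver calls regardless of the number of unknowns
        res = minimize(misfit_and_gradient, np.zeros(n_unknown),
                       jac=True, method="L-BFGS-B")
        print(np.linalg.norm(res.x - x_true) / np.linalg.norm(x_true))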

  3. Demonstration test of in-service inspection methods

    International Nuclear Information System (INIS)

    Takumi, Kenji

    1987-01-01

    The major objectives of the project are: (1) to demonstrate the reliability of manual ultrasonic flaw detectors and techniques that are used in operating light water reactor plants and (2) to demonstrate the performance and reliability of an automatic ultrasonic flaw detector that is designed to shorten the time required for ISI work and reduce the exposure risk of inspection personnel. The test project consists of three stages. In the first stage, which ended in 1982, defects were added intentionally to a model structure the same size as that of a typical 1.1 million kW BWR plant, and manual ultrasonic flaw detection testing was performed. In the second stage, completed in 1984, automatic eddy-current flaw detection testing was carried out for defects in heat transfer piping of a PWR steam generator. In the third stage, which started in 1981 and ended in March 1987, a newly developed automatic ultrasonic flaw detector was applied to testing of the defects used for the manual detector performance evaluation. Results have shown that the automatic eddy-current flaw detector under test has adequately stable performance for practical use, with a very high reproducibility that permits close inspection of secular deterioration in heat transfer pipes. It has also revealed that both the manual and automatic ultrasonic flaw detectors under test can detect all defects that do not comply with the ASME standards. (Nogami, K.)

  4. SRC-I demonstration plant analytical laboratory methods manual. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    Klusaritz, M.L.; Tewari, K.C.; Tiedge, W.F.; Skinner, R.W.; Znaimer, S.

    1983-03-01

    This manual is a compilation of analytical procedures required for operation of a Solvent-Refined Coal (SRC-I) demonstration or commercial plant. Each method reproduced in full includes a detailed procedure, a list of equipment and reagents, safety precautions, and, where possible, a precision statement. Procedures for the laboratory's environmental and industrial hygiene modules are not included. Required American Society for Testing and Materials (ASTM) methods are cited, and ICRC's suggested modifications to these methods for handling coal-derived products are provided.

  5. Demonstration of improved seismic source inversion method of tele-seismic body wave

    Science.gov (United States)

    Yagi, Y.; Okuwaki, R.

    2017-12-01

    Seismic rupture inversion of tele-seismic body waves has been widely applied to studies of large earthquakes. In general, tele-seismic body waves contain information on the overall rupture process of a large earthquake, but they are inappropriate for analyzing the detailed rupture process of an M6-7 class earthquake. Recently, the quality and quantity of tele-seismic data and the inversion method have been greatly improved. The improved data and method enable us to study the detailed rupture process of an M6-7 class earthquake even if we use only tele-seismic body waves. In this study, we demonstrate the ability of the improved data and method through analyses of the 2016 Rieti, Italy earthquake (Mw 6.2) and the 2016 Kumamoto, Japan earthquake (Mw 7.0), both of which have been well investigated using InSAR data sets and field observations. We assumed the rupture to occur on a single fault plane model inferred from the moment tensor solutions and the aftershock distribution. We constructed spatiotemporal discretized slip-rate functions with patches arranged as closely as possible. We performed inversions using several fault models and found that the spatiotemporal location of the large slip-rate area was robust. In the 2016 Kumamoto, Japan earthquake, the slip-rate distribution shows that the rupture propagated to the southwest during the first 5 s. At 5 s after the origin time, the main rupture started to propagate toward the northeast. The first and second episodes correspond to rupture propagation along the Hinagu fault and the Futagawa fault, respectively. In the 2016 Rieti, Italy earthquake, the slip-rate distribution shows that the rupture propagated in the up-dip direction during the first 2 s, and then propagated toward the northwest. From both analyses, we propose that the spatiotemporal slip-rate distribution estimated by the improved inversion method of tele-seismic body waves has enough information to study the detailed rupture process of M6-7 class earthquakes.

  6. Effectiveness of Video Demonstration over Conventional Methods in Teaching Osteology in Anatomy.

    Science.gov (United States)

    Viswasom, Angela A; Jobby, Abraham

    2017-02-01

    Technology and its applications are advancing rapidly throughout the world, and the field of medical education is no exception. This study evaluated whether conventional methods can stand the test of technology: a comparative study of the traditional method of teaching osteology in human anatomy against an innovative visually aided method. The study was conducted on 94 students admitted to the MBBS 2014 to 2015 batch of Travancore Medical College. The students were divided into two academically validated groups. They were taught using conventional and video demonstration techniques in a systematic manner, and post-evaluation tests were conducted. Analysis of the mark pattern revealed that the group taught using the traditional method scored better than the group taught using the visually aided method. Feedback analysis showed that the students were able to identify bony features better, with clear visualisation and a three-dimensional view, when taught using the video demonstration method. The students identified the visually aided method as the more interesting one for learning, which helped them in applying the knowledge gained. In most of the questions asked, the two methods of teaching were found to be comparable on the same scale. As the study concludes, we find that no new technique can substitute for time-tested techniques of teaching and learning. The ideal method would incorporate newer multimedia techniques into traditional classes.

  7. Development and demonstration of a validation methodology for vehicle lateral dynamics simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Kutluay, Emir

    2013-02-01

    In this thesis a validation methodology to be used in the assessment of vehicle dynamics simulation models is presented. Simulation of vehicle dynamics is used to estimate the dynamic responses of existing or proposed vehicles and has a wide array of applications in the development of vehicle technologies. Although simulation environments, measurement tools and mathematical theories on vehicle dynamics are well established, the methodical link between the experimental test data and the validity analysis of the simulation model is still lacking. The developed validation paradigm has a top-down approach to the problem. It is ascertained that vehicle dynamics simulation models can only be validated using test maneuvers, although they are aimed at real-world maneuvers. Test maneuvers are determined according to the requirements of the real event at the start of the model development project, and data handling techniques, validation metrics and criteria are declared for each of the selected maneuvers. If the simulation results satisfy these criteria, then the simulation is deemed "not invalid". If the simulation model fails to meet the criteria, the model is deemed invalid, and model iteration should be performed. The results are analyzed to determine if they indicate a modeling error or a modeling inadequacy, and if a conditional validity in terms of system variables can be defined. Three test cases are used to demonstrate the application of the methodology. The developed methodology successfully identified the shortcomings of the tested simulation model and defined the limits of application. The tested simulation model is found to be acceptable but valid only in a certain dynamical range. Several insights into the deficiencies of the model are reported in the analysis, but the iteration step of the methodology is not demonstrated. Utilizing the proposed methodology will help to achieve more time and cost efficient simulation projects with

  8. Demonstration of the gypsy moth energy budget microclimate model

    Science.gov (United States)

    D. E. Anderson; D. R. Miller; W. E. Wallner

    1991-01-01

    The use of a "user friendly" version of the "GMMICRO" model to quantify the local environment and resulting core temperature of gypsy moth (GM) larvae under different conditions of canopy defoliation, different forest sites, and different weather conditions was demonstrated.

  9. Modeling framework for representing long-term effectiveness of best management practices in addressing hydrology and water quality problems: Framework development and demonstration using a Bayesian method

    Science.gov (United States)

    Liu, Yaoze; Engel, Bernard A.; Flanagan, Dennis C.; Gitau, Margaret W.; McMillan, Sara K.; Chaubey, Indrajeet; Singh, Shweta

    2018-05-01

    Best management practices (BMPs) are popular approaches used to improve hydrology and water quality. Uncertainties in BMP effectiveness over time may result in overestimating long-term efficiency in watershed planning strategies. To represent varying long-term BMP effectiveness in hydrologic/water quality models, a high level and forward-looking modeling framework was developed. The components in the framework consist of establishment period efficiency, starting efficiency, efficiency for each storm event, efficiency between maintenance, and efficiency over the life cycle. Combined, they represent long-term efficiency for a specific type of practice and specific environmental concern (runoff/pollutant). An approach for possible implementation of the framework was discussed. The long-term impacts of grass buffer strips (agricultural BMP) and bioretention systems (urban BMP) in reducing total phosphorus were simulated to demonstrate the framework. Data gaps were captured in estimating the long-term performance of the BMPs. A Bayesian method was used to match the simulated distribution of long-term BMP efficiencies with the observed distribution with the assumption that the observed data represented long-term BMP efficiencies. The simulated distribution matched the observed distribution well with only small total predictive uncertainties. With additional data, the same method can be used to further improve the simulation results. The modeling framework and results of this study, which can be adopted in hydrologic/water quality models to better represent long-term BMP effectiveness, can help improve decision support systems for creating long-term stormwater management strategies for watershed management projects.

  10. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights:
    • We develop a method which bi-converts between CAD models and primitive solids.
    • This method was improved from a conversion method between CAD models and half spaces.
    • This method was tested with the ITER model, validating its correctness and efficiency.
    • This method was integrated in SuperMC and can generate models for SuperMC and Geant4.
    Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time consuming and error prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD tools, an automatic modeling method for accurate, prompt conversion between CAD models and primitive solids is needed. An automatic modeling method for Monte Carlo geometry described by primitive solids was developed which can bi-convert between a CAD model and a Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model was decomposed into several convex solid sets, and the corresponding primitive solids were then generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids were created and the related operations were performed. This method was integrated in the SuperMC and was benchmarked with the ITER benchmark model. The correctness and efficiency of this method were demonstrated.

  11. Modelling viscoacoustic wave propagation with the lattice Boltzmann method.

    Science.gov (United States)

    Xia, Muming; Wang, Shucheng; Zhou, Hui; Shan, Xiaowen; Chen, Hanming; Li, Qingqing; Zhang, Qingchen

    2017-08-31

    In this paper, the lattice Boltzmann method (LBM) is employed to simulate wave propagation in viscous media. The LBM is a microscopic method that models waves by tracking the evolution states of a large number of discrete particles. By choosing different relaxation times in LBM experiments and using the spectrum-ratio method, we can reveal the relationship between the quality factor Q and the relaxation parameter τ in the LBM. A two-dimensional (2D) homogeneous model and a two-layered model are tested in the numerical experiments, and the LBM results are compared against the reference solution of the viscoacoustic equations based on the Kelvin-Voigt model calculated by the finite difference method (FDM). The wavefields and amplitude spectra obtained by the LBM coincide with those by the FDM, which demonstrates the capability of the LBM with one relaxation time. The new scheme is relatively simple and efficient to implement compared with traditional lattice methods. In addition, through a mass of experiments, we find that the relaxation time of the LBM has a quantitative relationship with Q. Such a novel scheme offers an alternative forward modelling kernel for seismic inversion and a new model to describe the underground media.
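
    The spectrum-ratio step can be shown compactly: for a travel time t, the log spectral ratio of an attenuated to a reference spectrum is linear in frequency with slope -πt/Q. The sketch below is a generic illustration with synthetic spectra, not the paper's LBM experiment; all values are hypothetical.

        import numpy as np

        def q_spectral_ratio(freq, amp_ref, amp_att, travel_time):
            """Q from the spectrum-ratio method:
            ln(A_att / A_ref) = const - pi * f * t / Q."""
            slope, _ = np.polyfit(freq, np.log(amp_att / amp_ref), 1)
            return -np.pi * travel_time / slope

        f = np.linspace(5.0, 60.0, 50)             # frequency band (Hz)
        t, Q_true = 0.8, 50.0                      # travel time (s), target Q
        a_ref = np.exp(-f / 30.0)                  # arbitrary source spectrum
        a_att = a_ref * np.exp(-np.pi * f * t / Q_true)
        print(q_spectral_ratio(f, a_ref, a_att, t))   # approximately 50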

  12. MANIKIN DEMONSTRATION IN TEACHING CONSERVATIVE MANAGEMENT OF POSTPARTUM HAEMORRHAGE: A COMPARISON WITH CONVENTIONAL METHODS

    Directory of Open Access Journals (Sweden)

    Sathi Mangalam Saraswathi

    2016-07-01

    BACKGROUND Even though there are many innovative methods to make classes more interesting and effective, in my department topics are taught mainly by didactic lectures. This study attempts to compare the effectiveness of manikin demonstration and didactic lectures in teaching conservative management of postpartum haemorrhage. OBJECTIVE To compare the effectiveness of manikin demonstration and didactic lectures in teaching conservative management of postpartum haemorrhage. MATERIALS AND METHODS This is an observational study. Eighty-four ninth-semester MBBS students posted in the Department of Obstetrics and Gynaecology, Government Medical College, Kottayam were selected. They were divided into two groups by the lottery method. A pre-test was conducted for both groups. Group A was taught by manikin demonstration; group B was taught by didactic lecture. Feedback responses collected from the students after the demonstration class were analysed. A post-test was conducted for both groups after one week. The gain in knowledge of each group was calculated from the pre-test and post-test scores and compared using an independent-samples t-test. RESULTS The mean gain in knowledge in group A was 6.4 compared to 4.3 in group B, and the difference was found to be statistically significant. All of the students in group A felt satisfied and more confident after the class and wanted more topics to be taught by demonstration. CONCLUSION The manikin demonstration class is more effective in teaching conservative management of postpartum haemorrhage, and this method can be adopted to teach similar topics in clinical subjects.
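
    The reported comparison of knowledge gains is an independent-samples t-test on post-test minus pre-test scores. A minimal sketch with hypothetical gain scores (the study's raw data are not given in the abstract):

        import numpy as np
        from scipy import stats

        gain_demo = np.array([7, 6, 8, 5, 7, 6, 6, 8, 7, 5], float)     # group A
        gain_lecture = np.array([4, 5, 3, 5, 4, 4, 6, 3, 5, 4], float)  # group B

        t_stat, p_value = stats.ttest_ind(gain_demo, gain_lecture)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")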

  13. Acoustic 3D modeling by the method of integral equations

    Science.gov (United States)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2018-02-01

    This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. A tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations and to parallelize across multiple sources. Practical examples and efficiency tests are presented as well.

  14. Application of model-based and knowledge-based measuring methods as analytical redundancy

    International Nuclear Information System (INIS)

    Hampel, R.; Kaestner, W.; Chaker, N.; Vandreier, B.

    1997-01-01

    The safe operation of nuclear power plants requires the application of modern and intelligent methods of signal processing for normal operation as well as for the management of accident conditions. Such modern and intelligent methods are model-based and knowledge-based ones, founded on analytical knowledge (mathematical models) as well as experience (fuzzy information). In addition to the existing hardware redundancies, analytical redundancies will be established with the help of these modern methods. These analytical redundancies support the operating staff during decision-making. The design of a hybrid model-based and knowledge-based measuring method is demonstrated by the example of a fuzzy-supported observer, in which a classical linear observer is combined with a fuzzy-supported adaptation of the model matrices of the observer model. This application is realized for the estimation of non-measurable variables, such as steam content and mixture level, within pressure vessels containing a water-steam mixture during accidental depressurizations. For this example, the existing non-linearities are classified and the verification of the model is explained. The advantages of the hybrid method in comparison to classical model-based measuring methods are demonstrated by the estimation results. Considering the parameters that have an important influence on the non-linearities requires the inclusion of high-dimensional fuzzy-logic structures within the model-based measuring methods. Methods are therefore presented which allow the conversion of these high-dimensional structures into two-dimensional structures of fuzzy logic. As an efficient solution to this problem, a method based on cascaded fuzzy controllers is presented. (author). 2 refs, 12 figs, 5 tabs

  15. Deterministic Method for Obtaining Nominal and Uncertainty Models of CD Drives

    DEFF Research Database (Denmark)

    Vidal, Enrique Sanchez; Stoustrup, Jakob; Andersen, Palle

    2002-01-01

    In this paper a deterministic method for obtaining the nominal and uncertainty models of the focus loop in a CD-player is presented, based on parameter identification and measurements in the focus loops of 12 actual CD drives that differ by having worst-case behaviors with respect to various properties. The method provides a systematic way to derive a nominal average model as well as a structured multiplicative input uncertainty model, and it is demonstrated how to apply mu-theory to design a controller, based on the models obtained, that meets certain robust performance criteria.

  16. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)

    2017-06-15

    For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using information on the underlying distribution, the sequential statistical modeling (SSM) approach, and the kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data points describing the fatigue strength coefficient of SAE 950X is used to demonstrate the performance of the obtained statistical models, which use a pre-determined number of experimental data, in predicting the probability of failure for a target fatigue life.
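
    As an illustration of using an area metric as a convergence criterion, the sketch below measures the area between empirical CDFs fitted with n and 2n samples and watches it shrink; the population and the stopping tolerance are hypothetical, and the paper's SSM/KDE model-building steps are not reproduced.

        import numpy as np

        def area_metric(sample_a, sample_b, npts=1000):
            """Area between the empirical CDFs of two samples."""
            grid = np.linspace(min(sample_a.min(), sample_b.min()),
                               max(sample_a.max(), sample_b.max()), npts)
            cdf_a = np.searchsorted(np.sort(sample_a), grid, side="right") / sample_a.size
            cdf_b = np.searchsorted(np.sort(sample_b), grid, side="right") / sample_b.size
            return np.sum(np.abs(cdf_a - cdf_b)) * (grid[1] - grid[0])

        rng = np.random.default_rng(3)
        population = rng.normal(100.0, 15.0, 10_000)   # surrogate material property
        for n in (5, 10, 20, 40, 80):
            print(n, round(area_metric(population[:n], population[:2 * n]), 3))
        # Stop collecting data once the metric falls below a chosen tolerance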

  17. Background field method for nonlinear σ-model in stochastic quantization

    International Nuclear Information System (INIS)

    Nakazawa, Naohito; Ennyu, Daiji

    1988-01-01

    We formulate the background field method for the nonlinear σ-model in stochastic quantization. We demonstrate a one-loop calculation for the two-dimensional nonlinear σ-model on a general Riemannian manifold based on our formulation. The formulation is consistent with the known results in ordinary quantization. As a simple application, we also analyse the multiplicative renormalization of the O(N) nonlinear σ-model. (orig.)

  18. A robust method for detecting nuclear materials when the underlying model is inexact

    International Nuclear Information System (INIS)

    Kump, Paul; Bai, Er-Wei; Chan, Kung-sik; Eichinger, William

    2013-01-01

    This paper is concerned with the detection and identification of nuclides from weak and poorly resolved gamma-ray energy spectra when the underlying model is not known exactly. The algorithm proposed and tested here pairs an exciting and relatively new model selection algorithm with the method of total least squares. Gamma-ray counts are modeled as Poisson processes, where the average part is taken to be the model and the difference between the observed gamma-ray counts and the model is considered random noise. Physics provides a template for the model, but we add uncertainty to this template to simulate real-life conditions. Unlike most model selection algorithms, whose utilities are demonstrated asymptotically, our method emphasizes selection when data is fixed and finite (after all, detector data is undoubtedly finite). Simulation examples provided here demonstrate that the proposed algorithm performs well.
    Highlights:
    • Identification of nuclides in the presence of large noise/uncertainty.
    • The algorithm is based on a Poisson model.
    • The key idea is regularized total least squares.
    • The algorithms are tested and compared with existing methods
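
    The core computation, ordinary (unregularized) total least squares for a template-fitting problem with errors in both the template and the counts, fits in a few lines. This is only the TLS building block, not the paper's regularized model selection algorithm; the templates and noise levels are hypothetical.

        import numpy as np

        def total_least_squares(A, b):
            """Solve A x ~ b when both A and b are noisy: take the right
            singular vector of [A | b] with the smallest singular value."""
            n = A.shape[1]
            _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
            v = Vt[-1]
            return -v[:n] / v[n]

        rng = np.random.default_rng(7)
        A_true = rng.uniform(0.0, 1.0, size=(200, 2))    # two spectral templates
        x_true = np.array([3.0, 0.5])                    # nuclide intensities
        b = A_true @ x_true + 0.05 * rng.normal(size=200)
        A_noisy = A_true + 0.02 * rng.normal(size=A_true.shape)  # inexact model
        print(total_least_squares(A_noisy, b))           # close to [3.0, 0.5]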

  19. A demonstration of adjoint methods for multi-dimensional remote sensing of the atmosphere and surface

    Science.gov (United States)

    Martin, William G. K.; Hasekamp, Otto P.

    2018-01-01

    In previous work, we derived the adjoint method as a computationally efficient path to three-dimensional (3D) retrievals of clouds and aerosols. In this paper we will demonstrate the use of adjoint methods for retrieving two-dimensional (2D) fields of cloud extinction. The demonstration uses a new 2D radiative transfer solver (FSDOM). This radiation code was augmented with adjoint methods to allow efficient derivative calculations needed to retrieve cloud and surface properties from multi-angle reflectance measurements. The code was then used in three synthetic retrieval studies. Our retrieval algorithm adjusts the cloud extinction field and surface albedo to minimize the measurement misfit function with a gradient-based, quasi-Newton approach. At each step we compute the value of the misfit function and its gradient with two calls to the solver FSDOM. First we solve the forward radiative transfer equation to compute the residual misfit with measurements, and second we solve the adjoint radiative transfer equation to compute the gradient of the misfit function with respect to all unknowns. The synthetic retrieval studies verify that adjoint methods are scalable to retrieval problems with many measurements and unknowns. We can retrieve the vertically-integrated optical depth of moderately thick clouds as a function of the horizontal coordinate. It is also possible to retrieve the vertical profile of clouds that are separated by clear regions. The vertical profile retrievals improve for smaller cloud fractions. This leads to the conclusion that cloud edges actually increase the amount of information that is available for retrieving the vertical profile of clouds. However, to exploit this information one must retrieve the horizontally heterogeneous cloud properties with a 2D (or 3D) model. This prototype shows that adjoint methods can efficiently compute the gradient of the misfit function. This work paves the way for the application of similar methods to 3D remote

  20. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    Science.gov (United States)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but which, when combined, have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability, and that further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but that, when selected near a smooth local minimum, can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergistic effect is greater than the sum of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades to an ordinary evolutionary strategy, at worst, if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.). Towards a new evolutionary computation. Advances in estimation of
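
    The surrogate pre-screening idea can be sketched with a deliberately simple evolution strategy: fit a cheap separable quadratic surrogate to the archive of past runs, rank each new population with it, and send only the most promising fraction to the expensive model. This is far simpler than CMAES with adaptive ranking, and the objective below is a hypothetical stand-in for a watershed model.

        import numpy as np

        def expensive(x):            # stand-in for one watershed model run
            return np.sum((x - 1.2) ** 2) + 0.1 * np.sum(np.sin(5 * x) ** 2)

        def es_with_surrogate(dim=6, lam=12, mu=4, sigma=0.5, gens=60, frac=0.5):
            rng = np.random.default_rng(0)
            mean = np.zeros(dim)
            xs, fs = [], []                      # archive of evaluated points
            for _ in range(gens):
                pop = mean + sigma * rng.normal(size=(lam, dim))
                if len(fs) > 2 * dim:            # surrogate needs some history
                    X, f = np.array(xs), np.array(fs)
                    feats = np.column_stack([X, X ** 2, np.ones(len(X))])
                    coef, *_ = np.linalg.lstsq(feats, f, rcond=None)
                    guess = np.column_stack([pop, pop ** 2, np.ones(lam)]) @ coef
                    pop = pop[np.argsort(guess)][: max(mu, int(frac * lam))]
                fit = np.array([expensive(x) for x in pop])   # actual model runs
                xs += list(pop); fs += list(fit)
                mean = pop[np.argsort(fit)[:mu]].mean(axis=0)
                sigma *= 0.95                    # crude step-size decay
            return mean, len(fs)

        best, runs = es_with_surrogate()
        print(best.round(2), "model runs:", runs)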

  1. Smart grid demonstrators and experiments in France: Economic assessments of smart grids. Challenges, methods, progress status and demonstrators; Contribution of 'smart grid' demonstrators to electricity transport and market architectures; Challenges and contributions of smart grid demonstrators to the distribution network. Focus on the integration of decentralised production; Challenges and contributions of smart grid demonstrators to the evolution of providing-related professions and to consumption practices

    International Nuclear Information System (INIS)

    Sudret, Thierry; Belhomme, Regine; Nekrassov, Andrei; Chartres, Sophie; Chiappini, Florent; Drouineau, Mathilde; Hadjsaid, Nouredine; Leonard, Cedric; Bena, Michel; Buhagiar, Thierry; Lemaitre, Christian; Janssen, Tanguy; Guedou, Benjamin; Viana, Maria Sebastian; Malarange, Gilles; Hadjsaid, Nouredine; Petit, Marc; Lehec, Guillaume; Jahn, Rafael; Gehain, Etienne

    2015-01-01

    This publication proposes a set of four articles which give an overview of the challenges and contributions of smart grid demonstrators for the French electricity system, according to different perspectives and different stakeholders. These articles present the first lessons learned from these demonstrators in terms of technical and technological innovations, of business and regulation models, and of customer behaviour and acceptance. More precisely, the authors discuss economic assessments of smart grids with an overview of challenges, methods, progress status and existing smart grid programs in the world, comment on the importance of introducing intelligence at the hardware, software and market levels, highlight the challenges and contributions of smart grids for the integration of decentralised production, and discuss how smart grid demonstrators impact providing-related professions and customer consumption practices

  2. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    Science.gov (United States)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
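
    A minimal sketch of the augmentation idea described above, under stated assumptions: a straight-line fit plays the role of the predetermined parametric model, a plain Nadaraya-Watson kernel smoother stands in for the locally parametric residual fit, and the mixing portion lam is fixed here rather than chosen data-adaptively as in MRR.

```python
import numpy as np

def kernel_smooth(x, x_train, r_train, h=0.1):
    # Nadaraya-Watson smoother of the residuals (the "locally parametric" part is
    # simplified to plain kernel regression for brevity).
    w = np.exp(-0.5 * ((x[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ r_train) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 80)
y = 2 + 3 * x + 0.3 * np.sin(6 * x) + 0.05 * rng.standard_normal(80)  # truth bends away from a line

# Step 1: predetermined parametric model (here: a straight line).
coef = np.polyfit(x, y, 1)
y_par = np.polyval(coef, x)

# Step 2: nonparametric fit to the residuals of the parametric fit.
y_res = kernel_smooth(x, x, y - y_par)

# Step 3: augment the parametric fit by a portion lam of the residual fit.
lam = 0.7  # in MRR this mixing parameter is chosen data-adaptively; fixed here
y_mrr = y_par + lam * y_res

print(np.std(y - y_par), np.std(y - y_mrr))  # the augmented fit tracks the data more closely
```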

  3. A modeling method of semiconductor fabrication flows with extended knowledge hybrid Petri nets

    Institute of Scientific and Technical Information of China (English)

    Zhou Binghai; Jiang Shuyu; Wang Shijin; Wu Bin

    2008-01-01

    A modeling method of extended knowledge hybrid Petri nets (EKHPNs), incorporating object-oriented methods into hybrid Petri nets (HPNs), was presented and used for the representation and modeling of semiconductor wafer fabrication flows. To model the discrete and continuous parts of a complex semiconductor wafer fabrication flow, the HPNs were introduced into the EKHPNs. Object-oriented methods were incorporated into the EKHPNs to cope with the complexity of the fabrication flow. Knowledge annotations were introduced to solve input and output conflicts of the EKHPNs. Finally, to demonstrate the validity of the EKHPN method, a real semiconductor wafer fabrication case was used to illustrate the modeling procedure. The modeling results indicate that the proposed method can be used to model a complex semiconductor wafer fabrication flow expediently.

  4. Error analysis in predictive modelling demonstrated on mould data.

    Science.gov (United States)

    Baranyi, József; Csernus, Olívia; Beczner, Judit

    2014-01-17

    The purpose of this paper was to develop a predictive model for the effect of temperature and water activity on the growth rate of Aspergillus niger and to determine the sources of the error when the model is used for prediction. Parallel mould growth curves, derived from the same spore batch, were generated and fitted to determine their growth rate. The variances of replicate ln(growth-rate) estimates were used to quantify the experimental variability, inherent to the method of determining the growth rate. The environmental variability was quantified by the variance of the respective means of replicates. The idea is analogous to the "within group" and "between groups" variability concepts of ANOVA procedures. A (secondary) model, with temperature and water activity as explanatory variables, was fitted to the natural logarithm of the growth rates determined by the primary model. The model error and the experimental and environmental errors were ranked according to their contribution to the total error of prediction. Our method can readily be applied to analysing the error structure of predictive models of bacterial growth, too.
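
    The within/between variance decomposition described above reduces to a few lines of code. The numbers below are invented for illustration; rows play the role of environmental conditions (temperature-water activity combinations) and columns are replicate growth curves from the same spore batch.

```python
import numpy as np

# ln(growth-rate) estimates: rows = environmental conditions, columns = replicate curves
# (hypothetical numbers; the paper's data are for Aspergillus niger).
ln_mu = np.array([[0.51, 0.48, 0.55],
                  [0.83, 0.79, 0.86],
                  [1.02, 1.10, 1.05]])

# Experimental variability: variance of replicates within each condition ("within group").
var_exp = ln_mu.var(axis=1, ddof=1).mean()

# Environmental variability: variance of the condition means ("between groups").
var_env = ln_mu.mean(axis=1).var(ddof=1)

print(f"experimental variance: {var_exp:.4f}")
print(f"environmental variance: {var_env:.4f}")
```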

  5. 3D virtual human rapid modeling method based on top-down modeling mechanism

    Directory of Open Access Journals (Sweden)

    LI Taotao

    2017-01-01

    Full Text Available Aiming to satisfy the vast demand for custom-made 3D virtual human characters and the need for rapid modeling in the field of 3D virtual reality, a new top-down rapid modeling method for virtual humans is put forward in this paper, based on a systematic analysis of the current state and shortcomings of virtual human modeling technology. After the top-level design of the virtual human hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved gradually downwards, level by level. While the relationships of connectors and mapping constraints among different modules are established, the definition of the size and texture parameters is also completed. A standardized process is meanwhile produced to support and adapt the practical operation of top-down rapid modeling of virtual humans. Finally, a modeling application, which takes a Chinese captain character as an example, is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides a new concept for the geometric and texture modeling of 3D virtual humans.

  6. Experimental modeling methods in Industrial Engineering

    Directory of Open Access Journals (Sweden)

    Peter Trebuňa

    2009-03-01

    Full Text Available Dynamic approaches to management in present industrial practice force businesses to address the continuous in-house improvement of production and non-production processes. Experience has repeatedly demonstrated the need for a systems approach not only in analysis but also in the planning and actual implementation of these processes. This contribution therefore focuses on describing modeling in industrial practice from a systems perspective, in order to avoid erroneous decisions at the implementation phase and thus prevent prolonged reliance on "trial and error" methods.

  7. Construction of dynamic model of CANDU-SCWR using moving boundary method

    International Nuclear Information System (INIS)

    Sun Peiwei; Jiang Jin; Shan Jianqiang

    2011-01-01

    Highlights: → A dynamic model of a CANDU-SCWR is developed. → The advantages of the moving boundary method are demonstrated. → The dynamic behaviours of the CANDU-SCWR are obtained by simulation. → The model can predict the dynamic behaviours of the CANDU-SCWR. → Linear dynamic models for the CANDU-SCWR are derived by system identification techniques. - Abstract: The CANDU-SCWR (Supercritical Water-Cooled Reactor) is one type of Generation IV reactor being developed in Canada. Its dynamic characteristics differ from those of existing CANDU reactors due to the supercritical conditions of the coolant. To study the behaviour of such reactors under disturbances and to design adequate control systems, it is essential to have an accurate dynamic model of the reactor. A dynamic model is developed for the CANDU-SCWR in this paper. In the model construction process, three regions have been considered: Liquid Region I, Liquid Region II and the Vapour Region, depending on the bulk and wall temperatures being higher or lower than the pseudo-critical temperature. A moving boundary method is used to describe the movement of boundaries across these regions. Some benefits of adopting the moving boundary method are illustrated by comparison with a fixed boundary method. The results of the steady-state simulation based on the developed model agree well with the design parameters. The transient simulations demonstrate that the model can predict the dynamic behaviours of the CANDU-SCWR. Furthermore, to investigate the responses of the reactor to small-amplitude perturbations and to facilitate control system design, a least-squares based system identification technique is used to obtain a set of linear dynamic models around the design point. The responses based on the linear dynamic models are validated against simulation results from the nonlinear CANDU-SCWR dynamic model.

  8. Explicitly represented polygon wall boundary model for the explicit MPS method

    Science.gov (United States)

    Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori

    2015-05-01

    This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong-form partial differential equations. The ERP model expresses wall boundaries as polygons, which are explicitly represented without using a distance function. The boundary conditions are derived so that, for viscous fluids and at low computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, results obtained by other models, and experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the ERP model within the E-MPS method.

  9. Calibration of complex models through Bayesian evidence synthesis: a demonstration and tutorial

    Science.gov (United States)

    Jackson, Christopher; Jit, Mark; Sharples, Linda; DeAngelis, Daniela

    2016-01-01

    Summary Decision-analytic models must often be informed using data which are only indirectly related to the main model parameters. The authors outline how to implement a Bayesian synthesis of diverse sources of evidence to calibrate the parameters of a complex model. A graphical model is built to represent how observed data are generated from statistical models with unknown parameters, and how those parameters are related to quantities of interest for decision-making. This forms the basis of an algorithm to estimate a posterior probability distribution, which represents the updated state of evidence for all unknowns given all data and prior beliefs. This process calibrates the quantities of interest against data and, at the same time, propagates all parameter uncertainties to the results used for decision-making. To illustrate these methods, the authors demonstrate how a previously developed Markov model for the progression of human papillomavirus (HPV16) infection was rebuilt in a Bayesian framework. Transition probabilities between states of disease severity are inferred indirectly from cross-sectional observations of the prevalence of HPV16 and HPV16-related disease by age, cervical cancer incidence, and other published information. Previously, a discrete collection of plausible scenarios had been identified, but with no further indication of which of these were more plausible. Instead, the authors derive a Bayesian posterior distribution, in which scenarios are implicitly weighted according to how well they are supported by the data. In particular, the authors emphasise the appropriate choice of prior distributions and the checking and comparison of fitted models. PMID:23886677
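
    A toy sketch of the calibration idea, not the authors' HPV model: a single parameter theta enters the likelihood only through an assumed link to an observable prevalence, and a simple Metropolis sampler produces the posterior that implicitly weights parameter values by how well they are supported by the indirect data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy version of evidence synthesis: the model parameter theta (e.g. a transition
# probability) is observed only indirectly, through binomial prevalence data.
n_obs, k_obs = 200, 68                             # hypothetical cross-sectional survey
prevalence = lambda theta: theta / (theta + 0.1)   # assumed link from theta to prevalence

def log_post(theta):
    # Flat prior on (0, 1); binomial log-likelihood of the indirect observation.
    if not 0 < theta < 1:
        return -np.inf
    p = prevalence(theta)
    return k_obs * np.log(p) + (n_obs - k_obs) * np.log(1 - p)

theta, samples = 0.5, []
for _ in range(20000):
    prop = theta + 0.05 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

post = np.array(samples[5000:])
print(post.mean(), np.quantile(post, [0.025, 0.975]))  # calibrated theta with uncertainty
```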

  10. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random-walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric-mean method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
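
    A compact illustration of the contrast drawn above, under stated assumptions (a conjugate normal toy model, not the groundwater application): power posteriors are sampled at several heating coefficients beta, and the marginal likelihood follows by integrating the mean log-likelihood over beta.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(0.5, 1.0, 50)  # data; model: y ~ N(mu, 1), prior mu ~ N(0, 1)

def log_lik(mu):
    return -0.5 * np.sum((y - mu) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)

def power_posterior_mean_loglik(beta, n_iter=20000):
    # MCMC targeting prior * likelihood**beta; beta=0 samples the prior (random walk
    # in prior space), beta=1 is the conventional posterior run.
    mu, acc = 0.0, []
    lp = lambda m: -0.5 * m**2 + beta * log_lik(m)
    for _ in range(n_iter):
        prop = mu + 0.5 * rng.standard_normal()
        if np.log(rng.uniform()) < lp(prop) - lp(mu):
            mu = prop
        acc.append(log_lik(mu))
    return np.mean(acc[n_iter // 4:])  # discard burn-in

# Thermodynamic integration: ln p(y) = integral over beta of E_beta[ln L].
betas = np.linspace(0, 1, 11) ** 3  # concentrate temperatures near beta = 0
e_loglik = [power_posterior_mean_loglik(b) for b in betas]
ln_marginal = np.trapz(e_loglik, betas)
print(ln_marginal)
```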

  11. Novel extrapolation method in the Monte Carlo shell model

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2010-01-01

    We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of 56Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g9/2-shell calculation of 64Ge.
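
    The extrapolation step itself is simple curve fitting: energy estimates from successively better approximations are fitted as a function of the energy variance and evaluated at zero variance, where the exact eigenstate lies. The numbers below are invented for illustration only.

```python
import numpy as np

# Hypothetical sequence of MCSM approximations: as more basis states are added,
# the energy E decreases and the energy variance <H^2> - <H>^2 shrinks.
variance = np.array([4.1, 2.6, 1.5, 0.8, 0.4])                 # MeV^2
energy   = np.array([-200.1, -200.9, -201.5, -201.9, -202.1])  # MeV

# Fit E as a (quadratic) function of the variance and extrapolate to zero variance.
coef = np.polyfit(variance, energy, 2)
e_extrapolated = np.polyval(coef, 0.0)
print(f"extrapolated eigenvalue: {e_extrapolated:.2f} MeV")
```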

  12. Demonstration of a forward iterative method to reconstruct brachytherapy seed configurations from x-ray projections

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Martin J; Todor, Dorin A [Department of Radiation Oncology, Virginia Commonwealth University, Richmond VA 23298 (United States)

    2005-06-07

    By monitoring brachytherapy seed placement and determining the actual configuration of the seeds in vivo, one can optimize the treatment plan during the process of implantation. Two or more radiographic images from different viewpoints can in principle allow one to reconstruct the configuration of implanted seeds uniquely. However, the reconstruction problem is complicated by several factors: (1) the seeds can overlap and cluster in the images; (2) the images can have distortion that varies with viewpoint when a C-arm fluoroscope is used; (3) there can be uncertainty in the imaging viewpoints; (4) the angular separation of the imaging viewpoints can be small owing to physical space constraints; (5) there can be inconsistency in the number of seeds detected in the images; and (6) the patient can move while being imaged. We propose and conceptually demonstrate a novel reconstruction method that handles all of these complications and uncertainties in a unified process. The method represents the three-dimensional seed and camera configurations as parametrized models that are adjusted iteratively to conform to the observed radiographic images. The morphed model seed configuration that best reproduces the appearance of the seeds in the radiographs is the best estimate of the actual seed configuration. All of the information needed to establish both the seed configuration and the camera model is derived from the seed images without resort to external calibration fixtures. Furthermore, by comparing overall image content rather than individual seed coordinates, the process avoids the need to establish correspondence between seed identities in the several images. The method has been shown to work robustly in simulation tests that simultaneously allow for unknown individual seed positions, uncertainties in the imaging viewpoints and variable image distortion.

Dynamic systems models: new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  14. Vibration-Based Damage Diagnosis in a Laboratory Cable-Stayed Bridge Model via an RCP-ARX Model Based Method

    International Nuclear Information System (INIS)

    Michaelides, P G; Apostolellis, P G; Fassois, S D

    2011-01-01

    Vibration-based damage detection and identification in a laboratory cable-stayed bridge model is addressed under inherent, environmental, and experimental uncertainties. The problem is challenging as conventional stochastic methods face difficulties due to uncertainty underestimation. A novel method is formulated based on identified Random Coefficient Pooled ARX (RCP-ARX) representations of the dynamics and statistical hypothesis testing. The method benefits from the ability of RCP models to properly capture uncertainty. Its effectiveness is demonstrated via a high number of experiments under a variety of damage scenarios.

  15. Vibration-Based Damage Diagnosis in a Laboratory Cable-Stayed Bridge Model via an RCP-ARX Model Based Method

    Energy Technology Data Exchange (ETDEWEB)

    Michaelides, P G; Apostolellis, P G; Fassois, S D, E-mail: mixail@mech.upatras.gr, E-mail: fassois@mech.upatras.gr [Laboratory for Stochastic Mechanical Systems and Automation (SMSA), Department of Mechanical and Aeronautical Engineering, University of Patras, GR 265 00 Patras (Greece)

    2011-07-19

    Vibration-based damage detection and identification in a laboratory cable-stayed bridge model is addressed under inherent, environmental, and experimental uncertainties. The problem is challenging as conventional stochastic methods face difficulties due to uncertainty underestimation. A novel method is formulated based on identified Random Coefficient Pooled ARX (RCP-ARX) representations of the dynamics and statistical hypothesis testing. The method benefits from the ability of RCP models to properly capture uncertainty. Its effectiveness is demonstrated via a high number of experiments under a variety of damage scenarios.

  16. A blended continuous–discontinuous finite element method for solving the multi-fluid plasma model

    Energy Technology Data Exchange (ETDEWEB)

    Sousa, E.M., E-mail: sousae@uw.edu; Shumlak, U., E-mail: shumlak@uw.edu

    2016-12-01

    The multi-fluid plasma model represents electrons, multiple ion species, and multiple neutral species as separate fluids that interact through short-range collisions and long-range electromagnetic fields. The model spans a large range of temporal and spatial scales, which renders the model stiff and presents numerical challenges. To address the large range of timescales, a blended continuous and discontinuous Galerkin method is proposed, where the massive ion and neutral species are modeled using an explicit discontinuous Galerkin method while the electrons and electromagnetic fields are modeled using an implicit continuous Galerkin method. This approach is able to capture large-gradient ion and neutral physics like shock formation, while resolving high-frequency electron dynamics in a computationally efficient manner. The details of the Blended Finite Element Method (BFEM) are presented. The numerical method is benchmarked for accuracy and tested using a two-fluid one-dimensional soliton problem and an electromagnetic shock problem. The results are compared to conventional finite volume and finite element methods, and demonstrate that the BFEM is particularly effective in resolving physics in stiff problems involving realistic physical parameters, including realistic electron mass and speed of light. The benefit is illustrated by computing a three-fluid plasma application that demonstrates species separation in multi-component plasmas.

  17. Location, Location, Location! Demonstrating the Mnemonic Benefit of the Method of Loci

    Science.gov (United States)

    McCabe, Jennifer A.

    2015-01-01

    Classroom demonstrations of empirically supported learning and memory strategies have the potential to boost students' knowledge about their own memory and convince them to change the way they approach memory tasks in and beyond the classroom. Students in a "Human Learning and Memory" course learned about the "Method of Loci"…

  18. Presentation on the Modeling and Educational Demonstrations Laboratory Curriculum Materials Center (MEDL-CMC): A Working Model and Progress Report

    Science.gov (United States)

    Glesener, G. B.; Vican, L.

    2015-12-01

    Physical analog models and demonstrations can be effective educational tools for helping instructors teach abstract concepts in the Earth, planetary, and space sciences. Reducing the learning challenges for students using physical analog models and demonstrations, however, can often increase instructors' workload and budget, because the cost and time needed to produce and maintain such curriculum materials is substantial. First, this presentation describes a working model for the Modeling and Educational Demonstrations Laboratory Curriculum Materials Center (MEDL-CMC) to support instructors' use of physical analog models and demonstrations in the science classroom. The working model is based on a combination of instructional resource models developed by the Association of College & Research Libraries and by the Physics Instructional Resource Association. The MEDL-CMC aims to make the curriculum materials available for all science courses and outreach programs within the institution where the MEDL-CMC resides. The sustainability and value of the MEDL-CMC come from its ability to provide and maintain a variety of physical analog models and demonstrations in a wide range of science disciplines. Second, the presentation then reports on the development, progress, and future of the MEDL-CMC at the University of California Los Angeles (UCLA). Development of the UCLA MEDL-CMC was funded by a grant from UCLA's Office of Instructional Development and is supported by the Department of Earth, Planetary, and Space Sciences. Other UCLA science departments have recently shown interest in the UCLA MEDL-CMC services, and therefore, preparations are currently underway to increase our capacity for providing interdepartmental service. The presentation concludes with recommendations and suggestions for other institutions that wish to start their own MEDL-CMC in order to increase educational effectiveness and decrease instructor workload. We welcome an interuniversity collaboration to

  19. An alternative method for centrifugal compressor loading factor modelling

    Science.gov (United States)

    Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.

    2017-08-01

    The loading factor at the design point is calculated by one or another empirical formula in classical design methods; performance modelling as a whole is out of consideration. Test data from compressor stages demonstrate that the loading factor versus flow coefficient at the impeller exit has a linear character, independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function - the loading factor at the design point and at zero flow rate. The proper formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova had proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performances of thirteen stages of different types. Equations with universal empirical coefficients are proposed; the calculation error lies in the range of ±1.5%. This alternative model of loading factor performance is included in new versions of the Universal Modelling Method.

  20. Acting Locally: A Guide to Model, Community and Demonstration Forests.

    Science.gov (United States)

    Keen, Debbie Pella

    1993-01-01

    Describes Canada's efforts in sustainable forestry, which refers to management practices that ensure long-term health of forest ecosystems so that they can continue to provide environmental, social, and economic benefits. Describes model forests, community forests, and demonstration forests and lists contacts for each of the projects. (KS)

  1. Models and methods in thermoluminescence

    International Nuclear Information System (INIS)

    Furetta, C.

    2005-01-01

    This work presents a lecture on the principles of luminescence phenomena and the mathematical treatment of thermoluminescent light emission, covering the Randall-Wilkins model, the Garlick-Gibson model, the Adirovitch model, the May-Partridge model, the Braunlich-Scharman model, and mixed first- and second-order kinetics, as well as the methods for evaluating the kinetics parameters, such as the initial rise method, the various heating rates method, the isothermal decay method, and methods based on the analysis of the glow curve shape. (Author)

  2. Models and methods in thermoluminescence

    Energy Technology Data Exchange (ETDEWEB)

    Furetta, C. [ICN, UNAM, A.P. 70-543, Mexico D.F. (Mexico)

    2005-07-01

    This work presents a lecture on the principles of luminescence phenomena and the mathematical treatment of thermoluminescent light emission, covering the Randall-Wilkins model, the Garlick-Gibson model, the Adirovitch model, the May-Partridge model, the Braunlich-Scharman model, and mixed first- and second-order kinetics, as well as the methods for evaluating the kinetics parameters, such as the initial rise method, the various heating rates method, the isothermal decay method, and methods based on the analysis of the glow curve shape. (Author)

  3. R and D on automatic modeling methods for Monte Carlo codes FLUKA

    International Nuclear Information System (INIS)

    Wang Dianxi; Hu Liqin; Wang Guozhong; Zhao Zijia; Nie Fanzhi; Wu Yican; Long Pengcheng

    2013-01-01

    FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create the geometry models before calculation. However, it is time-consuming and error-prone to describe the geometry models manually. This study developed an automatic modeling method which could automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into the CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)

  4. Hybrid model based unified scheme for endoscopic Cerenkov and radio-luminescence tomography: Simulation demonstration

    Science.gov (United States)

    Wang, Lin; Cao, Xin; Ren, Qingyun; Chen, Xueli; He, Xiaowei

    2018-05-01

    Cerenkov luminescence imaging (CLI) is an imaging method that uses an optical imaging scheme to probe a radioactive tracer. Application of CLI with clinically approved radioactive tracers has opened an opportunity for translating optical imaging from preclinical to clinical applications. Such translation was further improved by developing an endoscopic CLI system. However, two-dimensional endoscopic imaging cannot identify accurate depth and obtain quantitative information. Here, we present an imaging scheme to retrieve the depth and quantitative information from endoscopic Cerenkov luminescence tomography, which can also be applied for endoscopic radio-luminescence tomography. In the scheme, we first constructed a physical model for image collection, and then a mathematical model for characterizing the luminescent light propagation from tracer to the endoscopic detector. The mathematical model is a hybrid light transport model combined with the 3rd order simplified spherical harmonics approximation, diffusion, and radiosity equations to warrant accuracy and speed. The mathematical model integrates finite element discretization, regularization, and primal-dual interior-point optimization to retrieve the depth and the quantitative information of the tracer. A heterogeneous-geometry-based numerical simulation was used to explore the feasibility of the unified scheme, which demonstrated that it can provide a satisfactory balance between imaging accuracy and computational burden.

  5. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of a deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and by the DUA method. The DUA method gives a more accurate result based upon only two model executions, compared to fifty executions in the statistical case.
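
    A sketch of the two propagation approaches compared above, using a simplified borehole-style flow function (an assumption; not the paper's exact sample problem): the DUA-style estimate needs only a handful of model runs for derivatives, while the statistical estimate needs many thousands of samples.

```python
import numpy as np

rng = np.random.default_rng(4)

def flow(p):
    # Simplified borehole-style flow model (illustrative, not the paper's test problem):
    # transmissivity T, head difference dH, radius ratio r.
    T, dH, r = p
    return 2 * np.pi * T * dH / np.log(r)

mean = np.array([80.0, 1000.0, 7.0])
std = np.array([8.0, 50.0, 0.5])

# Deterministic (DUA-style) propagation: first-order Taylor expansion using derivatives
# obtained from a few extra model runs (central differences).
eps = 1e-6
grad = np.array([(flow(mean + eps * np.eye(3)[i]) - flow(mean - eps * np.eye(3)[i])) / (2 * eps)
                 for i in range(3)])
var_dua = np.sum((grad * std) ** 2)  # assumes independent inputs

# Statistical propagation: brute-force Monte Carlo sampling for comparison.
samples = rng.normal(mean, std, size=(50000, 3))
var_mc = np.array([flow(s) for s in samples]).var()

print(f"DUA std: {np.sqrt(var_dua):.1f}, Monte Carlo std: {np.sqrt(var_mc):.1f}")
```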

  6. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

    A new method for model selection in the prediction of time series is proposed. Apart from the conventional criterion of minimizing the RMS error, the method also minimizes the error on the distribution of singularities, evaluated through the local Hölder estimates and their probability density spectrum. Predictions of two simulated and one real time series have been made using kernel principal component regression (KPCR), and the model parameters of KPCR have been selected employing the proposed as well as the conventional method. The results obtained demonstrate that the proposed method takes into account the sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of the unseen test data. (letter to the editor)

A Modeling Method of Fluttering Leaves Based on Point Cloud

    Science.gov (United States)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and falling-leaf models have wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time needs in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  8. Application of the simplex method of linear programming model to ...

    African Journals Online (AJOL)

    This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It also elucidated the effect that variation in the optimal result obtained from the linear programming model would have on any given firm. It was demonstrated ...
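
    A hedged sketch of the kind of profit-maximization LP described above, with invented coefficients (the Saclux Paint figures are not given here); scipy.optimize.linprog solves the standard minimization form, so the profit vector is negated.

```python
from scipy.optimize import linprog

# Hypothetical paint-mixing problem in the spirit of the case study: maximize profit
# from two paint products subject to raw material limits (all numbers invented).
profit = [-400, -300]           # linprog minimizes, so negate the profit per unit
A_ub = [[1, 2],                 # kg of pigment used per unit of product 1 and 2
        [3, 1]]                 # litres of solvent used per unit
b_ub = [40, 60]                 # available pigment and solvent

res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)          # optimal production plan and maximum profit
```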

  9. Spectral-element Method for 3D Marine Controlled-source EM Modeling

    Science.gov (United States)

    Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.

    2017-12-01

    As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite element method with the high accuracy of the spectral method. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As high-order complete orthogonal polynomials, the GLC polynomials exhibit exponential convergence. This helps derive the matrix elements analytically and improves the modeling accuracy. Numerical 1D models using SEM with different orders show that the SEM delivers accurate results; with increasing SEM order, the modeling accuracy improves markedly. Further, we compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that the SEM is more effective than the FD method: only when the mesh is sufficiently fine can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling that has significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).

  10. Developing energy forecasting model using hybrid artificial intelligence method

    Institute of Scientific and Technical Information of China (English)

    Shahram Mollaiy-Berneti

    2015-01-01

    An important problem in demand planning for energy consumption is developing an accurate energy forecasting model. In fact, it is not possible to allocate energy resources in an optimal manner without an accurate demand estimate. A new energy forecasting model was proposed based on a back-propagation (BP) neural network and the imperialist competitive algorithm. The proposed method offers the advantage of the local search ability of the BP technique and the global search ability of the imperialist competitive algorithm. Two types of empirical data, regarding energy demand (gross domestic product (GDP), population, import, export and energy demand) in Turkey from 1979 to 2005 and electricity demand (population, GDP, total revenue from exporting industrial products and electricity consumption) in Thailand from 1986 to 2010, were investigated to demonstrate the applicability and merits of the present method. The performance of the proposed model is found to be better than that of a conventional back-propagation neural network, with a lower mean absolute error.

  11. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    Full Text Available This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of the digital image. To perform this operation, corresponding points between the image and the point cloud must be found. Before the search for corresponding points, a quasi-image of the point cloud is generated. The SIFT algorithm is then applied to the quasi-image and the real image to find the corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is the construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image. Spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available. Edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.

  12. Huffman and linear scanning methods with statistical language models.

    Science.gov (United States)

    Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris

    2015-03-01

    Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
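
    The core of Huffman scanning is an ordinary Huffman tree built over symbol probabilities from the language model, with the two branches at each node corresponding to the two outcomes of the binary switch. A minimal sketch with invented probabilities:

```python
import heapq

# Toy Huffman scan-tree over a few symbols with language-model probabilities
# (invented numbers): frequent symbols need fewer switch activations.
probs = {'e': 0.40, 't': 0.25, 'a': 0.15, 'o': 0.12, '_': 0.08}

heap = [(p, i, sym) for i, (sym, p) in enumerate(probs.items())]
heapq.heapify(heap)
i = len(heap)
while len(heap) > 1:
    p1, _, left = heapq.heappop(heap)
    p2, _, right = heapq.heappop(heap)
    heapq.heappush(heap, (p1 + p2, i, (left, right)))  # merge two rarest subtrees
    i += 1
tree = heap[0][2]

def codes(node, prefix=""):
    # "0"/"1" correspond to the two switch outcomes (highlight accepted / rejected).
    if isinstance(node, str):
        return {node: prefix}
    out = codes(node[0], prefix + "0")
    out.update(codes(node[1], prefix + "1"))
    return out

print(codes(tree))  # frequent symbols get short codes, rare symbols longer ones
```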

  13. Development of a Terrestrial Modeling System: The China-wide Demonstration

    Science.gov (United States)

    Duan, Q.; Dai, Y.; Zheng, X.; Ye, A.; Chen, Z.; Shangguang, W.

    2010-12-01

    A terrestrial modeling system (TMS) is being developed at Beijing Normal University. The purposes of TMS are (1) to provide a land surface parameterization scheme fully capable of being coupled with climate and Earth system models of different scales; (2) to provide a standalone platform for simulation and prediction of land surface processes; and (3) to provide a platform for studying human-Earth system interactions. This system will build on and extend existing capabilities at BNU, including the Common Land Model (CoLM) system, high-resolution atmospheric forcing data sets, high-resolution soil and vegetation data sets, and high-performance computing facilities and software. This presentation describes the system design and demonstrates the initial capabilities of TMS in simulating water and energy fluxes over continental China for a multi-year period.

  14. Application of data assimilation methods for analysis and integration of observed and modeled Arctic Sea ice motions

    Science.gov (United States)

    Meier, Walter Neil

    This thesis demonstrates the applicability of data assimilation methods to improve observed and modeled ice motion fields and shows the effects of assimilated motion on Arctic processes important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but the effect of the assimilation is reduced due to noise in the SSM/I motions and is thus not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25--30% over modeled motions and 40--45% over SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model. This research presents an
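
    A one-dimensional sketch of the optimal interpolation update used in the thesis (illustrative covariances and values, not the actual Arctic configuration): the analysis weights the model background and the SSM/I observations by their respective error statistics.

```python
import numpy as np

# Blend a modeled ice-motion background with observations using error covariances.
x_b = np.array([3.0, 4.0, 5.0])        # background (model) ice speeds at 3 grid points, km/day
y = np.array([3.8, 4.9])               # observations at grid points 0 and 2
H = np.array([[1, 0, 0],
              [0, 0, 1]])              # observation operator: picks the observed grid points

B = 0.5 * np.exp(-np.abs(np.subtract.outer(range(3), range(3))) / 2.0)  # background error cov
R = 0.8 * np.eye(2)                    # observation error covariance (observations noisier here)

# Analysis update: x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^-1
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
print(x_a)  # analysis lies between model and observations, weighted by their errors
```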

  15. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling that builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

  16. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

    Full Text Available Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and falling-leaf models have wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time needs in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  17. A new teaching model for demonstrating the movement of the extraocular muscles.

    Science.gov (United States)

    Iwanaga, Joe; Refsland, Jason; Iovino, Lee; Holley, Gary; Laws, Tyler; Oskouian, Rod J; Tubbs, R Shane

    2017-09-01

    The extraocular muscles consist of the superior, inferior, lateral, and medial rectus muscles and the superior and inferior oblique muscles. This study aimed to create a new teaching model for demonstrating the function of the extraocular muscles. A coronal section of the head was prepared and sutures attached to the levator palpebrae superioris muscle and the six extraocular muscles. Tension was placed on each muscle from a posterior approach and the movement of the eye documented from an anterior view. All movements were clearly seen, with the exception of that of the inferior rectus muscle. To our knowledge, this is the first cadaveric teaching model for demonstrating the movements of the extraocular muscles. Clin. Anat. 30:733-735, 2017. © 2017 Wiley Periodicals, Inc.

  18. Nonuniform grid implicit spatial finite difference method for acoustic wave modeling in tilted transversely isotropic media

    KAUST Repository

    Chu, Chunlei; Stoffa, Paul L.

    2012-01-01

    sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced

  19. A miniature research vessel: A small-scale ocean-exploration demonstration of geophysical methods

    Science.gov (United States)

    Howell, S. M.; Boston, B.; Sleeper, J. D.; Cameron, M. E.; Togia, H.; Anderson, A.; Sigurdardottir, T. D.; Tree, J. P.

    2015-12-01

    Graduate student members of the University of Hawaii Geophysical Society have designed a small-scale model research vessel (R/V) that uses sonar to create 3D maps of a model seafloor in real time. A pilot project was presented to the public at the School of Ocean and Earth Science and Technology's (SOEST) Biennial Open House weekend in 2013 and, with financial support from the Society of Exploration Geophysicists and the National Science Foundation, was developed into a full exhibit for the same event in 2015. Nearly 8,000 people attended the two-day event, including children and teachers from Hawaii's schools, home school students, community groups, families, and science enthusiasts. Our exhibit demonstrates real-time sonar mapping of a cardboard volcano using a toy-size research vessel on a programmable two-dimensional model ship track suspended above a model seafloor. Ship waypoints were wirelessly sent from a Windows Surface tablet to a large touchscreen PC that controlled the exhibit. Sound wave travel times were recorded using an ultrasonic emitter/receiver attached to an Arduino microcontroller platform and streamed through a USB connection to the control PC running MatLab, where a 3D model was updated as the ship collected data. Our exhibit demonstrates the practical use of complicated concepts, like wave physics, survey design, and data processing, in a way that the youngest elementary students are able to understand. It provides an accessible avenue to learn about sonar mapping, and could easily be adapted to talk about bat and marine mammal echolocation by replacing the model ship and volcano. The exhibit received an overwhelmingly positive response from attendees and incited discussions that covered a broad range of earth science topics.

  20. Effect of Demonstration Method of Teaching on Students' Achievement in Agricultural Science

    Science.gov (United States)

    Daluba, Noah Ekeyi

    2013-01-01

    The study investigated the effect of demonstration method of teaching on students' achievement in agricultural science in secondary school in Kogi East Education Zone of Kogi State. Two research questions and one hypothesis guided the study. The study employed a quasi-experimental research design. The population for the study was 18225 senior…

  1. Statistical Method to Overcome Overfitting Issue in Rational Function Models

    Science.gov (United States)

    Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.

    2017-09-01

    Rational function models (RFMs) are known as one of the most appealing models, extensively applied in the geometric correction of satellite images and map production. Overfitting is a common issue in the case of terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study, a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical test, namely a significance test, is applied to search for the RFM parameters that are resistant against the overfitting issue. The performance of the proposed method was evaluated for two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. This technique, indeed, shows an improvement of 50-80% over TR.
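
    A hedged sketch of significance-test-based parameter selection in a least-squares setting (a stand-in for the full RFM adjustment, with simulated data): terms whose t-statistics fail the threshold are eliminated one at a time.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical mini-RFM: many candidate terms, only a few of them truly active.
X = rng.standard_normal((60, 8))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 0.8, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(60)

def ols_t_stats(X, y):
    # Least-squares fit plus t-statistics of the estimated coefficients.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    dof = len(y) - X.shape[1]
    sigma2 = np.sum((y - X @ beta) ** 2) / dof
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, beta / se

# Backward elimination by significance test: drop the least significant term
# until every remaining |t| clears the threshold.
keep = np.arange(X.shape[1])
while True:
    beta, t = ols_t_stats(X[:, keep], y)
    worst = int(np.argmin(np.abs(t)))
    if abs(t[worst]) >= 2.0 or len(keep) == 1:
        break
    keep = np.delete(keep, worst)

print("retained terms:", keep, "estimates:", np.round(beta, 2))
```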

  2. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    Science.gov (United States)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Second, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient of an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.

  3. ROBOT LEARNING OF OBJECT MANIPULATION TASK ACTIONS FROM HUMAN DEMONSTRATIONS

    Directory of Open Access Journals (Sweden)

    Maria Kyrarini

    2017-08-01

    Full Text Available Robot learning from demonstration is a method which enables robots to learn in a similar way to humans. In this paper, a framework that enables robots to learn from multiple human demonstrations via kinesthetic teaching is presented. The subject of learning is a high-level sequence of actions, as well as the low-level trajectories the robot must follow to perform the object manipulation task. The multiple human demonstrations are recorded, and only the most similar demonstrations are selected for robot learning. The high-level learning module identifies the sequence of actions of the demonstrated task. Using Dynamic Time Warping (DTW) and a Gaussian Mixture Model (GMM), the model of the demonstrated trajectories is learned. The learned trajectory is generated by Gaussian mixture regression (GMR) from the learned Gaussian mixture model. In the online working phase, the sequence of actions is identified and experimental results show that the robot performs the learned task successfully.
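
    A minimal sketch of the trajectory-learning core (GMM fit plus GMR reproduction), assuming scikit-learn and omitting the DTW alignment and action segmentation stages; the demonstrations are simulated one-dimensional trajectories.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Stack several noisy demonstrations of a 1-D trajectory x(t) (pretend these were
# recorded kinesthetically and already aligned with DTW).
t = np.tile(np.linspace(0, 1, 100), 5)
x = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
data = np.column_stack([t, x])

gmm = GaussianMixture(n_components=6, random_state=0).fit(data)

def gmr(t_query):
    # Gaussian mixture regression: condition each component on t and blend the
    # conditional means with responsibilities computed from the t-marginals.
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    h = np.array([w[k] * np.exp(-0.5 * (t_query - means[k, 0])**2 / covs[k, 0, 0])
                  / np.sqrt(covs[k, 0, 0]) for k in range(len(w))])
    h /= h.sum()
    cond = [means[k, 1] + covs[k, 1, 0] / covs[k, 0, 0] * (t_query - means[k, 0])
            for k in range(len(w))]
    return float(h @ np.array(cond))

print([round(gmr(tq), 2) for tq in (0.0, 0.25, 0.5, 0.75)])  # reproduced trajectory points
```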

  4. A mechanistic model for electricity consumption on dairy farms: Definition, validation, and demonstration

    NARCIS (Netherlands)

    Upton, J.R.; Murphy, M.; Shallo, L.; Groot Koerkamp, P.W.G.; Boer, de I.J.M.

    2014-01-01

    Our objective was to define and demonstrate a mechanistic model that enables dairy farmers to explore the impact of a technical or managerial innovation on electricity consumption, associated CO2 emissions, and electricity costs. We, therefore, (1) defined a model for electricity consumption on

  5. Analysis of selected structures for model-based measuring methods using fuzzy logic

    Energy Technology Data Exchange (ETDEWEB)

    Hampel, R.; Kaestner, W.; Fenske, A.; Vandreier, B.; Schefter, S. [Hochschule fuer Technik, Wirtschaft und Sozialwesen Zittau/Goerlitz (FH), Zittau (DE). Inst. fuer Prozesstechnik, Prozessautomatisierung und Messtechnik e.V. (IPM)

    2000-07-01

    Monitoring and diagnosis of safety-related technical processes in nuclear engineering can be improved with the help of intelligent methods of signal processing such as analytical redundancies. This chapter gives an overview of combined methods in the form of hybrid models using model-based measuring methods (observers) and knowledge-based methods (fuzzy logic). Three variants of hybrid observers (fuzzy-supported observer, hybrid observer with variable gain and hybrid non-linear operating point observer) are explained. As a result of the combination of analytical and fuzzy-based algorithms, a new quality of monitoring and diagnosis is achieved. The results are demonstrated in summary for the example of water level estimation within pressure vessels (pressurizer, steam generator, and Boiling Water Reactor) with a water-steam mixture during accidental depressurization. (orig.)

  6. Analysis of selected structures for model-based measuring methods using fuzzy logic

    International Nuclear Information System (INIS)

    Hampel, R.; Kaestner, W.; Fenske, A.; Vandreier, B.; Schefter, S.

    2000-01-01

    Monitoring and diagnosis of safety-related technical processes in nuclear engineering can be improved with the help of intelligent methods of signal processing such as analytical redundancies. This chapter gives an overview of combined methods in the form of hybrid models using model-based measuring methods (observers) and knowledge-based methods (fuzzy logic). Three variants of hybrid observers (fuzzy-supported observer, hybrid observer with variable gain and hybrid non-linear operating point observer) are explained. As a result of the combination of analytical and fuzzy-based algorithms, a new quality of monitoring and diagnosis is achieved. The results are demonstrated in summary for the example of water level estimation within pressure vessels (pressurizer, steam generator, and Boiling Water Reactor) with a water-steam mixture during accidental depressurization. (orig.)

  7. Comparison of methods for the analysis of relatively simple mediation models.

    Science.gov (United States)

    Rijnhart, Judith J M; Twisk, Jos W R; Chinapaw, Mai J M; de Boer, Michiel R; Heymans, Martijn W

    2017-09-01

    Statistical mediation analysis is an often-used method in trials to unravel the pathways underlying the effect of an intervention on a particular outcome variable. Throughout the years, several methods have been proposed, such as ordinary least squares (OLS) regression, structural equation modeling (SEM), and the potential outcomes framework. Most applied researchers do not know that these methods are mathematically equivalent when applied to mediation models with a continuous mediator and outcome variable. Therefore, the aim of this paper was to demonstrate the similarities between OLS regression, SEM, and the potential outcomes framework in three mediation models: 1) a crude model, 2) a confounder-adjusted model, and 3) a model with an interaction term for exposure-mediator interaction. Secondary data analysis of a randomized controlled trial that included 546 schoolchildren. In our data example, the mediator and outcome variable were both continuous. We compared the estimates of the total, direct and indirect effects, the proportion mediated, and 95% confidence intervals (CIs) for the indirect effect across OLS regression, SEM, and the potential outcomes framework. OLS regression, SEM, and the potential outcomes framework yielded the same effect estimates in the crude mediation model, the confounder-adjusted mediation model, and the mediation model with an interaction term for exposure-mediator interaction. Since OLS regression, SEM, and the potential outcomes framework yield the same results in three mediation models with a continuous mediator and outcome variable, researchers can continue using the method that is most convenient to them.
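
    A minimal sketch of the crude mediation model via OLS with simulated data (no confounders or exposure-mediator interaction): the indirect effect is the product of the a-path and b-path coefficients, and the same numbers would be recovered by SEM or the potential outcomes framework in this continuous-mediator, continuous-outcome case.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 546  # same size as the trial in the paper; the data here are simulated

x = rng.integers(0, 2, n).astype(float)         # intervention (0/1)
m = 0.5 * x + rng.standard_normal(n)            # continuous mediator
y = 0.3 * m + 0.2 * x + rng.standard_normal(n)  # continuous outcome

def ols(X_cols, y):
    # OLS with intercept; returns the coefficient vector.
    X = np.column_stack([np.ones(len(y))] + list(X_cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols([x], m)[1]            # a-path: exposure -> mediator
b = ols([x, m], y)[2]         # b-path: mediator -> outcome, adjusted for exposure
direct = ols([x, m], y)[1]    # direct effect of the exposure
total = ols([x], y)[1]        # total effect

indirect = a * b              # product-of-coefficients estimator
print(f"total={total:.3f} direct={direct:.3f} indirect={indirect:.3f} "
      f"proportion mediated={indirect/total:.2f}")
```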

  8. A qualitative model construction method of nuclear power plants for effective diagnostic knowledge generation

    International Nuclear Information System (INIS)

    Yoshikawa, Shinji; Endou, Akira; Kitamura, Yoshinobu; Sasajima, Munehiko; Ikeda, Mitsuru; Mizoguchi, Riichiro.

    1994-01-01

    This paper discusses a method to construct a qualitative model of a nuclear power plant in order to generate effective diagnostic knowledge. The proposed method is to prepare deep knowledge to be provided to a knowledge compiler based upon qualitative reasoning (QR). The necessity of knowledge compilation for nuclear plant diagnosis is explained first; the problems conventionally experienced in qualitative reasoning, together with a proposed method to overcome them, are presented next; and a sample procedure to build a qualitative nuclear plant model is then demonstrated. (author)

  9. Storm surge model based on variational data assimilation method

    Directory of Open Access Journals (Sweden)

    Shi-li Huang

    2010-06-01

    Full Text Available By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution, with the aim of improving the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variation-based model was developed and validated through data assimilation tests on an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. The actual storm surge induced by Typhoon 0515 was then forecast by the developed model, and the results demonstrate its efficiency in practical application.
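
    The variational idea can be illustrated with a zero-dimensional stand-in for the surge model (an assumption; the paper uses an unstructured-grid model), identifying the wind stress drag coefficient by minimizing a least-squares misfit against observations:

      import numpy as np
      from scipy.optimize import minimize_scalar

      def surge(cd, wind, dt=600.0, r=1e-4):
          """Crude single-point surge response: d(eta)/dt = cd*|W|*W - r*eta."""
          eta, out = 0.0, []
          for w in wind:
              eta += dt * (cd * abs(w) * w - r * eta)
              out.append(eta)
          return np.array(out)

      rng = np.random.default_rng(1)
      wind = 15.0 + 5.0 * np.sin(np.linspace(0, 6, 100))      # synthetic typhoon wind speed
      cd_true = 2.5e-6
      obs = surge(cd_true, wind) + rng.normal(0, 0.02, 100)   # noisy "observations"

      cost = lambda cd: np.sum((surge(cd, wind) - obs) ** 2)  # misfit functional J(cd)
      res = minimize_scalar(cost, bounds=(1e-7, 1e-5), method="bounded")
      print(f"identified Cd = {res.x:.2e} (true value {cd_true:.2e})")

    A full variational scheme would compute the gradient of J with an adjoint model rather than by a bounded scalar search, but the control-parameter identification step is the same.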

  10. New method for studying the microscopic foundations of the interacting boson model

    International Nuclear Information System (INIS)

    Klein, A.; Vallieres, M.

    1981-01-01

    We describe (i) a mapping, using a multishell seniority basis, from a prescribed subspace of a shell model space to an associated boson space; (ii) a new dynamical procedure for selecting the collective variables within the boson space, based on the invariance of the trace; and (iii) a comparison with exact calculations for a multi-level pairing model, to demonstrate that the method works. (orig.)

  11. Character expansion methods for matrix models of dually weighted graphs

    International Nuclear Information System (INIS)

    Kazakov, V.A.; Staudacher, M.; Wynter, T.

    1996-01-01

    We consider generalized one-matrix models in which external fields allow control over the coordination numbers on both the original and dual lattices. We rederive in a simple fashion a character expansion formula for these models originally due to Itzykson and Di Francesco, and then demonstrate how to take the large N limit of this expansion. The relationship to the usual matrix model resolvent is elucidated. Our methods give as a by-product an extremely simple derivation of the Migdal integral equation describing the large N limit of the Itzykson-Zuber formula. We illustrate and check our methods by analysing a number of models solvable by traditional means. We then proceed to solve a new model: a sum over planar graphs possessing even coordination numbers on both the original and the dual lattice. We conclude by formulating equations for the case of arbitrary sets of even, self-dual coupling constants. This opens the way for studying the deep problem of phase transitions from random to flat lattices. (orig.)

  12. Application of the dual reciprocity boundary element method for numerical modelling of solidification process

    Directory of Open Access Journals (Sweden)

    E. Majchrzak

    2008-12-01

    Full Text Available The dual reciprocity boundary element method is applied for numerical modelling of the solidification process. This variant of the BEM is connected with the transformation of the domain integral into boundary integrals. In the paper, the details of the dual reciprocity boundary element method are presented and the usefulness of this approach to solidification process modelling is demonstrated. In the final part of the paper, examples of computations are shown.

  13. Extrapolation method in the Monte Carlo Shell Model and its applications

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2011-01-01

    We demonstrate how the energy-variance extrapolation method works using the sequence of approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking 56Ni in the pf shell as an example. The extrapolation method is shown to work well even in cases where the MCSM shows slow convergence, such as 72Ge in the f5pg9 shell. The structure of 72Se is also studied, including a discussion of the shape-coexistence phenomenon.
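
    The extrapolation step itself is easy to sketch: fit the sequence of approximate energies against their energy variances and evaluate the fit at zero variance. The numbers below are invented stand-ins for MCSM output, not values from the paper:

      import numpy as np

      dE2 = np.array([0.40, 0.25, 0.15, 0.08, 0.04])           # energy variance <H^2> - <H>^2
      E = np.array([-203.1, -203.6, -203.9, -204.1, -204.2])   # approximate energies (MeV)

      # For good approximate wave functions the energy is nearly linear in the
      # variance, so a low-order fit extrapolated to dE2 = 0 estimates the
      # exact shell-model energy.
      coef = np.polyfit(dE2, E, 2)
      print(f"extrapolated energy at zero variance: {np.polyval(coef, 0.0):.2f} MeV")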

  14. Demonstration uncertainty/sensitivity analysis using the health and economic consequence model CRAC2

    International Nuclear Information System (INIS)

    Alpert, D.J.; Iman, R.L.; Johnson, J.D.; Helton, J.C.

    1985-01-01

    This paper summarizes a demonstration uncertainty/sensitivity analysis performed on the reactor accident consequence model CRAC2. The study was performed with uncertainty/sensitivity analysis techniques compiled as part of the MELCOR program. The principal objectives of the study were: 1) to demonstrate the use of the uncertainty/sensitivity analysis techniques on a health and economic consequence model, 2) to test the computer models which implement the techniques, 3) to identify possible difficulties in performing such an analysis, and 4) to explore alternative means of analyzing, displaying, and describing the results. Demonstration of the applicability of the techniques was the motivation for performing this study; thus, the results should not be taken as a definitive uncertainty analysis of health and economic consequences. Nevertheless, significant insights on health and economic consequence analysis can be drawn from the results of this type of study. Latin hypercube sampling (LHS), a modified Monte Carlo technique, was used in this study. LHS generates a multivariate input structure in which all the variables of interest are varied simultaneously and desired correlations between variables are preserved. LHS has been shown to produce estimates of output distribution functions that are comparable with the results of larger random samples.
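
    A minimal sketch of the LHS input generation in Python, using scipy's quasi-Monte Carlo module; the three input variables and their ranges are illustrative, not the CRAC2 parameter set:

      import numpy as np
      from scipy.stats import qmc

      sampler = qmc.LatinHypercube(d=3, seed=42)
      unit = sampler.random(n=100)              # 100 stratified samples in [0, 1)^3
      lo = np.array([0.1, 1e3, 0.5])            # illustrative lower bounds
      hi = np.array([2.0, 1e5, 5.0])            # illustrative upper bounds
      samples = qmc.scale(unit, lo, hi)         # multivariate input structure

      # Each row is one simultaneous setting of all inputs; running the
      # consequence model once per row yields the output sample used to
      # estimate distribution functions and sensitivities.
      print(samples[:3])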

  15. A systematic composite service design modeling method using graph-based theory.

    Science.gov (United States)

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    Composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy, as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides a mathematical representation of the components of service-oriented design using graph-based theory to facilitate design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate its complexity and reusability. This also guides future research towards design quality measurement, such as using the ComSDM method to measure the quality of composite service design in service-oriented software systems.

  16. Review of various dynamic modeling methods and development of an intuitive modeling method for dynamic systems

    International Nuclear Information System (INIS)

    Shin, Seung Ki; Seong, Poong Hyun

    2008-01-01

    Conventional static reliability analysis methods are inadequate for modeling dynamic interactions between the components of a system. Various techniques such as dynamic fault trees, dynamic Bayesian networks, and dynamic reliability block diagrams have been proposed for modeling dynamic systems based on improvements of the conventional modeling methods. In this paper, we review these methods briefly and introduce dynamic nodes to the existing Reliability Graph with General Gates (RGGG) as an intuitive method for modeling dynamic systems. For quantitative analysis, we use a discrete-time method to convert an RGGG to an equivalent Bayesian network and develop a software tool for the generation of probability tables.

  17. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  18. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke

    2017-01-01

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  19. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited, and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and the four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular, it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on method comparison and selection.

  20. Effectiveness of Demonstration and Lecture Methods in Learning Concept in Economics among Secondary School Students in Borno State, Nigeria

    Science.gov (United States)

    Muhammad, Amin Umar; Bala, Dauda; Ladu, Kolomi Mutah

    2016-01-01

    This study investigated the effectiveness of demonstration and lecture methods in learning concepts in economics among secondary school students in Borno state, Nigeria. Five objectives were set: to determine the effectiveness of the demonstration method in learning economics concepts among secondary school students in Borno state, determine the effectiveness…

  1. Demonstration Exercise of a Validated Sample Collection Method for Powders Suspected of Being Biological Agents in Georgia 2006

    International Nuclear Information System (INIS)

    Marsh, B.

    2007-01-01

    On August 7, 2006 the state of Georgia conducted a collaborative sampling exercise between the Georgia National Guard 4th Civil Support Team Weapons of Mass Destruction (CST-WMD) and the Georgia Department of Human Resources Division of Public Health, demonstrating a recently validated bulk powder sampling method. The exercise was hosted at the Federal Law Enforcement Training Center (FLETC) in Glynn County, Georgia and involved the participation of the Georgia Emergency Management Agency (GEMA), Georgia National Guard, Georgia Public Health Laboratories, the Federal Bureau of Investigation Atlanta Office, Georgia Coastal Health District, and the Glynn County Fire Department. The purpose of the exercise was to demonstrate a recently validated national sampling standard developed by ASTM International (formerly the American Society for Testing and Materials): ASTM E2458, "Standard Practice for Bulk Sample Collection and Swab Sample Collection of Visible Powders Suspected of Being Biological Agents from Nonporous Surfaces". The intent of the exercise was not to endorse the sampling method, but to develop a model for exercising new sampling methods in the context of existing standard operating procedures (SOPs) while strengthening operational relationships between response teams and analytical laboratories. The exercise required a sampling team to respond real-time to an incident across the state involving a clandestine bio-terrorism production lab found within a recreational vehicle (RV). Sample targets consisted of non-viable gamma-irradiated B. anthracis Sterne spores prepared by Dugway Proving Ground. Various spore concentration levels were collected by the ASTM method, followed by on- and off-scene analysis utilizing the Centers for Disease Control (CDC) Laboratory Response Network (LRN) and National Guard Bureau (NGB) CST mobile Analytical Laboratory Suite (ALS) protocols. Analytical results were compared and detailed surveys of participant evaluation comments were examined. I will

  2. A deep learning-based multi-model ensemble method for cancer prediction.

    Science.gov (United States)

    Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong

    2018-01-01

    Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
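
    A hedged sketch of the multi-model idea with scikit-learn: five base classifiers whose predicted probabilities are combined by a small neural network. The paper's own ensemble is a deep network trained on RNA-seq data selected by differential expression analysis, which is not reproduced here; the synthetic dataset is an assumption:

      from sklearn.datasets import make_classification
      from sklearn.ensemble import (GradientBoostingClassifier,
                                    RandomForestClassifier, StackingClassifier)
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.svm import SVC

      # Synthetic stand-in for an informative-gene expression matrix.
      X, y = make_classification(n_samples=600, n_features=50,
                                 n_informative=10, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

      base = [("rf", RandomForestClassifier(random_state=0)),
              ("gb", GradientBoostingClassifier(random_state=0)),
              ("lr", LogisticRegression(max_iter=1000)),
              ("knn", KNeighborsClassifier()),
              ("svm", SVC(probability=True, random_state=0))]
      meta = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf = StackingClassifier(estimators=base, final_estimator=meta,
                               stack_method="predict_proba")
      print("ensemble accuracy:", clf.fit(Xtr, ytr).score(Xte, yte))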

  3. Pellet Cladding Mechanical Interaction Modeling Using the Extended Finite Element Method

    Energy Technology Data Exchange (ETDEWEB)

    Spencer, Benjamin W.; Jiang, Wen; Dolbow, John E.; Peco, Christian

    2016-09-01

    As a brittle material, the ceramic UO2 used as light water reactor fuel experiences significant fracturing throughout its life, beginning with the first rise to power of fresh fuel. This has multiple effects on the thermal and mechanical response of the fuel/cladding system. One particularly important effect is that when there is mechanical contact between the fuel and cladding, cracks that extend from the outer surface of the fuel into its volume cause elevated stresses in the adjacent cladding, which can potentially lead to cladding failure. Modeling the thermal and mechanical response of the cladding in the vicinity of these surface-breaking cracks in the fuel can provide important insights into this behavior and help avoid operating conditions that could lead to cladding failure. Such modeling has traditionally been done in the context of finite-element-based fuel performance analysis by modifying the fuel mesh to introduce discrete cracks. While this approach is effective in capturing the important behavior at the fuel/cladding interface, there are multiple drawbacks to explicitly incorporating the cracks in the finite element mesh. Because the cracks are incorporated in the original mesh, the mesh must be modified for cracks of specified location and depth, so it is difficult to account for crack propagation and the formation of new cracks at other locations. The extended finite element method (XFEM) has emerged in recent years as a powerful method to represent arbitrary, evolving, discrete discontinuities within the context of the finite element method. Development work is underway by the authors to implement XFEM in the BISON fuel performance code, and this capability has previously been demonstrated in simulations of fracture propagation in ceramic nuclear fuel. These preliminary demonstrations have included only the fuel, and excluded the cladding for simplicity. This paper presents initial results of efforts to apply XFEM to

  4. Performance-based parameter tuning method of model-driven PID control systems.

    Science.gov (United States)

    Zhao, Y M; Xie, W F; Tu, X W

    2012-05-01

    In this paper, a performance-based parameter tuning method for the model-driven two-degree-of-freedom PID (MD TDOF PID) control system is proposed to enhance the control performance of a process. Known for its ability to stabilize unstable processes, track set-point changes quickly, and reject disturbances, the MD TDOF PID has gained research interest recently. The tuning methods reported for the MD TDOF PID are based on the internal model control (IMC) method instead of optimizing performance indices. In this paper, an Integral of Time Absolute Error (ITAE) zero-position-error optimal tuning and noise effect minimizing method is proposed for tuning the two parameters in the MD TDOF PID control system to achieve the desired regulating and disturbance rejection performance. The comparison with the two-degree-of-freedom control scheme based on a modified Smith predictor (TDOF CS MSP) and the designed MD TDOF PID tuned by the IMC tuning method demonstrates the effectiveness of the proposed tuning method. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
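
    A toy version of performance-index tuning: choose PID gains that minimize the ITAE criterion for a set-point step on an assumed first-order-plus-delay plant. The paper tunes two parameters of the MD TDOF PID structure, not the plain PID used here:

      import numpy as np
      from scipy.optimize import minimize

      def itae(gains, dt=0.01, T=10.0, tau=1.0, delay=0.2, K=1.0):
          """Integral of time-weighted absolute error for a unit set-point step."""
          kp, ki, kd = gains
          n, nd = int(T / dt), int(delay / dt)
          y, integ, e_prev, cost = 0.0, 0.0, 1.0, 0.0
          u_hist = [0.0] * (nd + 1)                       # input delay buffer
          for k in range(n):
              e = 1.0 - y
              integ += e * dt
              u_hist.append(kp * e + ki * integ + kd * (e - e_prev) / dt)
              e_prev = e
              y += dt * (-y + K * u_hist[-nd - 1]) / tau  # delayed first-order plant
              cost += (k * dt) * abs(e) * dt              # integral of t*|e| dt
          return cost

      res = minimize(itae, x0=[1.0, 1.0, 0.1], method="Nelder-Mead",
                     bounds=[(0, 10), (0, 10), (0, 5)])
      print("ITAE-optimal (kp, ki, kd):", np.round(res.x, 3))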

  5. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit... The advantage of the model correction factor method is that, in its simpler form not using gradient information on the original limit state function, or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...

  6. 40 CFR 63.9915 - What test methods and other procedures must I use to demonstrate initial compliance with dioxin...

    Science.gov (United States)

    2010-07-01

    What test methods and other procedures must I use to demonstrate initial compliance with dioxin/furan emission limits? (Section 63.9915) To demonstrate initial compliance with the emission limit for dioxins/furans in Table 1 to this subpart, you must follow the test methods and procedures...

  7. Automatically explaining machine learning prediction results: a demonstration on type 2 diabetes risk prediction.

    Science.gov (United States)

    Luo, Gang

    2016-01-01

    Predictive modeling is a key component of solutions to many healthcare problems. Among all predictive modeling approaches, machine learning methods often achieve the highest prediction accuracy, but suffer from a long-standing open problem precluding their widespread use in healthcare. Most machine learning models give no explanation for their prediction results, whereas interpretability is essential for a predictive model to be adopted in typical healthcare settings. This paper presents the first complete method for automatically explaining results for any machine learning predictive model without degrading accuracy. We implemented the method in computer code. Using the electronic medical record data set from the Practice Fusion diabetes classification competition, containing patient records from all 50 states in the United States, we demonstrated the method on predicting type 2 diabetes diagnosis within the next year. For the champion machine learning model of the competition, our method explained prediction results for 87.4% of patients who were correctly predicted by the model to have a type 2 diabetes diagnosis within the next year. Our demonstration showed the feasibility of automatically explaining results for any machine learning predictive model without degrading accuracy.

  8. Three-dimensional one-way bubble tracking method for the prediction of developing bubble-slug flows in a vertical pipe. 1st report, models and demonstration

    International Nuclear Information System (INIS)

    Tamai, Hidesada; Tomiyama, Akio

    2004-01-01

    A three-dimensional one-way bubble tracking method is one of the most promising numerical methods for the prediction of a developing bubble flow in a vertical pipe, provided that several constitutive models are prepared. In this study, the bubble shape, an equation of bubble motion, the liquid velocity profile, the pressure field, turbulent fluctuation and bubble coalescence are modeled based on available knowledge of bubble dynamics. Bubble shapes are classified into four types in terms of the bubble equivalent diameter. A wake velocity model is introduced to simulate the approach of bubbles toward one another due to wake entrainment. Bubble coalescence is treated as a stochastic phenomenon with the aid of coalescence probabilities that depend on the sizes of the two interacting bubbles. The proposed method can predict the spatio-temporal evolution of the flow pattern in a developing bubble-slug flow. (author)

  9. An automatic rat brain extraction method based on a deformable surface model.

    Science.gov (United States)

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic image processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    International Nuclear Information System (INIS)

    Wahlberg, Malin; Pazsit, Imre

    2005-01-01

    The purpose of this paper is to demonstrate the use of the invariant embedding method in a series of model transport problems, for which it is also possible to obtain an analytical solution. Due to the non-linear character of the embedding equations, their solution can only be obtained numerically. However, this can be done via a robust and effective iteration scheme. In return, the domain of applicability is far wider than the model problems investigated in this paper. The use of the invariant embedding method is demonstrated in three different areas. The first is the calculation of the energy spectrum of reflected (sputtered) particles from a multiplying medium, where the multiplication arises from recoil production. Both constant and energy dependent cross sections with a power law dependence were used in the calculations. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel and unexpected application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and a half-space are interrelated through embedding-like integral equations, by the solution of which the reflected flux from a half-space can be reconstructed from solutions in an infinite medium or vice versa. In all cases the invariant embedding method proved to be robust, fast and monotonically converging to the exact solutions. (authors)

  11. ADOxx Modelling Method Conceptualization Environment

    Directory of Open Access Journals (Sweden)

    Nesat Efendioglu

    2017-04-01

    Full Text Available The importance of Modelling Method Engineering is rising along with the importance of domain-specific languages (DSLs) and individual modelling approaches. In order to capture the relevant semantic primitives for a particular domain, it is necessary to involve both (a) domain experts, who identify relevant concepts, and (b) method engineers, who compose a valid and applicable modelling approach. This process consists of the conceptual design of a formal or semi-formal modelling method as well as the reliable, migratable, maintainable and user-friendly software development of the resulting modelling tool. The Modelling Method Engineering cycle is often under-estimated, as the conceptual architecture requires formal verification and the tool implementation requires practical usability; hence we propose a guideline and corresponding tools to support actors with different backgrounds along this complex engineering process. Based on practical experience in business, more than twenty research projects within the EU framework programmes and a number of bilateral research initiatives, this paper introduces the phases, a corresponding toolbox, and lessons learned, with the aim of supporting the engineering of a modelling method. The proposed approach is illustrated and validated with use cases from three different EU-funded research projects in the fields of (1) Industry 4.0, (2) e-learning and (3) cloud computing. The paper discusses the approach, the evaluation results and derived outlooks.

  12. Modeling of Solid State Transformer for the FREEDM System Demonstration

    Science.gov (United States)

    Jiang, Youyuan

    The Solid State Transformer (SST) is an essential component in the FREEDM system. This research focuses on the modeling of the SST and the controller hardware in the loop (CHIL) implementation of the SST for the support of the FREEDM system demonstration. The energy based control strategy for a three-stage SST is analyzed and applied. A simplified average model of the three-stage SST that is suitable for simulation in a real time digital simulator (RTDS) has been developed in this study. The model is also useful for general time-domain power system analysis and simulation. The proposed simplified average model has been validated in MATLAB and PLECS. The accuracy of the model has been verified through comparison with the cycle-by-cycle average (CCA) model and a detailed switching model. These models are also implemented in PSCAD, and a special strategy to implement the phase shift modulation has been proposed to enable the switching model simulation in PSCAD. The implementation of the CHIL test environment of the SST in RTDS is described in this report. The parameter setup of the model has been discussed in detail. One of the difficulties is the choice of the damping factor, which is revealed in this paper. Also, the grounding of the system has a large impact on the RTDS simulation. Another problem is that the performance of the system is highly dependent on the switch parameters such as voltage and current ratings. Finally, the functionalities of the SST have been realized on the platform. The distributed energy storage interface power injection and reverse power flow have been validated. Some limitations are noticed and discussed through the simulation on RTDS.

  13. Robust Control Mixer Method for Reconfigurable Control Design Using Model Matching Strategy

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Blanke, Mogens; Verhagen, Michel

    2007-01-01

    A novel control mixer method for reconfigurable control designs is developed. The proposed method extends the matrix form of the conventional control mixer concept into an LTI dynamic system form. The H_inf control technique is employed for these dynamic module designs after an augmented control system is constructed through a model-matching strategy. The stability, performance and robustness of the reconfigured system can be guaranteed when some conditions are satisfied. To illustrate the effectiveness of the proposed method, a robot system subjected to failures is used to demonstrate...

  14. Mechatronic Systems Design Methods, Models, Concepts

    CERN Document Server

    Janschek, Klaus

    2012-01-01

    In this textbook, fundamental methods for model-based design of mechatronic systems are presented in a systematic, comprehensive form. The method framework presented here comprises domain-neutral methods for modeling and performance analysis: multi-domain modeling (energy/port/signal-based), simulation (ODE/DAE/hybrid systems), robust control methods, stochastic dynamic analysis, and quantitative evaluation of designs using system budgets. The model framework is composed of analytical dynamic models for important physical and technical domains of realization of mechatronic functions, such as multibody dynamics, digital information processing and electromechanical transducers. Building on the modeling concept of a technology-independent generic mechatronic transducer, concrete formulations for electrostatic, piezoelectric, electromagnetic, and electrodynamic transducers are presented. More than 50 fully worked out design examples clearly illustrate these methods and concepts and enable independent study of th...

  15. An automatic image-based modelling method applied to forensic infography.

    Directory of Open Access Journals (Sweden)

    Sandra Zancajo-Blazquez

    Full Text Available This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative to modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives, as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention; and (iii) high-quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model.

  16. Graph modeling systems and methods

    Science.gov (United States)

    Neergaard, Mike

    2015-10-13

    An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
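
    One minimal reading of the claim in code, using networkx: terminating vertices for nodes, a non-terminating vertex for a vulnerability along a path, and articulation points standing in for the critical-point-of-failure determination. The topology and names are illustrative:

      import networkx as nx

      g = nx.Graph()
      g.add_edges_from([("plant", "bus1"),
                        ("bus1", "bus2"),
                        ("bus1", "river_crossing"),    # non-nodal vulnerability
                        ("river_crossing", "bus3")])   # on the path to bus3

      # A vertex is a critical point of failure if removing it disconnects the graph.
      critical = set(nx.articulation_points(g))
      for v in g.nodes:
          status = "critical point of failure" if v in critical else "redundant"
          print(f"{v}: {status}")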

  17. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    Science.gov (United States)

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
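
    The automatic selection of the number of clusters can be sketched with scikit-learn's truncated Dirichlet process mixture; the 2-D points below are stand-ins for real trajectory features, and the paper's incremental updating and time-sensitive (tDPMM) modeling are not reproduced:

      import numpy as np
      from sklearn.mixture import BayesianGaussianMixture

      rng = np.random.default_rng(0)
      data = np.vstack([rng.normal([0, 0], 0.3, (100, 2)),
                        rng.normal([3, 3], 0.3, (100, 2)),
                        rng.normal([0, 4], 0.3, (100, 2))])

      dpmm = BayesianGaussianMixture(
          n_components=10,                                       # truncation level
          weight_concentration_prior_type="dirichlet_process",
          random_state=0).fit(data)
      labels = dpmm.predict(data)
      print("clusters actually used:", np.unique(labels).size)   # ~3 expected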

  18. ACTIVE AND PARTICIPATORY METHODS IN BIOLOGY: MODELING

    Directory of Open Access Journals (Sweden)

    Brînduşa-Antonela SBÎRCEA

    2011-01-01

    Full Text Available By using active and participatory methods it is hoped that pupils will not only come to a deeper understanding of the issues involved, but also that their motivation will be heightened. Pupil involvement in their learning is essential. Moreover, by using a variety of teaching techniques, we can help students make sense of the world in different ways, increasing the likelihood that they will develop a conceptual understanding. The teacher must be a good facilitator, monitoring and supporting group dynamics. Modeling is an instructional strategy in which the teacher demonstrates a new concept or approach to learning and pupils learn by observing. In the teaching of biology the didactic materials are fundamental tools in the teaching-learning process. Reading about scientific concepts or having a teacher explain them is not enough. Research has shown that modeling can be used across disciplines and in all grade and ability level classrooms. Using this type of instruction, teachers encourage learning.

  19. Research and Demonstration of ‘Double-chain’ Eco-agricultural Model Standardization and Industrialization

    Directory of Open Access Journals (Sweden)

    ZHANG Jia-hong

    2015-04-01

    Full Text Available According to the agricultural resource endowment of Jiangsu Province, this paper created several kinds of 'double-chain' eco-agricultural models and an integrated supporting system based on 'waterfowl, marine lives, aquatic vegetables and paddy rice', 'special food and economic crops with livestock' and 'special food and economic crops with livestock and marine lives', which are suitable for extension and application in Jiangsu Province. Besides, it set 12 provincial standards and established a preliminary technical standard system for the 'double-chain' eco-agricultural model. In addition, it adopted 'leading agricultural enterprises (agricultural co-operatives or family farms) + demonstration zones + farmer households' as the operating mechanism for industrialization of the eco-agricultural model, which pushed forward the rapid development of standardization and industrialization of the 'double-chain' eco-agricultural model.

  20. A Novel Parametric Modeling Method and Optimal Design for Savonius Wind Turbines

    Directory of Open Access Journals (Sweden)

    Baoshou Zhang

    2017-03-01

    Full Text Available Under the inspiration of polar coordinates, a novel parametric modeling and optimization method for Savonius wind turbines was proposed to obtain the highest power output, in which a quadratic polynomial curve is bent to describe a blade. Only two design parameters are needed for the shape-complicated blade, so this novel method reduces the sampling scale. A series of transient simulations was run to obtain the performance coefficient (power coefficient Cp) for different modified turbines based on the computational fluid dynamics (CFD) method. Then, a global response surface model and a more precise local response surface model were created according to the Kriging method. These models define the relationship between the optimization objective Cp and the design parameters. The particle swarm optimization (PSO) algorithm was applied to find the optimal design based on these response surface models. Finally, the optimal Savonius blade, shaped like a "hook", was obtained. Cm (torque coefficient), Cp and the flow structure were compared for the optimal design and the classical design. The results demonstrate that the optimal Savonius turbine has excellent comprehensive performance. The power coefficient Cp is significantly increased from 0.247 to 0.262 (6% higher), and the weight of the optimal blade is reduced by 17.9%.
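
    A compact sketch of the surrogate-based loop: a Kriging (Gaussian process) response surface is fitted to sampled design/Cp pairs and then searched with a basic particle swarm. The analytic objective below is a made-up stand-in for the CFD-derived power coefficient, and the two design variables mimic the paper's two-parameter blade description:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(0)
      cp = lambda x: 0.26 - ((x[..., 0] - 0.4) ** 2 + (x[..., 1] - 0.7) ** 2)  # toy Cp

      X = rng.uniform(0, 1, (30, 2))                                 # sampled blade designs
      gp = GaussianProcessRegressor(kernel=RBF(0.3)).fit(X, cp(X))   # Kriging surrogate

      # Basic particle swarm search over the surrogate.
      pos = rng.uniform(0, 1, (20, 2))
      vel = np.zeros_like(pos)
      pbest, pval = pos.copy(), gp.predict(pos)
      for _ in range(50):
          gbest = pbest[np.argmax(pval)]
          vel = (0.7 * vel + 1.5 * rng.random((20, 2)) * (pbest - pos)
                           + 1.5 * rng.random((20, 2)) * (gbest - pos))
          pos = np.clip(pos + vel, 0, 1)
          val = gp.predict(pos)
          better = val > pval
          pbest[better], pval[better] = pos[better], val[better]
      print("surrogate optimum design:", pbest[np.argmax(pval)])

    In the paper the expensive evaluations are transient CFD runs, so the surrogate is what makes the PSO search affordable.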

  1. Demonstration recommendations for accelerated testing of concrete decontamination methods

    Energy Technology Data Exchange (ETDEWEB)

    Dickerson, K.S.; Ally, M.R.; Brown, C.H.; Morris, M.I.; Wilson-Nichols, M.J.

    1995-12-01

    A large number of aging US Department of Energy (DOE) surplus facilities located throughout the US require deactivation, decontamination, and decommissioning. Although several technologies are available commercially for concrete decontamination, emerging technologies with potential to reduce secondary waste and minimize the impact and risk to workers and the environment are needed. In response to these needs, the Accelerated Testing of Concrete Decontamination Methods project team described the nature and extent of contaminated concrete within the DOE complex and identified applicable emerging technologies. Existing information used to describe the nature and extent of contaminated concrete indicates that the most frequently occurring radiological contaminants are 137Cs, 238U (and its daughters), 60Co, 90Sr, and tritium. The total area of radionuclide-contaminated concrete within the DOE complex is estimated to be in the range of 7.9 × 10^8 ft^2, or approximately 18,000 acres. Concrete decontamination problems were matched with emerging technologies to recommend demonstrations considered to provide the most benefit to decontamination of concrete within the DOE complex. Emerging technologies with the most potential benefit were biological decontamination, electro-hydraulic scabbling, electrokinetics, and microwave scabbling.

  2. Demonstration recommendations for accelerated testing of concrete decontamination methods

    International Nuclear Information System (INIS)

    Dickerson, K.S.; Ally, M.R.; Brown, C.H.; Morris, M.I.; Wilson-Nichols, M.J.

    1995-12-01

    A large number of aging US Department of Energy (DOE) surplus facilities located throughout the US require deactivation, decontamination, and decommissioning. Although several technologies are available commercially for concrete decontamination, emerging technologies with potential to reduce secondary waste and minimize the impact and risk to workers and the environment are needed. In response to these needs, the Accelerated Testing of Concrete Decontamination Methods project team described the nature and extent of contaminated concrete within the DOE complex and identified applicable emerging technologies. Existing information used to describe the nature and extent of contaminated concrete indicates that the most frequently occurring radiological contaminants are 137Cs, 238U (and its daughters), 60Co, 90Sr, and tritium. The total area of radionuclide-contaminated concrete within the DOE complex is estimated to be in the range of 7.9 × 10^8 ft^2, or approximately 18,000 acres. Concrete decontamination problems were matched with emerging technologies to recommend demonstrations considered to provide the most benefit to decontamination of concrete within the DOE complex. Emerging technologies with the most potential benefit were biological decontamination, electro-hydraulic scabbling, electrokinetics, and microwave scabbling.

  3. Method of computer generation and projection recording of microholograms for holographic memory systems: mathematical modelling and experimental implementation

    International Nuclear Information System (INIS)

    Betin, A Yu; Bobrinev, V I; Evtikhiev, N N; Zherdev, A Yu; Zlokazov, E Yu; Lushnikov, D S; Markin, V V; Odinokov, S B; Starikov, S N; Starikov, R S

    2013-01-01

    A method of computer generation and projection recording of microholograms for holographic memory systems is presented; the results of mathematical modelling and experimental implementation of the method are demonstrated. (holographic memory)

  4. Demonstration of the improved PID method for the accurate temperature control of ADRs

    International Nuclear Information System (INIS)

    Shinozaki, K.; Hoshino, A.; Ishisaki, Y.; Mihara, T.

    2006-01-01

    Microcalorimeters require extreme stability (∼10 μK) of the thermal bath at low temperature (∼100 mK). We have developed a portable adiabatic demagnetization refrigerator (ADR) system for ground experiments with TES microcalorimeters, in which we observed a residual temperature difference between the aimed and measured values when the magnet current was controlled with the standard Proportional, Integral, and Derivative (PID) control method. The difference increases in time as the magnet current decreases. This phenomenon can be explained by the theory of magnetic cooling, and we have introduced a new functional parameter to improve the PID method. With this improvement, a long-term stability of the ADR temperature of about 10 μK rms is obtained for periods of up to ∼15 ks, down to almost zero magnet current. We briefly describe our ADR system and the principle of the improved PID method, showing the temperature control results. It is demonstrated that the controlled time at the aimed temperature can be extended by about 30% compared with the standard PID method in our system. The improved PID method is considered to be of great advantage especially in the range of small magnet current.

  5. Development of Demonstrably Predictive Models for Emissions from Alternative Fuels Based Aircraft Engines

    Science.gov (United States)

    2017-05-01

    Final report for SERDP Project WP-2151, Development of Demonstrably Predictive Models for Emissions from Alternative Fuels Based Aircraft Engines.

  6. An Online Method for Interpolating Linear Parametric Reduced-Order Models

    KAUST Repository

    Amsallem, David; Farhat, Charbel

    2011-01-01

    A two-step online method is proposed for interpolating projection-based linear parametric reduced-order models (ROMs) in order to construct a new ROM for a new set of parameter values. The first step of this method transforms each precomputed ROM into a consistent set of generalized coordinates. The second step interpolates the associated linear operators on their appropriate matrix manifold. Real-time performance is achieved by precomputing inner products between the reduced-order bases underlying the precomputed ROMs. The proposed method is illustrated by applications in mechanical and aeronautical engineering. In particular, its robustness is demonstrated by its ability to handle the case where the sampled parameter set values exhibit a mode veering phenomenon. © 2011 Society for Industrial and Applied Mathematics.

  7. A Novel Multimodal Biometrics Recognition Model Based on Stacked ELM and CCA Methods

    Directory of Open Access Journals (Sweden)

    Jucheng Yang

    2018-04-01

    Full Text Available Multimodal biometrics combine a variety of biological features and can significantly improve identification performance; this is a newly developed trend in biometric identification technology. This study proposes a novel multimodal biometrics recognition model based on the stacked extreme learning machine (ELM) and canonical correlation analysis (CCA) methods. The model, which has a symmetric structure, is found to have high potential for multimodal biometrics. The model works as follows. First, it learns the hidden-layer representation of biological images using extreme learning machines layer by layer. Second, the canonical correlation analysis method is applied to map the representation to a feature space, which is used to reconstruct the multimodal image feature representation. Third, the reconstructed features are used as the input of a classifier for supervised training and output. To verify the validity and efficiency of the method, we adopt it for new hybrid datasets obtained from typical face image datasets and finger-vein image datasets. Our experimental results demonstrate that our model performs better than traditional methods.
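
    The CCA fusion step can be sketched with scikit-learn: project two modality feature sets into a maximally correlated space and concatenate the projections as the fused feature for a classifier. Random features stand in for the stacked-ELM representations, which are not reproduced:

      import numpy as np
      from sklearn.cross_decomposition import CCA
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 400
      latent = rng.normal(size=(n, 5))                  # shared identity factor
      face = latent @ rng.normal(size=(5, 40)) + 0.5 * rng.normal(size=(n, 40))
      vein = latent @ rng.normal(size=(5, 30)) + 0.5 * rng.normal(size=(n, 30))
      y = (latent[:, 0] > 0).astype(int)                # toy identity label

      cca = CCA(n_components=5)
      f_c, v_c = cca.fit_transform(face, vein)          # correlated projections
      fused = np.hstack([f_c, v_c])                     # reconstructed multimodal feature

      Xtr, Xte, ytr, yte = train_test_split(fused, y, random_state=0)
      clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
      print("fused-feature accuracy:", clf.score(Xte, yte))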

  8. Modifying conjoint methods to model managers' reactions to business environmental trends : an application to modeling retailer reactions to sales trends

    NARCIS (Netherlands)

    Oppewal, H.; Louviere, J.J.; Timmermans, H.J.P.

    2000-01-01

    This article proposes and demonstrates how conjoint methods can be adapted to allow the modeling of managerial reactions to various changes in economic and competitive environments and their effects on observed sales levels. Because in general micro-level data on strategic decision making over time

  9. A mechanistic model for electricity consumption on dairy farms: definition, validation, and demonstration.

    Science.gov (United States)

    Upton, J; Murphy, M; Shalloo, L; Groot Koerkamp, P W G; De Boer, I J M

    2014-01-01

    Our objective was to define and demonstrate a mechanistic model that enables dairy farmers to explore the impact of a technical or managerial innovation on electricity consumption, associated CO2 emissions, and electricity costs. We, therefore, (1) defined a model for electricity consumption on dairy farms (MECD) capable of simulating total electricity consumption along with related CO2 emissions and electricity costs on dairy farms on a monthly basis; (2) validated the MECD using empirical data of 1yr on commercial spring calving, grass-based dairy farms with 45, 88, and 195 milking cows; and (3) demonstrated the functionality of the model by applying 2 electricity tariffs to the electricity consumption data and examining the effect on total dairy farm electricity costs. The MECD was developed using a mechanistic modeling approach and required the key inputs of milk production, cow number, and details relating to the milk-cooling system, milking machine system, water-heating system, lighting systems, water pump systems, and the winter housing facilities as well as details relating to the management of the farm (e.g., season of calving). Model validation showed an overall relative prediction error (RPE) of less than 10% for total electricity consumption. More than 87% of the mean square prediction error of total electricity consumption was accounted for by random variation. The RPE values of the milk-cooling systems, water-heating systems, and milking machine systems were less than 20%. The RPE values for automatic scraper systems, lighting systems, and water pump systems varied from 18 to 113%, indicating a poor prediction for these metrics. However, automatic scrapers, lighting, and water pumps made up only 14% of total electricity consumption across all farms, reducing the overall impact of these poor predictions. Demonstration of the model showed that total farm electricity costs increased by between 29 and 38% by moving from a day and night tariff to a flat

  10. Electromagnetic modeling method for eddy current signal analysis

    International Nuclear Information System (INIS)

    Lee, D. H.; Jung, H. K.; Cheong, Y. M.; Lee, Y. S.; Huh, H.; Yang, D. J.

    2004-10-01

    An electromagnetic modeling method for eddy current signal analysis is necessary before an experiment is performed. Electromagnetic modeling methods consist of analytical methods and numerical methods. The numerical methods can be divided into the Finite Element Method (FEM), the Boundary Element Method (BEM) and the Volume Integral Method (VIM). Each modeling method has its merits and demerits; therefore, a suitable modeling method can be chosen by considering the characteristics of each one. This report explains the principle and application of each modeling method and compares the modeling programs.

  11. Combining static and dynamic modelling methods: a comparison of four methods

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1995-01-01

    A conceptual model of a system is an explicit description of the behaviour required of the system. Methods for conceptual modelling include entity-relationship (ER) modelling, data flow modelling, Jackson System Development (JSD) and several object-oriented analysis methods. Given the current

  12. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1993-01-01

    This report documents progress to date under a three-year contract for developing "Methods for Testing Transport Models." The work described includes (1) choice of best methods for producing "code emulators" for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information due to previous empirical and/or theoretical analyses, (4) extension of code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases.

  13. Modern Methods for Modeling Change in Obesity Research in Nursing.

    Science.gov (United States)

    Sereika, Susan M; Zheng, Yaguang; Hu, Lu; Burke, Lora E

    2017-08-01

    Persons receiving treatment for weight loss often demonstrate heterogeneity in lifestyle behaviors and health outcomes over time. Traditional repeated measures approaches focus on the estimation and testing of an average temporal pattern, ignoring the interindividual variability about the trajectory. An alternate person-centered approach, group-based trajectory modeling, can be used to identify distinct latent classes of individuals following similar trajectories of behavior or outcome change as a function of age or time and can be expanded to include time-invariant and time-dependent covariates and outcomes. Another latent class method, growth mixture modeling, builds on group-based trajectory modeling to investigate heterogeneity within the distinct trajectory classes. In this applied methodologic study, group-based trajectory modeling for analyzing changes in behaviors or outcomes is described and contrasted with growth mixture modeling. An illustration of group-based trajectory modeling is provided using calorie intake data from a single-group, single-center prospective study for weight loss in adults who are either overweight or obese.

  14. The Quadrotor Dynamic Modeling and Indoor Target Tracking Control Method

    Directory of Open Access Journals (Sweden)

    Dewei Zhang

    2014-01-01

    Full Text Available A reliable nonlinear dynamic model of a quadrotor is presented. The nonlinear dynamic model includes actuator dynamics and aerodynamic effects. Since the rotors run near a constant hovering speed, the dynamic model is simplified at the hovering operating point. Based on the simplified nonlinear dynamic model, PID controllers with feedback linearization and feedforward control are proposed using the backstepping method. These controllers are used to control both the attitude and position of the quadrotor. A fully custom quadrotor was developed to verify the correctness of the dynamic model and control algorithms. The attitude of the quadrotor is measured by an inertial measurement unit (IMU). The position of the quadrotor in a GPS-denied environment, especially an indoor environment, is estimated from downward camera and ultrasonic sensor measurements. The validity and effectiveness of the proposed dynamic model and control algorithms are demonstrated by experimental results. It is shown that the vehicle achieves robust vision-based hovering and moving target tracking control.
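
    A sketch of the attitude loop alone: a PID controller on a single-axis roll model linearized about hover. The inertia and gains are illustrative, and the feedback linearization, feedforward, and backstepping elements of the paper's controllers are omitted:

      import numpy as np

      J = 0.02                       # roll inertia (kg m^2), assumed
      kp, ki, kd = 4.0, 0.5, 0.8     # illustrative PID gains
      dt, T = 0.002, 3.0
      phi, rate, integ = 0.0, 0.0, 0.0
      target = np.deg2rad(10.0)      # commanded roll angle

      for _ in range(int(T / dt)):
          e = target - phi
          integ += e * dt
          torque = kp * e + ki * integ - kd * rate   # derivative on the measurement
          rate += dt * torque / J                    # hover-linearized roll dynamics
          phi += dt * rate

      print(f"final roll error: {np.rad2deg(target - phi):.3f} deg")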

  15. Development of a formalism of movable cellular automaton method for numerical modeling of fracture of heterogeneous elastic-plastic materials

    Directory of Open Access Journals (Sweden)

    S. Psakhie

    2013-04-01

    Full Text Available A general approach to realizing models of elasticity, plasticity, and fracture of heterogeneous materials within the framework of particle-based numerical methods is proposed in the paper. It is based on building many-body forces of particle interaction that make the response of a particle ensemble conform correctly to the response (including elastic-plastic behavior and fracture) of the simulated solid. Implementation of the proposed approach within particle-based methods is demonstrated by the example of the movable cellular automaton (MCA) method, which integrates the possibilities of the particle-based discrete element method (DEM) and cellular automaton methods. Emergent advantages of the developed approach to formulating many-body interaction are discussed. Chief among them are its applicability to various realizations of the discrete element concept and the possibility of realizing various rheological models (including elastic-plastic or visco-elastic-plastic) and models of fracture to study deformation and fracture of solid-phase materials and media. The capabilities of particle-based modeling of heterogeneous solids are demonstrated by the problem of simulating deformation and fracture of particle-reinforced metal-ceramic composites.
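
    The following is a generic particle-based sketch, not the authors' MCA formulation: a pairwise spring force augmented by a simple many-body term that modulates each bond's stiffness by the mean strain of its neighborhood, which is the kind of non-pairwise coupling the paper argues is needed for a correct elastic-plastic response.

```python
# Generic illustration of a many-body particle force (not the MCA method
# itself): bond forces are stiffened by the local mean strain environment.
import numpy as np

def forces(x, pairs, k=1.0, r0=1.0, alpha=0.3):
    f = np.zeros_like(x)
    strain = {i: [] for i in range(len(x))}
    for i, j in pairs:                          # first pass: local strains
        r = np.linalg.norm(x[j] - x[i])
        strain[i].append(r / r0 - 1.0)
        strain[j].append(r / r0 - 1.0)
    for i, j in pairs:                          # second pass: forces
        d = x[j] - x[i]
        r = np.linalg.norm(d)
        env = 0.5 * (np.mean(strain[i]) + np.mean(strain[j]))  # many-body term
        fmag = k * (r - r0) * (1.0 + alpha * env)
        f[i] += fmag * d / r
        f[j] -= fmag * d / r
    return f

x = np.array([[0.0, 0.0], [1.1, 0.0], [0.5, 0.9]])
print(forces(x, [(0, 1), (1, 2), (0, 2)]))
```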

  16. Numerical Modelling of Three-Fluid Flow Using The Level-set Method

    Science.gov (United States)

    Li, Hongying; Lou, Jing; Shang, Zhi

    2014-11-01

    This work presents a numerical model for simulation of three-fluid flow involving two different moving interfaces. These interfaces are captured using the level-set method via two different level-set functions. A combined formulation with only one set of conservation equations for the whole physical domain, consisting of the three different immiscible fluids, is employed. Numerical solution is performed on a fixed mesh using the finite volume method. Surface tension effect is incorporated using the Continuum Surface Force model. Validation of the present model is made against available results for stratified flow and rising bubble in a container with a free surface. Applications of the present model are demonstrated by a variety of three-fluid flow systems including (1) three-fluid stratified flow, (2) two-fluid stratified flow carrying the third fluid in the form of drops and (3) simultaneous rising and settling of two drops in a stationary third fluid. The work is supported by a Thematic and Strategic Research from A*STAR, Singapore (Ref. #: 1021640075).
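
    A minimal one-dimensional illustration of the two-level-set bookkeeping is given below: each of the two interfaces is tracked as the zero crossing of its own signed-distance function, advected here with a first-order upwind scheme. The grid, velocity, and initial interface positions are placeholders.

```python
# 1D toy version of the two-level-set idea: two signed-distance functions,
# one per interface, advected with a uniform velocity (first-order upwind).
import numpy as np

nx, dx, dt, u = 200, 0.01, 0.004, 0.5           # grid, steps, velocity (u > 0)
x = np.linspace(0, 2, nx)
phi1 = x - 0.5                                  # interface fluid1/fluid2 at x = 0.5
phi2 = x - 1.0                                  # interface fluid2/fluid3 at x = 1.0

for _ in range(100):                            # upwind advection
    for phi in (phi1, phi2):
        phi[1:] -= u * dt / dx * (phi[1:] - phi[:-1])

print("interfaces near:", x[np.argmin(np.abs(phi1))], x[np.argmin(np.abs(phi2))])
```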

  17. Fracture Mechanics Method for Word Embedding Generation of Neural Probabilistic Linguistic Model.

    Science.gov (United States)

    Bi, Size; Liang, Xiao; Huang, Ting-Lei

    2016-01-01

    Word embedding, a lexical vector representation generated via the neural linguistic model (NLM), is empirically demonstrated to be appropriate for improving the performance of traditional language models. However, the high dimensionality inherent in NLMs contributes to problems of hyperparameter tuning and long training times in modeling. Here, we propose a force-directed method to alleviate these problems and simplify the generation of word embeddings. In this framework, each word is treated as a point in the real world; thus it can approximately simulate physical movement following certain mechanics. To simulate the variation of meaning in phrases, we use fracture mechanics to model the formation and breakdown of meaning in 2-gram word groups. With experiments on the natural language tasks of part-of-speech tagging, named entity recognition, and semantic role labeling, the results demonstrate that the 2-dimensional word embedding can rival the word embeddings generated by classic NLMs in terms of accuracy, recall, and text visualization.
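
    To illustrate the force-directed intuition (in a generic form, not the authors' fracture-mechanics formulation), the toy sketch below lets words connected in observed 2-grams attract like springs while all other pairs weakly repel, so that related words drift together in a 2-dimensional embedding.

```python
# Toy force-directed 2-D word layout: bigram-linked words attract, others
# weakly repel. The corpus and update rule are placeholders.
import numpy as np

words = ["new", "york", "city", "bank"]
bigrams = {(0, 1), (1, 2)}                       # "new york", "york city"
pos = np.random.default_rng(2).normal(size=(4, 2))

for _ in range(500):
    for i in range(len(words)):
        for j in range(len(words)):
            if i == j:
                continue
            d = pos[j] - pos[i]
            r = np.linalg.norm(d) + 1e-9
            if (i, j) in bigrams or (j, i) in bigrams:
                pos[i] += 0.01 * (r - 0.5) * d / r   # spring toward bonded word
            else:
                pos[i] -= 0.001 * d / (r ** 2)       # weak repulsion

print(dict(zip(words, np.round(pos, 2))))
```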

  18. AI/OR computational model for integrating qualitative and quantitative design methods

    Science.gov (United States)

    Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor

    1990-01-01

    A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.

  19. [Method of immunocytochemical demonstration of cholinergic neurons in the central nervous system of laboratory animals].

    Science.gov (United States)

    Korzhevskiĭ, D E; Grigor'ev, I P; Kirik, O V; Zelenkova, N M; Sukhorukova, E G

    2013-01-01

    A protocol of immunocytochemical demonstration of choline acetyltransferase (ChAT), a key enzyme of acetylcholine synthesis, in paraffin sections of the brain of some laboratory animals, is presented. The method is simple, gives fairly reproducible results, and allows for demonstration of ChAT in neurons, nerve fibers, and terminals in preparations of at least three species of laboratory animals, including rat, rabbit, and cat. Different kinds of fixation (10% formalin, 4% paraformaldehyde, or zinc-ethanol-formaldehyde) were found suitable for immunocytochemical visualization of ChAT; however, optimal results were obtained with the application of zinc-ethanol-formaldehyde.

  20. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    Full Text Available This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In the proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
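
    The distance ingredients can be sketched with a KD-tree, as below: nearest-neighbour distances from sampled model points to the cloud (a DistMC-like term) and from the cloud to the model (a DistCM-like term). The point sets are synthetic, and the weighting scheme of the actual SimMC definition is omitted here.

```python
# Nearest-neighbour distance terms between a sampled model surface and a
# partial, noisy scan; the weighting of the real SimMC definition is omitted.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
model_pts = rng.uniform(0, 1, (500, 3))          # stand-in for sampled model surface
cloud_pts = model_pts[:300] + rng.normal(0, 0.01, (300, 3))  # partial noisy scan

dist_m_to_c = cKDTree(cloud_pts).query(model_pts)[0].mean()  # DistMC-like term
dist_c_to_m = cKDTree(model_pts).query(cloud_pts)[0].mean()  # DistCM-like term
print(f"model->cloud {dist_m_to_c:.4f}, cloud->model {dist_c_to_m:.4f}")
```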

  1. A Component-Based Modeling and Validation Method for PLC Systems

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2014-05-01

    Full Text Available Programmable logic controllers (PLCs) are complex embedded systems that are widely used in industry. This paper presents a component-based modeling and validation method for PLC systems using the behavior-interaction-priority (BIP) framework. We designed a general system architecture and a component library for a type of device control system. The control software and the hardware of the environment were all modeled as BIP components. System requirements were formalized as monitors. Simulation was carried out to validate the system model. A realistic industrial example, a gates control system, was employed to illustrate our strategies. We found a couple of design errors during the simulation, which helped us to improve the dependability of the original system. The experimental results demonstrated the effectiveness of our approach.

  2. A fuzzy inventory model with acceptable shortage using graded mean integration value method

    Science.gov (United States)

    Saranya, R.; Varadarajan, R.

    2018-04-01

    In many inventory models uncertainty is due to fuzziness, and fuzziness is the closest possible approach to reality. In this paper, we propose a fuzzy inventory model with acceptable shortage which is completely backlogged. We fuzzify the carrying cost, backorder cost, and ordering cost using triangular and trapezoidal fuzzy numbers to obtain the fuzzy total cost. The purpose of our study is to defuzzify the total profit function by the Graded Mean Integration Value Method. Further, a numerical example is given to demonstrate the developed crisp and fuzzy models.
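
    The defuzzification step has simple closed forms. The graded mean integration value of a triangular fuzzy number (a, b, c) is (a + 4b + c)/6, and of a trapezoidal number (a, b, c, d) is (a + 2b + 2c + d)/6; the sketch below applies these standard formulas to made-up cost numbers.

```python
# Graded mean integration values for triangular and trapezoidal fuzzy numbers;
# the cost figures are invented for illustration.
def gmiv_triangular(a, b, c):
    return (a + 4 * b + c) / 6.0

def gmiv_trapezoidal(a, b, c, d):
    return (a + 2 * b + 2 * c + d) / 6.0

fuzzy_ordering_cost = (80, 100, 130)             # pessimistic/most-likely/optimistic
print("crisp ordering cost:", gmiv_triangular(*fuzzy_ordering_cost))
print("crisp carrying cost:", gmiv_trapezoidal(4, 5, 6, 8))
```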

  3. Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method

    Science.gov (United States)

    Asavaskulkiet, Krissada

    2018-04-01

    In this paper, we propose a new face hallucination technique: face image reconstruction in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique can operate directly on tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, which is validated by extensive experiments with high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.

  4. A method for physically based model analysis of conjunctive use in response to potential climate changes

    Science.gov (United States)

    Hanson, R.T.; Flint, L.E.; Flint, A.L.; Dettinger, M.D.; Faunt, C.C.; Cayan, D.; Schmid, W.

    2012-01-01

    Potential climate change effects on aspects of conjunctive management of water resources can be evaluated by linking climate models with fully integrated groundwater-surface water models. The objective of this study is to develop a modeling system that links global climate models with regional hydrologic models, using the California Central Valley as a case study. The new method is a supply and demand modeling framework that can be used to simulate and analyze potential climate change and conjunctive use. Supply-constrained and demand-driven linkages in the water system in the Central Valley are represented with the linked climate models, precipitation-runoff models, agricultural and native vegetation water use, and hydrologic flow models to demonstrate the feasibility of this method. Simulated precipitation and temperature were used from the GFDL-A2 climate change scenario through the 21st century to drive a regional water balance mountain hydrologic watershed model (MHWM) for the surrounding watersheds in combination with a regional integrated hydrologic model of the Central Valley (CVHM). Application of this method demonstrates the potential transition from predominantly surface water to groundwater supply for agriculture with secondary effects that may limit this transition of conjunctive use. The particular scenario considered includes intermittent climatic droughts in the first half of the 21st century followed by severe persistent droughts in the second half of the 21st century. These climatic droughts do not yield a valley-wide operational drought but do cause reduced surface water deliveries and increased groundwater abstractions that may cause additional land subsidence, reduced water for riparian habitat, or changes in flows at the Sacramento-San Joaquin River Delta. The method developed here can be used to explore conjunctive use adaptation options and hydrologic risk assessments in regional hydrologic systems throughout the world.

  5. A mechanistic model for electricity consumption on dairy farms: Definition, validation, and demonstration

    OpenAIRE

    Upton, J.R.; Murphy, M.; Shallo, L.; Groot Koerkamp, P.W.G.; Boer, de, I.J.M.

    2014-01-01

    Our objective was to define and demonstrate a mechanistic model that enables dairy farmers to explore the impact of a technical or managerial innovation on electricity consumption, associated CO2 emissions, and electricity costs. We, therefore, (1) defined a model for electricity consumption on dairy farms (MECD) capable of simulating total electricity consumption along with related CO2 emissions and electricity costs on dairy farms on a monthly basis; (2) validated the MECD using empirical d...

  6. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation

    Directory of Open Access Journals (Sweden)

    Zhang Jing

    2016-01-01

    Full Text Available To assist physicians in quickly finding a required 3D model from a mass of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of the dimensionality reduction (DR) and feature vector transformation (FVT) methods. The DR method reduces the dimensionality of the feature vector; only the top M low-frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval results than other proposed methods.
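
    The DR step can be sketched in a few lines: take the discrete Fourier transform of a feature vector and retain only the top M low-frequency coefficients, which both shortens the vector and suppresses high-frequency noise. M and the stand-in feature vector below are arbitrary.

```python
# DFT-based dimensionality reduction: keep only the first M low-frequency
# coefficients of a (stand-in) shape descriptor vector.
import numpy as np

def reduce_dft(feature_vec, M=16):
    spectrum = np.fft.rfft(feature_vec)
    return spectrum[:M]                          # low-frequency coefficients only

feature = np.random.default_rng(4).normal(size=256)   # stand-in descriptor
reduced = reduce_dft(feature)
print(len(feature), "->", len(reduced), "complex coefficients")
```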

  7. Numerical Methods for the Optimization of Nonlinear Residual-Based Subgrid-Scale Models Using the Variational Germano Identity

    NARCIS (Netherlands)

    Maher, G.D.; Hulshoff, S.J.

    2014-01-01

    The Variational Germano Identity [1, 2] is used to optimize the coefficients of residual-based subgrid-scale models that arise from the application of a Variational Multiscale Method [3, 4]. It is demonstrated that numerical iterative methods can be used to solve the Germano relations to obtain

  8. The geometric background-field method, renormalization and the Wess-Zumino term in non-linear sigma-models

    International Nuclear Information System (INIS)

    Mukhi, S.

    1986-01-01

    A simple recursive algorithm is presented which generates the reparametrization-invariant background-field expansion for non-linear sigma-models on manifolds with an arbitrary riemannian metric. The method is also applicable to Wess-Zumino terms and to counterterms. As an example, the general-metric model is expanded to sixth order and compared with previous results. For locally symmetric spaces, we actually obtain a general formula for the nth order term. The method is shown to facilitate the study of models with Wess-Zumino terms. It is demonstrated that, for chiral models, the Wess-Zumino term is unrenormalized to all orders in perturbation theory even when the model is not conformally invariant. (orig.)

  9. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    Energy Technology Data Exchange (ETDEWEB)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia; Grelle, Austin

    2015-04-26

    Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.

  10. Near-point string: Simple method to demonstrate anticipated near point for multifocal and accommodating intraocular lenses.

    Science.gov (United States)

    George, Monica C; Lazer, Zane P; George, David S

    2016-05-01

    We present a technique that uses a near-point string to demonstrate the anticipated near point of multifocal and accommodating intraocular lenses (IOLs). Beads are placed on the string at distances corresponding to the near points for diffractive and accommodating IOLs. The string is held up to the patient's eye to demonstrate where each of the IOLs is likely to provide the best near vision. None of the authors has a financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  11. Modeling laser beam diffraction and propagation by the mode-expansion method.

    Science.gov (United States)

    Snyder, James J

    2007-08-01

    In the mode-expansion method for modeling propagation of a diffracted beam, the beam at the aperture can be expanded as a weighted set of orthogonal modes. The parameters of the expansion modes are chosen to maximize the weighting coefficient of the lowest-order mode. As the beam propagates, its field distribution can be reconstructed from the set of weighting coefficients and the Gouy phase of the lowest-order mode. We have developed a simple procedure to implement the mode-expansion method for propagation through an arbitrary ABCD matrix, and we have demonstrated that it is accurate in comparison with direct calculations of diffraction integrals and much faster.
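
    The propagation rule underneath the method is the standard ABCD transformation of the complex beam parameter, q' = (Aq + B)/(Cq + D). The sketch below propagates a lowest-order Gaussian mode through two metres of free space; the wavelength and waist are illustrative.

```python
# Complex-beam-parameter propagation of the lowest-order Gaussian mode
# through an ABCD system (here: 2 m of free space); values are illustrative.
import numpy as np

wavelength = 633e-9
w0 = 1e-3                                        # 1 mm waist
q = 1j * np.pi * w0**2 / wavelength              # q at the waist

A, B, C, D = 1.0, 2.0, 0.0, 1.0                  # free-space propagation, 2 m
q = (A * q + B) / (C * q + D)

R = 1.0 / np.real(1.0 / q)                       # wavefront curvature radius
w = np.sqrt(-wavelength / (np.pi * np.imag(1.0 / q)))   # spot size
print(f"after 2 m: w = {w*1e3:.3f} mm, R = {R:.2f} m")
```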

  12. Variational methods in molecular modeling

    CERN Document Server

    2017-01-01

    This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems ranging from inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes, microemulsions, magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical unders...

  13. MATHEMATICAL MODEL OF THE RHEOLOGICAL BEHAVIOR OF VISCOPLASTIC FLUID, WHICH DEMONSTRATES THE EFFECT OF “SOLIDIFICATION”

    Directory of Open Access Journals (Sweden)

    V. N. Kolodezhnov

    2014-01-01

    Full Text Available The irregular behavior of some kinds of suspensions based on polymeric compositions and fine-dispersed fractions is characterized. In a simple, one-dimensional, shearing, viscometric flow, such materials demonstrate the following mechanical behavior. There is no deformation if the shear stress does not exceed a certain critical value. If this critical value is exceeded, flow begins. This behavior is well known and corresponds to the rheological models of viscoplastic fluids. However, a further increase in the shear rate results in “solidification”. A rheological model of such viscoplastic fluids, whose mechanical behavior demonstrates the “solidification” effect, is offered. This model contains four empirical parameters. The impact of the exponent on the dependence of the shear stress and effective viscosity on the shear rate in the rheological model is presented graphically. An extrapolation of the rheological model to three-dimensional flow is proposed.
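
    As a purely hypothetical illustration of such a constitutive law (not the paper's exact four-parameter model), the sketch below uses a Bingham-type stress with an effective viscosity that grows without bound as the shear rate approaches a critical value, mimicking the described "solidification".

```python
# Hypothetical four-parameter viscoplastic law with "solidification": yield
# stress tau0, plastic viscosity mu, critical shear rate gc, exponent n.
import numpy as np

def shear_stress(gamma_dot, tau0=10.0, mu=1.0, gc=100.0, n=2.0):
    gamma_dot = np.asarray(gamma_dot, dtype=float)
    solidify = 1.0 / (1.0 - (gamma_dot / gc) ** n)   # diverges as gamma_dot -> gc
    return tau0 + mu * gamma_dot * solidify

for g in (1.0, 50.0, 90.0, 99.0):                    # stay below gc
    print(f"gamma_dot={g:6.1f}  tau={shear_stress(g):10.2f}")
```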

  14. Mathematical Modeling and Simulation of SWRO Process Based on Simultaneous Method

    Directory of Open Access Journals (Sweden)

    Aipeng Jiang

    2014-01-01

    Full Text Available Reverse osmosis (RO) is one of the most efficient techniques for seawater desalination to address the shortage of freshwater. For prediction and analysis of the performance of the seawater reverse osmosis (SWRO) process, an accurate and detailed model based on solution-diffusion and mass transfer theory is established. Since the accurate formulation of the model includes many differential equations and strongly nonlinear equations (differential and algebraic equations, DAEs), the simultaneous method, through orthogonal collocation on finite elements together with a large-scale solver, was used to obtain the solutions efficiently. The model was fully discretized into an NLP (nonlinear programming) problem with a large number of variables and equations, and the NLP was then solved by the large-scale solver IPOPT. Validation of the formulated model and solution method is carried out through a case study on a SWRO plant. Simulation and analysis are then performed to demonstrate the performance of the reverse osmosis process; operational conditions such as feed pressure, feed flow rate, and feed temperature are also analyzed. This work is significant for a detailed understanding of the RO process and for future energy saving through operational optimization.
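
    The solution-diffusion core of most RO models reduces to two fluxes: water flux Jw = A(dP - dPi) and salt flux Js = B * dC. The sketch below evaluates these with illustrative membrane coefficients and a rough osmotic-pressure rule of thumb, not the paper's calibrated values.

```python
# Solution-diffusion fluxes with illustrative coefficients (not the paper's
# calibrated model): Jw = A * (dP - dPi), Js = B * dC.
A = 3.5e-12        # water permeability, m/(s Pa)
B = 2.0e-7         # salt permeability, m/s
dP = 60e5          # applied transmembrane pressure, Pa
c_feed, c_perm = 35.0, 0.3                     # salinity, kg/m^3
dPi = 0.805e5 * (c_feed - c_perm)              # ~0.8 bar per kg/m^3 (rough rule)

Jw = A * (dP - dPi)                            # m^3/(m^2 s)
Js = B * (c_feed - c_perm)                     # kg/(m^2 s)
print(f"water flux {Jw*3.6e6:.1f} L/(m2 h), salt flux {Js*3.6e6:.2f} g/(m2 h)")
```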

  15. Hidden Markov models for sequence analysis: extension and analysis of the basic method

    DEFF Research Database (Denmark)

    Hughey, Richard; Krogh, Anders Stærmose

    1996-01-01

    Hidden Markov models (HMMs) are a highly effective means of modeling a family of unaligned sequences or a common motif within a set of unaligned sequences. The trained HMM can then be used for discrimination or multiple alignment. The basic mathematical description of an HMM and its expectation-maximization training procedure is relatively straightforward. In this paper, we review the mathematical extensions and heuristics that move the method from the theoretical to the practical. Then, we experimentally analyze the effectiveness of model regularization, dynamic model modification, and optimization strategies. Finally it is demonstrated on the SH2 domain how a domain can be found from unaligned sequences using a special model type. The experimental work was completed with the aid of the Sequence Alignment and Modeling software suite.
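
    The computational building block behind HMM scoring and expectation-maximization training is the forward algorithm, which sums the probability of an observed sequence over all hidden state paths. The sketch below runs it on a toy two-state model, not an SH2 profile HMM.

```python
# Forward algorithm on a toy two-state, three-symbol HMM.
import numpy as np

pi = np.array([0.6, 0.4])                       # initial state probabilities
A = np.array([[0.7, 0.3],                       # transition matrix
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],                  # emission probabilities
              [0.1, 0.3, 0.6]])
obs = [0, 2, 1, 2]                              # observed symbol indices

alpha = pi * B[:, obs[0]]
for t in obs[1:]:
    alpha = (alpha @ A) * B[:, t]               # recursion over time steps
print("P(sequence | model) =", alpha.sum())
```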

  16. Fracture Mechanics Method for Word Embedding Generation of Neural Probabilistic Linguistic Model

    Directory of Open Access Journals (Sweden)

    Size Bi

    2016-01-01

    Full Text Available Word embedding, a lexical vector representation generated via the neural linguistic model (NLM), is empirically demonstrated to be appropriate for improving the performance of traditional language models. However, the high dimensionality inherent in NLMs contributes to problems of hyperparameter tuning and long training times in modeling. Here, we propose a force-directed method to alleviate these problems and simplify the generation of word embeddings. In this framework, each word is treated as a point in the real world; thus it can approximately simulate physical movement following certain mechanics. To simulate the variation of meaning in phrases, we use fracture mechanics to model the formation and breakdown of meaning in 2-gram word groups. With experiments on the natural language tasks of part-of-speech tagging, named entity recognition, and semantic role labeling, the results demonstrate that the 2-dimensional word embedding can rival the word embeddings generated by classic NLMs in terms of accuracy, recall, and text visualization.

  17. Facilitating arrhythmia simulation: the method of quantitative cellular automata modeling and parallel running

    Directory of Open Access Journals (Sweden)

    Mondry Adrian

    2004-08-01

    Full Text Available Abstract. Background: Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at the organ level. Methods: We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results: We demonstrate that electrical activities at the channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions: Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method of weaving experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method for making time-consuming simulations feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods described.
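
    A classic rule-based stand-in for such excitable-medium automata is the Greenberg-Hastings model sketched below (rest, excited, and refractory states on a small grid); it is illustrative only and far simpler than the quantitative whole-heart automaton described here.

```python
# Greenberg-Hastings excitable-medium cellular automaton on a toy grid.
# States: 0 = rest, 1 = excited, 2 = refractory.
import numpy as np

rng = np.random.default_rng(5)
grid = (rng.random((50, 50)) < 0.02).astype(int)   # sparse initial excitation

for _ in range(20):
    excited_nbr = sum(np.roll(np.roll(grid == 1, di, 0), dj, 1)
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      if (di, dj) != (0, 0))
    grid = np.where(grid == 1, 2,                  # excited -> refractory
           np.where(grid == 2, 0,                  # refractory -> rest
           np.where(excited_nbr > 0, 1, 0)))       # rest -> excited near a firing cell

print("excited cells after 20 steps:", int((grid == 1).sum()))
```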

  18. A method for extending stage-discharge relationships using a hydrodynamic model and quantifying the associated uncertainty

    Science.gov (United States)

    Shao, Quanxi; Dutta, Dushmanta; Karim, Fazlul; Petheram, Cuan

    2018-01-01

    Streamflow discharge is a fundamental dataset required to effectively manage water and land resources. However, developing robust stage-discharge relationships, called rating curves, from which streamflow discharge is derived, is time consuming and costly, particularly in remote areas and especially at high stage levels. As a result, stage-discharge relationships are often heavily extrapolated. Hydrodynamic (HD) models are physically based models used to simulate the flow of water along river channels and over adjacent floodplains. In this paper we demonstrate a method by which a HD model can be used to generate a 'synthetic' stage-discharge relationship at high stages. The method uses a both-side Box-Cox transformation to calibrate the synthetic rating curve such that the regression residuals are as close to the normal distribution as possible. By doing this both-side transformation, the statistical uncertainty in the synthetically derived stage-discharge relationship can be calculated. This enables decision-makers to determine whether the uncertainty in the synthetically generated rating curve at high stage levels is acceptable for their purposes. The proposed method is demonstrated at two streamflow gauging stations in north Queensland, Australia.
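
    The both-side transformation idea can be sketched as follows: Box-Cox transform both the HD-simulated discharges and the stages, fit a linear rating in transformed space, and map prediction intervals back through the inverse transform. The data and the fixed lambda below are synthetic; the paper estimates lambda so that residuals are close to normal.

```python
# Both-side Box-Cox rating-curve sketch on synthetic "HD model" data, with a
# rough 95% interval read back through the inverse transform.
import numpy as np

def boxcox(y, lam):
    return np.log(y) if lam == 0 else (y ** lam - 1) / lam

def inv_boxcox(z, lam):
    return np.exp(z) if lam == 0 else (lam * z + 1) ** (1 / lam)

rng = np.random.default_rng(6)
h = np.linspace(1.0, 8.0, 40)                     # stage (m), incl. high stages
Q = 12.0 * (h - 0.5) ** 1.7 * np.exp(rng.normal(0, 0.05, h.size))  # discharge

lam = 0.3                                         # fixed here; estimated in paper
coef = np.polyfit(boxcox(h, lam), boxcox(Q, lam), 1)
resid_sd = np.std(boxcox(Q, lam) - np.polyval(coef, boxcox(h, lam)))

z = np.polyval(coef, boxcox(9.0, lam))            # extrapolated high stage
print("Q estimate:", round(inv_boxcox(z, lam), 1),
      "95% interval:", round(inv_boxcox(z - 1.96 * resid_sd, lam), 1),
      round(inv_boxcox(z + 1.96 * resid_sd, lam), 1))
```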

  19. Modified Levenberg-Marquardt Method for RÖSSLER Chaotic System Fuzzy Modeling Training

    Science.gov (United States)

    Wang, Yu-Hui; Wu, Qing-Xian; Jiang, Chang-Sheng; Xue, Ya-Li; Fang, Wei

    Generally, fuzzy approximation models require some human knowledge and experience. The operator's experience is involved in the mathematics of fuzzy theory as a collection of heuristic rules. The main goal of this paper is to present a new method for identifying unknown nonlinear dynamics, such as the Rössler system, without any human knowledge. Instead of heuristic rules, the presented method uses input-output data pairs to identify the Rössler chaotic system. The training algorithm is a modified Levenberg-Marquardt (L-M) method, which can adjust the parameters of each linear polynomial and the fuzzy membership functions online, and does not rely excessively on experts' experience. Finally, it is applied to training a fuzzy identification of the Rössler chaotic system. Compared with the standard L-M method, the convergence speed is accelerated. The simulation results demonstrate the effectiveness of the proposed method.
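
    For a flavour of the identification task, the sketch below recovers the Rössler parameters (a, b, c) from a simulated trajectory with SciPy's Levenberg-Marquardt least-squares solver; this is the standard L-M, not the paper's modified algorithm, and the data are noise-free simulations.

```python
# Fit Rössler parameters to a simulated trajectory with standard L-M least
# squares (not the paper's modified L-M).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rossler(t, s, a, b, c):
    x, y, z = s
    return [-y - z, x + a * y, b + z * (x - c)]

t_eval = np.linspace(0, 10, 200)
truth = solve_ivp(rossler, (0, 10), [1.0, 1.0, 1.0], t_eval=t_eval,
                  args=(0.2, 0.2, 5.7), rtol=1e-8).y

def residuals(p):
    sim = solve_ivp(rossler, (0, 10), [1.0, 1.0, 1.0], t_eval=t_eval,
                    args=tuple(p), rtol=1e-8).y
    return (sim - truth).ravel()

fit = least_squares(residuals, x0=[0.3, 0.3, 5.0], method="lm")
print("estimated (a, b, c):", np.round(fit.x, 3))
```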

  20. A High-Resolution Terrestrial Modeling System (TMS): A Demonstration in China

    Science.gov (United States)

    Duan, Q.; Dai, Y.; Zheng, X.; Ye, A.; Ji, D.; Chen, Z.

    2013-12-01

    This presentation describes a terrestrial modeling system (TMS) developed at Beijing Normal University. The TMS is designed to be driven by multi-sensor meteorological and land surface observations, including those from satellites and land-based observing stations. The purposes of the TMS are (1) to provide a land surface parameterization scheme fully capable of being coupled with Earth system models; (2) to provide a standalone platform for retrospective simulation of historical land surface processes and for forecasting future ones at different space and time scales; and (3) to provide a platform for studying human-Earth system interactions and for understanding climate change impacts. This system builds on capabilities developed by several groups at BNU, including the Common Land Model (CoLM) system, high-resolution atmospheric forcing data sets, high-resolution land surface characteristics data sets, data assimilation and uncertainty analysis platforms, an ensemble prediction platform, and high-performance computing facilities. This presentation describes the system design and demonstrates the capabilities of the TMS with results from a China-wide application.

  1. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2013-01-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates.
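
    One ISEM-style iteration can be sketched as follows: estimate the sensitivity matrix from an ensemble of directional derivatives, regularize its inverse by truncated SVD, and take a (here damped) Gauss-Newton step. The black-box "simulator" g and all dimensions below are toy placeholders.

```python
# Schematic ISEM-style loop on a toy black-box model: ensemble directional
# derivatives -> sensitivity estimate -> truncated-SVD Gauss-Newton step.
import numpy as np

rng = np.random.default_rng(7)
M = rng.normal(size=(8, 5))
g = lambda m: np.tanh(M @ m)                      # black-box "simulator"
m_true = 0.5 * rng.normal(size=5)
d_obs = g(m_true)

m = np.zeros(5)
for _ in range(50):
    E = 1e-3 * rng.normal(size=(5, 10))           # ensemble of perturbations
    D = np.stack([g(m + e) - g(m) for e in E.T], axis=1)   # directional derivatives
    S = D @ np.linalg.pinv(E)                     # stochastic sensitivity estimate
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    keep = s > 0.01 * s[0]                        # truncated SVD regularization
    step = Vt[keep].T @ ((U[:, keep].T @ (d_obs - g(m))) / s[keep])
    m = m + 0.5 * step                            # damped Gauss-Newton update

print("residual norm:", np.linalg.norm(d_obs - g(m)))
```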

  2. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.

  3. Statistical methods for mechanistic model validation: Salt Repository Project

    International Nuclear Information System (INIS)

    Eggett, D.L.

    1988-07-01

    As part of the Department of Energy's Salt Repository Program, Pacific Northwest Laboratory (PNL) is studying the emplacement of nuclear waste containers in a salt repository. One objective of the SRP program is to develop an overall waste package component model which adequately describes such phenomena as container corrosion, waste form leaching, spent fuel degradation, etc., which are possible in the salt repository environment. The form of this model will be proposed, based on scientific principles and relevant salt repository conditions with supporting data. The model will be used to predict the future characteristics of the near field environment. This involves several different submodels such as the amount of time it takes a brine solution to contact a canister in the repository, how long it takes a canister to corrode and expose its contents to the brine, the leach rate of the contents of the canister, etc. These submodels are often tested in a laboratory and should be statistically validated (in this context, validate means to demonstrate that the model adequately describes the data) before they can be incorporated into the waste package component model. This report describes statistical methods for validating these models. 13 refs., 1 fig., 3 tabs

  4. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models

    Energy Technology Data Exchange (ETDEWEB)

    Frew, Bethany A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-03

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve load over many years or decades. CEMs can be computationally complex and are often forced to estimate key parameters using simplified methods to achieve acceptable solve times or for other reasons. In this paper, we discuss one of these parameters -- capacity value (CV). We first provide a high-level motivation for and overview of CV. We next describe existing modeling simplifications and an alternate approach for estimating CV that utilizes hourly '8760' data of load and VG resources. We then apply this 8760 method to an established CEM, the National Renewable Energy Laboratory's (NREL's) Regional Energy Deployment System (ReEDS) model (Eurek et al. 2016). While this alternative approach for CV is not itself novel, it contributes to the broader CEM community by (1) demonstrating how a simplified 8760 hourly method, which can be easily implemented in other power sector models when data is available, more accurately captures CV trends than a statistical method within the ReEDS CEM, and (2) providing a flexible modeling framework from which other 8760-based system elements (e.g., demand response, storage, and transmission) can be added to further capture important dynamic interactions, such as curtailment.
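
    The essence of the 8760 approach fits in a few lines: take coincident hourly load and VG profiles, select the top load (or net-load) hours, and report the VG fleet's average output in those hours as a fraction of nameplate. The profiles and the choice of 100 hours below are synthetic stand-ins.

```python
# 8760-based capacity value: average wind output during the top load hours,
# expressed as a fraction of nameplate capacity. Profiles are synthetic.
import numpy as np

rng = np.random.default_rng(8)
hours = np.arange(8760)
load = 70 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, 8760)   # GW
wind = np.clip(10 + 5 * np.sin(2 * np.pi * hours / 8760)
               + rng.normal(0, 3, 8760), 0, None)                          # GW
wind_cap = 20.0                                                            # GW

top = np.argsort(load)[-100:]                     # 100 highest-load hours
cv = wind[top].mean() / wind_cap                  # capacity credit fraction
print(f"capacity value: {cv:.1%} of nameplate")
```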

  5. Strategy Guideline: Demonstration Home

    Energy Technology Data Exchange (ETDEWEB)

    Savage, C.; Hunt, A.

    2012-12-01

    This guideline will provide a general overview of the different kinds of demonstration home projects, a basic understanding of the different roles and responsibilities involved in the successful completion of a demonstration home, and an introduction into some of the lessons learned from actual demonstration home projects. Also, this guideline will specifically look at the communication methods employed during demonstration home projects. And lastly, we will focus on how to best create a communication plan for including an energy efficient message in a demonstration home project and carry that message to successful completion.

  6. Strategy Guideline. Demonstration Home

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, A.; Savage, C.

    2012-12-01

    This guideline will provide a general overview of the different kinds of demonstration home projects, a basic understanding of the different roles and responsibilities involved in the successful completion of a demonstration home, and an introduction into some of the lessons learned from actual demonstration home projects. Also, this guideline will specifically look at the communication methods employed during demonstration home projects. And lastly, we will focus on how to best create a communication plan for including an energy efficient message in a demonstration home project and carry that message to successful completion.

  7. FE Model Updating on an In-Service Self-Anchored Suspension Bridge with Extra-Width Using Hybrid Method

    Directory of Open Access Journals (Sweden)

    Zhiyuan Xia

    2017-02-01

    Full Text Available Nowadays, more and more extra-wide bridges are needed for vehicle throughput. In order to obtain a precise finite element (FE) model of such complex bridge structures, a practical hybrid updating method integrating Gaussian mutation particle swarm optimization (GMPSO), a Kriging meta-model, and Latin hypercube sampling (LHS) is proposed. After demonstrating the efficiency and accuracy of the hybrid method through the model updating of a damaged simply supported beam, the proposed method was applied to the model updating of an extra-wide self-anchored suspension bridge, for which the results of an ambient vibration test showed the updating to be necessary. The results of the bridge model updating showed that both the modal frequencies and the mode shapes of the updated model agreed well with the experimental structure. The successful model updating of this bridge fills a gap in the model updating of complex self-anchored suspension bridges. Moreover, the updating process enables other model updating efforts for complex bridge structures.
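
    The LHS ingredient can be sketched with SciPy's quasi-Monte Carlo module, as below: stratified samples over plausible ranges of two updating parameters, which would then feed the Kriging surrogate. The parameter names and ranges are illustrative assumptions.

```python
# Latin hypercube samples over two (illustrative) updating parameters:
# an elastic modulus and a mass scaling factor.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=9)
unit = sampler.random(n=20)                       # 20 points in [0, 1)^2
samples = qmc.scale(unit, l_bounds=[180e9, 0.8], u_bounds=[220e9, 1.2])
print(samples[:3])                                # (E modulus Pa, mass factor)
```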

  8. COGNITIVE MODELING AS A METHOD OF QUALITATIVE ANALYSIS OF IT PROJECTS

    Directory of Open Access Journals (Sweden)

    Інна Ігорівна ОНИЩЕНКО

    2016-03-01

    Full Text Available Using the example of a project implementing an automated CRM system, the possibility and features of cognitive modeling in the qualitative analysis of project risks are demonstrated, with the aim of determining additional risk characteristics. The construction of cognitive models of project risks in information technology is proposed within qualitative risk analysis, with additional assessments as a method of ranking risks that characterizes the relationships between them. The proposed cognitive model reflects the relationships between the risks of an IT project, in order to assess the negative and positive impacts of particular risks on the remaining risks of implementing the automated CRM system. The fact that one risk can influence other project risks can increase the priority of a risk with low direct impact on results, owing to its relationships with other project risks.

  9. Object Oriented Modeling : A method for combining model and software development

    NARCIS (Netherlands)

    Van Lelyveld, W.

    2010-01-01

    When requirements for a new model cannot be met by available modeling software, new software can be developed for a specific model. Methods for the development of both model and software exist, but a method for combined development has not been found. A compatible way of thinking is required to

  10. Computer programs of information processing of nuclear physical methods as a demonstration material in studying nuclear physics and numerical methods

    Science.gov (United States)

    Bateev, A. B.; Filippov, V. P.

    2017-01-01

    The article shows that a computer program such as Univem MS, used for Mössbauer spectrum fitting, can in principle serve as demonstration material when students study disciplines such as atomic and nuclear physics and numerical methods. This program deals with nuclear-physical parameters such as the isomer (or chemical) shift of a nuclear energy level, the interaction of the nuclear quadrupole moment with an electric field, and that of the magnetic moment with the surrounding magnetic field. The basic processing algorithm in such programs is the Least Squares Method. The deviation of the experimental points of a spectrum from the theoretical dependence is determined in concrete examples; in numerical methods this quantity is characterized as the mean square deviation. The shapes of the theoretical lines in the program are defined by Gaussian and Lorentzian distributions. The visualization of the studied material on atomic and nuclear physics can be improved by similar programs for Mössbauer spectroscopy, X-ray fluorescence analysis, or X-ray diffraction analysis.
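
    In the same spirit, the sketch below fits a Lorentzian doublet to a synthetic transmission spectrum by the least squares method, the combination of line shape and estimator that such programs demonstrate; the velocities, widths, and depths are made-up numbers, and the code is not Univem MS.

```python
# Least-squares fit of a Lorentzian doublet to a synthetic Mössbauer-style
# transmission spectrum; all parameters are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def doublet(v, base, depth, shift, split, width):
    lor = lambda c: depth / (1 + ((v - c) / (width / 2)) ** 2)
    return base - lor(shift - split / 2) - lor(shift + split / 2)

v = np.linspace(-4, 4, 256)                       # source velocity, mm/s
rng = np.random.default_rng(10)
counts = doublet(v, 1.0, 0.15, 0.3, 1.0, 0.25) + rng.normal(0, 0.005, v.size)

popt, _ = curve_fit(doublet, v, counts, p0=[1.0, 0.1, 0.0, 0.8, 0.3])
print("isomer shift %.3f mm/s, quadrupole splitting %.3f mm/s" % (popt[2], popt[3]))
```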

  11. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  12. An application of multigrid methods for a discrete elastic model for epitaxial systems

    International Nuclear Information System (INIS)

    Caflisch, R.E.; Lee, Y.-J.; Shu, S.; Xiao, Y.-X.; Xu, J.

    2006-01-01

    We apply an efficient and fast algorithm to simulate the atomistic strain model for epitaxial systems, recently introduced by Schindler et al. [Phys. Rev. B 67, 075316 (2003)]. The discrete effects in this lattice statics model are crucial for proper simulation of the influence of strain for thin film epitaxial growth, but the size of the atomistic systems of interest is in general quite large and hence the solution of the discrete elastic equations is a considerable numerical challenge. In this paper, we construct an algebraic multigrid method suitable for efficient solution of the large scale discrete strain model. Using this method, simulations are performed for several representative physical problems, including an infinite periodic step train, a layered nanocrystal, and a system of quantum dots. The results demonstrate the effectiveness and robustness of the method and show that the method attains optimal convergence properties, regardless of the problem size, the geometry and the physical parameters. The effects of substrate depth and of invariance due to traction-free boundary conditions are assessed. For a system of quantum dots, the simulated strain energy density supports the observations that trench formation near the dots provides strain relief

  13. A practical implicit finite-difference method: examples from seismic modelling

    International Nuclear Information System (INIS)

    Liu, Yang; Sen, Mrinal K

    2009-01-01

    We derive explicit and new implicit finite-difference formulae for derivatives of arbitrary order with any order of accuracy by the plane wave theory where the finite-difference coefficients are obtained from the Taylor series expansion. The implicit finite-difference formulae are derived from fractional expansion of derivatives which form tridiagonal matrix equations. Our results demonstrate that the accuracy of a (2N + 2)th-order implicit formula is nearly equivalent to that of a (6N + 2)th-order explicit formula for the first-order derivative, and (2N + 2)th-order implicit formula is nearly equivalent to (4N + 2)th-order explicit formula for the second-order derivative. In general, an implicit method is computationally more expensive than an explicit method, due to the requirement of solving large matrix equations. However, the new implicit method only involves solving tridiagonal matrix equations, which is fairly inexpensive. Furthermore, taking advantage of the fact that many repeated calculations of derivatives are performed by the same difference formula, several parts can be precomputed resulting in a fast algorithm. We further demonstrate that a (2N + 2)th-order implicit formulation requires nearly the same memory and computation as a (2N + 4)th-order explicit formulation but attains the accuracy achieved by a (6N + 2)th-order explicit formulation for the first-order derivative and that of a (4N + 2)th-order explicit method for the second-order derivative when additional cost of visiting arrays is not considered. This means that a high-order explicit method may be replaced by an implicit method of the same order resulting in a much improved performance. Our analysis of efficiency and numerical modelling results for acoustic and elastic wave propagation validates the effectiveness and practicality of the implicit finite-difference method
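
    A concrete instance is the classical fourth-order compact (Padé) scheme for the first derivative, f'(i-1) + 4 f'(i) + f'(i+1) = 3 (f(i+1) - f(i-1)) / h, which requires only the tridiagonal solve the paper calls fairly inexpensive. The sketch below applies it to sin(x), with simple one-sided boundary closures as an illustrative simplification.

```python
# Fourth-order compact (Padé) first derivative of sin(x) via a single
# tridiagonal solve; boundary rows use first-order one-sided formulas.
import numpy as np
from scipy.linalg import solve_banded

n = 100
h = 2 * np.pi / n
x = np.arange(n) * h
f = np.sin(x)

ab = np.zeros((3, n))                             # banded (tridiagonal) storage
ab[0, 1:], ab[1, :], ab[2, :-1] = 1.0, 4.0, 1.0
rhs = np.zeros(n)
rhs[1:-1] = 3.0 * (f[2:] - f[:-2]) / h
ab[1, 0] = ab[1, -1] = 1.0                        # simple boundary closure
ab[0, 1] = ab[2, -2] = 0.0
rhs[0] = (f[1] - f[0]) / h
rhs[-1] = (f[-1] - f[-2]) / h

df = solve_banded((1, 1), ab, rhs)
print("max error vs cos(x):", np.abs(df - np.cos(x)).max())
```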

  14. Multivariate analysis: models and method

    International Nuclear Information System (INIS)

    Sanz Perucha, J.

    1990-01-01

    Data treatment techniques are increasingly used as computer methods become more widely accessible. Multivariate analysis consists of a group of statistical methods that are applied to study objects or samples characterized by multiple values. The final goal is decision making. The paper describes the models and methods of multivariate analysis.

  15. Comparison of marine spatial planning methods in Madagascar demonstrates value of alternative approaches.

    Directory of Open Access Journals (Sweden)

    Thomas F Allnutt

    Full Text Available The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method were drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value). The first method compares visual color classifications of primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method, and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost, and persistence probability. All results included large areas in the northern, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss of the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced the Marxan targets to 16.3%, matching the representation level of the "strict protection" class of the categorical result, the methods showed similar catch losses. The management category portfolio has complete coverage and presents several management recommendations, including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets and minimize exposure probabilities for conservation features at low economic cost. We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative approaches.

  16. Buried Waste Integrated Demonstration stakeholder involvement model

    International Nuclear Information System (INIS)

    Kaupanger, R.M.; Kostelnik, K.M.; Milam, L.M.

    1994-04-01

    The Buried Waste Integrated Demonstration (BWID) is a program funded by the US Department of Energy (DOE) Office of Technology Development. BWID supports the applied research, development, demonstration, and evaluation of a suite of advanced technologies that together form a comprehensive remediation system for the effective and efficient remediation of buried waste. Stakeholder participation in the DOE Environmental Management decision-making process is critical to remediation efforts. Appropriate mechanisms for communication with the public, private sector, regulators, elected officials, and others are being aggressively pursued by BWID to permit informed participation. This document summarizes public outreach efforts during FY-93 and presents a strategy for expanded stakeholder involvement during FY-94

  17. Analysis of the trajectory surface hopping method from the Markov state model perspective

    International Nuclear Information System (INIS)

    Akimov, Alexey V.; Wang, Linjun; Prezhdo, Oleg V.; Trivedi, Dhara

    2015-01-01

    We analyze the applicability of the seminal fewest switches surface hopping (FSSH) method of Tully to modeling quantum transitions between electronic states that are not coupled directly, in the processes such as Auger recombination. We address the known deficiency of the method to describe such transitions by introducing an alternative definition for the surface hopping probabilities, as derived from the Markov state model perspective. We show that the resulting transition probabilities simplify to the quantum state populations derived from the time-dependent Schrödinger equation, reducing to the rapidly switching surface hopping approach of Tully and Preston. The resulting surface hopping scheme is simple and appeals to the fundamentals of quantum mechanics. The computational approach is similar to the FSSH method of Tully, yet it leads to a notably different performance. We demonstrate that the method is particularly accurate when applied to superexchange modeling. We further show improved accuracy of the method, when applied to one of the standard test problems. Finally, we adapt the derived scheme to atomistic simulation, combine it with the time-domain density functional theory, and show that it provides the Auger energy transfer timescales which are in good agreement with experiment, significantly improving upon other considered techniques. (author)

  18. Flexible-Rotor Balancing Demonstration

    Science.gov (United States)

    Giordano, J.; Zorzi, E.

    1986-01-01

    Report describes method for balancing high-speed rotors at relatively low speeds and discusses demonstration of method on laboratory test rig. Method ensures rotor brought up to speeds well over 20,000 r/min smoothly, without excessive vibration amplitude at critical speeds or at operating speed.

  19. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    Science.gov (United States)

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to fit models to each species separately and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions to outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method with alternative methods using Euclidean distance between coefficient vectors and with methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
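
    The pipeline can be sketched as below: pairwise Mahalanobis distances between species' fitted coefficient vectors (here with a single pooled covariance rather than the paper's generalized, model-based covariance), followed by hierarchical clustering into candidate groups. The coefficients are random placeholders for fitted logistic-regression coefficients.

```python
# Mahalanobis distances between species' habitat-model coefficient vectors,
# then hierarchical clustering into candidate species groups.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(11)
coefs = rng.normal(size=(12, 4))                  # 12 species x 4 habitat terms
VI = np.linalg.inv(np.cov(coefs, rowvar=False))   # inverse pooled covariance

D = pdist(coefs, metric="mahalanobis", VI=VI)     # condensed distance matrix
groups = fcluster(linkage(D, method="average"), t=3, criterion="maxclust")
print("species group labels:", groups)
```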

  20. Tested Demonstrations.

    Science.gov (United States)

    Gilbert, George L., Ed.

    1987-01-01

    Describes two demonstrations to illustrate characteristics of substances. Outlines a method to detect the changes in pH levels during the electrolysis of water. Uses water pistols, one filled with methane gas and the other filled with water, to illustrate the differences in these two substances. (TW)

  1. Utility of a mouse model of osteoarthritis to demonstrate cartilage protection by IFNγ-primed equine mesenchymal stem cells

    Directory of Open Access Journals (Sweden)

    Marie Maumus

    2016-09-01

    Full Text Available Objective. Mesenchymal stem cells isolated from adipose tissue (ASC) have been shown to influence the course of osteoarthritis (OA) in different animal models and are promising in veterinary medicine for horses involved in competitive sport. The aim of this study was to characterize equine ASCs (eASC) and investigate the role of interferon-gamma (IFNγ) priming on their therapeutic effect in a murine model of OA, which could be relevant to equine OA. Methods. ASC were isolated from subcutaneous fat. Expression of specific markers was tested by cytometry and RT-qPCR. Differentiation potential was evaluated by histology and RT-qPCR. For functional assays, naïve or IFNγ-primed eASCs were cocultured with PBMC or articular cartilage explants. Finally, the therapeutic effect of eASCs was tested in the model of collagenase-induced OA in mice (CIOA). Results. The immunosuppressive function of eASCs on equine T cell proliferation and their chondroprotective effect on equine cartilage explants were demonstrated in vitro. Both cartilage degradation and T cell activation were reduced by naïve and IFNγ-primed eASCs, but IFNγ priming enhanced these functions. In CIOA, intra-articular injection of eASCs protected articular cartilage from degradation, and IFNγ-primed eASCs were more potent than naïve cells. This effect was related to the modulation of the eASC secretome by IFNγ priming. Conclusion. IFNγ priming of eASCs potentiated their antiproliferative and chondroprotective functions. We demonstrated that the immunocompetent mouse model of CIOA was relevant to test the therapeutic efficacy of xenogeneic eASCs for OA and confirmed that IFNγ-primed eASCs may have a therapeutic value for musculoskeletal diseases in veterinary medicine.

  2. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    Full Text Available Abstract The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The Current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  3. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.; Dean, D.J.; Langanke, K.

    1997-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)
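
    The reduction described above rests on the Hubbard-Stratonovich identity, which rewrites the exponential of a squared one-body operator as a Gaussian average over one-body propagators. The sketch below is only a toy numerical check of that identity for a small symmetric matrix; the matrix, coupling, and sample count are arbitrary choices, and real SMMC calculations involve many time slices and auxiliary fields.

        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(0)

        # Toy "two-body" factor exp(lam/2 * A^2) for a symmetric one-body matrix A
        A = rng.standard_normal((4, 4))
        A = 0.5 * (A + A.T)
        lam = 0.3

        exact = expm(0.5 * lam * A @ A)

        # Hubbard-Stratonovich: exp(lam*A^2/2) = E_sigma[exp(sqrt(lam)*sigma*A)],
        # with sigma ~ N(0,1); estimate the average by plain Monte Carlo sampling.
        n_samples = 20_000
        est = np.zeros_like(exact)
        for _ in range(n_samples):
            sigma = rng.standard_normal()
            est += expm(np.sqrt(lam) * sigma * A)
        est /= n_samples

        # Statistical error shrinks like 1/sqrt(n_samples)
        print("max abs error:", np.abs(est - exact).max())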

  4. A pilot Virtual Observatory (pVO) for integrated catchment science - Demonstration of national scale modelling of hydrology and biogeochemistry (Invited)

    Science.gov (United States)

    Freer, J. E.; Bloomfield, J. P.; Johnes, P. J.; MacLeod, C.; Reaney, S.

    2010-12-01

    There are many challenges in developing effective and integrated catchment management solutions for hydrology and water quality issues. Such solutions should ideally build on current scientific evidence to inform policy makers and regulators and additionally allow stakeholders to take ownership of local and/or national issues, in effect bringing together ‘communities of practice’. A strategy being piloted in the UK as the Pilot Virtual Observatory (pVO), funded by NERC, is to demonstrate the use of cyber-infrastructure and cloud computing resources to investigate better methods of linking data and models and to demonstrate scenario analysis for research, policy and operational needs. The research will provide new ways in which the scientific and stakeholder communities can come together to exploit current environmental information, knowledge and experience in an open framework. This poster presents the project scope and methodologies for the pVO work dealing with national modelling of hydrology and macro-nutrient biogeochemistry. We evaluate the strategies needed to robustly benchmark our current predictive capability for these resources through ensemble modelling. We explore the use of catchment similarity concepts to understand whether national monitoring programs can inform us about the behaviour of catchments. We discuss the challenges of applying these strategies in an open-access and integrated framework, and finally we consider the future of such virtual observatory platforms for iteratively improving our understanding of catchment science.

  5. Rapid Energy Modeling Workflow Demonstration Project

    Science.gov (United States)

    2014-01-01

    App FormIt for conceptual modeling, with further refinement available in Revit or Vasari. Modeling can also be done in Revit (detailed and conceptual) ... referenced building model while in the field. Autodesk® Revit is a BIM software application with integrated energy and carbon analyses driven by Green ... FormIt, Revit and Vasari, and (3) comparative analysis. The energy results of these building analyses are represented as annual energy use for natural ...

  6. A comparison of methods for demonstrating artificial bone lesions; conventional versus computer tomography

    International Nuclear Information System (INIS)

    Heller, M.; Wenk, M.; Jend, H.H.

    1984-01-01

    Conventional tomography (T) and computer tomography (CT) were used for examining 97 artificial bone lesions at various sites. The purpose of the study was to determine how far CT can replace T in the diagnosis of skeletal abnormalities. The results showed that modern CT, particularly in its high-resolution form, equals T and provides additional information (the substrate of a lesion, its relationship to neighbouring tissues, simultaneous demonstration of soft tissue, etc.) that cannot be obtained with T. It follows that CT is indicated as the primary method of examination for lesions of the facial skeleton, skull base, spine, pelvis and, to some extent, the extremities. (orig.) [de]

  7. Modelling Plane Geometry: the connection between Geometrical Visualization and Algebraic Demonstration

    Science.gov (United States)

    Pereira, L. R.; Jardim, D. F.; da Silva, J. M.

    2017-12-01

    The teaching and learning of Mathematics content has been challenging throughout the history of education, both for the teacher, in the dedicated task of teaching, and for the student, in the arduous and constant task of learning. One of the most discussed topics in this content is the difference between the concepts of proof and demonstration. This work presents an interesting discussion of these concepts, considering the use of the mathematical modeling approach for teaching, applied to some examples developed in the classroom with a group of students enrolled in the Geometry course of the Mathematics program at UFVJM.

  8. Nonuniform grid implicit spatial finite difference method for acoustic wave modeling in tilted transversely isotropic media

    KAUST Repository

    Chu, Chunlei

    2012-01-01

    Discrete earth models are commonly represented by uniform structured grids. In order to ensure accurate numerical description of all wave components propagating through these uniform grids, the grid size must be determined by the slowest velocity of the entire model. Consequently, high velocity areas are always oversampled, which inevitably increases the computational cost. A practical solution to this problem is to use nonuniform grids. We propose a nonuniform grid implicit spatial finite difference method which utilizes nonuniform grids to obtain high efficiency and relies on implicit operators to achieve high accuracy. We present a simple way of deriving implicit finite difference operators of arbitrary stencil widths on general nonuniform grids for the first and second derivatives and, as a demonstration example, apply these operators to the pseudo-acoustic wave equation in tilted transversely isotropic (TTI) media. We propose an efficient gridding algorithm that can be used to convert uniformly sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced efficiency, compared to uniform grid explicit finite difference implementations. © 2011 Elsevier B.V.
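
    The Taylor-expansion machinery behind such operators can be illustrated by deriving explicit finite difference weights on an arbitrary nonuniform stencil. This is a sketch of the underlying idea only; the paper's implicit operators additionally couple the derivative values and are derived differently.

        import numpy as np
        from math import factorial

        def fd_weights(x, x0, m):
            """Weights w such that sum_j w[j]*f(x[j]) approximates the m-th
            derivative of f at x0, for an arbitrary (nonuniform) stencil x."""
            n = len(x)
            d = np.asarray(x, dtype=float) - x0
            # Taylor conditions: sum_j w_j d_j^k = k! * delta_{k,m}, k = 0..n-1
            M = np.vander(d, n, increasing=True).T   # M[k, j] = d_j**k
            rhs = np.zeros(n)
            rhs[m] = factorial(m)
            return np.linalg.solve(M, rhs)

        # Second derivative on a nonuniform 5-point stencil around x0 = 0.3
        x = np.array([0.0, 0.2, 0.35, 0.55, 0.9])
        w = fd_weights(x, 0.3, 2)
        print("approx f''(0.3):", w @ np.sin(x), " exact:", -np.sin(0.3))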

  9. New modeling method for the dielectric relaxation of a DRAM cell capacitor

    Science.gov (United States)

    Choi, Sujin; Sun, Wookyung; Shin, Hyungsoon

    2018-02-01

    This study proposes a new method for automatically synthesizing the equivalent circuit of the dielectric relaxation (DR) characteristic in dynamic random access memory (DRAM) without frequency-dependent capacitance measurements. Charge loss due to DR can be observed as a voltage drop at the storage node, and this phenomenon can be analyzed with an equivalent circuit. The Havriliak-Negami model is used to accurately determine the electrical characteristic parameters of the equivalent circuit. The DRAM sensing operation is performed in HSPICE simulations to verify this new method. The simulation demonstrates that the storage node voltage drop resulting from DR, and the resulting reduction in the sensing voltage margin, which has a critical impact on DRAM read operation, can be accurately estimated using this new method.
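
    For reference, the Havriliak-Negami model parameterizes dielectric relaxation as ε*(ω) = ε∞ + Δε/(1 + (iωτ)^α)^β, which reduces to the Debye model for α = β = 1. The sketch below evaluates this function for illustrative (not measured) parameter values.

        import numpy as np

        def havriliak_negami(omega, eps_inf, d_eps, tau, alpha, beta):
            """Complex permittivity eps*(w) = eps_inf + d_eps/(1 + (i w tau)^alpha)^beta."""
            return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** alpha) ** beta

        # Illustrative (not measured) parameter values
        omega = np.logspace(2, 8, 7)                    # angular frequency (rad/s)
        eps = havriliak_negami(omega, eps_inf=3.5, d_eps=15.0,
                               tau=1e-5, alpha=0.8, beta=0.6)
        for w, e in zip(omega, eps):
            # Convention eps* = eps' - i*eps'', so the loss term is -Im(eps*)
            print(f"w = {w:9.1e}  eps' = {e.real:7.3f}  eps'' = {-e.imag:7.3f}")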

  10. A Method for Improving Hotspot Directional Signatures in BRDF Models Used for MODIS

    Science.gov (United States)

    Jiao, Ziti; Schaaf, Crystal B.; Dong, Yadong; Roman, Miguel; Hill, Michael J.; Chen, Jing M.; Wang, Zhuosen; Zhang, Hu; Saenz, Edward; Poudyal, Rajesh; et al.

    2016-01-01

    The semi-empirical, kernel-driven, linear RossThick-LiSparseReciprocal (RTLSR) Bidirectional Reflectance Distribution Function (BRDF) model is used to generate the routine MODIS BRDF/Albedo product due to its global applicability and underlying physics. A challenge for this model in regard to surface reflectance anisotropy effects comes from its underestimation of the directional reflectance signatures near the Sun illumination direction, also known as the hotspot effect. In this study, a method has been developed for improving the ability of the RTLSR model to simulate the magnitude and width of the hotspot effect. The method corrects the volumetric scattering component of the RTLSR model using an exponential approximation of a physical hotspot kernel, which recreates the hotspot magnitude and width using two free parameters (C1 and C2, respectively). The approach allows one to reconstruct, with reasonable accuracy, the hotspot effect by adjusting or using prior values of these two hotspot variables. Our results demonstrate that: (1) significant improvements in capturing the hotspot effect can be made with this method by using the inverted hotspot parameters; (2) its reciprocal nature allows this method to adaptively simulate the hotspot height and width with high accuracy, especially in cases where hotspot signatures are available; and (3) while the new approach is consistent with the heritage RTLSR model inversion used to estimate intrinsic narrowband and broadband albedos, it presents some differences for vegetation clumping index (CI) retrievals. With the hotspot-related model parameters determined a priori, this method offers improved performance for various ecological remote sensing applications, including the estimation of canopy structure parameters.
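
    The sketch below shows the standard RossThick volumetric kernel with a multiplicative exponential hotspot factor of the form (1 + C1·exp(-ξ/C2)), where ξ is the phase angle. Note that this factor is an assumption based on the abstract's description (C1 controlling magnitude, C2 controlling width); the paper's exact formulation may differ, and all parameter values here are illustrative.

        import numpy as np

        def ross_thick(theta_i, theta_v, phi, c1=0.0, c2=np.radians(15.0)):
            """RossThick volumetric kernel, optionally scaled by an assumed
            exponential hotspot factor (1 + c1*exp(-xi/c2))."""
            cos_xi = (np.cos(theta_i) * np.cos(theta_v)
                      + np.sin(theta_i) * np.sin(theta_v) * np.cos(phi))
            xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))   # phase angle
            k = (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
                 / (np.cos(theta_i) + np.cos(theta_v)) - np.pi / 4)
            return k * (1.0 + c1 * np.exp(-xi / c2))

        # Hotspot geometry: view direction approaching the sun direction (phi = 0)
        ti = np.radians(30.0)
        for tv_deg in (25.0, 29.0, 30.0):
            tv = np.radians(tv_deg)
            print(tv_deg, ross_thick(ti, tv, 0.0), ross_thick(ti, tv, 0.0, c1=0.6))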

  11. An Equivalent Source Method for Modelling the Global Lithospheric Magnetic Field

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    2014-01-01

    We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010 when...... are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid. The corresponding source values are estimated using an iteratively reweighted least squares algorithm...... in the CHAOS-4 and MF7 models using more conventional spherical harmonic based approaches. Advantages of the equivalent source method include its local nature, allowing e.g. for regional grid refinement, and the ease of transforming to spherical harmonics when needed. Future applications will make use of Swarm...
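
    The iteratively reweighted least squares step named above can be sketched generically as follows; the robust (approximate L1) weighting, the toy matrices, and the iteration settings are assumptions for illustration, not the authors' exact scheme.

        import numpy as np

        def irls(G, d, n_iter=20, eps=1e-4):
            """Iteratively reweighted least squares for G m ~ d with an
            approximately L1 (robust) misfit; a generic sketch only."""
            m = np.linalg.lstsq(G, d, rcond=None)[0]     # ordinary L2 start
            for _ in range(n_iter):
                r = d - G @ m
                w = 1.0 / np.sqrt(r**2 + eps**2)         # downweight large residuals
                sw = np.sqrt(w)
                m = np.linalg.lstsq(sw[:, None] * G, sw * d, rcond=None)[0]
            return m

        rng = np.random.default_rng(1)
        G = rng.standard_normal((200, 5))
        m_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
        d = G @ m_true + 0.01 * rng.standard_normal(200)
        d[::25] += 5.0                                   # gross outliers
        print(np.round(irls(G, d), 3))                   # close to m_true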

  12. Implementation of a Sage-Based Stirling Model Into a System-Level Numerical Model of the Fission Power System Technology Demonstration Unit

    Science.gov (United States)

    Briggs, Maxwell H.

    2011-01-01

    The Fission Power System (FPS) project is developing a Technology Demonstration Unit (TDU) to verify the performance and functionality of a subscale version of the FPS reference concept in a relevant environment, and to verify component and system models. As hardware is developed for the TDU, component and system models must be refined to include the details of specific component designs. This paper describes the development of a Sage-based pseudo-steady-state Stirling convertor model and its implementation into a system-level model of the TDU.

  13. Modelling, Construction, and Testing of a Simple HTS Machine Demonstrator

    DEFF Research Database (Denmark)

    Jensen, Bogi Bech; Abrahamsen, Asger Bech

    2011-01-01

    This paper describes the construction, modeling and experimental testing of a high temperature superconducting (HTS) machine prototype employing second generation (2G) coated conductors in the field winding. The prototype is constructed in a simple way, with the purpose of having an inexpensive way...... of validating finite element (FE) simulations and gaining a better understanding of HTS machines. 3D FE simulations of the machine are compared to measured current vs. voltage (IV) curves for the tape on its own. It is validated that this method can be used to predict the critical current of the HTS tape...... installed in the machine. The measured torque as a function of rotor position is also reproduced by the 3D FE model....

  14. Modelling Geomechanical Heterogeneity of Rock Masses Using Direct and Indirect Geostatistical Conditional Simulation Methods

    Science.gov (United States)

    Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald

    2017-12-01

    An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as Kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for spatial variability of rock mass geomechanical properties using geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of uncertainties in spatial variability of rock mass properties in different areas of the pit.
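
    A minimal 1D sequential Gaussian simulation, the engine behind the conditional simulation used above, can be sketched as follows: nodes are visited along a random path, each is assigned a value drawn from the simple-kriging distribution conditioned on data and previously simulated nodes, and the draw is added to the conditioning set. The data values, grid, and variogram parameters below are invented.

        import numpy as np

        def sgs_1d(x_grid, x_data, z_data, sill=1.0, corr_len=50.0, seed=0):
            """Minimal 1D sequential Gaussian simulation with simple kriging
            (zero mean, exponential covariance); an illustrative sketch only."""
            cov = lambda h: sill * np.exp(-np.abs(h) / corr_len)
            rng = np.random.default_rng(seed)
            xs, zs = list(x_data), list(z_data)
            z_sim = np.full(len(x_grid), np.nan)
            for i in rng.permutation(len(x_grid)):         # random path
                xc, zc = np.array(xs), np.array(zs)
                C = cov(xc[:, None] - xc[None, :])         # data-to-data covariance
                C[np.diag_indices_from(C)] += 1e-8         # tiny nugget for stability
                c0 = cov(xc - x_grid[i])                   # data-to-target covariance
                w = np.linalg.solve(C, c0)                 # simple-kriging weights
                mu, var = w @ zc, max(sill - w @ c0, 0.0)
                z_sim[i] = rng.normal(mu, np.sqrt(var))
                xs.append(x_grid[i]); zs.append(z_sim[i])  # condition on the draw
            return z_sim

        x_data = np.array([10.0, 60.0, 120.0])
        z_data = np.array([0.8, -0.5, 1.2])   # e.g. normal-score rock mass ratings
        print(np.round(sgs_1d(np.linspace(3, 147, 16), x_data, z_data), 2))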

  15. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    Science.gov (United States)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

    The phase-field method coupled with thermodynamic data has become a trend for predicting microstructure formation in technical alloys. Nevertheless, frequent access to the thermodynamic database and calculation of local equilibrium conditions can be time-intensive. Extrapolation methods, which are derived from Taylor expansions, can provide approximate results with high computational efficiency and have proven successful in applications. This paper presents a high-precision second-order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods for solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its superior accuracy. The developed second-order extrapolation method, along with the M-slope approach and the first-order extrapolation method, is applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, demonstrating the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computations, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to a large-scale simulation of multi-dendrite growth in an isothermal cross-section demonstrates the ability of the developed GPU-accelerated second-order extrapolation approach for multiphase-field models.
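
    The idea of replacing repeated expensive evaluations with Taylor extrapolation from a stored reference state can be illustrated generically. Here np.cosh merely stands in for a costly thermodynamic driving-force call; this is not the paper's exact second-order scheme, only the underlying first- vs. second-order accuracy trade-off.

        import numpy as np

        # Stand-in for an expensive thermodynamic driving-force evaluation:
        # pretend each call to f requires a full local-equilibrium solve.
        f, df, d2f = np.cosh, np.sinh, np.cosh

        c0 = 0.30                       # reference composition (exact call made here)
        f0, f1, f2 = f(c0), df(c0), d2f(c0)

        for dc in (0.01, 0.02, 0.04):
            first  = f0 + f1 * dc                       # first-order extrapolation
            second = f0 + f1 * dc + 0.5 * f2 * dc**2    # second-order extrapolation
            exact  = f(c0 + dc)
            # err1 shrinks like dc^2, err2 like dc^3
            print(f"dc={dc:.2f}  err1={abs(first - exact):.2e}"
                  f"  err2={abs(second - exact):.2e}")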

  16. Teaching genetics using hands-on models, problem solving, and inquiry-based methods

    Science.gov (United States)

    Hoppe, Stephanie Ann

    Teaching genetics can be challenging because of the difficulty of the content and the misconceptions students might hold. This thesis focused on using hands-on model activities, problem solving, and inquiry-based teaching/learning methods in order to increase student understanding of genetics in an introductory biology class. Various activities using these three methods were implemented in the classes to address misconceptions and increase student learning of the difficult concepts. The activities were shown to be successful based on pre-post assessment score comparison. The students were assessed on the subjects of inheritance patterns, meiosis, and protein synthesis and demonstrated growth in all of these areas. It was found that hands-on models, problem solving, and inquiry-based activities were more successful in helping students learn genetics concepts, and that students were more engaged, than with traditional lecture styles.

  17. Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

    Science.gov (United States)

    Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

    2011-12-01

    of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.

  18. Modeling U-Shaped Exposure-Response Relationships for Agents that Demonstrate Toxicity Due to Both Excess and Deficiency.

    Science.gov (United States)

    Milton, Brittany; Farrell, Patrick J; Birkett, Nicholas; Krewski, Daniel

    2017-02-01

    Essential elements such as copper and manganese may demonstrate U-shaped exposure-response relationships due to toxic responses occurring as a result of both excess and deficiency. Previous work on a copper toxicity database employed CatReg, a software program for categorical regression developed by the U.S. Environmental Protection Agency, to model copper excess and deficiency exposure-response relationships separately. This analysis involved the use of a severity scoring system to place diverse toxic responses on a common severity scale, thereby allowing their inclusion in the same CatReg model. In this article, we present methods for simultaneously fitting excess and deficiency data in the form of a single U-shaped exposure-response curve, the minimum of which occurs at the exposure level that minimizes the probability of an adverse outcome due to either excess or deficiency (or both). We also present a closed-form expression for the point at which the exposure-response curves for excess and deficiency cross, corresponding to the exposure level at which the risk of an adverse outcome due to excess is equal to that for deficiency. The application of these methods is illustrated using the same copper toxicity database noted above. The use of these methods permits the analysis of all available exposure-response data from multiple studies expressing multiple endpoints due to both excess and deficiency. The exposure level corresponding to the minimum of this U-shaped curve, and the confidence limits around this exposure level, may be useful in establishing an acceptable range of exposures that minimize the overall risk associated with the agent of interest. © 2016 Society for Risk Analysis.
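
    A toy version of the crossing-point calculation described above: if the excess and deficiency risks each follow a logistic curve in log dose, the closed-form crossing point falls where the two linear predictors are equal. All parameter values below are hypothetical, and combining the two risks under an independence assumption is purely for illustration.

        import numpy as np

        # Hypothetical logistic exposure-response curves on x = log10(dose):
        a_d, b_d = 1.0, -2.0      # deficiency: logit p_d = a_d + b_d*x (falls)
        a_e, b_e = -8.0, 2.5      # excess:     logit p_e = a_e + b_e*x (rises)

        logistic = lambda t: 1.0 / (1.0 + np.exp(-t))

        # Closed-form crossing point: risks are equal where the linear
        # predictors are equal, i.e. a_d + b_d*x = a_e + b_e*x.
        x_cross = (a_d - a_e) / (b_e - b_d)
        print("risk curves cross at log10(dose) =", round(x_cross, 3))

        # Overall risk, assuming (for illustration) independent mechanisms
        x = np.linspace(0, 4, 401)
        pd_, pe_ = logistic(a_d + b_d * x), logistic(a_e + b_e * x)
        p = pd_ + pe_ - pd_ * pe_
        print("minimum overall risk near log10(dose) =", round(x[np.argmin(p)], 3))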

  19. Diverse methods for integrable models

    NARCIS (Netherlands)

    Fehér, G.

    2017-01-01

    This thesis is centered around three topics, sharing integrability as a common theme. This thesis explores different methods in the field of integrable models. The first two chapters are about integrable lattice models in statistical physics. The last chapter describes an integrable quantum chain.

  20. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    The full text of publication follows. Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established, described using either the Geometry Description Markup Language (GDML) or C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but problems exist in most current modeling programs: some are not accurate or are adapted only to specific CAD formats. To convert complex CAD geometry models into GDML geometry models accurately, a Computer Aided Design (CAD) based automatic modeling method for Geant4 was developed. The essence of this method was converting between a CAD model represented with boundary representation (B-REP) and a GDML model represented with constructive solid geometry (CSG). First, the CAD model was decomposed into several simple solids, each having only one closed shell. Each simple solid was then decomposed into a set of convex shells. Corresponding GDML convex basic solids were then generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model was completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling. (authors)

  1. animation : An R Package for Creating Animations and Demonstrating Statistical Methods

    Directory of Open Access Journals (Sweden)

    Yihui Xie

    2013-04-01

    Full Text Available Animated graphs that demonstrate statistical ideas and methods can both attract interest and assist understanding. In this paper we first discuss how animations can be related to some statistical topics such as iterative algorithms, random simulations, (re)sampling methods and dynamic trends, then we describe the approaches that may be used to create animations, and give an overview of the R package animation, including its design, usage and the statistical topics in the package. With the animation package, we can export the animations produced by R into a variety of formats, such as a web page, a GIF animation, a Flash movie, a PDF document, or an MP4/AVI video, so that users can publish the animations fairly easily. The design of this package is flexible enough to be readily incorporated into web applications, e.g., we can generate animations online with Rweb, which means we do not even need R to be installed locally to create animations. We will show examples of the use of animations in teaching statistics and in the presentation of statistical reports using Sweave or knitr. In fact, this paper itself was written with the knitr and animation packages, and the animations are embedded in the PDF document, so that readers can watch the animations in real time when they read the paper (Adobe Reader is required). Animations can add insight and interest to traditional static approaches to teaching statistics and reporting, making statistics a more interesting and appealing subject.

  2. Motivational Interview Method Based on Transtheoretical Model of Health Behaviour Change in Type 2 Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Alime Selcuk Tosun

    2016-03-01

    Full Text Available Precautions taken in the early stages of diabetes mellitus are more beneficial in terms of quality of life. Different studies have shown that healthy lifestyle changes can reduce the risk of Type 2 diabetes mellitus by up to 58% or delay its emergence. In studies conducted with individuals with Type 2 diabetes mellitus, the transtheoretical model and the motivational interview method are used especially to increase individuals' adaptation to disease management and to change behaviours related to diabetes mellitus, with the aim of decreasing or preventing its harmful effects. Interventions using the motivational interview method based on the transtheoretical model demonstrated that a general improvement in glycaemic control and in physical activity level can be achieved and that significant progress is made through the stages of change. The motivational interview method based on the transtheoretical model is an easy and efficient counselling method for achieving behavioural change. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2016; 8(1): 32-41]

  3. A Systematic Identification Method for Thermodynamic Property Modelling

    DEFF Research Database (Denmark)

    Ana Perederic, Olivia; Cunico, Larissa; Sarup, Bent

    2017-01-01

    In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group-contribution-based property prediction models. The method is applied to lipid systems, where the Original UNIFAC...... model is used. Using the proposed method to estimate the interaction parameters from VLE data alone, better phase equilibria predictions for both VLE and SLE were obtained. The results were validated and compared with the original model performance...

  4. Demonstration of innovative monitoring technologies at the Savannah River Integrated Demonstration Site

    Energy Technology Data Exchange (ETDEWEB)

    Rossabi, J. [Westinghouse Savannah River Co., Aiken, SC (United States); Jenkins, R.A.; Wise, M.B. [Oak Ridge National Lab., TN (United States)] [and others]

    1993-12-31

    The Department of Energy's Office of Technology Development initiated an Integrated Demonstration Program at the Savannah River Site in 1989. The objective of this program is to develop, demonstrate, and evaluate innovative technologies that can improve present-day environmental restoration methods. The Integrated Demonstration Program at SRS is entitled "Cleanup of Organics in Soils and Groundwater at Non-Arid Sites." New technologies in the areas of drilling, characterization, monitoring, and remediation are being demonstrated and evaluated for their technical performance and cost effectiveness in comparison with baseline technologies. Present site characterization and monitoring methods are costly, time-consuming, overly invasive, and often imprecise. Better technologies are required to accurately describe the subsurface geophysical and geochemical features of a site and the nature and extent of contamination. More efficient, nonintrusive characterization and monitoring techniques are necessary for understanding and predicting subsurface transport. More reliable procedures are also needed for interpreting monitoring and characterization data. Site characterization and monitoring are key elements in preventing, identifying, and restoring contaminated sites. The remediation of a site cannot be determined without characterization data, and monitoring may be required for 30 years after site closure.

  5. Demonstration of innovative monitoring technologies at the Savannah River Integrated Demonstration Site

    International Nuclear Information System (INIS)

    Rossabi, J.; Jenkins, R.A.; Wise, M.B.

    1993-01-01

    The Department of Energy's Office of Technology Development initiated an Integrated Demonstration Program at the Savannah River Site in 1989. The objective of this program is to develop, demonstrate, and evaluate innovative technologies that can improve present-day environmental restoration methods. The Integrated Demonstration Program at SRS is entitled ''Cleanup of Organics in Soils and Groundwater at Non-Arid Sites.'' New technologies in the areas of drilling, characterization, monitoring, and remediation are being demonstrated and evaluated for their technical performance and cost effectiveness in comparison with baseline technologies. Present site characterization and monitoring methods are costly, time-consuming, overly invasive, and often imprecise. Better technologies are required to accurately describe the subsurface geophysical and geochemical features of a site and the nature and extent of contamination. More efficient, nonintrusive characterization and monitoring techniques are necessary for understanding and predicting subsurface transport. More reliable procedures are also needed for interpreting monitoring and characterization data. Site characterization and monitoring are key elements in preventing, identifying, and restoring contaminated sites. The remediation of a site cannot be determined without characterization data, and monitoring may be required for 30 years after site closure

  6. Childhood Obesity Research Demonstration project: Cross-site evaluation method

    Science.gov (United States)

    The Childhood Obesity Research Demonstration (CORD) project links public health and primary care interventions in three projects described in detail in accompanying articles in this issue of Childhood Obesity. This article describes a comprehensive evaluation plan to determine the extent to which th...

  7. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate...... the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions....

  8. Innovative technology demonstrations

    International Nuclear Information System (INIS)

    Anderson, D.B.; Hartley, J.N.; Luttrell, S.P.

    1992-04-01

    Currently, several innovative technologies are being demonstrated at Tinker Air Force Base (TAFB) to address specific problems associated with remediating two contaminated test sites at the base. Cone penetrometer testing (CPT) is a form of testing that can rapidly characterize a site. This technology was selected to evaluate its applicability in the tight clay soils and consolidated sandstone sediments found at TAFB. Directionally drilled horizontal wells have been successfully installed at the US Department of Energy's (DOE) Savannah River Site to test new methods of in situ remediation of soils and ground water. This emerging technology was selected as a method that may be effective in accessing contamination beneath Building 3001 without disrupting the mission of the building, and in enhancing the extraction of contamination both in ground water and in soil. A soil gas extraction (SGE) demonstration, also known as soil vapor extraction, will evaluate the effectiveness of SGE in remediating fuels and TCE contamination contained in the tight clay soil formations surrounding the abandoned underground fuel storage vault located at the SW Tanks Site. In situ sensors have recently received much acclaim as a technology that can be effective in remediating hazardous waste sites. Sensors can be useful for determining real-time, in situ contaminant concentrations during the remediation process for performance monitoring and in providing feedback for controlling the remediation process. A demonstration of two in situ sensor systems capable of providing real-time data on contamination levels will be conducted and evaluated concurrently with the SGE demonstration activities. Following the SGE demonstration, the SGE system and SW Tanks test site will be modified to demonstrate bioremediation as an effective means of degrading the remaining contaminants in situ

  9. Model-independent separation of poorly resolved hypperfine split spectra by a linear combination method

    International Nuclear Information System (INIS)

    Nagy, D.L.; Dengler, J.; Ritter, G.

    1988-01-01

    A model-independent evaluation of the components of poorly resolved Moessbauer spectra based on a linear combination method is possible if there is a parameter as a function of which the shapes of the individual components do not change but their intensities do, and the dependence of the intensities on this parameter is known. The efficiency of the method is demonstrated on the example of low-temperature magnetically split spectra of the high-Tc superconductor YBa2(Cu0.9Fe0.1)3O7-y. (author)
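
    The linear-combination idea can be sketched numerically: if the intensities' dependence on the parameter (e.g. temperature) is known, the unknown component shapes follow from ordinary least squares across the set of spectra. The toy Lorentzian shapes and the invented intensity matrix below are for illustration only, not the authors' data.

        import numpy as np

        rng = np.random.default_rng(2)
        v = np.linspace(-10, 10, 256)                  # velocity axis (mm/s)
        lorentz = lambda v0, g: g**2 / ((v - v0)**2 + g**2)

        # True (unknown to the analyst) component shapes
        S_true = np.vstack([lorentz(-3.0, 0.6) + lorentz(3.0, 0.6),  # split component
                            lorentz(0.0, 0.8)])                      # central line

        # Known intensity dependence on the parameter: three spectra in which
        # the shapes stay fixed but the intensities change.
        A_known = np.array([[1.0, 0.2],
                            [0.7, 0.5],
                            [0.3, 0.9]])
        Y = A_known @ S_true + 0.01 * rng.standard_normal((3, v.size))

        # Model-independent separation: solve A_known @ S = Y for the shapes S
        S_est, *_ = np.linalg.lstsq(A_known, Y, rcond=None)
        print("max shape-recovery error:", np.abs(S_est - S_true).max())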

  10. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  11. Rapid Energy Modeling Workflow Demonstration

    Science.gov (United States)

    2013-10-31

    Trial at AutodeskVasari.com ... considered a lightweight version of Revit for energy modeling and analysis ... many capabilities are in process of ... Journal of Hospitality & Tourism Research 32(1):3-21. DOD (2005) Energy Managers Handbook. Retrieved from www.wbdg.org/ccb/DOD/DOD4/dodemhb.pdf

  12. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software.

    Science.gov (United States)

    Dols, W Stuart; Persily, Andrew K; Morrow, Jayne B; Matzke, Brett D; Sego, Landon H; Nuffer, Lisa L; Pulsipher, Brent A

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones.

  13. A method for climate and vegetation reconstruction through the inversion of a dynamic vegetation model

    Energy Technology Data Exchange (ETDEWEB)

    Garreta, Vincent; Guiot, Joel; Hely, Christelle [CEREGE, UMR 6635, CNRS, Universite Aix-Marseille, Europole de l' Arbois, Aix-en-Provence (France); Miller, Paul A.; Sykes, Martin T. [Lund University, Department of Physical Geography and Ecosystems Analysis, Geobiosphere Science Centre, Lund (Sweden); Brewer, Simon [Universite de Liege, Institut d' Astrophysique et de Geophysique, Liege (Belgium); Litt, Thomas [University of Bonn, Paleontological Institute, Bonn (Germany)

    2010-08-15

    Climate reconstructions from data sensitive to past climates provide estimates of what these climates were like. Comparing these reconstructions with simulations from climate models allows validation of the models used for future climate prediction. It has been shown that, for fossil pollen data, obtaining estimates by inverting a vegetation model allows the inclusion of past changes in carbon dioxide values. As a new generation of dynamic vegetation models is available, we have developed an inversion method for one of them, LPJ-GUESS. When this novel method is used with high-resolution sediment records, it allows us to bypass the classic assumptions of (1) climate and pollen independence between samples and (2) equilibrium between the vegetation, represented as pollen, and climate. Our dynamic inversion method is based on a statistical model describing the links among climate, simulated vegetation and pollen samples. The inversion is realised with a particle filter algorithm. We perform a validation on 30 modern European sites and then apply the method to the sediment core of Meerfelder Maar (Germany), which covers the Holocene at a temporal resolution of approximately one sample per 30 years. We demonstrate that the reconstructed temperatures are well constrained. The reconstructed precipitation is less well constrained, due to the dimensionality considered (one precipitation value per season) and the low sensitivity of LPJ-GUESS to precipitation changes. (orig.)
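
    A bootstrap particle filter, the algorithm family named above, can be sketched for a scalar state as follows. Here a toy response function stands in for the LPJ-GUESS forward model, and all settings and data are illustrative inventions.

        import numpy as np

        rng = np.random.default_rng(3)

        def particle_filter(obs, n_part=500, proc_sd=0.3, obs_sd=0.5):
            """Bootstrap particle filter for a scalar 'climate' state observed
            through a toy forward model h(x); in the actual method a dynamic
            vegetation model plays the role of h."""
            h = lambda x: np.tanh(x)                 # toy vegetation/pollen response
            x = rng.normal(0.0, 1.0, n_part)         # initial particle ensemble
            means = []
            for y in obs:
                x = x + rng.normal(0.0, proc_sd, n_part)       # propagate state
                w = np.exp(-0.5 * ((y - h(x)) / obs_sd) ** 2)  # likelihood weights
                w /= w.sum()
                x = x[rng.choice(n_part, n_part, p=w)]         # resample particles
                means.append(x.mean())
            return np.array(means)

        # Synthetic observation sequence (e.g. a pollen-derived signal)
        obs = np.tanh(np.linspace(-1, 2, 20)) + rng.normal(0, 0.2, 20)
        print(np.round(particle_filter(obs), 2))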

  14. TRAC methods and models

    International Nuclear Information System (INIS)

    Mahaffy, J.H.; Liles, D.R.; Bott, T.F.

    1981-01-01

    The numerical methods and physical models used in the Transient Reactor Analysis Code (TRAC) versions PD2 and PF1 are discussed. Particular emphasis is placed on TRAC-PF1, the version specifically designed to analyze small-break loss-of-coolant accidents

  15. A business case method for business models

    OpenAIRE

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model alternatives and choose the best one. In this article, we develop a business case method to objectively compare business models. It is an eight-step method, starting with business drivers and ending wit...

  16. Model Uncertainty Quantification Methods In Data Assimilation

    Science.gov (United States)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets, as in the Apollo 11 mission. The use of Data Assimilation methods in high-dimensional complex geophysical systems is an active area of research, where many opportunities exist to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real-world scenarios. These include methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.

  17. Analytical methods used at model facility

    International Nuclear Information System (INIS)

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  18. Assessment of the Stakeholders’ Importance Using AHP MethodModeling and Application

    Directory of Open Access Journals (Sweden)

    Danka Knezević

    2015-05-01

    Full Text Available Attention to stakeholders, which means that companies bear responsibility for the implications of their actions, is emerging as a critical strategic issue. Hence, meeting legitimate stakeholders' requests would enhance the reputation of a company and increase its competitiveness on product markets. That is why accurate identification of stakeholders and assessment of their importance is so significant for companies. Through an integration of earlier models of excellence, models for identification and classification of stakeholders, models for assessing the quality of a company, and the AHP method, which is widely applicable in various fields, a new model for assessing stakeholders' significance is proposed in this paper. The model also provides an assessment of a company based on the degree of importance and satisfaction of its stakeholders. The results of this model could be useful to companies and their management in defining a proper business strategy, monitoring system changes over time, and creating a basis for comparison with other similar systems or with the company itself. A practical example is given to demonstrate the effectiveness of the model.
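
    The computational core of AHP is extracting priority weights from a pairwise-comparison matrix via its principal eigenvector and checking consistency. A minimal sketch with a hypothetical comparison of three stakeholder groups follows; the matrix values are invented, and the random-index table is Saaty's standard one.

        import numpy as np

        def ahp_weights(P):
            """Priority weights from a reciprocal pairwise-comparison matrix P
            (Saaty's AHP): principal eigenvector plus the consistency ratio CR."""
            vals, vecs = np.linalg.eig(P)
            k = np.argmax(vals.real)
            w = np.abs(vecs[:, k].real)
            w /= w.sum()
            n = P.shape[0]
            ci = (vals[k].real - n) / (n - 1)             # consistency index
            ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random index
            return w, ci / ri

        # Hypothetical comparison of three stakeholder groups
        P = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        w, cr = ahp_weights(P)
        print("weights:", np.round(w, 3), " CR:", round(cr, 3))  # CR < 0.1 is acceptable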

  19. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousands (or hundreds) of genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and a dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high-dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data, and the proposed method performed superbly in these computational studies.

  20. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

    Full Text Available Aiming to resolve the problem that a variety of uncertainty variables coexist in engineering structural reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article, together with a convergent method for solving it. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and the index is solved using a modified limit-step length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. Finally, a numerical example demonstrates that the hybrid reliability index is applicable to the wear reliability assessment of mechanisms in which truncated random variables, fuzzy random variables, and interval variables coexist. The example also shows the good convergence of the iterative algorithm proposed in this article.

  1. Alternative normalization methods demonstrate widespread cortical hypometabolism in untreated de novo Parkinson's disease

    DEFF Research Database (Denmark)

    Berti, Valentina; Polito, C; Borghammer, Per

    2012-01-01

    , recent studies suggested that conventional data normalization procedures may not always be valid, and demonstrated that alternative normalization strategies better allow detection of low magnitude changes. We hypothesized that these alternative normalization procedures would disclose more widespread...... metabolic alterations in de novo PD. METHODS: [18F]FDG PET scans of 26 untreated de novo PD patients (Hoehn & Yahr stage I-II) and 21 age-matched controls were compared using voxel-based analysis. Normalization was performed using gray matter (GM), white matter (WM) reference regions and Yakushev...... normalization. RESULTS: Compared to GM normalization, WM and Yakushev normalization procedures disclosed much larger cortical regions of relative hypometabolism in the PD group with extensive involvement of frontal and parieto-temporal-occipital cortices, and several subcortical structures. Furthermore...

  2. Methods for histochemical demonstration of vascular structures at the muscle-bone interface from cryostate sections of demineralized tissue

    DEFF Research Database (Denmark)

    Kirkeby, S

    1981-01-01

    In tissue decalcified with MgNa2EDTA at a neutral pH, ATPase activity can be used for demonstration of the vascular structures at the muscle-bone interface. The GOMORI method for alkaline phosphatase is only of value when fresh unfixed tissue is to be examined. The azo-dye method for alkaline phosphatase failed to give satisfactory results, and so did the alpha-amylase PAS method. 5'-nucleotidase activity is present in both capillaries and in cells lining the surfaces of bones, while larger blood vessels are poorly stained.

  3. Accuracy evaluation of dental models manufactured by CAD/CAM milling method and 3D printing method.

    Science.gov (United States)

    Jeong, Yoo-Geum; Lee, Wan-Sun; Lee, Kyu-Bok

    2018-06-01

    To evaluate the accuracy of models made using the computer-aided design/computer-aided manufacture (CAD/CAM) milling method and the 3D printing method, and to confirm their applicability as work models for dental prosthesis production. First, a natural tooth model (ANA-4, Frasaco, Germany) was scanned using an oral scanner. The obtained scan data were then used as a CAD reference model (CRM) to produce 10 models each with the milling method and the 3D printing method. The 20 models were then scanned using a desktop scanner to form the CAD test models (CTMs). The accuracy of the two groups was compared using dedicated software to calculate the root mean square (RMS) value after superimposing the CRM and each CTM. The RMS value (152±52 µm) of the models manufactured by the milling method was significantly higher than that (52±9 µm) of the models produced by the 3D printing method. The accuracy of the 3D printing method is therefore superior to that of the milling method, but at present, both methods are limited in their application as work models for prosthesis manufacture.
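
    The RMS comparison can be sketched as a closest-point computation between superimposed point clouds, a simplified stand-in for the dedicated inspection software mentioned above; the point counts and noise levels below are invented.

        import numpy as np

        def rms_deviation(ref_pts, test_pts):
            """RMS of closest-point distances from each test point to the
            reference point cloud, assuming the scans are already superimposed."""
            d2 = ((test_pts[:, None, :] - ref_pts[None, :, :]) ** 2).sum(-1)
            return np.sqrt(d2.min(axis=1).mean())

        rng = np.random.default_rng(4)
        ref = rng.uniform(0, 10, (800, 3))               # CRM surface samples (mm)
        test = ref + rng.normal(0, 0.05, ref.shape)      # CTM with ~50 um error
        print(f"RMS = {1000 * rms_deviation(ref, test):.0f} um")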

  4. Extending product modeling methods for integrated product development

    DEFF Research Database (Denmark)

    Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný

    2013-01-01

    Despite great efforts within the modeling domain, the majority of methods address the uncommon design situation of original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products.... Updated design requirements then have to be made explicit and mapped against the existing product architecture. In this paper, existing methods are adapted and extended by linking updated requirements to suitable product models. By combining several established modeling techniques, such as the DSM... and PVM methods, in the presented Product Requirement Development model, some of the individual drawbacks of each method could be overcome. Based on the UML standard, the model enables the representation of complex hierarchical relationships in a generic product model. At the same time it uses matrix....

  5. Facility Modeling Capability Demonstration Summary Report

    International Nuclear Information System (INIS)

    Key, Brian P.; Sadasivan, Pratap; Fallgren, Andrew James; Demuth, Scott Francis; Aleman, Sebastian E.; Almeida, Valmor F. de; Chiswell, Steven R.; Hamm, Larry; Tingey, Joel M.

    2017-01-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL), and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration's (NNSA's) Office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and then the inverse problem can ideally be solved through modeling and simulation to estimate characteristics of facility operation such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focused on the forward modeling of PUREX reprocessing facility operating conditions from fuel storage and chopping to effluent and emission measurements.

  6. Facility Modeling Capability Demonstration Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Key, Brian P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sadasivan, Pratap [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fallgren, Andrew James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Demuth, Scott Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Aleman, Sebastian E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); de Almeida, Valmor F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chiswell, Steven R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hamm, Larry [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-02-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL), and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration’s (NNSA’s) Office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and then the inverse problem can ideally be solved through modeling and simulation to estimate characteristics of facility operation such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focused on the forward modeling of PUREX reprocessing facility operating conditions from fuel storage and chopping to effluent and emission measurements.

  7. Coherence method of identifying signal noise model

    International Nuclear Information System (INIS)

    Vavrin, J.

    1981-01-01

    The use of the noise analysis method to identify perturbance models and their parameters by stochastic analysis of the noise in variables measured on a reactor is discussed. The analysis of correlations is made in the frequency domain using coherence analysis methods. In identifying an actual specific perturbance, its model should be determined and recognized within a compound model of the perturbance system using the results of observation. The determination of the optimum estimate of the perturbance-system model is based on estimates of the related spectral densities, which are determined from the spectral density matrix of the measured variables. Partial and multiple coherences, partial transfer functions, and the power spectral densities of the input and output variables of the noise model are determined from the related spectral densities. The possibilities of applying the coherence identification methods were tested on a simple case of a simulated stochastic system. Good agreement was found between the initial analytic frequency filters and the identified transfer functions. (B.S.)
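
    Coherence between two measured signals can be estimated with Welch-type spectral methods; the sketch below uses synthetic reactor-like signals that share a common 5 Hz component (all signal parameters are invented for illustration).

        import numpy as np
        from scipy import signal

        rng = np.random.default_rng(5)
        fs = 100.0                                   # sampling rate (Hz)
        t = np.arange(0, 60, 1 / fs)

        # Two signals sharing a 5 Hz component plus independent noise
        common = np.sin(2 * np.pi * 5.0 * t)
        x = common + 0.8 * rng.standard_normal(t.size)
        y = 0.6 * common + 0.8 * rng.standard_normal(t.size)

        # Magnitude-squared coherence via Welch averaging
        f, Cxy = signal.coherence(x, y, fs=fs, nperseg=512)
        print(f"coherence at ~5 Hz: {Cxy[np.argmin(np.abs(f - 5.0))]:.2f}")
        print(f"median off-peak coherence: {np.median(Cxy):.2f}")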

  8. Iterative method for Amado's model

    International Nuclear Information System (INIS)

    Tomio, L.

    1980-01-01

    A recently proposed iterative method for solving scattering integral equations is applied to the spin doublet and spin quartet neutron-deuteron scattering in the Amado model. The method is tested numerically in the calculation of scattering lengths and phase-shifts and results are found better than those obtained by using the conventional Pade technique. (Author) [pt

  9. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  10. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression model can then inversely infer the intrinsic state associated with a superficial state observed at a power plant. The proposed diagnosis method comprises three processes: 1) simulations of degradation conditions to obtain the measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state from the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs comprising various root causes and/or boundary conditions, whereas the inverse what-if method calculates the inverse matrix for the given superficial states, that is, the component degradation modes. The method suggested in this paper was validated using the turbine cycle model for an operating power plant
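
    A minimal sketch of the what-if / inverse what-if idea under stated assumptions: a made-up linear map sends intrinsic degradation states to measured superficial states, is fitted from simulated pairs, and is then inverted by least squares (all dimensions, values, and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
A_true = rng.normal(size=(6, 3))          # hidden map: 3 intrinsic -> 6 measured

# "What-if" step: simulate superficial states for sampled degradation inputs.
X = rng.normal(size=(200, 3))             # intrinsic states (e.g. fouling levels)
Y = X @ A_true.T + 0.01 * rng.normal(size=(200, 6))

# Fit the linear regression model Y ~ X (least squares, no intercept for brevity).
A_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Inverse what-if" step: recover the intrinsic state behind an observation.
x_plant = np.array([0.8, -0.3, 1.2])      # unknown degradation in the plant
y_obs = A_true @ x_plant
x_hat, *_ = np.linalg.lstsq(A_fit.T, y_obs, rcond=None)
print("recovered intrinsic state:", np.round(x_hat, 3))
```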

  11. Microstructure-based numerical modeling method for effective permittivity of ceramic/polymer composites

    Science.gov (United States)

    Jylhä, Liisi; Honkamo, Johanna; Jantunen, Heli; Sihvola, Ari

    2005-05-01

    Effective permittivity was modeled and measured for composites consisting of up to 35 vol.% titanium dioxide powder dispersed in a continuous epoxy matrix. The study demonstrates a method that enables fast and accurate numerical modeling of the effective permittivity values of ceramic/polymer composites. The model requires electrostatic Monte Carlo simulations, where randomly oriented homogeneous prism-shaped inclusions occupy random positions in the background phase. The computational cost of solving the electrostatic problem with a finite-element code is decreased by an averaging method in which the same simulated sample is solved three times with orthogonal field directions. This helps to minimize the artificial anisotropy that results from the pseudorandomness inherent in the limited computational domains. All the parameters required for the numerical simulations are calculated from the lattice structure of titanium dioxide. The results show very good agreement between the measured and numerically calculated effective permittivities. When the prisms are approximated by oblate spheroids with the corresponding axial ratio, a fairly good prediction for the effective permittivity of the mixture can be achieved with an advanced analytical mixing formula.
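
    As a point of comparison for such simulations, the classical Maxwell Garnett mixing formula for spherical inclusions can be evaluated in a few lines. Spherical shape is a simplification; the paper uses oblate spheroids with the prisms' axial ratio, which requires depolarization factors instead. The permittivity values below are illustrative:

```python
import numpy as np

def maxwell_garnett(eps_i, eps_m, f):
    """Effective permittivity of spherical inclusions (eps_i) at volume
    fraction f in a continuous matrix (eps_m), Maxwell Garnett formula."""
    beta = (eps_i - eps_m) / (eps_i + 2.0 * eps_m)   # polarizability factor
    return eps_m * (1.0 + 2.0 * f * beta) / (1.0 - f * beta)

eps_tio2 = 114.0   # illustrative permittivity for rutile TiO2 powder
eps_epoxy = 3.6    # illustrative permittivity for the epoxy matrix
for f in (0.05, 0.15, 0.25, 0.35):
    print(f"{f:.0%} filler -> eps_eff = {maxwell_garnett(eps_tio2, eps_epoxy, f):.2f}")
```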

  12. Complete direct method for electron-hydrogen scattering: Application to the collinear and Temkin-Poet models

    International Nuclear Information System (INIS)

    Bartlett, Philip L.; Stelbovics, Andris T.

    2004-01-01

    We present an efficient generalization of the exterior complex scaling (ECS) method to extract discrete inelastic and ionization amplitudes for electron-impact scattering of atomic hydrogen. This fully quantal method is demonstrated over a range of energies for the collinear and Temkin-Poet models, and near-threshold ionization is examined in detail for singlet and triplet scattering. Our numerical calculations for total ionization cross sections near threshold strongly support the classical threshold law of Wannier [Phys. Rev. 90, 817 (1953)] (σ ∝ E^(1.128±0.004)) for the L=0 singlet collinear model and the semiclassical threshold law of Peterkop [J. Phys. B 16, L587 (1983)] (σ ∝ E^(3.37±0.02)) for the L=0 triplet collinear model, and are consistent with the semiclassical threshold law of Macek and Ihra [Phys. Rev. A 55, 2024 (1997)] (σ ∝ exp[(-6.87±0.01)E^(-1/6)]) for the singlet Temkin-Poet model

  13. Graphite Isotope Ratio Method Development Report: Irradiation Test Demonstration of Uranium as a Low Fluence Indicator

    International Nuclear Information System (INIS)

    Reid, B.D.; Gerlach, D.C.; Love, E.F.; McNeece, J.P.; Livingston, J.V.; Greenwood, L.R.; Petersen, S.L.; Morgan, W.C.

    1999-01-01

    This report describes an irradiation test designed to investigate the suitability of uranium as a graphite isotope ratio method (GIRM) low fluence indicator. GIRM is a demonstrated concept that gives a graphite-moderated reactor's lifetime production based on measuring changes in the isotopic ratio of elements known to exist in trace quantities within reactor-grade graphite. Appendix I of this report provides a tutorial on the GIRM concept

  14. Advanced methods of solid oxide fuel cell modeling

    CERN Document Server

    Milewski, Jaroslaw; Santarelli, Massimo; Leone, Pierluigi

    2011-01-01

    Fuel cells are widely regarded as the future of the power and transportation industries. Intensive research in this area now requires new methods of fuel cell operation modeling and cell design. Typical mathematical models are based on the physical process description of fuel cells and require a detailed knowledge of the microscopic properties that govern both chemical and electrochemical reactions. "Advanced Methods of Solid Oxide Fuel Cell Modeling" proposes the alternative methodology of generalized artificial neural network (ANN) solid oxide fuel cell (SOFC) modeling. "Advanced Methods
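
    A minimal sketch of the ANN idea, assuming invented training data: a small network learns the map from operating conditions to cell voltage in place of a physical model. The data-generating expression below is a crude stand-in, not an SOFC model from the book:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
# Invented operating points: current density [A/cm^2] and temperature [K].
j = rng.uniform(0.05, 0.8, 500)
T = rng.uniform(973.0, 1123.0, 500)
X = np.column_stack([j, T])

# Crude stand-in for measured cell voltage (ohmic drop shrinking with T).
V = 1.0 - j * (0.5 - 3e-4 * (T - 973.0)) + 0.005 * rng.standard_normal(500)

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
ann.fit(X, V)
print("predicted V at j=0.4, T=1050 K:",
      float(ann.predict([[0.4, 1050.0]])[0]))
```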

  15. The time has come for new models in febrile neutropenia: a practical demonstration of the inadequacy of the MASCC score.

    Science.gov (United States)

    Carmona-Bayonas, A; Jiménez-Fonseca, P; Virizuela Echaburu, J; Sánchez Cánovas, M; Ayala de la Peña, F

    2017-09-01

    Since its publication more than 15 years ago, the MASCC score has been internationally validated numerous times and recommended by most clinical practice guidelines for the management of febrile neutropenia (FN) around the world. We have used an empirical, data-supported simulated scenario to demonstrate that, despite all this, the MASCC score is impractical as a basis for decision-making. A detailed analysis of the reasons behind the clinical irrelevance of this model is performed. First, seven of its eight variables are "innocent bystanders" that contribute little to selecting low-risk candidates for ambulatory management. Secondly, the training series was hardly representative of outpatients with solid tumors and low-risk FN. Finally, the simultaneous inclusion of key variables both in the model and in the outcome explains its successful validation in various series of patients. Alternative methods of prognostic classification, such as the Clinical Index of Stable Febrile Neutropenia, have been specifically validated for patients with solid tumors and should replace the MASCC model in situations of clinical uncertainty.

  16. Optics Demonstration with Student Eyeglasses Using the Inquiry Method

    Science.gov (United States)

    James, Mark C.

    2011-01-01

    A favorite qualitative optics demonstration I perform in introductory physics classes makes use of students' eyeglasses to introduce converging and diverging lenses. Taking on the persona of a magician, I walk to the back of the classroom and approach a student wearing glasses. The top part of Fig. 1 shows a glasses-wearing student who is…

  17. Development of an environment-insensitive PWR radial reflector model applicable to modern nodal reactor analysis method

    International Nuclear Information System (INIS)

    Mueller, E.M.

    1989-05-01

    This research is concerned with the development and analysis of methods for generating equivalent nodal diffusion parameters for the radial reflector of a PWR. The requirement that the equivalent reflector data be insensitive to changing core conditions is set as a principal objective. Hence, the environment dependence of the currently most reputable nodal reflector models, almost all of which are based on the nodal equivalence theory homogenization methods of Koebke and Smith, is investigated in detail. For this purpose, a special 1-D nodal equivalence theory reflector model, called the NGET model, is developed and used in 1-D and 2-D numerical experiments. The results demonstrate that these modern radial reflector models exhibit sufficient sensitivity to core conditions to warrant the development of alternative models. A new 1-D nodal reflector model, based on a novel combination of the nodal equivalence theory and response matrix homogenization methods, is developed. Numerical results verify that this homogenized baffle/reflector model, called the NGET-RM model, is highly insensitive to changing core conditions. It is also shown that the NGET-RM model is not inferior to any of the existing 1-D nodal reflector models and that it has features which make it an attractive alternative for multi-dimensional reactor analysis. 61 refs., 40 figs., 36 tabs

  18. Statistical analysis tolerance using jacobian torsor model based on uncertainty propagation method

    Directory of Open Access Journals (Sweden)

    W Ghie

    2016-04-01

    Full Text Available One risk inherent in the use of assembly components is that the behaviour of these components is discovered only at the moment an assembly is being carried out. The objective of our work is to enable designers to use known component tolerances as parameters in models that can be used to predict properties at the assembly level. In this paper we present a statistical approach to assemblability evaluation, based on tolerance and clearance propagations. This new statistical analysis method for tolerance is based on the Jacobian-Torsor model and the uncertainty measurement approach. We show how this can be accomplished by modeling the distribution of manufactured dimensions through applying a probability density function. By presenting an example we show how statistical tolerance analysis should be used in the Jacobian-Torsor model. This work is supported by previous efforts aimed at developing a new generation of computational tools for tolerance analysis and synthesis, using the Jacobian-Torsor approach. This approach is illustrated on a simple three-part assembly, demonstrating the method's capability in handling three-dimensional geometry.
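
    The statistical flavour of such tolerance propagation can be sketched with a plain Monte Carlo stack-up, assuming a hypothetical three-part assembly whose gap is the difference between a housing and two stacked parts (the dimensions and tolerances below are invented, and the Jacobian-Torsor machinery itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Manufactured dimensions modeled by normal probability density functions,
# with standard deviation set to one third of the symmetric tolerance.
housing = rng.normal(50.00, 0.05 / 3, n)   # nominal 50.00 +/- 0.05 mm
part_a  = rng.normal(24.95, 0.03 / 3, n)   # nominal 24.95 +/- 0.03 mm
part_b  = rng.normal(24.95, 0.03 / 3, n)

gap = housing - (part_a + part_b)          # assembly-level property
print(f"mean gap = {gap.mean():.4f} mm, std = {gap.std():.4f} mm")
print(f"P(interference) = {(gap < 0).mean():.2e}")  # assemblability risk
```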

  19. A Methodological Demonstration of Set-theoretical Approach to Social Media Maturity Models Using Necessary Condition Analysis

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Despite being widely accepted and applied across research domains, maturity models have been criticized for lacking academic rigor; methodologically rigorous and empirically grounded or tested maturity models are quite rare. Attempting to close this gap, we adopt a set-theoretic approach by applying the Necessary Condition Analysis (NCA) technique to derive maturity stages and stage-boundary conditions. The ontology is to view stages (boundaries) in maturity models as collections of necessary conditions. Using social media maturity data, we demonstrate the strength of our approach and evaluate some of the arguments presented by previous conceptually focused social media maturity models.

  20. Machine learning methods for locating re-entrant drivers from electrograms in a model of atrial fibrillation

    Science.gov (United States)

    McGillivray, Max Falkenberg; Cheng, William; Peters, Nicholas S.; Christensen, Kim

    2018-04-01

    Mapping resolution has recently been identified as a key limitation in successfully locating the drivers of atrial fibrillation (AF). Using a simple cellular automata model of AF, we demonstrate a method by which re-entrant drivers can be located quickly and accurately using a collection of indirect electrogram measurements. The method proposed employs simple, out-of-the-box machine learning algorithms to correlate characteristic electrogram gradients with the displacement of an electrogram recording from a re-entrant driver. Such a method is less sensitive to local fluctuations in electrical activity. As a result, the method successfully locates 95.4% of drivers in tissues containing a single driver, and 95.1% (92.6%) for the first (second) driver in tissues containing two drivers of AF. Additionally, we demonstrate how the technique can be applied to tissues with an arbitrary number of drivers. In its current form, the techniques presented are not refined enough for a clinical setting. However, the methods proposed offer a promising path for future investigations aimed at improving targeted ablation for AF.
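
    A minimal sketch of the regression idea under stated assumptions: surrogate electrogram-gradient features that decay with distance from a driver are invented, and an out-of-the-box regressor learns the displacement (neither the cellular automata model nor the real feature definitions are reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n = 2000

# Invented surrogate data: the displacement of a recording electrode from a
# re-entrant driver, and electrogram-gradient features that decay with it.
dist = rng.uniform(0.0, 30.0, n)                     # lattice units
features = np.column_stack([
    np.exp(-dist / 8.0) + 0.05 * rng.standard_normal(n),   # gradient magnitude
    1.0 / (1.0 + dist) + 0.05 * rng.standard_normal(n),    # activation sharpness
])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features[:1500], dist[:1500])
pred = model.predict(features[1500:])
print("mean absolute localization error:",
      float(np.mean(np.abs(pred - dist[1500:]))))
```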

  1. Time-series-based hybrid mathematical modelling method adapted to forecast automotive and medical waste generation: Case study of Lithuania.

    Science.gov (United States)

    Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras

    2018-05-01

    The aim of the study was to create a hybrid forecasting method that could produce more accurate forecasts than the previously used 'pure' time-series methods. The latter had already been tested on total automotive waste, hazardous automotive waste, and total medical waste generation, but demonstrated error rates of at least 6% in various cases, and efforts were made to decrease these further. The newly developed hybrid models used a random start generation method to combine the advantages of different time-series methods, which increased forecast accuracy by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not improve the accuracy of total automotive waste generation forecasts. The developed models' short- and mid-term forecasting abilities were tested using varying prediction horizons.

  2. Heterogeneity and contaminant transport modeling for the Savannah River integrated demonstration site

    International Nuclear Information System (INIS)

    Chesnut, D.A.

    1992-11-01

    The effectiveness of remediating aquifers and vadose zone sediments is frequently controlled by spatial heterogeneities. A continuing and long-recognized problem in selecting, planning, implementing, and operating remediation projects is the development of methods for quantitatively describing heterogeneity and predicting its effects on process performance. The similarity to and differences from modeling oil recovery processes in the petroleum industry are illustrated by the extension to contaminant extraction processes of an analytic model originally developed for waterflooding petroleum reservoirs. The resulting equations incorporate the effects of heterogeneity through a single parameter, σ. Fitting this model to the Savannah River in situ Air Stripping test data suggests that the injection of air into a horizontal well below the water table may have improved performance by changing the flow pattern in the vadose zone. This change increased the capture volume, and consequently the contaminant mass inventory, of the horizontal injection well completed in the vadose zone. The apparent increases (compared to extraction only from the horizontal well) are from 10,200 to 21,000 pounds for TCE and from 3,600 pounds to 59,800 pounds for PCE. The predominance of PCE in this calculated increase suggests that redistribution of flow paths in the vadose zone, rather than in-situ stripping, may provide most of the improvement. Although this preliminary conclusion remains to be reinforced by more sophisticated modeling currently in progress, there appears to be a definite improvement, which is attributable to air injection, over conventional remediation methods

  3. A Novel Method of Modeling the Deformation Resistance for Clad Sheet

    International Nuclear Information System (INIS)

    Hu Jianliang; Yi Youping; Xie Mantang

    2011-01-01

    Because of its excellent thermal conductivity, the clad sheet (3003/4004/3003) of aluminum alloy is extensively used in various heat exchangers, such as radiators, motorcar air conditioning, and evaporators. The deformation resistance model plays an important role in designing the process parameters of hot continuous rolling. However, the complex plastic deformation behavior of the clad sheet makes the modeling very difficult. In this work, a novel method for modeling the deformation resistance of clad sheet was proposed by combining finite element analysis with experiments. The deformation resistance models of aluminum 3003 and 4004 were obtained through hot compression tests on the Gleeble-1500 thermo-simulation machine, and the deformation resistance model of the clad sheet was then derived through finite element analysis using the DEFORM-2D software. The relationship between cladding ratio and deformation resistance was discussed in detail. The results of the hot compression simulations demonstrate that the cladding ratio has a strong effect on the resistance of the clad sheet. Taking the cladding ratio into consideration, the mathematical model of the deformation resistance for the clad sheet is shown to forecast accurately across different cladding ratios. Therefore, the presented model can be used to predict the rolling force of clad sheet during the hot continuous rolling process.

  4. SELECT NUMERICAL METHODS FOR MODELING THE DYNAMICS SYSTEMS

    Directory of Open Access Journals (Sweden)

    Tetiana D. Panchenko

    2016-07-01

    Full Text Available The article deals with the creation of methodical support for mathematical modeling of dynamic processes in elements of systems and complexes. Ordinary differential equations are used as the mathematical models; the coefficients of the model equations may be nonlinear functions of the process. The projection-grid method is used as the main tool. Iterative algorithms that take an approximate solution into account prior to the first iteration are described, and adaptive control of the computing process is proposed. An original method for estimating the error of the computed solutions is offered, together with a method for configuring the adaptive solver parameters to achieve a given error level. The proposed method can be used for distributed computing.

  5. A catalog of automated analysis methods for enterprise models.

    Science.gov (United States)

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

    Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, this process can be complicated, so omissions or miscalculations are very likely. This situation has fostered research into automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the work of compiling (literature review), classifying, structuring, and characterizing automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.

  6. Building Energy Modeling and Control Methods for Optimization and Renewables Integration

    Science.gov (United States)

    Burger, Eric M.

    dynamics within a building by learning from sensor data. Control techniques encompass the application of optimal control theory, model predictive control, and convex distributed optimization to TCLs. First, we present the alternative control trajectory (ACT) representation, a novel method for the approximate optimization of non-convex discrete systems. This approach enables the optimal control of a population of non-convex agents using distributed convex optimization techniques. Second, we present a distributed convex optimization algorithm for the control of a TCL population. Experimental results demonstrate the application of this algorithm to the problem of renewable energy generation following. This dissertation contributes to the development of intelligent energy management systems for buildings by presenting a suite of novel and adaptable modeling and control techniques. Applications focus on optimizing the performance of building operations and on facilitating the integration of renewable energy resources.
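
    A minimal sketch of the convex-optimization flavour of TCL control, under invented parameters: a single thermostatically controlled cooling load is scheduled over a horizon with a relaxed (continuous) duty cycle, a first-order linear thermal model, and a time-varying price. This assumes the cvxpy package and is not the dissertation's ACT representation or its distributed algorithm:

```python
import cvxpy as cp
import numpy as np

T = 24                                   # hourly horizon
a, R, P = 0.9, 2.0, 5.0                  # inertia, thermal resistance, power [kW]
theta_out = 30.0 + 3.0 * np.sin(np.linspace(0, 2 * np.pi, T))  # outdoor temp [C]
price = 0.2 + 0.1 * (np.arange(T) % 24 >= 16)                  # evening peak

u = cp.Variable(T)                       # relaxed duty cycle in [0, 1]
theta = cp.Variable(T + 1)               # indoor temperature trajectory

constraints = [theta[0] == 24.0, u >= 0, u <= 1]
for t in range(T):
    # First-order building thermal model with cooling proportional to u[t].
    constraints += [theta[t + 1] == a * theta[t]
                    + (1 - a) * (theta_out[t] - R * P * u[t])]
constraints += [theta[1:] >= 21.0, theta[1:] <= 25.0]   # comfort band

prob = cp.Problem(cp.Minimize(price @ (P * u)), constraints)
prob.solve()
print("energy cost:", round(prob.value, 3))
print("duty cycle:", np.round(u.value, 2))
```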

  7. Accurate Modeling Method for Cu Interconnect

    Science.gov (United States)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section, in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for extracting the model parameters, and an efficient extraction flow. We have extracted the model parameters for 0.15 μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90 nm, 65 nm and 55 nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  8. A Penalty Method to Model Particle Interactions in DNA-laden Flows

    International Nuclear Information System (INIS)

    Trebotich, D; Miller, G H; Bybee, M D

    2006-01-01

    We present a hybrid fluid-particle algorithm to simulate flow and transport of DNA-laden fluids in microdevices. Relevant length scales in microfluidic systems range from characteristic channel sizes of millimeters to micron scale geometric variation (e.g., post arrays) to 10 nanometers for the length of a single rod in a bead-rod polymer representation of a biological material such as DNA. The method is based on a previous fluid-particle algorithm in which long molecules are represented as a chain of connected rods, but in which the physically unrealistic behavior of rod crossing occurred. We have extended this algorithm to include screened Coulombic forces between particles by implementing a Debye-Hueckel potential acting between rods. In the method an unsteady incompressible Newtonian fluid is discretized with a second-order finite difference method in the interior of the Cartesian grid domain; an embedded boundary volume-of-fluid formulation is used near boundaries. The bead-rod polymer model is fully coupled to the solvent through body forces representing hydrodynamic drag and stochastic thermal fluctuations. While intrapolymer interactions are modeled by a soft potential, polymer-structure interactions are treated as perfectly elastic collisions. We demonstrate this method on flow and transport of a polymer through a post array microchannel in 2D where the polymer incorporates more realistic physical parameters of DNA, and compare to previous simulations where rods are allowed to cross. We also show that the method is capable of simulating 3D flow in a packed bed micro-column
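
    A minimal sketch of the screened Coulombic interaction added in this algorithm, assuming point-like interaction sites and illustrative parameter values (the actual method applies the potential between the rods of the bead-rod chain):

```python
import numpy as np

def debye_huckel(r, A=1.0, lam=0.1):
    """Screened Coulomb (Debye-Hueckel) pair potential U = A*exp(-r/lam)/r
    and the magnitude of the repulsive force F = -dU/dr."""
    U = A * np.exp(-r / lam) / r
    F = U * (1.0 / r + 1.0 / lam)      # analytic -dU/dr for this potential
    return U, F

# Repulsion grows steeply as two sites approach, discouraging rod crossing.
for r in (0.5, 0.2, 0.1, 0.05):
    U, F = debye_huckel(r)
    print(f"r = {r:4.2f}: U = {U:8.3f}, F = {F:9.3f}")
```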

  10. Mathematical modeling of the drying of extruded fish feed and its experimental demonstration

    DEFF Research Database (Denmark)

    Haubjerg, Anders Fjeldbo; Simonsen, B.; Løvgreen, S.

    This paper presents a mathematical model for the drying of extruded fish feed pellets. The model relies on conservation balances for moisture and energy. Sorption isotherms from the literature are used together with diffusion and transfer coefficients obtained from a dual-parameter regression analysis against experimental data. The lumped capacitance method is used to estimate the heat transfer coefficient. The model performs well at temperatures within ± 5 °C of the sorption isotherm specificity, and for different pellet sizes. There is a slight under-estimation of the surface temperature of denser feed

  11. Modeling open nanophotonic systems using the Fourier modal method: generalization to 3D Cartesian coordinates.

    Science.gov (United States)

    Häyrynen, Teppo; Osterkryger, Andreas Dyhl; de Lasson, Jakob Rosenkrantz; Gregersen, Niels

    2017-09-01

    Recently, an open geometry Fourier modal method based on a new combination of an open boundary condition and a non-uniform k-space discretization was introduced for rotationally symmetric structures, providing a more efficient approach for modeling nanowires and micropillar cavities [J. Opt. Soc. Am. A 33, 1298 (2016)]. Here, we generalize the approach to three-dimensional (3D) Cartesian coordinates, allowing for the modeling of rectangular geometries in open space. The open boundary condition is a consequence of having an infinite computational domain described using basis functions that expand the whole space. The strength of the method lies in discretizing the Fourier integrals using a non-uniform circular "dartboard" sampling of the Fourier k space. We show that our sampling technique leads to a more accurate description of the continuum of the radiation modes that leak out from the structure. We also compare our approach to conventional discretization with direct and inverse factorization rules commonly used in established Fourier modal methods. We apply our method to a variety of optical waveguide structures and demonstrate that the method leads to a significantly improved convergence, enabling more accurate and efficient modeling of open 3D nanophotonic structures.

  12. Combining phase-field crystal methods with a Cahn-Hilliard model for binary alloys

    Science.gov (United States)

    Balakrishna, Ananya Renuka; Carter, W. Craig

    2018-04-01

    Diffusion-induced phase transitions typically change the lattice symmetry of the host material. In battery electrodes, for example, Li ions (diffusing species) are inserted between layers in a crystalline electrode material (host). This diffusion induces lattice distortions and defect formations in the electrode. The structural changes to the lattice symmetry affect the host material's properties. Here, we propose a 2D theoretical framework that couples a Cahn-Hilliard (CH) model, which describes the composition field of a diffusing species, with a phase-field crystal (PFC) model, which describes the host-material lattice symmetry. We couple the two continuum models via coordinate transformation coefficients. We introduce the transformation coefficients in the PFC method to describe affine lattice deformations. These transformation coefficients are modeled as functions of the composition field. Using this coupled approach, we explore the effects of coarse-grained lattice symmetry and distortions on a diffusion-induced phase transition process. In this paper, we demonstrate the working of the CH-PFC model through three representative examples: First, we describe base cases with hexagonal and square symmetries for two composition fields. Next, we illustrate how the CH-PFC method interpolates lattice symmetry across a diffuse phase boundary. Finally, we compute a Cahn-Hilliard type of diffusion and model the accompanying changes to lattice symmetry during a phase transition process.
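
    The Cahn-Hilliard half of such a coupled framework can be sketched compactly; below is a minimal 1D semi-implicit Fourier-spectral CH solver for a conserved composition field. The parameters are invented, and the paper's model additionally couples the composition to PFC lattice degrees of freedom, which is not attempted here:

```python
import numpy as np

N, L = 256, 64.0                   # grid points, domain length
dx = L / N
dt, M, kappa = 0.1, 1.0, 1.0       # time step, mobility, gradient energy coeff

k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
k2 = k**2

rng = np.random.default_rng(7)
c = 0.0 + 0.05 * rng.standard_normal(N)   # near-critical composition field

for step in range(2000):
    # mu = f'(c) - kappa * laplacian(c), with double-well f'(c) = c^3 - c.
    nonlinear_hat = np.fft.fft(c**3 - c)
    c_hat = np.fft.fft(c)
    # Semi-implicit update: nonlinear term explicit, biharmonic term implicit.
    c_hat = (c_hat - dt * M * k2 * nonlinear_hat) / (1.0 + dt * M * kappa * k2**2)
    c = np.real(np.fft.ifft(c_hat))

print("composition range after coarsening:", float(c.min()), float(c.max()))
```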

  13. A case study of forward calculations of the gravity anomaly by spectral method for a three-dimensional parameterised fault model

    Science.gov (United States)

    Xu, Weimin; Chen, Shi

    2018-02-01

    Spectral methods provide many advantages for calculating gravity anomalies. In this paper, we derive a kernel function for a three-dimensional (3D) fault model in the wave-number domain, and present the full Fortran source code developed for the forward computation of the gravity anomalies and related derivatives obtained from the model. The numerical error and computing speed obtained using the proposed spectral method are compared with those obtained using a 3D rectangular prism model solved in the space domain. The error of the spectral method is shown to depend on the sequence length employed in the fast Fourier transform. The spectral method is applied to several examples of 3D fault models and is demonstrated to be a straightforward alternative computational approach that enhances computational speed and simplifies the procedures for solving many gravitational potential forward problems involving complicated geological models. The proposed method can generate a great number of feasible geophysical interpretations based on a 3D model with only a few variables, and can thereby improve the efficiency of inversion.

  14. Investigating the performance of directional boundary layer model through staged modeling method

    Science.gov (United States)

    Jeong, Moon-Gyu; Lee, Won-Chan; Yang, Seung-Hune; Jang, Sung-Hoon; Shim, Seong-Bo; Kim, Young-Chang; Suh, Chun-Suk; Choi, Seong-Woon; Kim, Young-Hee

    2011-04-01

    Generally speaking, the models used in optical proximity effect correction (OPC) can be divided into three parts: a mask part, an optic part, and a resist part. For excellent OPC model quality, each part should be described from first principles. However, an OPC model cannot incorporate all of the first principles, since it must cover full-chip-level calculation during the correction. Moreover, the calculation has to be performed iteratively during the correction until the cost function we want to minimize converges. Normally, the optic part of an OPC model is described with the sum of coherent systems (SOCS [1]) method. Thanks to this method, we can calculate the aerial image very quickly without significant loss of accuracy. As for the resist part, the first principles are too complex to implement in detail, so it is normally expressed in a simplified way, such as an approximation of the first principles, or a linear combination of factors highly correlated with the chemistry of the resist. The quality of this kind of resist model depends on how well we train the model by fitting it to empirical data. The most popular way of constructing the mask function is based on the Kirchhoff thin-mask approximation. This method works well when the feature size on the mask is sufficiently large, but as the line width of the semiconductor circuit becomes smaller, it causes significant error due to the mask topography effect. To consider the mask topography effect accurately, we have to use rigorous methods of calculating the mask function, such as the finite difference time domain (FDTD [2]) method and rigorous coupled-wave analysis (RCWA [3]). But these methods are too time-consuming to be used as part of the OPC model. Until now, many alternatives have been suggested as efficient ways of considering the mask topography effect. Among them, we focused on the boundary layer model (BLM) in this paper. We mainly investigated the way of optimizing the parameters for the

  15. A Model-Driven Development Method for Management Information Systems

    Science.gov (United States)

    Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki

    Traditionally, a Management Information System (MIS) has been developed without using formal methods. With informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as a lack of reliability in system design specifications. In order to overcome these problems, a model theory approach was proposed. The approach is based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate the automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes of business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems that applies the model-driven development method to a component of the model theory approach. The experiment has shown that the development effort was reduced by more than 30%.

  16. Structure-Preserving Methods for the Navier-Stokes-Cahn-Hilliard System to Model Immiscible Fluids

    KAUST Repository

    Sarmiento, Adel F.

    2017-12-03

    This work presents a novel method to model immiscible incompressible fluids in a stable manner. Here, the immiscible behavior of the flow is described by the incompressible Navier-Stokes-Cahn-Hilliard model, which is based on a diffuse interface method. We introduce buoyancy effects in the model through the Boussinesq approximation in a consistent manner. A structure-preserving discretization is used to guarantee the linear stability of the discrete problem and to satisfy the incompressibility of the discrete solution at every point in space by construction. For the solution of the model, we developed the Portable Extensible Toolkit for Isogeometric Analysis with Multi-Field discretizations (PetIGA-MF), a high-performance framework that supports structure-preserving spaces. PetIGA-MF is built on top of PetIGA and the Portable Extensible Toolkit for Scientific Computation (PETSc), sharing all their user-friendly, performance, and flexibility features. Herein, we describe the implementation of our model in PetIGA-MF and the details of the numerical solution. With several numerical tests, we verify the convergence, scalability, and validity of our approach. We use highly-resolved numerical simulations to analyze the merging and rising of droplets. From these simulations, we detailed the energy exchanges in the system to evaluate quantitatively the quality of our simulations. The good agreement of our results when compared against theoretical descriptions of the merging, and the small errors found in the energy analysis, allow us to validate our approach. Additionally, we present the development of an unconditionally energy-stable generalized-alpha method for the Swift-Hohenberg model that offers control over the numerical dissipation. A pattern formation example demonstrates the energy-stability and convergence of our method.

  17. Ground truth methods for optical cross-section modeling of biological aerosols

    Science.gov (United States)

    Kalter, J.; Thrush, E.; Santarpia, J.; Chaudhry, Z.; Gilberry, J.; Brown, D. M.; Brown, A.; Carter, C. C.

    2011-05-01

    Light detection and ranging (LIDAR) systems have demonstrated some capability to meet the needs of a fast-response standoff biological detection method for simulants in open-air conditions. These systems are designed to exploit various cloud signatures, such as differential elastic backscatter, fluorescence, and depolarization, in order to detect biological warfare agents (BWAs). However, because the release of BWAs in open air is forbidden, methods must be developed to predict candidate system performance against real agents. In support of such efforts, the Johns Hopkins University Applied Physics Lab (JHU/APL) has developed a modeling approach to predict the optical properties of agent materials from relatively simple, Biosafety Level 3-compatible bench-top measurements. JHU/APL has fielded new ground-truth instruments (in addition to standard particle sizers, such as the Aerodynamic Particle Sizer (APS) and the GRIMM aerosol monitor) to more thoroughly characterize the simulant aerosols released in recent field tests at Dugway Proving Ground (DPG). These instruments include the Scanning Mobility Particle Sizer (SMPS), the Ultraviolet Aerodynamic Particle Sizer (UVAPS), and the Aspect Aerosol Size and Shape Analyser (Aspect). The SMPS was employed as a means of measuring small-particle concentrations for more accurate Mie scattering simulations; the UVAPS, which measures size-resolved fluorescence intensity, was employed as a path toward fluorescence cross-section modeling; and the Aspect, which measures particle shape, was employed as a path toward depolarization modeling.

  18. Resampling Methods Improve the Predictive Power of Modeling in Class-Imbalanced Datasets

    Directory of Open Access Journals (Sweden)

    Paul H. Lee

    2014-09-01

    Full Text Available In the medical field, many outcome variables are dichotomized, and the two possible values of a dichotomized variable are referred to as classes. A dichotomized dataset is class-imbalanced if it consists mostly of one class, and the performance of common classification models on this type of dataset tends to be suboptimal. To tackle such a problem, resampling methods, including oversampling and undersampling, can be used. This paper aims at illustrating the effect of resampling methods using the National Health and Nutrition Examination Survey (NHANES) wave 2009–2010 dataset. A total of 4677 participants aged ≥20 without self-reported diabetes and with valid blood test results were analyzed. The Classification and Regression Tree (CART) procedure was used to build a classification model for undiagnosed diabetes, where a participant was considered to have undiagnosed diabetes if he or she demonstrated evidence of diabetes according to the WHO diabetes criteria. Exposure variables included demographics and socio-economic status. CART models were fitted using a randomly selected 70% of the data (training dataset), and the area under the receiver operating characteristic curve (AUC) was computed using the remaining 30% of the sample for evaluation (testing dataset). CART models were fitted using the training dataset, the oversampled training dataset, the weighted training dataset, and the undersampled training dataset. In addition, resampling case-to-control ratios of 1:1, 1:2, and 1:4 were examined. The effects of resampling methods on the performance of other extensions of CART (random forests and generalized boosted trees) were also examined. CARTs fitted on the oversampled (AUC = 0.70) and undersampled training data (AUC = 0.74) yielded a better classification power than that on the training data (AUC = 0.65). Resampling could also improve the classification power of random forests and generalized boosted trees. To conclude, applying resampling methods in a class-imbalanced dataset improved the classification power of CART, random forests
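
    A minimal sketch of the undersampling idea on synthetic class-imbalanced data (the dataset below is invented; the study used NHANES variables and the CART procedure):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
# Imbalanced synthetic data: ~5% positives, weakly informative features.
n, p_pos = 5000, 0.05
y = (rng.random(n) < p_pos).astype(int)
X = rng.standard_normal((n, 4)) + y[:, None] * 0.8

train = np.arange(n) < int(0.7 * n)
test = ~train

def fit_auc(Xtr, ytr):
    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xtr, ytr)
    return roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1])

# Baseline on the imbalanced training data.
print("no resampling:", round(fit_auc(X[train], y[train]), 3))

# 1:1 undersampling: keep all positives, subsample negatives to match.
pos = np.where(train & (y == 1))[0]
neg = rng.choice(np.where(train & (y == 0))[0], size=pos.size, replace=False)
idx = np.concatenate([pos, neg])
print("undersampled :", round(fit_auc(X[idx], y[idx]), 3))
```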

  19. How Qualitative Methods Can be Used to Inform Model Development.

    Science.gov (United States)

    Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna

    2017-06-01

    Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.

  20. Accelerated reliability demonstration under competing failure modes

    International Nuclear Information System (INIS)

    Luo, Wei; Zhang, Chun-hua; Chen, Xun; Tan, Yuan-yuan

    2015-01-01

    The conventional reliability demonstration tests are difficult to apply to products with competing failure modes due to the complexity of the lifetime models. This paper develops a testing methodology based on the reliability target allocation for reliability demonstration under competing failure modes at accelerated conditions. The specified reliability at mission time and the risk caused by sampling of the reliability target for products are allocated for each failure mode. The risk caused by degradation measurement fitting of the target for a product involving performance degradation is equally allocated to each degradation failure mode. According to the allocated targets, the accelerated life reliability demonstration test (ALRDT) plans for the failure modes are designed. The accelerated degradation reliability demonstration test plans and the associated ALRDT plans for the degradation failure modes are also designed. Next, the test plan and the decision rules for the products are designed. Additionally, the effects of the discreteness of sample size and accepted number of failures for failure modes on the actual risks caused by sampling for the products are investigated. - Highlights: • Accelerated reliability demonstration under competing failure modes is studied. • The method is based on the reliability target allocation involving the risks. • The test plan for the products is based on the plans for all the failure modes. • Both failure mode and degradation failure modes are considered. • The error of actual risks caused by sampling for the products is small enough

  1. An Assessment of Mean Areal Precipitation Methods on Simulated Stream Flow: A SWAT Model Performance Assessment

    Directory of Open Access Journals (Sweden)

    Sean Zeiger

    2017-06-01

    Full Text Available Accurate mean areal precipitation (MAP) estimates are essential input forcings for hydrologic models. However, the selection of the most accurate method to estimate MAP can be daunting because there are numerous methods to choose from (e.g., proximate gauge, direct weighted average, surface-fitting, and remotely sensed methods). Multiple methods (n = 19) were used to estimate MAP with precipitation data from 11 distributed monitoring sites and 4 remotely sensed data sets. Each method was validated against stream flow simulated with the Soil and Water Assessment Tool (SWAT) hydrologic model. SWAT was validated using a split-site method and observed stream flow data from five nested-scale gauging sites in a mixed-land-use watershed of the central USA. Cross-validation results showed the error associated with surface-fitting and remotely sensed methods ranging from −4.5 to −5.1% and −9.8 to −14.7%, respectively. Split-site validation results showed percent bias (PBIAS) values that ranged from −4.5 to −160%. Second-order polynomial functions especially overestimated precipitation and subsequent stream flow simulations (PBIAS = −160% in the headwaters). The results indicated that using an inverse-distance-weighted, linear polynomial interpolation or multiquadric function method to estimate MAP may improve SWAT model simulations. Collectively, the results highlight the importance of spatially distributed observed hydroclimate data for precipitation and subsequent stream flow estimations. The MAP methods demonstrated in the current work can be used to reduce hydrologic model uncertainty caused by watershed physiographic differences.
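
    A minimal sketch of one of the recommended estimators, inverse-distance weighting of gauge precipitation to a target point (the coordinates and event depths below are invented):

```python
import numpy as np

def idw(target, gauges, values, power=2.0):
    """Inverse-distance-weighted estimate at `target` from gauge locations."""
    d = np.linalg.norm(gauges - target, axis=1)
    if np.any(d == 0):                      # target coincides with a gauge
        return float(values[np.argmin(d)])
    w = d ** -power
    return float(np.sum(w * values) / np.sum(w))

gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # km
rain = np.array([12.0, 8.0, 15.0, 9.0])     # event depths [mm]
print("MAP estimate at centroid:", idw(np.array([5.0, 5.0]), gauges, rain))
```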

  2. Modeling influenza-like illnesses through composite compartmental models

    Science.gov (United States)

    Levy, Nir; Michael, Iv; Yom-Tov, Elad

    2018-03-01

    Epidemiological models for the spread of pathogens in a population are usually only able to describe a single pathogen. This makes their application unrealistic in cases where multiple pathogens with similar symptoms are spreading concurrently within the same population. Here we describe a method which makes possible the application of multiple single-strain models under minimal conditions. As such, our method provides a bridge between theoretical models of epidemiology and data-driven approaches for modeling influenza and other similar viruses. Our model extends the Susceptible-Infected-Recovered (SIR) model to higher dimensions, allowing the modeling of a population infected by multiple viruses. We further provide a method, based on an overcomplete dictionary of feasible realizations of SIR solutions, to blindly partition the time series representing the number of infected people in a population into individual components, each representing the effect of a single pathogen. We demonstrate the applicability of our proposed method on five years of seasonal influenza-like illness (ILI) rates, estimated from Twitter data. We demonstrate that our method describes, on average, 44% of the variance in the ILI time series. The individual infectious components derived from our model are matched to known viral profiles in the population, which we show match those of independently collected epidemiological data. We further show that the basic reproductive numbers (R0) of the matched components are in the range known for these pathogens. Our results suggest that the proposed method can be applied to other pathogens and geographies, providing a simple method for estimating the parameters of epidemics in a population.
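
    A minimal sketch of the composite idea under stated assumptions: two independent SIR strains are integrated and their infectious compartments are summed into one ILI-like signal, the kind of mixed observation the paper's dictionary method then tries to un-mix (all parameters are invented):

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """Standard SIR right-hand side with a normalized population."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

t = np.linspace(0.0, 120.0, 400)                    # days
strain_a = odeint(sir, [0.99, 0.01, 0.0], t, args=(0.35, 0.10))
strain_b = odeint(sir, [0.995, 0.005, 0.0], t, args=(0.25, 0.10))

ili = strain_a[:, 1] + strain_b[:, 1]               # observed composite signal
print("peak composite ILI fraction:", round(float(ili.max()), 4))
print("R0 of strains:", 0.35 / 0.10, 0.25 / 0.10)   # beta / gamma
```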

  3. Didactic demonstrations of superfluidity and superconductivity phenomena

    International Nuclear Information System (INIS)

    Aniola-Jedrzejak, L.; Lewicki, A.; Pilipowicz, A.; Tarnawski, Z.; Bialek, H.

    1980-01-01

    In order to demonstrate to students phenomena of superfluidity and superconductivity a special helium cryostat has been constructed. The demonstrated effects, construction of the cryostat and the method of demonstration are described. (author)

  4. A Developed Meta-model for Selection of Cotton Fabrics Using Design of Experiments and TOPSIS Method

    Science.gov (United States)

    Chakraborty, Shankar; Chatterjee, Prasenjit

    2017-12-01

    Selection of cotton fabrics for providing optimal clothing comfort is often considered as a multi-criteria decision-making problem consisting of an array of candidate alternatives to be evaluated based on several conflicting properties. In this paper, design of experiments and the technique for order preference by similarity to ideal solution (TOPSIS) are integrated so as to develop regression meta-models for identifying the most suitable cotton fabrics with respect to the computed TOPSIS scores. The applicability of the adopted method is demonstrated using two real examples. The developed models can also identify the statistically significant fabric properties and their interactions affecting the measured TOPSIS scores and final selection decisions. There exists a good degree of congruence between the ranking patterns derived using these meta-models and the existing methods for cotton fabric ranking and subsequent selection.
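
    A minimal sketch of the TOPSIS scoring step itself, on an invented decision matrix of four fabrics and three comfort-related properties (the criteria senses and weights are illustrative, not taken from the paper):

```python
import numpy as np

# Rows: candidate fabrics; columns: air permeability, thermal resistance, cost.
X = np.array([[51.0, 0.031, 4.2],
              [63.0, 0.028, 5.1],
              [47.0, 0.035, 3.8],
              [58.0, 0.030, 4.6]])
weights = np.array([0.4, 0.4, 0.2])
benefit = np.array([True, True, False])     # cost is a non-beneficial criterion

R = X / np.linalg.norm(X, axis=0)           # vector-normalized matrix
V = R * weights                             # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
scores = d_minus / (d_plus + d_minus)       # closeness to the ideal solution

print("TOPSIS scores:", np.round(scores, 4))
print("best fabric index:", int(np.argmax(scores)))
```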

  5. A new lattice hydrodynamic model based on control method considering the flux change rate and delay feedback signal

    Science.gov (United States)

    Qin, Shunda; Ge, Hongxia; Cheng, Rongjun

    2018-02-01

    In this paper, a new lattice hydrodynamic model is proposed by taking the delay feedback and flux change rate effects into account in a single lane. The linear stability condition of the new model is derived using control theory. By means of nonlinear analysis, the mKdV equation near the critical point is deduced to describe the traffic congestion. Numerical simulations are carried out to demonstrate the advantage of the new model in suppressing traffic jams when the flux change rate effect is considered in the delay feedback model.

  6. Computational Methods for Physical Model Information Management: Opening the Aperture

    International Nuclear Information System (INIS)

    Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.

    2015-01-01

    The volume, velocity, and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel with the growth in data, technologies have been developed to efficiently process and store data, and to effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision-logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (Safeguards' standard reference for the nuclear fuel cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural-language models, Bayesian methods, and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information with the appropriate process step in the Physical Model. The paper will detail how this capability will consume open-source and controlled information sources, be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission-focused applications. (author)

  7. A Comparison of Surface Acoustic Wave Modeling Methods

    Science.gov (United States)

    Wilson, W. C.; Atkinson, G. M.

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power, and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on a comparison of three methods of modeling SAWs. The three models are the Impulse Response Method, a first-order model, and two second-order matrix methods: the conventional matrix approach, and a modified matrix approach extended to include internal finger reflections. The second-order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented along with measured data from devices.
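
    A minimal sketch of the first-order impulse-response picture, assuming a uniform interdigital transducer whose frequency response follows the classic sinc envelope (the centre frequency and finger-pair count below are invented):

```python
import numpy as np

f0 = 100e6          # transducer centre frequency [Hz]
Np = 50             # number of interdigital finger pairs

f = np.linspace(0.9 * f0, 1.1 * f0, 1001)
# First-order impulse-response model: |H(f)| ~ sinc(Np*(f - f0)/f0)
# (np.sinc includes the factor of pi in its definition).
H = np.abs(np.sinc(Np * (f - f0) / f0))

band = f[H >= H.max() / np.sqrt(2)]
print(f"-3 dB fractional bandwidth ~ {(band[-1] - band[0]) / f0:.4f}")
```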

  8. Volume-weighted particle-tracking method for solute-transport modeling; Implementation in MODFLOW–GWT

    Science.gov (United States)

    Winston, Richard B.; Konikow, Leonard F.; Hornberger, George Z.

    2018-02-16

    In the traditional method of characteristics for groundwater solute-transport models, advective transport is represented by moving particles that track concentration. This approach can lead to global mass-balance problems because in models of aquifers having complex boundary conditions and heterogeneous properties, particles can originate in cells having different pore volumes and (or) be introduced (or removed) at cells representing fluid sources (or sinks) of varying strengths. Use of volume-weighted particles means that each particle tracks solute mass. In source or sink cells, the changes in particle weights will match the volume of water added or removed through external fluxes. This enables the new method to conserve mass in source or sink cells as well as globally. This approach also leads to potential efficiencies by allowing the number of particles per cell to vary spatially—using more particles where concentration gradients are high and fewer where gradients are low. The approach also eliminates the need for the model user to have to distinguish between “weak” and “strong” fluid source (or sink) cells. The new model determines whether solute mass added by fluid sources in a cell should be represented by (1) new particles having weights representing appropriate fractions of the volume of water added by the source, or (2) distributing the solute mass added over all particles already in the source cell. The first option is more appropriate for the condition of a strong source; the latter option is more appropriate for a weak source. At sinks, decisions whether or not to remove a particle are replaced by a reduction in particle weight in proportion to the volume of water removed. A number of test cases demonstrate that the new method works well and conserves mass. The method is incorporated into a new version of the U.S. Geological Survey’s MODFLOW–GWT solute-transport model.
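
    A minimal sketch of the weight-adjustment idea at a sink cell, with invented volumes: instead of deciding whether to delete particles, every particle in the cell has its weight (the volume of water it represents) reduced in proportion to the water removed, which keeps the solute mass balance exact:

```python
import numpy as np

# Particles in one model cell: volume-weights [m^3] and concentrations [g/m^3].
weights = np.array([2.0, 1.5, 0.5, 1.0])
conc = np.array([10.0, 12.0, 8.0, 11.0])

cell_volume = weights.sum()          # pore volume represented by the particles
q_out = 1.2                          # water removed by the sink this step [m^3]

mass_before = np.sum(weights * conc)
weights *= 1.0 - q_out / cell_volume # proportional weight reduction at the sink
mass_removed = mass_before - np.sum(weights * conc)

print("solute mass removed by sink:", round(mass_removed, 3))
print("equals q_out * weighted mean conc:",
      round(q_out * np.sum(conc * weights) / weights.sum(), 3))
```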

  9. A kriging metamodel-assisted robust optimization method based on a reverse model

    Science.gov (United States)

    Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao

    2018-02-01

    The goal of robust optimization methods is to obtain a solution that is both optimum and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures where a large amount of computational effort is required because the robustness of each candidate solution delivered from the outer level should be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced into a single-loop optimization structure to ease the computational burden. Ignoring the interpolation uncertainties from kriging, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed because of the interpolation uncertainties from the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.

  10. The development and demonstration of integrated models for the evaluation of severe accident management strategies - SAMEM

    International Nuclear Information System (INIS)

    Ang, M.L.; Peers, K.; Kersting, E.; Fassmann, W.; Tuomisto, H.; Lundstroem, P.; Helle, M.; Gustavsson, V.; Jacobsson, P.

    2001-01-01

This study is concerned with the further development of integrated models for the assessment of existing and potential severe accident management (SAM) measures. This paper provides a brief summary of these models, based on Probabilistic Safety Assessment (PSA) methods and the Risk Oriented Accident Analysis Methodology (ROAAM) approach, and their application to a number of case studies spanning both preventive and mitigative accident management regimes. In the course of this study it became evident that the starting point for guiding the selection of methodology, and any further improvement of it, is the intended application. Accordingly, such features as the type and area of application and the confidence requirement are addressed in this project. The application of an integrated ROAAM approach led to the implementation, at the Loviisa NPP, of a hydrogen mitigation strategy that requires substantial plant modifications. A revised level 2 PSA model was applied to the Sizewell B NPP to assess the feasibility of the in-vessel retention strategy. Similarly, the application of PSA-based models was extended to the Barseback and Ringhals 2 NPPs to improve the emergency operating procedures, notably actions related to manual operations. A human reliability analysis based on the Human Cognitive Reliability (HCR) and Technique for Human Error Rate Prediction (THERP) models was applied to a case study addressing secondary and primary bleed and feed procedures. Some aspects pertinent to the quantification of severe accident phenomena were further examined in this project. A comparison of the applications of the PSA-based approach and ROAAM to two severe accident issues, viz. hydrogen combustion and in-vessel retention, was made. A general conclusion is that, as far as the technical aspects are concerned, there is no requirement for further major development of the PSA and ROAAM methodologies in the modelling of SAM strategies for a variety of applications. As is demonstrated in this project, the

  11. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    Science.gov (United States)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists of finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the following three conditions hold: 1) a solution exists, 2) the solution is unique and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well-posed problem, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, ENKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed on the fact that parameters and initial stocks directly related to fast processes were best estimated with narrow confidence intervals, whereas those related to slow processes were poorly estimated with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary
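
    As a numerical aside on the regularization step mentioned above (toy linear operator, not the DALEC model), Tikhonov regularization replaces the ill-posed problem with the well-posed minimization min_x ||Hx - y||^2 + alpha^2 ||x||^2:

        import numpy as np

        rng = np.random.default_rng(0)
        N, n = 20, 15
        # Ill-conditioned forward operator: rapidly decaying singular values.
        H = rng.normal(size=(N, n)) @ np.diag(1.0 / np.arange(1, n + 1) ** 2)
        x_true = rng.normal(size=n)
        y = H @ x_true + 0.01 * rng.normal(size=N)     # noisy observations

        def tikhonov(H, y, alpha):
            # Solve the regularized normal equations (H^T H + alpha^2 I) x = H^T y.
            return np.linalg.solve(H.T @ H + alpha**2 * np.eye(H.shape[1]), H.T @ y)

        # Too little regularization amplifies the noise; a moderate alpha
        # typically gives the smallest reconstruction error.
        for alpha in (1e-6, 1e-3, 1e-1):
            print(alpha, np.linalg.norm(tikhonov(H, y, alpha) - x_true))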

  12. A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors

    Directory of Open Access Journals (Sweden)

    Shuang Wang

    2015-12-01

Full Text Available In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error and distortions of optical systems resulting from the on-orbit thermal environment were studied in this paper. Since these issues affect the precision of star image point positions, a novel measurement error model based on the traditional error model is explored. Due to the orthonormal characteristics of the image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To overcome this difficulty, a modified two-step calibration method for the new error model, based on the Extended Kalman Filter (EKF) and the Least Squares Method (LSM), is presented. The former is used to calibrate the principal point drift, focal length error and distortions of the optical systems, while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the star image point position error caused by the above effects is greatly reduced, from 15.42% to 1.389%. Finally, the simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method can effectively calibrate the model error parameters, and the calibration precision of on-orbit star sensors is also clearly improved.

  13. A multivariate quadrature based moment method for LES based modeling of supersonic combustion

    Science.gov (United States)

    Donde, Pratik; Koo, Heeseok; Raman, Venkat

    2012-07-01

    The transported probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of scramjet combustors. In this approach, a high-dimensional transport equation for the joint composition-enthalpy PDF needs to be solved. Quadrature based approaches provide deterministic Eulerian methods for solving the joint-PDF transport equation. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach.
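
    Quadrature-based moment methods share the core representation sketched below: the PDF is carried by a small set of weighted abscissas whose power sums reproduce its moments. The sketch is generic QMOM background (not the SeQMOM scheme itself) and assumes NumPy:

        # A 3-node quadrature, m_k = sum_i w_i * x_i^k, reproduces the moments
        # of a standard Gaussian exactly up to degree 2*3 - 1 = 5; transported-
        # PDF solvers evolve the weights w_i and abscissas x_i instead of the
        # full distribution.
        import numpy as np

        nodes, weights = np.polynomial.hermite_e.hermegauss(3)
        weights /= weights.sum()              # normalize to a unit-mass PDF

        for k in range(6):
            m_quad = np.sum(weights * nodes**k)
            # Exact moments of N(0, 1): 1, 0, 1, 0, 3, 0
            print(k, round(m_quad, 6))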

  14. Modelling a coal subcrop using the impedance method

    Energy Technology Data Exchange (ETDEWEB)

Wilson, G.A.; Thiel, D.V.; O'Keefe, S.G. [Griffith University, Nathan, Qld. (Australia). School of Microelectronic Engineering

    2000-07-01

    An impedance model was generated for two coal subcrops in the Biloela and Middlemount areas (Queensland, Australia). The model results were compared with actual surface impedance data. It was concluded that the impedance method satisfactorily modelled the surface response of the coal subcrops in two dimensions. There were some discrepancies between the field data and the model results, due to factors such as the method of discretization of the solution space in the impedance model and the lack of consideration of the three-dimensional nature of the coal outcrops. 10 refs., 8 figs.

  15. The demonstration of nonlinear analytic model for the strain field induced by thermal copper filled TSVs (through silicon via

    Directory of Open Access Journals (Sweden)

    M. H. Liao

    2013-08-01

Full Text Available The thermo-elastic strain is induced by through silicon vias (TSV) due to the difference of thermal expansion coefficients between the copper (∼18 ppm/°C) and silicon (∼2.8 ppm/°C) when the structure is exposed to a thermal ramp budget in the three dimensional integrated circuit (3DIC) process. These thermal expansion stresses are high enough to introduce delamination on the interfaces between the copper, silicon, and isolating dielectric. A compact analytic model for the strain field induced by different layouts of thermal copper filled TSVs with the linear superposition principle is found to have large errors due to the strong stress interaction between TSVs. In this work, a nonlinear stress analytic model with different TSV layouts is demonstrated by the finite element method and the analysis of the Mohr's circle. The characteristics of the stress are also measured by the atomic force microscope-Raman technique with nanometer-level spatial resolution. The change of the electron mobility with the consideration of this nonlinear stress model for the strong interactions between TSVs is ∼2–6% smaller in comparison with those from the consideration of the linear stress superposition principle only.

  16. Simplified microstrip discontinuity modeling using the transmission line matrix method interfaced to microwave CAD

    Science.gov (United States)

    Thompson, James H.; Apel, Thomas R.

    1990-07-01

A technique for modeling microstrip discontinuities is presented which is derived from the transmission line matrix method of solving three-dimensional electromagnetic problems. In this technique the microstrip patch under investigation is divided into an integer number of square and half-square (triangle) subsections. An equivalent lumped-element model is calculated for each subsection. These individual models are then interconnected as dictated by the geometry of the patch. The matrix of lumped elements is then solved using either of two microwave CAD software interfaces with each port properly defined. Closed-form expressions for the lumped-element representation of the individual subsections are presented and experimentally verified through the X-band frequency range. A model demonstrating the use of symmetry and block construction of a circuit element is discussed, along with computer program development and CAD software interface.

  17. Linearization-based method for solving a multicomponent diffusion phase-field model with arbitrary solution thermodynamics

    Science.gov (United States)

    Welland, M. J.; Tenuta, E.; Prudil, A. A.

    2017-06-01

    This article describes a phase-field model for an isothermal multicomponent, multiphase system which avoids implicit interfacial energy contributions by starting from a grand potential formulation. A method is developed for incorporating arbitrary forms of the equilibrium thermodynamic potentials in all phases to determine an explicit relationship between chemical potentials and species concentrations. The model incorporates variable densities between adjacent phases, defect migration, and dependence of internal pressure on object dimensions ranging from the macro- to nanoscale. A demonstrative simulation of an overpressurized nanoscopic intragranular bubble in nuclear fuel migrating to a grain boundary under kinetically limited vacancy diffusion is shown.

  18. Reflexion on linear regression trip production modelling method for ensuring good model quality

    Science.gov (United States)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

Transport modelling is important. For certain cases the conventional model still has to be used, for which a good trip production model is essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are that the sample must be capable of representing the population characteristics and must produce an acceptable error at a given confidence level. These principles do not yet seem to be well understood or applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method that ensures model quality. The results of this research are presented as follows. Statistics provides a method for calculating the span of predicted values at a given confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to the sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that the sample composition can significantly change the model. Hence a good R2 value does not always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate the statistical calculation method and the appropriate statistical tests. A good sampling method must use random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
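
    The confidence interval of the predicted value mentioned above can be computed directly for simple linear regression. A minimal sketch with invented illustrative data (not the Indonesian survey data), assuming NumPy and SciPy:

        import numpy as np
        from scipy import stats

        x = np.array([120, 150, 200, 260, 300, 340, 410, 500.0])  # e.g. households
        y = np.array([95, 110, 160, 210, 230, 270, 330, 390.0])   # observed trips

        n = len(x)
        b1, b0 = np.polyfit(x, y, 1)                  # slope, intercept
        resid = y - (b0 + b1 * x)
        s = np.sqrt(resid @ resid / (n - 2))          # residual standard error
        t = stats.t.ppf(0.975, n - 2)                 # 95% confidence level

        def ci_mean_prediction(x0):
            """Prediction and CI half-width for the mean predicted value at x0."""
            se = s * np.sqrt(1.0 / n + (x0 - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum())
            return b0 + b1 * x0, t * se

        r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        yhat, half = ci_mean_prediction(280.0)
        print(f"R^2 = {r2:.3f}, prediction at 280: {yhat:.1f} +/- {half:.1f}")

    A model can thus score a high R^2 yet still carry a wide confidence interval at the prediction points of interest, which is exactly the gap the proposed quality measure is meant to close.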

  19. Modeling shallow water flows using the discontinuous Galerkin method

    CERN Document Server

    Khan, Abdul A

    2014-01-01

Replacing the traditional physical model approach: computational models offer promise in improving the modeling of shallow water flows. As new techniques are considered, the process continues to change and evolve. Modeling Shallow Water Flows Using the Discontinuous Galerkin Method examines a technique that focuses on hyperbolic conservation laws and includes one-dimensional and two-dimensional shallow water flows and pollutant transports. Combining the advantages of finite volume and finite element methods: this book explores the discontinuous Galerkin (DG) method, also known as the discontinuous finite element method, in depth. It introduces the DG method and its application to shallow water flows, as well as background information for implementing and applying this method for natural rivers. It considers dam-break problems, shock wave problems, and flows in different regimes (subcritical, supercritical, and transcritical). Readily adaptable to the real world: while the DG method has been widely used in the fie...

  20. A computationally efficient method for full-core conjugate heat transfer modeling of sodium fast reactors

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Rui, E-mail: rhu@anl.gov; Yu, Yiqi

    2016-11-15

    Highlights: • Developed a computationally efficient method for full-core conjugate heat transfer modeling of sodium fast reactors. • Applied fully-coupled JFNK solution scheme to avoid the operator-splitting errors. • The accuracy and efficiency of the method is confirmed with a 7-assembly test problem. • The effects of different spatial discretization schemes are investigated and compared to the RANS-based CFD simulations. - Abstract: For efficient and accurate temperature predictions of sodium fast reactor structures, a 3-D full-core conjugate heat transfer modeling capability is developed for an advanced system analysis tool, SAM. The hexagon lattice core is modeled with 1-D parallel channels representing the subassembly flow, and 2-D duct walls and inter-assembly gaps. The six sides of the hexagon duct wall and near-wall coolant region are modeled separately to account for different temperatures and heat transfer between coolant flow and each side of the duct wall. The Jacobian Free Newton Krylov (JFNK) solution method is applied to solve the fluid and solid field simultaneously in a fully coupled fashion. The 3-D full-core conjugate heat transfer modeling capability in SAM has been demonstrated by a verification test problem with 7 fuel assemblies in a hexagon lattice layout. Additionally, the SAM simulation results are compared with RANS-based CFD simulations. Very good agreements have been achieved between the results of the two approaches.
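
    The fully coupled JFNK solve described above can be illustrated on a toy conjugate heat transfer balance. The sketch below assumes SciPy's newton_krylov and invented physical constants; it is not the SAM discretization:

        import numpy as np
        from scipy.optimize import newton_krylov

        h, A = 150.0, 0.02      # heat transfer coefficient [W/m^2K], area [m^2]
        q_solid = 500.0         # heat generated in the solid [W]
        mdot_cp = 40.0          # coolant flow heat capacity rate [W/K]
        T_in = 600.0            # coolant inlet temperature [K]

        def residual(T):
            """Coupled steady-state energy balances for [T_fluid, T_solid],
            solved simultaneously (no operator splitting)."""
            Tf, Ts = T
            r_fluid = mdot_cp * (Tf - T_in) - h * A * (Ts - Tf)  # fluid heats up
            r_solid = q_solid - h * A * (Ts - Tf)                # solid sheds heat
            return np.array([r_fluid, r_solid])

        # Newton-Krylov never forms the Jacobian explicitly; it probes it with
        # finite-difference matrix-vector products.
        T = newton_krylov(residual, np.array([650.0, 900.0]))
        print("T_fluid = %.1f K, T_solid = %.1f K" % (T[0], T[1]))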

  1. Blind test of methods for obtaining 2-D near-surface seismic velocity models from first-arrival traveltimes

    Science.gov (United States)

    Zelt, Colin A.; Haines, Seth; Powers, Michael H.; Sheehan, Jacob; Rohdewald, Siegfried; Link, Curtis; Hayashi, Koichi; Zhao, Don; Zhou, Hua-wei; Burton, Bethany L.; Petersen, Uni K.; Bonal, Nedra D.; Doll, William E.

    2013-01-01

    Seismic refraction methods are used in environmental and engineering studies to image the shallow subsurface. We present a blind test of inversion and tomographic refraction analysis methods using a synthetic first-arrival-time dataset that was made available to the community in 2010. The data are realistic in terms of the near-surface velocity model, shot-receiver geometry and the data's frequency and added noise. Fourteen estimated models were determined by ten participants using eight different inversion algorithms, with the true model unknown to the participants until it was revealed at a session at the 2011 SAGEEP meeting. The estimated models are generally consistent in terms of their large-scale features, demonstrating the robustness of refraction data inversion in general, and the eight inversion algorithms in particular. When compared to the true model, all of the estimated models contain a smooth expression of its two main features: a large offset in the bedrock and the top of a steeply dipping low-velocity fault zone. The estimated models do not contain a subtle low-velocity zone and other fine-scale features, in accord with conventional wisdom. Together, the results support confidence in the reliability and robustness of modern refraction inversion and tomographic methods.

  2. Structural modeling techniques by finite element method

    International Nuclear Information System (INIS)

    Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong

    1991-01-01

    This book includes introduction table of contents chapter 1 finite element idealization introduction summary of the finite element method equilibrium and compatibility in the finite element solution degrees of freedom symmetry and anti symmetry modeling guidelines local analysis example references chapter 2 static analysis structural geometry finite element models analysis procedure modeling guidelines references chapter 3 dynamic analysis models for dynamic analysis dynamic analysis procedures modeling guidelines and modeling guidelines.

  3. RESOLVING THE QUESTION OF DOUBT: GEOMETRICAL DEMONSTRATION IN THE MEDITATIONS

    Directory of Open Access Journals (Sweden)

    Steven BURGESS

    2012-11-01

    Full Text Available The question of what Descartes did and did not doubt in the Meditations has received a significant amount of scholarly attention in recent years. The process of doubt in Meditation I gives one the impression of a rather extreme form of skepticism, while the responses Descartes offers in the Objections and Replies make it clear that there is in fact a whole background of presuppositions that are never doubted, including many that are never even entertained as possible candidates of doubt. This paper resolves the question of this undoubted background of rationality by taking seriously Descartes’ claim that he is carrying out demonstrations modeled after the great geometers. The rational order of geometrical demonstration demands that we first clear away previous demonstrations not proven with the certainty necessary for genuine science. This is accomplished by the method of doubt, which is only applied to the results of possible demonstrations. What cannot be doubted are the very concepts and principles employed in carrying out geometrical demonstration, which enable it to take place. It would be senseless to ask whether we can doubt the essential components of the structure through which questioning, doubting, and demonstration are made possible.

  4. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    Energy Technology Data Exchange (ETDEWEB)

    Beckon, William N., E-mail: William_Beckon@fws.gov

    2016-07-15

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).
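
    The lag-scanning idea described above reduces to a simple search: regress tissue concentration on exposure at each candidate lag and keep the lag with the best fit. A minimal sketch with synthetic data and a known 12-step lag (not the selenium dataset):

        import numpy as np

        rng = np.random.default_rng(1)
        n, true_lag = 200, 12
        exposure = np.cumsum(rng.normal(size=n))        # ambient water concentration
        tissue = np.empty(n)
        tissue[true_lag:] = 2.0 * exposure[:-true_lag]  # bioaccumulation with lag
        tissue[:true_lag] = tissue[true_lag]
        tissue += 0.5 * rng.normal(size=n)              # measurement noise

        def r2_at_lag(lag):
            x, y = exposure[: n - lag], tissue[lag:]
            return np.corrcoef(x, y)[0, 1] ** 2

        lags = range(0, 40)
        best = max(lags, key=r2_at_lag)                 # should recover ~12
        print("best lag =", best, "R^2 =", round(r2_at_lag(best), 3))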

  6. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).
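
    A toy sketch of the ARR idea (invented flow-network example, not the referenced NASA algorithm): model constraints that must hold among fault-free readings are checked, and, under a single-fault assumption, a satisfied relation exonerates the sensors it involves while a violated one implicates them:

        # Sensors: flows f1 and f2 feed a junction whose outflow is f3,
        # and f3 drives a pressure drop p = k * f3.
        k = 2.0
        readings = {"f1": 3.0, "f2": 1.0, "f3": 4.0, "p": 11.0}  # p should be 8.0

        arrs = {
            "mass balance": (("f1", "f2", "f3"), lambda r: r["f1"] + r["f2"] - r["f3"]),
            "pressure law": (("f3", "p"), lambda r: r["p"] - k * r["f3"]),
        }

        tol = 0.1
        suspects = set(readings)
        for name, (sensors, resid) in arrs.items():
            if abs(resid(readings)) <= tol:
                # Single-fault assumption: a satisfied relation exonerates
                # every sensor it involves.
                suspects -= set(sensors)
            else:
                print("ARR violated:", name)

        print("logically suspect sensors:", suspects)   # -> {'p'}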

  7. Optimizing Probability of Detection Point Estimate Demonstration

    Science.gov (United States)

    Koshti, Ajay M.

    2017-01-01

Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF), while keeping the flaw sizes in the set as small as possible.
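
    A minimal numerical sketch of the binomial arithmetic behind such a demonstration (the 90/95 target and flaw count follow the common 29-of-29 rule; the rest is illustrative), assuming SciPy:

        # If all 29 flaws are detected, the hypothesis POD < 0.90 can be
        # rejected at 95% confidence, because 0.90**29 < 0.05. The probability
        # of passing the demonstration (PPD) depends on the true POD.
        from scipy.stats import binom

        n = 29
        print(0.90 ** n)                      # ~0.047 < 0.05: 29/29 meets 90/95

        for true_pod in (0.90, 0.95, 0.99):
            ppd = binom.pmf(n, n, true_pod)   # must detect all n flaws to pass
            print(true_pod, round(ppd, 3))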

  8. Demonstration of finite element simulations in MOOSE using crystallographic models of irradiation hardening and plastic deformation

    Energy Technology Data Exchange (ETDEWEB)

    Patra, Anirban [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wen, Wei [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Martinez Saez, Enrique [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tome, Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-31

    This report describes the implementation of a crystal plasticity framework (VPSC) for irradiation hardening and plastic deformation in the finite element code, MOOSE. Constitutive models for irradiation hardening and the crystal plasticity framework are described in a previous report [1]. Here we describe these models briefly and then describe an algorithm for interfacing VPSC with finite elements. Example applications of tensile deformation of a dog bone specimen and a 3D pre-irradiated bar specimen performed using MOOSE are demonstrated.

  9. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which method is the appropriate one to choose. To this end, three approaches to estimation in the theta logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance
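
    For reference, the theta logistic state-space model benchmarked above can be simulated in a few lines (generic parameter values, assumed here for illustration); such simulated series are the input to the HMM, ADMB and BUGS estimators:

        import numpy as np

        rng = np.random.default_rng(2)
        r0, K, theta = 0.5, 100.0, 1.0       # growth rate, capacity, shape
        sig_proc, sig_obs = 0.05, 0.1        # process and observation noise s.d.

        T = 100
        X = np.empty(T)                      # latent log-abundance
        X[0] = np.log(10.0)
        for t in range(1, T):
            growth = r0 * (1.0 - (np.exp(X[t - 1]) / K) ** theta)
            X[t] = X[t - 1] + growth + sig_proc * rng.normal()

        Y = X + sig_obs * rng.normal(size=T)  # observed log-abundance
        print(np.exp(Y[-5:]).round(1))        # population settles near K = 100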

  10. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite bus system and compared with a method where both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
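
    The iterate-filter-then-fit structure described above can be sketched on a scalar linear-Gaussian system instead of a synchronous machine (all values invented): the E-step estimates states with a Kalman filter under the current parameter, and the M-step refits the parameter from the estimated states. Since filtered rather than smoothed states are used, a small bias remains; the sketch only illustrates the iteration structure:

        import numpy as np

        rng = np.random.default_rng(3)
        a_true, q, r = 0.8, 0.1, 0.2            # x_t = a*x_{t-1} + w,  y_t = x_t + v
        T = 500
        x = np.zeros(T)
        for t in range(1, T):
            x[t] = a_true * x[t - 1] + np.sqrt(q) * rng.normal()
        y = x + np.sqrt(r) * rng.normal(size=T)

        def kalman_means(a):
            m, P = 0.0, 1.0
            out = np.empty(T)
            for t in range(T):
                m, P = a * m, a * a * P + q              # predict
                K = P / (P + r)                          # Kalman gain
                m, P = m + K * (y[t] - m), (1 - K) * P   # update
                out[t] = m
            return out

        a = 0.2                                          # poor initial guess
        for it in range(20):                             # EM-style iterations
            xs = kalman_means(a)                         # E-step: state estimates
            a = (xs[1:] @ xs[:-1]) / (xs[:-1] @ xs[:-1]) # M-step: least-squares fit
        print("estimated a =", round(a, 3), "true a =", a_true)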

  11. Systems and methods for modeling and analyzing networks

    Science.gov (United States)

    Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W

    2013-10-29

    The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models, from data and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites to phenotypic readouts of efficacy and toxicity.

  12. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    Science.gov (United States)

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMM can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.

  13. A Kriging Model Based Finite Element Model Updating Method for Damage Detection

    Directory of Open Access Journals (Sweden)

    Xiuming Yang

    2017-10-01

Full Text Available Model updating is an effective means of damage identification, and surrogate modeling has attracted considerable attention for saving computational cost in finite element (FE) model updating, especially for large-scale structures. In this context, a surrogate model of frequency is normally constructed for damage identification, while the frequency response function (FRF) is rarely used, as it usually changes dramatically with the updating parameters. This paper presents a new surrogate model based model updating method taking advantage of the measured FRFs. The Frequency Domain Assurance Criterion (FDAC) is used to build the objective function, whose nonlinear response surface is constructed by the Kriging model. Then, the efficient global optimization (EGO) algorithm is introduced to get the model updating results. The proposed method has good accuracy and robustness, which have been verified by a numerical simulation of a cantilever and experimental test data of a laboratory three-story structure.

  14. Reliability demonstration of imaging surveillance systems

    International Nuclear Information System (INIS)

    Sheridan, T.F.; Henderson, J.T.; MacDiarmid, P.R.

    1979-01-01

    Security surveillance systems which employ closed circuit television are being deployed with increasing frequency for the protection of property and other valuable assets. A need exists to demonstrate the reliability of such systems before their installation to assure that the deployed systems will operate when needed with only the scheduled amount of maintenance and support costs. An approach to the reliability demonstration of imaging surveillance systems which employ closed circuit television is described. Failure definitions based on industry television standards and imaging alarm assessment criteria for surveillance systems are discussed. Test methods which allow 24 hour a day operation without the need for numerous test scenarios, test personnel and elaborate test facilities are presented. Existing reliability demonstration standards are shown to apply which obviate the need for elaborate statistical tests. The demonstration methods employed are shown to have applications in other types of imaging surveillance systems besides closed circuit television

  15. Coach simplified structure modeling and optimization study based on the PBM method

    Science.gov (United States)

    Zhang, Miaoli; Ren, Jindong; Yin, Ying; Du, Jian

    2016-09-01

A Pareto solution set is acquired, and the selection strategy for the final solution is discussed. The case study demonstrates that the mechanical performance of the structure can be well modeled and simulated by the PBM beam. Owing to its fewer parameters and convenience of use, this method is suitable for application in the concept stage. Another merit is that the optimization results are requirements on the mechanical performance of the beam section rather than on its shape and dimensions, which brings flexibility to the succeeding design.

  16. Cache memory modelling method and system

    OpenAIRE

    Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis

    2011-01-01

    The invention relates to a method for modelling a data cache memory of a destination processor, in order to simulate the behaviour of said data cache memory during the execution of a software code on a platform comprising said destination processor. According to the invention, the simulation is performed on a native platform having a processor different from the destination processor comprising the aforementioned data cache memory to be modelled, said modelling being performed by means of the...

  17. Explicit demonstration of the convergence of the close-coupling method for a Coulomb three-body problem

    International Nuclear Information System (INIS)

    Bray, I.; Stelbovics, A.T.

    1992-01-01

    Convergence as a function of the number of states is studied and demonstrated for the Poet-Temkin model of electron-hydrogen scattering. In this Coulomb three-body problem only the l=0 partial waves are treated. By taking as many as thirty target states, obtained by diagonalizing the target Hamiltonian in a Laguerre basis, complete agreement with the smooth results of Poet is obtained at all energies. We show that the often-encountered pseudoresonance features in the cross sections are simply an indication of an inadequate target state representation

  18. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  19. A new assessment method for demonstrating the sufficiency of the safety assessment and the safety margins of the geological disposal system

    International Nuclear Information System (INIS)

    Ohi, Takao; Kawasaki, Daisuke; Chiba, Tamotsu; Takase, Toshio; Hane, Koji

    2013-01-01

    A new method for demonstrating the sufficiency of the safety assessment and safety margins of the geological disposal system has been developed. The method is based on an existing comprehensive sensitivity analysis method and can systematically identify the successful conditions, under which the dose rate does not exceed specified safety criteria, using analytical solutions for nuclide migration and the results of a statistical analysis. The successful conditions were identified using three major variables. Furthermore, the successful conditions at the level of factors or parameters were obtained using relational equations between the variables and the factors or parameters making up these variables. In this study, the method was applied to the safety assessment of the geological disposal of transuranic waste in Japan. Based on the system response characteristics obtained from analytical solutions and on the successful conditions, the classification of the analytical conditions, the sufficiency of the safety assessment and the safety margins of the disposal system were then demonstrated. A new assessment procedure incorporating this method into the existing safety assessment approach is proposed in this study. Using this procedure, it is possible to conduct a series of safety assessment activities in a logical manner. (author)

  20. 3D Face modeling using the multi-deformable method.

    Science.gov (United States)

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-09-25

In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, which is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper.

  1. Design of nuclear power generation plants adopting model engineering method

    International Nuclear Information System (INIS)

    Waki, Masato

    1983-01-01

The utilization of model engineering as a design method began about ten years ago in nuclear power generation plants. By this method, the result of a design can be confirmed three-dimensionally before actual production, and it is a quick and sure way to meet the various needs that arise in design. The adoption of models aims mainly at improving the quality of design, since high safety is required of nuclear power plants in spite of their complex structure. The layout of nuclear power plants and piping design require model engineering in order to arrange an enormous quantity of items rationally within a limited period. As methods of model engineering there are the use of check models and of design models; recently, the latter method has mainly been taken. The procedure of manufacturing models and engineering with them is explained. After model engineering has been completed, the model information must be expressed in drawings, and the automation of this process has been attempted by various methods. The computer processing of design is in progress, and its role is explained (CAD system). (Kako, I.)

  2. A "total parameter estimation" method in the verification of distributed hydrological models

    Science.gov (United States)

    Wang, M.; Qin, D.; Wang, H.

    2011-12-01

    China. The application results demonstrate that this comprehensive testing method is very useful in the development of a distributed hydrological model and it provides a new way of thinking in hydrological sciences.

  3. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

Full Text Available Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable were proposed; they both achieve better modularity than Lamport's method in modeling the real-time systems. In contrast to timed automata based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both the time and memory efficiency.
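
    A minimal sketch of the leaping-Tick idea (a toy Python scheduler, not a model-checker encoding): instead of advancing time one unit per tick, the clock jumps directly to the next pending deadline, so the number of visited time points shrinks to the number of events:

        import heapq

        now = 0
        timers = [(7, "task A deadline"), (3, "task B period"), (12, "watchdog")]
        heapq.heapify(timers)

        while timers:
            t_next, event = heapq.heappop(timers)
            leap = t_next - now          # one Tick leaps multiple time units
            now = t_next
            print(f"tick leaps {leap:>2} units -> t={now:>2}: {event}")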

  4. An Equivalent Source Method for Modelling the Lithospheric Magnetic Field Using Satellite and Airborne Magnetic Data

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

We present a technique for modelling the lithospheric magnetic field based on estimation of equivalent potential field sources. As a first demonstration we present an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010, using three-component vector field data. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid with an increasing grid resolution towards the airborne survey area. Advantages of the equivalent source method include its local nature and the ease of transforming to spherical harmonics when needed. The method can also be applied in local, high resolution investigations of the lithospheric magnetic field, for example where suitable aeromagnetic data is available. The corresponding source values are estimated using an iteratively reweighted least squares algorithm that includes model

  5. X-231B technology demonstration for in situ treatment of contaminated soil: Contaminant characterization and three dimensional spatial modeling

    International Nuclear Information System (INIS)

    West, O.R.; Siegrist, R.L.; Mitchell, T.J.; Pickering, D.A.; Muhr, C.A.; Greene, D.W.; Jenkins, R.A.

    1993-11-01

    Fine-textured soils and sediments contaminated by trichloroethylene (TCE) and other chlorinated organics present a serious environmental restoration challenge at US Department of Energy (DOE) sites. DOE and Martin Marietta Energy Systems, Inc. initiated a research and demonstration project at Oak Ridge National Laboratory. The goal of the project was to demonstrate a process for closure and environmental restoration of the X-231B Solid Waste Management Unit at the DOE Portsmouth Gaseous Diffusion Plant. The X-231B Unit was used from 1976 to 1983 as a land disposal site for waste oils and solvents. Silt and clay deposits beneath the unit were contaminated with volatile organic compounds and low levels of radioactive substances. The shallow groundwater was also contaminated, and some contaminants were at levels well above drinking water standards. This document begins with a summary of the subsurface physical and contaminant characteristics obtained from investigative studies conducted at the X-231B Unit prior to January 1992 (Sect. 2). This is then followed by a description of the sample collection and analysis methods used during the baseline sampling conducted in January 1992 (Sect. 3). The results of this sampling event were used to develop spatial models for VOC contaminant distribution within the X-231B Unit

  6. Demonstration of two-phase Direct Numerical Simulation (DNS) methods potentiality to give information to averaged models: application to bubbles column

    International Nuclear Information System (INIS)

    Magdeleine, S.

    2009-11-01

This work is part of a long-term project that aims at using two-phase Direct Numerical Simulation (DNS) to give information to averaged models. For now, it is limited to isothermal bubbly flows with no phase change. It can be divided into two parts. Firstly, theoretical developments are made in order to build an equivalent of Large Eddy Simulation (LES) for two-phase flows, called Interfaces and Sub-grid Scales (ISS). After the implementation of the ISS model in our code, called Trio U, a set of varied test cases is used to validate this model. Then, specific tests are made in order to optimize the model for our particular bubbly flows. We thus show the capacity of the ISS model to produce a pertinent solution cheaply. Secondly, we use the ISS model to perform simulations of bubbly flows in a column. Results of these simulations are averaged to obtain the quantities that appear in the mass, momentum and interfacial area density balances. We thus performed an a priori test of a complete one-dimensional averaged model. We showed that this model predicts the simplest flows (laminar and monodisperse) well. Moreover, the hypothesis of one pressure, which is often made in averaged models like CATHARE, NEPTUNE and RELAP5, is satisfied in such flows. In contrast, without a polydisperse model, the drag is over-predicted and the uncorrelated A_i flux needs a closure law. Finally, we showed that in turbulent flows, fluctuations of velocity and pressure in the liquid phase are not represented by the tested averaged model. (author)

  7. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, which show how the method can be used: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  8. Parameter Identification of Ship Maneuvering Models Using Recursive Least Square Method Based on Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Man Zhu

    2017-03-01

Full Text Available Determination of ship maneuvering models is a tough task of ship maneuverability prediction. Among several prime approaches to estimating ship maneuvering models, system identification combined with a full-scale or free-running model test is preferred. In this contribution, real-time system identification programs using recursive identification methods, such as the recursive least squares method (RLS), are employed for on-line identification of ship maneuvering models. However, this method depends strongly on the objects of study and the initial values of the identified parameters. To overcome this, an intelligent technique, i.e., support vector machines (SVM), is first used to estimate initial values of the identified parameters with finite samples. As real measured motion data of the Mariner class ship always involve noise from sensors and external disturbances, the zigzag simulation test data include a substantial quantity of Gaussian white noise. The wavelet method and empirical mode decomposition (EMD) are used, respectively, to filter the data corrupted by noise. The choice of the number of samples for SVM to decide the initial values of the identified parameters is extensively discussed and analyzed. With de-noised motion data as input-output training samples, the parameters of ship maneuvering models are estimated using RLS and SVM-RLS, respectively. The comparison between the identification results and the true values of the parameters demonstrates that both the identified ship maneuvering models from RLS and SVM-RLS are in reasonable agreement with the simulated motions of the ship, and increasing the number of samples for SVM positively affects the identification results. Furthermore, SVM-RLS using data de-noised by EMD shows the highest accuracy and the best convergence.
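
    The RLS recursion at the core of the scheme above can be sketched generically (a linear-in-parameters toy model with invented values, not the full ship maneuvering equations; the SVM initialization step is represented only by a comment):

        import numpy as np

        rng = np.random.default_rng(4)
        theta_true = np.array([1.5, -0.7, 0.3])
        n = len(theta_true)

        theta = np.zeros(n)              # initial estimate (from SVM in the paper)
        P = 1e3 * np.eye(n)              # initial covariance
        lam = 0.99                       # forgetting factor

        for k in range(300):
            phi = rng.normal(size=n)                     # regressor vector
            y = phi @ theta_true + 0.05 * rng.normal()   # noisy measurement
            K = P @ phi / (lam + phi @ P @ phi)          # gain
            theta = theta + K * (y - phi @ theta)        # parameter update
            P = (P - np.outer(K, phi) @ P) / lam         # covariance update

        print(theta.round(3))            # close to [1.5, -0.7, 0.3]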

  9. Application of modeling methods for an estimation of a specific activity 137Cs geologic environment

    International Nuclear Information System (INIS)

    Kalinovskij, A.K.; Batij, V.G.; Pravdivyj, A.A.; Krasnov, V.A.

    2004-01-01

The application of mathematical and physical modeling methods to estimate the specific activity of 137Cs in the soils composing the geological profile of the 'Ukryttya' object site is demonstrated. The calculations were performed with the MicroShield, CYCLON and MCNP5 software packages. Experimental measurements were carried out with logging radiometers of different types on borehole models. The value of the infinite-medium conversion coefficient for quantitative interpretation of gamma-ray logging data is determined from the calculation results and the experimental measurements. The calculated and experimental values agree with each other. An error estimate of the obtained results is given. 26 refs., 3 tab., 10 figs

  10. New method dynamically models hydrocarbon fractionation

    Energy Technology Data Exchange (ETDEWEB)

    Kesler, M.G.; Weissbrod, J.M.; Sheth, B.V. [Kesler Engineering, East Brunswick, NJ (United States)

    1995-10-01

A new method for calculating distillation column dynamics can be used to model time-dependent effects of independent disturbances for a range of hydrocarbon fractionation. It can model crude atmospheric and vacuum columns, with relatively few equilibrium stages and a large number of components, to C3 splitters, with few components and up to 300 equilibrium stages. Simulation results are useful for operations analysis, process-control applications and closed-loop control in petroleum, petrochemical and gas processing plants. The method is based on an implicit approach, where the time-dependent variations of inventory, temperatures, liquid and vapor flows and compositions are superimposed at each time step on the steady-state solution. Newton-Raphson (N-R) techniques are then used to simultaneously solve the resulting finite-difference equations of material, equilibrium and enthalpy balances that characterize distillation dynamics. The important innovation is component-aggregation and tray-aggregation to contract the equations without compromising accuracy. This contraction increases the stability of the N-R calculations. It also significantly increases calculational speed, which is particularly important in dynamic simulations. This method provides a sound basis for closed-loop, supervisory control of distillation, directly or via multivariable controllers, based on a rigorous, phenomenological column model.

  11. Advanced Instrumentation and Control Methods for Small and Medium Reactors with IRIS Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    J. Wesley Hines; Belle R. Upadhyaya; J. Michael Doster; Robert M. Edwards; Kenneth D. Lewis; Paul Turinsky; Jamie Coble

    2011-05-31

    Development and deployment of small-scale nuclear power reactors and their maintenance, monitoring, and control are part of the mission under the Small Modular Reactor (SMR) program. The objectives of this NERI-consortium research project are to investigate, develop, and validate advanced methods for sensing, controlling, monitoring, diagnosis, and prognosis of these reactors, and to demonstrate the methods with application to one of the proposed integral pressurized water reactors (IPWR). For this project, the IPWR design by Westinghouse, the International Reactor Secure and Innovative (IRIS), has been used to demonstrate the techniques developed under this project. The research focuses on three topical areas with the following objectives. Objective 1 - Develop and apply simulation capabilities and sensitivity/uncertainty analysis methods to address sensor deployment analysis and small grid stability issues. Objective 2 - Develop and test an autonomous and fault-tolerant control architecture and apply to the IRIS system and an experimental flow control loop, with extensions to multiple reactor modules, nuclear desalination, and optimal sensor placement strategy. Objective 3 - Develop and test an integrated monitoring, diagnosis, and prognosis system for SMRs using the IRIS as a test platform, and integrate process and equipment monitoring (PEM) and process and equipment prognostics (PEP) toolboxes. The research tasks are focused on meeting the unique needs of reactors that may be deployed to remote locations or to developing countries with limited support infrastructure. These applications will require smaller, robust reactor designs with advanced technologies for sensors, instrumentation, and control. An excellent overview of SMRs is described in an article by Ingersoll (2009). The article refers to these as deliberately small reactors. Most of these have modular characteristics, with multiple units deployed at the same plant site. Additionally, the topics focus

  12. Global Optimization Ensemble Model for Classification Methods

    Science.gov (United States)

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30%, depending upon the algorithm complexity. PMID:24883382
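
    The GMC procedure itself is not given in this record; as a generic illustration of why combining heterogeneous classifiers can lift accuracy, the sketch below compares three scikit-learn base classifiers against a soft-voting ensemble of them (dataset and member choices are arbitrary).

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    members = [
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("nb", GaussianNB()),
    ]
    # Compare each base classifier with a soft-voting ensemble of all three.
    for name, clf in members + [("ensemble", VotingClassifier(members, voting="soft"))]:
        print(f"{name}: {cross_val_score(clf, X, y, cv=5).mean():.3f}")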

  13. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30%, depending upon the algorithm complexity.

  14. Diffuse interface methods for multiphase flow modeling

    International Nuclear Information System (INIS)

    Jamet, D.

    2004-01-01

    Full text of publication follows: Nuclear reactor safety programs need to get a better description of some stages of identified incident or accident scenarios. For some of them, such as the reflooding of the core or the dryout of fuel rods, the heat, momentum and mass transfers taking place at the scale of droplets or bubbles are among the key physical phenomena for which a better description is needed. Experiments are difficult to perform at these very small scales, and direct numerical simulation is viewed as a promising way to gain new insight into these complex two-phase flows. This type of simulation requires numerical methods that are accurate, efficient and easy to run in three space dimensions and on parallel computers. Despite many years of development, direct numerical simulation of two-phase flows is still very challenging, mostly because it requires solving moving boundary problems. To avoid this major difficulty, a new class of numerical methods is emerging, called diffuse interface methods. These methods are based on physical theories dating back to van der Waals and mostly used in materials science. In these methods, interfaces separating two phases are modeled as continuous transition zones instead of surfaces of discontinuity. Since all the physical variables encounter possibly strong but nevertheless always continuous variations across the interfacial zones, these methods virtually eliminate the difficult moving boundary problem. We show that these methods lead to a single-phase-like system of equations, which makes it easier to code in 3D and to parallelize compared to more classical methods. The first method presented is dedicated to liquid-vapor flows with phase change. It is based on van der Waals' theory of capillarity. This method has been used to study nucleate boiling of a pure fluid and of dilute binary mixtures. We discuss the importance of the choice and the meaning of the order parameter, i.e. a scalar which discriminates one
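
    As a minimal illustration of the "continuous transition zone" idea, the snippet below evaluates the classical tanh equilibrium profile that van der Waals/Cahn-Hilliard-type theories predict across a flat liquid-vapor interface; the densities and thickness are made-up values.

    import numpy as np

    # Equilibrium density across a flat liquid-vapor diffuse interface:
    # a smooth tanh transition rather than a discontinuity to be tracked.
    rho_l, rho_v = 1.0, 0.1      # bulk liquid and vapor densities (made up)
    h = 0.05                     # interface thickness parameter (made up)
    z = np.linspace(-0.5, 0.5, 11)
    rho = 0.5 * (rho_l + rho_v) + 0.5 * (rho_l - rho_v) * np.tanh(z / (2 * h))
    print(np.round(rho, 3))      # continuous variation across the interface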

  15. Element-by-element parallel spectral-element methods for 3-D teleseismic wave modeling

    KAUST Repository

    Liu, Shaolin

    2017-09-28

    The development of an efficient algorithm for teleseismic wave field modeling is valuable for calculating the gradients of the misfit function (termed misfit gradients) or Fréchet derivatives when the teleseismic waveform is used for adjoint tomography. Here, we introduce an element-by-element parallel spectral-element method (EBE-SEM) for the efficient modeling of teleseismic wave field propagation in a reduced geology model. Under the plane-wave assumption, the frequency-wavenumber (FK) technique is implemented to compute the boundary wave field used to construct the boundary condition of the teleseismic wave incidence. To reduce the memory required for the storage of the boundary wave field for the incidence boundary condition, a strategy is introduced to efficiently store the boundary wave field on the model boundary. The perfectly matched layers absorbing boundary condition (PML ABC) is formulated using the EBE-SEM to absorb the scattered wave field from the model interior. The misfit gradient can easily be constructed in each time step during the calculation of the adjoint wave field. Three synthetic examples demonstrate the validity of the EBE-SEM for use in teleseismic wave field modeling and the misfit gradient calculation.

  16. An Adaptive Model Predictive Load Frequency Control Method for Multi-Area Interconnected Power Systems with Photovoltaic Generations

    Directory of Open Access Journals (Sweden)

    Guo-Qiang Zeng

    2017-11-01

    Full Text Available As the penetration level of renewable distributed generations such as wind turbine generators and photovoltaic stations increases, the load frequency control issue of a multi-area interconnected power system becomes more challenging. This paper presents an adaptive model predictive load frequency control method for a multi-area interconnected power system with photovoltaic generation, considering nonlinear features such as a dead band for the governor and a generation rate constraint for the steam turbine. The dynamic characteristics of this system are first formulated as a discrete-time state-space model. Then, the predictive dynamic model is obtained by introducing an expanded state vector, and rolling optimization of the control signal is implemented based on a cost function that minimizes the weighted sum of squared predicted errors and squared future control values. The simulation results on a typical two-area power system consisting of photovoltaic and thermal generators have demonstrated the superiority of the proposed model predictive control method over state-of-the-art control techniques such as firefly algorithm, genetic algorithm, and population extremal optimization-based proportional-integral control methods under normal conditions, load disturbances and parameter uncertainty.
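
    The rolling-optimization step described above (minimize a weighted sum of squared predicted errors and squared control values, apply the first move, then repeat) can be sketched for an unconstrained toy state-space model as follows; the matrices, weights and horizon are illustrative assumptions, not the paper's two-area LFC model.

    import numpy as np

    # Toy discrete-time plant x[k+1] = A x[k] + B u[k], y = C x.
    A = np.array([[0.95, 0.10], [0.0, 0.90]])
    B = np.array([[0.0], [0.05]])
    C = np.array([[1.0, 0.0]])
    Np, q, r = 20, 1.0, 0.1          # horizon, error weight, control weight

    # Stack the horizon predictions: Y = F x0 + G U.
    F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(Np)])
    G = np.zeros((Np, Np))
    for i in range(Np):
        for j in range(i + 1):
            G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]

    def mpc_control(x0, y_ref=0.0):
        """Minimize q*||y_ref - Y||^2 + r*||U||^2; apply the first move."""
        H = q * G.T @ G + r * np.eye(Np)
        g = q * G.T @ (y_ref - F @ x0)
        return np.linalg.solve(H, g)[0]

    x = np.array([1.0, 0.0])          # initial frequency deviation
    for k in range(30):               # receding-horizon loop
        u = mpc_control(x)
        x = A @ x + B.flatten() * u
    print("final deviation:", x[0])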

  17. Two updating methods for dissipative models with non symmetric matrices

    International Nuclear Information System (INIS)

    Billet, L.; Moine, P.; Aubry, D.

    1997-01-01

    In this paper the feasibility of extending two updating methods to rotating machinery models is considered; the particularity of rotating machinery models is that they use non-symmetric stiffness and damping matrices. It is shown that the two methods described here, the inverse eigensensitivity method and the error in constitutive relation method, can be adapted to such models given some modifications. As far as the inverse eigensensitivity method is concerned, an error function based on the differences between calculated and measured right-hand eigenmode shapes and between calculated and measured eigenvalues is used. Concerning the error in constitutive relation method, the equation which defines the error has to be modified because the stiffness matrix is not positive definite. The advantage of this modification is that, in some cases, it is possible to focus the updating process on specific model parameters. Both methods were validated on a simple test model consisting of a two-bearing and disc rotor system. (author)

  18. A service based estimation method for MPSoC performance modelling

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand

    2008-01-01

    This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle accurate design space exploration of complex architectures including multi processor configurations at a very early stage in the design phase. The modelling method uses a service...... oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced...

  19. Feasibility study using hypothesis testing to demonstrate containment of radionuclides within waste packages

    International Nuclear Information System (INIS)

    Thomas, R.E.

    1986-04-01

    The purpose of this report is to apply methods of statistical hypothesis testing to demonstrate the performance of containers of radioactive waste. The approach involves modeling the failure times of waste containers using Weibull distributions, making strong assumptions about the parameters. A specific objective is to apply methods of statistical hypothesis testing to determine the number of container tests that must be performed in order to control the probability of arriving at the wrong conclusions. An algorithm to determine the required number of containers to be tested with the acceptable number of failures is derived as a function of the distribution parameters, stated probabilities, and the desired waste containment life. Using a set of reference values for the input parameters, sample sizes of containers to be tested are calculated for demonstration purposes. These sample sizes are found to be excessively large, indicating that this hypothesis-testing framework does not provide a feasible approach for demonstrating satisfactory performance of waste packages for exceptionally long time periods
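
    The report's central calculation can be sketched as follows: assume Weibull failure times, compute the probability that a container fails within the test duration, and search for the smallest number of test articles that controls both error probabilities for a given allowed number of failures. The shape, scale, risk and duration values below are illustrative stand-ins, not the report's reference parameters.

    import math
    from scipy.stats import binom

    shape = 1.5                        # assumed Weibull shape parameter
    t_test = 10.0                      # assumed test duration (years)

    def fail_prob(scale):
        """P(failure before t_test) for Weibull(shape, scale)."""
        return 1.0 - math.exp(-((t_test / scale) ** shape))

    p_good = fail_prob(scale=1000.0)   # acceptable containers
    p_bad = fail_prob(scale=100.0)     # unacceptable containers
    alpha, beta, c = 0.05, 0.10, 1     # producer risk, consumer risk, allowed failures

    n = c + 1
    while True:
        accept_good = binom.cdf(c, n, p_good)   # P(accept | good lot)
        accept_bad = binom.cdf(c, n, p_bad)     # P(accept | bad lot)
        if accept_good >= 1 - alpha and accept_bad <= beta:
            break
        n += 1
    print(f"test {n} containers, accept if at most {c} fail")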

  20. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    Science.gov (United States)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  1. Modeling and Evaluation of Geophysical Methods for Monitoring and Tracking CO2 Migration

    Energy Technology Data Exchange (ETDEWEB)

    Daniels, Jeff

    2012-11-30

    collection, and seismic interpretation. The data was input into GphyzCO2 to demonstrate a full implementation of the software capabilities. Part of the implementation investigated the limits of using geophysical methods to monitor CO{sub 2} injection sites. The results show that cross-hole EM numerical surveys are limited to under 100 meter borehole separation. Those results were utilized in executing numerical EM surveys that contain hypothetical CO{sub 2} injections. The outcome of the forward modeling shows that EM methods can detect the presence of CO{sub 2}.

  2. The Bruton Tyrosine Kinase (BTK) Inhibitor Acalabrutinib Demonstrates Potent On-Target Effects and Efficacy in Two Mouse Models of Chronic Lymphocytic Leukemia

    DEFF Research Database (Denmark)

    Herman, Sarah E M; Montraveta, Arnau; Niemann, Carsten U

    2017-01-01

    into the drinking water. Results: Utilizing biochemical assays, we demonstrate that acalabrutinib is a highly selective BTK inhibitor as compared with ibrutinib. In the human CLL NSG xenograft model, treatment with acalabrutinib demonstrated on-target effects, including decreased phosphorylation of PLCγ2, ERK......). In two complementary mouse models of CLL, acalabrutinib significantly reduced tumor burden and increased survival compared with vehicle treatment. Overall, acalabrutinib showed increased BTK selectivity compared with ibrutinib while demonstrating significant antitumor efficacy in vivo on par...... with ibrutinib. Clin Cancer Res; 23(11); 2831-41. ©2016 AACR....

  3. Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science

    Science.gov (United States)

    Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)

    2001-01-01

    Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each method's strengths are utilized. The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are

  4. On Lack of Robustness in Hydrological Model Development Due to Absence of Guidelines for Selecting Calibration and Evaluation Data: Demonstration for Data-Driven Models

    Science.gov (United States)

    Zheng, Feifei; Maier, Holger R.; Wu, Wenyan; Dandy, Graeme C.; Gupta, Hoshin V.; Zhang, Tuqiao

    2018-02-01

    Hydrological models are used for a wide variety of engineering purposes, including streamflow forecasting and flood-risk estimation. To develop such models, it is common to allocate the available data to calibration and evaluation data subsets. Surprisingly, the issue of how this allocation can affect model evaluation performance has been largely ignored in the research literature. This paper discusses the evaluation performance bias that can arise from how available data are allocated to calibration and evaluation subsets. As a first step to assessing this issue in a statistically rigorous fashion, we present a comprehensive investigation of the influence of data allocation on the development of data-driven artificial neural network (ANN) models of streamflow. Four well-known formal data splitting methods are applied to 754 catchments from Australia and the U.S. to develop 902,483 ANN models. Results clearly show that the choice of the method used for data allocation has a significant impact on model performance, particularly for runoff data that are more highly skewed, highlighting the importance of considering the impact of data splitting when developing hydrological models. The statistical behavior of the data splitting methods investigated is discussed and guidance is offered on the selection of the most appropriate data splitting methods to achieve representative evaluation performance for streamflow data with different statistical properties. Although our results are obtained for data-driven models, they highlight the fact that this issue is likely to have a significant impact on all types of hydrological models, especially conceptual rainfall-runoff models.
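
    A toy version of the effect under study: with a skewed, streamflow-like target, the evaluation skill reported for even a simple model varies noticeably with nothing but the seed of the calibration/evaluation split. The data here are synthetic and a linear model stands in for the paper's ANNs.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(500, 3))
    y = np.exp(3 * X[:, 0]) + rng.gamma(1.0, 1.0, size=500)   # skewed target

    scores = []
    for seed in range(100):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=seed)
        model = LinearRegression().fit(X_tr, y_tr)
        scores.append(r2_score(y_te, model.predict(X_te)))

    print(f"evaluation R2 across 100 random splits: "
          f"min={min(scores):.2f}, max={max(scores):.2f}")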

  5. PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Robin Ivey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Balestra, Paolo [Univ. of Rome (Italy); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-05-01

    A collaborative effort between Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, were used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did however result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings
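
    The adaptive schemes implemented in PHISICS/RELAP5-3D are driven by flux or power convergence criteria; as a generic stand-in, the sketch below applies the same accept-and-grow, reject-and-shrink logic to a toy ODE using a step-doubling local error estimate.

    import numpy as np

    def f(t, y):
        return -5.0 * y + np.sin(t)          # toy stiff-ish right-hand side

    def euler(t, y, dt):
        return y + dt * f(t, y)

    t, y, dt, tol = 0.0, 1.0, 0.01, 1e-4
    steps = 0
    while t < 20.0:
        full = euler(t, y, dt)                                   # one step
        half = euler(t + dt / 2, euler(t, y, dt / 2), dt / 2)    # two half steps
        err = abs(half - full)               # step-doubling error estimate
        if err < tol:                        # accept and try a larger step
            t, y = t + dt, half
            dt = min(dt * 1.5, 0.5)
            steps += 1
        else:                                # reject and shrink the step
            dt *= 0.5
    print(f"integrated to t={t:.2f} in {steps} accepted steps")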

  6. Simulated first operating campaign for the Integral Fast Reactor fuel cycle demonstration

    International Nuclear Information System (INIS)

    Goff, K.M.; Mariani, R.D.; Benedict, R.W.; Park, K.H.; Ackerman, J.P.

    1993-01-01

    This report discusses the Integral Fast Reactor (IFR), an innovative liquid-metal-cooled reactor concept that is being developed by Argonne National Laboratory. It takes advantage of the properties of metallic fuel and liquid-metal cooling to offer significant improvements in reactor safety, operation, fuel-cycle economics, environmental protection, and safeguards. Over the next few years, the IFR fuel cycle will be demonstrated at Argonne-West in Idaho. Spent fuel from the Experimental Breeder Reactor II (EBR-II) will be processed in its associated Fuel Cycle Facility (FCF) using a pyrochemical method that employs molten salts and liquid metals in an electrorefining operation. As part of the preparation for the fuel cycle demonstration, a computer code, PYRO, was developed at Argonne to model the electrorefining operation using thermodynamic and empirical data. This code has been used extensively to evaluate various operating strategies for the fuel cycle demonstration. The modeled results from the first operating campaign are presented. This campaign is capable of processing more than enough material to completely refuel the EBR-II core

  7. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...
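
    As a minimal example of the standard simulation such a text starts from (before variance reduction or multilevel estimators are introduced), here is a plain Monte Carlo estimate of a European call price under geometric Brownian motion, with a 95% confidence interval; all market parameters are invented.

    import numpy as np

    rng = np.random.default_rng(42)
    S0, K, r, sigma, T, n = 100.0, 105.0, 0.03, 0.2, 1.0, 1_000_000

    # Sample terminal prices under risk-neutral GBM and discount payoffs.
    Z = rng.standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)

    price = payoff.mean()
    stderr = payoff.std(ddof=1) / np.sqrt(n)
    print(f"MC price: {price:.4f} +/- {1.96 * stderr:.4f} (95% CI)")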

  8. Innovative technology demonstrations

    International Nuclear Information System (INIS)

    Anderson, D.B.; Luttrell, S.P.; Hartley, J.N.

    1992-08-01

    Environmental Management Operations (EMO) is conducting an Innovative Technology Demonstration Program for Tinker Air Force Base (TAFB). Several innovative technologies are being demonstrated to address specific problems associated with remediating two contaminated test sites at the base. Cone penetrometer testing (CPT) is a form of testing that can rapidly characterize a site. This technology was selected to evaluate its applicability in the tight clay soils and consolidated sandstone sediments found at TAFB. Directionally drilled horizontal wells were selected as a method that may be effective in accessing contamination beneath Building 3001 without disrupting the mission of the building, and in enhancing the extraction of contamination both in ground water and in soil. A soil gas extraction (SGE) demonstration, also known as soil vapor extraction, will evaluate the effectiveness of SGE in remediating fuels and TCE contamination contained in the tight clay soil formations surrounding the abandoned underground fuel storage vault located at the SW Tanks Site. In situ sensors have recently received much acclaim as a technology that can be effective in remediating hazardous waste sites. Sensors can be useful for determining real-time, in situ contaminant concentrations during the remediation process for performance monitoring and in providing feedback for controlling the remediation process. Following the SGE demonstration, the SGE system and SW Tanks test site will be modified to demonstrate bioremediation as an effective means of degrading the remaining contaminants in situ. The bioremediation demonstration will evaluate a bioventing process in which the naturally occurring consortium of soil bacteria will be stimulated to aerobically degrade soil contaminants, including fuel and TCE, in situ

  9. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    Science.gov (United States)

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
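
    Of the families surveyed, the singular value decomposition-based approaches are the easiest to show compactly: collect trajectory snapshots of a system (here a toy linear one), keep the dominant left singular vectors as a reduced basis, and project the dynamics onto it. The system size and energy threshold below are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # toy network Jacobian

    # Collect snapshots of trajectories from random initial conditions.
    snapshots = []
    for _ in range(20):
        x = rng.standard_normal(n)
        for _ in range(100):
            x = x + 0.01 * A @ x          # forward Euler step
            snapshots.append(x.copy())
    X = np.array(snapshots).T             # state dimension x snapshot count

    U, s, _ = np.linalg.svd(X, full_matrices=False)
    k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999) + 1
    V = U[:, :k]                          # reduced basis (dominant modes)
    A_red = V.T @ A @ V                   # projected k x k dynamics
    print(f"reduced {n} states to {k} modes")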

  10. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally bivariate blending method was......, on the other hand, lighter than the single-step method....

  11. Static Aeroelastic Deformation Effects in Preliminary Wind-tunnel Tests of Silent Supersonic Technology Demonstrator

    OpenAIRE

    Makino, Yoshikazu; Ohira, Keisuke; Makimoto, Takuya; Mitomo, Toshiteru; 牧野, 好和; 大平, 啓介; 牧本, 卓也; 三友, 俊輝

    2011-01-01

    Effects of static aeroelastic deformation of a wind-tunnel test model on the aerodynamic characteristics are discussed in wind-tunnel tests in the preliminary design phase of the silent supersonic technology demonstrator (S3TD). The static aeroelastic deformation of the main wing is estimated for JAXA 2m x 2m transonic wind-tunnel and 1m x 1m supersonic wind-tunnel by a finite element method (FEM) structural analysis in which its structural model is tuned with the model deformation calibratio...

  12. Network modelling methods for FMRI.

    Science.gov (United States)

    Smith, Stephen M; Miller, Karla L; Salimi-Khorshidi, Gholamreza; Webster, Matthew; Beckmann, Christian F; Nichols, Thomas E; Ramsey, Joseph D; Woolrich, Mark W

    2011-01-15

    There is great interest in estimating brain "networks" from FMRI data. This is often attempted by identifying a set of functional "nodes" (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. Copyright © 2010 Elsevier Inc. All rights reserved.
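
    One of the better-performing approaches in this comparison, partial correlation, separates direct from indirect coupling between nodes; a minimal sketch computes it from the precision (inverse covariance) matrix of toy timeseries in which node 0 influences node 2 only through node 1.

    import numpy as np

    rng = np.random.default_rng(0)
    T, n_nodes = 600, 5
    ts = rng.standard_normal((T, n_nodes))
    ts[:, 1] += 0.5 * ts[:, 0]            # direct 0 -> 1 connection
    ts[:, 2] += 0.5 * ts[:, 1]            # direct 1 -> 2 (0-2 only indirect)

    P = np.linalg.inv(np.cov(ts.T))       # precision matrix
    d = np.sqrt(np.diag(P))
    partial = -P / np.outer(d, d)         # normalized off-diagonals
    np.fill_diagonal(partial, 1.0)

    print("full corr 0-2:   ", np.corrcoef(ts.T)[0, 2].round(2))
    print("partial corr 0-2:", partial[0, 2].round(2))  # near zero: no direct edge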

  13. Numerical methods for modeling photonic-crystal VCSELs

    DEFF Research Database (Denmark)

    Dems, Maciej; Chung, Il-Sug; Nyakas, Peter

    2010-01-01

    We show comparison of four different numerical methods for simulating Photonic-Crystal (PC) VCSELs. We present the theoretical basis behind each method and analyze the differences by studying a benchmark VCSEL structure, where the PC structure penetrates all VCSEL layers, the entire top-mirror DBR...... to the effective index method. The simulation results elucidate the strength and weaknesses of the analyzed methods; and outline the limits of applicability of the different models....

  14. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
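
    The bookkeeping that the HBMA tree generalizes can be shown in a few lines: posterior model probabilities weight each model's prediction, and the total variance splits into a within-model and a between-model term. All numbers below are illustrative, not from the aquifer study.

    import numpy as np

    pred = np.array([4.2, 5.1, 4.7])             # each model's prediction
    var_within = np.array([0.30, 0.25, 0.40])    # each model's own variance
    log_like = np.array([-12.0, -10.5, -11.2])   # model log-likelihoods (assumed)

    w = np.exp(log_like - log_like.max())
    w /= w.sum()                                 # posterior model probabilities

    mean = np.sum(w * pred)
    within = np.sum(w * var_within)              # expected within-model variance
    between = np.sum(w * (pred - mean) ** 2)     # spread of means across models
    total = within + between

    print(f"weights={w.round(3)}, mean={mean:.2f}")
    print(f"within={within:.3f}, between={between:.3f}, total={total:.3f}")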

  15. Short ensembles: An Efficient Method for Discerning Climate-relevant Sensitivities in Atmospheric General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Hui; Rasch, Philip J.; Zhang, Kai; Qian, Yun; Yan, Huiping; Zhao, Chun

    2014-09-08

    This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.

  16. A Comprehensive Method for Comparing Mental Models of Dynamic Systems

    OpenAIRE

    Schaffernicht, Martin; Grösser, Stefan N.

    2011-01-01

    Mental models are the basis on which managers make decisions even though external decision support systems may provide help. Research has demonstrated that more comprehensive and dynamic mental models seem to be at the foundation for improved policies and decisions. Eliciting and comparing such models can systematically explicate key variables and their main underlying structures. In addition, superior dynamic mental models can be identified. This paper reviews existing studies which measure ...

  17. Modelling methods for milk intake measurements

    International Nuclear Information System (INIS)

    Coward, W.A.

    1999-01-01

    One component of the first Research Coordination Programme was a tutorial session on modelling in in-vivo tracer kinetic methods. This section describes the principles that are involved and how these can be translated into spreadsheets using Microsoft Excel and the SOLVER function to fit the model to the data. The purpose of this section is to describe the system developed within the RCM, and how it is used

  18. Modelling of the automatic stabilization system of the aircraft course by a fuzzy logic method

    Science.gov (United States)

    Mamonova, T.; Syryamkin, V.; Vasilyeva, T.

    2016-04-01

    The problem considered in the present paper is the development of a fuzzy model of an aircraft course stabilization system. In this work, the modelling of the aircraft course stabilization system with the application of fuzzy logic is described; the authors used data for an ordinary passenger plane. As a result of the study, the stabilization system models were realised in the Matlab/Simulink environment on the basis of a PID regulator and of fuzzy logic. The authors show that the use of this artificial intelligence method reduces the regulation time to 1, which is 50 times faster than when standard techniques of control theory are used. This fact demonstrates a positive influence of the use of fuzzy regulation.

  19. Derivation and experimental demonstration of the perturbed reactivity method for the determination of subcriticality

    International Nuclear Information System (INIS)

    Kwok, K.S.; Bernard, J.A.; Lanning, D.D.

    1992-01-01

    The perturbed reactivity method is a general technique for the estimation of reactivity. It is particularly suited to the determination of a reactor's initial degree of subcriticality and was developed to facilitate the automated startup of both spacecraft and multi-modular reactors using model-based control laws. It entails perturbing a shutdown reactor by the insertion of reactivity at a known rate and then estimating the initial degree of subcriticality from observation of the resulting reactor period. While similar to inverse kinetics, the perturbed reactivity method differs in that the net reactivity present in the core is treated as two separate entities. The first is that associated with the known perturbation. This quantity, together with the observed period and the reactor's describing parameters, forms the input to the method's implementing algorithm. The second entity, which is the algorithm's output, is the sum of all other reactivities, including those resulting from inherent feedback and the initial degree of subcriticality. During an automated startup, feedback effects will be minimal. Hence, when applied to a shutdown reactor, the output of the perturbed reactivity method will be a constant that is equal to the initial degree of subcriticality. This is a major advantage because repeated estimates can be made of this one quantity and signal smoothing techniques can be applied to enhance accuracy. In addition to describing the theoretical basis for the perturbed reactivity method, factors involved in its implementation, such as the movement of control devices other than those used to create the perturbation, source estimation, and techniques for data smoothing, are presented
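
    A simplified illustration of that two-entity bookkeeping, assuming an asymptotic period and textbook U-235 delayed-neutron constants (the actual method tracks the dynamic period during the ramp insertion): the inhour relation gives the net reactivity implied by an observed period, and subtracting the known perturbation leaves the estimate of everything else, here the initial subcriticality. The prompt neutron lifetime and all observed values are assumed figures.

    import numpy as np

    # Six-group U-235 delayed-neutron data (textbook values) and an
    # assumed prompt neutron lifetime.
    beta = np.array([0.000215, 0.001424, 0.001274,
                     0.002568, 0.000748, 0.000273])
    lam = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # 1/s
    ell = 1.0e-4                                                 # seconds

    def inhour(period):
        """Net reactivity (dk/k) implied by a stable reactor period (s)."""
        return ell / period + np.sum(beta / (1.0 + lam * period))

    rho_net = inhour(period=120.0)         # observed period during perturbation
    rho_inserted = 0.0015                  # known reactivity insertion (dk/k)
    rho_initial = rho_net - rho_inserted   # estimate of initial subcriticality
    print(f"net={rho_net:.5f}, initial subcriticality={rho_initial:.5f}")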

  20. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and with the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  1. Experiment and Modeling of ITER Demonstration Discharges in the DIII-D Tokamak

    International Nuclear Information System (INIS)

    Park, Jin Myung; Doyle, E. J.; Ferron, J.R.; Holcomb, C.T.; Jackson, G.L.; Lao, L.L.; Luce, T.C.; Owen, Larry W.; Murakami, Masanori; Osborne, T.H.; Politzer, P.A.; Prater, R.; Snyder, P.B.

    2011-01-01

    DIII-D is providing experimental evaluation of 4 leading ITER operational scenarios: the baseline scenario in ELMing H-mode, the advanced inductive scenario, the hybrid scenario, and the steady state scenario. The anticipated ITER shape, aspect ratio and value of I/aB were reproduced, with the size reduced by a factor of 3.7, while matching key performance targets for β{sub N} and H{sub 98}. Since 2008, substantial experimental progress was made to improve the match to other expected ITER parameters for the baseline scenario. A lower density baseline discharge was developed with improved stationarity and density control to match the expected ITER edge pedestal collisionality (ν*{sub e} ∼ 0.1). Target values for β{sub N} and H{sub 98} were maintained at lower collisionality (lower density) operation without loss in fusion performance but with significant change in ELM characteristics. The effects of lower plasma rotation were investigated by adding counter-neutral beam power, resulting in only a modest reduction in confinement. Robust preemptive stabilization of 2/1 NTMs was demonstrated for the first time using ECCD under ITER-like conditions. Data from these experiments were used extensively to test and develop theory and modeling for realistic ITER projection and for further development of its optimum scenarios in DIII-D. Theory-based modeling of core transport (TGLF) with an edge pedestal boundary condition provided by the EPED1 model reproduces T{sub e} and T{sub i} profiles reasonably well for the 4 ITER scenarios developed in DIII-D. Modeling of the baseline scenario for low and high rotation discharges indicates that a modest performance increase of ∼ 15% is needed to compensate for the expected lower rotation of ITER. Modeling of the steady-state scenario reproduces a strong dependence of confinement, stability, and noninductive fraction (f{sub NI}) on q{sub 95}, as found in the experimental I{sub p} scan, indicating that optimization of the q profile is critical to simultaneously achieving the

  2. Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling

    Science.gov (United States)

    Wilson, William; Atkinson, Gary

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first order model), and two second order matrix methods; the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.

  3. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work...... of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted.......Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools...

  4. Prospective Mathematics Teachers' Opinions about Mathematical Modeling Method and Applicability of This Method

    Science.gov (United States)

    Akgün, Levent

    2015-01-01

    The aim of this study is to identify prospective secondary mathematics teachers' opinions about the mathematical modeling method and the applicability of this method in high schools. The case study design, which is among the qualitative research methods, was used in the study. The study was conducted with six prospective secondary mathematics…

  5. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...

  6. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of its future topology are central issues of this field. In this project, we address the questions related to the understanding of the network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data when they have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomizing and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  7. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first

  8. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R{sup 2}. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  9. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  10. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods
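
    A sketch of the comparison these records describe, on synthetic data: fit two-term Gaussian and two-term sine models with scipy and score them by RMSE and R2. The model forms follow the usual two-term parameterizations; the data and starting values are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2(x, a1, b1, c1, a2, b2, c2):
        return (a1 * np.exp(-((x - b1) / c1) ** 2)
                + a2 * np.exp(-((x - b2) / c2) ** 2))

    def sine2(x, a1, b1, c1, a2, b2, c2):
        return a1 * np.sin(b1 * x + c1) + a2 * np.sin(b2 * x + c2)

    hours = np.linspace(6, 20, 100)
    rng = np.random.default_rng(3)
    radiation = (900 * np.exp(-((hours - 13) / 3.5) ** 2)
                 + 20 * rng.standard_normal(100))   # synthetic daily curve

    for name, model, p0 in [("gauss2", gauss2, [800, 13, 3, 100, 10, 5]),
                            ("sine2", sine2, [500, 0.2, 0.0, 400, 0.3, 1.0])]:
        try:
            popt, _ = curve_fit(model, hours, radiation, p0=p0, maxfev=20000)
        except RuntimeError:
            print(f"{name}: fit did not converge")
            continue
        resid = radiation - model(hours, *popt)
        rmse = np.sqrt(np.mean(resid ** 2))
        r2 = 1.0 - np.sum(resid ** 2) / np.sum((radiation - radiation.mean()) ** 2)
        print(f"{name}: RMSE = {rmse:.1f}, R2 = {r2:.3f}")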

  11. Annular dispersed flow analysis model by Lagrangian method and liquid film cell method

    International Nuclear Information System (INIS)

    Matsuura, K.; Kuchinishi, M.; Kataoka, I.; Serizawa, A.

    2003-01-01

    A new annular dispersed flow analysis model was developed. In this model, both droplet behavior and liquid film behavior were analyzed simultaneously. Droplet behavior in turbulent flow was analyzed by the Lagrangian method with a refined stochastic model. On the other hand, liquid film behavior was simulated by the boundary condition of a moving rough wall and a liquid film cell model, which was used to estimate the liquid film flow rate. The height of the moving rough wall was estimated by a disturbance wave height correlation. In each liquid film cell, the liquid film flow rate was calculated by considering the droplet deposition and entrainment flow rates. The droplet deposition flow rate was calculated by the Lagrangian method and the entrainment flow rate was calculated by an entrainment correlation. For the verification of the moving rough wall model, turbulent flow analysis results under annular flow conditions were compared with experimental data, and the agreement was fairly good. Furthermore, annular dispersed flow experiments were analyzed in order to verify the droplet behavior model and the liquid film cell model. The experimental results for the radial distribution of droplet mass flux were compared with the analysis results. The agreement was good under low liquid flow rate conditions and poor under high liquid flow rate conditions, but after modifying the entrainment rate correlation the agreement became good even at high liquid flow rates. This means that the basic analysis method for droplet and liquid film behavior was sound. In future work, verification calculations should be carried out under different experimental conditions, and the entrainment rate correlation should also be corrected

  12. Daemen Alternative Energy/Geothermal Technologies Demonstration Program, Erie County

    Energy Technology Data Exchange (ETDEWEB)

    Beiswanger, Robert C. [Daemen College, Amherst, NY (United States)

    2013-02-28

    The purpose of the Daemen Alternative Energy/Geothermal Technologies Demonstration Project is to demonstrate the use of geothermal technology as a model for energy and environmental efficiency in heating and cooling older, highly inefficient buildings. The former Marian Library building at Daemen College is a 19,000 square foot building located in the center of campus. Through this project, the building was equipped with geothermal technology and results were disseminated. Gold LEED certification for the building was awarded. 1) How the research adds to the understanding of the area investigated. This project is primarily a demonstration project. Information about the installation is available to other companies, organizations, and higher education institutions that may be interested in using geothermal energy for heating and cooling older buildings. 2) The technical effectiveness and economic feasibility of the methods or techniques investigated or demonstrated. According to the modeling and estimates through Stantec, the energy-efficiency cost savings is estimated at 20%, or $24,000 per year. Over 20 years this represents $480,000 in unrestricted revenue available for College operations. See attached technical assistance report. 3) How the project is otherwise of benefit to the public. The Daemen College Geothermal Technologies Ground Source Heat Pumps project sets a standard for retrofitting older, highly inefficient, energy-wasting and environmentally irresponsible buildings that are quite typical of many of the buildings on the campuses of regional colleges and universities. As a model, the project serves as an energy-efficient system with significant environmental advantages. Information about the energy-efficiency measures is available to other colleges and universities, organizations and companies, students, and other interested parties. The installation and renovation provided employment for 120 individuals during the award period. Through the new Center

  13. Dynamic mortar finite element method for modeling of shear rupture on frictional rough surfaces

    Science.gov (United States)

    Tal, Yuval; Hager, Bradford H.

    2017-09-01

    This paper presents a mortar-based finite element formulation for modeling the dynamics of shear rupture on rough interfaces governed by slip-weakening and rate and state (RS) friction laws, focusing on the dynamics of earthquakes. The method utilizes the dual Lagrange multipliers and the primal-dual active set strategy concepts, together with a consistent discretization and linearization of the contact forces and constraints, and the friction laws to obtain a semi-smooth Newton method. The discretization of the RS friction law involves a procedure to condense out the state variables, thus eliminating the addition of another set of unknowns into the system. Several numerical examples of shear rupture on frictional rough interfaces demonstrate the efficiency of the method and examine the effects of the different time discretization schemes on the convergence, energy conservation, and the time evolution of shear traction and slip rate.

  14. The weighted-sum-of-gray-gases model for arbitrary solution methods in radiative transfer

    International Nuclear Information System (INIS)

    Modest, M.F.

    1991-01-01

    In this paper the weighted-sum-of-gray-gases approach for radiative transfer in non-gray participating media, first developed by Hottel in the context of the zonal method, is shown to be applicable to the general radiative equation of transfer. Within the limits of the weighted-sum-of-gray-gases model (non-scattering media within a black-walled enclosure), any non-gray radiation problem can be solved by any desired solution method after replacing the medium by an equivalent small number of gray media with constant absorption coefficients. Some examples are presented for isothermal media and media at radiative equilibrium, using the exact integral equations as well as the popular P-1 approximation for the equivalent gray media solutions. The results demonstrate the equivalency of the method with the quadrature of spectral results, as well as the tremendous computer time savings (a minimum of 95%) which are achieved.
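
    A minimal sketch of the weighted-sum-of-gray-gases idea for a homogeneous, isothermal, non-scattering slab: the non-gray medium is replaced by a few gray gases with constant absorption coefficients and temperature-dependent weights, each gray problem is solved exactly, and the results are summed. The coefficients below are made up for illustration, not a fitted WSGG set.

```python
import numpy as np

k = np.array([0.4, 4.0, 40.0])    # gray-gas absorption coefficients [1/m], assumed
a = np.array([0.35, 0.25, 0.15])  # weights at the gas temperature; the remainder
                                  # (1 - sum(a)) represents the transparent "gas"

def slab_emissivity(path_length):
    # Exact gray-slab solution for each gas, then the weighted sum
    return np.sum(a * (1.0 - np.exp(-k * path_length)))

for L in (0.1, 1.0, 10.0):
    print(L, slab_emissivity(L))
```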

  15. Predicting Vascular Plant Diversity in Anthropogenic Peatlands: Comparison of Modeling Methods with Free Satellite Data

    Directory of Open Access Journals (Sweden)

    Ivan Castillo-Riffart

    2017-07-01

    Peatlands are ecosystems of great relevance, because they perform a large number of ecological functions that provide many services to mankind. However, studies focusing on plant diversity, addressed from the remote sensing perspective, are still scarce in these environments. In the present study, predictions of vascular plant richness and diversity were performed in three anthropogenic peatlands on Chiloé Island, Chile, using free satellite data from the sensors OLI, ASTER, and MSI. Also, we compared the suitability of these sensors using two modeling methods: random forest (RF) and the generalized linear model (GLM). As predictors for the empirical models, we used the spectral bands, vegetation indices, and textural metrics. Variable importance was estimated using recursive feature elimination (RFE). Fourteen out of the 17 predictors chosen by RFE were textural metrics, demonstrating the importance of the spatial context in predicting species richness and diversity. No significant differences were found between the algorithms; however, the GLM models often showed slightly better results than the RF. Predictions obtained by the different satellite sensors did not show significant differences; nevertheless, the best models were obtained with ASTER (richness: R2 = 0.62 and %RMSE = 17.2; diversity: R2 = 0.71 and %RMSE = 20.2, obtained with RF and GLM, respectively), followed by OLI and MSI. Diversity was predicted with higher accuracy than richness; nonetheless, accurate predictions were achieved for both, demonstrating the potential of free satellite data for the prediction of relevant community characteristics in anthropogenic peatland ecosystems.
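
    A sketch of the workflow just described (recursive feature elimination followed by RF and a GLM-style model), using scikit-learn on synthetic data standing in for spectral bands, vegetation indices, and textural metrics; LinearRegression is used here as a stand-in Gaussian GLM.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic predictors/response in place of the satellite metrics and richness
X, y = make_regression(n_samples=120, n_features=30, noise=10.0, random_state=0)

# Recursive feature elimination down to 17 predictors, as in the study
selector = RFE(RandomForestRegressor(n_estimators=200, random_state=0),
               n_features_to_select=17).fit(X, y)
X_sel = X[:, selector.support_]

for name, model in [("RF", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("GLM", LinearRegression())]:
    r2 = cross_val_score(model, X_sel, y, cv=5, scoring="r2").mean()
    print(name, round(r2, 3))
```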

  16. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

  17. Industry Application ECCS / LOCA Integrated Cladding/Emergency Core Cooling System Performance: Demonstration of LOTUS-Baseline Coupled Analysis of the South Texas Plant Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Hongbin [Idaho National Lab. (INL), Idaho Falls, ID (United States)]; Szilard, Ronaldo [Idaho National Lab. (INL), Idaho Falls, ID (United States)]; Epiney, Aaron [Idaho National Lab. (INL), Idaho Falls, ID (United States)]; Parisi, Carlo [Idaho National Lab. (INL), Idaho Falls, ID (United States)]; Vaghetto, Rodolfo [Texas A & M Univ., College Station, TX (United States)]; Vanni, Alessandro [Texas A & M Univ., College Station, TX (United States)]; Neptune, Kaleb [Texas A & M Univ., College Station, TX (United States)]

    2017-06-01

    Under the auspices of the DOE LWRS Program RISMC Industry Application ECCS/LOCA, INL has engaged staff from both the South Texas Project (STP) and Texas A&M University (TAMU) to produce a generic pressurized water reactor (PWR) model including reactor core, clad/fuel design, and systems thermal hydraulics, based on the STP nuclear power plant, a 4-loop Westinghouse PWR. A RISMC toolkit, named LOCA Toolkit for the U.S. (LOTUS), has been developed for use in this generic PWR plant model to assess safety margins for the proposed NRC 10 CFR 50.46c rule on Emergency Core Cooling System (ECCS) performance during LOCA. This demonstration includes coupled analysis of core design, fuel design, thermal hydraulics, and systems analysis, using advanced risk analysis tools and methods to investigate a wide range of results. Within this context, a multi-physics best estimate plus uncertainty (MPBEPU) methodology framework is proposed.

  18. Advancing the US Department of Energy's Technologies through the Underground Storage Tank: Integrated Demonstration Program

    International Nuclear Information System (INIS)

    Gates, T.E.

    1993-01-01

    The principal objective of the Underground Storage Tank -- Integrated Demonstration Program is the demonstration and continued development of technologies suitable for the remediation of waste stored in underground storage tanks. The Underground Storage Tank -- Integrated Demonstration Program is the most complex of the integrated demonstration programs established under the management of the Office of Technology Development. The Program has the following five participating sites: Oak Ridge, Idaho, Fernald, Savannah River, and Hanford. Activities included within the Underground Storage Tank -- Integrated Demonstration are (1) characterizing radioactive and hazardous waste constituents, (2) determining the need and methodology for improving the stability of the waste form, (3) determining the performance requirements, (4) demonstrating barrier performance by instrumented field tests, natural analog studies, and modeling, (5) determining the need and method for destroying and stabilizing hazardous waste constituents, (6) developing and evaluating methods for retrieving, processing (pretreatment and treatment), and storing the waste on an interim basis, and (7) defining and evaluating waste packages, transportation options, and ultimate closure techniques, including site restoration. The eventual objective is the transfer of new technologies as a system to full-scale remediation at US Department of Energy complexes and sites in the private sector.

  19. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Riboswitches, which are located within certain noncoding RNA regions, function as genetic "switches," regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict structures and structural changes of the aptamer domains. Although aptamers often form complex structures, computational approaches, such as RNAComposer and Rosetta, have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide new tools for designing synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  20. Review: Optimization methods for groundwater modeling and management

    Science.gov (United States)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
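
    To make the formulation concrete, the following toy linear program is in the spirit of the conjunctive-use planning objective described above: minimize the cost of pumping from two wells plus a surface-water diversion while meeting demand and respecting capacity limits. All costs and bounds are hypothetical.

```python
from scipy.optimize import linprog

cost = [3.0, 4.0, 2.5]                # $/unit: well 1, well 2, surface water
A_eq = [[1.0, 1.0, 1.0]]              # total supply must equal demand
b_eq = [100.0]                        # demand [units]
bounds = [(0, 60), (0, 50), (0, 40)]  # capacity limit of each source

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("allocation:", res.x, "minimum cost:", res.fun)
```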

  1. [Study on modeling method of total viable count of fresh pork meat based on hyperspectral imaging system].

    Science.gov (United States)

    Wang, Wei; Peng, Yan-Kun; Zhang, Xiao-Li

    2010-02-01

    Once the total viable count (TVC) of bacteria in fresh pork meat exceeds a certain level, the meat becomes a source of pathogenic bacteria. The present paper explores the feasibility of hyperspectral imaging technology combined with an appropriate modeling method for the prediction of TVC in fresh pork meat. For a problem with markedly nonlinear characteristics and few samples, and with large amounts of data expressing the spectral and spatial information, it is crucial to choose a suitable modeling method in order to achieve good prediction results. Based on a comparison of partial least-squares regression (PLSR), artificial neural networks (ANNs), and least-squares support vector machines (LS-SVM), the authors found that the PLSR method could not handle the nonlinear regression problem and the ANN method could not achieve satisfactory predictions with few samples, whereas prediction models based on LS-SVM can balance a small training error against favorable generalization ability. Therefore, LS-SVM was adopted as the modeling method to predict the TVC of pork meat. The TVC prediction model was then constructed using all 512 wavelength channels acquired by the hyperspectral imaging system. The determination coefficient between the TVC obtained with the standard plate count method for bacterial colonies and the LS-SVM prediction was 0.987 2 for the calibration set and 0.942 6 for the prediction set, the root mean square error of calibration (RMSEC) and the root mean square error of prediction (RMSEP) were 0.207 1 and 0.217 6, respectively, and the results were considerably better than those of the MLR, PLSR, and ANN methods. This research demonstrates that the hyperspectral imaging system coupled with the LS-SVM modeling method is a valid means for quick and nondestructive determination of the TVC of pork meat.
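
    A minimal LS-SVM regression, the method adopted above: with an RBF kernel, training reduces to a single linear solve for the bias b and the coefficients alpha. The data below are synthetic; the study used 512-band hyperspectral measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# LS-SVM training: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
gamma = 100.0
n = len(X)
K = rbf(X, X)
top = np.concatenate(([0.0], np.ones(n)))
body = np.hstack((np.ones((n, 1)), K + np.eye(n) / gamma))
A = np.vstack((top, body))
sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

# Prediction: f(x) = sum_i alpha_i K(x, x_i) + b
X_test = np.linspace(-3, 3, 5)[:, None]
print(rbf(X_test, X) @ alpha + b)
```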

  2. Alternative methods of modeling wind generation using production costing models

    International Nuclear Information System (INIS)

    Milligan, M.R.; Pang, C.K.

    1996-08-01

    This paper examines the methods of incorporating wind generation in two production costing models: one is a load duration curve (LDC) based model and the other is a chronological-based model. These two models were used to evaluate the impacts of wind generation on two utility systems using actual collected wind data at two locations with high potential for wind generation. The results are sensitive to the selected wind data and the level of benefits of wind generation is sensitive to the load forecast. The total production cost over a year obtained by the chronological approach does not differ significantly from that of the LDC approach, though the chronological commitment of units is more realistic and more accurate. Chronological models provide the capability of answering important questions about wind resources which are difficult or impossible to address with LDC models

  3. Using the integral equations method to model a 2G racetrack coil with anisotropic critical current dependence

    Science.gov (United States)

    Martins, F. G. R.; Sass, F.; Barusco, P.; Ferreira, A. C.; de Andrade, R., Jr.

    2017-11-01

    Second-generation (2G) superconducting wires have already proved their potential in several applications. These materials have a highly nonlinear behavior that turns an optimized engineering project into a challenge. Among the several numerical techniques that can be used to perform this task, the integral equations (IE) method stands out for avoiding mesh problems by representing the 2G wire cross-sectional area by a line. While most applications need to be represented in a 3D geometry, the IE method is limited to longitudinal or axisymmetric models. This work demonstrates that a complex 3D geometry can be modeled by several coupled simulations using the IE method. In order to prove this statement, the proposed technique was used to simulate a 2G racetrack coil considering the self-field magnitude (B) and incidence angle (θ) on the tape. The Jc characteristic was modeled in terms of the magnetic field components parallel and normal to the tape plane (Jc(B∥, B⊥)), obtained from a V-I(B, θ) characterization of a tape segment. This was implemented using commercial software with both A-V (vector magnetic potential and scalar voltage potential) and IE coupled simulations solved by finite elements. This solution bypasses the meshing problem due to the tapes' slim geometry, considering each turn a single 1D model, all magnetically interacting in two 2D models. The simulation results are in good agreement with what was both expected and observed in the literature. The simulation is compared to the measured V-I characteristic of a single pancake racetrack coil built with the same geometry as its simulation models, and a theoretical study demonstrates the possibilities of the proposed tool for analyzing the current density and electric field behavior in each turn of a racetrack coil.

  4. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    Science.gov (United States)

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
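
    The "model-free" step described above rests on numerical deconvolution. A common implementation, sketched here on synthetic curves standing in for QUASAR data, builds a lower-triangular convolution matrix from the local arterial input function and inverts it with a truncated SVD; perfusion is then read off as the peak of the recovered function.

```python
import numpy as np

dt = 0.3                                  # sampling interval [s]
t = np.arange(0.0, 12.0, dt)
n = t.size
aif = np.exp(-(t - 2.0) ** 2 / 0.5)       # assumed local arterial input function
cbf_true = 0.01                           # perfusion, arbitrary units
residue = np.exp(-t / 3.0)                # assumed true residue function
tissue = dt * np.convolve(aif, cbf_true * residue)[:n]

# Lower-triangular convolution matrix built from the AIF
A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                   for i in range(n)])

# Truncated SVD inversion (threshold relative to the largest singular value)
U, s, Vt = np.linalg.svd(A)
s_inv = np.zeros_like(s)
keep = s > 0.1 * s[0]
s_inv[keep] = 1.0 / s[keep]
r_est = Vt.T @ (s_inv * (U.T @ tissue))

print("estimated perfusion:", r_est.max())  # peak of CBF * R(t)
```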

  5. 3D seismic modeling and reverse‐time migration with the parallel Fourier method using non‐blocking collective communications

    KAUST Repository

    Chu, Chunlei

    2009-01-01

    The major performance bottleneck of the parallel Fourier method on distributed memory systems is the network communication cost. In this study, we investigate the potential of using non‐blocking all‐to‐all communications to solve this problem by overlapping computation and communication. We present the runtime comparison of a 3D seismic modeling problem with the Fourier method using non‐blocking and blocking calls, respectively, on a Linux cluster. The data demonstrate that a performance improvement of up to 40% can be achieved by simply changing blocking all‐to‐all communication calls to non‐blocking ones to introduce the overlapping capability. A 3D reverse‐time migration result is also presented as an extension to the modeling work based on non‐blocking collective communications.
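
    The overlap pattern described above can be sketched with mpi4py (assuming an MPI-3 library with non-blocking collectives): start the all-to-all for the transpose step, perform independent local work, then wait before touching the received data. Array sizes and the FFT stand-in are illustrative.

```python
# Run with e.g.: mpiexec -n 4 python overlap.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()

chunk = 1 << 16
send = np.random.rand(size * chunk)
recv = np.empty_like(send)

req = comm.Ialltoall(send, recv)   # non-blocking collective starts here

# Independent computation that touches neither `send` nor `recv`,
# e.g. FFTs along the locally contiguous axis of another slab
local = np.fft.fft(np.random.rand(chunk))

req.Wait()                         # communication must complete before
                                   # `recv` feeds the next transform stage
```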

  6. Background field method in gauge theories and nonlinear sigma models

    International Nuclear Information System (INIS)

    van de Ven, A.E.M.

    1986-01-01

    This dissertation constitutes a study of the ultraviolet behavior of gauge theories and two-dimensional nonlinear sigma-models by means of the background field method. After a general introduction in chapter 1, chapter 2 presents algorithms which generate the divergent terms in the effective action at one loop for arbitrary quantum field theories in flat spacetime of dimension d ≤ 11. It is demonstrated that global N = 1 supersymmetric Yang-Mills theory in six dimensions is one-loop UV-finite. Chapter 3 presents an algorithm which produces the divergent terms in the effective action at two loops for renormalizable quantum field theories in a curved four-dimensional background spacetime. Chapter 4 presents a study of the two-loop UV behavior of two-dimensional bosonic and supersymmetric nonlinear sigma-models which include a Wess-Zumino-Witten term. It is found that, to this order, supersymmetric models on quasi-Ricci-flat spaces are UV-finite and the β-functions for the bosonic model depend only on torsionful curvatures. Chapter 5 summarizes a superspace calculation of the four-loop β-function for two-dimensional N = 1 and N = 2 supersymmetric nonlinear sigma-models. It is found that besides the one-loop contribution, which vanishes on Ricci-flat spaces, the β-function receives four-loop contributions which do not vanish in the Ricci-flat case. Implications for superstrings are discussed. Chapters 6 and 7 treat the details of these calculations.

  7. Methods for demonstration of enzyme activity in muscle fibres at the muscle/bone interface in demineralized tissue

    DEFF Research Database (Denmark)

    Kirkeby, S; Vilmann, H

    1981-01-01

    A method for demonstration of activity for ATPase and various oxidative enzymes (succinic dehydrogenase, alpha-glycerophosphate dehydrogenase, and lactic dehydrogenase) in muscle/bone sections of fixed and demineralized tissue has been developed. It was found that it is possible to preserve...... considerable amounts of the above mentioned enzymes in the muscle fibres at the muscle/bone interfaces. The best results were obtained after 20 min fixation, and 2-3 weeks of storage in MgNa2EDTA containing media. As the same technique previously has been used to describe patterns of resorption and deposition...

  8. Sensitivity analysis of infectious disease models: methods, advances and their application

    Science.gov (United States)

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights beyond traditional methods. We investigate five global SA methods (scatter plots, the Morris and Sobol' methods, Latin hypercube sampling-partial rank correlation coefficient, and the sensitivity heat map method) and detail their relative merits and pitfalls when applied to a microparasite (cholera) and a macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
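
    One of the methods compared above, Latin hypercube sampling with partial rank correlation coefficients (LHS-PRCC), can be sketched in a few lines: sample the parameter space, rank-transform inputs and output, regress out the other inputs, and correlate the residuals. The toy model below (a basic-reproduction-number-style ratio) stands in for a transmission model.

```python
import numpy as np
from scipy.stats import qmc, rankdata

def model(p):
    beta, gamma, mu = p.T
    return beta / (gamma + mu)        # e.g. a basic reproduction number

names = ["beta", "gamma", "mu"]
lo = np.array([0.1, 0.05, 0.01])      # assumed parameter ranges
hi = np.array([1.0, 0.50, 0.10])

X = lo + qmc.LatinHypercube(d=3, seed=0).random(500) * (hi - lo)
Y = model(X)

# PRCC: rank-transform, regress out the other inputs, correlate residuals
R = np.column_stack([rankdata(X[:, j]) for j in range(3)])
ry = rankdata(Y)
for j, name in enumerate(names):
    others = np.column_stack((np.ones(len(R)), np.delete(R, j, axis=1)))
    res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
    res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
    print(name, np.corrcoef(res_x, res_y)[0, 1])
```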

  9. Seeing the wood for the trees: a forest of methods for optimization and omic-network integration in metabolic modelling.

    Science.gov (United States)

    Vijayakumar, Supreeta; Conway, Max; Lió, Pietro; Angione, Claudio

    2017-05-30

    Metabolic modelling has entered a mature phase with dozens of methods and software implementations available to the practitioner and the theoretician. It is not easy for a modeller to be able to see the wood (or the forest) for the trees. Driven by this analogy, we here present a 'forest' of principal methods used for constraint-based modelling in systems biology. This provides a tree-based view of methods available to prospective modellers, also available in interactive version at http://modellingmetabolism.net, where it will be kept updated with new methods after the publication of the present manuscript. Our updated classification of existing methods and tools highlights the most promising in the different branches, with the aim to develop a vision of how existing methods could hybridize and become more complex. We then provide the first hands-on tutorial for multi-objective optimization of metabolic models in R. We finally discuss the implementation of multi-view machine learning approaches in poly-omic integration. Throughout this work, we demonstrate the optimization of trade-offs between multiple metabolic objectives, with a focus on omic data integration through machine learning. We anticipate that the combination of a survey, a perspective on multi-view machine learning and a step-by-step R tutorial should be of interest for both the beginner and the advanced user. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Modeling nest-survival data: a comparison of recently developed methods that can be implemented in MARK and SAS

    Directory of Open Access Journals (Sweden)

    Rotella, J. J.

    2004-06-01

    Estimating nest success and evaluating factors potentially related to the survival rates of nests are key aspects of many studies of avian populations. A strong interest in nest success has led to a rich literature detailing a variety of estimation methods for this vital rate. In recent years, modeling approaches have undergone especially rapid development. Despite these advances, most researchers still employ Mayfield's ad-hoc method (Mayfield, 1961) or, in some cases, the maximum-likelihood estimator of Johnson (1979) and Bart & Robson (1982). Such methods permit analyses of stratified data but do not allow for more complex and realistic models of nest survival rate that include covariates that vary by individual, nest age, time, etc., and that may be continuous or categorical. Methods that allow researchers to rigorously assess the importance of a variety of biological factors that might affect nest survival rates can now be readily implemented in Program MARK and in SAS's Proc GENMOD and Proc NLMIXED. Accordingly, use of Mayfield's estimator without first evaluating the need for more complex models of nest survival rate cannot be justified. With the goal of increasing the use of more flexible methods, we first describe the likelihood used for these models and then consider the question of what the effective sample size is for computation of AICc. Next, we consider the advantages and disadvantages of these different programs in terms of ease of data input and model construction; utility/flexibility of generated estimates and predictions; ease of model selection; and ability to estimate variance components. An example data set is then analyzed using both MARK and SAS to demonstrate implementation of the methods with various models that contain nest-, group- (or block-), and time-specific covariates. Finally, we discuss improvements that would, if they became available, promote a better general understanding of nest survival rates.
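
    A minimal sketch of the kind of nest-survival likelihood discussed above: each visit interval of length t contributes s**t if the nest survived and (1 - s**t) if it failed during the interval, where s is the daily survival rate. The observations below are made up; covariate effects on s via a logit link are what MARK, GENMOD, and NLMIXED add on top of this.

```python
import numpy as np
from scipy.optimize import minimize_scalar

intervals = np.array([4, 3, 5, 4, 2, 6, 3])  # days between nest visits (assumed)
survived = np.array([1, 1, 0, 1, 1, 0, 1])   # 1 = nest survived the interval

def neg_log_lik(logit_s):
    s = 1.0 / (1.0 + np.exp(-logit_s))       # keep s in (0, 1)
    p = np.where(survived == 1, s**intervals, 1.0 - s**intervals)
    return -np.sum(np.log(p))

res = minimize_scalar(neg_log_lik, bounds=(-5, 5), method="bounded")
print("daily survival rate:", 1.0 / (1.0 + np.exp(-res.x)))
```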

  11. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    Science.gov (United States)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
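
    For orientation, the sketch below propagates a circular fire perimeter with the level-set equation dphi/dt + R|grad phi| = 0 using first-order Godunov upwinding and forward Euler, i.e., the low-order type of scheme whose errors the paper quantifies; the paper's algorithm replaces these with WENO5 in space and RK3 in time. Grid size, spread rate, and time step are illustrative.

```python
import numpy as np

n, dx, R, dt = 200, 12.5, 1.0, 2.0        # grid, spacing [m], spread rate [m/s], step [s]
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 100.0        # signed distance to a circular fire front

def step(phi):
    dmx = (phi - np.roll(phi, 1, 1)) / dx   # backward differences
    dpx = (np.roll(phi, -1, 1) - phi) / dx  # forward differences
    dmy = (phi - np.roll(phi, 1, 0)) / dx
    dpy = (np.roll(phi, -1, 0) - phi) / dx
    # Godunov upwind gradient magnitude for an outward-moving front (R >= 0)
    grad = np.sqrt(np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2 +
                   np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)
    return phi - dt * R * grad

for _ in range(100):
    phi = step(phi)
print("burned area [m^2]:", (phi < 0).sum() * dx * dx)
```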

  12. Extension of local front reconstruction method with controlled coalescence model

    Science.gov (United States)

    Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.

    2018-02-01

    The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulating such flows with standard fixed-grid methods, due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed-grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, the LFRM is better at predicting the droplet collisions than other fixed-grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method), especially at high velocity. When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that the LFRM coupled with film drainage models is much better at predicting the collision dynamics than the traditional methods.

  13. Field-theoretic methods in strongly-coupled models of general gauge mediation

    International Nuclear Information System (INIS)

    Fortin, Jean-François; Stergiou, Andreas

    2013-01-01

    An often-exploited feature of the operator product expansion (OPE) is that it incorporates a splitting of ultraviolet and infrared physics. In this paper we use this feature of the OPE to perform simple, approximate computations of soft masses in gauge-mediated supersymmetry breaking. The approximation amounts to truncating the OPEs for hidden-sector current–current operator products. Our method yields visible-sector superpartner spectra in terms of vacuum expectation values of a few hidden-sector IR elementary fields. We manage to obtain reasonable approximations to soft masses, even when the hidden sector is strongly coupled. We demonstrate our techniques in several examples, including a new framework where supersymmetry breaking arises both from a hidden sector and dynamically. Our results suggest that strongly-coupled models of supersymmetry breaking are naturally split

  14. Statistical Methods for the Qualitative Assessment of Dynamic Models with Time Delay (R Package qualV

    Directory of Open Access Journals (Sweden)

    Stefanie Jachner

    2007-06-01

    Results of ecological models differ, to some extent, more from measured data than from empirical knowledge. Existing techniques for validation based on quantitative assessments sometimes cause an underestimation of the performance of models due to time shifts, accelerations and delays, or systematic differences between measurement and simulation. However, for the application of such models it is often more important to reproduce essential patterns than seemingly exact numerical values. This paper presents techniques to identify patterns and numerical methods to measure the consistency of patterns between observations and model results. An orthogonal set of deviance measures for absolute, relative and ordinal scales was compiled to provide information about the type of difference. Furthermore, two different approaches accounting for time shifts are presented. The first transforms the time axis to take time delays and speed differences into account. The second uses known qualitative criteria to divide time series into interval units according to their main features. The methods differ in their basic concepts and in the form of the resulting criteria. Both approaches and the deviance measures discussed are implemented in an R package. All methods are demonstrated by means of water quality measurements and simulation data. The proposed quality criteria make it possible to recognize systematic differences and time shifts between time series and to draw conclusions about the quantitative and qualitative similarity of patterns.

  15. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  16. Modeling of Landslides with the Material Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2008-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  17. Modelling of Landslides with the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  18. Three-dimensional modeling of subsurface contamination: A case study from the radio frequency-heating demonstration at the Savannah River Site

    International Nuclear Information System (INIS)

    Poppy, S.P.; Eddy-Dilek, C.A.; Jarosch, T.R.

    1994-01-01

    Computer based three-dimensional modeling is a powerful tool used for visualizing and interpreting environmental data collected at the Savannah River Site (SRS). Three-dimensional modeling was used to image and interpret subsurface spatial data, primarily, changes in the movement, the accumulation, and the depletion of contaminants at the Integrated Demonstration Site (IDS), a proving ground for experimental environmental remediation technologies. Three-dimensional models are also educational tools, relaying complex environmental data to interested non-technical individuals who may be unfamiliar with the concepts and terminology involved in environmental studies. The public can draw their own conclusions of the success of the experiments after viewing the three-dimensional images set up in a chronological order. The three-dimensional grids generated during these studies can also be used to create images for visualization and animated sequences that model contamination movement. Animation puts the images of contamination distribution in motion and results in a new perspective on the effects of the remedial demonstration

  19. Modeling Methods

    Science.gov (United States)

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.

  20. A survey of real face modeling methods

    Science.gov (United States)

    Liu, Xiaoyue; Dai, Yugang; He, Xiangzhen; Wan, Fucheng

    2017-09-01

    The face model has always been a research challenge in computer graphics, as it involves the coordination of multiple organs in the face. This article explains two kinds of face modeling methods, one data-driven and one based on parameter control, analyzes their content and background, summarizes their advantages and disadvantages, and concludes that the muscle model, which is based on anatomical principles, has higher veracity and is easier to drive.

  1. Parents as Teachers Health Literacy Demonstration project: integrating an empowerment model of health literacy promotion into home-based parent education.

    Science.gov (United States)

    Carroll, Lauren N; Smith, Sandra A; Thomson, Nicole R

    2015-03-01

    The Parents as Teachers (PAT) Health Literacy Demonstration project assessed the impact of integrating data-driven reflective practices into the PAT home visitation model to promote maternal health literacy. PAT is a federally approved Maternal, Infant, Early Childhood Home Visiting program with the goal of promoting school readiness and healthy child development. This 2-year demonstration project used an open-cohort longitudinal design to promote parents' interactive and reflective skills, enhance health education, and provide direct assistance to personalize and act on information by integrating an empowerment paradigm into PAT's parent education model. Eight parent educators used the Life Skills Progression instrument to tailor the intervention to each of 103 parent-child dyads. Repeated-measures analysis of variance, paired t tests, and logistic regression combined with qualitative data demonstrated that mothers achieved overall significant improvements in health literacy, and that home visitors are important catalysts for these improvements. These findings support the use of an empowerment model of health education, skill building, and direct information support to enable parents to better manage personal and child health and health care. © 2014 Society for Public Health Education.

  2. A sediment graph model based on SCS-CN method

    Science.gov (United States)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    This paper proposes new conceptual sediment graph models based on the coupling of popular and extensively used methods, viz., the Nash-model-based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and the power law. These models vary in their complexity, and this paper tests their performance using data from the Nagwan watershed (area = 92.46 km2) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distributions) as well as total sediment yield.
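
    The SCS-CN runoff relation at the core of these models, plus an illustrative power-law link from runoff to sediment yield, can be sketched as follows. The curve number and the coefficients a and beta below are hypothetical; in the paper beta is the sensitive power-law exponent.

```python
def scs_cn_runoff(P, CN, lam=0.2):
    """Direct runoff Q [mm] from storm rainfall P [mm] and curve number CN."""
    S = 25400.0 / CN - 254.0          # potential maximum retention [mm]
    Ia = lam * S                      # initial abstraction
    return (P - Ia) ** 2 / (P - Ia + S) if P > Ia else 0.0

a, beta = 0.05, 1.6                   # hypothetical power-law parameters
for P in (20.0, 50.0, 100.0):
    Q = scs_cn_runoff(P, CN=75)
    print(f"P={P:5.1f} mm  Q={Q:6.2f} mm  sediment ~ {a * Q**beta:7.2f} t")
```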

  3. Nonperturbative stochastic method for driven spin-boson model

    Science.gov (United States)

    Orth, Peter P.; Imambekov, Adilet; Le Hur, Karyn

    2013-01-01

    We introduce and apply a numerically exact method for investigating the real-time dissipative dynamics of quantum impurities embedded in a macroscopic environment beyond the weak-coupling limit. We focus on the spin-boson Hamiltonian that describes a two-level system interacting with a bosonic bath of harmonic oscillators. This model is archetypal for investigating dissipation in quantum systems, and tunable experimental realizations exist in mesoscopic and cold-atom systems. It finds abundant applications in physics ranging from the study of decoherence in quantum computing and quantum optics to extended dynamical mean-field theory. Starting from the real-time Feynman-Vernon path integral, we derive an exact stochastic Schrödinger equation that allows us to compute the full spin density matrix and spin-spin correlation functions beyond weak coupling. We greatly extend our earlier work [P. P. Orth, A. Imambekov, and K. Le Hur, Phys. Rev. A 82, 032118 (2010)] by fleshing out the core concepts of the method and by presenting a number of interesting applications. Methodologically, we present an analogy between the dissipative dynamics of a quantum spin and that of a classical spin in a random magnetic field. This analogy is used to recover the well-known noninteracting-blip approximation in the weak-coupling limit. We explain in detail how to compute spin-spin autocorrelation functions. As interesting applications of our method, we explore the non-Markovian effects of the initial spin-bath preparation on the dynamics of the coherence σx(t) and of σz(t) under a Landau-Zener sweep of the bias field. We also compute to a high precision the asymptotic long-time dynamics of σz(t) without bias and demonstrate the wide applicability of our approach by calculating the spin dynamics at nonzero bias and different temperatures.

  4. Generalized framework for context-specific metabolic model extraction methods

    Directory of Open Access Journals (Sweden)

    Semidán eRobaina Estévez

    2014-09-01

    Genome-scale metabolic models are increasingly applied to investigate the physiology not only of simple prokaryotes, but also of eukaryotes, such as plants, characterized by compartmentalized cells of multiple types. While genome-scale models aim at including the entirety of known metabolic reactions, mounting evidence has indicated that only a subset of these reactions is active in a given context, including developmental stage, cell type, or environment. As a result, several methods have been proposed to reconstruct context-specific models from existing genome-scale models by integrating various types of high-throughput data. Here we present a mathematical framework that puts all existing methods under one umbrella, provides the means to better understand their functioning, highlights similarities and differences, and helps users in selecting the most suitable method for an application.

  5. Human In Silico Drug Trials Demonstrate Higher Accuracy than Animal Models in Predicting Clinical Pro-Arrhythmic Cardiotoxicity.

    Science.gov (United States)

    Passini, Elisa; Britton, Oliver J; Lu, Hua Rong; Rohrbacher, Jutta; Hermans, An N; Gallacher, David J; Greig, Robert J H; Bueno-Orovio, Alfonso; Rodriguez, Blanca

    2017-01-01

    Early prediction of cardiotoxicity is critical for drug development. Current animal models raise ethical and translational questions, and have limited accuracy in clinical risk prediction. Human-based computer models constitute a fast, cheap and potentially effective alternative to experimental assays, also facilitating translation to human. Key challenges include consideration of inter-cellular variability in drug responses and integration of computational and experimental methods in safety pharmacology. Our aim is to evaluate the ability of in silico drug trials in populations of human action potential (AP) models to predict clinical risk of drug-induced arrhythmias based on ion channel information, and to compare simulation results against experimental assays commonly used for drug testing. A control population of 1,213 human ventricular AP models in agreement with experimental recordings was constructed. In silico drug trials were performed for 62 reference compounds at multiple concentrations, using pore-block drug models (IC50/Hill coefficient). Drug-induced changes in AP biomarkers were quantified, together with occurrence of repolarization/depolarization abnormalities. Simulation results were used to predict clinical risk based on reports of Torsade de Pointes arrhythmias, and further evaluated in a subset of compounds through comparison with electrocardiograms from rabbit wedge preparations and Ca2+-transient recordings in human induced pluripotent stem cell-derived cardiomyocytes (hiPS-CMs). Drug-induced changes in silico vary in magnitude depending on the specific ionic profile of each model in the population, thus allowing identification of cell sub-populations at higher risk of developing abnormal AP phenotypes. Models with low repolarization reserve (increased Ca2+/late Na+ currents and Na+/Ca2+-exchanger, reduced Na+/K+-pump) are highly vulnerable to drug-induced repolarization abnormalities, while those with reduced inward current density

  6. Human In Silico Drug Trials Demonstrate Higher Accuracy than Animal Models in Predicting Clinical Pro-Arrhythmic Cardiotoxicity

    Directory of Open Access Journals (Sweden)

    Elisa Passini

    2017-09-01

    Early prediction of cardiotoxicity is critical for drug development. Current animal models raise ethical and translational questions, and have limited accuracy in clinical risk prediction. Human-based computer models constitute a fast, cheap and potentially effective alternative to experimental assays, also facilitating translation to human. Key challenges include consideration of inter-cellular variability in drug responses and integration of computational and experimental methods in safety pharmacology. Our aim is to evaluate the ability of in silico drug trials in populations of human action potential (AP) models to predict clinical risk of drug-induced arrhythmias based on ion channel information, and to compare simulation results against experimental assays commonly used for drug testing. A control population of 1,213 human ventricular AP models in agreement with experimental recordings was constructed. In silico drug trials were performed for 62 reference compounds at multiple concentrations, using pore-block drug models (IC50/Hill coefficient). Drug-induced changes in AP biomarkers were quantified, together with occurrence of repolarization/depolarization abnormalities. Simulation results were used to predict clinical risk based on reports of Torsade de Pointes arrhythmias, and further evaluated in a subset of compounds through comparison with electrocardiograms from rabbit wedge preparations and Ca2+-transient recordings in human induced pluripotent stem cell-derived cardiomyocytes (hiPS-CMs). Drug-induced changes in silico vary in magnitude depending on the specific ionic profile of each model in the population, thus allowing identification of cell sub-populations at higher risk of developing abnormal AP phenotypes. Models with low repolarization reserve (increased Ca2+/late Na+ currents and Na+/Ca2+-exchanger, reduced Na+/K+-pump) are highly vulnerable to drug-induced repolarization abnormalities, while those with reduced inward current density
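
    The pore-block drug model referred to above scales each ionic conductance by a Hill-type blocking factor built from the compound's IC50 and Hill coefficient for that channel; a minimal sketch follows. The conductance and drug parameters below are hypothetical, not data for any of the 62 reference compounds.

```python
def pore_block(g, drug_conc, ic50, hill):
    """Scale conductance g for a given drug concentration (pore-block model)."""
    return g / (1.0 + (drug_conc / ic50) ** hill)

g_Kr = 0.153                       # nominal IKr conductance [mS/uF], assumed
for conc in (0.1, 1.0, 10.0):      # drug concentrations [uM], assumed
    print(conc, pore_block(g_Kr, conc, ic50=1.2, hill=0.9))
```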

  7. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicates improved prediction accuracies (median of 10-50%) but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream
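
    A hypothetical partial-pooling regression in the spirit of the hybrid application described above, written in PyMC: region-specific slopes relating log mean annual streamflow to log drainage area are drawn from a common hyperdistribution. Synthetic data; this is not the SPARROW model itself.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_regions = 6
region = rng.integers(0, n_regions, 200)
log_area = rng.normal(5, 1, 200)
true_slope = rng.normal(0.9, 0.05, n_regions)
log_flow = true_slope[region] * log_area + rng.normal(0, 0.3, 200)

with pm.Model() as model:
    mu_b = pm.Normal("mu_b", 1.0, 1.0)       # hyperprior on regional slopes
    sd_b = pm.HalfNormal("sd_b", 0.5)
    b = pm.Normal("b", mu_b, sd_b, shape=n_regions)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("obs", b[region] * log_area, sigma, observed=log_flow)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(idata.posterior["b"].mean(("chain", "draw")).values)
```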

  8. Robust modelling of solubility in supercritical carbon dioxide using Bayesian methods.

    Science.gov (United States)

    Tarasova, Anna; Burden, Frank; Gasteiger, Johann; Winkler, David A

    2010-04-01

    Two sparse Bayesian methods were used to derive predictive models of solubility of organic dyes and polycyclic aromatic compounds in supercritical carbon dioxide (scCO(2)), over a wide range of temperatures (285.9-423.2K) and pressures (60-1400 bar): a multiple linear regression employing an expectation maximization algorithm and a sparse prior (MLREM) method and a non-linear Bayesian Regularized Artificial Neural Network with a Laplacian Prior (BRANNLP). A randomly selected test set was used to estimate the predictive ability of the models. The MLREM method resulted in a model of similar predictivity to the less sparse MLR method, while the non-linear BRANNLP method created models of substantially better predictivity than either the MLREM or MLR based models. The BRANNLP method simultaneously generated context-relevant subsets of descriptors and a robust, non-linear quantitative structure-property relationship (QSPR) model for the compound solubility in scCO(2). The differences between linear and non-linear descriptor selection methods are discussed. (c) 2009 Elsevier Inc. All rights reserved.
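
    Sparse Bayesian linear modelling in the same spirit as MLREM can be sketched with scikit-learn's ARDRegression, which places a sparsity-inducing prior on the coefficients and prunes irrelevant descriptors. The data are synthetic stand-ins for solubility descriptors; this is not the paper's MLREM/BRANNLP implementation.

```python
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.standard_normal((150, 40))
w = np.zeros(40)
w[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]   # only 5 of 40 descriptors are relevant
y = X @ w + 0.1 * rng.standard_normal(150)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = ARDRegression().fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
print("descriptors kept:", int(np.sum(np.abs(model.coef_) > 1e-3)))
```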

  9. Methods improvements incorporated into the SAPHIRE ASP models

    International Nuclear Information System (INIS)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.

    1995-01-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements

  10. Model-based economic evaluation in Alzheimer's disease: a review of the methods available to model Alzheimer's disease progression.

    Science.gov (United States)

    Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P

    2011-01-01

    To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models have been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on peoples' lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  11. Multi-step polynomial regression method to model and forecast malaria incidence.

    Directory of Open Access Journals (Sweden)

    Chandrajit Chatterjee

    The study also demonstrates that, with excellent models of climatic forecasts readily available, this method can predict disease incidence at long forecasting horizons with a high degree of efficiency, and that a useful early warning system based on such a technique can be developed region-wise or nation-wise for disease prevention and control activities.
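
    A hedged sketch of the general idea, not the authors' exact procedure: a low-order polynomial in lagged incidence is fitted to a synthetic monthly series and iterated forward for a multi-step forecast; climate covariates are omitted.

```python
# Multi-step polynomial-regression forecasting sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(120)
incidence = 50 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 120)

# Design matrix: polynomial terms of the previous month's incidence.
lagged, target = incidence[:-1], incidence[1:]
A = np.vander(lagged, 3)                  # columns [x^2, x, 1]
coef, *_ = np.linalg.lstsq(A, target, rcond=None)

# Multi-step forecast: feed each prediction back in as the next lag.
x, forecast = incidence[-1], []
for _ in range(6):                        # 6-month horizon
    x = np.polyval(coef, x)
    forecast.append(x)
print(np.round(forecast, 1))
```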

  12. Mathematical Model Taking into Account Nonlocal Effects of Plasmonic Structures on the Basis of the Discrete Source Method

    Science.gov (United States)

    Eremin, Yu. A.; Sveshnikov, A. G.

    2018-04-01

    The discrete source method is used to develop and implement a mathematical model for solving the problem of the scattering of electromagnetic waves by a three-dimensional plasmonic scatterer with nonlocal effects taken into account. Numerical results are presented demonstrating how the scattering properties of plasmonic particles, with allowance for nonlocal effects, depend on the direction and polarization of the incident wave.

  13. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider discretization methods of three different orders: Euler's method, the trapezoidal rule, and a Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate their usefulness. © 2012, The International Biometric Society.
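
    A compact sketch of this estimator under stated simplifications: a logistic ODE stands in for the HIV dynamics model, scipy's UnivariateSpline plays the role of the penalized spline, and the trapezoidal rule supplies the estimating equations.

```python
# Numerical-discretization ODE parameter estimation: smooth the noisy states,
# plug the smoothed states into a trapezoidal-rule discretization, and solve
# the resulting estimating equations by least squares.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import least_squares

def f(x, theta):                          # right-hand side x' = r x (1 - x/K)
    r, K = theta
    return r * x * (1 - x / K)

t = np.linspace(0, 10, 60)
true = 10 / (1 + 9 * np.exp(-0.8 * t))    # exact logistic solution, r=0.8, K=10
rng = np.random.default_rng(2)
obs = true + rng.normal(0, 0.15, t.size)

xs = UnivariateSpline(t, obs, s=1.0)(t)   # smoothing factor s plays the role
h = t[1] - t[0]                           # of the spline penalty

def residuals(theta):                     # trapezoidal estimating equations
    return xs[1:] - xs[:-1] - h / 2 * (f(xs[:-1], theta) + f(xs[1:], theta))

fit = least_squares(residuals, x0=[0.5, 5.0])
print("estimated (r, K):", fit.x)         # should land near (0.8, 10)
```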

  14. Efficient model learning methods for actor-critic control.

    Science.gov (United States)

    Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik

    2012-06-01

    We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.

  15. Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-05-01

    A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on the k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minimum. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.
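
    The clustering step is the easiest piece to show compactly. The sketch below partitions a synthetic multi-modal parameter ensemble into sub-ensembles with k-means; the ensemble-based Gauss-Newton update itself is elided.

```python
# Clustering step of an ISEM-style calibration: split the ensemble so each
# sub-ensemble can track a different mode of the parameter posterior.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
ensemble = np.vstack([rng.normal(m, 0.3, size=(40, 2))      # three modes in a
                      for m in ([0, 0], [4, 1], [1, 5])])   # 2-D parameter space

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ensemble)
sub_ensembles = [ensemble[km.labels_ == k] for k in range(3)]
for k, sub in enumerate(sub_ensembles):
    print(f"cluster {k}: {len(sub)} members, centre {sub.mean(axis=0).round(2)}")
# Each sub-ensemble would now receive its own Gauss-Newton step; clusters
# whose centres drift close together would be merged at the next update.
```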

  16. Spatial autocorrelation method using AR model; Kukan jiko sokanho eno AR model no tekiyo

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, H; Obuchi, T; Saito, T [Iwate University, Iwate (Japan). Faculty of Engineering]

    1996-05-01

    The applicability of the AR model to the spatial autocorrelation (SAC) method, which analyzes the surface-wave phase velocity in microtremors to estimate the underground structure, was examined using microtremor data recorded in Morioka City, Iwate Prefecture. In the SAC method, a spatial autocorrelation function with frequency as a variable is determined from microtremor data observed by circular arrays. The Bessel function is then fitted to the spatial autocorrelation coefficient, with the distance between seismographs as a variable, to determine the phase velocity. The results of the AR model application in this study were compared with those of the conventional BPF and FFT methods. It was found that the phase velocities obtained by the BPF and FFT methods were more dispersed than those obtained by the AR model. The dispersion in the BPF method is attributed to the bandwidth used in the band-pass filter and, in the FFT method, to the impact of the bandwidth on the smoothing of the cross spectrum. 2 refs., 7 figs.
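
    The phase-velocity step shared by all three processing variants can be sketched as a Bessel-function fit; the radii, frequency, and autocorrelation coefficients below are synthetic, not the Morioka data.

```python
# SPAC phase-velocity sketch: at a given frequency, fit J0(2*pi*f*r/c) to the
# spatial autocorrelation coefficients over the array radii r, with phase
# velocity c as the free parameter.
import numpy as np
from scipy.special import j0
from scipy.optimize import curve_fit

r = np.array([30.0, 60.0, 90.0])          # seismograph separations [m]
f = 2.0                                   # frequency [Hz]
c_true = 400.0                            # "true" phase velocity [m/s]
rho_obs = j0(2 * np.pi * f * r / c_true) \
          + 0.01 * np.random.default_rng(4).normal(size=3)

def spac(r, c):
    return j0(2 * np.pi * f * r / c)

(c_est,), _ = curve_fit(spac, r, rho_obs, p0=[300.0])
print(f"phase velocity at {f} Hz: {c_est:.0f} m/s")
```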

  17. A wavelet method for modeling and despiking motion artifacts from resting-state fMRI time series

    Science.gov (United States)

    Patel, Ameera X.; Kundu, Prantik; Rubinov, Mikail; Jones, P. Simon; Vértes, Petra E.; Ersche, Karen D.; Suckling, John; Bullmore, Edward T.

    2014-01-01

    The impact of in-scanner head movement on functional magnetic resonance imaging (fMRI) signals has long been established as undesirable. These effects have been traditionally corrected by methods such as linear regression of head movement parameters. However, a number of recent independent studies have demonstrated that these techniques are insufficient to remove motion confounds, and that even small movements can spuriously bias estimates of functional connectivity. Here we propose a new data-driven, spatially-adaptive, wavelet-based method for identifying, modeling, and removing non-stationary events in fMRI time series, caused by head movement, without the need for data scrubbing. This method involves the addition of just one extra step, the Wavelet Despike, in standard pre-processing pipelines. With this method, we demonstrate robust removal of a range of different motion artifacts and motion-related biases, including distance-dependent connectivity artifacts, at a group and single-subject level, using a range of previously published and new diagnostic measures. The Wavelet Despike is able to accommodate the substantial spatial and temporal heterogeneity of motion artifacts and can consequently remove a range of high and low frequency artifacts from fMRI time series that may be linearly or non-linearly related to physical movements. Our methods are demonstrated by the analysis of three cohorts of resting-state fMRI data, including two high-motion datasets: a previously published dataset on children (N = 22) and a new dataset on adults with stimulant drug dependence (N = 40). We conclude that there is a real risk of motion-related bias in connectivity analysis of fMRI data, but that this risk is generally manageable, by effective time series denoising strategies designed to attenuate synchronized signal transients induced by abrupt head movements. The Wavelet Despiking software described in this article is freely available for download at www

  18. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
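
    One common model-based estimate of this kind combines the pump's published power curve with the affinity laws; the sketch below is an illustration with made-up curve values, not necessarily the authors' formulation.

```python
# QP-curve estimation sketch: with the nominal-speed power curve and the
# affinity laws (Q ~ n, P ~ n^3), flow rate can be estimated from the
# frequency converter's speed and shaft-power estimates alone.
import numpy as np

n0 = 1450.0                               # nominal speed [rpm]
Q0 = np.array([0, 10, 20, 30, 40, 50])    # published curve: flow [l/s]
P0 = np.array([2.0, 3.1, 4.0, 4.8, 5.5, 6.1])  # shaft power [kW], monotonic

def estimate_flow(P_meas, n_meas):
    # Scale measured power onto the nominal-speed curve, invert the curve
    # for flow at nominal speed, then scale flow back to the actual speed.
    P_at_n0 = P_meas * (n0 / n_meas) ** 3
    Q_at_n0 = np.interp(P_at_n0, P0, Q0)
    return Q_at_n0 * (n_meas / n0)

print(f"estimated flow: {estimate_flow(3.2, 1200.0):.1f} l/s")
```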

  19. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  20. Arctic curves in path models from the tangent method

    Science.gov (United States)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.

  1. Application of homotopy-perturbation method to nonlinear population dynamics models

    International Nuclear Information System (INIS)

    Chowdhury, M.S.H.; Hashim, I.; Abdulaziz, O.

    2007-01-01

    In this Letter, the homotopy-perturbation method (HPM) is employed to derive approximate series solutions of nonlinear population dynamics models. The nonlinear models considered are the multispecies Lotka-Volterra equations. The accuracy of the method is examined by comparison with available exact solutions and with the fourth-order Runge-Kutta method (RK4).
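
    To make the recursion concrete, the sketch below runs an HPM expansion for a two-species Lotka-Volterra system in sympy; the rates and initial values are illustrative assumptions, not taken from the Letter.

```python
# HPM recursion sketch: expand x, y in powers of the embedding parameter p
# and integrate order by order to build an approximate series solution.
import sympy as sp

t, p = sp.symbols('t p')
a, b, c, d = 1, 1, 1, 1                       # interaction rates (illustrative)
xs = [sp.Rational(1, 2)]                      # x_0 = x(0)
ys = [sp.Rational(1, 4)]                      # y_0 = y(0)

for k in range(1, 5):                         # compute four correction terms
    X = sum(xs[i] * p**i for i in range(k))
    Y = sum(ys[i] * p**i for i in range(k))
    # k-th corrections solve x_k' = [p^(k-1)] x(a - b y), y_k' = -[p^(k-1)] y(c - d x)
    xs.append(sp.integrate(sp.expand(X * (a - b * Y)).coeff(p, k - 1), (t, 0, t)))
    ys.append(sp.integrate(sp.expand(-Y * (c - d * X)).coeff(p, k - 1), (t, 0, t)))

x_series = sp.expand(sum(xs))                 # HPM approximation at p = 1
print(x_series)
```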

  2. A meshless method for modeling convective heat transfer

    Energy Technology Data Exchange (ETDEWEB)

    Carrington, David B [Los Alamos National Laboratory]

    2010-01-01

    A meshless method is used in a projection-based approach to solve the primitive equations for fluid flow with heat transfer. The method is easy to implement in a MATLAB format. Radial basis functions are used to solve two benchmark test cases: natural convection in a square enclosure and flow with forced convection over a backward facing step. The results are compared with two popular and widely used commercial codes: COMSOL, a finite element model, and FLUENT, a finite volume-based model.

  3. Mathematical methods and models in composites

    CERN Document Server

    Mantic, Vladislav

    2014-01-01

    This book provides a representative selection of the most relevant, innovative, and useful mathematical methods and models applied to the analysis and characterization of composites and their behaviour on micro-, meso-, and macroscale. It establishes the fundamentals for meaningful and accurate theoretical and computer modelling of these materials in the future. Although the book is primarily concerned with fibre-reinforced composites, which have ever-increasing applications in fields such as aerospace, many of the results presented can be applied to other kinds of composites. The topics cover

  4. CT-Sellink - a new method for demonstrating the gut wall

    International Nuclear Information System (INIS)

    Thiele, J.; Kloeppel, R.; Schulz, H.G.

    1993-01-01

    34 patients were examined by CT following a modified enema (CT-Sellink) in order to demonstrate the gut. By introducing a 'gut index' it is possible to define the tone of the gut, provided its folds remain constant. By means of a radial density profile the gut wall can be defined objectively and in numerical terms. Gut wall thickness in the small bowel averaged 1.2 mm with a density of 51 Hu, and gut wall thickness in the colon averaged 2 mm with a density of 59 Hu. (orig.) [de]

  5. MMB-GUI: a fast morphing method demonstrates a possible ribosomal tRNA translocation trajectory.

    Science.gov (United States)

    Tek, Alex; Korostelev, Andrei A; Flores, Samuel Coulbourn

    2016-01-08

    Easy-to-use macromolecular viewers, such as UCSF Chimera, are a standard tool in structural biology. They allow rendering and performing geometric operations on large complexes, such as viruses and ribosomes. Dynamical simulation codes enable modeling of conformational changes, but may require considerable time and many CPUs. There is an unmet demand from structural and molecular biologists for software in the middle ground, which would allow visualization combined with quick and interactive modeling of conformational changes, even of large complexes. This motivates MMB-GUI. MMB uses an internal-coordinate, multiscale approach, yielding as much as a 2000-fold speedup over conventional simulation methods. We use Chimera as an interactive graphical interface to control MMB. We show how this can be used for morphing of macromolecules that can be heterogeneous in biopolymer type, sequence, and chain count, accurately recapitulating structural intermediates. We use MMB-GUI to create a possible trajectory of EF-G mediated gate-passing translocation in the ribosome, with all-atom structures. This shows that the GUI makes modeling of large macromolecules accessible to a wide audience. The morph highlights similarities in tRNA conformational changes as tRNA translocates from A to P and from P to E sites and suggests that tRNA flexibility is critical for translocation completion. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. ELISA, a demonstrator environment for information systems architecture design

    Science.gov (United States)

    Panem, Chantal

    1994-01-01

    This paper describes an approach to the reusability of software engineering technology in the area of ground space system design. System engineers have many needs similar to those of software developers: sharing of a common database, capitalization of knowledge, definition of a common design process, and communication between different technical domains. Moreover, system designers need to simulate their system dynamically as early as possible. Software development environments, methods and tools have now become operational and widely used. Their architecture is based on a unique object base and a set of common management services, and they host a family of tools for each life-cycle activity. In late 1992, CNES decided to develop a demonstrative software environment supporting some system activities. The design of ground space data processing systems was chosen as the application domain. ELISA (Integrated Software Environment for Architectures Specification) was specified as a 'demonstrator', i.e. a sufficient basis for demonstrations, evaluation and future operational enhancements. A process with three phases was implemented: system requirements definition, design of system architecture models, and selection of physical architectures. Each phase is composed of several activities that can be performed in parallel, with the provision of Commercial Off-The-Shelf tools. ELISA was delivered to CNES in January 1994 and is currently used for demonstrations and evaluations on real projects (e.g. the SPOT4 Satellite Control Center). New evolutions are under way.

  7. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    Science.gov (United States)

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply enough accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this paradigm principle. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluations with a pseudo-IEC user show that the proposed model and method can enhance IEC search significantly. PMID:25879050

  8. Efficient simulation and likelihood methods for non-neutral multi-allele models.

    Science.gov (United States)

    Joyce, Paul; Genz, Alan; Buzbas, Erkan Ozge

    2012-06-01

    Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequence, became more readily available, a need to connect population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as the auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to patterns produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10^9 rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation methods a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection.
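
    The rejection scheme attributed to DNJ can be sketched as follows, assuming the usual parent-independent-mutation form in which the non-neutral stationary law is a Dirichlet tilted by exp(sigma . x); all parameter values are illustrative.

```python
# Rejection sampler sketch: propose allele frequencies from the neutral
# stationary law (a Dirichlet) and accept with probability proportional to
# the selection factor exp(sigma . x), bounded by its maximum.
import numpy as np

rng = np.random.default_rng(5)
theta = np.array([0.5, 0.5, 0.5])         # scaled mutation rates (neutral part)
sigma = np.array([0.0, 2.0, 4.0])         # scaled selection coefficients

def sample_nonneutral(n):
    out = []
    while len(out) < n:
        x = rng.dirichlet(theta)                        # neutral proposal
        if rng.uniform() < np.exp(sigma @ x - sigma.max()):
            out.append(x)                               # accept
    return np.array(out)

freqs = sample_nonneutral(5000)
print("mean frequencies under selection:", freqs.mean(axis=0).round(3))
# The inefficiency noted above appears when exp(sigma . x) is tiny for typical
# neutral draws; sampling directly from the non-neutral law avoids it.
```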

  9. Engineering design of systems models and methods

    CERN Document Server

    Buede, Dennis M

    2009-01-01

    The ideal introduction to the engineering design of systems-now in a new edition. The Engineering Design of Systems, Second Edition compiles a wealth of information from diverse sources to provide a unique, one-stop reference to current methods for systems engineering. It takes a model-based approach to key systems engineering design activities and introduces methods and models used in the real world. Features new to this edition include: * The addition of Systems Modeling Language (SysML) to several of the chapters, as well as the introduction of new terminology * Additional material on partitioning functions and components * More descriptive material on usage scenarios based on literature from use case development * Updated homework assignments * The software product CORE (from Vitech Corporation) is used to generate the traditional SE figures and the software product MagicDraw UML with SysML plugins (from No Magic, Inc.) is used for the SysML figures This book is designed to be an introductory reference ...

  10. Classroom Demonstrations in Materials Science/Engineering.

    Science.gov (United States)

    Hirschhorn, J. S.; And Others

    Examples are given of demonstrations used at the University of Wisconsin in a materials science course for nontechnical students. Topics include crystal models, thermal properties, light, and corrosion. (MLH)

  11. Predicting Rehabilitation Success Rate Trends among Ethnic Minorities Served by State Vocational Rehabilitation Agencies: A National Time Series Forecast Model Demonstration Study

    Science.gov (United States)

    Moore, Corey L.; Wang, Ningning; Washington, Janique Tynez

    2017-01-01

    Purpose: This study assessed and demonstrated the efficacy of two select empirical forecast models (i.e., autoregressive integrated moving average [ARIMA] model vs. grey model [GM]) in accurately predicting state vocational rehabilitation agency (SVRA) rehabilitation success rate trends across six different racial and ethnic population cohorts…

  12. The longitudinal epineural incision and complete nerve transection method for modeling sciatic nerve injury

    Directory of Open Access Journals (Sweden)

    Xing-long Cheng

    2015-01-01

    Injury severity, operative technique and nerve regeneration are important factors to consider when constructing a model of peripheral nerve injury. Here, we present a novel peripheral nerve injury model and compare it with the complete sciatic nerve transection method. In the experimental group, under a microscope, a 3-mm longitudinal incision was made in the epineurium of the sciatic nerve to reveal the nerve fibers, which were then transected. The small, longitudinal incision in the epineurium was then sutured closed, requiring no stump anastomosis. In the control group, the sciatic nerve was completely transected, and the epineurium was repaired by anastomosis. At 2 and 4 weeks after surgery, Wallerian degeneration was observed in both groups. In the experimental group, at 8 and 12 weeks after surgery, distinct medullary nerve fibers and axons were observed in the injured sciatic nerve. Regular, dense myelin sheaths were visible, as well as some scarring. By 12 weeks, the myelin sheaths were normal and intact, and a tight lamellar structure was observed. Functionally, limb movement and nerve conduction recovered in the injured region between 4 and 12 weeks. The present results demonstrate that longitudinal epineural incision with nerve transection can stably replicate a model of Sunderland grade IV peripheral nerve injury. Compared with the complete sciatic nerve transection model, our method reduced the difficulties of micromanipulation and surgery time, and resulted in good stump restoration, nerve regeneration, and functional recovery.

  13. Hierarchical multiscale modeling for flows in fractured media using generalized multiscale finite element method

    KAUST Repository

    Efendiev, Yalchin R.

    2015-06-05

    In this paper, we develop a multiscale finite element method for solving flows in fractured media. Our approach is based on the generalized multiscale finite element method (GMsFEM), where we represent the fracture effects on a coarse grid via multiscale basis functions. These multiscale basis functions are constructed in the offline stage via local spectral problems following GMsFEM. To represent the fractures on the fine grid, we consider two approaches: (1) the discrete fracture model (DFM), (2) the embedded fracture model (EFM), and their combination. In DFM, the fractures are resolved via the fine grid, while in EFM the fracture and fine-grid block interaction is represented as a source term. In the proposed multiscale method, additional multiscale basis functions are used to represent the long fractures, while short fractures are collectively represented by a single basis function. The procedure is done automatically via local spectral problems. In this regard, our approach shares common concepts with several approaches proposed in the literature, as we discuss. We would like to emphasize that our goal is not to compare DFM with EFM, but rather to develop a GMsFEM framework which uses these (DFM or EFM) fine-grid discretization techniques. Numerical results are presented, where we demonstrate how one can adaptively add basis functions in the regions of interest based on error indicators. We also discuss the use of randomized snapshots (Calo et al., Randomized oversampling for generalized multiscale finite element methods, 2014), which reduces the offline computational cost.

  14. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understand the behaviour of complex environmental models and use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
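
    A hedged sketch of the DELSA idea under simplifying assumptions: local first-order sensitivities are obtained by finite differences at many sampled points, weighted by assumed prior parameter variances, and normalized into local indices; the two-parameter model is a toy stand-in.

```python
# DELSA-style local sensitivity sketch: local gradients across parameter
# space, converted into first-order variance-contribution fractions.
import numpy as np

def model(theta):                          # toy two-parameter reservoir response
    k, s = theta
    return s * (1 - np.exp(-k))

rng = np.random.default_rng(6)
prior_var = np.array([0.04, 0.25])         # assumed parameter variances
samples = rng.uniform([0.1, 0.5], [2.0, 3.0], size=(1000, 2))

eps = 1e-6
grads = np.empty_like(samples)
for i, th in enumerate(samples):           # two extra model runs per sample point
    y0 = model(th)
    for j in range(2):
        dth = th.copy()
        dth[j] += eps
        grads[i, j] = (model(dth) - y0) / eps

contrib = grads**2 * prior_var             # first-order variance contributions
S = contrib / contrib.sum(axis=1, keepdims=True)
print("median local sensitivity indices:", np.median(S, axis=0).round(2))
```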

  15. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications, providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, and stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  16. BioLab: Using Yeast Fermentation as a Model for the Scientific Method.

    Science.gov (United States)

    Pigage, Helen K.; Neilson, Milton C.; Greeder, Michele M.

    This document presents a science experiment demonstrating the scientific method. The experiment consists of testing the fermentation capabilities of yeasts under different circumstances. The experiment is supported with computer software called BioLab which demonstrates yeast's response to different environments. (YDS)

  17. Metamodel-based inverse method for parameter identification: elastic-plastic damage model

    Science.gov (United States)

    Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb

    2017-04-01

    This article proposes a metamodel-based inverse method for material parameter identification and applies it to elastic-plastic damage model parameter identification. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost disadvantage of the classical inverse method. In the metamodel-based inverse method, a Kriging metamodel is constructed based on an experimental design in order to model the relationship between the material parameters and the objective function values of the inverse problem, and the optimization procedure is then executed using the metamodel. Application of the presented material model and the proposed parameter identification method to the standard A 2017-T4 tensile test shows that the elastic-plastic damage model adequately describes the material's mechanical behaviour, and that the metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
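
    A minimal sketch of such a loop, with the assumptions flagged: scikit-learn's GaussianProcessRegressor serves as the Kriging metamodel and a cheap quadratic stands in for the finite-element misfit of the real identification problem.

```python
# Metamodel-based inverse loop sketch: train a Kriging surrogate of the
# misfit on a small experimental design, then optimize the cheap surrogate
# instead of rerunning the expensive simulation.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(theta):                      # placeholder for the FE misfit
    return (theta[0] - 1.3)**2 + 2 * (theta[1] - 0.7)**2

rng = np.random.default_rng(7)
design = rng.uniform(0.0, 2.0, size=(30, 2))           # experimental design
values = np.array([objective(th) for th in design])    # expensive runs (once)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(design, values)

surrogate = lambda th: gp.predict(th.reshape(1, -1))[0]
res = minimize(surrogate, x0=np.array([1.0, 1.0]), bounds=[(0, 2), (0, 2)])
print("identified parameters:", res.x.round(2))
```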

  18. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal)]; Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)]

    2010-12-15

    A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated against an observed reference, but objectively derived from the simulated climatology. The choice of model-dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model captures reasonably well the main blocking features (location, amplitude, annual cycle and persistence) found in observations, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. Blocking underestimation mostly arises from the model's inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into the Eurasian continent, which are more prominent in the cold and warm seasons and account for much of the Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method. This suggests that in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the

  19. Single-shot spiral imaging enabled by an expanded encoding model: Demonstration in diffusion MRI.

    Science.gov (United States)

    Wilm, Bertram J; Barmet, Christoph; Gross, Simon; Kasper, Lars; Vannesjo, S Johanna; Haeberlin, Max; Dietrich, Benjamin E; Brunner, David O; Schmid, Thomas; Pruessmann, Klaas P

    2017-01-01

    The purpose of this work was to improve the quality of single-shot spiral MRI and demonstrate its application for diffusion-weighted imaging. Image formation is based on an expanded encoding model that accounts for dynamic magnetic fields up to third order in space, nonuniform static B0, and coil sensitivity encoding. The encoding model is determined by B0 mapping, sensitivity mapping, and concurrent field monitoring. Reconstruction is performed by iterative inversion of the expanded signal equations. Diffusion-tensor imaging with single-shot spiral readouts is performed in a phantom and in vivo, using a clinical 3T instrument. Image quality is assessed in terms of artefact levels, image congruence, and the influence of the different encoding factors. Using the full encoding model, diffusion-weighted single-shot spiral imaging of high quality is accomplished both in vitro and in vivo. Accounting for actual field dynamics, including higher orders, is found to be critical to suppress blurring, aliasing, and distortion. Enhanced image congruence permitted data fusion and diffusion tensor analysis without coregistration. Use of an expanded signal model largely overcomes the traditional vulnerability of spiral imaging with long readouts. It renders single-shot spirals competitive with echo-planar readouts and thus deploys shorter echo times and superior readout efficiency for diffusion imaging and further prospective applications. Magn Reson Med 77:83-91, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  20. Moments Method for Shell-Model Level Density

    International Nuclear Information System (INIS)

    Zelevinsky, V; Horoi, M; Sen'kov, R A

    2016-01-01

    The modern form of the Moments Method applied to the calculation of the nuclear shell-model level density is explained, and examples of the method at work are given. The calculated level density coincides almost exactly with the result of full diagonalization when the latter is feasible. The method provides the pure level density for given spin and parity, with spurious center-of-mass excitations subtracted. The presence and interplay of all correlations leads to results different from those obtained by mean-field combinatorics. (paper)

  1. TWO NOVEL ACM (ACTIVE CONTOUR MODEL) METHODS FOR INTRAVASCULAR ULTRASOUND IMAGE SEGMENTATION

    International Nuclear Information System (INIS)

    Chen, Chi Hau; Potdat, Labhesh; Chittineni, Rakesh

    2010-01-01

    One of the attractive image segmentation methods is the Active Contour Model (ACM), which has been widely used in medical imaging as it always produces sub-regions with continuous boundaries. Intravascular ultrasound (IVUS) is a catheter-based medical imaging technique which is used for quantitative assessment of atherosclerotic disease. Two ACM realizations are presented in this paper. A gradient descent flow based on minimizing an energy functional can be used for segmentation of IVUS images; however, this local operation alone may not be adequate for the complex IVUS images. The first method presented consists of combining local geodesic active contours and global region-based active contours. The advantage of combining the local and global operations is to allow curves deforming under the energy to find only significant local minima and to delineate object borders despite noise, poor edge information and heterogeneous intensity profiles. Results for this algorithm are compared to standard techniques to demonstrate the method's robustness and accuracy. In the second method, the energy function is appropriately modified and minimized using a Hopfield neural network. Proper modifications in the definition of the bias of the neurons have been introduced to incorporate image characteristics. The method overcomes distortions in the expected image pattern, such as those due to the presence of calcium, and employs a specialized structure of the neural network and boundary correction schemes based on a priori knowledge about the vessel geometry. The presented method is very fast and has been evaluated using sequences of IVUS frames.

  2. Numerical algorithms based on Galerkin methods for the modeling of reactive interfaces in photoelectrochemical (PEC) solar cells

    Science.gov (United States)

    Harmon, Michael; Gamba, Irene M.; Ren, Kui

    2016-12-01

    This work concerns the numerical solution of a coupled system of self-consistent reaction-drift-diffusion-Poisson equations that describes the macroscopic dynamics of charge transport in photoelectrochemical (PEC) solar cells with reactive semiconductor and electrolyte interfaces. We present three numerical algorithms, mainly based on a mixed finite element and a local discontinuous Galerkin method for spatial discretization, with carefully chosen numerical fluxes, and implicit-explicit time stepping techniques, for solving the time-dependent nonlinear systems of partial differential equations. We perform computational simulations under various model parameters to demonstrate the performance of the proposed numerical algorithms as well as the impact of these parameters on the solution to the model.

  3. A numerical method for a transient two-fluid model

    International Nuclear Information System (INIS)

    Le Coq, G.; Libmann, M.

    1978-01-01

    The transient boiling two-phase flow is studied. In nuclear reactors, the driving conditions for transient boiling are a pump power decay and/or an increase in heating power. The physical model adopted for the two-phase flow is the two-fluid model, with the assumption that the vapor remains at saturation. The numerical method for solving the thermohydraulic problem is a shooting method, which is highly implicit. A particular problem exists at the boiling and condensation front. A computer code using this numerical method allows the calculation of a transient boiling initiated from a steady state for a PWR or an LMFBR.
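
    The record gives no scheme details; as a generic illustration of the shooting idea it mentions, the sketch below solves a simple two-point boundary-value problem by iterating on an unknown initial slope.

```python
# Generic shooting-method illustration (not the authors' two-fluid code):
# solve y'' = -y with y(0) = 0, y(pi/2) = 1 by root-finding on the initial
# slope until the far boundary condition is satisfied.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def boundary_miss(slope):
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0, np.pi / 2), [0.0, slope],
                    rtol=1e-8)
    return sol.y[0, -1] - 1.0             # residual at the far boundary

slope = brentq(boundary_miss, 0.1, 5.0)   # exact answer: y = sin(t), slope 1
print(f"initial slope found by shooting: {slope:.6f}")
```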

  4. Evaluation of two updating methods for dissipative models on a real structure

    International Nuclear Information System (INIS)

    Moine, P.; Billet, L.

    1996-01-01

    Finite Element Models are widely used to predict the dynamic behaviour of structures. Frequently, the model does not represent the structure with all the expected accuracy, i.e. the measurements realised on the structure differ from the data predicted by the model. It is therefore necessary to update the model. Although many modeling errors come from an inadequate representation of the damping phenomena, most model updating techniques have so far been restricted to conservative models only. In this paper, we present two updating methods for dissipative models using eigenmode shapes and eigenvalues as behavioural information from the structure. The first method - the modal output error method - directly compares the experimental eigenvectors and eigenvalues to the model eigenvectors and eigenvalues, whereas the second method - the error in constitutive relation method - uses an energy error derived from the equilibrium relation. In both cases, the error function is minimized by a conjugate gradient algorithm, and the gradient is calculated analytically. These two methods behave differently, as evidenced by updating a real structure consisting of a piece of pipe mounted on two viscoelastic suspensions. The updating of the model validates an updating strategy consisting of a preliminary updating with the error in constitutive relation method (a fast-to-converge but difficult-to-control method) followed by the modal output error method (a slow-to-converge but reliable and easy-to-control method). The problems encountered during the updating process and their corresponding solutions are also given. (authors)

  5. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

    2013-01-01

    Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by Kendall τ correlation testing. ► Different regression models are applied to the trend item forecasting. ► We examine the superiority of the combined models by quartile value comparison. ► Paired-sample T tests are utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economic system, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted by employing information from days similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified using the Kendall τ correlation testing method. Then, in the belief that forecasting the seasonal item and the trend item separately would improve the forecasting accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for seasonal and trend item forecasting, respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique can significantly improve accuracy, even though eleven different models are applied to the trend item forecasting. The superior performance of this separate forecasting technique is further confirmed by paired-sample T tests.
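
    A hedged sketch of the separate-item idea: multiplicative seasonal indices are estimated (plain averaging replaces the seasonal exponential adjustment here), the series is deseasonalized, a regression is fitted to the trend item, and the two are recombined for the forecast. The daily-load series is synthetic.

```python
# Separate forecasting of seasonal and trend items on a synthetic daily load.
import numpy as np

rng = np.random.default_rng(8)
weeks, season = 8, 7
t = np.arange(weeks * season)
load = (100 + 0.5 * t) * np.tile([1.1, 1.15, 1.12, 1.1, 1.08, 0.8, 0.75], weeks)
load += rng.normal(0, 1, t.size)

idx = np.array([load[t % season == d].mean() for d in range(season)])
idx /= idx.mean()                          # normalized seasonal indices
trend = load / idx[t % season]             # deseasonalized series

coef = np.polyfit(t, trend, 1)              # regression on the trend item
horizon = np.arange(t[-1] + 1, t[-1] + 1 + season)   # one week ahead
forecast = np.polyval(coef, horizon) * idx[horizon % season]
print(forecast.round(1))
```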

  6. Semi-Lagrangian methods in air pollution models

    Directory of Open Access Journals (Sweden)

    A. B. Hansen

    2011-06-01

    Various semi-Lagrangian methods are tested with respect to advection in air pollution modeling. The aim is to find a method fulfilling as many as possible of the desirable properties listed by Rasch and Williamson (1990) and Machenhauer et al. (2008). The focus in this study is on accuracy and local mass conservation.

    The methods tested are: first, classical semi-Lagrangian cubic interpolation, see e.g. Durran (1999); second, semi-Lagrangian cubic cascade interpolation, by Nair et al. (2002); third, semi-Lagrangian cubic interpolation with modified interpolation weights, Locally Mass Conserving Semi-Lagrangian (LMCSL), by Kaas (2008); and last, semi-Lagrangian cubic interpolation with a locally mass conserving monotonic filter, by Kaas and Nielsen (2010).

    Semi-Lagrangian (SL) interpolation is a classical method for atmospheric modeling; cascade interpolation is more efficient computationally; modified interpolation weights assure mass conservation; and the locally mass conserving monotonic filter imposes monotonicity.

    All schemes are tested with advection alone or with advection and chemistry together, under both typical rural and urban conditions, using different temporal and spatial resolutions. The methods are compared with a current state-of-the-art scheme, Accurate Space Derivatives (ASD; see Frohn et al., 2002), presently used at the National Environmental Research Institute (NERI) in Denmark. To enable a consistent comparison, only non-divergent flow configurations are tested.

    The test cases are based either on the traditional slotted cylinder or on the rotating cone, where the schemes' ability to model both steep gradients and slopes is challenged.

    The tests showed that the locally mass conserving monotonic filter improved the results significantly for some of the test cases, though not for all. It was found that the semi-Lagrangian schemes, in almost every case, were not able to outperform the current ASD scheme.
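
    To make the classical scheme concrete, here is a minimal 1-D periodic semi-Lagrangian advection sketch with cubic interpolation; it includes none of the mass-conserving or monotone modifications compared in the paper.

```python
# 1-D semi-Lagrangian advection: each grid point traces its departure point
# upstream and the field is cubically interpolated there (periodic domain).
import numpy as np
from scipy.interpolate import CubicSpline

L, n, u, dt = 1.0, 200, 0.3, 0.01
x = np.linspace(0, L, n, endpoint=False)
c = np.exp(-200 * (x - 0.25)**2)               # initial "puff" of pollutant

for _ in range(300):
    spline = CubicSpline(np.append(x, L), np.append(c, c[0]),
                         bc_type='periodic')
    x_dep = (x - u * dt) % L                   # departure points
    c = spline(x_dep)

print("peak after advection:", c.max().round(3), "at x =", x[c.argmax()].round(3))
```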

  7. On Angular Sampling Methods for 3-D Spatial Channel Models

    DEFF Research Database (Denmark)

    Fan, Wei; Jämsä, Tommi; Nielsen, Jesper Ødum

    2015-01-01

    This paper discusses generating three-dimensional (3D) spatial channel models with emphasis on the angular sampling methods. Three angular sampling methods, i.e. modified uniform power sampling, modified uniform angular sampling, and random pairing methods, are proposed and investigated in detail. The random pairing method, which uses only twenty sinusoids in the ray-based model for generating the channels, presents good results if the spatial channel cluster has a small elevation angle spread. For spatial clusters with large elevation angle spreads, however, the random pairing method would fail and the other two methods should be considered.

  8. Learning and Correcting Robot Trajectory Keypoints from a Single Demonstration

    DEFF Research Database (Denmark)

    Juan, Iñigo Iturrate San; Østergaard, Esben Hallundbæk; Rytter, Martin

    2017-01-01

    Kinesthetic teaching provides an accessible way for non-experts to quickly and easily program a robot system by demonstration. A crucial aspect of this technique is to obtain an accurate approximation of the robot's intended trajectory for the task, while filtering out spurious aspects of the demonstration. While several methods to this end have been proposed, they either rely on several demonstrations or on the user explicitly indicating relevant trajectory waypoints. We propose a method, based on the Douglas-Peucker line simplification algorithm, that is able to extract the notable points of a trajectory from a single demonstration. Additionally, by utilizing velocity information in the task space, the method is able to achieve a level of precision that is sufficient for industrial assembly tasks. Along with this, we present a user study that shows that our method enables non-expert robot users
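
    The geometric core named above is straightforward to sketch: a plain Douglas-Peucker recursion on a toy 2-D trajectory, without the paper's velocity-based extensions.

```python
# Douglas-Peucker line simplification: recursively keep the point farthest
# from the start-end chord whenever it exceeds a tolerance.
import numpy as np

def douglas_peucker(points, tol):
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    dx, dy = end - start
    # perpendicular distance of every interior point to the chord
    d = np.abs(dx * (points[1:-1, 1] - start[1])
               - dy * (points[1:-1, 0] - start[0])) / np.hypot(dx, dy)
    i = int(np.argmax(d)) + 1
    if d[i - 1] > tol:
        left = douglas_peucker(points[:i + 1], tol)
        right = douglas_peucker(points[i:], tol)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

traj = np.array([[0, 0], [1, 0.1], [2, -0.1], [3, 5], [4, 6], [5, 7]], float)
print(douglas_peucker(traj, tol=0.5))      # extracted trajectory keypoints
```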

  9. A longitudinal multilevel CFA-MTMM model for interchangeable and structurally different methods

    Science.gov (United States)

    Koch, Tobias; Schultze, Martin; Eid, Michael; Geiser, Christian

    2014-01-01

    One of the key interests in the social sciences is the investigation of change and stability of a given attribute. Although numerous models have been proposed in the past for analyzing longitudinal data, including multilevel and/or latent variable modeling approaches, only few modeling approaches have been developed for studying construct validity in longitudinal multitrait-multimethod (MTMM) measurement designs. The aim of the present study was to extend the spectrum of current longitudinal modeling approaches for MTMM analysis. Specifically, a new longitudinal multilevel CFA-MTMM model for measurement designs with structurally different and interchangeable methods (called the Latent-State-Combination-Of-Methods model, LS-COM) is presented. Interchangeable methods are methods that are randomly sampled from a set of equivalent methods (e.g., multiple student ratings of teaching quality), whereas structurally different methods are methods that cannot be easily replaced by one another (e.g., teacher ratings, self-ratings, principal ratings). Results of a simulation study indicate that the parameters and standard errors in the LS-COM model are well recovered even in conditions with only five observations per estimated model parameter. The advantages and limitations of the LS-COM model relative to other longitudinal MTMM modeling approaches are discussed. PMID:24860515

  10. Combustion Model and Control Parameter Optimization Methods for Single Cylinder Diesel Engine

    Directory of Open Access Journals (Sweden)

    Bambang Wahono

    2014-01-01

    This research presents a method to construct a combustion model and a method to optimize some control parameters of a diesel engine in order to develop a model-based control system. The purpose of the model is to appropriately manage some control parameters so as to obtain the desired values of fuel consumption and emissions as the engine output objectives. A stepwise method considering multicollinearity was applied to construct the combustion model in polynomial form. Using experimental data from a single cylinder diesel engine, models of power, BSFC, NOx, and soot for a multiple-injection diesel engine were built. The proposed method successfully developed a model that describes the control parameters in relation to the engine outputs. Although many control devices can be mounted on a diesel engine, an optimization technique is required to utilize this method in finding optimal engine operating conditions efficiently, besides the existing development of individual emission control methods. Particle swarm optimization (PSO) was used to calculate control parameters that optimize fuel consumption and emissions based on the model. The proposed method is able to calculate control parameters efficiently to optimize the evaluation items based on the model. Finally, the model with PSO added was compiled on a microcontroller.
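
    A minimal PSO sketch of the control-parameter search; the quadratic objective is a stand-in for the polynomial engine-output model, and all coefficients are illustrative.

```python
# Particle swarm optimization: particles are pulled toward their own best
# and the swarm's best points while exploring the parameter space.
import numpy as np

def objective(x):                           # surrogate for modeled BSFC/emissions
    return np.sum((x - np.array([0.6, 0.3]))**2, axis=1)

rng = np.random.default_rng(9)
n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
pos = rng.uniform(0, 1, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[pbest_val.argmin()]

for _ in range(100):
    r1, r2 = rng.uniform(size=(2, n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    val = objective(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[pbest_val.argmin()]

print("optimal control parameters:", gbest.round(3))
```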

  11. A demonstrated method for upgrading existing control room interiors

    International Nuclear Information System (INIS)

    Brice, R.M.; Terrill, D.

    1991-01-01

    The main control room (MCR) of any nuclear power plant can justifiably be called the most important area staffed by personnel in the entire facility. The interior workstation configuration, equipment arrangement, and staff placement all affect the efficiency and habitability of the room. There are many guidelines available that describe various human factors principles to use when upgrading the environment of the MCR. These involve anthropometric standards and rules for placement of peripheral equipment. Due to the variations in plant design, however, hard-and-fast rules have not been, and cannot be, standardized for retrofits in any significant way. How, then, does one develop criteria for the improvement of an MCR? The purpose of this paper is to discuss, from the designer's point of view, a method for the collection of information, development of criteria, and creation of a final design for an MCR upgrade. This method is best understood by describing the successful implementation at Tennessee Valley Authority's Sequoyah nuclear plant.

  12. Theoretical methods and models for mechanical properties of soft biomaterials

    Directory of Open Access Journals (Sweden)

    Zhonggang Feng

    2017-06-01

    We review the most commonly used theoretical methods and models for the mechanical properties of soft biomaterials, which include phenomenological hyperelastic and viscoelastic models, structural biphasic and network models, and the structural alteration theory. We emphasize basic concepts and recent developments. In consideration of the current progress and needs of mechanobiology, we introduce methods and models for tackling micromechanical problems and their applications to cell biology. Finally, the challenges and perspectives in this field are discussed.

  13. Extending existing structural identifiability analysis methods to mixed-effects models.

    Science.gov (United States)

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Mathematical Models and Methods for Living Systems

    CERN Document Server

    Chaplain, Mark; Pugliese, Andrea

    2016-01-01

    The aim of these lecture notes is to give an introduction to several mathematical models and methods that can be used to describe the behaviour of living systems. This emerging field of application intrinsically requires the handling of phenomena occurring at different spatial scales and hence the use of multiscale methods. Modelling and simulating the mechanisms that cells use to move, self-organise and develop in tissues is not only fundamental to an understanding of embryonic development, but is also relevant in tissue engineering and in other environmental and industrial processes involving the growth and homeostasis of biological systems. Growth and organization processes are also important in many tissue degeneration and regeneration processes, such as tumour growth, tissue vascularization, heart and muscle functionality, and cardio-vascular diseases.

  15. Overview of statistical methods, models and analysis for predicting equipment end of life

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-07-01

    Utility equipment can be operated and maintained for many years following installation. However, as the equipment ages, utility operators must decide whether to extend the service life or replace the equipment. Condition assessment modelling is used by many utilities to determine the condition of equipment and to prioritize the maintenance or repair. Several factors are weighted and combined in assessment modelling, which gives a single index number to rate the equipment. There is speculation that this index alone may not be adequate for a business case to rework or replace an asset because it only ranks an asset into a particular category. For that reason, a new methodology was developed to determine the economic end of life of an asset. This paper described the different statistical methods available and their use in determining the remaining service life of electrical equipment. A newly developed Excel-based demonstration computer tool is also an integral part of the deliverables of this project.
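
    As a hedged illustration of one statistical method from the family this report surveys, the sketch below fits a Weibull lifetime distribution to invented failure ages and reads off a median service life; it is not the project's Excel-based tool, merely an example of the kind of survival analysis involved:

```python
# Fit a Weibull lifetime model to (invented) equipment failure ages and
# estimate the median service life.
import numpy as np
from scipy import stats

failure_ages = np.array([18.2, 22.5, 25.1, 27.8, 30.4, 31.9, 35.0, 38.6])  # years
shape, loc, scale = stats.weibull_min.fit(failure_ages, floc=0.0)  # fix location at 0
print(f"Weibull shape = {shape:.2f}, scale = {scale:.1f} years")
print(f"Estimated median life: {stats.weibull_min.median(shape, loc, scale):.1f} years")
```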

  16. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former suffers from the problem that a huge count number is required in the blank scan data. The latter methods were therefore proposed to obtain normalization coefficients with high statistical accuracy from a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the accuracy of the system model. Therefore, the normalization weighting approach, in which normalization coefficients are applied directly to the system matrix instead of to a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and calculated iteratively so as to minimize system modeling errors. To compare the proposed method with the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of the normalization coefficients while reducing the count number of the blank scan data to one-fortieth of that required by the direct method. (author)

  17. Test Plan for the overburden removal demonstration

    International Nuclear Information System (INIS)

    Rice, P.; Thompson, D.; Winberg, M.; Skaggs, J.

    1993-06-01

    The removal of soil overburdens from contaminated pits and trenches involves using equipment that removes a layer of soil 3 to 6 in. thick at a time. As each layer is removed, overburden characterization surveys are performed to a depth exceeding the next removal layer to ensure that the removed soil is free of contamination. It is generally expected that no contamination will be found in the soil overburden, which was brought in after the waste was put in place. It is anticipated, however, that some containers in the waste zone have lost their integrity and that waste leaking from those containers has migrated downward by gravity within the waste zone. To maintain a safe work environment, this method of overburden removal should allow safe preparation of a pit or trench for final remediation. To demonstrate the soil overburden techniques, the Buried Waste Integrated Demonstration Program has contracted vendor services to provide equipment and techniques demonstrating soil overburden removal technology. The demonstration will include tests that evaluate equipment performance and techniques for removal of overburden soil, control of contamination spread, and dust control. To evaluate the performance of these techniques, air particulate samples, physical measurements of the excavation soil cuts, maneuverability measurements, and time versus volume (rate) of soil removal data will be collected during removal operations. To provide a medium for sample evaluation, the overburden will be spiked at specific locations and depths with rare earth tracers. This test plan describes the objectives of the demonstration, the data quality objectives, the methods to be used to operate the equipment and apply the techniques in the test area, and the methods to be used to collect data during the demonstration

  18. A nationwide survey of patient centered medical home demonstration projects.

    Science.gov (United States)

    Bitton, Asaf; Martin, Carina; Landon, Bruce E

    2010-06-01

    The patient centered medical home (PCMH) has received considerable attention as a potential way to improve primary care quality and limit cost growth. Little information exists that systematically compares PCMH pilot projects across the country. We conducted cross-sectional key-informant interviews with leaders of existing PCMH demonstration projects that include external payment reform, using a semi-structured interview tool with the following domains: project history, organization and participants, practice requirements and selection process, medical home recognition, payment structure, practice transformation, and evaluation design. A total of 26 demonstrations in 18 states were interviewed. Current demonstrations include over 14,000 physicians caring for nearly 5 million patients. A majority of demonstrations are single payer, and most utilize a three-component payment model (traditional fee for service, per person per month fixed payments, and bonus performance payments). The median incremental revenue per physician per year was $22,834 (range $720 to $91,146). Two major practice transformation models were identified: consultative and implementation of the chronic care model. A majority of demonstrations did not have well-developed evaluation plans. Current PCMH demonstration projects with external payment reform include large numbers of patients and physicians as well as a wide spectrum of implementation models. Key questions exist around the adequacy of current payment mechanisms and evaluation plans as public and policy interest in the PCMH model grows.

  19. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    Directed graphical models represent data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models h...
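
    As a concrete illustration of the MCMC techniques referred to above, here is a minimal random-walk Metropolis sampler targeting the posterior of a normal mean; the model and all settings are illustrative and are not taken from the thesis:

```python
# Random-walk Metropolis sampling of p(mu | data) for a normal mean with a
# N(0, 10) prior and known unit observation variance.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=50)

def log_post(mu):
    return -0.5 * mu**2 / 10.0 - 0.5 * np.sum((data - mu) ** 2)

samples, mu = [], 0.0
for _ in range(5000):
    prop = mu + rng.normal(0.0, 0.5)       # symmetric proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                          # accept; otherwise keep current mu
    samples.append(mu)

print(np.mean(samples[1000:]))             # approximate posterior mean
```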

  20. Acceptance of Addiction Prevention Exiting Methods and Presentation of Appropriate Model

    Directory of Open Access Journals (Sweden)

    Ali Asghar Savad-Kouhi

    2006-10-01

    Objective: The aim of this study was to assess the acceptance of existing addiction prevention methods and to design and present an appropriate model. Materials & Methods: This research was conducted as a descriptive survey using a questionnaire. We assessed people's knowledge and beliefs about suggested and existing methods of addiction prevention and their acceptance, and finally designed a new, appropriate model of addiction prevention. To design the questionnaire, experts and professors were first interviewed openly, and the final questionnaire was planned according to their views. We used a questionnaire with 2 open-ended and 61 close-ended items for gathering data. The subjects were 2,500 persons aged 13-35 years, selected by randomized sampling from 15 provinces. Results: The findings showed that the people studied hold positive beliefs about the prevention methods and their effectiveness. According to the findings, a good model is an inclusive model that operates on four levels: knowledge, change of beliefs and attitudes, control, and change of behavior. Conclusion: The people studied believe that the suggested and existing methods of addiction prevention are effective, both directly and indirectly, and that the appropriate model is an inclusive one.

  1. Tailored parameter optimization methods for ordinary differential equation models with steady-state constraints.

    Science.gov (United States)

    Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan

    2016-08-22

    Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The fact that the initial condition is a steady state of the unperturbed process provides valuable information, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods are evaluated using one toy example and two applications. The first application example uses published data while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower. In addition to the theoretical results, the
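
    A minimal sketch of estimation under a steady-state constraint, assuming SciPy and using a generic equality-constrained optimizer rather than the paper's tailored retraction or continuation methods; the toy model dx/dt = 2*k1 - k2*x (a doubled stimulus applied to a system initially at its steady state x_ss = k1/k2) and all data are invented:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import solve_ivp

t_obs = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
y_obs = np.array([1.00, 1.42, 1.66, 1.89, 1.98])   # synthetic measurements

def sse(theta):
    k1, k2, x_ss = theta
    sol = solve_ivp(lambda t, x: 2.0 * k1 - k2 * x, (0.0, 4.0), [x_ss], t_eval=t_obs)
    return np.sum((sol.y[0] - y_obs) ** 2)

# Equality constraint: the initial condition is the unperturbed steady state.
cons = {"type": "eq", "fun": lambda th: th[0] - th[1] * th[2]}   # k1 - k2*x_ss = 0
res = minimize(sse, x0=[1.0, 1.0, 1.0], method="SLSQP",
               bounds=[(1e-6, 10.0)] * 3, constraints=[cons])
print(res.x)   # estimated (k1, k2, x_ss) honoring the steady-state constraint
```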

  2. An evaluation of collision models in the Method of Moments for rarefied gas problems

    Science.gov (United States)

    Emerson, David; Gu, Xiao-Jun

    2014-11-01

    The Method of Moments offers an attractive approach for solving gaseous transport problems that are beyond the limit of validity of the Navier-Stokes-Fourier equations. Recent work has demonstrated the capability of the regularized 13 and 26 moment equations for solving problems when the Knudsen number, Kn (the ratio of the mean free path of a gas to a typical length scale of interest), is in the range 0.1 to 1.0, the so-called transition regime. In comparison to numerical solutions of the Boltzmann equation, the Method of Moments has captured, both qualitatively and quantitatively, the results of classical test problems in kinetic theory, e.g. velocity slip in Kramers' problem, temperature jump in Knudsen layers, the Knudsen minimum, etc. However, most of these results have been obtained for Maxwell molecules, where molecules repel each other according to an inverse fifth-power rule. Recent work has incorporated more traditional collision models such as BGK, S-model, and ES-BGK, the latter being important for thermal problems where the Prandtl number can vary. We are currently investigating the impact of these collision models on fundamental low-speed problems of particular interest to micro-scale flows, which will be discussed and evaluated in the presentation. This work was supported by the Engineering and Physical Sciences Research Council under Grant EP/I011927/1 and CCP12.

  3. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the differences between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model, using the spectra of the samples measured on two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. Consequently, the coefficients of the linear models constructed from spectra measured on different instruments are similar in profile. Therefore, using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not needed, the method may be particularly useful in practical applications.
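
    The following is a simplified stand-in for this idea (not the authors' exact LMC algorithm): the slave coefficients are kept close in profile to the master coefficients while fitting a handful of spectra measured on the slave instrument; all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n_wl = 100                                         # number of wavelength channels
b_master = np.sin(np.linspace(0.0, 3.0, n_wl))     # master model coefficients (illustrative)
X_slave = rng.normal(size=(5, n_wl))               # 5 spectra measured on the slave
y_slave = X_slave @ (1.1 * b_master) + 0.01 * rng.normal(size=5)

lam = 1.0                                          # pull toward the master coefficients
A = X_slave.T @ X_slave + lam * np.eye(n_wl)
b_slave = np.linalg.solve(A, X_slave.T @ y_slave + lam * b_master)
print(np.corrcoef(b_slave, b_master)[0, 1])        # coefficient profiles stay similar
```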

  4. Demonstration of a modelling-based multi-criteria decision analysis procedure for prioritisation of occupational risks from manufactured nanomaterials.

    Science.gov (United States)

    Hristozov, Danail; Zabeo, Alex; Alstrup Jensen, Keld; Gottardo, Stefania; Isigonis, Panagiotis; Maccalman, Laura; Critto, Andrea; Marcomini, Antonio

    2016-11-01

    Several tools to facilitate the risk assessment and management of manufactured nanomaterials (MN) have been developed. Most of them require input data on physicochemical properties, toxicity and scenario-specific exposure information. However, such data are not yet readily available, and tools that can handle data gaps in a structured way to ensure transparent risk analysis for industrial and regulatory decision making are needed. This paper proposes such a quantitative risk prioritisation tool, based on a multi-criteria decision analysis algorithm, which combines advanced exposure and dose-response modelling to calculate margins of exposure (MoE) for a number of MN in order to rank their occupational risks. We demonstrated the tool in a number of workplace exposure scenarios (ES) involving the production and handling of nanoscale titanium dioxide, zinc oxide (ZnO), silver and multi-walled carbon nanotubes. The results of this application demonstrated that bag/bin filling, manual un/loading and dumping of large amounts of dry powders led to high emissions, which resulted in high risk associated with these ES. The ZnO MN revealed considerable hazard potential in vivo, which significantly influenced the risk prioritisation results. To study how variations in the input data affect the results, we performed probabilistic Monte Carlo sensitivity/uncertainty analysis, which demonstrated that the performance of the proposed model is stable against changes in the exposure and hazard input variables.

  5. Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper studies a multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems, which can provide multi-angle, multi-level and multi-stage descriptions of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is an object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.
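
    A minimal illustration of persistent storage as described above, turning an in-memory object model into a binary stream and back; Python's pickle merely stands in for whatever serialization the MDDT system actually uses:

```python
import pickle

model = {"name": "ExampleModel", "blocks": [{"id": 1, "type": "task"}]}  # hypothetical object model
stream = pickle.dumps(model)      # object model -> binary storage model
restored = pickle.loads(stream)   # binary storage model -> object model
assert restored == model
```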

  6. OBJECT ORIENTED MODELLING, A MODELLING METHOD OF AN ECONOMIC ORGANIZATION ACTIVITY

    Directory of Open Access Journals (Sweden)

    TĂNĂSESCU ANA

    2014-05-01

    Most economic organizations now use different types of information systems to facilitate their activity. There are different methodologies, methods and techniques that can be used to design information systems. In this paper, I present the advantages of using object oriented modelling in designing the information system of an economic organization. To this end, I have modelled the activity of a photo studio, using Visual Paradigm for UML as a modelling tool. I have identified the use cases for the analyzed system and presented the use case diagram. I have also carried out the static and dynamic modelling of the system, using the best-known UML diagrams.

  7. Methods of mathematical modelling continuous systems and differential equations

    CERN Document Server

    Witelski, Thomas

    2015-01-01

    This book presents mathematical modelling and the integrated process of formulating sets of equations to describe real-world problems. It describes methods for obtaining solutions of challenging differential equations stemming from problems in areas such as chemical reactions, population dynamics, mechanical systems, and fluid mechanics. Chapters 1 to 4 cover essential topics in ordinary differential equations, transport equations and the calculus of variations that are important for formulating models. Chapters 5 to 11 then develop more advanced techniques including similarity solutions, matched asymptotic expansions, multiple scale analysis, long-wave models, and fast/slow dynamical systems. Methods of Mathematical Modelling will be useful for advanced undergraduate or beginning graduate students in applied mathematics, engineering and other applied sciences.

  8. Demonstration of fundamental statistics by studying timing of electronics signals in a physics-based laboratory

    Science.gov (United States)

    Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.

    2017-07-01

    We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
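
    The core relationship demonstrated above can be reproduced in a few lines: events of a constant-rate Markov (Poisson) process have exponential inter-arrival times, and counts in fixed windows are Poisson distributed, so the count mean and variance coincide. A sketch with illustrative settings:

```python
import numpy as np

rng = np.random.default_rng(42)
rate, t_max = 5.0, 1000.0
gaps = rng.exponential(1.0 / rate, size=int(rate * t_max * 1.2))  # exponential intervals
arrivals = np.cumsum(gaps)
arrivals = arrivals[arrivals < t_max]

counts, _ = np.histogram(arrivals, bins=np.arange(0.0, t_max + 1.0))  # counts per unit window
print(counts.mean(), counts.var())   # both ≈ rate, as Poisson statistics require
```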

  9. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while combined pharmacokinetic and pharmacodynamic (PD) interaction models are used to predict the drug concentrations affecting the response to these clinical drugs. We construct a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process, and compare the results with those obtained by standard methods. The scheme is dynamically consistent and reliable in replicating the complex dynamic properties of the relevant continuous models for varying step sizes. This study aids understanding of the long-term behavior of the drug in the system and validates the efficiency of the nonstandard finite difference scheme as the method of choice.
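
    For the simplest linear PK equation dC/dt = -kC, the NSFD idea can be sketched in a few lines: replacing the step size h by the denominator function phi(h) = (1 - exp(-kh))/k makes the explicit scheme exact for this equation (the parameter values are illustrative):

```python
import numpy as np

k, h, C0, n = 0.8, 0.5, 10.0, 20
phi = (1.0 - np.exp(-k * h)) / k            # NSFD denominator function
C = np.empty(n + 1)
C[0] = C0
for i in range(n):
    C[i + 1] = C[i] * (1.0 - k * phi)       # (C[i+1] - C[i]) / phi = -k * C[i]

# The scheme reproduces the exact solution C0*exp(-k*t) at the grid points.
print(np.allclose(C, C0 * np.exp(-k * h * np.arange(n + 1))))   # True
```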

  10. The Langevin method and Hubbard-like models

    International Nuclear Information System (INIS)

    Gross, M.; Hamber, H.

    1989-01-01

    The authors reexamine the difficulties associated with application of the Langevin method to numerical simulation of models with non-positive definite statistical weights, including the Hubbard model. They show how to avoid the violent crossing of the zeroes of the weight and how to move those nodes away from the real axis. However, it still appears necessary to keep track of the sign (or phase) of the weight

  11. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reasonably well produce a remarkably wide variety of tokamak data

  12. A method to couple HEM and HRM two-phase flow models

    Energy Technology Data Exchange (ETDEWEB)

    Herard, J.M.; Hurisse, O. [Elect France, Div Rech and Dev, Dept Mecan Fluides Energies and Environm, F-78401 Chatou (France); Hurisse, O. [Univ Aix Marseille 1, Ctr Math and Informat, Lab Anal Topol and Probabil, CNRS, UMR 6632, F-13453 Marseille 13 (France); Ambroso, A. [CEA Saclay, DEN, DM2S, SFME, LETR, 91 - Gif sur Yvette (France)

    2009-04-15

    We present a method for the unsteady coupling of two distinct two-phase flow models (namely the Homogeneous Relaxation Model and the Homogeneous Equilibrium Model) through a thin interface. The basic approach relies on recent works devoted to the interfacial coupling of CFD models, and thus requires the introduction of an interface model. Several numerical test cases are used to investigate the stability of the coupling method. (authors)

  13. A method to couple HEM and HRM two-phase flow models

    International Nuclear Information System (INIS)

    Herard, J.M.; Hurisse, O.; Hurisse, O.; Ambroso, A.

    2009-01-01

    We present a method for the unsteady coupling of two distinct two-phase flow models (namely the Homogeneous Relaxation Model and the Homogeneous Equilibrium Model) through a thin interface. The basic approach relies on recent works devoted to the interfacial coupling of CFD models, and thus requires the introduction of an interface model. Several numerical test cases are used to investigate the stability of the coupling method. (authors)

  14. Comparative analysis of various methods for modelling permanent magnet machines

    NARCIS (Netherlands)

    Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.

    2017-01-01

    In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air

  15. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  16. Method for modeling social care processes for national information exchange.

    Science.gov (United States)

    Miettinen, Aki; Mykkänen, Juha; Laaksonen, Maarit

    2012-01-01

    Finnish social services include 21 service commissions of social welfare including Adoption counselling, Income support, Child welfare, Services for immigrants and Substance abuse care. This paper describes the method used for process modeling in the National project for IT in Social Services in Finland (Tikesos). The process modeling in the project aimed to support common national target state processes from the perspective of national electronic archive, increased interoperability between systems and electronic client documents. The process steps and other aspects of the method are presented. The method was developed, used and refined during the three years of process modeling in the national project.

  17. Solving the nuclear shell model with an algebraic method

    International Nuclear Information System (INIS)

    Feng, D.H.; Pan, X.W.; Guidry, M.

    1997-01-01

    We illustrate algebraic methods in the nuclear shell model through a concrete example, the fermion dynamical symmetry model (FDSM). We use this model to introduce important concepts such as dynamical symmetry, symmetry breaking, effective symmetry, and diagonalization within a higher-symmetry basis. (orig.)

  18. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)
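
    A toy version of the first stage, assuming two invented candidate constitutive laws: the most informative experiment is the input that maximizes the distance between the models' predicted outputs:

```python
import numpy as np

def model_A(u):                     # candidate law A (invented)
    return 2.0 * u

def model_B(u):                     # candidate law B (invented)
    return u + np.sin(u)

inputs = np.linspace(0.0, 5.0, 200)
distance = np.abs(model_A(inputs) - model_B(inputs))
u_star = inputs[np.argmax(distance)]
print(f"most discriminating input: u = {u_star:.2f}")
# A measurement at u_star best reveals which model generated the data.
```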

  19. Numerical Modelling of the Special Light Source with Novel R-FEM Method

    Directory of Open Access Journals (Sweden)

    Pavel Fiala

    2008-01-01

    This paper presents information about new directions in the modelling of lighting systems and an overview of methods for modelling them. The novel R-FEM method is described, which is a combination of the Radiosity method and the Finite Element Method (FEM). The paper contains modelling results and their verification by experimental measurements and by a Matlab simulation of the R-FEM method.

  20. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    International Nuclear Information System (INIS)

    Wang, Yongbo; Wu, Huapeng; Handroos, Heikki

    2013-01-01

    Highlights: • The product of exponentials (POE) formula for error modeling of a hybrid robot. • Differential Evolution (DE) algorithm for parameter identification. • Simulation results are given to verify the effectiveness of the method. Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot to improve its accuracy. The robot was designed to perform assembly and repair tasks on the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial-parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for the serial-parallel hybrid robot. Because of the highly nonlinear error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. After the parameter errors were identified, the DE algorithm was also used to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as that of the given external measurement device.

  1. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)

    2013-10-15

    Highlights: • The product of exponentials (POE) formula for error modeling of a hybrid robot. • Differential Evolution (DE) algorithm for parameter identification. • Simulation results are given to verify the effectiveness of the method. Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot to improve its accuracy. The robot was designed to perform assembly and repair tasks on the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial-parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for the serial-parallel hybrid robot. Because of the highly nonlinear error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. After the parameter errors were identified, the DE algorithm was also used to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as that of the given external measurement device.
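
    A hedged sketch of the identification step, using SciPy's differential evolution on a stand-in linear error model; the real POE-based hybrid robot model is far larger, and every name and value here is illustrative:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)
true_err = np.array([0.02, -0.01, 0.005])        # "unknown" parameter errors
poses = rng.uniform(-1.0, 1.0, size=(30, 3))     # sampled joint configurations

def measured(p):                                 # stand-in kinematics with the true errors
    return p @ (1.0 + true_err)

def cost(err):                                   # mismatch between model and measurements
    return np.sum((poses @ (1.0 + err) - measured(poses)) ** 2)

res = differential_evolution(cost, bounds=[(-0.05, 0.05)] * 3, seed=0, tol=1e-12)
print(res.x)                                     # ≈ true_err
```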

  2. Architecture oriented modeling and simulation method for combat mission profile

    Directory of Open Access Journals (Sweden)

    CHEN Xia

    2017-05-01

    In order to effectively analyze the system behavior and performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definitions from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form an executable model of the mission profile. Finally, taking an air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. This provides methodological guidance for combat mission profile design.

  3. Method and apparatus for modeling, visualization and analysis of materials

    KAUST Repository

    Aboulhassan, Amal

    2016-08-25

    A method, apparatus, and computer readable medium are provided for modeling of materials and visualization of properties of the materials. An example method includes receiving data describing a set of properties of a material, and computing, by a processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling, by the processor, the material using the geometric features and the extracted particle paths. The example method further includes generating, by the processor and based on the geometric modeling of the material, one or more visualizations regarding the material, and causing display, by a user interface, of the one or more visualizations.

  4. A fast quadrature-based numerical method for the continuous spectrum biphasic poroviscoelastic model of articular cartilage.

    Science.gov (United States)

    Stuebner, Michael; Haider, Mansoor A

    2010-06-18

    A new and efficient method for numerical solution of the continuous spectrum biphasic poroviscoelastic (BPVE) model of articular cartilage is presented. Development of the method is based on a composite Gauss-Legendre quadrature approximation of the continuous spectrum relaxation function that leads to an exponential series representation. The separability property of the exponential terms in the series is exploited to develop a numerical scheme that can be reduced to an update rule requiring retention of the strain history at only the previous time step. The cost of the resulting temporal discretization scheme is O(N) for N time steps. Application and calibration of the method is illustrated in the context of a finite difference solution of the one-dimensional confined compression BPVE stress-relaxation problem. Accuracy of the numerical method is demonstrated by comparison to a theoretical Laplace transform solution for a range of viscoelastic relaxation times that are representative of articular cartilage.
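
    The O(N) update rule can be sketched as follows: once the relaxation function is approximated by a sum of exponentials (with quadrature weights and nodes), the hereditary integral splits into one internal variable per exponential, each updated recursively from the previous step only. All weights, relaxation times, and the loading history below are illustrative:

```python
import numpy as np

w = np.array([0.40, 0.35, 0.25])       # quadrature weights (illustrative)
tau = np.array([0.01, 0.10, 1.00])     # relaxation times (illustrative)
dt, n = 1e-3, 5000
strain_rate = np.ones(n)               # e.g., a constant-rate ramp

decay = np.exp(-dt / tau)
q = np.zeros_like(w)                   # one internal variable per exponential
stress = np.empty(n)
for i in range(n):                     # O(N) total work; only the last step is kept
    q = decay * q + w * tau * (1.0 - decay) * strain_rate[i]
    stress[i] = q.sum()
```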

  5. Method for modeling post-mortem biometric 3D fingerprints

    Science.gov (United States)

    Rajeev, Srijith; Shreyas, Kamath K. M.; Agaian, Sos S.

    2016-05-01

    Despite the advancements of fingerprint recognition in 2-D and 3-D domain, authenticating deformed/post-mortem fingerprints continue to be an important challenge. Prior cleansing and reconditioning of the deceased finger is required before acquisition of the fingerprint. The victim's finger needs to be precisely and carefully operated by a medium to record the fingerprint impression. This process may damage the structure of the finger, which subsequently leads to higher false rejection rates. This paper proposes a non-invasive method to perform 3-D deformed/post-mortem finger modeling, which produces a 2-D rolled equivalent fingerprint for automated verification. The presented novel modeling method involves masking, filtering, and unrolling. Computer simulations were conducted on finger models with different depth variations obtained from Flashscan3D LLC. Results illustrate that the modeling scheme provides a viable 2-D fingerprint of deformed models for automated verification. The quality and adaptability of the obtained unrolled 2-D fingerprints were analyzed using NIST fingerprint software. Eventually, the presented method could be extended to other biometric traits such as palm, foot, tongue etc. for security and administrative applications.

  6. Dynamic model based on Bayesian method for energy security assessment

    International Nuclear Information System (INIS)

    Augutis, Juozas; Krikštolaitis, Ričardas; Pečiulytė, Sigita; Žutautaitė, Inga

    2015-01-01

    Highlights: • Methodology for dynamic indicator model construction and forecasting of indicators. • Application of the dynamic indicator model to energy system development scenarios. • Expert judgement involved using the Bayesian method. Abstract: The methodology for constructing a dynamic indicator model and forecasting indicators for the assessment of the energy security level is presented in this article. An indicator is a special index which provides numerical values for factors important to the investigated area. In real life, models of different processes take into account various factors that are time-dependent and dependent on each other. Thus, it is advisable to construct a dynamic model in order to describe these dependences. The energy security indicators are used as factors in the dynamic model. Usually, the values of the indicators are obtained from statistical data. The developed dynamic model makes it possible to forecast the variation of indicators, taking into account changes in system configuration. Energy system development is usually based on the construction of a new object. Since the parameters of the changes introduced by the new system are not exactly known, information about their influence on the indicators cannot be incorporated into the model by deterministic methods. Thus, the dynamic indicator model based on historical data is adjusted by a probabilistic model of the influence of new factors on the indicators, using the Bayesian method.

  7. Unconditionally stable methods for simulating multi-component two-phase interface models with Peng-Robinson equation of state and various boundary conditions

    KAUST Repository

    Kou, Jisheng

    2015-03-01

    In this paper, we consider multi-component dynamic two-phase interface models, which are formulated by the Cahn-Hilliard system with the Peng-Robinson equation of state and various boundary conditions. These models can be derived from the minimization problems of Helmholtz free energy or grand potential in realistic thermodynamic systems. The resulting Cahn-Hilliard systems with various boundary conditions are fully coupled and strongly nonlinear. A linear transformation is introduced to decouple the relations between different components, and as a result, the models are simplified. From this, we further propose a semi-implicit unconditionally stable time discretization scheme, which allows us to solve the Cahn-Hilliard system in a decoupled way; thus, our method can significantly reduce the computational cost and memory requirements. Mixed finite element methods are employed for the spatial discretization, and the approximation errors are analyzed for both space and time. Numerical examples are tested to demonstrate the efficiency of our proposed methods.
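
    As a hedged, single-component analogue of such a scheme, the sketch below advances the 1-D Cahn-Hilliard equation u_t = Δ(u³ - u - ε²Δu) with a standard linearly implicit Fourier-spectral step (stiff linear term implicit, nonlinear term explicit); it is a generic demo, not the paper's multi-component Peng-Robinson system:

```python
import numpy as np

N, L, eps, dt = 256, 2.0 * np.pi, 0.1, 1e-3
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
u = 0.05 * np.cos(3.0 * x)                       # small initial perturbation

for _ in range(2000):
    f_hat = np.fft.fft(u**3 - u)                 # explicit nonlinear chemical potential
    u_hat = (np.fft.fft(u) - dt * k**2 * f_hat) / (1.0 + dt * eps**2 * k**4)
    u = np.real(np.fft.ifft(u_hat))              # implicit treatment of eps^2 * Δ²u

print(u.min(), u.max())                          # phases separate toward ±1
```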

  8. A method to investigate the biomechanical alterations in Perthes’ disease by hip joint contact modeling

    DEFF Research Database (Denmark)

    Salmingo, Remel A.; Skytte, Tina Lercke; Traberg, Marie Sand

    2017-01-01

    The aim of this study was to develop a method to investigate the biomechanical alterations in Perthes' disease by finite element (FE) contact modeling using MRI. The MRI data of a unilateral Perthes' case were obtained to develop the three-dimensional FE model of the hip joint. The stress and contact pressure patterns in the unaffected hip were well distributed, whereas elevated concentrations of stress and contact pressure were found in the Perthes' hip. The highest femoral cartilage von Mises stress (3.9 MPa) and contact pressure (5.3 MPa) were found in the Perthes' hip, versus 2.4 MPa and 4.9 MPa, respectively, in the healthy hip. These results can be used in preoperative planning to obtain stress relief for the highly stressed areas in the malformed hip. This single-patient study demonstrated that the biomechanical alterations in Perthes' disease can be evaluated individually by patient-specific finite element contact modeling using MRI. A multi-patient study...

  9. A novel approach to delayed-start analyses for demonstrating disease-modifying effects in Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Hong Liu-Seifert

    One method for demonstrating disease modification is a delayed-start design, consisting of a placebo-controlled period followed by a delayed-start period wherein all patients receive active treatment. To address methodological issues in previous delayed-start approaches, we propose a new method that is robust across conditions of drug effect, discontinuation rates, and missing data mechanisms. We propose a modeling approach and test procedure to test the hypothesis of noninferiority, comparing the treatment difference at the end of the delayed-start period with that at the end of the placebo-controlled period. We conducted simulations to identify the optimal noninferiority testing procedure to ensure the method was robust across scenarios and assumptions, and to evaluate the appropriate modeling approach for analyzing the delayed-start period. We then applied this methodology to Phase 3 solanezumab clinical trial data for mild Alzheimer's disease patients. Simulation results showed a testing procedure using a proportional noninferiority margin was robust for detecting disease-modifying effects; conditions of high and moderate discontinuations; and with various missing data mechanisms. Using all data from all randomized patients in a single model over both the placebo-controlled and delayed-start study periods demonstrated good statistical performance. In analysis of solanezumab data using this methodology, the noninferiority criterion was met, indicating the treatment difference at the end of the placebo-controlled studies was preserved at the end of the delayed-start period within a pre-defined margin. The proposed noninferiority method for delayed-start analysis controls Type I error rate well and addresses many challenges posed by previous approaches. Delayed-start studies employing the proposed analysis approach could be used to provide evidence of a disease-modifying effect. This method has been communicated with FDA and has been

  10. Fuzzy cross-model cross-mode method and its application to update the finite element model of structures

    International Nuclear Information System (INIS)

    Liu Yang; Xu Dejian; Li Yan; Duan Zhongdong

    2011-01-01

    As a novel updating technique, the cross-model cross-mode (CMCM) method offers high efficiency and flexibility in selecting updating parameters. However, the success of this method depends on the accuracy of the measured modal shapes, which are usually inaccurate because measurement noise is inevitable. Furthermore, the CMCM method requires complete test modal shapes, so calculation errors may be introduced into the measured modal shapes when modal expansion or model reduction techniques are applied. The algorithm therefore faces challenges in updating the finite element (FE) models of practical complex structures. In this study, a fuzzy CMCM method is proposed in order to weaken the effect of errors in the measured modal shapes on the updated results. Two simulated examples are then used to compare the performance of the fuzzy CMCM method with the CMCM method. The test results show that the proposed method is more promising for updating the FE model of practical structures than the CMCM method.

  11. A discontinuous Galerkin method on kinetic flocking models

    OpenAIRE

    Tan, Changhui

    2014-01-01

    We study kinetic representations of flocking models. They arise from agent-based models for self-organized dynamics, such as Cucker-Smale and Motsch-Tadmor models. We prove flocking behavior for the kinetic descriptions of flocking systems, which indicates a concentration in velocity variable in infinite time. We propose a discontinuous Galerkin method to treat the asymptotic $\\delta$-singularity, and construct high order positive preserving scheme to solve kinetic flocking systems.
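
    The agent-based Cucker-Smale dynamics underlying this kinetic description fit in a few lines: each agent's velocity relaxes toward its neighbors' velocities through a distance-decaying communication kernel, and the flock reaches velocity consensus. A sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
n, dt, steps = 50, 0.05, 400
x = rng.uniform(0.0, 10.0, size=(n, 2))      # positions
v = rng.normal(size=(n, 2))                  # velocities

def psi(r):                                  # communication kernel, decays with distance
    return 1.0 / (1.0 + r**2) ** 0.25

for _ in range(steps):
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    w = psi(d)
    dv = (w[:, :, None] * (v[None, :] - v[:, None])).mean(axis=1)
    x, v = x + dt * v, v + dt * dv

print(np.std(v, axis=0))                     # ≈ 0: velocities have aligned
```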

  12. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  13. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    Science.gov (United States)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of seismic waves uses large time steps over long times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of wavefield propagation for large time steps.
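
    A toy analogue of the time-stepping idea, assuming NumPy: a symplectic (Störmer-Verlet/leapfrog) step combined with a Fourier-space spatial derivative for the 1-D acoustic wave equation u_tt = c²u_xx; this is a sketch, not the authors' full symplectic FFD scheme:

```python
import numpy as np

N, L, c, dt = 256, 1.0, 1.0, 5e-4
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
u = np.exp(-200.0 * (x - 0.5) ** 2)       # initial pressure pulse
p = np.zeros(N)                           # u_t

def lap(f):                               # spectral Laplacian
    return np.real(np.fft.ifft(-(k**2) * np.fft.fft(f)))

for _ in range(2000):                     # kick-drift-kick (symplectic) stepping
    p += 0.5 * dt * c**2 * lap(u)
    u += dt * p
    p += 0.5 * dt * c**2 * lap(u)
```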

  14. Demonstration test for transporting vitrified high-level radioactive wastes

    International Nuclear Information System (INIS)

    Ito, C.; Kato, Y.; Kato, O.

    1993-01-01

    The purpose of this study was to demonstrate the integrity of the cask in a 0.3-m free-drop test and to confirm the drop-test analytical method. 1. Test cask: The cask used in the drop test is characterized structurally as follows. (1) The cask body is covered with a neutron absorber, which in turn is covered with a thin steel plate; fins are attached between the cask body and the thin steel plate. (2) The impact energy is absorbed mainly by the inelastic deformation of the neutron absorber and thin steel plate. 2. Test methods: Electric heaters were placed in the package to reproduce real cask conditions. Strains and accelerations were measured during the drop by strain gauges and accelerometers attached to the cask. 3. Analysis: We used the DYNA-3D and NIKE-2D codes to analyze the drop test. A half-symmetry model was applied in the overall analysis to calculate the strains and accelerations at the cask body. The maximum acceleration values obtained from the overall analysis and the basket model were used to statically calculate the strains at the basket. 4. Results: The integrity of the cask was confirmed through the strain measurements and the results of the He leak test. (author)

  15. Direct diffusion tensor estimation using a model-based method with spatial and parametric constraints.

    Science.gov (United States)

    Zhu, Yanjie; Peng, Xi; Wu, Yin; Wu, Ed X; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2017-02-01

    To develop a new model-based method with spatial and parametric constraints (MB-SPC) aimed at accelerating diffusion tensor imaging (DTI) by directly estimating the diffusion tensor from highly undersampled k-space data. The MB-SPC method effectively incorporates prior information on the joint sparsity of different diffusion-weighted images using an L1-L2 norm and the smoothness of the diffusion tensor using a total variation seminorm. The undersampled k-space datasets were obtained from fully sampled DTI datasets of a simulated phantom and an ex-vivo experimental rat heart with acceleration factors ranging from 2 to 4. The diffusion tensor was directly reconstructed by solving a minimization problem with a nonlinear conjugate gradient descent algorithm. The reconstruction performance was quantitatively assessed using the normalized root mean square error (nRMSE) of the DTI indices. The MB-SPC method achieves acceptable DTI measures at acceleration factors up to 4. Experimental results demonstrate that the proposed method can estimate the diffusion tensor more accurately than most existing methods operating at higher net acceleration factors. The proposed method can significantly reduce artifacts, particularly at higher acceleration factors or lower SNRs. It can easily be adapted to MR relaxometry parameter mapping and is thus useful in the characterization of biological tissue such as nerve, muscle, and heart tissue.

  16. A heteroscedastic measurement error model for method comparison data with replicate measurements.

    Science.gov (United States)

    Nawarathna, Lakshika S; Choudhary, Pankaj K

    2015-03-30

    Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset.

  17. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    Implementations of the DLM are, however, not very versatile in terms of the geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...

  18. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

    One of the problems that occur when working with regression models concerns sample size: since the statistical methods used in inferential analyses are asymptotic, a small sample may compromise the analysis because the estimates will be biased. An alternative is to use the bootstrap methodology, which in its non-parametric version does not require guessing or knowing the probability distribution that generated the original sample. In this work we used a small set of soybean yield data together with physical and chemical soil properties to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points, and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties that were significant in the construction of the soybean yield regression model, to construct the confidence intervals of the parameters, and to identify the points that had great influence on the estimated parameters. (Author)
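
    A minimal sketch of the case-resampling (non-parametric) bootstrap for a small-sample regression, with invented data: resample rows with replacement, refit, and take percentile confidence intervals for the slope:

```python
import numpy as np

rng = np.random.default_rng(11)
X = rng.uniform(0.0, 1.0, size=20)                # small sample, n = 20
y = 2.0 + 3.0 * X + rng.normal(0.0, 0.5, size=20)

slopes = []
for _ in range(2000):
    idx = rng.integers(0, len(X), len(X))         # resample cases with replacement
    slope, intercept = np.polyfit(X[idx], y[idx], 1)
    slopes.append(slope)

lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"95% bootstrap CI for the slope: [{lo:.2f}, {hi:.2f}]")
```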

  19. Master of Puppets: An Animation-by-Demonstration Computer Puppetry Authoring Framework

    Science.gov (United States)

    Cui, Yaoyuan; Mousas, Christos

    2018-03-01

    This paper presents Master of Puppets (MOP), an animation-by-demonstration framework that allows users to control the motion of virtual characters (puppets) in real time. In the first step, the user is asked to perform the necessary actions that correspond to the character's motions. The user's actions are recorded, and a hidden Markov model is used to learn the temporal profile of the actions. During the runtime of the framework, the user controls the motions of the virtual character based on the specified activities. The advantage of the MOP framework is that it recognizes and follows the progress of the user's actions in real time. Based on the forward algorithm, the method predicts the evolution of the user's actions, which corresponds to the evolution of the character's motion. This method treats characters as puppets that can perform only one motion at a time. This means that combinations of motion segments (motion synthesis), as well as the interpolation of individual motion sequences, are not provided as functionalities. By implementing the framework and presenting several computer puppetry scenarios, its efficiency and flexibility in animating virtual characters is demonstrated.
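
    The forward-algorithm filtering step described above can be sketched with a toy HMM: the alpha recursion yields, after each observation, a belief over the hidden action phases, which is what lets the framework follow the progress of the user's action in real time. All matrices below are invented:

```python
import numpy as np

A = np.array([[0.8, 0.2, 0.0],      # transitions: action phases advance left to right
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])
B = np.array([[0.7, 0.2, 0.1],      # emission probabilities for 3 observation symbols
              [0.1, 0.7, 0.2],
              [0.1, 0.2, 0.7]])
pi = np.array([1.0, 0.0, 0.0])      # actions always start in phase 0

obs = [0, 0, 1, 1, 2, 2]            # observed symbols over time
alpha = pi * B[:, obs[0]]
alpha /= alpha.sum()
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]   # forward recursion
    alpha /= alpha.sum()            # normalized filtering distribution
    print(alpha)                    # belief over phases -> drives the puppet's motion
```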

  20. A combined method to estimate parameters of the thalamocortical model from a heavily noise-corrupted time series of action potential

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Ruofan; Wang, Jiang; Deng, Bin, E-mail: dengbin@tju.edu.cn; Liu, Chen; Wei, Xile [Department of Electrical and Automation Engineering, Tianjin University, Tianjin (China); Tsang, K. M.; Chan, W. L. [Department of Electrical Engineering, The Hong Kong Polytechnic University, Kowloon (Hong Kong)

    2014-03-15

    A combined method composed of the unscented Kalman filter (UKF) and the synchronization-based method is proposed for estimating electrophysiological variables and parameters of a thalamocortical (TC) neuron model, which is commonly used for studying Parkinson's disease because of its relay role connecting the basal ganglia and the cortex. In this work, we consider the condition when only a time series of the action potential with heavy noise is available. Numerical results demonstrate not only that this method can successfully estimate the model parameters from the extracted time series of the action potential, but also that it performs much better than the UKF or the synchronization-based method alone, with higher accuracy and better robustness against noise, especially under severe noise conditions. Considering the rather important role of the TC neuron in normal and pathological brain functions, the exploration of this method to estimate the critical parameters could have important implications for the study of its nonlinear dynamics and further treatment of Parkinson's disease.

  3. Demonstrating the capability and reliability of NDT inspections

    International Nuclear Information System (INIS)

    Wooldridge, A.B.

    1996-01-01

    This paper discusses some recent developments in demonstrating the capability of ultrasonics, eddy currents and radiography both theoretically and in practice, and indicates where further evidence is desirable. Magnox Electric has been involved in the development of theoretical models for all three of these inspection methods. Feedback from experience on plant is also important to avoid overlooking any practical limitations of the inspections, and to ensure that the metallurgical characteristics of potential defects have been properly taken into account when designing and qualifying the inspections. For critical applications, inspection techniques are often supported by a Technical Justification which draws on all the relevant theoretical and experimental evidence, as well as experience of inspections on plant. The role of technical justifications is discussed in the context of inspection qualification. (author)

  4. Frictionless Demonstration Using Fine Plastic Beads For Teaching Mechanics

    International Nuclear Information System (INIS)

    Ishii, K.; Kagawa, K.; Khumaeni, A.; Kurniawan, K. H.

    2010-01-01

    New equipment for demonstrating the laws of mechanics has successfully been constructed utilizing fine spherical plastic beads (0.3 mm in diameter). The fine plastic beads function as ball bearings that reduce the friction between the object and the plate surface. By this method, a quantitative verification of the energy conservation law has been carried out with an error of less than 3%. A strong advantage of this frictionless method is that the same objects, such as Petri dishes, can always be used for demonstrating many kinds of mechanics laws, including the first, second, and third laws of motion, the momentum conservation law, and the energy conservation law. This demonstration method benefits students, who can then understand the laws of mechanics systematically, within a unified conceptual framework and without confusion.
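
    A hypothetical worked example of the energy-balance comparison behind the quoted error figure (the numbers below are illustrative, not the authors' measurements):

      # Compare potential energy released with kinetic energy gained for an
      # object sliding on the bead layer; values below are invented.
      g = 9.81        # gravitational acceleration, m/s^2
      m = 0.050       # mass of the Petri-dish object, kg
      h = 0.100       # vertical drop, m
      v = 1.386       # measured speed at the bottom, m/s

      pe = m * g * h                  # potential energy released, J
      ke = 0.5 * m * v**2             # kinetic energy gained, J
      print(f"discrepancy: {abs(pe - ke) / pe * 100:.1f} %")   # about 2 % here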

  5. Uranium soils integrated demonstration, 1993 status

    International Nuclear Information System (INIS)

    Nuhfer, K.

    1994-01-01

    The Fernald Environmental Management Project (FEMP), operated by the Fernald Environmental Restoration Management Corporation (FERMCO) for the DOE, was selected as the host site for the Uranium Soils Integrated Demonstration. The Uranium Soils ID was established to develop and demonstrate innovative remediation methods that address the cradle-to-grave elements involved in the remediation of soils contaminated with radionuclides, principally uranium. The participants in the ID are from FERMCO as well as over 15 other organizations from DOE, private industry and universities. Some of the organizations are technology providers, while others are members of the technical support groups that were formed to provide technical reviews, recommendations and labor. The following six Technical Support Groups (TSGs) were formed to focus on the objective of the ID: Characterization, Excavation, Decontamination, Waste Treatment/Disposal, Regulatory, and Performance Assessment. This paper will discuss the technical achievements made to date in the program as well as future program plans. The focus will be on the real-time analysis devices being developed and demonstrated, the approach used to characterize the physical/chemical properties of the uranium waste form in the soil, and lab-scale studies on methods to remove the uranium from the soil.

  6. Acceleration methods and models in Sn calculations

    International Nuclear Information System (INIS)

    Sbaffoni, M.M.; Abbate, M.J.

    1984-01-01

    In some neutron transport problems solved by the discrete ordinates method, certain peculiarities are relatively common, such as the generation of negative fluxes, slow and unreliable convergence, and solution instabilities. The flux-calculation models and acceleration methods included in the most widely used codes were analyzed with regard to their use in problems characterized by a strong upscattering effect. Some conclusions derived from this analysis are presented, as well as a new method for performing the upscattering scaling that solves the aforementioned problems in such cases. This method has been included in the DOT3.5 code (two-dimensional discrete ordinates radiation transport code), generating a new version of wider applicability. (Author)
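
    The abstract does not describe the DOT3.5 scaling itself, but the slow convergence caused by strong upscattering, and a simple global-rebalance style of acceleration, can be sketched on a three-group infinite-medium balance (all cross sections invented):

      import numpy as np

      # Balance per group g: T[g]*phi[g] = q[g] + sum_{g'!=g} S[g',g]*phi[g'],
      # where S[g',g] scatters group g' into group g (note the upscattering).
      T = np.array([1.0, 1.2, 1.5])                # removal cross sections
      S = np.array([[0.0, 0.4, 0.2],
                    [0.3, 0.0, 0.5],
                    [0.2, 0.6, 0.0]])
      q = np.array([1.0, 0.5, 0.2])                # external source

      def solve(rebalance, tol=1e-10, max_outer=500):
          phi = np.ones(3)
          for outer in range(1, max_outer + 1):
              old = phi.copy()
              for g in range(3):                   # Gauss-Seidel sweep over groups
                  inscatter = sum(S[gp, g] * phi[gp] for gp in range(3) if gp != g)
                  phi[g] = (q[g] + inscatter) / T[g]
              if rebalance:                        # scale fluxes to restore balance
                  absorption = T - S.sum(axis=1)   # net removal per group
                  phi *= q.sum() / (absorption * phi).sum()
              if np.max(np.abs(phi - old) / phi) < tol:
                  return outer
          return max_outer

      print("outer sweeps without rebalance:", solve(False))
      print("outer sweeps with rebalance:   ", solve(True))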

  7. Quantitative aspects of the cytochemical demonstration of glucose-6-phosphate dehydrogenase with tetrazolium salts studied in a model system of polyacrylamide films

    NARCIS (Netherlands)

    van Noorden, C. J.; Tas, J.; Sanders, J. A.

    1981-01-01

    The enzyme cytochemical demonstration of glucose-6-phosphate dehydrogenase (G6PDH) with several tetrazolium salts has been studied with an artificial model of polyacrylamide films incorporating the enzyme, which enabled the correlation of cytochemical and biochemical data. In the model films no

  8. A wavelet method for modeling and despiking motion artifacts from resting-state fMRI time series.

    Science.gov (United States)

    Patel, Ameera X; Kundu, Prantik; Rubinov, Mikail; Jones, P Simon; Vértes, Petra E; Ersche, Karen D; Suckling, John; Bullmore, Edward T

    2014-07-15

    The impact of in-scanner head movement on functional magnetic resonance imaging (fMRI) signals has long been established as undesirable. These effects have been traditionally corrected by methods such as linear regression of head movement parameters. However, a number of recent independent studies have demonstrated that these techniques are insufficient to remove motion confounds, and that even small movements can spuriously bias estimates of functional connectivity. Here we propose a new data-driven, spatially-adaptive, wavelet-based method for identifying, modeling, and removing non-stationary events in fMRI time series, caused by head movement, without the need for data scrubbing. This method involves the addition of just one extra step, the Wavelet Despike, in standard pre-processing pipelines. With this method, we demonstrate robust removal of a range of different motion artifacts and motion-related biases including distance-dependent connectivity artifacts, at a group and single-subject level, using a range of previously published and new diagnostic measures. The Wavelet Despike is able to accommodate the substantial spatial and temporal heterogeneity of motion artifacts and can consequently remove a range of high and low frequency artifacts from fMRI time series that may be linearly or non-linearly related to physical movements. Our methods are demonstrated by the analysis of three cohorts of resting-state fMRI data, including two high-motion datasets: a previously published dataset on children (N=22) and a new dataset on adults with stimulant drug dependence (N=40). We conclude that there is a real risk of motion-related bias in connectivity analysis of fMRI data, but that this risk is generally manageable, by effective time series denoising strategies designed to attenuate synchronized signal transients induced by abrupt head movements. The Wavelet Despiking software described in this article is freely available for download at www
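
    A toy illustration of wavelet-domain spike attenuation on a single time series; this is not the Wavelet Despike algorithm itself (which is spatially adaptive and models non-stationary events across the brain), just the underlying idea, using the third-party PyWavelets package and a simple coefficient-clipping rule:

      import numpy as np
      import pywt  # PyWavelets

      # Transient motion-like artifacts concentrate in large wavelet
      # coefficients, which can be detected robustly and attenuated.
      rng = np.random.default_rng(0)
      n = 256
      t = np.arange(n)
      signal = np.sin(2 * np.pi * t / 64)            # slow "neural" signal
      data = signal + 0.1 * rng.standard_normal(n)
      data[100:104] += 3.0                           # abrupt motion-like spike

      coeffs = pywt.wavedec(data, 'db4', level=4)
      clean_coeffs = [coeffs[0]]                     # keep approximation as-is
      for d in coeffs[1:]:
          sigma = np.median(np.abs(d)) / 0.6745      # robust scale estimate (MAD)
          clean_coeffs.append(np.clip(d, -4 * sigma, 4 * sigma))
      clean = pywt.waverec(clean_coeffs, 'db4')[:n]

      print("peak error, raw:     ", np.max(np.abs(data - signal)))
      print("peak error, despiked:", np.max(np.abs(clean - signal)))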

  9. Flambeau River Biofuels Demonstration Plant

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Robert J. [Flambeau River Biofuels, Inc., Park Falls, WI (United States)

    2012-07-30

    Flambeau River BioFuels, Inc. (FRB) proposed to construct a demonstration biomass-to-liquids (BTL) biorefinery in Park Falls, Wisconsin. The biorefinery was to be co-located at the existing pulp and paper mill, Flambeau River Papers, and when in full operation would both generate renewable energy – making Flambeau River Papers the first pulp and paper mill in North America to be nearly fossil fuel free – and produce liquid fuels from abundant and renewable lignocellulosic biomass. The biorefinery would serve to validate the thermochemical pathway and economic models for BTL production using forest residuals and wood waste, providing a basis for proliferating BTL conversion technologies throughout the United States. It was a project goal to create a compelling new business model for the pulp and paper industry, and support the nation’s goal for increasing renewable fuels production and reducing its dependence on foreign oil. FRB planned to replicate this facility at other paper mills after this first demonstration scale plant was operational and had proven technical and economic feasibility.

  10. Design of demand side response model in energy internet demonstration park

    Science.gov (United States)

    Zhang, Q.; Liu, D. N.

    2017-08-01

    The implementation of demand-side response can bring many benefits to the power system, users and society, but there are still many problems in actual operation. This paper first analyses the current situation and problems of demand-side response. On this basis, it analyses the advantages of implementing demand-side response in the energy Internet demonstration park. Finally, three kinds of feasible demand-side response modes for the energy Internet demonstration park are designed.

  11. An Application of Taylor Models to the Nakao Method on ODEs

    OpenAIRE

    Yamamoto, Nobito; Komori, Takashi

    2009-01-01

    The authors give a short survey of validated computation for initial value problems for ODEs, especially Taylor model methods. They then propose an application of Taylor models to the Nakao method, which has been developed for numerical verification methods for PDEs, and apply it to initial value problems for ODEs, with some numerical experiments.

  12. Early Prevention Method for Power System Instability

    DEFF Research Database (Denmark)

    Dmitrova, Evgenia; Wittrock, Martin Lindholm; Jóhannsson, Hjörtur

    2015-01-01

    are then determined, using a grid transformation coefficient (GTC) and a numerical, iterative solution to an equation system. The stability criteria can then be assessed to evaluate the sufficiency of a suggested countermeasure. The method is demonstrated on a synthetic 8-bus network and a 464-bus model...... of the Western Denmark transmission grid. The method successfully demonstrates its ability to efficiently identify and evaluate countermeasures for a large, practical system....

  13. Laboratory Demonstration of Low-Cost Method for Producing Thin Film on Nonconductors.

    Science.gov (United States)

    Ebong, A. U.; And Others

    1991-01-01

    A low-cost procedure for metallizing a silicon p-n junction diode by electroless nickel plating is reported. The procedure demonstrates that expensive salts can be excluded without affecting the results. The experimental procedure, measurement, results, and discussion are included. (Author/KR)

  14. Deformation data modeling through numerical models: an efficient method for tracking magma transport

    Science.gov (United States)

    Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.

    2017-12-01

    Nowadays, multivariate data collection and robust physical models at volcano observatories are becoming crucial for effective volcano monitoring. Nevertheless, forecasting volcanic eruptions is notoriously difficult. Within this framework, one of the most promising methods for evaluating volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although solving the inverse problem for near-real-time interpretation is still very time consuming. Here, we present a method that can be used to efficiently estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose an FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually leads to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observation points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a genetic algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up source parameter estimation for a given volcano showing signs of unrest.
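
    A stripped-down sketch of this kind of source inversion, replacing the FE Green functions with the analytical Mogi point-source formula and the genetic algorithm with SciPy's differential evolution (all values invented):

      import numpy as np
      from scipy.optimize import differential_evolution

      nu = 0.25                                    # Poisson's ratio

      def mogi_uz(r, depth, dV):
          # Vertical surface displacement of a Mogi point source of volume
          # change dV at depth, at radial distance r from its axis.
          return (1.0 - nu) / np.pi * dV * depth / (r**2 + depth**2) ** 1.5

      # Synthetic "observed" displacements from an unknown source.
      r_obs = np.linspace(500.0, 5000.0, 20)       # station distances, m
      rng = np.random.default_rng(1)
      uz_obs = mogi_uz(r_obs, 3000.0, 2.0e6) + 0.002 * rng.standard_normal(20)

      def misfit(params):
          depth, dV = params
          return np.sum((mogi_uz(r_obs, depth, dV) - uz_obs) ** 2)

      result = differential_evolution(misfit, seed=1,
                                      bounds=[(500.0, 10000.0), (1e5, 1e7)])
      print("recovered depth [m], dV [m^3]:", result.x)   # ~3000 m, ~2e6 m^3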

  15. Statistical learning modeling method for space debris photometric measurement

    Science.gov (United States)

    Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen

    2016-03-01

    Photometric measurement is an important way to identify space debris, but present photometric measurement methods impose many constraints on the star image and require complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, in which the statistical information of star images is used to eliminate measurement noise. First, the known stars in the star image are divided into training stars and testing stars. Then, the training stars are used in a least-squares fit to construct the photometric measurement model, and the testing stars are used to calculate the measurement accuracy of the model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
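
    A minimal sketch of the train/test procedure described above, with a simple linear photometric model fitted by least squares to synthetic star data (the model form and all values are invented for illustration):

      import numpy as np

      # Fit catalog magnitude as a linear function of instrumental magnitude
      # on training stars; evaluate the model's accuracy on held-out stars.
      rng = np.random.default_rng(2)
      instr = rng.uniform(-12.0, -6.0, 40)             # instrumental magnitudes
      catalog = instr + 25.0 + 0.05 * rng.standard_normal(40)

      train, test = np.arange(40) < 25, np.arange(40) >= 25
      a, b = np.polyfit(instr[train], catalog[train], 1)   # least-squares fit

      rms = np.sqrt(np.mean((a * instr[test] + b - catalog[test]) ** 2))
      print(f"model: catalog = {a:.3f} * instrumental + {b:.3f}")
      print(f"testing-star RMS error: {rms:.3f} mag")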

  16. A Novel 3D Imaging Method for Airborne Downward-Looking Sparse Array SAR Based on Special Squint Model

    Directory of Open Access Journals (Sweden)

    Xiaozhen Ren

    2014-01-01

    Three-dimensional (3D) imaging technology based on antenna arrays is one of the most important 3D synthetic aperture radar (SAR) high-resolution imaging modes. In this paper, a novel 3D imaging method is proposed for airborne down-looking sparse array SAR, based on the imaging geometry and the characteristics of the echo signal. The key point of the proposed algorithm is the introduction of a special squint model in cross-track processing to obtain accurate focusing. In this special squint model, point targets with different cross-track positions have different squint angles in the same range resolution cell, which is different from conventional squint SAR. However, after theoretical analysis and formula derivation, the imaging procedure can be processed with a uniform reference function, and the phase compensation factors and algorithm realization procedure are demonstrated in detail. As the method requires only Fourier transforms and multiplications and thus avoids interpolations, it is computationally efficient. Simulations with point scatterers are used to validate the method.

  17. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  18. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  19. A Novel Method for Decoding Any High-Order Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2014-01-01

    This paper proposes a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order hidden Markov model by Hadar's transformation. Next, the optimal state sequence of the equivalent first-order model is recognized by the existing Viterbi algorithm for first-order hidden Markov models. Finally, the optimal state sequence of the high-order model is inferred from that of the equivalent first-order model. This method provides a unified algorithmic framework for decoding hidden Markov models, including the first-order model and any high-order model.
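
    The second stage of the pipeline, Viterbi decoding of the equivalent first-order model, is the textbook algorithm; a minimal log-domain sketch with invented model values:

      import numpy as np

      # First-order Viterbi decoder: most likely state path for `obs`.
      def viterbi(obs, pi, A, B):
          T, n = len(obs), len(pi)
          logd = np.empty((T, n))                  # best log-probabilities
          back = np.zeros((T, n), dtype=int)       # backpointers
          logd[0] = np.log(pi) + np.log(B[:, obs[0]])
          for t in range(1, T):
              for j in range(n):
                  scores = logd[t - 1] + np.log(A[:, j])
                  back[t, j] = np.argmax(scores)
                  logd[t, j] = scores[back[t, j]] + np.log(B[j, obs[t]])
          path = [int(np.argmax(logd[-1]))]
          for t in range(T - 1, 0, -1):            # trace backpointers
              path.append(int(back[t, path[-1]]))
          return path[::-1]

      pi = np.array([0.6, 0.4])                    # initial state distribution
      A = np.array([[0.7, 0.3], [0.4, 0.6]])       # transitions
      B = np.array([[0.5, 0.4, 0.1],               # emissions over 3 symbols
                    [0.1, 0.3, 0.6]])
      print(viterbi([0, 1, 2, 2, 1], pi, A, B))    # -> [0, 0, 1, 1, 1]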

  20. A Nationwide Survey of Patient Centered Medical Home Demonstration Projects

    Science.gov (United States)

    Bitton, Asaf; Martin, Carina

    2010-01-01

    Background The patient-centered medical home (PCMH) has received considerable attention as a potential way to improve primary care quality and limit cost growth. Little information exists that systematically compares PCMH pilot projects across the country. Design Cross-sectional key-informant interviews. Participants Leaders from existing PCMH demonstration projects with external payment reform. Measurements We used a semi-structured interview tool with the following domains: project history, organization and participants, practice requirements and selection process, medical home recognition, payment structure, practice transformation, and evaluation design. Results A total of 26 demonstrations in 18 states were interviewed. Current demonstrations include over 14,000 physicians caring for nearly 5 million patients. A majority of demonstrations are single payer, and most utilize a three-component payment model (traditional fee for service, per person per month fixed payments, and bonus performance payments). The median incremental revenue per physician per year was $22,834 (range $720 to $91,146). Two major practice transformation models were identified: consultative, and implementation of the chronic care model. A majority of demonstrations did not have well-developed evaluation plans. Conclusion Current PCMH demonstration projects with external payment reform include large numbers of patients and physicians as well as a wide spectrum of implementation models. Key questions exist around the adequacy of current payment mechanisms and evaluation plans as public and policy interest in the PCMH model grows. Electronic supplementary material The online version of this article (doi:10.1007/s11606-010-1262-8) contains supplementary material, which is available to authorized users. PMID:20467907