WorldWideScience

Sample records for modeling methodology starts

  1. Agile methodology selection criteria: IT start-up case study

    Science.gov (United States)

    Micic, Lj

    2017-05-01

    Project management in modern IT companies is often based on agile methodologies, which have several advantages over traditional methodologies such as waterfall. Since clients sometimes change a project during development, it is crucial for an IT company to choose carefully which methodology to implement and whether to rely mostly on one or on a combination of several. Among the modern methodologies in common use, Scrum, Kanban and XP (extreme programming) are the most widespread. Sometimes companies rely mostly on the tools and procedures of one methodology, but quite often they combine elements of several. Because these methodologies are only frameworks, companies can adapt them to their specific projects and other constraints. Agile methodologies are still in limited use in Bosnia, but more and more IT companies are adopting them, not only because they are standard practice for clients abroad but also because they are increasingly the only way to deliver a quality product on time. It remains challenging, however, to decide which methodology, or combination of methodologies, a company should implement and how to connect it to its own projects, organizational framework and HR management. This paper presents a case study based on a local IT start-up and delivers a solution based on a theoretical framework and the practical limitations of the case company.

  2. Archetype modeling methodology.

    Science.gov (United States)

    Moner, David; Maldonado, José Alberto; Robles, Montserrat

    2018-03-01

    Clinical Information Models (CIMs) expressed as archetypes play an essential role in the design and development of current Electronic Health Record (EHR) information structures. Although the literature reports many experiences of using archetypes, a comprehensive and formal methodology for archetype modeling does not exist. Having a modeling methodology is essential to develop quality archetypes, in order to guide the development of EHR systems and to allow the semantic interoperability of health data. In this work, an archetype modeling methodology is proposed. This paper describes its phases, the inputs and outputs of each phase, and the involved participants and tools. It also includes a description of the possible strategies for organizing the modeling process. The proposed methodology is inspired by existing best practices of CIM, software and ontology development. The methodology has been applied and evaluated in regional and national EHR projects. The application of the methodology provided useful feedback and improvements, and confirmed its advantages. The conclusion of this work is that having a formal methodology for archetype development facilitates the definition and adoption of interoperable archetypes, improves their quality, and facilitates their reuse among different information systems and EHR projects. Moreover, the proposed methodology can also be a reference for the development of CIMs using any other formalism.

  3. Design Methodology of Camshaft Driven Charge Valves for Pneumatic Engine Starts

    Directory of Open Access Journals (Sweden)

    Moser Michael M.

    2015-01-01

    Idling losses constitute a significant amount of the fuel consumption of internal combustion engines. Therefore, shutting down the engine during idling phases can improve its overall efficiency. For driver acceptance a fast restart of the engine must be guaranteed. A fast engine start can be performed using a powerful electric starter and an appropriate battery which are found in hybrid electric vehicles, for example. However, these devices involve additional cost and weight. An alternative method is to use a tank with pressurized air that can be injected directly into the cylinders to start the engine pneumatically. In this paper, pneumatic engine starts using camshaft driven charge valves are discussed. A general methodology for an air-optimal charge valve design is presented which can deal with various requirements. The proposed design methodology is based on a process model representing pneumatic engine operation. A design example for a two-cylinder engine is shown, and the resulting optimized pneumatic start is experimentally verified on a test bench engine. The engine’s idling speed of 1200 rpm can be reached within 350 ms for an initial pressure in the air tank of 10 bar. A detailed system analysis highlights the characteristics of the optimal design found.

  4. Early Start DENVER Model: A Meta - analysis

    Directory of Open Access Journals (Sweden)

    Jane P. Canoy

    2015-11-01

    Each child with Autism Spectrum Disorder differs from other children in symptoms, skills and type of impairment; this is why the word “spectrum” is included in the name of the disorder. Eapen, Crncec, and Walter (2013) claimed that there is emerging evidence that early intervention offers the greatest capacity for a child’s development during the first years of life, when “brain plasticity” is high. The only intervention program model for children as young as 18 months that has been validated in a randomized clinical trial is the Early Start Denver Model (ESDM). This study aimed to determine the effectiveness of the ESDM for young children with Autism Spectrum Disorders, using the meta-analysis method. The researcher drew on studies related to the ESDM published in refereed journals and available online. Five studies were included, totaling 149 children exposed to the ESDM. To examine the “pooled effects” of the ESDM on a variety of outcomes, a meta-analytic procedure was performed after extraction of the outcome data. Comprehensive Meta Analysis Version 3.3.070 was used to analyze the data. The effectiveness of the ESDM for young children with Autism Spectrum Disorder (ASD) depends strongly on the intensity of the intervention and on how young the child is. This study provides a basis for effectively implementing early interventions, such as the ESDM, that show strong outcome effects for children with Autism Spectrum Disorder.

  5. BIOMECHANICAL MODEL OF THE SPRINT START

    Directory of Open Access Journals (Sweden)

    Milan Čoh

    2007-05-01

    The study analysed and identified the major kinematic parameters of the sprint start and block acceleration phases that influence sprint running results. The biomechanical measurements and kinematic analysis were performed on Slovenia's best sprinter during his preparation for the European Athletics Championships in Gothenburg 2006, where Matic Osovnikar won the bronze medal in the 100-metre run and set the Slovenian national record of 10.14 s. The kinematic parameters of the sprint start were established on the basis of a 2-D kinematic analysis, using a high-speed camera with a frequency of 200 frames/s. The measurements of block acceleration were made by means of OPTO TRACK technology and an infra-red photocell system. The athlete performed five 20 m low-start sprints in constant and controlled measurement conditions. The subject of the study was the set position from the point of view of the height of the total body centre of gravity (TBCG), the block time at the front and rear blocks, block velocity, the block face angle, the velocity of the TBCG in the first three metres, and the kinematic parameters of block acceleration in the first ten steps. The study showed the following to be the key performance factors in the two phases of sprint running: medium start block distance, block velocity, low block face angles, first step length, low vertical rise of the TBCG in the first three metres of block acceleration, the contact phase/flight phase index in the first ten steps, and the optimal ratio between step length and step frequency.

  6. Getting Started with Topic Modeling and MALLET

    Directory of Open Access Journals (Sweden)

    Shawn Graham

    2012-09-01

    In this lesson you will first learn what topic modeling is and why you might want to employ it in your research. You will then learn how to install and work with the MALLET natural language processing toolkit to do so. MALLET involves modifying an environment variable (essentially, setting up a short-cut so that your computer always knows where to find the MALLET program) and working with the command line (i.e., by typing in commands manually, rather than clicking on icons or menus). We will run the topic modeller on some example files, and look at the kinds of outputs that MALLET generates. This will give us a good idea of how it can be used on a corpus of texts to identify topics found in the documents without reading them individually.
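
    To see what this looks like in practice, here is a minimal sketch (assuming MALLET is already unzipped and Java is installed) of the two commands the lesson builds up to, wrapped in Python's subprocess; the MALLET path, corpus directory and topic count are placeholders to adapt:

```python
import subprocess

MALLET = "/opt/mallet/bin/mallet"   # placeholder path; the lesson sets MALLET_HOME instead
CORPUS_DIR = "sample-data/web/en"   # any directory of plain-text files

# Step 1: import the corpus into MALLET's binary format.
subprocess.run([MALLET, "import-dir",
                "--input", CORPUS_DIR,
                "--output", "tutorial.mallet",
                "--keep-sequence",
                "--remove-stopwords"], check=True)

# Step 2: train a topic model and write the topic keys and the
# per-document topic proportions to plain-text files.
subprocess.run([MALLET, "train-topics",
                "--input", "tutorial.mallet",
                "--num-topics", "20",
                "--output-topic-keys", "tutorial_keys.txt",
                "--output-doc-topics", "tutorial_composition.txt"], check=True)
```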

  7. Modeling Cold Start in a Polymer-Electrolyte Fuel Cell

    Science.gov (United States)

    Balliet, Ryan James

    Polymer-electrolyte fuel cells (PEFCs) are electrochemical devices that create electricity by consuming hydrogen and oxygen, forming water and heat as byproducts. PEFCs have been proposed for use in applications that may require start-up in environments with temperatures below 0 degrees C. Doing so requires that the cell heat up, and when its own waste heat is used to do so, the process is referred to here as "cold start." However, at low temperatures the cell's product water freezes, and if the temperature does not rise fast enough, the accumulation of ice in the cathode catalyst layer (cCL) can reduce cell performance significantly, extending the time required to heat up. In addition to reducing performance during cold start, under some conditions the accumulation of ice can lead to irreversible structural degradation of the cCL. The objective of this dissertation is to construct and verify a cold-start model for a single PEFC, use it to improve understanding of cold-start behavior, and to demonstrate how this understanding can lead to better start protocols and material properties. The macrohomogeneous model that has been developed to meet the objective is two-dimensional, transient, and nonisothermal. A key differentiating feature is the inclusion of water in all four of the possible phases: ice, liquid, gas, and membrane. In order to predict water content in the ice, liquid, and gas phases that are present in the porous media, the thermodynamics of phase equilibrium are revisited, and a method for relating phase pressures to water content in each of these phases is developed. Verification of the model is performed by comparing model predictions for cell behavior during parametric studies to measured values taken from various sources. In most cases, good agreement is observed between the model and the experiments. Results from the simulations are used to explain the trends that are observed. The verified cold-start model is deployed to determine a cold-start

  8. Strategic Management: Business Model Canvas for Start-Up Company

    OpenAIRE

    Sonninen, Anna

    2016-01-01

    This thesis describes the development of Kakunpala as a start-up company, because development processes and ideas are the key factors for the company's future success. The aim of the thesis is to understand the company's strategic management and development processes, by using SWOT analysis before and after development, by applying the Business Model Canvas template to the start-up, and by describing the customer journey: what relations Kakunpala has with its clients and its processes...

  9. University Start-ups: A Better Business Model

    Science.gov (United States)

    Dehn, J.; Webley, P. W.

    2015-12-01

    Many universities look to start-up companies as a way to attract faculty and support research and students as traditional federal sources of funding become harder to come by. University-affiliated start-up companies can apply for a broader suite of grants, as well as market their services to a broad customer base. University administrators often see this as a potential panacea, but national statistics show this is not the case. Rarely do universities profit significantly from their start-ups. With a success rate of around 20%, most start-ups end up costing the university money as well as faculty time. For the faculty, assuming they want to continue in academia, a start-up is often unattractive because it commonly leads out of academia: running a successful business while maintaining a strong teaching and research load is almost impossible. Most business models and business professionals operate outside of academia, and the models taught in business schools do not merge well with a university environment. To mitigate this, a new business model is proposed in which university start-ups are aligned with the academic and research missions of the university. A university start-up must work within the university and directly support research and students, and the work done maintaining the business must be recognized as part of the faculty member's university obligations. This requires a complex conflict-of-interest management plan, and the companies must be non-profit in order not to jeopardize the university's status. This approach may not work well for all universities, but it would be ideal for many as a way to conserve resources and ensure a harmonious relationship with their start-ups and faculty.

  10. Models to predict the start of the airborne pollen season

    Science.gov (United States)

    Siniscalco, Consolata; Caramiello, Rosanna; Migliavacca, Mirco; Busetto, Lorenzo; Mercalli, Luca; Colombo, Roberto; Richardson, Andrew D.

    2015-07-01

    Aerobiological data can be used as indirect but reliable measures of flowering phenology to analyze the response of plant species to ongoing climate changes. The aims of this study are to evaluate the performance of several phenological models for predicting the pollen start of season (PSS) in seven spring-flowering trees (Alnus glutinosa, Acer negundo, Carpinus betulus, Platanus occidentalis, Juglans nigra, Alnus viridis, and Castanea sativa) and in two summer-flowering herbaceous species (Artemisia vulgaris and Ambrosia artemisiifolia), using a 26-year aerobiological data set collected in Turin (Northern Italy). The data showed a reduced interannual variability of the PSS in the summer-flowering species compared to the spring-flowering ones. Spring warming models with photoperiod limitation performed best for the great majority of the studied species, while chilling-class models were selected only for the early spring flowering species. For Ambrosia and Artemisia, spring warming models were also selected as the best models, indicating that temperature sums are positively related to flowering. However, the poor variance explained by the models suggests that further analyses have to be carried out in order to develop better models for predicting the PSS in these two species. Modeling the pollen season start on a very wide data set provided a new opportunity to highlight the limits of such models in elucidating the environmental factors driving the pollen season start when some factors, such as chilling or photoperiod, are always fulfilled, or when the explained variance is very poor.
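
    As a rough illustration of the spring-warming (thermal time) family of models evaluated here — a sketch with assumed parameter values, not the paper's calibrated models — the PSS is predicted as the first day on which the temperature sum above a base temperature reaches a critical forcing:

```python
import numpy as np

def predict_pss(daily_mean_temp, t0=60, t_base=5.0, f_crit=120.0):
    """Spring-warming (thermal time) model: the pollen season is predicted
    to start on the first day, counted from day-of-year t0, on which the
    accumulated temperature sum above t_base reaches the critical forcing
    f_crit. All three parameters are illustrative and would be fitted per
    species against the observed PSS series."""
    forcing = 0.0
    for doy in range(t0, len(daily_mean_temp)):
        forcing += max(0.0, daily_mean_temp[doy] - t_base)
        if forcing >= f_crit:
            return doy
    return None  # threshold never reached

# Example with synthetic daily mean temperatures for one year:
rng = np.random.default_rng(0)
temps = 10 + 12 * np.sin((np.arange(365) - 80) * 2 * np.pi / 365) + rng.normal(0, 2, 365)
print(predict_pss(temps))
```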

  11. Application of an innovative methodology to improve the starting-up of UASB reactors treating domestic sewage.

    Science.gov (United States)

    Rodríguez, J A; Peña, M R; Manzi, V

    2001-01-01

    This study shows the results obtained during the start-up evaluation of a UASB reactor treating domestic sewage, located in the municipality of Ginebra, Valle del Cauca region, Colombia. Its design flow is 7.5 l/s, with a maximum capacity of 10 l/s. The reactor was seeded with a deficient-quality inoculum which accounted for 20% of the total reactor volume. The start-up methodology comprised the sequential washing of the sludge (inoculum) by applying three different upflow velocities. This procedure resembles what other authors term the "selective pressure method". Once the sludge was washed, the reactor was started up with an initial hydraulic retention time (HRT) of 24.9 hours, which was steadily reduced to 6.7 hours in the final stage. Throughout the start-up phase, there was a positive evolution in the quantity, quality and spatial distribution of the sludge, and consequently in the organic matter removal mechanisms. For HRT above 14 hours the removal mechanisms were mainly physical, whilst for HRT below 9 hours they were mostly biological. Based on these considerations and on the water quality parameters measured, it may be concluded that the start-up of a UASB reactor for domestic sewage treatment seeded with a low-quality inoculum can be done with an HRT as low as 15 or 12 hours. In this way, it is possible to reduce the start-up period of these reactors to 4 to 6 weeks, provided that the start-up methodology is properly applied.

  12. Model based analysis of the time scales associated to pump start-ups

    Energy Technology Data Exchange (ETDEWEB)

    Dazin, Antoine, E-mail: antoine.dazin@lille.ensam.fr [Arts et métiers ParisTech/LML Laboratory UMR CNRS 8107, 8 bld Louis XIV, 59046 Lille cedex (France); Caignaert, Guy [Arts et métiers ParisTech/LML Laboratory UMR CNRS 8107, 8 bld Louis XIV, 59046 Lille cedex (France); Dauphin-Tanguy, Geneviève, E-mail: genevieve.dauphin-tanguy@ec-lille.fr [Univ Lille Nord de France, Ecole Centrale de Lille/CRISTAL UMR CNRS 9189, BP 48, 59651, Villeneuve d’Ascq cedex F 59000 (France)

    2015-11-15

    Highlights:
    • A dynamic model of a hydraulic system has been built.
    • Three periods in a pump start-up have been identified.
    • The time scales of each period have been estimated.
    • The parameters affecting the rapidity of a pump start-up have been explored.

    Abstract: The paper presents a non-dimensional analysis of the behaviour of a hydraulic system during fast pump start-ups. The system is composed of a radial flow pump and its suction and delivery pipes. It is modelled using the bond graph methodology, and the model predictions are validated by comparison with experimental results. An analysis of the time evolution of the terms acting on the total pump pressure is proposed. It allows the start-up to be decomposed into three consecutive periods, and the time scales associated with these periods are estimated. The effects of the parameters affecting start-up rapidity (angular acceleration, final rotation speed, pipe length and resistance) are then explored.
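
    The paper's bond-graph model is not reproduced here, but a minimal lumped-parameter sketch conveys the flavor of a fast start-up simulation: an imposed speed ramp (the angular-acceleration period) drives a quasi-steady pump pressure against fluid inertia and line resistance. All values below are illustrative assumptions:

```python
# Illustrative lumped-parameter start-up model (not the paper's bond-graph model).
rho, Lp, A = 1000.0, 5.0, 2e-3      # water density, pipe length, pipe area (SI)
K, b = 1.1, 2e5                      # pump curve: dp = K*w^2 - b*Q^2 [Pa]
R = 5e7                              # line resistance [Pa s^2/m^6]
w_final, t_ramp = 300.0, 0.5         # final speed [rad/s], ramp duration [s]

dt, Q = 1e-4, 0.0
for step in range(int(1.0 / dt)):
    t = step * dt
    w = w_final * min(t / t_ramp, 1.0)                   # imposed speed ramp
    dp_pump = K * w**2 - b * Q * abs(Q)                  # quasi-steady pump pressure
    dQdt = (dp_pump - R * Q * abs(Q)) * A / (rho * Lp)   # fluid inertia in the line
    Q += dQdt * dt
print(f"flow after 1 s: {Q*1000:.1f} l/s")
```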

  13. Methodology for the treatment of model uncertainty

    Science.gov (United States)

    Droguett, Enrique Lopez

    The development of a conceptual, unified framework and methodology for treating model and parameter uncertainties is the subject of this work. First, the philosophical grounds of notions such as reality, modeling, models, and their relations are discussed, and a characterization of the modeling process is presented. The concept of uncertainty is then investigated, addressing controversial topics such as the types and sources of uncertainty; it is argued that uncertainty is fundamentally a characterization of lack of knowledge, and that as such all uncertainties are of the same type. A discussion of the roles of model structure and model parameters follows, in which it is argued that the distinction between them is a matter of convenience and a function of the stage in the modeling process. From the foregoing discussion, a Bayesian framework for an integrated assessment of model and parameter uncertainties is developed. The methodology has as its central point the treatment of the model as a source of information regarding the unknown of interest. It allows for the assessment of the model characteristics affecting its performance, such as bias and precision, and of possible dependencies among multiple models. Furthermore, the proposed framework makes possible the use not only of information from models (e.g., point estimates, qualitative assessments), but also of evidence about the models themselves (performance data, confidence in the model, applicability of the model). The methodology is then applied in the context of fire risk models, where several examples with real data are studied. These examples demonstrate how the framework and the specific techniques developed in this study can address cases involving multiple models, the use of performance data to update the predictive capabilities of a model, and the case where a model is applied in a context other than the one for which it was designed.

  14. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  15. Bayesian methodology for reliability model acceptance

    International Nuclear Information System (INIS)

    Zhang Ruoxue; Mahadevan, Sankaran

    2003-01-01

    This paper develops a methodology to assess the reliability computation model validity using the concept of Bayesian hypothesis testing, by comparing the model prediction and experimental observation, when there is only one computational model available to evaluate system behavior. Time-independent and time-dependent problems are investigated, with consideration of both cases: with and without statistical uncertainty in the model. The case of time-independent failure probability prediction with no statistical uncertainty is a straightforward application of Bayesian hypothesis testing. However, for the life prediction (time-dependent reliability) problem, a new methodology is developed in this paper to make the same Bayesian hypothesis testing concept applicable. With the existence of statistical uncertainty in the model, in addition to the application of a predictor estimator of the Bayes factor, the uncertainty in the Bayes factor is explicitly quantified through treating it as a random variable and calculating the probability that it exceeds a specified value. The developed method provides a rational criterion to decision-makers for the acceptance or rejection of the computational model
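
    A toy numerical illustration of the underlying idea (hypothetical numbers, and a simple binomial test case rather than the paper's reliability formulations): the Bayes factor compares how well "the model's prediction is correct" explains observed failure data against a diffuse alternative:

```python
import numpy as np
from scipy import stats

# Hypothetical numbers: the model predicts a failure probability of 1e-3;
# a test of n = 10000 units observes x = 14 failures. H0: the model's
# prediction is correct; H1: the failure probability is unknown (uniform
# prior on [0, 0.01]). The Bayes factor B = P(data|H0) / P(data|H1).
n, x, p0 = 10_000, 14, 1e-3
like_h0 = stats.binom.pmf(x, n, p0)

# Marginal likelihood under H1: average the binomial likelihood over the prior.
p_grid = np.linspace(1e-6, 0.01, 2000)
like_h1 = np.trapz(stats.binom.pmf(x, n, p_grid), p_grid) / 0.01

B = like_h0 / like_h1
print(f"Bayes factor = {B:.2f}")  # B > 1 favours accepting the model
```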

  16. CFD methodology of a model quadrotor

    Science.gov (United States)

    Sunan, Burak

    2013-11-01

    This paper presents an analysis of the aerodynamic characteristics of a quadrotor for both steady and unsteady flows. For steady flow cases, the aerodynamic behaviour of any aerial vehicle can be defined readily in wind tunnels; unsteady flow conditions, however, make experimental aerodynamic characterization difficult. This article describes the determination of lift, drag and thrust forces on a model quadrotor using the CFD (Computational Fluid Dynamics) software ANSYS Fluent. A significant issue is to establish a CFD methodology that can be compared with experimental results. After sufficiently close agreement with benchmarking experiments is obtained, the CFD methodology can be applied to more complicated geometries. In this paper, the propeller performance database experiments from Ref. 1 are used to validate the CFD procedure. The results of the study reveal the dynamic characteristics of a quadrotor and demonstrate the feasibility of designing a quadrotor by CFD, which saves time and cost compared to experiments.

  17. An eigenexpansion technique for modelling plasma start-up

    International Nuclear Information System (INIS)

    Pillsbury, R.D.

    1989-01-01

    An algorithm has been developed and implemented in a computer program that allows the estimation of the PF coil voltages required to start up an axisymmetric plasma in a tokamak in the presence of eddy currents in toroidally continuous conducting structures. The algorithm makes use of an eigen-expansion technique to solve the lumped-parameter circuit loop voltage equations associated with the PF coils and passive (conducting) structures. An example of start-up for CIT (Compact Ignition Tokamak) is included.
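
    The essence of the eigen-expansion approach can be sketched for a toy lumped-parameter circuit system M dI/dt + R I = V: decompose the homogeneous dynamics into eigenmodes, each decaying with its own time constant, and superpose them on the steady-state solution. The matrices below are illustrative placeholders, not CIT data:

```python
import numpy as np

# Toy system: M is a mutual-inductance matrix (PF coils plus passive loops),
# R a resistance matrix; values are illustrative only.
M = np.array([[1.0, 0.2, 0.1],
              [0.2, 0.8, 0.1],
              [0.1, 0.1, 0.5]])      # H
R = np.diag([0.05, 0.04, 0.20])      # ohm

A = -np.linalg.solve(M, R)           # dI/dt = A I + inv(M) V
eigvals, eigvecs = np.linalg.eig(A)
print("mode time constants [s]:", -1.0 / eigvals.real)

# Response to a constant applied voltage V, with I(0) = 0:
# I(t) = I_ss + sum_k c_k v_k exp(lambda_k t)
V = np.array([10.0, 0.0, 0.0])
I_ss = np.linalg.solve(R, V)                 # steady-state currents
c = np.linalg.solve(eigvecs, -I_ss)          # mode amplitudes from I(0) = 0
t = 1.0
I_t = I_ss + eigvecs @ (c * np.exp(eigvals * t))
print("currents at t = 1 s:", I_t.real)
```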

  18. NLP model and stochastic multi-start optimization approach for heat exchanger networks

    International Nuclear Information System (INIS)

    Núñez-Serna, Rosa I.; Zamora, Juan M.

    2016-01-01

    Highlights:
    • An NLP model for the optimal design of heat exchanger networks is proposed.
    • The NLP model is developed from a stage-wise grid diagram representation.
    • A two-phase stochastic multi-start optimization methodology is utilized.
    • Improved network designs are obtained with different heat load distributions.
    • Structural changes and reductions in the number of heat exchangers are produced.

    Abstract: Heat exchanger network synthesis methodologies frequently identify good network structures, which nevertheless might be accompanied by suboptimal values of the design variables. The objective of this work is to develop a nonlinear programming (NLP) model and an optimization approach that aim at identifying the best values for intermediate temperatures, sub-stream flow rate fractions, heat loads and areas for a given heat exchanger network topology. The NLP model, which minimizes the total annual cost of the network, is constructed on the basis of a stage-wise grid diagram representation. To improve the chances of obtaining globally optimal designs, a two-phase stochastic multi-start optimization algorithm is utilized for the solution of the developed model. The effectiveness of the proposed optimization approach is illustrated with the optimization of two network designs proposed in the literature for two well-known benchmark problems. Results show that, starting from the addressed base network topologies, it is possible to achieve improved network designs, with redistributions of exchanger heat loads that lead to reductions in total annual costs. The results also show that the optimization of a given network design sometimes leads to structural simplifications and reductions in the total number of heat exchangers of the network, thereby exposing alternative viable network topologies not initially anticipated.
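
    The two-phase stochastic multi-start idea can be sketched generically (on a standard nonconvex test function, not the paper's heat exchanger network NLP):

```python
import numpy as np
from scipy.optimize import minimize

# Phase 1 launches coarse local NLP solves from random starting points;
# phase 2 polishes the best candidates to tight tolerance.
def cost(x):
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2  # Himmelblau

rng = np.random.default_rng(1)
bounds = [(-5, 5), (-5, 5)]

# Phase 1: coarse local searches from random starts.
candidates = []
for _ in range(30):
    x0 = rng.uniform(-5, 5, size=2)
    res = minimize(cost, x0, method="L-BFGS-B", bounds=bounds,
                   options={"maxiter": 50})
    candidates.append(res)

# Phase 2: refine the best few candidates.
best = sorted(candidates, key=lambda r: r.fun)[:3]
polished = [minimize(cost, r.x, method="L-BFGS-B", bounds=bounds) for r in best]
winner = min(polished, key=lambda r: r.fun)
print(winner.x, winner.fun)
```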

  19. Scientific modeling: some theoretical and methodological considerations

    Directory of Open Access Journals (Sweden)

    Carlos Tamayo-Roca

    2017-04-01

    At present, models are widely used as an auxiliary means of penetrating the essence of phenomena in all areas of human cognitive and transformative activity, covering fields as diverse as the human sciences. In education their use is becoming ever more common, as an essential tool for transforming school practice and enriching its theoretical instruments. The paper deals with the development of theoretical modeling as a scientific method for advancing the understanding of the process to be transformed, characterized by establishing relationships and links between the structural components that comprise it. In this regard, its objective is to share some theoretical and methodological considerations that favor the use of the modeling method in the scientific research activity of teachers.

  20. Modeling Cold Start in a Polymer-Electrolyte Fuel Cell

    OpenAIRE

    Balliet, Ryan

    2010-01-01

    Polymer-electrolyte fuel cells (PEFCs) are electrochemical devices that create electricity by consuming hydrogen and oxygen, forming water and heat as byproducts. PEFCs have been proposed for use in applications that may require start-up in environments with temperatures below 0 degrees C. Doing so requires that the cell heat up, and when its own waste heat is used to do so, the process is referred to here as "cold start." However, at low temperatures the cell's product water freezes, and i...

  1. A methodology for PSA model validation

    International Nuclear Information System (INIS)

    Unwin, S.D.

    1995-09-01

    This document reports Phase 2 of work undertaken by Science Applications International Corporation (SAIC) in support of the Atomic Energy Control Board's Probabilistic Safety Assessment (PSA) review. A methodology is presented for the systematic review and evaluation of a PSA model. These methods are intended to support consideration of the following question: To within the scope and depth of modeling resolution of a PSA study, is the resultant model a complete and accurate representation of the subject plant? This question was identified as a key PSA validation issue in SAIC's Phase 1 project. The validation methods are based on a model transformation process devised to enhance the transparency of the modeling assumptions. Through conversion to a 'success-oriented' framework, a closer correspondence to plant design and operational specifications is achieved. This can both enhance the scrutability of the model by plant personnel, and provide an alternative perspective on the model that may assist in the identification of deficiencies. The model transformation process is defined and applied to fault trees documented in the Darlington Probabilistic Safety Evaluation. A tentative real-time process is outlined for implementation and documentation of a PSA review based on the proposed methods. (author). 11 refs., 9 tabs.

  2. A methodology for acquiring qualitative knowledge for probabilistic graphical models

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders L.

    2004-01-01

    We present a practical and general methodology that simplifies the task of acquiring and formulating qualitative knowledge for constructing probabilistic graphical models (PGMs). The methodology efficiently captures and communicates expert knowledge, and has significantly eased the model development...

  3. Modelling and Predicting Backstroke Start Performance Using Non-Linear And Linear Models

    Directory of Open Access Journals (Sweden)

    de Jesus Karla

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables and to determine the accuracy of the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.

  4. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    Science.gov (United States)

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables and to determine the accuracy of the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.
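
    A minimal sketch of this kind of comparison (synthetic data standing in for the study's kinematic and kinetic variables; scikit-learn's MLP standing in for the paper's artificial neural networks):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

# Synthetic stand-in: start variables (features) vs. 5 m start time (target);
# the real study used ten swimmers and instrumented-block measurements.
rng = np.random.default_rng(7)
X = rng.normal(size=(80, 6))
y = 2.0 + 0.15 * X[:, 0] - 0.10 * X[:, 1] ** 2 + rng.normal(0, 0.02, 80)

X_tr, X_va, y_tr, y_va = X[:60], X[60:], y[:60], y[60:]

lin = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

for name, model in [("linear", lin), ("ANN", ann)]:
    mape = mean_absolute_percentage_error(y_va, model.predict(X_va))
    print(f"{name}: validation MAPE = {100 * mape:.2f}%")
```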

  5. A methodology for modeling regional terrorism risk.

    Science.gov (United States)

    Chatterjee, Samrat; Abkowitz, Mark D

    2011-07-01

    Over the past decade, terrorism risk has become a prominent consideration in protecting the well-being of individuals and organizations. More recently, there has been interest in not only quantifying terrorism risk, but also placing it in the context of an all-hazards environment in which consideration is given to accidents and natural hazards, as well as intentional acts. This article discusses the development of a regional terrorism risk assessment model designed for this purpose. The approach taken is to model terrorism risk as a dependent variable, expressed in expected annual monetary terms, as a function of attributes of population concentration and critical infrastructure. This allows for an assessment of regional terrorism risk in and of itself, as well as in relation to man-made accident and natural hazard risks, so that mitigation resources can be allocated in an effective manner. The adopted methodology incorporates elements of two terrorism risk modeling approaches (event-based models and risk indicators), producing results that can be utilized at various jurisdictional levels. The validity, strengths, and limitations of the model are discussed in the context of a case study application within the United States.
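
    The article's dependent-variable formulation can be sketched in a few lines; all probabilities and exposure values below are hypothetical placeholders, not the model's calibrated inputs:

```python
# Regional terrorism risk in expected annual monetary terms, driven by attack
# likelihood and by consequences tied to population concentration and critical
# infrastructure. All numbers are hypothetical.
regions = {
    #            P(attack/yr)  population exposure [$]  infrastructure exposure [$]
    "Region A": (0.002,        5.0e9,                   2.0e9),
    "Region B": (0.0005,       1.2e9,                   8.0e9),
}

for name, (p_attack, pop_loss, infra_loss) in regions.items():
    expected_annual_loss = p_attack * (pop_loss + infra_loss)
    print(f"{name}: expected annual loss = ${expected_annual_loss/1e6:.1f}M")
```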

  6. Methodology, models and algorithms in thermographic diagnostics

    CERN Document Server

    Živčák, Jozef; Madarász, Ladislav; Rudas, Imre J

    2013-01-01

    This book presents the methodology and techniques of thermographic applications, with a primary focus on medical thermography implemented to parametrize the diagnostics of the human body. The first part of the book describes the basics of infrared thermography, the possibilities of thermographic diagnostics and the physical nature of thermography. The second half covers tools of intelligent engineering applied to the solution of selected applications and projects. Thermographic diagnostics was applied to the problems of paraplegia, tetraplegia and carpal tunnel syndrome (CTS). The results of the research activities were created with the cooperation of four projects within the Ministry of Education, Science, Research and Sport of the Slovak Republic entitled Digital control of complex systems with two degrees of freedom, Progressive methods of education in the area of control and modeling of complex object oriented systems on aircraft turbocompressor engines, Center for research of control of te...

  7. Threat model framework and methodology for personal networks (PNs)

    DEFF Research Database (Denmark)

    Prasad, Neeli R.

    2007-01-01

    To be able to build a secure network, it is essential to model the threats to the network. A methodology for building a threat model has been proposed in the paper. Several existing threat models and methodologies will be compared to the proposed methodology. The aim of the proposed methodology is to give a structured, convenient approach for building threat models. A framework for the threat model is presented with a list of requirements for the methodology. The methodology will be applied to build a threat model for Personal Networks. Practical tools like UML sequence diagrams and attack trees have been used. Risk assessment methods are also discussed, and threat profiles and vulnerability profiles are presented.

  8. Model evaluation methodology applicable to environmental assessment models

    International Nuclear Information System (INIS)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes
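
    As a small illustration of two of the tasks named above — Latin hypercube sampling for sensitivity analysis, applied to a multiplicative chain model with independent lognormal inputs — here is a sketch with assumed parameter values:

```python
import numpy as np
from scipy.stats import qmc, norm

# Latin hypercube sweep over a three-parameter multiplicative chain model
# y = p1 * p2 * p3 with independent lognormal inputs (values illustrative).
sampler = qmc.LatinHypercube(d=3, seed=42)
u = sampler.random(n=1000)                     # stratified samples on (0,1)^3

# Map uniforms to lognormals via the normal quantile function.
mu = np.log([1.0, 0.5, 2.0])                   # medians 1.0, 0.5, 2.0
sigma = np.array([0.3, 0.5, 0.2])
params = np.exp(mu + sigma * norm.ppf(u))

y = params.prod(axis=1)
print("median prediction:", np.median(y))
print("95th percentile:  ", np.percentile(y, 95))
```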

  9. A methodology for spectral wave model evaluation

    Science.gov (United States)

    Siqueira, S. A.; Edwards, K. L.; Rogers, W. E.

    2017-12-01

    climate, omitting the energy in the frequency band between the two lower limits tested can lead to an incomplete characterization of model performance. This methodology was developed to aid in selecting a comparison frequency range that does not needlessly increase computational expense and does not exclude energy to the detriment of model performance analysis.

  10. Does Head Start differentially benefit children with risks targeted by the program's service model?

    Science.gov (United States)

    Miller, Elizabeth B; Farkas, George; Duncan, Greg J

    Data from the Head Start Impact Study (N = 3540) were used to test for differential benefits of Head Start after one program year and after kindergarten on pre-academic and behavior outcomes for children at risk in the domains targeted by the program's comprehensive services. Although random assignment to Head Start produced positive treatment main effects on children's pre-academic skills and behavior problems, residualized growth models showed that random assignment to Head Start did not differentially benefit the pre-academic skills of children with risk factors targeted by the Head Start service model. The models showed detrimental impacts of Head Start for maternal-reported behavior problems of high-risk children, but slightly more positive impacts for teacher-reported behavior. Policy implications for Head Start are discussed.

  11. Modeling, methodologies and tools for molecular and nano-scale communications

    CERN Document Server

    Nakano, Tadashi; Moore, Michael

    2017-01-01

    The book presents the state of the art in the emerging field of molecular and nanoscale communication. It gives special attention to fundamental models and to advanced methodologies and tools used in the field. It covers a wide range of applications, e.g. nanomedicine, nanorobot communication, bioremediation and environmental management. It addresses advanced graduate students, academics and professionals working at the forefront of their fields and at the interfaces between different areas of research, such as engineering, computer science, biology and nanotechnology.

  12. Getting Started and Working with Building Information Modeling

    Science.gov (United States)

    Smith, Dana K.

    2009-01-01

    This article will assume that one has heard of Building Information Modeling or BIM but has not developed a strategy as to how to get the most out of it. The National BIM Standard (NBIMS) has defined BIM as a digital representation of physical and functional characteristics of a facility. As such, it serves as a shared knowledge resource for…

  13. Methodology for Modeling and Analysis of Business Processes (MMABP)

    Directory of Open Access Journals (Sweden)

    Vaclav Repa

    2015-10-01

    This paper introduces a methodology for modeling business processes. The creation of the methodology is described in terms of the Design Science Method. First, the gap in contemporary business process modeling approaches is identified, and general modeling principles that can fill the gap are discussed. The way in which these principles have been implemented in the main features of the created methodology is then described. The most critical identified points of business process modeling are process states, process hierarchy and the granularity of process description. The methodology has been evaluated through use in a real project. Using examples from this project, the main features of the methodology are explained, together with the significant problems encountered during the project. Drawing on these problems and on the results of the evaluation, the needed future development of the methodology is outlined.

  14. Thermal ecological risk assessment - methodology for modeling

    International Nuclear Information System (INIS)

    Markandeya, S.G.

    2007-01-01

    Discharge of hot effluents into natural water bodies is a potential risk to aquatic life. The stipulations imposed by the MoEF, Government of India, for protecting the environment are in place. However, due to a lack of quality scientific information, these stipulations are generally conservative in nature and hence questionable. A Coordinated Research Project on Thermal Ecological Studies, successfully completed recently, came out with a suggestion of implementing a multi-factorially estimated mixing zone concept. In the present paper, a risk-based assessment methodology is proposed as an alternative approach. The methodology is presented only conceptually and briefly, and further refinement may be necessary. The methodology would make it possible to account in a suitable manner for variations in plant operational conditions, climatic conditions, and the geographical and hydraulic characteristics of the water body. (author)

  15. The Service-Learning methodology applied to Operations Management: From the Operations Plan to business start up.

    Directory of Open Access Journals (Sweden)

    Constantino García-Ramos

    2017-06-01

    After developing this teaching innovation activity, we can conclude that Service-Learning (SL) is a good methodology for improving the academic, personal and social development of students, suggesting that it is possible to combine their academic success with the social commitment of the University.

  16. Design of an Emulsion-based Personal Detergent through a Model-based Chemical Product Design Methodology

    DEFF Research Database (Denmark)

    Mattei, Michele; Hill, Michael; Kontogeorgis, Georgios

    2013-01-01

    An extended systematic methodology for the design of emulsion-based chemical products is presented. The methodology consists of a model-based framework involving seven sequential hierarchical steps: starting with the identification of the needs to be satisfied by the product and then adding one-b...to obtain one or more candidate formulations. A conceptual case study representing a personal detergent is presented to highlight the methodology.

  17. A Monte Carlo methodology for modelling ashfall hazards

    Science.gov (United States)

    Hurst, Tony; Smith, Warwick

    2004-12-01

    We have developed a methodology for quantifying the probability of particular thicknesses of tephra at any given site, using Monte Carlo methods. This is a part of the development of a probabilistic volcanic hazard model (PVHM) for New Zealand, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo procedure allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. This method can handle the effects of multiple volcanic sources, each source with its own characteristics. We accumulate the tephra thicknesses from all sources to estimate the combined ashfall hazard, expressed as the frequency with which any given depth of tephra is likely to be deposited at selected sites. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from sediment cores in Auckland give useful bounds for the likely total volumes erupted from Egmont Volcano (Mt. Taranaki), 280 km away, during the last 130,000 years.
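
    The Monte Carlo loop can be sketched as follows; the exponential-thinning thickness formula and all rates below are crude stand-ins for ASHFALL and the paper's source parameters:

```python
import numpy as np

# Repeat simulated eruptions, each time drawing eruptive volume and wind at
# random, and accumulate the exceedance frequency of tephra thickness at a site.
rng = np.random.default_rng(3)
n_events = 100_000
site_distance_km = 50.0

volume_km3 = 10 ** rng.normal(-1.0, 0.6, n_events)      # log-normal volumes
wind_toward_site = rng.random(n_events) < 0.3            # wind blows at site 30% of the time
thinning_km = 20 * volume_km3 ** (1 / 3)                 # larger eruptions thin more slowly

thickness_mm = np.where(
    wind_toward_site,
    1000.0 * volume_km3 * np.exp(-site_distance_km / thinning_km),
    0.0,
)

annual_rate = 1 / 500                                    # eruptions per year (assumed)
for depth in (1.0, 10.0, 100.0):
    p = (thickness_mm >= depth).mean()
    print(f">= {depth:5.1f} mm: mean return period {1/(p*annual_rate):,.0f} yr")
```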

  18. The design of control algorithm for automatic start-up model of HWRR

    International Nuclear Information System (INIS)

    Guo Wenqi

    1990-01-01

    The design of the control algorithm for the automatic start-up model of the HWRR (Heavy Water Research Reactor), the calculation of the μ value and the application of the digital compensator are described. Finally, the flow diagram of the automatic start-up and digital compensator program for the HWRR is given.

  19. Verification of Fault Tree Models with RBDGG Methodology

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2010-01-01

    Currently, fault tree analysis is widely used in the field of probabilistic safety assessment (PSA) of nuclear power plants (NPPs). To guarantee the correctness of fault tree models, which are usually constructed manually by analysts, review by other analysts is widely used for verification. Recently, an extension of the reliability block diagram was developed, named RBDGG (reliability block diagram with general gates). The advantage of the RBDGG methodology is that the structure of an RBDGG model is very similar to the actual structure of the analyzed system; therefore, modeling a system for system reliability and unavailability analysis becomes very intuitive and easy. The main idea behind the development of the RBDGG methodology is similar to that behind the RGGG (Reliability Graph with General Gates) methodology. The difference is that the RBDGG methodology focuses on block failures, while the RGGG methodology focuses on connection line failures; it is also known that an RGGG model can be converted into an RBDGG model and vice versa. In this paper, a new method for the verification of constructed fault tree models using the RBDGG methodology is proposed and demonstrated.

  20. Modeling Virtual Organization Architecture with the Virtual Organization Breeding Methodology

    Science.gov (United States)

    Paszkiewicz, Zbigniew; Picard, Willy

    While Enterprise Architecture Modeling (EAM) methodologies become more and more popular, an EAM methodology tailored to the needs of virtual organizations (VOs) is still to be developed. Among the most popular EAM methodologies, TOGAF has been chosen as the basis for a new EAM methodology taking into account the characteristics of VOs presented in this paper. In this new methodology, referred to as the Virtual Organization Breeding Methodology (VOBM), concepts developed within the ECOLEAD project, e.g. the concept of a Virtual Breeding Environment (VBE) or the VO creation schema, serve as fundamental elements. VOBM is a generic methodology that should be adapted to a given VBE. It defines the structure of VBE and VO architectures in a service-oriented environment, as well as an architecture development method for virtual organizations (ADM4VO). Finally, a preliminary set of tools and methods for VOBM is given in this paper.

  1. A Physics-Based Starting Model for Gas Turbine Engines, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposal is to demonstrate the feasibility of producing an integrated starting model for gas turbine engines using a new physics-based...

  2. Threat model framework and methodology for personal networks (PNs)

    DEFF Research Database (Denmark)

    Prasad, Neeli R.

    2007-01-01

    To be able to build a secure network, it is essential to model the threats to the network. A methodology for building a threat model has been proposed in the paper. Several existing threat models and methodologies will be compared to the proposed methodology. The aim of the proposed methodology is to give a structured, convenient approach for building threat models. A framework for the threat model is presented with a list of requirements for the methodology. The methodology will be applied to build a threat model for Personal Networks. Practical tools like UML sequence diagrams and attack trees have been used. Risk assessment methods are also discussed, and threat profiles and vulnerability profiles are presented.

  3. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    Energy Technology Data Exchange (ETDEWEB)

    Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  4. Cold start-up condition model for heat recovery steam generators

    International Nuclear Information System (INIS)

    Sindareh-Esfahani, Peyman; Habibi-Siyahposh, Ehsan; Saffar-Avval, Majid; Ghaffari, Ali; Bakhtiari-Nejad, Firooz

    2014-01-01

    A dynamic model of a Heat Recovery Steam Generator (HRSG) during cold start-up operation in a Combined Cycle Power Plant (CCPP) is introduced. In order to characterize the essential dynamic behavior of the HRSG during cold start-up, dynamic equations of all HRSG components are developed based on energy and mass balances. To describe the operation of the HRSG precisely, a method based on nonlinear estimated functions for thermodynamic properties is applied to estimate the model parameters. The model parameters are evaluated by a designed algorithm based on a Genetic Algorithm (GA). A wide set of experimental data is used to validate the HRSG model during cold start-up operation. The simulation results show the reliability and validity of the developed model for cold start-up operation. - Highlights: • A mathematical model for HRSG cold start-up based on energy and mass balances is presented. • A designed parameter identification algorithm based on GA is presented. • Experimental data are applied to model and validate the simulation results.
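
    A minimal sketch of GA-style parameter identification against start-up measurements (a synthetic first-order heat-up curve stands in for the HRSG model, and SciPy's differential evolution stands in for the paper's GA):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Fit unknown model parameters by minimizing the error between simulated and
# measured temperatures during cold start; model and data are placeholders.
t = np.linspace(0, 3600, 60)                        # s

def simulate(params, t):
    k, T_final = params                             # heat-up rate, final temperature
    return 20 + (T_final - 20) * (1 - np.exp(-k * t))

true = (1 / 900, 300.0)
measured = simulate(true, t) + np.random.default_rng(0).normal(0, 2, t.size)

def sse(params):
    return np.sum((simulate(params, t) - measured) ** 2)

result = differential_evolution(sse, bounds=[(1e-4, 1e-2), (100, 400)], seed=1)
print("identified parameters:", result.x)
```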

  5. Modeling Methodologies for Representing Urban Cultural Geographies in Stability Operations

    National Research Council Canada - National Science Library

    Ferris, Todd P

    2008-01-01

    ... 2.0.0, in an effort to provide modeling methodologies for a single simulation tool capable of exploring the complex world of urban cultural geographies undergoing Stability Operations in an irregular warfare (IW) environment...

  6. Efficient Modelling Methodology for Reconfigurable Underwater Robots

    DEFF Research Database (Denmark)

    Nielsen, Mikkel Cornelius; Blanke, Mogens; Schjølberg, Ingrid

    2016-01-01

    This paper considers the challenge of applying reconfigurable robots in an underwater environment. The main result presented is the development of a model for a system comprised of N, possibly heterogeneous, robots dynamically connected to each other and moving with 6 Degrees of Freedom (DOF). The paper presents an application of the Udwadia-Kalaba Equation for modelling reconfigurable underwater robots. The constraints developed to enforce the rigid connection between robots in the system are derived through restrictions on relative distances and orientations. To avoid singularities in the orientation and thereby allow the robots to undertake any relative configuration, the attitude is represented in Euler parameters.

  7. Model-driven software migration a methodology

    CERN Document Server

    Wagner, Christian

    2014-01-01

    Today, reliable software systems are the basis of any business or company. The continuous further development of those systems is the central component of software evolution. It requires a huge amount of time and manpower, as well as financial resources. The challenges are the size, age and heterogeneity of those software systems. Christian Wagner addresses software evolution: the inherent problems and uncertainties in the process. He presents a model-driven method which leads to a synchronization between source code and design. As a result, the model layer will be the central part in further e

  8. Methodology to estimate the threshold in-cylinder temperature for self-ignition of fuel during cold start of Diesel engines

    International Nuclear Information System (INIS)

    Broatch, A.; Ruiz, S.; Margot, X.; Gil, A.

    2010-01-01

    Cold startability of automotive direct injection (DI) Diesel engines is frequently one of their negative features compared to their closest competitor, the gasoline engine. This situation worsens with current design trends (engine downsizing) and the emerging new Diesel combustion concepts, such as HCCI, PCCI, etc., which require low compression ratio engines. To mitigate this difficulty, pre-heating systems (glow plugs, air heating, etc.) are frequently used and their technologies have been continuously developed. For the optimum design of these systems, the determination of the threshold temperature that the gas should reach in the cylinder in order to provoke the self-ignition of the fuel injected during cold starting is crucial. In this paper, a novel methodology for estimating this threshold temperature is presented. In this methodology, experimental and computational procedures are adequately combined to achieve a good compromise between accuracy and effort. The measurements have been used as input data and boundary conditions in 3D and 0D calculations in order to obtain the thermodynamic conditions of the gas in the cylinder during cold starting. The results obtained from the study of two engine configurations (low and high compression ratio) indicate that the threshold in-cylinder temperature is a single temperature of about 415 °C.
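
    For orientation, a back-of-the-envelope 0D estimate shows why compression ratio matters here: with polytropic compression, T_tdc = T_ivc · CR^(n-1), low-CR engines fall short of the roughly 415 °C threshold reported in the paper (the polytropic exponent and initial temperatures below are assumptions):

```python
# Peak gas temperature from polytropic compression during cranking, compared
# against the ~415 deg C threshold reported in the study. n and T_ivc assumed.
def t_tdc_celsius(t_ivc_c, compression_ratio, n=1.32):
    t_ivc_k = t_ivc_c + 273.15
    return t_ivc_k * compression_ratio ** (n - 1) - 273.15

for cr in (14.0, 17.0):                 # low vs. high compression ratio
    for t_amb in (-20.0, 0.0, 20.0):    # cold-start ambient temperatures
        print(f"CR={cr:4.1f}, T_ivc={t_amb:6.1f} C -> "
              f"T_tdc={t_tdc_celsius(t_amb, cr):6.1f} C")
```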

  9. Combining prior knowledge with data driven modeling of a batch distillation column including start-up

    NARCIS (Netherlands)

    van Lith, PF; Betlem, BHL; Roffel, B

    2003-01-01

    This paper presents the development of a simple model which describes the product quality and production over time of an experimental batch distillation column, including start-up. The model structure is based on a simple physical framework, which is augmented with fuzzy logic. This provides a way...

  10. An Evaluation of the Research Evidence on the Early Start Denver Model

    Science.gov (United States)

    Baril, Erika M.; Humphreys, Betsy P.

    2017-01-01

    The Early Start Denver Model (ESDM) has been gaining popularity as a comprehensive treatment model for children ages 12 to 60 months with autism spectrum disorders (ASD). This article evaluates the research on the ESDM through an analysis of study design and purpose; child participants; setting, intervention agents, and context; density and…

  11. Established Companies meet Start-ups : Case: A service to research developing business models in start-up hot spots around the globe

    OpenAIRE

    Eisenblätter, Simon

    2016-01-01

    This thesis presents a business model with the aim to improve conversation between traditional companies and newly established start-ups. The assumption that such conversation can be facilitated by scouting for innovation in start-up hot spots around the world, will be discussed throughout the paper. The purpose of this work is to lay out a plan which can be presented to a first range of possible customers, with the goal of negotiating a trial cooperation. Before introducing the busi...

  12. Chapter three: methodology of exposure modeling

    CSIR Research Space (South Africa)

    Moschandreas, DJ

    2002-12-01

    ... not be the most important pathway of exposure for all pollutants, it is considered the one of major concern for exposure to PM. Related concepts, such as dose, will not be addressed in this chapter. The National Academy of Sciences suggests the following model... over time. Other exposure expressions are used to estimate exposures to pollutants in the ingestion and dermal absorption pathways. Major variables of concern in the estimation of exposure, Eq. (2), are the concentration of PM and its constituents...

  13. A Methodology for Phased Array Radar Threshold Modeling Using the Advanced Propagation Model (APM)

    Science.gov (United States)

    2017-10-01

    This report summarizes the methodology developed to improve radar threshold modeling using the Advanced Propagation Model (APM)...

  14. Implementation of the Early Start Denver Model in an Italian Community

    Science.gov (United States)

    Colombi, Costanza; Narzisi, Antonio; Ruta, Liliana; Cigala, Virginia; Gagliano, Antonella; Pioggia, Giovanni; Siracusano, Rosamaria; Rogers, Sally J.; Muratori, Filippo

    2018-01-01

    Identifying effective, community-based specialized interventions for young children with autism spectrum disorder is an international clinical and research priority. We evaluated the effectiveness of the Early Start Denver Model intervention in a group of young children with autism spectrum disorder living in an Italian community compared to a…

  15. Suitability of Modern Software Development Methodologies for Model Driven Development

    Directory of Open Access Journals (Sweden)

    Ruben Picek

    2009-12-01

    As an answer to today’s growing challenges in the software industry, a wide spectrum of new approaches to software development has emerged. One prominent direction is the currently most promising software development paradigm called Model Driven Development (MDD). Despite a lot of skepticism and problems, the MDD paradigm is being used and improved to accomplish many of its inherent potential benefits. In a methodological approach to software development it is necessary to use some kind of development process. Modern methodologies can be classified into two main categories: formal or heavyweight, and agile or lightweight. But when it comes to MDD and a development process for MDD, currently known methodologies are very poor or, better said, they don't have any explanation of the MDD process. As a result of this research, in this paper the author examines the possibilities of using existing modern software methodologies in the context of the MDD paradigm.

  16. Modeling the starting performance of high power solid rotor salient pole synchronous motors

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, R.; Sadowski, N. [GRUCAD/Federal University of Santa Catarina, Florianopolis, SC 88040-970 (Brazil); Grander, L.O. [ELETROSUL Power Stations S.A., Florianopolis, SC 88040-901 (Brazil); Ruencos, F.; Ogawa, C.; Fo, F.J. Doubrawa [WEG Energy, Jaragua do Sul, SC 89256-900 (Brazil)

    2009-12-15

    A computer model, including analytical and FEM formulations, was developed to calculate the starting performance of synchronous motors with solid rotor salient poles. Using quasi-steady-state equations, the average and the envelope of the oscillating electromagnetic torque, as well as the stator rms current, are calculated. With the stator current, the rotor pole losses are evaluated by FEM. The complete simulation process is performed by self-contained software composed of several computational modules properly tied together to simplify the work of a design engineer. The calculated starting performance was compared to experimental results, showing satisfactory consistency. (author)

  17. Validating agent oriented methodology (AOM) for netlogo modelling and simulation

    Science.gov (United States)

    WaiShiang, Cheah; Nissom, Shane; YeeWai, Sim; Sharbini, Hamizan

    2017-10-01

    AOM (Agent Oriented Modeling) is a comprehensive and unified agent methodology for agent oriented software development. The AOM methodology was proposed to aid developers with the introduction of techniques, terminology, notation and guidelines during agent systems development. Although the AOM methodology is claimed to be capable of developing complex real-world systems, its potential is yet to be realized and recognized by the mainstream software community, and the adoption of AOM is still in its infancy. Among the reasons is that there are not many case studies or success stories for AOM. This paper presents two case studies on the adoption of AOM for individual-based modelling and simulation. It demonstrates how AOM is useful for an epidemiology study and an ecological study. Hence, it further validates AOM in a qualitative manner.

  18. Inverting reflections using full-waveform inversion with inaccurate starting models

    KAUST Repository

    AlTheyab, Abdullah

    2015-08-19

    We present a method for inverting seismic reflections using full-waveform inversion (FWI) with inaccurate starting models. For a layered medium, near-offset reflections (with zero angle of incidence) are unlikely to be cycle-skipped regardless of the low-wavenumber velocity error in the initial models. Therefore, we use them as a starting point for FWI, and the subsurface velocity model is then updated during the FWI iterations using reflection wavepaths from varying offsets that are not cycle-skipped. To enhance low-wavenumber updates and accelerate the convergence, we take several passes through the non-linear Gauss-Seidel iterations, where we invert traces from a narrow range of near offsets and finally end at the far offsets. Every pass is followed by applying smoothing to the cumulative slowness update. The smoothing is strong at the early stages and relaxed at later iterations to allow for a gradual reconstruction of the subsurface model in a multiscale manner. Applications to synthetic and field data, starting from inaccurate models, show significant low-wavenumber updates and flattening of common-image gathers after many iterations.
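
    The offset-continuation and smoothing schedule described above can be summarized in a short skeleton; the FWI gradient is replaced by a placeholder, so only the pass/offset/smoothing bookkeeping is illustrated (band limits, step size and smoothing widths are invented).

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        slowness = np.full((50, 100), 0.5)                    # initial (inaccurate) model, s/km
        offset_bands = [(0.0, 0.5), (0.5, 1.5), (1.5, 4.0)]   # near -> far offsets, km
        sigmas = [8.0, 4.0, 1.0]                              # strong smoothing early, relaxed later

        for sigma, band in zip(sigmas, offset_bands):         # one Gauss-Seidel-style pass per band
            update = np.zeros_like(slowness)
            for _ in range(10):                               # inner FWI iterations for this band
                grad = rng.standard_normal(slowness.shape)    # placeholder for the true FWI gradient
                update -= 1e-4 * grad
            slowness += gaussian_filter(update, sigma)        # smooth the cumulative slowness update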

  19. Methodology of modeling fiber reinforcement in concrete elements

    NARCIS (Netherlands)

    Stroeven, P.

    2010-01-01

    This paper’s focus is on the modeling methodology of (steel) fiber reinforcement in concrete. The orthogonal values of fiber efficiency are presented. Bulk as well as boundary situations are covered. Fiber structure is assumed, due to external compaction by vibration, to display a partially linear...

  20. A stochastic hybrid model for pricing forward-start variance swaps

    Science.gov (United States)

    Roslan, Teh Raihana Nazirah

    2017-11-01

    Recently, market players have been exposed to an astounding increase in the trading volume of variance swaps. In this paper, the forward-start nature of a variance swap is inspected, where hybridizations of equity and interest rate models are used to evaluate the price of discretely-sampled forward-start variance swaps. The Heston stochastic volatility model is extended to incorporate the dynamics of the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. This is essential since previous studies on variance swaps mainly focused on instantaneous-start variance swaps without considering interest rate effects. This hybrid model produces an efficient semi-closed form pricing formula through the development of forward characteristic functions. The performance of this formula is investigated via simulations to demonstrate how the formula performs for different sampling times and against the real market scenario. Comparison with the Monte Carlo simulation, which was set as our main reference point, reveals that our pricing formula achieves almost the same precision in a shorter execution time.
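
    A brute-force Monte Carlo benchmark of the kind the authors compare against can be sketched as follows: Euler simulation of Heston variance with CIR short rates, accumulating squared log-returns only inside the forward-start window [T1, T2]. All parameter values are assumptions for illustration; the paper's contribution is the semi-closed form via forward characteristic functions, not this simulation.

        import numpy as np

        rng = np.random.default_rng(42)
        n_paths, dt = 20_000, 1.0 / 252.0
        T1, T2 = 0.5, 1.0                                  # forward start and maturity, years
        kappa, theta, xi, rho = 2.0, 0.04, 0.3, -0.7       # Heston parameters (assumed)
        a, b, sig_r = 1.2, 0.03, 0.05                      # CIR rate parameters (assumed)

        n_steps = int(T2 / dt)
        S = np.full(n_paths, 100.0)
        v = np.full(n_paths, 0.04)
        r = np.full(n_paths, 0.03)
        disc = np.zeros(n_paths)                           # integrated short rate
        sq_log_ret = np.zeros(n_paths)                     # squared log-returns in [T1, T2]

        for step in range(n_steps):
            z1, z2, z3 = rng.standard_normal((3, n_paths))
            zv = rho * z1 + np.sqrt(1.0 - rho**2) * z2     # correlate asset and variance shocks
            vp, rp = np.maximum(v, 0.0), np.maximum(r, 0.0)  # full-truncation Euler
            S_new = S * np.exp((rp - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
            v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * zv
            r += a * (b - rp) * dt + sig_r * np.sqrt(rp * dt) * z3
            disc += rp * dt
            if (step + 1) * dt > T1:                       # inside the sampling window
                sq_log_ret += np.log(S_new / S) ** 2
            S = S_new

        n_obs = int(round((T2 - T1) / dt))
        realized_var = sq_log_ret * (252.0 / n_obs)        # annualized realized variance
        K_var = np.mean(np.exp(-disc) * realized_var) / np.mean(np.exp(-disc))
        print(f"MC fair variance strike ~ {K_var:.4f} (vol ~ {np.sqrt(K_var):.2%})")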

  1. Methodology of modeling and measuring computer architectures for plasma simulations

    Science.gov (United States)

    Wang, L. P. T.

    1977-01-01

    A brief introduction is given to plasma simulation using computers and to the difficulties encountered on currently available computers. Through the use of an analyzing and measuring methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements, the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  2. Effective World Modeling: Multisensor Data Fusion Methodology for Automated Driving.

    Science.gov (United States)

    Elfring, Jos; Appeldoorn, Rein; van den Dries, Sjoerd; Kwakkernaat, Maurice

    2016-10-11

    The number of perception sensors on automated vehicles increases due to the increasing number of advanced driver assistance system functions and their increasing complexity. Furthermore, fail-safe systems require redundancy, thereby increasing the number of sensors even further. A one-size-fits-all multisensor data fusion architecture is not realistic due to the enormous diversity in vehicles, sensors and applications. As an alternative, this work presents a methodology that can be used to effectively come up with an implementation to build a consistent model of a vehicle's surroundings. The methodology is accompanied by a software architecture. This combination minimizes the effort required to update the multisensor data fusion system whenever sensors or applications are added or replaced. A series of real-world experiments involving different sensors and algorithms demonstrates the methodology and the software architecture.

  3. Effective World Modeling: Multisensor Data Fusion Methodology for Automated Driving

    Directory of Open Access Journals (Sweden)

    Jos Elfring

    2016-10-01

    The number of perception sensors on automated vehicles increases due to the increasing number of advanced driver assistance system functions and their increasing complexity. Furthermore, fail-safe systems require redundancy, thereby increasing the number of sensors even further. A one-size-fits-all multisensor data fusion architecture is not realistic due to the enormous diversity in vehicles, sensors and applications. As an alternative, this work presents a methodology that can be used to effectively come up with an implementation to build a consistent model of a vehicle’s surroundings. The methodology is accompanied by a software architecture. This combination minimizes the effort required to update the multisensor data fusion system whenever sensors or applications are added or replaced. A series of real-world experiments involving different sensors and algorithms demonstrates the methodology and the software architecture.

  4. On the Mathematical Modeling of Line-Start Permanent Magnet Synchronous Motors under Static Eccentricity

    Directory of Open Access Journals (Sweden)

    Ibrahem Hussein

    2018-01-01

    Line start permanent magnet synchronous motors experience different types of failures, including static eccentricity. The first step in detecting such failures is the mathematical modeling of the motor under healthy and failed conditions. In this paper, an attempt to develop an accurate mathematical model for this motor under static eccentricity is presented. The model is based on the modified winding function method and the coupled magnetic circuits approach. The model parameters are calculated directly from the motor winding layout and its geometry. Static eccentricity effects are considered in the calculation of the motor inductances. The performance of the line start permanent magnet synchronous motor using the developed mathematical model is investigated with MATLAB/SIMULINK® software (2013b, MathWorks, Natick, MA, USA) under healthy and static eccentricity conditions for different loading values. A finite element method analysis is conducted to verify the mathematical model results, using the commercial JMAG® software (16.0.02n, JSOL Corporation, Tokyo, Japan). The results show close agreement between JMAG® and the developed mathematical model simulation results.

  5. Does Head Start differentially benefit children with risks targeted by the program’s service model?☆

    Science.gov (United States)

    Miller, Elizabeth B.; Farkas, George; Duncan, Greg J.

    2015-01-01

    Data from the Head Start Impact Study (N = 3540) were used to test for differential benefits of Head Start after one program year and after kindergarten on pre-academic and behavior outcomes for children at risk in the domains targeted by the program’s comprehensive services. Although random assignment to Head Start produced positive treatment main effects on children’s pre-academic skills and behavior problems, residualized growth models showed that random assignment to Head Start did not differentially benefit the pre-academic skills of children with risk factors targeted by the Head Start service model. The models showed detrimental impacts of Head Start for maternal-reported behavior problems of high-risk children, but slightly more positive impacts for teacher-reported behavior. Policy implications for Head Start are discussed. PMID:26379369

  6. Organizational information assets classification model and security architecture methodology

    Directory of Open Access Journals (Sweden)

    Mostafa Tamtaji

    2015-12-01

    Today, organizations are exposed to a huge diversity of information and information assets produced in different systems, such as KMS, financial and accounting systems, office and industrial automation systems, and so on, and protection of this information is necessary. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released. The several benefits of this model mean that organizations have a strong tendency toward implementing cloud computing. Maintaining and managing information security is the main challenge in developing and accepting this model. In this paper, first, according to the "design science research methodology" and compatible with the "design process in information systems research", a complete categorization of organizational assets, including 355 different types of information assets in 7 groups and 3 levels, is presented so that managers are able to plan corresponding security controls according to the importance of each group. Then, to direct the organization in architecting its information security in a cloud computing environment, an appropriate methodology is presented. The presented cloud computing security architecture, the resulting proposed methodology, and the presented classification model are discussed and verified according to the Delphi method and expert comments.

  7. Electro-thermal modelling of polymer lithium batteries for starting period and pulse power

    Energy Technology Data Exchange (ETDEWEB)

    Baudry, P. [Electricite de France DER, Site des Renardieres, Moret-sur-Loing (France); Neri, M. [Electricite de France DER, Site des Renardieres, Moret-sur-Loing (France); Gueguen, M. [Bollore Technologies, Odet, 29 Quimper (France); Lonchampt, G. [CEA/CEREM, CENG-85X, 38 Grenoble (France)

    1995-04-01

    Since power capabilities of solid polymer lithium batteries can only be delivered above 60 °C, the thermal management in electric-vehicle applications has to be carefully considered. Electro-thermal modelling of a thermally insulated 200 kg battery was performed, and electrochemical data were obtained from laboratory cell impedance measurements at 20 and 80 °C. Starting at 20 °C as initial working temperature, the battery reaches 40 °C after 150 s of discharge in a 0.5 Ω resistance. At 40 °C, the useful peak power is 20 kW. The energy expense for heating the battery from 20 to 40 °C is 1.4 kWh, corresponding to 6% of the energy available in the battery. After a stand-by period of 24 h, the temperature decreases from 80 to 50 °C, allowing efficient starting conditions. (orig.)
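
    The reported heating energy is easy to sanity-check with m*c*dT; the specific heat below is an assumed effective pack value, not taken from the record.

        m_kg = 200.0                 # battery mass from the abstract
        c_J_per_kgK = 1000.0         # assumed effective specific heat of the pack
        dT_K = 40.0 - 20.0
        E_kWh = m_kg * c_J_per_kgK * dT_K / 3.6e6
        print(f"m*c*dT ~ {E_kWh:.1f} kWh")   # ~1.1 kWh, same order as the reported 1.4 kWh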

  8. Modeling the Financial Distress of Microenterprise StartUps Using Support Vector Machines: A Case Study

    Directory of Open Access Journals (Sweden)

    Antonio Blanco-Oliver

    2014-10-01

    Despite the leading role that micro-entrepreneurship plays in economic development, and the high failure rate of microenterprise start-ups in their early years, very few studies have designed financial distress models to detect the financial problems of micro-entrepreneurs. Moreover, due to a lack of research, nothing is known about whether non-financial information and nonparametric statistical techniques improve the predictive capacity of these models. Therefore, this paper provides an innovative financial distress model specifically designed for microenterprise start-ups via support vector machines (SVMs) that employs financial, non-financial, and macroeconomic variables. Based on a sample of almost 5,500 micro-entrepreneurs from a Peruvian Microfinance Institution (MFI), our findings show that the introduction of non-financial information related to the zone in which the entrepreneurs live and situate their business, the duration of the MFI-entrepreneur relationship, the number of loans granted by the MFI in the last year, the loan destination, and the opinion of experts on the probability that microenterprise start-ups may experience financial problems, significantly increases the accuracy performance of our financial distress model. Furthermore, the results reveal that the models that use SVMs outperform those which employ traditional logistic regression (LR) analysis.
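
    The modeling comparison can be reproduced in spirit with scikit-learn on synthetic data; the Peruvian MFI dataset is not public, so the features, class balance and sizes below are made up, and only the SVM-vs-logistic-regression setup mirrors the paper.

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=5500, n_features=12, weights=[0.8],
                                   random_state=0)        # imbalanced, like default data
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                          ("SVM (RBF)", SVC(kernel="rbf", probability=True))]:
            model = make_pipeline(StandardScaler(), clf).fit(X_tr, y_tr)
            auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
            print(f"{name}: test AUC = {auc:.3f}")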

  9. Agent-based Modeling Methodology for Analyzing Weapons Systems

    Science.gov (United States)

    2015-03-26

    ...objective is to attack the boost phase of ballistic missiles using the Airborne Weapons Layer concept (AWL) (Corbett, 2013) and (Rood, Chilton, Campbell... and analysis techniques used in this research. Chapter 4 provides analysis of the simulation model to illustrate the methodology in Chapter 3 and to... techniques, and procedures. The purpose of our research is to study the use of a new missile system within an air combat environment. Therefore, the...

  10. Teaching methodology for modeling reference evapotranspiration with artificial neural networks

    OpenAIRE

    Martí, Pau; Pulido Calvo, Inmaculada; Gutiérrez Estrada, Juan Carlos

    2015-01-01

    Artificial neural networks are a robust alternative to conventional models for estimating different targets in irrigation engineering, among others reference evapotranspiration, a key variable for estimating crop water requirements. This paper presents a didactic methodology for introducing students to the application of artificial neural networks for reference evapotranspiration estimation using MATLAB. Apart from learning a specific application of this software wi...

  11. Mixed-mode modelling mixing methodologies for organisational intervention

    CERN Document Server

    Clarke, Steve; Lehaney, Brian

    2001-01-01

    The 1980s and 1990s have seen a growing interest in research and practice in the use of methodologies within problem contexts characterised by a primary focus on technology, human issues, or power. During the last five to ten years, this has given rise to challenges regarding the ability of a single methodology to address all such contexts, and the consequent development of approaches which aim to mix methodologies within a single problem situation. This has been particularly so where the situation has called for a mix of technological (the so-called 'hard') and human-centred (so-called 'soft') methods. The approach developed has been termed mixed-mode modelling. The area of mixed-mode modelling is relatively new, with the phrase being coined approximately four years ago by Brian Lehaney in a keynote paper published at the 1996 Annual Conference of the UK Operational Research Society. Mixed-mode modelling, as suggested above, is a new way of considering problem situations faced by organisations. Traditional...

  12. Predictive Dynamic Simulation of Seated Start-Up Cycling Using Olympic Cyclist and Bicycle Models

    Directory of Open Access Journals (Sweden)

    Conor Jansen

    2018-02-01

    Full Text Available Predictive dynamic simulation is a useful tool for analyzing human movement and optimizing performance. Here it is applied to Olympic-level track cycling. A seven degree-of-freedom, two-legged cyclist and bicycle model was developed using MapleSim. GPOPS-II, a direct collocation optimal control software, was used to solve the optimal control problem for the predictive simulation. The model was validated against ergometer pedaling performed by seven Olympic-level track cyclists from the Canadian team. The simulations produce joint angles and cadence/torque/power similar to experimental results. The results indicate optimal control can be used for predictive simulation with a combined cyclist and bicycle model. Future work needed to more accurately model an Olympic cyclist and a standing start is discussed.

  13. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    Directory of Open Access Journals (Sweden)

    Svetlana Postnova

    Shift work has become an integral part of our life, with almost 20% of the population in developed countries being involved in different shift schedules. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep, which leads to increased sleepiness, often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by proper scheduling of light exposure and optimized shift timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.

  14. A generalized methodology to characterize composite materials for pyrolysis models

    Science.gov (United States)

    McKinnon, Mark B.

    The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model that provides a mathematical representation of the rate of gaseous fuel production from condensed phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. Coupled with the increase in the number of parameters required to accurately represent the pyrolysis of materials is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. This methodology has been applied to four common composites in this work that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to...
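
    The kinetics step described above reduces, in its simplest form, to fitting Arrhenius parameters to a constant-heating-rate conversion curve. The toy below recovers (A, E) for a single first-order reaction from synthetic TGA data; real composites need multi-reaction schemes, and all values are invented.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import curve_fit

        R, beta = 8.314, 10.0 / 60.0            # gas constant; 10 K/min heating rate in K/s
        T = np.linspace(400.0, 800.0, 200)      # temperature grid, K

        def alpha_curve(T, logA, E):
            """Conversion alpha(T) for first-order kinetics at heating rate beta."""
            def rhs(t, a):
                Tt = T[0] + beta * t
                return 10.0**logA * np.exp(-E / (R * Tt)) * (1.0 - a)
            t_eval = (T - T[0]) / beta
            sol = solve_ivp(rhs, (0.0, t_eval[-1]), [0.0], t_eval=t_eval, rtol=1e-8)
            return sol.y[0]

        rng = np.random.default_rng(1)
        data = alpha_curve(T, 8.0, 150e3) + 0.005 * rng.standard_normal(T.size)  # "measured"
        (logA, E), _ = curve_fit(alpha_curve, T, data, p0=(7.0, 120e3))
        print(f"fitted kinetics: A = 10^{logA:.2f} 1/s, E = {E / 1e3:.0f} kJ/mol")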

  15. Generalized equilibrium modeling: the methodology of the SRI-Gulf energy model. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Gazalet, E.G.

    1977-05-01

    The report provides documentation of the generalized equilibrium modeling methodology underlying the SRI-Gulf Energy Model and focuses entirely on the philosophical, mathematical, and computational aspects of the methodology. The model is a highly detailed regional and dynamic model of the supply and demand for energy in the US. The introduction emphasizes the need to focus modeling efforts on decisions and the coordinated decomposition of complex decision problems using iterative methods. The conceptual framework is followed by a description of the structure of the current SRI-Gulf model and a detailed development of the process relations that comprise the model. The network iteration algorithm used to compute a solution to the model is described and the overall methodology is compared with other modeling methodologies. 26 references.

  16. Methodology and Results of Mathematical Modelling of Complex Technological Processes

    Science.gov (United States)

    Mokrova, Nataliya V.

    2018-03-01

    The methodology of system analysis allows us to derive a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature-decrease time was confirmed for producing the maximum amount of the target product. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal hardening mode. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.

  17. A business process model as a starting point for tight cooperation among organizations

    Directory of Open Access Journals (Sweden)

    O. Mysliveček

    2006-01-01

    Outsourcing and other kinds of tight cooperation among organizations are more and more necessary for success on all markets (markets of high-technology products are particularly influenced). Thus it is important for companies to be able to effectively set up all kinds of cooperation. A business process model (BPM) is a suitable starting point for this future cooperation. In this paper the process of setting up such cooperation is outlined, as well as why it is important for business success.

  18. A methodology for overall consequence modeling in chemical industry

    International Nuclear Information System (INIS)

    Arunraj, N.S.; Maiti, J.

    2009-01-01

    Risk assessment in the chemical process industry is a very important issue for safeguarding humans and the ecosystem from damage. Consequence assessment is an integral part of risk assessment. However, the commonly used consequence estimation methods involve either time-consuming complex mathematical models or simple assimilation of losses without considering all the consequence factors. This leads to a deterioration in the quality of the estimated risk value. Consequence modeling therefore has to be performed in detail, considering all major losses, within reasonable time, to improve the decision value of the risk estimate. The losses can be broadly categorized into production loss, asset loss, human health and safety loss, and environmental loss. In this paper, a conceptual framework is developed to assess the overall consequence considering all the important components of major losses. Secondly, a methodology is developed for the calculation of all the major losses, which are normalized to yield the overall consequence. Finally, as an illustration, the proposed methodology is applied to a case study plant involving benzene extraction. The case study result using the proposed consequence assessment scheme is compared with that from existing methodologies.

  19. METHODOLOGICAL APPROACHES FOR MODELING THE RURAL SETTLEMENT DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Gorbenkova Elena Vladimirovna

    2017-10-01

    Subject: the paper describes the research results on validation of a rural settlement development model. The basic methods and approaches for solving the problem of assessing the efficiency of urban and rural settlement development are considered. Research objectives: determination of methodological approaches to modeling and creating a model for the development of rural settlements. Materials and methods: domestic and foreign experience in modeling the territorial development of urban and rural settlements and settlement structures was generalized. The motivation for using the Pentagon-model for solving similar problems was demonstrated. Based on a systematic analysis of existing development models of urban and rural settlements, as well as the authors' method for assessing the level of agro-town development, the systems/factors that are necessary for the sustainable development of a rural settlement are identified. Results: we created a rural development model which consists of five major systems that include critical factors essential for achieving the sustainable development of a settlement system: the ecological system, economic system, administrative system, anthropogenic (physical) system and social system (supra-structure). The methodological approaches for creating an evaluation model of rural settlement development were revealed; the basic motivating factors that provide interrelations of systems were determined; the critical factors for each subsystem were identified and substantiated. Such an approach was justified by the composition of tasks for territorial planning at the local and state administration levels. The feasibility of applying the basic Pentagon-model, which was successfully used for solving analogous problems of sustainable development, was shown. Conclusions: the resulting model can be used for identifying and substantiating the critical factors for rural sustainable development and can also become the basis of...

  20. Methodologies in the modeling of combined chemo-radiation treatments

    Science.gov (United States)

    Grassberger, C.; Paganetti, H.

    2016-11-01

    The variety of treatment options for cancer patients has increased significantly in recent years. Not only do we combine radiation with surgery and chemotherapy, new therapeutic approaches such as immunotherapy and targeted therapies are starting to play a bigger role. Physics has made significant contributions to radiation therapy treatment planning and delivery. In particular, treatment plan optimization using inverse planning techniques has improved dose conformity considerably. Furthermore, medical physics is often the driving force behind tumor control and normal tissue complication modeling. While treatment optimization and outcome modeling does focus mainly on the effects of radiation, treatment modalities such as chemotherapy are treated independently or are even neglected entirely. This review summarizes the published efforts to model combined modality treatments combining radiation and chemotherapy. These models will play an increasing role in optimizing cancer therapy not only from a radiation and drug dosage standpoint, but also in terms of spatial and temporal optimization of treatment schedules.

  1. Modern methodology and applications in spatial-temporal modeling

    CERN Document Server

    Matsui, Tomoko

    2015-01-01

    This book provides a modern introductory tutorial on specialized methodological and applied aspects of spatial and temporal modeling. The areas covered involve a range of topics which reflect the diversity of this domain of research across a number of quantitative disciplines. For instance, the first chapter deals with non-parametric Bayesian inference via a recently developed framework known as kernel mean embedding which has had a significant influence in machine learning disciplines. The second chapter takes up non-parametric statistical methods for spatial field reconstruction and exceedance probability estimation based on Gaussian process-based models in the context of wireless sensor network data. The third chapter presents signal-processing methods applied to acoustic mood analysis based on music signal analysis. The fourth chapter covers models that are applicable to time series modeling in the domain of speech and language processing. This includes aspects of factor analysis, independent component an...

  2. Logic flowgraph methodology - A tool for modeling embedded systems

    Science.gov (United States)

    Muthukumar, C. T.; Guarro, S. B.; Apostolakis, G. E.

    1991-01-01

    The logic flowgraph methodology (LFM), a method for modeling hardware in terms of its process parameters, has been extended to form an analytical tool for the analysis of integrated (hardware/software) embedded systems. In the software part of a given embedded system model, timing and the control flow among different software components are modeled by augmenting LFM with modified Petri net structures. The objective of the use of such an augmented LFM model is to uncover possible errors and the potential for unanticipated software/hardware interactions. This is done by backtracking through the augmented LFM model according to established procedures which allow the semiautomated construction of fault trees for any chosen state of the embedded system (top event). These fault trees, in turn, produce the possible combinations of lower-level states (events) that may lead to the top event.

  3. Methodology and preliminary models for analyzing nuclear-safeguards decisions

    International Nuclear Information System (INIS)

    Judd, B.R.; Weissenberger, S.

    1978-11-01

    This report describes a general analytical tool designed by Lawrence Livermore Laboratory to assist the Nuclear Regulatory Commission in making nuclear safeguards decisions. The approach is based on decision analysis - a quantitative procedure for making decisions under uncertain conditions. The report: describes illustrative models that quantify the probability and consequences of diverted special nuclear material and the costs of safeguarding the material; demonstrates a methodology for using this information to set safeguards regulations (safeguards criteria); and summarizes insights gained in a very preliminary assessment of a hypothetical reprocessing plant.

  4. Methodology and preliminary models for analyzing nuclear safeguards decisions

    International Nuclear Information System (INIS)

    1978-11-01

    This report describes a general analytical tool designed to assist the NRC in making nuclear safeguards decisions. The approach is based on decision analysis--a quantitative procedure for making decisions under uncertain conditions. The report: describes illustrative models that quantify the probability and consequences of diverted special nuclear material and the costs of safeguarding the material, demonstrates a methodology for using this information to set safeguards regulations (safeguards criteria), and summarizes insights gained in a very preliminary assessment of a hypothetical reprocessing plant

  5. Methodology and basic algorithms of the Livermore Economic Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    Bell, R.B.

    1981-03-17

    The methodology and the basic pricing algorithms used in the Livermore Economic Modeling System (EMS) are described. The report explains the derivations of the EMS equations in detail; however, it could also serve as a general introduction to the modeling system. A brief but comprehensive explanation of what EMS is and does, and how it does it is presented. The second part examines the basic pricing algorithms currently implemented in EMS. Each algorithm's function is analyzed and a detailed derivation of the actual mathematical expressions used to implement the algorithm is presented. EMS is an evolving modeling system; improvements in existing algorithms are constantly under development and new submodels are being introduced. A snapshot of the standard version of EMS is provided and areas currently under study and development are considered briefly.

  6. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    Energy Technology Data Exchange (ETDEWEB)

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of a leak path factor (LPF), that is, the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. This LPF evaluation analyzes the basis and applicability of the assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and upon other pathways from the building, such as doorways, both open and closed. This study shows how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). Also, this study briefly addresses particle characteristics that affect atmospheric particle dispersion and compares this dispersion with the LPF methodology.
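
    The combinatory idea reduces to multiplying per-segment leak path factors along each release pathway; the segment values below are invented for illustration, and the 0.5 × 0.5 standard appears as the special case of a two-segment path.

        from math import prod

        paths = {
            "room -> corridor -> filtered exhaust": [0.5, 0.3, 0.01],
            "room -> corridor -> open doorway":     [0.5, 0.3, 0.8],
        }
        for name, factors in paths.items():
            print(f"{name}: LPF = {prod(factors):.4f}")
        print(f"assumed standard (0.5 x 0.5): LPF = {prod([0.5, 0.5]):.2f}")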

  7. Self-esteem Is Mostly Stable Across Young Adulthood: Evidence from Latent STARTS Models.

    Science.gov (United States)

    Wagner, Jenny; Lüdtke, Oliver; Trautwein, Ulrich

    2016-08-01

    How stable is self-esteem? This long-standing debate has led to different conclusions across different areas of psychology. Longitudinal data and up-to-date statistical models have recently indicated that self-esteem has stable and autoregressive trait-like components and state-like components. We applied latent STARTS models with the goal of replicating previous findings in a longitudinal sample of young adults (N = 4,532; Mage  = 19.60, SD = 0.85; 55% female). In addition, we applied multigroup models to extend previous findings on different patterns of stability for men versus women and for people with high versus low levels of depressive symptoms. We found evidence for the general pattern of a major proportion of stable and autoregressive trait variance and a smaller yet substantial amount of state variance in self-esteem across 10 years. Furthermore, multigroup models suggested substantial differences in the variance components: Females showed more state variability than males. Individuals with higher levels of depressive symptoms showed more state and less autoregressive trait variance in self-esteem. Results are discussed with respect to the ongoing trait-state debate and possible implications of the group differences that we found in the stability of self-esteem. © 2015 Wiley Periodicals, Inc.
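
    The STARTS decomposition itself is easy to illustrate by simulation: an observed score is a stable trait plus a stationary autoregressive trait plus an occasion-specific state. The variance shares and AR coefficient below are invented, not the paper's estimates.

        import numpy as np

        rng = np.random.default_rng(7)
        n, waves, phi = 10_000, 10, 0.8           # people, yearly waves, AR coefficient
        var_T, var_AR, var_S = 0.5, 0.3, 0.2      # assumed variance components

        T = rng.normal(0.0, np.sqrt(var_T), n)    # stable trait
        ar = rng.normal(0.0, np.sqrt(var_AR), n)  # autoregressive trait at wave 1
        y = np.empty((waves, n))
        for w in range(waves):
            if w > 0:                             # innovation keeps AR variance stationary
                ar = phi * ar + rng.normal(0.0, np.sqrt(var_AR * (1 - phi**2)), n)
            y[w] = T + ar + rng.normal(0.0, np.sqrt(var_S), n)   # add state component

        r = np.corrcoef(y[0], y[-1])[0, 1]        # 9-year retest correlation
        print(f"r(wave 1, wave 10) = {r:.2f}")    # ~ var_T + var_AR * phi**9 = 0.54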

  8. MoPCoM Methodology: Focus on Models of Computation

    Science.gov (United States)

    Koudri, Ali; Champeau, Joël; Le Lann, Jean-Christophe; Leilde, Vincent

    Today, developments of Real Time Embedded Systems have to face new challenges. On the one hand, Time-To-Market constraints require a reliable development process allowing quick design space exploration. On the other hand, rapidly developing technology, as stated by Moore's law, requires techniques to handle the resulting productivity gap. In a previous paper, we have presented our Model Based Engineering methodology addressing those issues. In this paper, we make a focus on Models of Computation design and analysis. We illustrate our approach on a Cognitive Radio System development implemented on an FPGA. This work is part of the MoPCoM research project gathering academic and industrial organizations (http://www.mopcom.fr).

  9. A methodology for modeling barrier island storm-impact scenarios

    Science.gov (United States)

    Mickey, Rangley C.; Long, Joseph W.; Plant, Nathaniel G.; Thompson, David M.; Dalyander, P. Soupy

    2017-02-16

    A methodology for developing a representative set of storm scenarios based on historical wave buoy and tide gauge data for a region at the Chandeleur Islands, Louisiana, was developed by the U.S. Geological Survey. The total water level was calculated for a 10-year period and analyzed against existing topographic data to identify when storm-induced wave action would affect island morphology. These events were categorized on the basis of the threshold of total water level and duration to create a set of storm scenarios that were simulated, using a high-fidelity, process-based, morphologic evolution model, on an idealized digital elevation model of the Chandeleur Islands. The simulated morphological changes resulting from these scenarios provide a range of impacts that can help coastal managers determine resiliency of proposed or existing coastal structures and identify vulnerable areas within those structures.
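
    The event-identification step can be sketched as a threshold-plus-duration scan of an hourly total water level series; the synthetic series and the elevation/duration thresholds below are invented stand-ins for the buoy/tide-gauge data and island topography.

        import numpy as np

        rng = np.random.default_rng(3)
        hours = 24 * 365
        tide = 0.6 * np.sin(2.0 * np.pi * np.arange(hours) / 12.42)     # semidiurnal tide, m
        surge = np.convolve(rng.standard_normal(hours),
                            np.ones(48) / 48, mode="same") * 2.0        # smooth surge/runup, m
        twl = tide + np.maximum(surge, 0.0)                             # toy total water level

        dune_toe, min_hours = 1.0, 6          # assumed morphology-affecting thresholds
        above = np.concatenate(([False], twl > dune_toe, [False]))
        edges = np.flatnonzero(np.diff(above.astype(int)))
        runs = zip(edges[::2], edges[1::2])   # (start, end) of contiguous exceedances
        events = [(s, e) for s, e in runs if e - s >= min_hours]
        print(f"{len(events)} storm events exceed {dune_toe} m for >= {min_hours} h")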

  10. A ROADMAP FOR GENERATING SEMANTICALLY ENRICHED BUILDING MODELS ACCORDING TO CITYGML MODEL VIA TWO DIFFERENT METHODOLOGIES

    Directory of Open Access Journals (Sweden)

    G. Floros

    2016-10-01

    3D modeling techniques have proliferated due to the rapid advances of new technologies. Nowadays, 3D modeling software focuses not only on the finest visualization of the models, but also on their semantic features during the modeling procedure. As a result, the models thus generated are both realistic and semantically enriched. Additionally, various extensions of modeling software allow for the immediate conversion of the model’s format, via semi-automatic procedures, with respect to the user’s scope. The aim of this paper is to investigate the generation of a semantically enriched CityGML building model via two different methodologies. The first methodology includes modeling in Trimble SketchUp and transformation in FME Desktop Manager, while the second methodology includes the model’s generation in CityEngine and its transformation into the CityGML format via the 3DCitiesProject extension for ArcGIS. Finally, the two aforesaid methodologies are compared and specific characteristics are evaluated, in order to infer the methodology that is best applied depending on the different projects’ purposes.

  11. Systematic Review of Health Economic Impact Evaluations of Risk Prediction Models: Stop Developing, Start Evaluating.

    Science.gov (United States)

    van Giessen, Anoukh; Peters, Jaime; Wilcher, Britni; Hyde, Chris; Moons, Carl; de Wit, Ardine; Koffijberg, Erik

    2017-04-01

    Although health economic evaluations (HEEs) are increasingly common for therapeutic interventions, they appear to be rare for the use of risk prediction models (PMs). The objective of this study was to evaluate the current state of HEEs of PMs by performing a comprehensive systematic review. Four databases were searched for HEEs of PM-based strategies. Two reviewers independently selected eligible articles. A checklist was compiled to score items focusing on general characteristics of HEEs of PMs, model characteristics and quality of HEEs, evidence on PMs typically used in the HEEs, and the specific challenges in performing HEEs of PMs. After screening 791 abstracts, 171 full texts, and reference checking, 40 eligible HEEs evaluating 60 PMs were identified. In these HEEs, PM strategies were compared with current practice (n = 32; 80%), with other stratification methods for patient management (n = 19; 48%), with an extended PM (n = 9; 23%), or with alternative PMs (n = 5; 13%). The PMs guided decisions on treatment (n = 42; 70%), further testing (n = 18; 30%), or treatment prioritization (n = 4; 7%). For 36 (60%) PMs, only a single decision threshold was evaluated. Costs of risk prediction were ignored for 28 (46%) PMs. Uncertainty in outcomes was assessed using probabilistic sensitivity analyses in 22 (55%) HEEs. Despite the huge number of PMs in the medical literature, HEE of PMs remains rare. In addition, we observed great variety in their quality and methodology, which may complicate interpretation of HEE results and implementation of PMs in practice. Guidance on HEE of PMs could encourage and standardize their application and enhance methodological quality, thereby improving adequate use of PM strategies. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  12. Spatial Development Modeling Methodology Application Possibilities in Vilnius

    Directory of Open Access Journals (Sweden)

    Lina Panavaitė

    2017-05-01

    In order to control the continued development of high-rise buildings and their irreversible visual impact on the overall silhouette of the city, the great cities of the world have introduced new methodological principles into their spatial development models. These methodologies and spatial planning guidelines are focused not only on the controlled development of high-rise buildings, but on the spatial modelling of the whole city, by defining main development criteria and estimating possible consequences. Vilnius is no exception; however, the re-establishment of Lithuania's independence brought an uncontrolled urbanization process, so most of the city's development regulations emerged as a consequence of legalizing unmanaged investor expectations. The importance of a consistent urban fabric, as well as the conservation and representation of the city's most important objects, gained attention only when an actual threat emerged of overshadowing them with new architecture, along with unmanaged urbanization in the city center and urban sprawl in suburbia caused by land-use projects. Current Vilnius spatial planning documents clearly define the urban structure and key development principles; however, the definitions are relatively abstract, resulting in uniform building coverage requirements for territories with distinct qualities and in simplified planar designs which do not meet quality standards. The overall quality of urban architecture is not regulated. The article deals with current spatial modeling methods, their individual parts, principles, the criteria for quality assessment and their applicability in Vilnius. The text contains an outline of possible building coverage regulations and impact assessment criteria for new development. The article contains a compendium of requirements for high-quality spatial planning and building design.

  13. Fractional Order Modeling of Atmospheric Turbulence - A More Accurate Modeling Methodology for Aero Vehicles

    Science.gov (United States)

    Kopasakis, George

    2014-01-01

    The presentation covers a recently developed methodology to model atmospheric turbulence as disturbances for aero vehicle gust loads and for controls development like flutter and inlet shock position. The approach models atmospheric turbulence in their natural fractional order form, which provides for more accuracy compared to traditional methods like the Dryden model, especially for high speed vehicle. The presentation provides a historical background on atmospheric turbulence modeling and the approaches utilized for air vehicles. This is followed by the motivation and the methodology utilized to develop the atmospheric turbulence fractional order modeling approach. Some examples covering the application of this method are also provided, followed by concluding remarks.
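
    The fractional-order character shows up in the von Karman gust spectrum, whose denominator carries a non-integer exponent of 5/6, unlike the integer-order Dryden form; the sketch below evaluates the standard longitudinal PSDs with assumed gust parameters.

        import numpy as np

        sigma, L, V = 1.0, 762.0, 250.0       # gust intensity, length scale (m), airspeed (m/s)
        omega = np.logspace(-3, 1, 5)         # temporal frequency, rad/s
        x = L * omega / V

        dryden = sigma**2 * (2 * L / (np.pi * V)) / (1 + x**2)
        von_karman = sigma**2 * (2 * L / (np.pi * V)) / (1 + (1.339 * x)**2) ** (5.0 / 6.0)
        for w, d, k in zip(omega, dryden, von_karman):
            print(f"omega = {w:8.3f} rad/s   Dryden = {d:9.4f}   von Karman = {k:9.4f}")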

  14. Developing the Business Model – a Methodology for Virtual Enterprises

    DEFF Research Database (Denmark)

    Tølle, Martin; Vesterager, Johan

    2003-01-01

    This chapter presents a methodology to develop Virtual Enterprises (VEs). This Virtual Enterprise Methodology (VEM) outlines activities to consider when setting up and managing virtual enterprises. As a methodology the VEM helps companies to ask the right questions when preparing for, and setting...

  15. Modeling methodology for supply chain synthesis and disruption analysis

    Science.gov (United States)

    Wu, Teresa; Blackhurst, Jennifer

    2004-11-01

    The concept of an integrated or synthesized supply chain is a strategy for managing today's globalized and customer driven supply chains in order to better meet customer demands. Synthesizing individual entities into an integrated supply chain can be a challenging task due to a variety of factors including conflicting objectives, mismatched incentives and constraints of the individual entities. Furthermore, understanding the effects of disruptions occurring at any point in the system is difficult when working toward synthesizing supply chain operations. Therefore, the goal of this research is to present a modeling methodology to manage the synthesis of a supply chain by linking hierarchical levels of the system and to model and analyze disruptions in the integrated supply chain. The contribution of this research is threefold: (1) supply chain systems can be modeled hierarchically (2) the performance of synthesized supply chain system can be evaluated quantitatively (3) reachability analysis is used to evaluate the system performance and verify whether a specific state is reachable, allowing the user to understand the extent of effects of a disruption.

  16. A methodology for ecosystem-scale modeling of selenium.

    Science.gov (United States)

    Presser, Theresa S; Luoma, Samuel N

    2010-10-01

    The main route of exposure for selenium (Se) is dietary, yet regulations lack biologically based protocols for evaluations of risk. We propose here an ecosystem-scale model that conceptualizes and quantifies the variables that determine how Se is processed from water through diet to predators. This approach uses biogeochemical and physiological factors from laboratory and field studies and considers loading, speciation, transformation to particulate material, bioavailability, bioaccumulation in invertebrates, and trophic transfer to predators. Validation of the model is through data sets from 29 historic and recent field case studies of Se-exposed sites. The model links Se concentrations across media (water, particulate, tissue of different food web species). It can be used to forecast toxicity under different management or regulatory proposals or as a methodology for translating a fish-tissue (or other predator tissue) Se concentration guideline to a dissolved Se concentration. The model illustrates some critical aspects of implementing a tissue criterion: 1) the choice of fish species determines the food web through which Se should be modeled, 2) the choice of food web is critical because the particulate material to prey kinetics of bioaccumulation differs widely among invertebrates, 3) the characterization of the type and phase of particulate material is important to quantifying Se exposure to prey through the base of the food web, and 4) the metric describing partitioning between particulate material and dissolved Se concentrations allows determination of a site-specific dissolved Se concentration that would be responsible for that fish body burden in the specific environment. The linked approach illustrates that environmentally safe dissolved Se concentrations will differ among ecosystems depending on the ecological pathways and biogeochemical conditions in that system. Uncertainties and model sensitivities can be directly illustrated by varying exposure
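
    The linked chain in its simplest form is a product of a partitioning coefficient and trophic transfer factors; every value below is an invented placeholder for the site-specific inputs the model actually uses.

        C_water = 1.0                # dissolved Se, ug/L
        Kd = 1000.0                  # particulate/dissolved partitioning, L/kg
        TTF_invertebrate = 2.0       # invertebrate tissue Se per particulate Se
        TTF_fish = 1.1               # fish tissue Se per invertebrate Se

        C_particulate = Kd * C_water / 1000.0         # ug/g dry weight
        C_fish = TTF_fish * TTF_invertebrate * C_particulate
        print(f"predicted fish tissue Se = {C_fish:.2f} ug/g dw")
        # Inverting the same chain translates a fish-tissue guideline into a
        # site-specific allowable dissolved concentration, as described above.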

  17. A methodology for ecosystem-scale modeling of selenium

    Science.gov (United States)

    Presser, T.S.; Luoma, S.N.

    2010-01-01

    The main route of exposure for selenium (Se) is dietary, yet regulations lack biologically based protocols for evaluations of risk. We propose here an ecosystem-scale model that conceptualizes and quantifies the variables that determine how Se is processed from water through diet to predators. This approach uses biogeochemical and physiological factors from laboratory and field studies and considers loading, speciation, transformation to particulate material, bioavailability, bioaccumulation in invertebrates, and trophic transfer to predators. Validation of the model is through data sets from 29 historic and recent field case studies of Se-exposed sites. The model links Se concentrations across media (water, particulate, tissue of different food web species). It can be used to forecast toxicity under different management or regulatory proposals or as a methodology for translating a fish-tissue (or other predator tissue) Se concentration guideline to a dissolved Se concentration. The model illustrates some critical aspects of implementing a tissue criterion: 1) the choice of fish species determines the food web through which Se should be modeled, 2) the choice of food web is critical because the particulate material to prey kinetics of bioaccumulation differs widely among invertebrates, 3) the characterization of the type and phase of particulate material is important to quantifying Se exposure to prey through the base of the food web, and 4) the metric describing partitioning between particulate material and dissolved Se concentrations allows determination of a site-specific dissolved Se concentration that would be responsible for that fish body burden in the specific environment. The linked approach illustrates that environmentally safe dissolved Se concentrations will differ among ecosystems depending on the ecological pathways and biogeochemical conditions in that system. Uncertainties and model sensitivities can be directly illustrated by varying exposure

  18. SR-Site groundwater flow modelling methodology, setup and results

    Energy Technology Data Exchange (ETDEWEB)

    Selroos, Jan-Olof (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)); Follin, Sven (SF GeoLogic AB, Taeby (Sweden))

    2010-12-15

As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed: the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in a SR-Site specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.

  19. Development of a General Modelling Methodology for Vacuum Residue Hydroconversion

    Directory of Open Access Journals (Sweden)

    Pereira de Oliveira L.

    2013-11-01

Full Text Available This work concerns the development of a methodology for kinetic modelling of refining processes, and more specifically for vacuum residue conversion. The proposed approach makes it possible to overcome the lack of molecular detail of the petroleum fractions and to simulate the transformation of the feedstock molecules into effluent molecules by means of a two-step procedure. In the first step, a synthetic mixture of molecules representing the feedstock for the process is generated via a molecular reconstruction method, termed SR-REM molecular reconstruction. In the second step, a kinetic Monte-Carlo method (kMC) is used to simulate the conversion reactions on this mixture of molecules. The molecular reconstruction was applied to several petroleum residues and is illustrated for an Athabasca (Canada) vacuum residue. The kinetic Monte-Carlo method is then described in detail. In order to validate this stochastic approach, a lumped deterministic model for vacuum residue conversion was simulated using Gillespie's Stochastic Simulation Algorithm. Despite the fact that both approaches are based on very different hypotheses, the stochastic simulation algorithm simulates the conversion reactions with the same accuracy as the deterministic approach. The full-scale stochastic simulation approach using molecular-level reaction pathways provides a high amount of detail on the effluent composition and is briefly illustrated for Athabasca VR hydrocracking.

  20. Methodology for assessing electric vehicle charging infrastructure business models

    International Nuclear Information System (INIS)

    Madina, Carlos; Zamora, Inmaculada; Zabala, Eduardo

    2016-01-01

The analysis of economic implications of innovative business models in networked environments, such as electro-mobility, requires a global approach to ensure that all the involved actors obtain a benefit. Although electric vehicles (EVs) provide benefits for society as a whole, there are a number of hurdles to their widespread adoption, mainly the high investment cost for the EV and for the infrastructure. Therefore, a sound business model must be built up for charging service operators, one that allows them to recover their costs while, at the same time, offering EV users a charging price that makes electro-mobility comparable to internal combustion engine vehicles. For that purpose, three scenarios are defined, which present different EV charging alternatives in terms of charging power and charging station ownership and accessibility. A case study is presented for each scenario and the charging station usage required for a profitable business model is calculated. We demonstrate that private home charging is likely to be the preferred option for EV users who can charge at home, as it offers a lower total cost of ownership under certain conditions, even today. On the contrary, finding a profitable business case for fast charging requires more intensive infrastructure usage. - Highlights: • Ecosystem is a network of actors who collaborate to create a positive business case. • Electro-mobility (electricity-powered road vehicles and ICT) is a complex ecosystem. • Methodological analysis to ensure that all actors benefit from electro-mobility. • Economic analysis of charging infrastructure deployment linked to its usage. • Comparison of EV ownership cost vs. ICE for vehicle users.
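
The profitability argument sketched in the abstract comes down to a break-even calculation: charging revenue must cover the annualized infrastructure cost plus the energy cost at some utilization level. A minimal sketch of that calculation, with entirely hypothetical numbers:

```python
# Break-even utilization for a charging station: a minimal sketch with
# hypothetical numbers, illustrating the kind of analysis the abstract describes.

def breakeven_sessions_per_year(capex, lifetime_years, opex_per_year,
                                kwh_per_session, price_per_kwh, energy_cost_per_kwh):
    """Sessions per year needed so that revenue covers the annualized cost."""
    annualized_cost = capex / lifetime_years + opex_per_year
    margin_per_session = kwh_per_session * (price_per_kwh - energy_cost_per_kwh)
    return annualized_cost / margin_per_session

# Hypothetical fast charger: 50 kEUR capex, 10-year life, 2 kEUR/yr opex,
# 30 kWh per session, 0.50 EUR/kWh charged vs 0.20 EUR/kWh energy cost.
print(breakeven_sessions_per_year(50_000, 10, 2_000, 30, 0.50, 0.20))  # ~778/yr
```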

  1. Combining prior knowledge with data-driven modeling of a batch distillation column including start-up

    NARCIS (Netherlands)

van Lith, Pascal F.; Betlem, Bernardus H.L.; Roffel, B.

    2003-01-01

    This paper presents the development of a simple model which describes the product quality and production over time of an experimental batch distillation column, including start-up. The model structure is based on a simple physical framework, which is augmented with fuzzy logic. This provides a way

  2. Modeling myocardial infarction in mice: methodology, monitoring, pathomorphology.

    Science.gov (United States)

    Ovsepyan, A A; Panchenkov, D N; Prokhortchouk, E B; Telegin, G B; Zhigalova, N A; Golubev, E P; Sviridova, T E; Matskeplishvili, S T; Skryabin, K G; Buziashvili, U I

    2011-01-01

Myocardial infarction is one of the most serious and widespread diseases in the world. In this work, a minimally invasive method for simulating myocardial infarction in mice is described in the Russian Federation for the very first time; the procedure is carried out by ligation of the coronary heart artery or by controlled electrocoagulation. As a part of the methodology, a series of anesthetic, microsurgical and revival protocols are designed, owing to which a decrease in postoperative mortality from the initial 94.6 to 13.6% is achieved. ECG confirms the development of large-focal or surface myocardial infarction. Postmortem histological examination confirms the presence of necrosis foci in the heart muscle of 87.5% of animals. Altogether, the medical data allow us to conclude that an adequate mouse model for myocardial infarction was generated. A further study is focused on the standardization of the experimental procedure and the use of genetically modified mouse strains, with the purpose of finding the most efficient therapeutic approaches for this disease.

  3. A methodology model for quality management in a general hospital.

    Science.gov (United States)

    Stern, Z; Naveh, E

    1997-01-01

    A reappraisal is made of the relevance of industrial modes of quality management to the issues of medical care. Analysis of the nature of medical care, which differentiates it from the supplier-client relationships of industry, presents the main intrinsic characteristics, which create problems in application of the industrial quality management approaches to medical care. Several examples are the complexity of the relationship between the medical action and the result obtained, the client's nonacceptance of economic profitability as a value in his medical care, and customer satisfaction biased by variable standards of knowledge. The real problems unique to hospitals are addressed, and a methodology model for their quality management is offered. Included is a sample of indicator vectors, measurements of quality care, cost of medical care, quality of service, and human resources. These are based on the trilogy of planning quality, quality control, and improving quality. The conclusions confirm the inadequacy of industrial quality management approaches for medical institutions and recommend investment in formulation of appropriate concepts.

  4. High-Fidelity Modelling Methodology of Light-Limited Photosynthetic Production in Microalgae.

    Directory of Open Access Journals (Sweden)

    Andrea Bernardi

Full Text Available Reliable quantitative description of light-limited growth in microalgae is key to improving the design and operation of industrial production systems. This article shows how the capability to predict photosynthetic processes can benefit from a synergy between mathematical modelling and lab-scale experiments using systematic design of experiment techniques. A model of chlorophyll fluorescence developed by the authors [Nikolaou et al., J Biotechnol 194:91-99, 2015] is used as the starting point, whereby the representation of the non-photochemical-quenching (NPQ) process is refined for biological consistency. This model spans multiple time scales ranging from milliseconds to hours, thus calling for a combination of various experimental techniques in order to arrive at a sufficiently rich data set and determine statistically meaningful estimates for the model parameters. The methodology is demonstrated for the microalga Nannochloropsis gaditana by combining pulse amplitude modulation (PAM) fluorescence, photosynthesis rate and antenna size measurements. The results show that the calibrated model is capable of accurate quantitative predictions under a wide range of transient light conditions. Moreover, this work provides an experimental validation of the link between fluorescence and photosynthesis-irradiance (PI) curves which had previously been theorized.
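
The photosynthesis-irradiance (PI) link validated here is often summarized with a saturating-exponential PI curve. The snippet below fits that standard form, which is a common formulation rather than necessarily the exact one in the cited model, to synthetic data, to illustrate what calibrating against PI measurements involves.

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard saturating-exponential PI curve (a common form; the cited model's
# exact formulation may differ): P(I) = Pmax * (1 - exp(-alpha * I / Pmax))
def pi_curve(irradiance, p_max, alpha):
    return p_max * (1.0 - np.exp(-alpha * irradiance / p_max))

# Illustrative (synthetic) PI data: irradiance in umol photons m^-2 s^-1
irradiance = np.array([0, 25, 50, 100, 200, 400, 800], dtype=float)
rate = np.array([0.0, 1.8, 3.2, 5.1, 6.9, 7.8, 8.1])  # arbitrary units

params, _ = curve_fit(pi_curve, irradiance, rate, p0=[8.0, 0.08])
print("Pmax = %.2f, alpha = %.3f" % tuple(params))
```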

  5. Widespread confusion in the modeling of Europa's magnetospheric interaction: what the potential modeler should consider before getting started

    Science.gov (United States)

    Cassidy, T. A.

    2016-12-01

Understanding Europa's interaction with Jupiter's magnetosphere is a critical part of the ocean sounding objective of the upcoming Clipper mission, and interpretation of these observations will require modeling efforts that build upon studies done over the last four decades. Unfortunately, these studies are often confusing and contradictory. There is, as yet, no community consensus on the assumptions and parameters that go into such models. There is enough uncertainty in this problem that I cannot tell anyone, with certainty, what they should and should not do, but in this presentation I will outline what modelers should at least consider before starting. The most important consideration that is often missing in the literature is plasma flow diversion. Many papers assume that Europa's interaction is lunar-like; a completely absorbing barrier. Data and plasma models show that there is likely significant diversion, and such diversion could prevent the bulk of magnetospheric plasma particles from reaching the surface. On the other hand, some models likely overestimate the amount of diversion: we do know that a significant amount of plasma must reach the surface in order to produce the atmosphere via sputtering and radiolysis, but most plasma models usually treat the atmosphere as a fixed boundary condition. A second consideration is the energy range of particles responsible for radiolysis and sputtering. The particles bombarding the surface range from thermal plasma (eV-keV) to non-thermal (keV-MeV), but to many modelers, it seems, these are indistinguishable despite drastic differences in how these populations interact with the moon. To illustrate this confusion, the attached figure shows the O2 source rate (O2 is likely the dominant atmospheric component) from published Europa atmosphere/plasma models. There is little agreement on either the source rate or the particle population responsible for its production. Finally, I will discuss how experiences with other planetary

  6. High level models and methodologies for information systems

    CERN Document Server

    Isaias, Pedro

    2014-01-01

    This book introduces methods and methodologies in Information Systems (IS) by presenting, describing, explaining, and illustrating their uses in various contexts, including website development, usability evaluation, quality evaluation, and success assessment.

  7. Cost Offset Associated With Early Start Denver Model for Children With Autism.

    Science.gov (United States)

    Cidav, Zuleyha; Munson, Jeff; Estes, Annette; Dawson, Geraldine; Rogers, Sally; Mandell, David

    2017-09-01

    To determine the effect of the Early Start Denver Model (ESDM) for treatment of young children with autism on health care service use and costs. We used data from a randomized trial that tested the efficacy of the ESDM, which is based on developmental and applied behavioral analytic principles and delivered by trained therapists and parents, for 2 years. Parents were interviewed about their children's service use every 6 months from the onset of the intervention to follow-up (age 6 years). The sample for this study consisted of 39 children with autism who participated in the original randomized trial at age 18 to 30 months, and were also assessed at age 6 years. Of this sample, 21 children were in the ESDM group, and 18 children were in the community care (COM) group. Reported services were categorized and costed by applying unit hourly costs. Annualized service use and costs during the intervention and post intervention for the two study arms were compared. During the intervention, children who received the ESDM had average annualized total health-related costs that were higher by about $14,000 than those of children who received community-based treatment. The higher cost of ESDM was partially offset during the intervention period because children in the ESDM group used less applied behavior analysis (ABA)/early intensive behavioral intervention (EIBI) and speech therapy services than children in the comparison group. In the postintervention period, compared with children who had earlier received treatment as usual in community settings, children in the ESDM group used less ABA/EIBI, occupational/physical therapy, and speech therapy services, resulting in significant cost savings in the amount of about $19,000 per year per child. Costs associated with ESDM treatment were fully offset within a few years after the intervention because of reductions in other service use and associated costs. Early Characteristics of Autism; http://clinicaltrials.gov/; NCT0009415

  8. A Study on Uncertainty Quantification of Reflood Model using CIRCE Methodology

    International Nuclear Information System (INIS)

    Jeon, Seongsu; Hong, Soonjoon; Oh, Deogyeon; Bang, Youngseok

    2013-01-01

The CIRCE method is intended to quantify the uncertainties of the correlations of a code; it may replace the expert judgment generally used. In this study, an uncertainty quantification of the reflood model was performed using the CIRCE methodology. In this paper, the application process of the CIRCE methodology and the main results are briefly described. The application of CIRCE provided satisfactory results. This research is expected to be useful for improving the present audit calculation methodology, KINS-REM.

  9. Parenting Classes, Parenting Behavior, and Child Cognitive Development in Early Head Start: A Longitudinal Model

    Science.gov (United States)

    Chang, Mido; Park, Boyoung; Kim, Sunha

    2009-01-01

    This study analyzed Early Head Start Research and Evaluation (EHSRE) study data, examining the effect of parenting classes on parenting behaviors and children's cognitive outcomes. The study analyzed three sets of dependent variables: parental language and cognitive stimulation, parent-child interactive activities, and the Bayley Mental…

  10. Getting started with the model for improvement: psychology and leadership in quality improvement.

    Science.gov (United States)

    Pratap, J Nick; Varughese, Anna M; Adler, Elena; Kurth, C Dean

    2013-02-01

Although the case for quality in hospitals is compelling, doctors are often uncertain how to achieve it. This article forms the third and final part of a series providing practical guidance on getting started with a first quality improvement project.

  11. A barrier for low frequency noise from starting aircraft: comparison between numerical and scale model results

    NARCIS (Netherlands)

    Bosschaart, C.; Eisses, A.R.; Eerden, F.J.M. van der

    2010-01-01

    Amsterdam Airport Schiphol has organized a competition to design a noise mitigating measure along the 'Polderbaan' runway. Its main goal is to reduce the low frequency (LF) ground noise from starting aircraft. The winning concept is a flexible parabolic shaped noise barrier positioned relatively

  12. Reference Management Methodologies for Large Structural Models at Kennedy Space Center

    Science.gov (United States)

    Jones, Corey; Bingham, Ryan; Schmidt, Rick

    2011-01-01

There have been many challenges associated with modeling some of NASA KSC's largest structures. Given the size of the welded structures here at KSC, it was critically important to properly organize model structure and carefully manage references. Additionally, because of the amount of hardware to be installed on these structures, it was very important to have a means to coordinate between different design teams and organizations, check for interferences, produce consistent drawings, and allow for simple release processes. Facing these challenges, the modeling team developed a unique reference management methodology and model fidelity methodology. This presentation will describe the techniques and methodologies that were developed for these projects. The attendees will learn about KSC's reference management and model fidelity methodologies for large structures. The attendees will understand the goals of these methodologies. The attendees will appreciate the advantages of developing a reference management methodology.

  13. Early Start Denver Model - intervention for the very youngest children with autism

    DEFF Research Database (Denmark)

    Brynskov, Cecilia

    2015-01-01

The Early Start Denver Model (ESDM) is an autism-specific intervention method developed for very young children with autism (0-4 years). The method focuses on strengthening early contact and the child's motivation, and it works purposefully with the socio-communicative precursors of language and with the early...

  14. HIERARCHICAL METHODOLOGY FOR MODELING HYDROGEN STORAGE SYSTEMS PART II: DETAILED MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, B; Donald L. Anton, D

    2008-12-22

There is significant interest in hydrogen storage systems that employ a medium which either adsorbs, absorbs or reacts with hydrogen in a nearly reversible manner. In any media-based storage system, the rate of hydrogen uptake and the system capacity are governed by a number of complex, coupled physical processes. To design and evaluate such storage systems, a comprehensive methodology was developed, consisting of a hierarchical sequence of models that range from scoping calculations to numerical models that couple reaction kinetics with heat and mass transfer for both the hydrogen charging and discharging phases. The scoping models were presented in Part I [1] of this two-part series of papers. This paper describes a detailed numerical model that integrates the phenomena occurring when hydrogen is charged and discharged. A specific application of the methodology is made to a system using NaAlH4 as the storage media.
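
At scoping level, the coupled behavior the methodology addresses can be captured by two lumped ODEs: an Arrhenius-limited uptake rate and an energy balance with reaction heat release and wall cooling. The sketch below is a generic illustration with made-up parameter values, not the detailed model of the paper.

```python
import numpy as np

# Lumped scoping model of hydrogen uptake in a reactive storage medium:
# an Arrhenius-limited uptake rate coupled to an energy balance with reaction
# heat release and wall cooling. All parameter values are illustrative.
R = 8.314                 # gas constant, J/mol/K
k0, Ea = 1.0e4, 4.0e4     # pre-exponential (1/s) and activation energy (J/mol)
w_eq = 0.04               # equilibrium hydrogen loading, weight fraction
dH = 2.0e7                # heat released per kg of hydrogen absorbed, J/kg
m_bed, cp = 1.0, 800.0    # bed mass (kg) and heat capacity (J/kg/K)
hA, T_cool = 50.0, 300.0  # wall cooling conductance (W/K) and coolant temp (K)

w, T, dt = 0.0, 300.0, 0.1
for _ in range(60000):    # forward-Euler integration over 100 minutes (a sketch)
    dwdt = k0 * np.exp(-Ea / (R * T)) * (w_eq - w)   # uptake kinetics
    q_gen = dH * m_bed * dwdt                        # exothermic heat release
    dTdt = (q_gen - hA * (T - T_cool)) / (m_bed * cp)
    w += dt * dwdt
    T += dt * dTdt
print("loading %.4f wt frac, bed temperature %.1f K" % (w, T))
```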

  15. Evaluating Biosphere Model Estimates of the Start of the Vegetation Active Season in Boreal Forests by Satellite Observations

    Directory of Open Access Journals (Sweden)

    Kristin Böttcher

    2016-07-01

Full Text Available The objective of this study was to assess the performance of the simulated start of the photosynthetically active season by a large-scale biosphere model in boreal forests in Finland with remote sensing observations. The start of season for two forest types, evergreen needle- and deciduous broad-leaf, was obtained for the period 2003–2011 from regional JSBACH (Jena Scheme for Biosphere–Atmosphere Hamburg) runs, driven with climate variables from a regional climate model. The satellite-derived start of season was determined from daily Moderate Resolution Imaging Spectrometer (MODIS) time series of Fractional Snow Cover and the Normalized Difference Water Index by applying methods that were targeted to the two forest types. The accuracy of the satellite-derived start of season in deciduous forest was assessed with bud break observations of birch, and a root mean square error of seven days was obtained. The evaluation of JSBACH-modelled start of season dates with satellite observations revealed high spatial correspondence. The bias was less than five days for both forest types but showed regional differences that need further consideration. The agreement with satellite observations was slightly better for the evergreen than for the deciduous forest. Nonetheless, comparison with gross primary production (GPP) determined from CO2 flux measurements at two eddy covariance sites in evergreen forest revealed that the JSBACH-simulated GPP was higher in early spring and led to too-early simulated start of season dates. Photosynthetic activity recovers differently in evergreen and deciduous forests. While for the deciduous forest calibration of phenology alone could improve the performance of JSBACH, for the evergreen forest changes, such as seasonality of the temperature response, would need to be introduced to the photosynthetic capacity to improve the temporal development of gross primary production.
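
A common way to extract a start-of-season date from either a simulated GPP series or a satellite index is threshold crossing on a smoothed seasonal curve, after which model and satellite dates can be compared via bias and RMSE. The sketch below is a generic illustration on synthetic curves, not the specific algorithms used in the study.

```python
import numpy as np

def start_of_season(doy, series, frac=0.25):
    """Day of year when a smoothed seasonal curve first exceeds a fraction of
    its annual amplitude (a generic threshold method, not the study's)."""
    smooth = np.convolve(series, np.ones(7) / 7.0, mode="same")
    threshold = smooth.min() + frac * (smooth.max() - smooth.min())
    return doy[np.argmax(smooth > threshold)]

doy = np.arange(1, 366)
# Synthetic "model" and "satellite" seasonal curves with a slight phase shift
model_gpp = np.clip(np.sin((doy - 105) / 365.0 * 2 * np.pi), 0, None)
sat_index = np.clip(np.sin((doy - 110) / 365.0 * 2 * np.pi), 0, None)

sos_model = start_of_season(doy, model_gpp)
sos_sat = start_of_season(doy, sat_index)
print("bias (model - satellite): %d days" % (sos_model - sos_sat))
```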

  16. ENTREPRENEURIAL ATTITUDE AND STUDENTS BUSINESS START-UP INTENTION: A PARTIAL LEAST SQUARE MODELING

    Directory of Open Access Journals (Sweden)

    Widayat Widayat

    2017-03-01

Full Text Available This article examines the role of entrepreneurial spirit and education in building an attitude toward working as an entrepreneur, and their influence on students' intention to start a business. Data were collected using a questionnaire whose validity and reliability had been established. Questionnaires were given to respondent students selected as samples at several universities in Malang, East Java, Indonesia. The collected data were analyzed using Partial Least Squares. The analysis showed that entrepreneurial spirit and education contribute to the formation of entrepreneurial attitudes, and that the attitudes formed significantly encourage intentions to start a business.

  17. A Warm-Started Homogeneous and Self-Dual Interior-Point Method for Linear Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Skajaa, Anders; Frison, Gianluca

    2013-01-01

In this paper, we present a warm-started homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control (MPC) of linear systems. To exploit the structure in the optimization problems, our algorithm utilizes a Riccati iteration procedure. We implement the algorithm in MATLAB and analyze its performance based on a smart grid power management case study. Closed-loop simulations show that 1) our algorithm is significantly faster than state-of-the-art IPMs based on sparse linear algebra routines, and 2) warm-starting reduces the number of iterations...
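
Warm-starting in MPC typically reuses the solution computed at the previous sampling instant, shifted by one step, as the initial iterate for the next solve. A minimal sketch of that shift-and-append initialization (independent of the specific homogeneous self-dual algorithm of the paper):

```python
import numpy as np

def warm_start_guess(u_prev_opt):
    """Shift the previous optimal input sequence one step forward and repeat
    the last input, giving the initial iterate for the next MPC solve."""
    return np.vstack([u_prev_opt[1:], u_prev_opt[-1:]])

# Previous horizon-of-4 solution for a 2-input system
u_prev = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.2], [0.7, 0.3]])
print(warm_start_guess(u_prev))
```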

  18. Modeling the Cloud: Methodology for Cloud Computing Strategy and Design

    Science.gov (United States)

    2011-05-17

...roadmap"; 4. Leverage an enterprise architecture methodology, such as TOGAF and/or DODAF, to build integrated artifacts; 5. Extend the business and... the Patriot Act. Transition Planning: Leveraging TOGAF Phases E, F, G & H - Opportunities and Solutions

  19. Why START?

    International Nuclear Information System (INIS)

    Mendelsohn, J.

    1991-01-01

Barring some major unexpected downturn in US-Soviet relations, it seems likely that the long-awaited Strategic Arms Reduction Talks (START) treaty will be signed sometime in 1991. Under negotiation for the past nine years, public acceptance and Senate approval of a START treaty will be facilitated by the generally less confrontational East-West relationship which has evolved over that time, by the growing constraints on the US defense budget, and by the obvious merits of the treaty itself. Not only will the nearly complete START treaty be an extremely useful and powerful arms control agreement, it will also be decidedly advantageous to US security interests. First and foremost, a START treaty will cap and reduce the steady buildup of nuclear weapons that has characterized the last 30 years of the US-Soviet strategic relationship. As a result of the basic outline originally agreed to at the Reykjavik summit, START will take a 25 to 35 percent bite out of existing nuclear arsenals, impose approximately a 50 percent cut in overall Soviet ballistic missile warheads and throw-weight (lifting power or payload capacity), and produce an exact 50 percent cut in Soviet SS-18 missiles

  20. Research methodology workshops evaluation using the Kirkpatrick's model: translating theory into practice.

    Science.gov (United States)

    Abdulghani, Hamza Mohammad; Shaik, Shaffi Ahamed; Khamis, Nehal; Al-Drees, Abdulmajeed Abdulrahman; Irshad, Mohammad; Khalil, Mahmoud Salah; Alhaqwi, Ali Ibrahim; Isnani, Arthur

    2014-04-01

Qualitative and quantitative evaluation of academic programs can enhance the development, effectiveness, and dissemination of comparative quality reports as well as quality improvement efforts. The aim was to evaluate five research methodology workshops by assessing participants' satisfaction, knowledge and skills gain, and impact on practices, using the Kirkpatrick evaluation model. The four-level Kirkpatrick model was applied for the evaluation. Training feedback questionnaires, pre- and post-tests, learner development plan reports and behavioral surveys were used to evaluate the effectiveness of the workshop programs. Of the 116 participants, 28 (24.1%) liked the programs with appreciation, 62 (53.4%) liked them with suggestions and 26 (22.4%) disliked them. Mean scores on pre- and post-MCQ tests showed a significant improvement of 17.67% in relevant basic knowledge and cognitive skills (p ≤ 0.005). Pre- and post-test scores on workshop sub-topics also improved for manuscript writing (p ≤ 0.031) and proposal writing (p ≤ 0.834). As for impact, 56.9% of participants started research, and 6.9% published their studies. Participants' performance revealed overall positive feedback, and 79% of participants reported transfer of the training skills at their workplace. The achievement of course outcomes and the suggestions given for improvement offer encouraging and very useful insight into the program. Encouraging a "research culture" and work-based learning are probably the most powerful determinants of research promotion. These findings therefore encourage the faculty development unit to continue its training and development in research methodology.

  1. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was below recommended levels in several models. Only 2 models presented recommended regression equations. There was significant heterogeneity in the discriminative ability of models with respect to age. Existing studies report HF risk prediction models with sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and the resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.
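
The C-statistic used as the discrimination benchmark in this review is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A direct (O(n^2)) computation:

```python
def c_statistic(risks, outcomes):
    """Concordance probability: P(risk_case > risk_noncase); ties count 0.5."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    noncases = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = concordant = 0.0
    for rc in cases:
        for rn in noncases:
            pairs += 1
            if rc > rn:
                concordant += 1
            elif rc == rn:
                concordant += 0.5
    return concordant / pairs

print(c_statistic([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0: perfect discrimination
print(c_statistic([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0]))  # 0.5: no discrimination
```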

  2. A modelling study to evaluate the costs and effects of lowering the starting age of population breast cancer screening.

    Science.gov (United States)

    Koleva-Kolarova, Rositsa G; Daszczuck, Alicja M; de Jonge, Chris; Abu Hantash, Mohd Kahlil; Zhan, Zhuozhao Z; Postema, Erik Jan; Feenstra, Talitha L; Pijnappel, Ruud M; Greuter, Marcel J W; de Bock, Geertruida H

    2018-03-01

Because the incidence of breast cancer increases between 45 and 50 years of age, a reconsideration is required of the current starting age (typically 50 years) for routine mammography. Our aim was to evaluate the quantitative benefits, harms, and cost-effectiveness of lowering the starting age of breast cancer screening in the Dutch general population. Economic modelling with a lifelong perspective compared biennial screening for women aged 48-74 years and for women aged 46-74 years with the current Dutch screening programme, which screens women between the ages of 50 and 74 years. Tumour deaths prevented, years of life saved (YOLS), false-positive rates, radiation-induced tumours, costs and incremental cost-effectiveness ratios (ICERs) were evaluated. Starting the screening at 48 instead of 50 years of age led to increases in: the number of small tumours detected (4.0%), tumour deaths prevented (5.6%), false positives (9.2%), YOLS (5.6%), radiation-induced tumours (14.7%), and costs (4.1%). Starting the screening at 46 instead of 48 years of age increased the number of small tumours detected (3.3%), tumour deaths prevented (4.2%), false positives (8.8%), YOLS (3.7%), radiation-induced tumours (15.2%), and costs (4.0%). The ICER was €5600/YOLS for the 48-74 scenario and €5600/YOLS for the 46-74 scenario. Women could benefit from lowering the starting age of screening as more breast cancer deaths would be averted. Starting regular breast cancer screening earlier is also cost-effective. As the number of additional expected harms is relatively small in both the scenarios examined, and the difference in ICERs is not large, introducing two additional screening rounds is justifiable. Copyright © 2017 Elsevier B.V. All rights reserved.
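
The ICER reported here is the standard incremental ratio against the current 50-74 programme:

$$\mathrm{ICER} = \frac{C_{\text{scenario}} - C_{\text{current}}}{\mathrm{YOLS}_{\text{scenario}} - \mathrm{YOLS}_{\text{current}}}$$

so €5600/YOLS means that, relative to the current programme, each additional year of life saved by the earlier starting age costs €5600; a scenario is then judged against a willingness-to-pay threshold per YOLS.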

  3. Early Start Denver Model. Un modello Evidence Based per l’intervento educativo precoce nei Disturbi dello Spettro Autistico

    Directory of Open Access Journals (Sweden)

    Saverio Fontani

    2016-07-01

Full Text Available Autism Spectrum Disorders represent one of the most complex developmental disabilities because of the massive deficit in communication competences. The social disability related to the disorders is the main target of intervention in the Early Start Denver Model - ESDM (Rogers & Dawson, 2010), which can be considered one of the most advanced models for early educational intervention from the perspective of Evidence Based Education. In this paper the theoretical foundations of the model are presented and its implications for a modern inclusive education are discussed.

  4. DEVELOPMENT MODEL OF A MODIFIED START BLOCK FOR SPRINT ATHLETICS LEARNING FOR STUDENTS OF SD NEGERI BINTORO 2 DEMAK

    Directory of Open Access Journals (Sweden)

    Tri Wulandari

    2015-11-01

Full Text Available The purpose of this study is to produce a modified starting block for teaching sprint athletics to students of SD Negeri Bintoro 2 Demak. The study uses a development model that follows the Borg & Gall development approach. The data analysis technique used is descriptive percentages. The small-group test yielded expert evaluation scores of 95% from the physical education expert (very good) and 92.5% from the learning expert (excellent), and the average questionnaire result in the small-group test was 86% (good). The average questionnaire result in the final large-group test was 93.2% (very good). The conclusion is that the modified starting block model is suitable for use in sprint athletics lessons for elementary school students and athletes of SD Negeri Bintoro 2 Demak.

  5. A methodology for the parametric modelling of the flow coefficients and flow rate in hydraulic valves

    International Nuclear Information System (INIS)

    Valdés, José R.; Rodríguez, José M.; Saumell, Javier; Pütz, Thomas

    2014-01-01

Highlights: • We develop a methodology for the parametric modelling of flow in hydraulic valves. • We characterize the flow coefficients with a generic function with two parameters. • The parameters are derived from CFD simulations of the generic geometry. • We apply the methodology to two cases from the automotive brake industry. • We validate by comparing with CFD results varying the original dimensions. - Abstract: The main objective of this work is to develop a methodology for the parametric modelling of the flow rate in hydraulic valve systems. This methodology is based on the derivation, from CFD simulations, of the flow coefficient of the critical restrictions as a function of the Reynolds number, using a generalized square root function with two parameters. The methodology is then demonstrated by applying it to two completely different hydraulic systems: a brake master cylinder and an ABS valve. This type of parametric valve model facilitates implementation in dynamic simulation models of complex hydraulic systems
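
The parametric model the highlights describe combines the orifice equation with a Reynolds-dependent flow coefficient. The sketch below uses one common two-parameter square-root saturation form, Cd(Re) = Cd_inf * sqrt(Re / (Re + delta)); the exact generalized function and parameter values in the paper may differ.

```python
import math

def discharge_coefficient(re, cd_inf=0.7, delta=30.0):
    """Two-parameter square-root saturation law for Cd(Re): tends to cd_inf at
    high Reynolds number, falls off in the laminar range. Illustrative values."""
    return cd_inf * math.sqrt(re / (re + delta))

def flow_rate(area_m2, dp_pa, rho=850.0, re=1000.0):
    """Orifice equation Q = Cd * A * sqrt(2*dp/rho) for a hydraulic restriction."""
    return discharge_coefficient(re) * area_m2 * math.sqrt(2.0 * dp_pa / rho)

print(flow_rate(area_m2=2e-6, dp_pa=5e5, re=50.0))    # m^3/s, low-Re restriction
print(flow_rate(area_m2=2e-6, dp_pa=5e5, re=5000.0))  # m^3/s, turbulent regime
```

In practice the Reynolds number itself depends on the flow rate, so in a dynamic simulation this pair of relations is typically evaluated iteratively at each time step.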

  6. Validation of multi-body modelling methodology for reconfigurable underwater robots

    DEFF Research Database (Denmark)

    Nielsen, M.C.; Eidsvik, O. A.; Blanke, Mogens

    2016-01-01

This paper investigates the problem of employing reconfigurable robots in an underwater setting. The main result presented is the experimental validation of a modelling methodology for a system consisting of N dynamically connected robots with heterogeneous dynamics. Two distinct types of experiments are performed: a series of hydrostatic free-decay tests and a series of open-loop trajectory tests. The results are compared to a simulation based on the modelling methodology. The modelling methodology shows promising results for usage with systems composed of reconfigurable underwater modules. The purpose of the model is to enable the design of control strategies for cooperative reconfigurable underwater systems.

  7. Reducing Maladaptive Behaviors in Preschool-Aged Children with Autism Spectrum Disorder Using the Early Start Denver Model

    OpenAIRE

    Fulton, Elizabeth; Eapen, Valsamma; Črnčec, Rudi; Walter, Amelia; Rogers, Sally

    2014-01-01

    The presence of maladaptive behaviors in young people with autism spectrum disorder (ASD) can significantly limit engagement in treatment programs, as well as compromise future educational and vocational opportunities. This study aimed to explore whether the Early Start Denver Model (ESDM) treatment approach reduced maladaptive behaviors in preschool-aged children with ASD in a community-based long day care setting. The level of maladaptive behavior of 38 children with ASD was rated using an ...

  8. A methodology to support multidisciplinary model-based water management

    NARCIS (Netherlands)

    Scholten, H.; Kassahun, A.; Refsgaard, J.C.; Kargas, Th.; Gavardinas, C.; Beulens, A.J.M.

    2007-01-01

    Quality assurance in model based water management is needed because of some frequently perceived shortcomings, e.g. a lack of mutual understanding between modelling team members, malpractice and a tendency of modellers to oversell model capabilities. Initiatives to support quality assurance focus on

  9. Press Start

    Science.gov (United States)

    Harteveld, Casper

This level sets the stage for the design philosophy called "Triadic Game Design" (TGD). This design philosophy can be summarized with the following sentence: it takes two to tango, but it takes three to design a meaningful game or a game with a purpose. Before the philosophy is further explained, this level will first delve into what is meant by a meaningful game or a game with a purpose. Many terms and definitions have seen the light of day, and in this book I orient specifically toward digital games that aim to have an effect beyond the context of the game itself. Subsequently, a historical overview is given of the use of games with a serious purpose, starting from the moment we human beings began to walk upright up to our contemporary society. It turns out that we have been using games for all kinds of non-entertainment purposes for quite a long time already. With this introductory material in the back of our minds, I explain the concept of TGD by means of a puzzle. After that, the protagonist of this book, the game Levee Patroller, is introduced. Based on the development of this game, the idea of TGD, which stresses balancing three different worlds, the worlds of Reality, Meaning, and Play, came into being. Interested? Then I suggest you quickly "press start!"

  10. A methodology for constructing the calculation model of scientific spreadsheets

    NARCIS (Netherlands)

    Vos, de M.; Wielemaker, J.; Schreiber, G.; Wielinga, B.; Top, J.L.

    2015-01-01

Spreadsheet models are frequently used by scientists to analyze research data. These models are typically described in a paper or a report, which serves as the single source of information on the underlying research project. As the calculation workflow in these models is not made explicit, readers are

  11. Synthesis of semantic modelling and risk analysis methodology applied to animal welfare

    NARCIS (Netherlands)

    Bracke, M.B.M.; Edwards, S.A.; Metz, J.H.M.; Noordhuizen, J.P.T.M.; Algers, B.

    2008-01-01

Decision-making on animal welfare issues requires a synthesis of information. For the assessment of farm animal welfare based on scientific information collected in a database, a methodology called 'semantic modelling' has been developed. To date, however, this methodology has not been generally

  12. An interactive boundary layer modelling methodology for aerodynamic flows

    CSIR Research Space (South Africa)

    Smith, L

    2013-01-01

Full Text Available The approach matches the out-of-boundary-layer flow with the inviscid flow approximation: continuity, $\partial(\rho u_j)/\partial x_j = 0$ (1), and conservation of momentum (Newton's second law), $\partial(\rho u_i)/\partial t + \partial(\rho u_j u_i)/\partial x_j = -\partial p/\partial x_i + \partial/\partial x_j\left[\mu\left(\partial u_i/\partial x_j + \partial u_j/\partial x_i - \tfrac{2}{3}\delta_{ij}\,\partial u_k/\partial x_k\right)\right]$ (2), coupling two-integral boundary layer solutions to a generic inviscid solver in an iterative fashion. Design/methodology/approach – The boundary layer solution is obtained using the two-integral method to solve displacement thickness point by point with a local Newton method...

  13. Diesel Engine Cold-Starting Studies: Optically Accessible Engine Experiments and Modeling

    National Research Council Canada - National Science Library

    Henein, Naeim

    1997-01-01

.... The pre-ignition chemistry showed great sensitivity to the compressed air temperature. KIVA with a modified Shell model responds accordingly to changes in inlet air temperature and fuel injection parameters...

  14. MODEL AND METHOD FOR SYNTHESIS OF PROJECT MANAGEMENT METHODOLOGY WITH FUZZY INPUT DATA

    Directory of Open Access Journals (Sweden)

    Igor V. KONONENKO

    2016-02-01

Full Text Available A literature analysis concerning the selection or creation of a project management methodology is performed. The creation of a "complete" methodology is proposed, which can be applied to managing projects of any complexity, with various degrees of responsibility for results, and with different predictability of the requirements. For the formation of a "complete" methodology, it is proposed to take the PMBOK standard as the basis, supplemented by processes from the most demanding plan-driven and flexible Agile methodologies. For each knowledge area of the PMBOK standard, the following groups of processes should be provided: initiation, planning, execution, reporting and forecasting, controlling, analysis, decision making, and closing. A method for generating a methodology for a specific project is presented. A multiple-criteria mathematical model and method are developed for the synthesis of a methodology when the initial data about the project and its environment are fuzzy.

  15. Starting electronics

    CERN Document Server

    Brindley, Keith

    2005-01-01

    Starting Electronics is unrivalled as a highly practical introduction for hobbyists, students and technicians. Keith Brindley introduces readers to the functions of the main component types, their uses, and the basic principles of building and designing electronic circuits. Breadboard layouts make this very much a ready-to-run book for the experimenter; and the use of multimeter, but not oscilloscopes, puts this practical exploration of electronics within reach of every home enthusiast's pocket. The third edition has kept the simplicity and clarity of the original. New material

  17. Mathematical modelling methodologies in predictive food microbiology: a SWOT analysis.

    Science.gov (United States)

    Ferrer, Jordi; Prats, Clara; López, Daniel; Vives-Rego, Josep

    2009-08-31

Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity of performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future
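
As a concrete instance of the primary, whole-system continuous models discussed here, the logistic growth equation predicts population density over time from a maximum specific growth rate and a carrying capacity; a minimal sketch with illustrative parameter values:

```python
import math

def logistic_growth(n0, mu_max, n_max, t_hours):
    """Primary predictive-microbiology model (logistic form): cell density over
    time given a maximum specific growth rate and a carrying capacity."""
    return n_max / (1.0 + (n_max / n0 - 1.0) * math.exp(-mu_max * t_hours))

# E.g. 1e3 CFU/ml inoculum, mu_max = 0.5 1/h, ceiling 1e9 CFU/ml (placeholders)
for t in (0, 12, 24, 36, 48):
    print(t, "h: %.3g CFU/ml" % logistic_growth(1e3, 0.5, 1e9, t))
```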

  18. Bootstrap data methodology for sequential hybrid model building

    Science.gov (United States)

    Volponi, Allan J. (Inventor); Brotherton, Thomas (Inventor)

    2007-01-01

    A method for modeling engine operation comprising the steps of: 1. collecting a first plurality of sensory data, 2. partitioning a flight envelope into a plurality of sub-regions, 3. assigning the first plurality of sensory data into the plurality of sub-regions, 4. generating an empirical model of at least one of the plurality of sub-regions, 5. generating a statistical summary model for at least one of the plurality of sub-regions, 6. collecting an additional plurality of sensory data, 7. partitioning the second plurality of sensory data into the plurality of sub-regions, 8. generating a plurality of pseudo-data using the empirical model, and 9. concatenating the plurality of pseudo-data and the additional plurality of sensory data to generate an updated empirical model and an updated statistical summary model for at least one of the plurality of sub-regions.
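
A compressed sketch of the nine steps: partition the envelope, fit per-region empirical models and statistical summaries, then on update generate pseudo-data from the old model and concatenate it with the new measurements before refitting. Region boundaries, the model form (a simple line here), and pseudo-data counts are all illustrative choices, not those of the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_region(x, y):
    """Empirical model for one flight-envelope sub-region (a line here) plus a
    statistical summary of the data it was fitted on."""
    coeffs = np.polyfit(x, y, 1)
    resid_std = np.std(y - np.polyval(coeffs, x))
    return coeffs, {"n": len(x), "resid_std": resid_std}

# Steps 1-5: collect data falling in one sub-region and fit the initial models
x0 = rng.uniform(0.0, 1.0, 50)
y0 = 2.0 * x0 + rng.normal(0.0, 0.05, 50)
model, summary = fit_region(x0, y0)

# Steps 6-9: new data arrives; generate pseudo-data from the old empirical model
# (using the stored summary for sample size and noise level) and concatenate it
# with the new measurements before refitting
x_new = rng.uniform(0.0, 1.0, 20)
y_new = 2.0 * x_new + rng.normal(0.0, 0.05, 20)
x_pseudo = rng.uniform(0.0, 1.0, summary["n"])
y_pseudo = np.polyval(model, x_pseudo) + rng.normal(0.0, summary["resid_std"], summary["n"])
model, summary = fit_region(np.concatenate([x_pseudo, x_new]),
                            np.concatenate([y_pseudo, y_new]))
print("updated slope and intercept:", model)
```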

  19. Interoperability Matter: Levels of Data Sharing, Starting from a 3d Information Modelling

    Science.gov (United States)

    Tommasi, C.; Achille, C.

    2017-02-01

Nowadays, the adoption of BIM processes in the AEC (Architecture, Engineering and Construction) industry means being oriented towards synergistic workflows, based on informative instruments capable of realizing the virtual model of the building. The target of this article is to address the interoperability issue, approaching the subject through a theoretical part and also a practical example, in order to show how these notions are applicable in real situations. In particular, the case study analysed belongs to the Cultural Heritage field, where it is possible to find some difficulties - both in the modelling and sharing phases - due to the complexity of shapes and elements. Focusing on the interoperability between different software, the questions are: What kinds of information can be shared, and how much? Given that this process also leads to a standardization of the modelled parts, is there the possibility of an accuracy loss?

  20. Proposal for product development model focused on ce certification methodology

    Directory of Open Access Journals (Sweden)

    Nathalia Marcia Goulart Pinheiro

    2015-09-01

Full Text Available This paper presents a critical analysis comparing 21 product development models in order to identify whether these structures meet the demands of Product Certification of the European Community (CE). Furthermore, it presents a product development model comprising the steps in the models analyzed, including improvements in activities for the referred product certification. The proposed improvements are justified by the growing quest for the internationalization of products and processes within companies.

  1. Methodology for Modeling Building Energy Performance across the Commercial Sector

    Energy Technology Data Exchange (ETDEWEB)

    Griffith, B.; Long, N.; Torcellini, P.; Judkoff, R.; Crawley, D.; Ryan, J.

    2008-03-01

    This report uses EnergyPlus simulations of each building in the 2003 Commercial Buildings Energy Consumption Survey (CBECS) to document and demonstrate bottom-up methods of modeling the entire U.S. commercial buildings sector (EIA 2006). The ability to use a whole-building simulation tool to model the entire sector is of interest because the energy models enable us to answer subsequent 'what-if' questions that involve technologies and practices related to energy. This report documents how the whole-building models were generated from the building characteristics in 2003 CBECS and compares the simulation results to the survey data for energy use.

  2. ITER central solenoid model coil heat treatment complete and assembly started

    International Nuclear Information System (INIS)

    Thome, R.J.; Okuno, K.

    1998-01-01

    A major R and D task in the ITER program is to fabricate a Superconducting Model Coil for the Central Solenoid to establish the design and fabrication methods for ITER size coils and to demonstrate conductor performance. Completion of its components is expected in 1998, to be followed by assembly with structural components and testing in a facility at JAERI

  3. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation

    NARCIS (Netherlands)

    Ribba, B.; Grimm, H. P.; Agoram, B.; Davies, M. R.; Gadkar, K.; Niederer, S.; van Riel, N.; Timmis, J.; van der Graaf, P. H.

    2017-01-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early

  4. Study of fuel control strategy based on an fuel behavior model for starting conditions; Nenryo kyodo model ni motozuita shidoji no nenryo hosei hosho ni tsuite no kosatsu

    Energy Technology Data Exchange (ETDEWEB)

    Nakajima, Y.; Uchida, M.; Iwano, H.; Oba, H. [Nissan Motor Co. Ltd., Tokyo (Japan)

    1997-10-01

We have applied a fuel behavior model to a fuel injection system which we call SOFIS (Sophisticated and Optimized Fuel Injection System) in order to achieve accurate air/fuel ratio control and good driveability. However, the fuel behavior under starting conditions is still not clear. To meet low emission rules and to get better driveability under starting conditions, better air/fuel ratio control is necessary. We have now determined the ignition timing, injection timing, and injection pulse width required in such conditions. Previously, we analyzed the state of the air/fuel mixture under cold conditions and made a new fuel behavior model which considered fuel losses such as hydrocarbons and dissolution into oil and so on. This time, we have applied this idea to starting. We confirm this new model offers improved air/fuel ratio control. 6 refs., 9 figs., 3 tabs.
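
Fuel-behavior (wall-wetting) compensation of this kind is often written in the classic x-tau form: a fraction X of each injection deposits in a wall film that evaporates with time constant tau, and extra losses (dissolution into oil, unburned HC) can be modeled as a further leak term. The sketch below shows that generic structure with placeholder values, not Nissan's SOFIS calibration.

```python
# Generic x-tau wall-film model with an extra loss term: a sketch of the kind
# of fuel-behavior model described above; all parameters are illustrative.
X = 0.3        # fraction of injected fuel deposited in the wall film
TAU = 0.5      # film evaporation time constant, s
LOSS = 0.05    # fraction of evaporated fuel lost (oil dilution, HC), cold start
DT = 0.01      # time step, s

film = 0.0
for step in range(200):
    m_inj = 1.0e-5                       # injected fuel per step, kg (placeholder)
    film += X * m_inj                    # deposition into the wall film
    evap = film * DT / TAU               # film evaporation this step
    film -= evap
    m_cyl = (1.0 - X) * m_inj + (1.0 - LOSS) * evap  # fuel reaching the cylinder
print("steady film mass %.2e kg, delivered per step %.2e kg" % (film, m_cyl))
```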

  5. Implementation of the Direct Torque Control (DTC) in current model, with current starting limiter

    OpenAIRE

    Mino Aguilar, Gerardo; Muñoz Hernández, German Ardul; Romeral Martínez, José Luis; Cortez, Liliana; Saynes Torres, J.

    2012-01-01

This paper presents the scheme of Direct Torque Control (DTC) for induction motor drives, where the flux and torque of the motor are estimated by the IM current model. This scheme requires knowledge of the speed, the rotor time constant, and the inductive parameters of the motor. The results prove the excellent torque response and efficiency characteristics, which confirm the validity of this control scheme. Due to the rapid response offered by the DTC, this causes a high start current; the inverter protecti...

  6. The PROMETHEUS bundled payment experiment: slow start shows problems in implementing new payment models.

    Science.gov (United States)

    Hussey, Peter S; Ridgely, M Susan; Rosenthal, Meredith B

    2011-11-01

    Fee-for-service payment is blamed for many of the problems observed in the US health care system. One of the leading alternative payment models proposed in the Affordable Care Act of 2010 is bundled payment, which provides payment for all of the care a patient needs over the course of a defined clinical episode, instead of paying for each discrete service. We evaluated the initial "road test" of PROMETHEUS Payment, one of several bundled payment pilot projects. The project has faced substantial implementation challenges, and none of the three pilot sites had executed contracts or made bundled payments as of May 2011. The pilots have taken longer to set up than expected, primarily because of the complexity of the payment model and the fact that it builds on the existing fee-for-service payment system and other complexities of health care. Participants continue to see promise and value in the bundled payment model, but the pilot results suggest that the desired benefits of this and other payment reforms may take time and considerable effort to materialize.

  7. Terminology and methodology in modelling for water quality management

    DEFF Research Database (Denmark)

    Carstensen, J.; Vanrolleghem, P.; Rauch, W.

    1997-01-01

There is a widespread need for a common terminology in modelling for water quality management. This paper points out sources of confusion in the communication between researchers due to misuse of existing terminology or use of unclear terminology. The paper attempts to clarify the context of the most widely used terms for characterising models and within the process of model building. It is essential to the ever-growing community of researchers within water quality management that communication is eased by establishing a common terminology. This should not be done by giving broader definitions...

  9. Box & Jenkins Model Identification: A Comparison of Methodologies

    Directory of Open Access Journals (Sweden)

    Maria Augusta Soares Machado

    2012-12-01

Full Text Available This paper presents a comparison of a neuro-fuzzy backpropagation network and Forecast automatic model identification for automatically identifying non-seasonal Box & Jenkins models. Recently, combinations of neural network and fuzzy logic technologies have been used to deal with uncertain and subjective problems. On the basis of the obtained results, it is concluded that this type of approach is very powerful.

  10. Systematic reviews of animal models: methodology versus epistemology.

    Science.gov (United States)

    Greek, Ray; Menache, Andre

    2013-01-01

    Systematic reviews are currently favored methods of evaluating research in order to reach conclusions regarding medical practice. The need for such reviews is necessitated by the fact that no research is perfect and experts are prone to bias. By combining many studies that fulfill specific criteria, one hopes that the strengths can be multiplied and thus reliable conclusions attained. Potential flaws in this process include the assumptions that underlie the research under examination. If the assumptions, or axioms, upon which the research studies are based, are untenable either scientifically or logically, then the results must be highly suspect regardless of the otherwise high quality of the studies or the systematic reviews. We outline recent criticisms of animal-based research, namely that animal models are failing to predict human responses. It is this failure that is purportedly being corrected via systematic reviews. We then examine the assumption that animal models can predict human outcomes to perturbations such as disease or drugs, even under the best of circumstances. We examine the use of animal models in light of empirical evidence comparing human outcomes to those from animal models, complexity theory, and evolutionary biology. We conclude that even if legitimate criticisms of animal models were addressed, through standardization of protocols and systematic reviews, the animal model would still fail as a predictive modality for human response to drugs and disease. Therefore, systematic reviews and meta-analyses of animal-based research are poor tools for attempting to reach conclusions regarding human interventions.

  11. A Comparative Study of Three Methodologies for Modeling Dynamic Stall

    Science.gov (United States)

    Sankar, L.; Rhee, M.; Tung, C.; ZibiBailly, J.; LeBalleur, J. C.; Blaise, D.; Rouzaud, O.

    2002-01-01

    During the past two decades, there has been increased reliance on computational fluid dynamics methods for modeling rotors in high-speed forward flight. Computational methods are being developed for modeling the shock-induced loads on the advancing side, for first-principles-based modeling of the trailing wake evolution, and for retreating blade stall. The retreating blade dynamic stall problem has received particular attention, because the large variations in lift and pitching moment encountered in dynamic stall can lead to blade vibrations and pitch link fatigue. Restricting attention to aerodynamics, the numerical prediction of dynamic stall remains a complex and challenging CFD problem that, even in two dimensions at low speed, combines the major difficulties of aerodynamics, such as the grid resolution required for viscous phenomena at leading-edge bubbles or in mixing layers and the bias of numerical viscosity, with the major difficulties of physical modeling, such as the turbulence and transition models, whose determinant influences, already present in static maximum-lift or stall computations, are accentuated by the dynamic nature of the phenomena.

  12. Methodologic model to scheduling on service systems: a software engineering approach

    Directory of Open Access Journals (Sweden)

    Eduyn Ramiro Lopez-Santana

    2016-06-01

    Full Text Available This paper presents a software engineering approach to a research proposal for building an expert system for scheduling in service systems, using methodologies and processes of software development. We use adaptive software development as the methodology for the software architecture, based on its description as a software metaprocess that characterizes the research process. We draw UML (Unified Modeling Language) diagrams to provide a visual model that describes the research methodology, in order to identify the actors, elements and interactions in the research process.

  13. Tornado missile simulation and design methodology. Volume 2: model verification and data base updates. Final report

    International Nuclear Information System (INIS)

    Twisdale, L.A.; Dunn, W.L.

    1981-08-01

    A probabilistic methodology has been developed to predict the probabilities of tornado-propelled missiles impacting and damaging nuclear power plant structures. Mathematical models of each event in the tornado missile hazard have been developed and sequenced to form an integrated, time-history simulation methodology. The models are data based where feasible. The data include documented records of tornado occurrence, field observations of missile transport, results of wind tunnel experiments, and missile impact tests. Probabilistic Monte Carlo techniques are used to estimate the risk probabilities. The methodology has been encoded in the TORMIS computer code to facilitate numerical analysis and plant-specific tornado missile probability assessments.
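
    As a rough illustration of the Monte Carlo step described above, the following sketch samples a tornado occurrence, a missile injection, and a transport outcome in each trial and counts target impacts. All rates and distributions are hypothetical placeholders, not TORMIS data or models.

        import random

        def simulate_missile_impact(n_trials=1_000_000, seed=42):
            """Toy Monte Carlo estimate of an annual missile-impact probability."""
            rng = random.Random(seed)
            p_tornado_per_year = 1e-3   # hypothetical site strike rate
            p_missile_injected = 0.30   # hypothetical pickup probability
            hits = 0
            for _ in range(n_trials):
                if rng.random() > p_tornado_per_year:
                    continue            # no tornado this trial
                if rng.random() > p_missile_injected:
                    continue            # tornado, but no missile lofted
                # Hypothetical transport model: impact occurs if the sampled
                # flight range reaches the target structure.
                flight_range = rng.expovariate(1 / 80.0)        # mean 80 m
                distance_to_target = rng.uniform(20.0, 200.0)   # m
                if flight_range >= distance_to_target:
                    hits += 1
            return hits / n_trials

        print(f"Estimated annual impact probability: {simulate_missile_impact():.1e}")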

  14. Modeling of a Small Transportation Company’s Start-Up with Limited Data during Economic Recession

    Directory of Open Access Journals (Sweden)

    Xiaoping Fang

    2013-01-01

    Full Text Available This paper presents a modeling method for analyzing a small transportation company's start-up and growth during a global economic crisis that affected China, designed to help the owners make better investment and operating decisions with limited data. Since data were limited, a simple regression model and a binary regression model failed to generate satisfactory results, so an additive periodic time series model was built to forecast business orders and income. Since the transportation market is segmented by business type and transportation distance, a polynomial model and a logistic curve model were constructed to forecast the growth trend of each segmented transportation market, and the seasonal influence function was fitted by the seasonal ratio method. Although both models produced satisfactory results and showed nearly the same goodness-of-fit in sample, the logistic model presented better forecasting performance out of sample and was therefore closer to reality. Additionally, by checking the development trajectory of the case company's business against the financial crisis of 2008, the modeling and analysis suggest that the sample company is affected by national macroeconomic factors such as GDP and imports and exports, and that this effect comes with a time lag of one to two years.
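
    To make the segment-growth step concrete, here is a minimal sketch of fitting a logistic curve to monthly order counts with SciPy; the data are synthetic and the parameter names (K, r, t0) are illustrative, since the paper does not publish its series.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, K, r, t0):
            """Logistic growth: capacity K, growth rate r, midpoint t0."""
            return K / (1.0 + np.exp(-r * (t - t0)))

        # Synthetic monthly order counts for one market segment.
        t = np.arange(36, dtype=float)
        rng = np.random.default_rng(0)
        orders = logistic(t, K=500, r=0.25, t0=18) + rng.normal(0, 15, t.size)

        # Fit the growth trend; p0 supplies rough starting guesses.
        (K, r, t0), _ = curve_fit(logistic, t, orders,
                                  p0=[orders.max(), 0.1, t.mean()])
        print(f"K={K:.0f}, r={r:.3f}, t0={t0:.1f}")

        # Out-of-sample forecast for the next 12 months.
        forecast = logistic(np.arange(36, 48), K, r, t0)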

  15. MODELING OF DYNAMIC PROCESSES IN PLANETARY IN-WHEEL MOTOR GEARBOXES OF MINE TRUCKS DURING ITS STARTING AND ACCELERATION

    Directory of Open Access Journals (Sweden)

    V. V. Mikhailov

    2012-01-01

    Full Text Available The paper describes a mathematical model of a planetary double-row in-wheel motor gearbox. The main parameters of its dynamic system have been determined. The paper presents simulations of transient processes during the starting and acceleration of a mine truck with electric motor wheels. The gearbox's natural frequency has been established theoretically and experimentally. The paper proposes an algorithm and a calculation program as an alternative to high-cost tests for investigating the gear-mechanism dynamics of large planetary gearboxes.

  16. Venture financing of start-ups: A model of contract between VC fund and entrepreneur

    Directory of Open Access Journals (Sweden)

    Osintsev Yury

    2010-01-01

    Full Text Available Venture capital has become one of the main sources of innovation in the modern, global economy. It is not just a substitute for bank loans: it has proven to be a more efficient way of financing projects at different stages. On the one hand, venture financing allows for projects with higher risk, which leads to the possibility of higher returns on investment. On the other hand, venture investors, who usually have managerial experience, often participate in governing the business, which certainly adds value to the enterprise. In this paper we establish a model of the contract between the venture capital fund and the entrepreneur, focusing on probably the most important issue of this contract: the shares of the parties in the business. The shares in the company determine the distribution of the joint surplus. The expected joint profits are not simply specified exogenously in the contract but depend on the behavioral variables of both parties at the stage of fulfilling the contract. We call the behavioral variable of the entrepreneur 'effort' and that of the venture fund 'advice'. The probability of the project's success, and hence the expected joint revenues, are increased by both. However, both kinds of effort are costly to the respective parties that make them. Based on this, we can elaborate the profit functions of both sides of the contract. Our model can be considered a basis for specifying contracts concerning venture financing. It can provide the logic for how the equilibrium shares of the entrepreneur and the venture fund are obtained.
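
    The abstract does not state explicit functional forms, so the following is only a standard double-moral-hazard specification consistent with the description, not the paper's exact model; the symbols s, e, a, p, R, c_E and c_V are illustrative.

        % s: entrepreneur's equity share; 1 - s: the VC fund's share
        % e, a: entrepreneur's effort and the fund's advice
        % p(e,a): success probability, increasing in both arguments
        % R: revenue on success; c_E, c_V: private costs of effort and advice
        \begin{align*}
          \pi_E(e,a) &= s\, p(e,a)\, R - c_E(e), \\
          \pi_V(e,a) &= (1-s)\, p(e,a)\, R - c_V(a).
        \end{align*}
        % Each party maximizes its own profit, so the equilibrium (e*, a*)
        % solves the first-order conditions
        \[
          s\, p_e(e^*,a^*)\, R = c_E'(e^*), \qquad
          (1-s)\, p_a(e^*,a^*)\, R = c_V'(a^*),
        \]
        % and the equilibrium share s* is the value that maximizes the joint
        % surplus pi_E + pi_V subject to these incentive constraints.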

  17. A Roadmap for Generating Semantically Enriched Building Models According to CityGML Model via Two Different Methodologies

    Science.gov (United States)

    Floros, G.; Solou, D.; Pispidikis, I.; Dimopoulou, E.

    2016-10-01

    3D modeling methodologies have advanced rapidly owing to the rapid progress of new technologies. Nowadays, 3D modeling software focuses not only on the finest visualization of the models, but also on their semantic features during the modeling procedure. As a result, the models thus generated are both realistic and semantically enriched. Additionally, various extensions of modeling software allow for the immediate conversion of the model's format via semi-automatic procedures, with respect to the user's scope. The aim of this paper is to investigate the generation of a semantically enriched CityGML building model via two different methodologies. The first methodology includes modeling in Trimble SketchUp and transformation in FME Desktop Manager, while the second includes the model's generation in CityEngine and its transformation into the CityGML format via the 3DCitiesProject extension for ArcGIS. Finally, the two aforesaid methodologies are compared and specific characteristics are evaluated, in order to infer the methodology that is best applied depending on the different projects' purposes.

  18. An improved methodology for dynamic modelling and simulation of ...

    Indian Academy of Sciences (India)

    This presents a real challenge to engineers who want to design and implement such systems with high performance, efficiency and reliability. For this purpose, engineers need a tool capable of modelling and/or simulating components of diverse nature within the ECDS. However, a majority of the available tools are limited ...

  19. A branch-and-bound methodology within algebraic modelling systems

    NARCIS (Netherlands)

    Bisschop, J.J.; Heerink, J.B.J.; Kloosterman, G.

    1998-01-01

    Through the use of application-specific branch-and-bound directives it is possible to find solutions to combinatorial models that would otherwise be difficult or impossible to find by just using generic branch-and-bound techniques within the framework of mathematical programming. Minto is an

  20. FORMATION OF THE PROJECT CONCEPT OF START PRE ACCELERATOR BY WAY OF BUSINESS MODEL BUILDING

    Directory of Open Access Journals (Sweden)

    Роман Васильович ФЕЩУР

    2016-02-01

    Full Text Available We describe the concept of a project for an enterprise whose main activity is creating and implementing training programs at the pre-investment stage and commercializing creative startups so that they can be presented in a form attractive to investors. We also form a flexible business model for the enterprise operating the pre-accelerator, based on the template of A. Osterwalder (the business model canvas). We reveal the essence of the value proposition of the pre-accelerator enterprise. The company's interactions with its customers, chief among them startup holders, coworking spaces, outsourcing companies, universities, business angels and sponsors, are described. We disclose the role of the human resources that create and convey to customers information about the developed value propositions. The basic activities of the created business, production, customer satisfaction and building a platform (network), are given. The risks of the project are identified and assessed quantitatively. We note that the successful implementation of each startup provides not only an economic but also a social impact: new jobs are created and workers are provided with decent wages.

  1. A new conservation strategy for China-A model starting with primates.

    Science.gov (United States)

    Pan, Ruliang; Oxnard, Charles; Grueter, Cyril C; Li, Baoguo; Qi, Xiaoguang; He, Gang; Guo, Songtao; Garber, Paul A

    2016-11-01

    Although the evolutionary history of primates in China dates to the Eocene, and includes major radiations of lorisids, hominoids, cercopithecines, and colobines during the Miocene, Pliocene, and Pleistocene, extensive human-induced habitat change and deforestation over the past few centuries has resulted in 22 of 25 extant species listed as threatened or endangered, and two species of gibbons extirpated in the last few years. This commentary briefly reviews factors that have contributed to the decline of primates in China over the past 400 years, and in particular how major social events and economic development in modern China have resulted in unsustainable environmental change. In response, we describe our efforts to develop a strategic scientific, educational and conservation partnership in China, focusing on primates, in which GIS technology will be used to integrate geographical profiles, climatic information, and changes in land use patterns and human and nonhuman primate distributions to highlight issues of immediate concern and to develop priority-based conservation solutions. Our goal is to evaluate how human-induced environmental change has impacted primates over time and to predict the likelihood of primate population extinctions in the near future. This model represents an early warning system that will be widely available to the Chinese government, public, educational institutions, researchers, and NGOs through social media and educational videos in order to arouse public awareness and promote wildlife conservation. We encourage colleagues across a broad range of academic disciplines, political ideologies, and the public to help move this strategy into reality, the sooner the better. Am. J. Primatol. 78:1137-1148, 2016. © 2016 Wiley Periodicals, Inc.

  2. K-Means Subject Matter Expert Refined Topic Model Methodology

    Science.gov (United States)

    2017-01-01

    The model that Blei, Ng and Jordan (2003) propose is the more widely accepted method for clustering unsupervised images or text documents. In our work, we focus on a pipeline of interacting with forms, modeling and testing with images, evaluating with KL distances, fitting K-Means and KSMERT, Pareto charting, and retrieving top results. Reference: Blei, D.M., Ng, A.Y., Jordan, M.I. Latent Dirichlet allocation. The Journal of Machine Learning Research 2003; 3:993
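
    As a hedged illustration of the K-Means half of such a pipeline, the sketch below clusters a toy corpus on TF-IDF features and prints the top centroid terms, which a subject matter expert could then inspect and refine; the corpus and cluster count are invented for the example.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        docs = [
            "radar signal processing for target detection",
            "doppler radar clutter suppression methods",
            "logistics supply chain cost optimization",
            "inventory management and supply forecasting",
        ]

        # TF-IDF features, then K-Means into k candidate topics.
        vectorizer = TfidfVectorizer(stop_words="english")
        X = vectorizer.fit_transform(docs)
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

        # Top terms per centroid act as a first-cut topic label.
        terms = vectorizer.get_feature_names_out()
        for c, centroid in enumerate(km.cluster_centers_):
            top = centroid.argsort()[::-1][:3]
            print(f"cluster {c}: {[terms[i] for i in top]}")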

  3. New Methodologies for the Thermal Modelling of CubeSats

    OpenAIRE

    Reiss, Philip

    2012-01-01

    One of the main threats to the success of a CubeSat mission is the unbalanced distribution of thermal loads caused by internal and external heat sources. In order to design an appropriate thermal subsystem that can cope with these loads, a detailed analysis is required. However, currently available thermal software is considered less convenient for application to CubeSats, mainly due to the complexity of the modelling process. This paper examines thermal engineering issues for ...

  4. Modeling postpartum depression in rats: theoretic and methodological issues

    Science.gov (United States)

    Ming, LI; Shinn-Yi, CHOU

    2016-01-01

    The postpartum period is when a host of changes occur at molecular, cellular, physiological and behavioral levels to prepare female humans for the challenge of maternity. Alteration or prevention of these normal adaptions is thought to contribute to disruptions of emotion regulation, motivation and cognitive abilities that underlie postpartum mental disorders, such as postpartum depression. Despite the high incidence of this disorder, and the detrimental consequences for both mother and child, its etiology and related neurobiological mechanisms remain poorly understood, partially due to the lack of appropriate animal models. In recent decades, there have been a number of attempts to model postpartum depression disorder in rats. In the present review, we first describe clinical symptoms of postpartum depression and discuss known risk factors, including both genetic and environmental factors. Thereafter, we discuss various rat models that have been developed to capture various aspects of this disorder and knowledge gained from such attempts. In doing so, we focus on the theories behind each attempt and the methods used to achieve their goals. Finally, we point out several understudied areas in this field and make suggestions for future directions. PMID:27469254

  5. Exploring the possibility of modeling a genetic counseling guideline using agile methodology.

    Science.gov (United States)

    Choi, Jeeyae

    2013-01-01

    Increased demand for genetic counseling services has heightened the need for a computerized genetic counseling decision support system. In order to develop an effective and efficient computerized system, modeling the genetic counseling guideline is an essential step. In this pilot study, Agile methodology with the Unified Modeling Language (UML) was utilized to model a guideline; 13 tasks and 14 associated elements were extracted. The successfully constructed conceptual class and activity diagrams revealed that Agile methodology with UML is a suitable tool for modeling a genetic counseling guideline.

  6. Methodologies for Wind Turbine and STATCOM Integration in Wind Power Plant Models for Harmonic Resonances Assessment

    DEFF Research Database (Denmark)

    Freijedo Fernandez, Francisco Daniel; Chaudhary, Sanjay Kumar; Guerrero, Josep M.

    2015-01-01

    This paper approaches modelling methodologies for the integration of wind turbines and STATCOMs in harmonic resonance studies. Firstly, an admittance equivalent model representing the harmonic signature of grid-connected voltage source converters is provided. A simplified type IV wind turbine modelling......-domain. As an alternative, a power-based averaged modelling is also proposed. The type IV wind turbine harmonic signature and STATCOM active harmonic mitigation are considered for the simulation case studies. Simulation results provide good insight into the features and limitations of the proposed methodologies....

  7. Methodological aspects of journaling a dynamic adjusting entry model

    Directory of Open Access Journals (Sweden)

    Vlasta Kašparovská

    2011-01-01

    Full Text Available This paper expands the discussion of the importance and function of adjusting entries for loan receivables. Discussion of the cyclical development of adjusting entries, their negative impact on the business cycle and potential solutions has intensified during the financial crisis. These discussions are still ongoing and continue to be relevant to members of the professional public, banking regulators and representatives of international accounting institutions. The objective of this paper is to evaluate a method of journaling dynamic adjusting entries under current accounting law. It also expresses the authors’ opinions on the potential for consistently implementing basic accounting principles in journaling adjusting entries for loan receivables under a dynamic model.

  8. Fuel cycle assessment: A compendium of models, methodologies, and approaches

    Energy Technology Data Exchange (ETDEWEB)

    1994-07-01

    The purpose of this document is to profile analytical tools and methods which could be used in a total fuel cycle analysis. The information in this document provides a significant step towards: (1) Characterizing the stages of the fuel cycle. (2) Identifying relevant impacts which can feasibly be evaluated quantitatively or qualitatively. (3) Identifying and reviewing other activities that have been conducted to perform a fuel cycle assessment or some component thereof. (4) Reviewing the successes/deficiencies and opportunities/constraints of previous activities. (5) Identifying methods and modeling techniques/tools that are available, tested and could be used for a fuel cycle assessment.

  9. Methodology for modeling the microbial contamination of air filters.

    Directory of Open Access Journals (Sweden)

    Yun Haeng Joe

    Full Text Available In this paper, we propose a theoretical model to simulate microbial growth on contaminated air filters and the entrainment of bioaerosols from the filters into an indoor environment. Air filter filtration and antimicrobial efficiencies, and the effects of dust particles on these efficiencies, were evaluated. The number of bioaerosols downstream of the filter could be characterized according to three phases: initial, transitional, and stationary. In the initial phase, the number was determined by the filtration efficiency, the concentration of dust particles entering the filter, and the flow rate. During the transitional phase, the number of bioaerosols gradually increased up to the stationary phase, at which point no further increase was observed. The antimicrobial efficiency and the flow rate were the dominant parameters affecting the number of bioaerosols downstream of the filter in the transitional and stationary phases, respectively. It was found that the nutrient fraction of the dust particles entering the filter caused a significant change in the number of bioaerosols in both the transitional and stationary phases. The proposed model offers a way to predict the air filter life cycle in terms of microbiological activity by simulating the microbial contamination of the filter.
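
    The sketch below is a toy time-stepping version of such a model, reproducing the three qualitative phases; every parameter value is a hypothetical placeholder rather than a figure from the paper.

        import numpy as np

        def downstream_bioaerosols(hours, eta_filt=0.95, eta_anti=0.5,
                                   c_in=100.0, q=1.0, k_growth=0.1,
                                   n_max=1e6, k_entrain=1e-4):
            """Toy hourly simulation of bioaerosols downstream of a filter."""
            n_filter = 0.0            # viable microbes held on the filter
            out = []
            for _ in range(hours):
                captured = eta_filt * c_in * q          # captured this hour
                survivors = (1 - eta_anti) * captured   # survive antimicrobial
                # Logistic growth of the colony on the filter medium.
                n_filter += survivors + k_growth * n_filter * (1 - n_filter / n_max)
                # Downstream = penetration through the filter + re-entrainment.
                penetration = (1 - eta_filt) * c_in * q
                out.append(penetration + k_entrain * n_filter)
            return np.array(out)

        counts = downstream_bioaerosols(hours=500)
        # Penetration-dominated at first (initial phase), rising as the
        # colony grows (transitional), then levelling off (stationary).
        print(counts[0], counts[250], counts[-1])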

  10. New temperature model of the Netherlands from new data and novel modelling methodology

    Science.gov (United States)

    Bonté, Damien; Struijk, Maartje; Békési, Eszter; Cloetingh, Sierd; van Wees, Jan-Diederik

    2017-04-01

    Deep geothermal energy has attracted growing interest in Western Europe in recent decades, for direct use but also, as knowledge of the subsurface improves, for electricity generation. In the Netherlands, where the sector took off with the first system in 2005, geothermal energy is seen as a key player for a sustainable future. Knowledge of the subsurface temperature, together with the available flow from the reservoir, is an important factor that can determine the success of a geothermal energy project. To support the development of deep geothermal energy systems in the Netherlands, we made a first assessment of the subsurface temperature based on thermal data and on geological elements (Bonté et al., 2012). An outcome of this work was ThermoGIS, which uses the temperature model. This work is a revision of the model used in ThermoGIS. The improvements over the first model are multiple: we have improved not only the dataset used for the calibration and the structural model, but also the methodology, through improved software (called b3t). The temperature dataset has been updated by integrating temperatures from newly accessible wells. The sedimentary description of the basin has been improved by using an updated and refined structural model and an improved lithological definition. A major improvement comes from the modelling methodology: with b3t, the calibration is made using not only the lithospheric parameters but also the thermal conductivity of the sediments. The result is a much more accurate definition of the model parameters and a refined handling of the calibration process. The result obtained is a precise and improved temperature model of the Netherlands. The thermal conductivity variation in the sediments, together with the geometry of the layers, is an important driver of temperature variations, and the influence of the Zechstein salt in the north of the country is important. In addition, the radiogenic heat

  11. Reliability modelling of repairable systems using Petri nets and fuzzy Lambda-Tau methodology

    Energy Technology Data Exchange (ETDEWEB)

    Knezevic, J.; Odoom, E.R

    2001-07-01

    A methodology is developed which uses Petri nets instead of the fault tree methodology and solves for reliability indices utilising the fuzzy Lambda-Tau method. Fuzzy set theory is used for representing the failure rate and repair time instead of the classical (crisp) set theory, because fuzzy numbers allow expert opinions, linguistic variables, operating conditions, uncertainty and imprecision in reliability information to be incorporated into the system model. Petri nets are used because, unlike the fault tree methodology, they allow efficient simultaneous generation of minimal cut and path sets.
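
    A minimal sketch of the Lambda-Tau gate expressions with triangular fuzzy numbers (TFNs) follows, assuming the standard two-input formulas (AND: lambda = l1*l2*(t1+t2), tau = t1*t2/(t1+t2); OR: lambda = l1+l2, tau = (l1*t1+l2*t2)/(l1+l2)) and the usual (low, mode, high) approximation for positive TFNs; a full treatment would use alpha-cut interval arithmetic, and the input numbers are hypothetical.

        def tfn_add(a, b):
            return tuple(x + y for x, y in zip(a, b))

        def tfn_mul(a, b):
            return tuple(x * y for x, y in zip(a, b))

        def tfn_div(a, b):
            # Approximate division of positive TFNs.
            return (a[0] / b[2], a[1] / b[1], a[2] / b[0])

        def and_gate(l1, t1, l2, t2):
            lam = tfn_mul(tfn_mul(l1, l2), tfn_add(t1, t2))
            tau = tfn_div(tfn_mul(t1, t2), tfn_add(t1, t2))
            return lam, tau

        def or_gate(l1, t1, l2, t2):
            lam = tfn_add(l1, l2)
            tau = tfn_div(tfn_add(tfn_mul(l1, t1), tfn_mul(l2, t2)),
                          tfn_add(l1, l2))
            return lam, tau

        # Hypothetical failure rates (1/h) and repair times (h) as TFNs.
        l1, t1 = (1e-4, 2e-4, 3e-4), (2.0, 4.0, 6.0)
        l2, t2 = (5e-5, 1e-4, 2e-4), (1.0, 2.0, 3.0)
        print("AND gate (lambda, tau):", and_gate(l1, t1, l2, t2))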

  12. Geographic modelling of jaw fracture rates in Australia: a methodological model for healthcare planning.

    Science.gov (United States)

    Kruger, Estie; Heitz-Mayfield, Lisa J A; Perera, Irosha; Tennant, Marc

    2010-06-01

    While Australians are one of the healthiest populations in the world, inequalities in access to health care and health outcomes exist for Indigenous Australians and Australians living in rural or urban areas of the country. Hence, the purpose of this study was to develop an innovative methodological approach for predicting the incidence rates of jaw fractures and estimating the demand for oral health services within Australia. Population data were obtained from the Australian Bureau of Statistics and were divided across Australia by statistical local area and related to a validated remoteness index. Every episode of discharge from all hospitals in Western Australia for the financial years 1999/2000 to 2004/2005 indicating a jaw fracture as the principal oral condition, as classified by the International Classification of Diseases (ICD-10-AM), was the inclusion criterion for the study. Hospitalization data were obtained from the Western Australian Hospital Morbidity Data System. The model estimated almost 10 times higher jaw fracture rates for Indigenous populations than their non-Indigenous counterparts. Moreover, the incidence of jaw fractures was higher among Indigenous people living in rural and remote areas compared with their urban and semi-urban counterparts. In contrast, in the non-Indigenous population, higher rates of jaw fractures were estimated for urban and semi-urban inhabitants compared with their rural and remote counterparts. This geographic modelling technique could be improved by methodological refinements and further research. It will be useful in developing strategies for health management and reducing the burden of jaw fractures and the cost of treatment within Australia. This model will also have direct implications for strategic planning for prevention and management policies in Australia aimed at reducing the inequalities gap both in terms of geography as well as Aboriginality.

  13. Models of expected returns on the brazilian market: Empirical tests using predictive methodology

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2009-01-01

    Full Text Available Predictive methodologies for testing expected returns models are widely diffused in the international academic environment. However, these methods have not been used in Brazil in a systematic way. Generally, empirical studies conducted with Brazilian stock market data concentrate only on the first step of these methodologies. The purpose of this article was to test and compare the CAPM, 3-factor and 4-factor models using a predictive methodology considering two steps, time-series and cross-sectional regressions, with standard errors obtained by the techniques of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model over the 3-factor model, and of the 3-factor model over the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence that does not use predictive methodology, the size and momentum effects seem not to exist in the Brazilian capital markets, but there is evidence of the value effect and of the relevance of the market factor in explaining expected returns. These findings raise some questions, mainly owing to the originality of the methodology in the local market and the fact that this subject is still incipient and polemic in the Brazilian academic environment.
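
    The two-step procedure can be sketched as follows on synthetic data; factor counts, sample sizes and noise levels are invented for the example, and the study itself of course uses Brazilian market data.

        import numpy as np

        def fama_macbeth(returns, factors):
            """Two-step Fama-MacBeth (1973) estimation.

            returns: (T, N) asset returns; factors: (T, K) factor series.
            Step 1: time-series regressions give each asset's betas.
            Step 2: cross-sectional regressions per period give risk premia;
            FM standard errors are the standard errors of their means.
            """
            T, N = returns.shape
            X = np.column_stack([np.ones(T), factors])
            betas = np.linalg.lstsq(X, returns, rcond=None)[0][1:].T  # (N, K)

            Z = np.column_stack([np.ones(N), betas])
            lambdas = np.array([np.linalg.lstsq(Z, returns[t], rcond=None)[0]
                                for t in range(T)])                   # (T, K+1)
            return lambdas.mean(axis=0), lambdas.std(axis=0, ddof=1) / np.sqrt(T)

        rng = np.random.default_rng(1)
        f = rng.normal(0, 0.05, (120, 3))          # three hypothetical factors
        b = rng.normal(1.0, 0.3, (25, 3))          # true betas for 25 assets
        r = f @ b.T + rng.normal(0, 0.02, (120, 25))
        premia, stderr = fama_macbeth(r, f)
        print("premia:", premia, "FM std errors:", stderr)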

  14. Two-Year versus One-Year Head Start Program Impact: Addressing Selection Bias by Comparing Regression Modeling with Propensity Score Analysis

    Science.gov (United States)

    Leow, Christine; Wen, Xiaoli; Korfmacher, Jon

    2015-01-01

    This article compares regression modeling and propensity score analysis as different types of statistical techniques used in addressing selection bias when estimating the impact of two-year versus one-year Head Start on children's school readiness. The analyses were based on the national Head Start secondary dataset. After controlling for…
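
    For intuition, the sketch below contrasts the two approaches on synthetic data, using regression adjustment and one common propensity-score estimator (inverse-probability weighting); the Head Start study itself may use a different propensity-score technique, and all variables here are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 2000
        x = rng.normal(size=(n, 3))                 # family covariates
        p_treat = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
        d = rng.binomial(1, p_treat)                # two-year (1) vs one-year (0)
        y = 2.0 * d + x @ np.array([1.0, -0.5, 0.3]) + rng.normal(size=n)

        # (a) Regression adjustment: OLS of outcome on treatment + covariates.
        X = np.column_stack([np.ones(n), d, x])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        print("regression-adjusted effect:", beta[1])

        # (b) Propensity scores + inverse-probability weighting.
        ps = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]
        w1, w0 = d / ps, (1 - d) / (1 - ps)
        ipw = np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)
        print("IPW effect estimate:", ipw)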

  15. Sensitivity Analysis of the Army Training and Testing Area Carrying Capacity (ATTACC) Model to User-specified Starting Parameters

    National Research Council Canada - National Science Library

    Anderson, Alan

    1999-01-01

    ...) program is a methodology for estimating training and testing land carrying capacity. The methodology is used to determine land rehabilitation and maintenance costs associated with land-based training and other uses...

  16. Towards A Model-Based Prognostics Methodology For Electrolytic Capacitors: A Case Study Based On Electrical Overstress Accelerated Aging

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a model-driven methodology for predict- ing the remaining useful life of electrolytic capacitors. This methodology adopts a Kalman filter...

  17. Key-Aspects of Scientific Modeling Exemplified by School Science Models: Some Units for Teaching Contextualized Scientific Methodology

    Science.gov (United States)

    Develaki, Maria

    2016-01-01

    Models and modeling are core elements of scientific methods and are consequently also of key importance for the conception and teaching of scientific methodology. The epistemology of models and its transfer and adaptation to nature of science education are not, however, simple themes. We present some conceptual units in which school science models…

  18. Integration of process design and controller design for chemical processes using model-based methodology

    DEFF Research Database (Denmark)

    Abd.Hamid, Mohd-Kamaruddin; Sin, Gürkan; Gani, Rafiqul

    2010-01-01

    In this paper, a novel systematic model-based methodology for performing integrated process design and controller design (IPDC) for chemical processes is presented. The methodology uses a decomposition method to solve the IPDC problem, typically formulated as a mathematical programming (optimization with constraints) problem. Accordingly, the optimization problem is decomposed into four sub-problems: (i) pre-analysis, (ii) design analysis, (iii) controller design analysis, and (iv) final selection and verification, which are relatively easier to solve and whose solutions satisfy design, control and cost criteria. The advantage of the proposed methodology is that it is systematic, makes use of thermodynamic-process knowledge and provides valuable insights into the solution of IPDC problems in chemical engineering practice.

  19. Modeling the Decay in AN Hbim Starting from 3d Point Clouds. a Followed Approach for Cultural Heritage Knowledge

    Science.gov (United States)

    Chiabrando, F.; Lo Turco, M.; Rinaudo, F.

    2017-08-01

    The recent trends in architectural data management call for scientific and professional collaboration among the several disciplines involved in design, restoration and maintenance. It seems an accepted notion that, in the near future, all the information connected to new interventions or conservation activities on historical buildings will be managed using a BIM platform. Nowadays, range- or image-based metric survey techniques (mainly Terrestrial Laser Scanning or photogrammetric platforms, today largely based on projective geometry) allow the generation of 3D point clouds, 3D models, orthophotos and other outputs with assessed accuracy. The subsequent conversion of 3D information into parametric components, especially in a historical environment, is not easy and has many open issues. With reference to current commercial BIM software and its embedded tools and plugins, the paper deals with the methodology followed for the realization of two parametric 3D models (Palazzo Sarmatoris and the Smistamento RoundHouse, two historical buildings in the north-west part of Italy). The paper describes the proposed workflow, the plug-ins employed for automatic reconstruction, and the solutions adopted for well-known problems connected to the modeling phase, such as the realization of vaults and the modeling of irregular 3D surfaces. Finally, the strategy devised for mapping the decay in a BIM environment and the connected results are critically discussed, together with conclusions and future perspectives.

  20. MODELING THE DECAY IN AN HBIM STARTING FROM 3D POINT CLOUDS. A FOLLOWED APPROACH FOR CULTURAL HERITAGE KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    F. Chiabrando

    2017-08-01

    Full Text Available The recent trends in architectural data management call for scientific and professional collaboration among the several disciplines involved in design, restoration and maintenance. It seems an accepted notion that, in the near future, all the information connected to new interventions or conservation activities on historical buildings will be managed using a BIM platform. Nowadays, range- or image-based metric survey techniques (mainly Terrestrial Laser Scanning or photogrammetric platforms, today largely based on projective geometry) allow the generation of 3D point clouds, 3D models, orthophotos and other outputs with assessed accuracy. The subsequent conversion of 3D information into parametric components, especially in a historical environment, is not easy and has many open issues. With reference to current commercial BIM software and its embedded tools and plugins, the paper deals with the methodology followed for the realization of two parametric 3D models (Palazzo Sarmatoris and the Smistamento RoundHouse, two historical buildings in the north-west part of Italy). The paper describes the proposed workflow, the plug-ins employed for automatic reconstruction, and the solutions adopted for well-known problems connected to the modeling phase, such as the realization of vaults and the modeling of irregular 3D surfaces. Finally, the strategy devised for mapping the decay in a BIM environment and the connected results are critically discussed, together with conclusions and future perspectives.

  1. Novel Methodology for Functional Modeling and Simulation of Wireless Embedded Systems

    Directory of Open Access Journals (Sweden)

    Sosa Morales Emma

    2008-01-01

    Full Text Available Abstract A novel methodology is presented for the modeling and the simulation of wireless embedded systems. Tight interaction between the analog and the digital functionality makes the design and verification of such systems a real challenge. The applied methodology brings together the functional models of the baseband algorithms written in C language with the circuit descriptions at behavioral level in Verilog or Verilog-AMS for the system simulations in a single kernel environment. The physical layer of an ultrawideband system has been successfully modeled and simulated. The results confirm that this methodology provides a standardized framework in order to efficiently and accurately simulate complex mixed signal applications for embedded systems.

  2. Getting Started: An Empirically Derived Logic Model for Age-Friendly Community Initiatives in the Early Planning Phase.

    Science.gov (United States)

    Greenfield, Emily A

    2018-02-16

    Age-friendly community initiatives (AFCIs) foster efforts across stakeholders to make localities more supportive and inclusive of older adults, and potentially better for residents of all ages. This study drew on in-depth interviews with leaders of nine newly forming AFCIs in northern New Jersey to develop an empirically based logic model for the initiatives in the early planning phase. The results obtained from a conventional content analysis indicated three main activities in the early planning phase: assessing the community, meeting and communicating with stakeholders, and facilitating communitywide communications. These activities worked toward two outputs: increased understanding of aging in the community and more engaged stakeholders in aging. Participants described leveraging the contributions of lead staff, consultants, elected officials, organizational partners, volunteers, interns, funders, and other AFCIs to engage in their focal activities. Based on these findings, a logic model for AFCIs in the early planning phase is presented. AFCI leaders can draw on this model to evaluate AFCI processes and outcomes in their formative stages, as well as to strategically plan for the start of an AFCI within a given locality. Findings also suggest important directions for future research on the development of AFCIs and the community changes that they seek to influence.

  3. Multi-model approach to petroleum resource appraisal using analytic methodologies for probabilistic systems

    Science.gov (United States)

    Crovelli, R.A.

    1988-01-01

    The geologic appraisal model that is selected for a petroleum resource assessment depends upon the purpose of the assessment, the basic geologic assumptions for the area, the type of available data, the time available before deadlines, available human and financial resources, available computer facilities, and, most importantly, the available quantitative methodology with corresponding computer software and any new quantitative methodology that would have to be developed. Therefore, different resource assessment projects usually require different geologic models. Also, more than one geologic model might be needed in a single project for assessing different regions of the study area or for cross-checking resource estimates of the area. Some geologic analyses used in the past for petroleum resource appraisal involved play analysis. The corresponding quantitative methodologies of these analyses usually consisted of Monte Carlo simulation techniques. A probabilistic system of petroleum resource appraisal for play analysis has been designed to meet the following requirements: (1) includes a variety of geologic models, (2) uses an analytic methodology instead of Monte Carlo simulation, (3) possesses the capacity to aggregate estimates from many areas that have been assessed by different geologic models, and (4) runs quickly on a microcomputer. Geologic models consist of four basic types: reservoir engineering, volumetric yield, field size, and direct assessment. Several case histories and present studies by the U.S. Geological Survey are discussed. © 1988 International Association for Mathematical Geology.

  4. Summary of the Supplemental Model Reports Supporting the Disposal Criticality Analysis Methodology Topical Report

    International Nuclear Information System (INIS)

    Brownson, D. A.

    2002-01-01

    The Department of Energy (DOE) Office of Civilian Radioactive Waste Management (OCRWM) has committed to a series of model reports documenting the methodology to be utilized in the Disposal Criticality Analysis Methodology Topical Report (YMP 2000). These model reports detail and provide validation of the methodology to be utilized for criticality analyses related to: (1) Waste form/waste package degradation; (2) Waste package isotopic inventory; (3) Criticality potential of degraded waste form/waste package configurations (effective neutron multiplication factor); (4) Probability of criticality (for each potential critical configuration as well as total event); and (5) Criticality consequences. The purpose of this summary report is to provide a status of the model reports and a schedule for their completion. This report also provides information relative to the model report content and validation. The model reports and their revisions are being generated as a result of: (1) Commitments made in the Disposal Criticality Analysis Methodology Topical Report (YMP 2000); (2) Open Items from the Safety Evaluation Report (Reamer 2000); (3) Key Technical Issue agreements made during the DOE/U.S. Nuclear Regulatory Commission (NRC) Technical Exchange Meeting (Reamer and Williams 2000); and (4) NRC requests for additional information (Schlueter 2002).

  5. A robust methodology for kinetic model parameter estimation for biocatalytic reactions

    DEFF Research Database (Denmark)

    Al-Haque, Naweed; Andrade Santacoloma, Paloma de Gracia; Lima Afonso Neto, Watson

    2012-01-01

    Effective estimation of parameters in biocatalytic reaction kinetic expressions is very important when building process models to enable evaluation of process technology options and alternative biocatalysts. The kinetic models used to describe enzyme-catalyzed reactions generally include several parameters, which are strongly correlated with each other. State-of-the-art methodologies such as nonlinear regression (using progress curves) or graphical analysis (using initial rate data, for example, the Lineweaver-Burk plot, Hanes plot or Dixon plot) often incorporate errors in the estimates and rarely lead to globally optimized parameter values. In this article, a robust methodology to estimate parameters for biocatalytic reaction kinetic expressions is proposed. The methodology determines the parameters in a systematic manner by exploiting the best features of several of the current approaches...
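
    As a baseline illustration of direct nonlinear regression on initial-rate data (the simplest of the approaches the article builds on), here is a Michaelis-Menten fit with SciPy; the data points are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def michaelis_menten(s, vmax, km):
            """Initial rate v = Vmax * S / (Km + S)."""
            return vmax * s / (km + s)

        # Hypothetical initial-rate data (substrate mM, rate mM/min).
        s = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
        v = np.array([0.8, 1.7, 2.8, 4.1, 5.5, 6.9, 7.5, 7.9])

        # Nonlinear regression avoids the distortion of the error structure
        # that linearizations such as the Lineweaver-Burk plot introduce.
        (vmax, km), cov = curve_fit(michaelis_menten, s, v, p0=[v.max(), 1.0])
        perr = np.sqrt(np.diag(cov))
        print(f"Vmax = {vmax:.2f} +/- {perr[0]:.2f}, Km = {km:.2f} +/- {perr[1]:.2f}")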

  6. The Development of Marine Accidents Human Reliability Assessment Approach: HEART Methodology and MOP Model

    Directory of Open Access Journals (Sweden)

    Ludfi Pratiwi Bowo

    2017-06-01

    Full Text Available Humans are one of the important factors in the assessment of accidents, particularly marine accidents. Hence, studies are conducted to assess the contribution of human factors to accidents. Two generations of Human Reliability Assessment (HRA) methodologies have been developed, classified by their differing viewpoints on problem-solving as the first and second generation. Accident analysis can be performed using three techniques: sequential, epidemiological and systemic, with marine accidents falling under the epidemiological technique. This study compares the Human Error Assessment and Reduction Technique (HEART) methodology and the 4M Overturned Pyramid (MOP) model as applied to marine accidents. Furthermore, the MOP model can effectively describe the relationships among the other factors that affect accidents, whereas the HEART methodology focuses only on human factors.

  7. Methodology for Applying Cyber Security Risk Evaluation from BN Model to PSA Model

    International Nuclear Information System (INIS)

    Shin, Jin Soo; Heo, Gyun Young; Kang, Hyun Gook; Son, Han Seong

    2014-01-01

    There are several advantages to using digital equipment, such as cost, convenience, and availability, and the replacement of analog I and C equipment by digital is inevitable. Nuclear facilities have already started applying digital systems to I and C. However, nuclear facilities must make this change carefully, because digital equipment raises difficult issues regarding the required high level of safety, irradiation embrittlement, and cyber security. Cyber security, one of the important concerns in using digital equipment, can affect the whole integrity of nuclear facilities. For instance, cyber-attacks have occurred against nuclear facilities, such as the SQL Slammer worm, Stuxnet, Duqu, and Flame. The regulatory authorities have published many regulatory requirement documents, such as U.S. NRC Regulatory Guides 5.71 and 1.152, IAEA guide NSS-17, IEEE standards, and the KINS Regulatory Guide. One of the important problems in cyber security research for nuclear facilities is the difficulty of obtaining data through penetration experiments. Therefore, we build a cyber security risk evaluation model with a Bayesian network (BN) for the reactor protection system (RPS), one of the safety-critical systems, which trips the reactor when an accident happens at the facility; BN can be used to overcome the data problem. We propose a method to apply the BN cyber security model to the probabilistic safety assessment (PSA) model, which has long been used for the safety assessment of the systems, structures and components of a facility. The proposed method will be able to provide insight into safety as well as cyber risk to the facility.
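
    A hand-rolled miniature of the BN-to-PSA idea is sketched below; every probability is a hypothetical placeholder, and a realistic model would have many more nodes and conditional probability tables.

        # Chain over a three-node BN: attempt -> penetration -> logic altered.
        p_attempt = 0.10                    # P(attack attempted per year)
        p_pen_given_attempt = 0.05          # P(perimeter penetrated | attempt)
        p_alter_given_pen = 0.20            # P(RPS logic altered | penetrated)
        p_rps_compromised = p_attempt * p_pen_given_attempt * p_alter_given_pen
        print(f"P(RPS compromised) = {p_rps_compromised:.2e}")

        # Feed the marginal into a PSA basic event, e.g. as an increased
        # failure-on-demand probability for the reactor trip function.
        p_fod_base = 1e-5                   # hypothetical PSA basic event
        p_fail_given_comp = 1.0             # assume compromise defeats the trip
        p_trip_failure = (p_fod_base * (1 - p_rps_compromised)
                          + p_fail_given_comp * p_rps_compromised)
        print(f"Adjusted trip failure probability = {p_trip_failure:.2e}")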

  8. Model and measurement methodology for the analysis of consumer choice of foods products

    NARCIS (Netherlands)

    B. Wierenga (Berend)

    1983-01-01

    The consumer can be conceived as an imperfect problem solver. Consumer behavior with respect to food products is purposive, but the consumer is bounded by limitations of information, cognitive skills, memory and time. From this starting point, this paper develops a model of the

  9. Developing a methodology to predict PM10 concentrations in urban areas using generalized linear models.

    Science.gov (United States)

    Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G

    2016-09-01

    A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology is based on the relationship developed between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) of Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring station networks. The developed GLM considers PM10 concentrations as the dependent variable, and both the gaseous pollutants and the meteorological variables as explanatory independent variables. A logarithmic link function was used with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best performance of modelled results against measured data was achieved by the model restricted to air temperatures above 25°C, compared with the model considering all temperature ranges and with the model considering only temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and methodology could be adopted for other cities to predict PM10 concentrations when such data are not available from measurements at air quality monitoring stations or other acquisition means.
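
    A sketch of the model family described above, fitted to synthetic data with statsmodels (a Poisson GLM uses the log link by default); variable names and the temperature-regime split mirror the description, but all coefficients and data are invented.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 365
        df = pd.DataFrame({
            "co": rng.gamma(2, 0.2, n), "no2": rng.gamma(3, 8, n),
            "so2": rng.gamma(2, 2, n), "temp": rng.normal(18, 7, n),
            "rh": rng.uniform(30, 95, n), "wind": rng.gamma(2, 1.5, n),
        })
        mu = np.exp(2.5 + 0.02 * df["no2"] + 0.03 * df["temp"] - 0.05 * df["wind"])
        df["pm10"] = rng.poisson(mu)

        X = sm.add_constant(df[["co", "no2", "so2", "temp", "rh", "wind"]])
        fit_all = sm.GLM(df["pm10"], X, family=sm.families.Poisson()).fit()
        print(fit_all.summary())

        # Separate fit for the warm regime, as the paper's findings suggest.
        hot = df["temp"] > 25
        fit_hot = sm.GLM(df.loc[hot, "pm10"], X[hot],
                         family=sm.families.Poisson()).fit()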

  10. From LCAs to simplified models: a generic methodology applied to wind power electricity.

    Science.gov (United States)

    Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle

    2013-02-05

    This study presents a generic methodology to produce simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify the key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential parameters. Greenhouse gas (GHG) performances have been plotted as a function of these identified key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated, thus avoiding the undertaking of an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them a robust but simple support tool for assessing the environmental performance of energy systems.
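
    The shape of such a simplified model can be sketched as follows: GHG intensity as a function of the two key parameters, with life-cycle emissions amortized over lifetime production. The embodied-emissions and rated-power figures are hypothetical placeholders, not values from the paper.

        def ghg_intensity(load_factor, lifetime_years,
                          rated_kw=2000.0, embodied_kg_co2eq=1.5e6):
            """g CO2-eq per kWh = life-cycle emissions / lifetime output."""
            lifetime_kwh = rated_kw * 8760.0 * load_factor * lifetime_years
            return embodied_kg_co2eq * 1000.0 / lifetime_kwh

        for lf in (0.20, 0.25, 0.35):
            print(f"load factor {lf:.2f}: {ghg_intensity(lf, 20):.1f} g CO2-eq/kWh")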

  11. A Methodological Review of Structural Equation Modelling in Higher Education Research

    Science.gov (United States)

    Green, Teegan

    2016-01-01

    Despite increases in the number of articles published in higher education journals using structural equation modelling (SEM), research addressing their statistical sufficiency, methodological appropriateness and quantitative rigour is sparse. In response, this article provides a census of all covariance-based SEM articles published up until 2013…

  12. Methodology and Applications in Non-linear Model-based Geostatistics

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund

    that are approximately Gaussian. Parameter estimation and prediction for the transformed Gaussian model are studied. In some cases a transformation cannot possibly render the data Gaussian. A methodology for analysing such data was introduced by Diggle, Tawn and Moyeed (1998): the generalised linear spatial model....... Conditioned on an underlying and unobserved Gaussian process, the observations at the measured locations follow a generalised linear model. Concerning inference, Markov chain Monte Carlo methods are used. The study of these models is the main topic of the thesis. Construction of priors, and the use of flat...... contains functions for inference in generalised linear spatial models. ...

  13. An Application of the PMI Model at the Project Level: Evaluation of the ESEA Title IV C Fresh Start Minischool Project.

    Science.gov (United States)

    Harrison, Patricia C.

    The Planning, Monitoring, and Implementation Model (PMI) was developed to provide a model for systematic evaluation of educational programs to determine their effectiveness in achieving goals and objectives. This paper demonstrates the applicability of the PMI model at the project level. Fresh Start Minischool at Ballou High School (District of…

  14. Exploring the Integration of COSYSMO with a Model-Based Systems Engineering Methodology in Early Trade Space Analytics and Decisions

    Science.gov (United States)

    2016-06-01

    methodology, analysis, and recommendations that appear in Chapters 3-5. The methodology chapter identifies and discusses the specific SysML models, data requirements, and model entities that relate to the study. The background, methodology, and results presented in this thesis highlight themes of compliance, use of system models, data utility, and...

  15. Reducing maladaptive behaviors in preschool-aged children with autism spectrum disorder using the early start denver model.

    Science.gov (United States)

    Fulton, Elizabeth; Eapen, Valsamma; Crnčec, Rudi; Walter, Amelia; Rogers, Sally

    2014-01-01

    The presence of maladaptive behaviors in young people with autism spectrum disorder (ASD) can significantly limit engagement in treatment programs, as well as compromise future educational and vocational opportunities. This study aimed to explore whether the Early Start Denver Model (ESDM) treatment approach reduced maladaptive behaviors in preschool-aged children with ASD in a community-based long day care setting. The level of maladaptive behavior of 38 children with ASD was rated using an observation-based measure on three occasions during the intervention: on entry, 12 weeks post-entry, and on exit (post-intervention) over an average treatment duration of 11.8 months. Significant reductions were found in children's maladaptive behaviors over the course of the intervention, with 68% of children showing a treatment response by 12 weeks and 79% on exit. This change was accompanied by improvement in children's overall developmental level as assessed by the Mullen Scales of Early Learning, but not by significant changes on the Vineland Adaptive Behavior Scales-II or Social Communication Questionnaire. Replication with a larger sample, control conditions, and additional measures of maladaptive behavior is necessary in order to determine the specific factors underlying these improvements; however, the findings of the present study suggest that the ESDM program may be effective in improving not only core developmental domains, but also decreasing maladaptive behaviors in preschool-aged children with ASD.

  16. Reducing maladaptive behaviors in preschool-aged children with Autism Spectrum Disorder using the Early Start Denver Model

    Directory of Open Access Journals (Sweden)

    Elizabeth eFulton

    2014-05-01

    Full Text Available The presence of maladaptive behaviors in young people with Autism Spectrum Disorder (ASD) can significantly limit engagement in treatment programs, as well as compromise future educational and vocational opportunities. This study aimed to explore whether the Early Start Denver Model (ESDM) treatment approach reduced maladaptive behaviors in preschool-aged children with ASD in a community-based long day care setting. The level of maladaptive behavior of 38 children with ASD was rated using an observation-based measure on three occasions during the intervention: on entry, 12 weeks post-entry, and on exit (post-intervention) over an average treatment duration of 11.8 months. Significant reductions were found in children's maladaptive behaviors over the course of the intervention, with 68% of children showing a treatment response by 12 weeks and 79% on exit. This change was accompanied by improvement in children's overall developmental level as assessed by the Mullen Scales of Early Learning, but not by significant changes on the Vineland Adaptive Behavior Scales-II or Social Communication Questionnaire. Replication with a larger sample, control conditions and additional measures of maladaptive behavior is necessary in order to determine the specific factors underlying these improvements; however, the findings of the present study suggest that the ESDM program may be effective in improving not only core developmental domains, but also decreasing maladaptive behaviors in preschool-aged children.

  17. A Methodological Note for the Development of Integrated Aquaculture Production Models

    Directory of Open Access Journals (Sweden)

    Stella Tsani

    2018-01-01

    Full Text Available Aquaculture production can yield significant economic, social, and environmental effects, which exceed the financial costs and benefits that aquaculture producers face. We propose a methodology for the development of integrated production models that allows for the inclusion of the socio-economic and environmental effects of aquaculture in production management. The methodology builds on a Social Cost-Benefit Analysis framework and includes three parts: (i) environmental, which captures the interactions of aquaculture with the environment; (ii) economic, which makes provision for the incorporation of economic determinants in the production models; and (iii) social, which introduces social preferences into the production and management process. Alternatives to address data availability issues are also discussed. The methodology extends the assessment of the costs and benefits of aquaculture beyond pure financial metrics and beyond the quantification of private costs and benefits. It can also support the development of integrated models of aquaculture production that take into consideration both the private and the social costs and benefits associated with externalities and effects not appropriately captured by market mechanisms. The methodology can support aquaculture management and policies targeting aquaculture production and financing that are sustainable and efficient from an economic, financial, social, and environmental point of view.

  18. Simplified life cycle assessment models: methodological framework and applications to energy pathways

    International Nuclear Information System (INIS)

    Padey, Pierryves

    2013-01-01

    The energy transition debate is a key issue for today and the coming years. One of the challenges is to limit the environmental impacts of electricity production. Decision support tools that are sufficiently accurate, simple to use, account for environmental aspects and inform future energy choices must be implemented. However, the environmental assessment of energy pathways is complex and requires characterization at two levels. The 'energy pathway' is the first level and corresponds to the pathway's environmental distribution, used to compare overall pathways. The 'system pathway' is the second level and compares the environmental impacts of systems within each pathway. We have devised a generic methodology covering both characterization levels by estimating the environmental profiles of energy pathways while allowing a simple comparison of the environmental impacts of their systems. This methodology is based on the definition of a parameterized Life Cycle Assessment model and considers, through a Global Sensitivity Analysis, the environmental impacts of a large sample of systems representative of an energy pathway. As a second step, the methodology defines simplified models based on the few key parameters identified as inducing the largest variability in the energy pathway's environmental impacts. These models assess the systems' environmental impacts in a simple way, avoiding any complex LCAs. This reduction methodology has been applied to the onshore wind power pathway in Europe and the photovoltaic pathway in France. (author)

  19. Methodology Development for SiC Sensor Signal Modelling in the Nuclear Reactor Radiation Environments

    International Nuclear Information System (INIS)

    Cetnar, J.; Krolikowski, I.P.

    2013-06-01

    This paper deals with a SiC detector simulation methodology for signal formation by neutrons and induced secondary radiation, as well as its inverse interpretation. The primary goal is to achieve a SiC capability for simultaneous spectroscopic measurements of neutrons and gamma-rays, for which an appropriate methodology for detector signal modelling and interpretation must be adopted. The detector simulation process is divided into two distinct but interconnected parts. The first is the forward simulation of detector signal formation in the field of the primary neutron and secondary radiations, whereas the second is the inverse problem of finding a representation of the primary radiation based on the measured detector signals. The methodology under development is based on the Monte Carlo description of radiation transport and analysis of the reactor physics. The methodology for SiC detector signal interpretation will build on existing experience in neutron metrology developed in the past for various neutron and gamma-ray detection systems. Since the novel SiC-based sensors are characterised by a new structure, yet to be finally designed, the methodology for spectroscopic particle fluence measurement must be developed while giving productive feedback to the SiC sensor design process, in order to arrive at the best possible design. (authors)

  20. Comparing results from multiple imputation and dynamic marginal structural models for estimating when to start antiretroviral therapy.

    Science.gov (United States)

    Shepherd, Bryan E; Liu, Qi; Mercaldo, Nathaniel; Jenkins, Cathy A; Lau, Bryan; Cole, Stephen R; Saag, Michael S; Sterling, Timothy R

    2016-10-30

    Optimal timing of initiating antiretroviral therapy has been a controversial topic in HIV research. Two highly publicized studies applied different analytical approaches, a dynamic marginal structural model and a multiple imputation method, to different observational databases and came up with different conclusions. Discrepancies between the two studies' results could be due to differences between patient populations, fundamental differences between statistical methods, or differences between implementation details. For example, the two studies adjusted for different covariates, compared different thresholds, and had different criteria for qualifying measurements. If both analytical approaches were applied to the same cohort holding technical details constant, would their results be similar? In this study, we applied both statistical approaches using observational data from 12,708 HIV-infected persons throughout the USA. We held technical details constant between the two methods and then repeated analyses varying technical details to understand what impact they had on findings. We also present results applying both approaches to simulated data. Results were similar, although not identical, when technical details were held constant between the two statistical methods. Confidence intervals for the dynamic marginal structural model tended to be wider than those from the imputation approach, although this may have been due in part to additional external data used in the imputation analysis. We also consider differences in the estimands, required data, and assumptions of the two statistical methods. Our study provides insights into assessing optimal dynamic treatment regimes in the context of starting antiretroviral therapy and in more general settings. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Early diagnosis and Early Start Denver Model intervention in autism spectrum disorders delivered in an Italian Public Health System service.

    Science.gov (United States)

    Devescovi, Raffaella; Monasta, Lorenzo; Mancini, Alice; Bin, Maura; Vellante, Valerio; Carrozzi, Marco; Colombi, Costanza

    2016-01-01

    Early diagnosis combined with an early intervention program, such as the Early Start Denver Model (ESDM), can positively influence the early natural history of autism spectrum disorders. This study evaluated the effectiveness of an early ESDM-inspired intervention, in a small group of toddlers, delivered at low intensity by the Italian Public Health System. Twenty-one toddlers at risk for autism spectrum disorders, aged 20-36 months, received 3 hours/wk of one-to-one ESDM-inspired intervention by trained therapists, combined with parents' and teachers' active engagement in ecological implementation of treatment. The mean duration of treatment was 15 months. Cognitive and communication skills, as well as severity of autism symptoms, were assessed by using standardized measures at pre-intervention (Time 0 [T0]; mean age = 27 months) and post-intervention (Time 1 [T1]; mean age = 42 months). Children made statistically significant improvements in the language and cognitive domains, as demonstrated by a series of nonparametric Wilcoxon tests for paired data. Regarding severity of autism symptoms, younger age at diagnosis was positively associated with greater improvement at post-assessment. Our results are consistent with the literature that underlines the importance of early diagnosis and early intervention, since prompt diagnosis can reduce the severity of autism symptoms and improve cognitive and language skills in younger children. Particularly in toddlers, it seems that an intervention model based on the ESDM principles, involving the active engagement of parents and nursery school teachers, may be effective even when the individual treatment is delivered at low intensity. Furthermore, our study supports the adaptation and the positive impact of the ESDM entirely sustained by the Italian Public Health System.

  2. Knowledge-based and model-based hybrid methodology for comprehensive waste minimization in electroplating plants

    Science.gov (United States)

    Luo, Keqin

    1999-11-01

    The electroplating industry, with over 10,000 plating plants nationwide, is one of the major waste generators in industry. Large quantities of wastewater, spent solvents, spent process solutions, and sludge are the major wastes generated daily in plants, which costs the industry tremendously in waste treatment and disposal and hinders its further development. It is therefore urgent for the industry to identify the most technically effective and economically attractive methodologies and technologies to minimize waste while maintaining production competitiveness. This dissertation aims at developing a novel waste minimization (WM) methodology using artificial intelligence, fuzzy logic, and fundamental knowledge in chemical engineering, together with an intelligent decision support tool. The WM methodology consists of two parts: a heuristic, knowledge-based qualitative WM decision analysis and support methodology, and a fundamental, knowledge-based quantitative process analysis methodology for waste reduction. In the former, a large number of WM strategies are represented as fuzzy rules. This becomes the main part of the knowledge base in the decision support tool, WMEP-Advisor. In the latter, various first-principles-based process dynamic models are developed. These models can characterize all three major types of operations in an electroplating plant, i.e., cleaning, rinsing, and plating. This development allows us to perform a thorough process analysis of bath efficiency, chemical consumption, wastewater generation, sludge generation, etc. Additional models are developed for quantifying drag-out and evaporation, which are critical for waste reduction. The models are validated through numerous industrial experiments in a typical plating line of an industrial partner. The unique contribution of this research is that it is the first time for the electroplating industry to (i) systematically use available WM strategies, and (ii) know quantitatively and…

  3. A coupled groundwater-flow-modelling and vulnerability-mapping methodology for karstic terrain management

    Science.gov (United States)

    Kavouri, Konstantina P.; Karatzas, George P.; Plagnes, Valérie

    2017-08-01

    A coupled groundwater-flow-modelling and vulnerability-mapping methodology for the management of karst aquifers with spatial variability is developed. The methodology takes into consideration the duality of flow and recharge in karst and introduces a simple method to integrate the effect of temporary storage in the unsaturated zone. In order to investigate the applicability of the developed methodology, simulation results are validated against available field measurement data. The criteria maps from the PaPRIKa vulnerability-mapping method are used to document the groundwater flow model. The FEFLOW model is employed for the simulation of the saturated zone of the Palaikastro-Chochlakies karst aquifer, on the island of Crete, Greece, for the hydrological years 2010-2012. The simulated water table reproduces typical karst characteristics, such as steep slopes and preferred drain axes, and is in good agreement with field observations. Selected error indicators, namely Nash-Sutcliffe efficiency (NSE), root mean squared error (RMSE) and model efficiency (E'), are within acceptable value ranges. Results indicate that different storage processes take place in different parts of the aquifer. The north-central part seems to be more sensitive to diffuse recharge, while the southern part is affected primarily by precipitation events. A sensitivity analysis is performed on the parameters of hydraulic conductivity and specific yield. The methodology is used to estimate the feasibility of artificial aquifer recharge (AAR) in the study area. Based on the developed methodology, guidelines are provided for the selection of the appropriate AAR scenario that has a positive impact on the water table.

  4. Modeling and design methodology for metal-insulator-metal plasmonic Bragg reflectors.

    Science.gov (United States)

    Hosseini, Amir; Nejati, Hamid; Massoud, Yehia

    2008-02-04

    In this paper, we present a modeling and design methodology based on characteristic impedance for plasmonic waveguides with a Metal-Insulator-Metal (MIM) configuration. Finite-Difference Time-Domain (FDTD) simulations indicate that impedance matching results in negligible reflection at discontinuities in MIM heterostructures. Leveraging the MIM impedance model, we present a general Transfer Matrix Method model for MIM Bragg reflectors and validate it against FDTD simulations. We show that periodic stacks of dielectric layers differing either in thickness or in material can achieve the same performance in terms of propagation loss and minimum transmission at the central bandgap frequency for a finite number of periods.
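
    The Transfer Matrix Method part of the paper can be sketched generically. The code below propagates 2x2 transfer matrices through a 1D quarter-wave Bragg stack at normal incidence; the real effective indices are stand-ins chosen here for the MIM characteristic impedances derived in the paper, so the numbers are illustrative only:

```python
import numpy as np

def interface(n1, n2):
    # 2x2 transfer matrix across an interface between (effective) indices
    # n1 -> n2, built from the Fresnel coefficients at normal incidence.
    r = (n1 - n2) / (n1 + n2)
    t = 2.0 * n1 / (n1 + n2)
    return np.array([[1.0, r], [r, 1.0]], dtype=complex) / t

def layer(n, d, lam):
    # Propagation through a homogeneous layer of index n and thickness d.
    phi = 2.0 * np.pi * n * d / lam
    return np.array([[np.exp(-1j * phi), 0.0], [0.0, np.exp(1j * phi)]])

def transmittance(layers, n_in, n_out, lam):
    """layers: list of (index, thickness); n_in/n_out: semi-infinite media."""
    M = np.eye(2, dtype=complex)
    n_prev = n_in
    for n, d in layers:
        M = M @ interface(n_prev, n) @ layer(n, d, lam)
        n_prev = n
    M = M @ interface(n_prev, n_out)
    t = 1.0 / M[0, 0]                     # transmission amplitude
    return (n_out / n_in) * abs(t) ** 2   # power transmittance

# Quarter-wave stack tuned to lam0 = 1550 nm, 10 periods.
lam0 = 1550e-9
n_a, n_b = 1.6, 2.4
stack = [(n_a, lam0 / (4 * n_a)), (n_b, lam0 / (4 * n_b))] * 10

for lam in (1300e-9, 1550e-9, 1800e-9):
    print(f"T({lam * 1e9:.0f} nm) = {transmittance(stack, 1.0, 1.0, lam):.3e}")
```

    At the design wavelength the transmittance collapses, reproducing the central bandgap; sweeping the layer thicknesses or indices shows the kind of equivalence the abstract notes between thickness- and material-modulated stacks.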

  5. A parameter estimation and identifiability analysis methodology applied to a street canyon air pollution model

    DEFF Research Database (Denmark)

    Ottosen, T. B.; Ketzel, Matthias; Skov, H.

    2016-01-01

    Mathematical models are increasingly used in environmental science, which increases the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model, the Operational Street Pollution Model (OSPM)… The results of the identifiability analysis showed that some model parameters were significantly more sensitive than others. The application of the determined optimal parameter values was shown to successfully equilibrate the model biases among the individual streets and species. It was likewise shown that the frequentist approach…

  6. Uncertainty in urban stormwater quality modelling: the influence of likelihood measure formulation in the GLUE methodology.

    Science.gov (United States)

    Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare

    2009-12-15

    In recent years, attention to the integrated analysis of sewer networks, wastewater treatment plants and receiving waters has been growing. However, the common lack of data in the urban water-quality field and incomplete knowledge regarding the interpretation of the main phenomena taking place in integrated urban water systems draw attention to the necessity of evaluating the reliability of model results. Uncertainty analysis can provide useful hints and information regarding the best model approach to be used, by assessing its degree of significance and reliability. Few studies deal with uncertainty assessment in the integrated urban-drainage field. In order to fill this gap, there has been a general trend towards transferring knowledge and methodologies from other fields. In this respect, the Generalised Likelihood Uncertainty Estimation (GLUE) methodology, which is widely applied in the field of hydrology, is a possible candidate for providing a solution to the above problem. However, the methodology relies on several user-defined hypotheses in the selection of a specific formulation of the likelihood measure. This paper presents a survey aimed at evaluating the influence of the likelihood measure formulation on the assessment of uncertainty in integrated urban-drainage modelling. To accomplish this objective, a home-made integrated urban-drainage model was applied to the Savena case study (Bologna, Italy). In particular, the integrated urban-drainage model uncertainty was evaluated employing different likelihood measures. The results demonstrate that the subjective selection of the likelihood measure greatly affects the GLUE uncertainty analysis.
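
    Procedurally, GLUE is simple, which is exactly why the choice of likelihood measure matters so much. The minimal sketch below uses a toy runoff model and synthetic data (not the Savena case study) and swaps between two common informal likelihoods, one Nash-Sutcliffe based and one inverse-error based:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "drainage" model: runoff = a * rain ** b (placeholder, not the
# home-made integrated model used in the paper).
def model(rain, a, b):
    return a * rain ** b

rain = np.linspace(1.0, 20.0, 30)
obs = model(rain, 0.8, 1.2) + rng.normal(0.0, 1.0, rain.size)

# 1) Monte Carlo sampling of parameter sets from uniform priors.
n = 20_000
a = rng.uniform(0.1, 2.0, n)
b = rng.uniform(0.5, 2.0, n)
sim = model(rain[None, :], a[:, None], b[:, None])     # shape (n, 30)
sse = ((sim - obs[None, :]) ** 2).sum(axis=1)

# 2) Two alternative informal likelihood measures.
var_obs = ((obs - obs.mean()) ** 2).sum()
L_nse = np.clip(1.0 - sse / var_obs, 0.0, None)  # Nash-Sutcliffe based
L_inv = 1.0 / sse                                # inverse-error based

# 3) Behavioural sets, likelihood weights, 5-95% prediction limits.
def glue_bounds(L, sim, threshold):
    keep = L > threshold
    w = L[keep] / L[keep].sum()
    lo, hi = [], []
    for j in range(sim.shape[1]):
        order = np.argsort(sim[keep, j])
        cdf = np.cumsum(w[order])
        lo.append(sim[keep, j][order][np.searchsorted(cdf, 0.05)])
        hi.append(sim[keep, j][order][np.searchsorted(cdf, 0.95)])
    return np.array(lo), np.array(hi)

lo1, hi1 = glue_bounds(L_nse, sim, 0.5)
lo2, hi2 = glue_bounds(L_inv, sim, np.quantile(L_inv, 0.95))
print("mean 90% band width, NSE-based:     ", (hi1 - lo1).mean())
print("mean 90% band width, inverse-error: ", (hi2 - lo2).mean())
```

    Even in this toy setting the two formulations give visibly different uncertainty bands, which is the paper's central point.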

  7. A methodology for least-squares local quasi-geoid modelling using a noisy satellite-only gravity field model

    Science.gov (United States)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-04-01

    The paper is about a methodology to combine a noisy satellite-only global gravity field model (GGM) with other noisy datasets to estimate a local quasi-geoid model using weighted least-squares techniques. In this way, we attempt to improve the quality of the estimated quasi-geoid model and to complement it with a full noise covariance matrix for quality control and further data processing. The methodology goes beyond the classical remove-compute-restore approach, which does not account for the noise in the satellite-only GGM. We suggest and analyse three different approaches of data combination. Two of them are based on a local single-scale spherical radial basis function (SRBF) model of the disturbing potential, and one is based on a two-scale SRBF model. Using numerical experiments, we show that a single-scale SRBF model does not fully exploit the information in the satellite-only GGM. We explain this by a lack of flexibility of a single-scale SRBF model to deal with datasets of significantly different bandwidths. The two-scale SRBF model performs well in this respect, provided that the model coefficients representing the two scales are estimated separately. The corresponding methodology is developed in this paper. Using the statistics of the least-squares residuals and the statistics of the errors in the estimated two-scale quasi-geoid model, we demonstrate that the developed methodology provides a two-scale quasi-geoid model, which exploits the information in all datasets.
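
    The weighted least-squares backbone of such a combination can be stated compactly. The rendition below is generic, under standard Gauss-Markov assumptions, with notation chosen here rather than taken from the paper:

```latex
\hat{x} = \Bigl(\sum_{i} A_i^{\mathsf T} C_i^{-1} A_i\Bigr)^{-1}
          \sum_{i} A_i^{\mathsf T} C_i^{-1} y_i ,
\qquad
C_{\hat{x}} = \Bigl(\sum_{i} A_i^{\mathsf T} C_i^{-1} A_i\Bigr)^{-1}
```

    Here each dataset $y_i$ (satellite-only GGM functionals, local gravity data, etc.) enters with its design matrix $A_i$, which links it to the SRBF coefficients $x$, and its noise covariance matrix $C_i$; the full covariance $C_{\hat{x}}$ is what gets propagated to the quasi-geoid model for quality control and further processing.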

  8. A methodology for the design of experiments in computational intelligence with multiple regression models.

    Science.gov (United States)

    Fernandez-Lozano, Carlos; Gestal, Marcos; Munteanu, Cristian R; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence, and especially on the correct comparison of the results provided by different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results differ for three out of five state-of-the-art simple datasets, and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is important to use a statistical approach to indicate whether the differences are statistically significant when using this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in comparing the results obtained in Computational Intelligence problems, as well as in other fields, such as bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.

  9. A methodology for the design of experiments in computational intelligence with multiple regression models

    Directory of Open Access Journals (Sweden)

    Carlos Fernandez-Lozano

    2016-12-01

    Full Text Available The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence, and especially on the correct comparison of the results provided by different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results differ for three out of five state-of-the-art simple datasets, and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is important to use a statistical approach to indicate whether the differences are statistically significant when using this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in comparing the results obtained in Computational Intelligence problems, as well as in other fields, such as bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.

  10. CALS and the Product State Model - Methodology and Supporting Schools and Paradigms

    DEFF Research Database (Denmark)

    Larsen, Michael Holm

    1998-01-01

    This paper addresses the preliminary considerations of a research project, initiated in February 1997, regarding Continuous Acquisition and Life-cycle Support (CALS), which is part of the activities of CALS Center Denmark. The CALS concept is presented with a focus on the Product State Model (PSM). The PSM incorporates relevant information about each stage of the production process. The paper describes the research object and the model object, and discusses part of the methodology for developing a Product State Model. The project is primarily technological; however, organisational and human aspects…

  11. Reference Controls Methodologies for Large Structural Models at Kennedy Space Center

    Science.gov (United States)

    Jones, Corey; LaPha, Steven

    2013-01-01

    This presentation will discuss the methodology behind Model Class Definition: the use of separate model assemblies of the same product, at different fidelities, driven by the same skeleton files. These fidelities can be used to convey only the detail that the end user needs, by allowing the end user to pick and choose between model classes for assembly creation. The usefulness of MPA for the Windchill Release Process will be demonstrated, along with its usefulness in minimizing CPU resources during design. The presentation will include an overview of the ML product, how to structure complex products in a flexible modular architecture, and how to maximize system resources.

  12. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning, and the emergency evacuation of large commercial shopping areas, as typical service systems, is an active research topic. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined in the context of a case study involving evacuation from a commercial shopping mall. Pedestrian movement is based on the Cellular Automata and event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns; the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of pedestrians' movement routes, the model takes into account customers' purchase intentions and pedestrian density. Based on the evacuation model combining Cellular Automata with a Dynamic Floor Field and the event-driven model, we can reflect the behavioral characteristics of customers and clerks in normal and emergency-evacuation situations. The distribution of individual evacuation time as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model combining Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
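
    A minimal sketch of the cellular-automaton core, a static floor field plus one synchronous update step, is given below. The grid, exit layout and conflict rule are simplifications chosen for illustration; the model described in the abstract adds a dynamic floor field and the event-driven customer/clerk layers on top of this:

```python
import numpy as np
from collections import deque

def static_floor_field(walls, exits):
    """BFS distance-to-exit on a grid; wall cells are impassable."""
    field = np.full(walls.shape, np.inf)
    q = deque()
    for cell in exits:
        field[cell] = 0.0
        q.append(cell)
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < walls.shape[0] and 0 <= cc < walls.shape[1]
                    and not walls[rr, cc] and field[rr, cc] == np.inf):
                field[rr, cc] = field[r, c] + 1.0
                q.append((rr, cc))
    return field

def step(agents, field, walls):
    """One synchronous update: each agent moves to its lowest-field free
    neighbour; conflicts are resolved by keeping the first claimant."""
    occupied = set(agents)
    moves = {}
    for a in agents:
        best, best_val = a, field[a]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (a[0] + dr, a[1] + dc)
            if (0 <= nb[0] < walls.shape[0] and 0 <= nb[1] < walls.shape[1]
                    and not walls[nb] and nb not in occupied
                    and field[nb] < best_val):
                best, best_val = nb, field[nb]
        if best not in moves.values():
            moves[a] = best
    return [moves.get(a, a) for a in agents]

walls = np.zeros((10, 10), dtype=bool)
field = static_floor_field(walls, exits=[(0, 5)])
agents = [(7, 2), (8, 8), (5, 5)]
for _ in range(12):
    agents = [a for a in step(agents, field, walls) if field[a] > 0]
print("agents still inside after 12 steps:", len(agents))
```

    A dynamic floor field would add a second, diffusing grid updated by agent traffic; the static BFS field above captures only the shortest-path attraction toward the exit.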

  13. 5th International Conference on Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Kacprzyk, Janusz; Ören, Tuncer; Filipe, Joaquim

    2016-01-01

    The present book includes a set of selected extended papers from the 5th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2015), held in Colmar, France, from 21 to 23 July 2015. The conference brought together researchers, engineers and practitioners interested in methodologies and applications of modeling and simulation. New and innovative solutions are reported in this book. SIMULTECH 2015 received 102 submissions, from 36 countries across all continents. After a double-blind paper review performed by the Program Committee, 19% were accepted as full papers and thus selected for oral presentation. Additional papers were accepted as short papers and posters. A further selection was made after the conference, based also on the assessment of presentation quality and audience interest, so that this book includes the extended and revised versions of the very best papers of SIMULTECH 2015. Commitment to high quality standards is a major concern of SIMULTECH…

  14. 2014 International Conference on Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Ören, Tuncer; Kacprzyk, Janusz; Filipe, Joaquim

    2015-01-01

    The present book includes a set of selected extended papers from the 4th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2014), held in Vienna, Austria, from 28 to 30 August 2014. The conference brought together researchers, engineers and practitioners interested in methodologies and applications of modeling and simulation. New and innovative solutions are reported in this book. SIMULTECH 2014 received 167 submissions, from 45 countries across all continents. After a double-blind paper review performed by the Program Committee, 23% were accepted as full papers and thus selected for oral presentation. Additional papers were accepted as short papers and posters. A further selection was made after the conference, based also on the assessment of presentation quality and audience interest, so that this book includes the extended and revised versions of the very best papers of SIMULTECH 2014. Commitment to high quality standards is a major concern of SIMULTECH…

  15. The IDEAL (Integrated Design and Engineering Analysis Languages) modeling methodology: Capabilities and Applications

    Science.gov (United States)

    Evers, Ken H.; Bachert, Robert F.

    1987-01-01

    The IDEAL (Integrated Design and Engineering Analysis Languages) modeling methodology has been formulated and applied over a five-year period. It has proven to be a unique, integrated approach utilizing a top-down, structured technique to define and document the system of interest; a knowledge engineering technique to collect and organize system descriptive information; a rapid prototyping technique to perform preliminary system performance analysis; and a sophisticated simulation technique to perform in-depth system performance analysis.

  16. Methodologies, models and parameters for environmental, impact assessment of hazardous and radioactive contaminants

    International Nuclear Information System (INIS)

    Aguero, A.; Cancio, D.; Garcia-Olivares, A.; Romero, L.; Pinedo, P.; Robles, B.; Rodriguez, J.; Simon, I.; Suanez, A.

    2003-01-01

    An Environmental Impact Assessment Methodology to assess the impact arising from contaminants present in hazardous and radioactive wastes has been developed. Taking into account background information on legislation, waste categories and contaminant inventories, and disposal, recycling and waste treatment options, an Environmental Impact Assessment Methodology (MEIA) is proposed. It is applicable to (i) several types of solid waste (hazardous, radioactive and mixed wastes); (ii) several management options (recycling, and temporary and final storage in shallow and deep disposal); and (iii) several levels of data availability. Conceptual and mathematical models and software tools needed for the application of the MEIA have been developed. Bearing in mind that this is a complex process, both the models and the tools have to be developed following an iterative approach, involving refinement of the models so as to better correspond to the described system. The selection of suitable parameters for the models is based on information derived from field and laboratory measurements and experiments, and then on applying a data elicitation protocol. An application is shown for a hypothetical shallow radioactive waste disposal facility (test case), with all the steps of the MEIA applied sequentially. In addition, the methodology is applied to an actual case of waste management for hazardous wastes from the coal fuel cycle, demonstrating several possibilities for applying the MEIA from a practical perspective. The experience obtained in the development of the work shows that the use of the MEIA for the assessment of management options for hazardous and radioactive wastes offers important advantages, simplifying the execution of the assessment, its traceability and the dissemination of assessment results to other interested parties. (Author)

  17. A Methodology for Modeling the Flow of Military Personnel Across Air Force Active and Reserve Components

    Science.gov (United States)

    2016-01-01

    …capability to estimate the historic impact of changes in economic conditions on the flows of labor into, between, and out of the Air Force active… or considered about the effect that those policies might have on personnel flows into and out of other components. The degree to which this is…

  18. Model methodology for estimating pesticide concentration extremes based on sparse monitoring data

    Science.gov (United States)

    Vecchia, Aldo V.

    2018-03-22

    This report describes a new methodology for using sparse (weekly or less frequent) and potentially highly censored pesticide monitoring data to simulate daily pesticide concentrations and associated quantities used for acute and chronic exposure assessments, such as the annual maximum daily concentration. The new methodology is based on a statistical model that expresses the log-transformed daily pesticide concentration in terms of a seasonal wave, flow-related variability, a long-term trend, and serially correlated errors. Methods are described for estimating the model parameters, generating conditional simulations of daily pesticide concentration given sparse (weekly or less frequent) and potentially highly censored observations, and estimating concentration extremes based on the conditional simulations. The model can be applied to datasets with as few as 3 years of record, as few as 30 total observations, and as few as 10 uncensored observations. The model was applied to atrazine, carbaryl, chlorpyrifos, and fipronil data for U.S. Geological Survey pesticide sampling sites with sufficient data for applying the model. A total of 112 sites were analyzed for atrazine, 38 for carbaryl, 34 for chlorpyrifos, and 33 for fipronil. The results are summarized in this report, and R functions, described in this report and provided in an accompanying model archive, can be used to fit the model parameters and generate conditional simulations of daily concentrations for use in investigations involving pesticide exposure risk and uncertainty.
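
    The model structure named in the abstract (seasonal wave, flow-related variability, long-term trend, serially correlated errors) can be rendered schematically as follows; the report's exact parameterization may differ, so treat this as a hedged paraphrase with illustrative notation:

```latex
\log C_t = \mu + S(t) + f(Q_t) + \beta t + \eta_t ,
\qquad
\eta_t = \phi \, \eta_{t-1} + \varepsilon_t ,
\quad \varepsilon_t \sim N(0, \sigma^2)
```

    where $C_t$ is the daily concentration, $S(t)$ a periodic seasonal wave (for example a low-order Fourier series), $f(Q_t)$ a function of the streamflow anomaly, $\beta t$ the long-term trend, and $\eta_t$ an AR(1) error process. Conditional simulation then draws daily series consistent with the sparse, possibly censored observations, and extremes such as the annual maximum are read off the simulated series.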

  19. Decision modelling of non-pharmacological interventions for individuals with dementia: a systematic review of methodologies

    DEFF Research Database (Denmark)

    Sopina, Liza; Sørensen, Jan

    2018-01-01

    Abstract Objectives: The main objective of this study is to conduct a systematic review to identify and discuss methodological issues surrounding decision modelling for economic evaluation of non-pharmacological interventions (NPIs) in dementia. Methods: A systematic search was conducted for published decision models… A number of challenging methodological issues were identified, including the use of the MMSE score as the main outcome measure, the limited number of strategies compared, restricted time horizons, and limited or dated data on dementia onset, progression and mortality. Only one of the three tertiary prevention studies explicitly… impact of NPIs for dementia in future decision models. It is also important to account for the effects of pharmacological therapies alongside the NPIs in economic evaluations. Access to more localised and up-to-date data on dementia onset, progression and mortality is a priority for accurate prediction.

  20. Modeling the Capacity and Emissions Impacts of Reduced Electricity Demand. Part 1. Methodology and Preliminary Results

    Energy Technology Data Exchange (ETDEWEB)

    Coughlin, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Shen, Hongxia [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Chan, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; McDevitt, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Sturges, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division

    2013-02-07

    Policies aimed at energy conservation and efficiency have broad environmental and economic impacts. Even if these impacts are relatively small, they may be significant compared to the cost of implementing the policy. Methodologies that quantify the marginal impacts of reduced demand for energy have an important role to play in developing accurate measures of both the benefits and costs of a given policy choice. This report presents a methodology for estimating the impacts of reduced demand for electricity on the electric power sector as a whole. The approach uses the National Energy Modeling System (NEMS), a mid-range energy forecast model developed and maintained by the U.S. Department of Energy, Energy Information Administration (EIA) (DOE EIA 2013). The report is organized as follows: in the rest of this section the traditional NEMS-BT approach is reviewed and an outline of the new reduced form NEMS methodology is presented. Section 2 provides an overview of how the NEMS model works, and describes the set of NEMS-BT runs that are used as input to the reduced form approach. Section 3 presents our NEMS-BT simulation results and post-processing methods. In Section 4 we show how the NEMS-BT output can be generalized to apply to a broader set of end-uses. In Section 5 we discuss the application of this approach to policy analysis, and summarize some of the issues that will be further investigated in Part 2 of this study.

  1. Energy Demand Modeling Methodology of Key State Transitions of Turning Processes

    Directory of Open Access Journals (Sweden)

    Shun Jia

    2017-04-01

    Full Text Available Energy demand modeling of machining processes is the foundation of energy optimization. The energy demand of machining state transitions is integral to the energy requirements of the machining process. However, research focusing on the energy modeling of state transitions is scarce. To fill this gap, an energy demand modeling methodology for key state transitions of the turning process is proposed. The establishment of an energy demand model of state transitions can improve the accuracy of the energy model of the machining process, which in turn provides an accurate model and reliable data for energy optimization of the machining process. Finally, case studies were conducted on a CK6153i CNC lathe, the results demonstrating that the predictive accuracy of the proposed method is generally above 90% for the state transition cases.

  2. Systematic iteration between model and methodology: A proposed approach to evaluating unintended consequences.

    Science.gov (United States)

    Morell, Jonathan A

    2017-09-18

    This article argues that evaluators could better deal with unintended consequences if they improved their methods of systematically and methodically combining empirical data collection and model building over the life cycle of an evaluation. This process would be helpful because it can increase the timespan from when the need for a change in methodology is first suspected to the time when the new element of the methodology is operational. The article begins with an explanation of why logic models are so important in evaluation, and why the utility of models is limited if they are not continually revised based on empirical evaluation data. It sets the argument within the larger context of the value and limitations of models in the scientific enterprise. Following will be a discussion of various issues that are relevant to model development and revision. What is the relevance of complex system behavior for understanding predictable and unpredictable unintended consequences, and the methods needed to deal with them? How might understanding of unintended consequences be improved with an appreciation of generic patterns of change that are independent of any particular program or change effort? What are the social and organizational dynamics that make it rational and adaptive to design programs around single-outcome solutions to multi-dimensional problems? How does cognitive bias affect our ability to identify likely program outcomes? Why is it hard to discern change as a result of programs being embedded in multi-component, continually fluctuating, settings? The last part of the paper outlines a process for actualizing systematic iteration between model and methodology, and concludes with a set of research questions that speak to how the model/data process can be made efficient and effective. Copyright © 2017. Published by Elsevier Ltd.

  3. The Starting Early Starting Smart Story.

    Science.gov (United States)

    Casey Family Programs, Seattle, WA.

    Starting Early Starting Smart (SESS) is an early childhood public/private initiative designed to identify new, empirical knowledge about the effectiveness of integrating substance abuse prevention, addictions treatment, and mental health services with primary health care and childcare service settings (e.g., Head Start, day care, preschool) to…

  4. Methodology for transition probabilities determination in a Markov decision processes model for quality-accuracy management

    Directory of Open Access Journals (Sweden)

    Mitkovska-Trendova Katerina

    2014-01-01

    Full Text Available The main goal of the presented research is to define a methodology for determining the transition probabilities in a Markov Decision Process, illustrated by the optimization of quality accuracy through optimization of its main measure (the percent of scrap) in a Performance Measurement System. This research had two main driving forces. First, today's urge to introduce more robust, mathematically founded methods and tools in different enterprise areas, including PMSs. Second, since Markov Decision Processes were chosen as such a tool, certain shortcomings of this approach had to be handled; the calculation of the transition probabilities is precisely one of the weak points of Markov Decision Processes. The proposed methodology for calculating the transition probabilities is based on recorded historical data, and the probabilities are calculated for each possible transition from the state after one run to the state after the following run of the influential factor (e.g., a machine). The methodology encompasses several steps: collecting different data connected to the percent of scrap and processing them according to the needs of the methodology; determining the limits of the states for every influential factor; classifying the data from real batches according to the determined states; and calculating the transition probabilities from one state to another for every action. The implementation of the Markov Decision Process model with the proposed methodology for calculating the transition probabilities resulted in an optimal policy that showed significant differences in the percent of scrap compared to the real situation, in which the optimization of the percent of scrap was done heuristically (5.2107% versus 13.5928%).
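
    The counting-and-normalising core of the proposed methodology can be sketched as follows. The state limits, the action set and the history records are illustrative placeholders invented here, not the paper's production data:

```python
import numpy as np

# Illustrative state bounds for percent of scrap (four states).
bins = [0.0, 5.0, 10.0, 20.0, 100.0]

def state_of(scrap_pct):
    # Classify a run's percent of scrap into a discrete state 0..3.
    return int(np.digitize(scrap_pct, bins) - 1)

# Historical records: (action_taken, scrap_before_run, scrap_after_run).
history = [
    ("adjust_tooling", 12.0, 4.5), ("adjust_tooling", 8.0, 6.1),
    ("no_action", 4.0, 4.8), ("no_action", 6.5, 11.2),
    ("adjust_tooling", 18.0, 9.0), ("no_action", 11.0, 14.0),
]

actions = sorted({a for a, _, _ in history})
n_states = len(bins) - 1
counts = {a: np.zeros((n_states, n_states)) for a in actions}

for action, before, after in history:
    counts[action][state_of(before), state_of(after)] += 1.0

# Row-normalize counts into transition probability matrices P(s' | s, a).
P = {}
for a in actions:
    row_sums = counts[a].sum(axis=1, keepdims=True)
    P[a] = np.divide(counts[a], row_sums, out=np.zeros_like(counts[a]),
                     where=row_sums > 0)
    print(a, "\n", P[a])
```

    Rows with no observed transitions remain all-zero and need either more historical data or a prior; the resulting matrices P(s' | s, a) are exactly what a Markov Decision Process solver consumes.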

  5. Computational fluid dynamics analysis of an innovative start-up method of high temperature fuel cells using dynamic 3d model

    Directory of Open Access Journals (Sweden)

    Kupecki Jakub

    2017-03-01

    Full Text Available The article presents a numerical analysis of an innovative method for starting systems based on high temperature fuel cells. The possibility of preheating the fuel cell stacks from the cold state to the nominal working conditions encounters several limitations related to heat transfer and the stability of materials. The lack of rapid and safe start-up methods limits the proliferation of MCFCs and SOFCs. For that reason, an innovative method was developed and verified using the numerical analysis presented in the paper. A dynamic 3D model was developed that enables thermo-fluidic investigations and the determination of measures for shortening the preheating time of high temperature fuel cell stacks. The model was implemented in the ANSYS Fluent computational fluid dynamics (CFD) software and was used for verification of the proposed start-up method. The SOFC was chosen as the reference fuel cell technology for the study. Results obtained from the study are presented and discussed.

  6. Concepts and methodologies for modeling and simulation a tribute to Tuncer Oren

    CERN Document Server

    Yilmaz, Levent

    2015-01-01

    This comprehensive text/reference presents cutting-edge advances in the theory and methodology of modeling and simulation (M&S), and reveals how this work has been influenced by the fundamental contributions of Professor Tuncer Ören to this field. Exploring the synergies among the domains of M&S and systems engineering (SE), the book describes how M&S and SE can help to address the complex problems identified as "Grand Challenges" more effectively under a model-driven and simulation-directed systems engineering framework. Topics and features: examines frameworks for the development of advan…

  7. Methodologies for Verification and Validation of Space Launch System (SLS) Structural Dynamic Models: Appendices

    Science.gov (United States)

    Coppolino, Robert N.

    2018-01-01

    Verification and validation (V&V) is a highly challenging undertaking for SLS structural dynamics models due to the magnitude and complexity of SLS subassemblies. Responses to challenges associated with V&V of Space Launch System (SLS) structural dynamics models are presented in Volume I of this paper. Four methodologies addressing specific requirements for V&V are discussed: (1) Residual Mode Augmentation (RMA); (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976); (3) Mode Consolidation (MC); and finally (4) Experimental Mode Verification (EMV). This document contains the appendices to Volume I.

  8. A “Model to Model” Collaborative Perception Methodology for Distributed Design

    Directory of Open Access Journals (Sweden)

    Neng Wan

    2014-06-01

    Full Text Available To solve the problem of collaborative engineering changes to models distributed across heterogeneous design platforms, a "model to model" perception methodology is proposed in this paper. A self-managing collaborative architecture is presented, built on a peer-to-peer architecture and a multiagent system. The network-address correlation between heterogeneous platforms is established by the perception router ontology. In the same way, the correlation between design models is described by the feature relation ontology, and design changes are encapsulated by the model modification ontology. On top of these ontologies, a design change search method is devised to catch geometric changes, an influence search method is proposed to discover the affected design features, and a design change adapting method is used to preserve correlation coherence after perception. Through this work, the conventional mode of design perception among designers is transformed into direct perception among models.

  9. SHARING ON WEB 3D MODELS OF ANCIENT THEATRES. A METHODOLOGICAL WORKFLOW

    Directory of Open Access Journals (Sweden)

    A. Scianna

    2016-06-01

    Full Text Available In the last few years, the need to share knowledge of Cultural Heritage (CH) on the Web through navigable 3D models has increased. This need requires the availability of Web-based virtual reality systems and 3D WebGIS. In order to make the information available to all stakeholders, these instruments should be powerful and at the same time very user-friendly. However, research and experiments carried out so far show that a standardized methodology does not exist. All this is due both to the complexity and size of the geometric models to be published, on the one hand, and to the excessive costs of hardware and software tools, on the other. Against this background, the paper describes a methodological approach for creating 3D models of CH, freely exportable on the Web, based on HTML5 and free and open source software. HTML5, supporting the WebGL standard, allows the exploration of 3D spatial models using the most widely used Web browsers, such as Chrome, Firefox, Safari and Internet Explorer. The methodological workflow described here has been tested in the construction of a multimedia geospatial platform developed for the three-dimensional exploration and documentation of the ancient theatres of Segesta and of Carthage, and the surrounding landscapes. The experimental application has allowed us to explore the potential and limitations of sharing 3D CH models on the Web based on the WebGL standard. Sharing capabilities could be extended by defining suitable geospatial Web services based on the capabilities of HTML5 and WebGL technology.

  10. Modeling of Throughput in Production Lines Using Response Surface Methodology and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Federico Nuñez-Piña

    2018-01-01

    Full Text Available The problem of assigning buffers in a production line to obtain an optimum production rate is a combinatorial problem of the NP-hard type, known as the Buffer Allocation Problem. It is of great importance to designers of production systems due to the costs involved in terms of space requirements. In this work, the relationship among the number of buffer slots, the number of workstations, and the production rate is studied. Response surface methodology and an artificial neural network were used to develop predictive models to find optimal throughput values. 360 production rate values for different numbers of buffer slots and workstations were used to obtain a fourth-order mathematical model and a four-hidden-layer artificial neural network. Both models perform well in predicting the throughput, although the artificial neural network model shows a better fit (R = 1.0000) than the response surface methodology (R = 0.9996). Moreover, the artificial neural network produces better predictions for data not utilized in the models' construction. Finally, this study can be used as a guide to forecast the maximum or near-maximum throughput of production lines, taking into account the buffer size and the number of machines in the line.
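
    The two surrogate models compared in the study can be reproduced generically with scikit-learn. The synthetic data below merely mimics a saturating throughput response and is not the authors' 360-point dataset; the degree-4 polynomial and the four hidden layers mirror the abstract:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(7)

# Synthetic stand-in: throughput as a saturating function of buffer slots
# and number of workstations, plus noise (illustrative only).
n = 360
X = np.column_stack([rng.integers(1, 31, n),    # buffer slots
                     rng.integers(2, 12, n)])   # workstations
y = (1.0 - np.exp(-0.15 * X[:, 0])) * (1.0 + 0.05 * X[:, 1]) \
    + rng.normal(0.0, 0.01, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fourth-order response surface: polynomial features + least squares.
rsm = make_pipeline(PolynomialFeatures(degree=4), LinearRegression())
rsm.fit(X_tr, y_tr)

# Artificial neural network with four hidden layers.
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32, 32, 32),
                                 solver="lbfgs", max_iter=5000,
                                 random_state=0))
ann.fit(X_tr, y_tr)

print("RSM R^2 on held-out data:", round(rsm.score(X_te, y_te), 4))
print("ANN R^2 on held-out data:", round(ann.score(X_te, y_te), 4))
```

    Scoring on held-out data, as the abstract does for data not used in model construction, is the honest comparison; training-set R values alone tend to flatter the more flexible model.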

  11. A modelling methodology for assessing the impact of climate variability and climatic change on hydroelectric generation

    International Nuclear Information System (INIS)

    Munoz, J.R.; Sailor, D.J.

    1998-01-01

    A new methodology relating basic climatic variables to hydroelectric generation was developed. The methodology can be implemented in large or small basins with any number of hydro plants. The method was applied to the Sacramento, Eel and Russian river basins in northern California, where more than 100 hydroelectric plants are located. The final model predicts the availability of hydroelectric generation for the entire basin, given present and recent-past climate conditions, with about 90% accuracy. The results can be used for water management purposes or for analyzing the effect of climate variability on hydro-generation availability in the basin. A wide range of results can be obtained depending on the climate change scenario used. (Author)

  12. Mathematical Methodology for New Modeling of Water Hammer in Emergency Core Cooling System

    International Nuclear Information System (INIS)

    Lee, Seungchan; Yoon, Dukjoo; Ha, Sangjun

    2013-01-01

    From an engineering standpoint, the study of water hammer has been carried out through experimental work and fluid mechanics. In this study, a new methodological approach is introduced based on Newtonian mechanics and a mathematical method. In addition, NRC Generic Letter 2008-01 requires nuclear power plant operators to evaluate the effect of water hammer for the protection of the pipes of the Emergency Core Cooling System, which is related to the Residual Heat Removal System and the Containment Spray System. This paper includes the modeling, the derivation of the mathematical equations, and a comparison with other experimental work. To analyze the effect of water hammer, this mathematical methodology is applied. The results are in good agreement with other experimental results. The method is very efficient for explaining the water-hammer phenomenon.

  13. [Methodological novelties applied to the anthropology of food: agent-based models and social networks analysis].

    Science.gov (United States)

    Díaz Córdova, Diego

    2016-01-01

    The aim of this article is to introduce two methodological strategies that have not often been utilized in the anthropology of food: agent-based models and social networks analysis. In order to illustrate these methods in action, two cases based in materials typical of the anthropology of food are presented. For the first strategy, fieldwork carried out in Quebrada de Humahuaca (province of Jujuy, Argentina) regarding meal recall was used, and for the second, elements of the concept of "domestic consumption strategies" applied by Aguirre were employed. The underlying idea is that, given that eating is recognized as a "total social fact" and, therefore, as a complex phenomenon, the methodological approach must also be characterized by complexity. The greater the number of methods utilized (with the appropriate rigor), the better able we will be to understand the dynamics of feeding in the social environment.

  14. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

    Science.gov (United States)

    Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A

    2010-05-01

    Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for…

  15. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    International Nuclear Information System (INIS)

    Lahtinen, J.; Launiainen, T.; Heljanko, K.; Ropponen, J.

    2012-01-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)

  16. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    Energy Technology Data Exchange (ETDEWEB)

    Lahtinen, J. [VTT Technical Research Centre of Finland, Espoo (Finland); Launiainen, T.; Heljanko, K.; Ropponen, J. [Aalto Univ., Espoo (Finland). Dept. of Information and Computer Science

    2012-07-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)

  17. Statistical methodology for discrete fracture model - including fracture size, orientation uncertainty together with intensity uncertainty and variability

    International Nuclear Information System (INIS)

    Darcel, C.; Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O.

    2009-11-01

    …starting point, we built Statistical Fracture Domains whose significance relies exclusively on fracturing statistics, not explicitly including the current Fracture Domains or the closeness between one borehole section and another. Theoretical developments are proposed in order to incorporate the orientation uncertainty and the fracturing variability into a resulting uncertainty on the parent distribution density. When applied to both sites, it emerges that variability prevails over uncertainty, thus validating the good level of data accuracy. Moreover, this allows a possible range of variation around the mean densities to be defined. Finally, a sorting algorithm is developed to provide, from the initial elementary bricks mentioned above, a division of a site into Statistical Fracture Domains whose internal variability is reduced. The systematic comparison is based on the division of the datasets according to several densities, referring to a division of the orientations into 13 subsets (pole zones). The first application of the methodology shows that some main trends can be defined for the orientation/density distributions throughout the site, combined with a high degree of overlap. Moreover, the final Statistical Fracture Domain definition differs from the Fracture Domains existing at the site. The SFDs provide an objective comparison of statistical fracturing properties. Several perspectives are proposed in order to bridge the gap between the constraints brought by relevant statistical modeling and the modeling specificities of the SKB sites and, more generally, the conditions inherent to geological models.

  18. 3rd International Conference on Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Koziel, Slawomir; Kacprzyk, Janusz; Leifsson, Leifur; Ören, Tuncer

    2015-01-01

    This book includes extended and revised versions of a set of selected papers from the 3rd International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2013), which was co-organized by Reykjavik University (RU) and sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC). SIMULTECH 2013 was held in cooperation with the ACM SIGSIM - Special Interest Group (SIG) on SImulation and Modeling (SIM), Movimento Italiano Modellazione e Simulazione (MIMOS) and the AIS Special Interest Group on Modeling and Simulation (AIS SIGMAS), and was technically co-sponsored by the Society for Modeling & Simulation International (SCS), Liophant Simulation, Simulation Team and the International Federation for Information Processing (IFIP). These proceedings bring together researchers, engineers, applied mathematicians and practitioners working on advances and applications in the field of system simulation.

  19. Power Prediction Model for Turning EN-31 Steel Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    M. Hameedullah

    2010-01-01

    Full Text Available Power consumption in turning EN-31 steel (a material that is most extensively used in the automotive industry) with a tungsten carbide tool under different cutting conditions was experimentally investigated. The experimental runs were planned according to a 2^4 + 8 added-centre-point factorial design of experiments, replicated thrice. The collected data were statistically analyzed using the Analysis of Variance technique, and first-order and second-order power consumption prediction models were developed using response surface methodology (RSM). It is concluded that the second-order model is more accurate than the first-order model and fits the experimental data well. The model can be used in the automotive industry for deciding the cutting parameters for minimum power consumption and hence maximum productivity.

  20. Topobathymetric elevation model development using a new methodology: Coastal National Elevation Database

    Science.gov (United States)

    Danielson, Jeffrey J.; Poppenga, Sandra K.; Brock, John C.; Evans, Gayla A.; Tyler, Dean; Gesch, Dean B.; Thatcher, Cindy A.; Barras, John

    2016-01-01

    During the coming decades, coastlines will respond to widely predicted sea-level rise, storm surge, and coastal inundation flooding from disastrous events. Because physical processes in coastal environments are controlled by the geomorphology of over-the-land topography and underwater bathymetry, many applications of geospatial data in coastal environments require detailed knowledge of the near-shore topography and bathymetry. In this paper, an updated methodology used by the U.S. Geological Survey Coastal National Elevation Database (CoNED) Applications Project is presented for developing coastal topobathymetric elevation models (TBDEMs) from multiple topographic data sources with adjacent intertidal topobathymetric and offshore bathymetric sources to generate seamlessly integrated TBDEMs. This repeatable, updatable, and logically consistent methodology assimilates topographic data (land elevation) and bathymetry (water depth) into a seamless coastal elevation model. Within the overarching framework, vertical datum transformations are standardized in a workflow that interweaves spatially consistent interpolation (gridding) techniques with a land/water boundary mask delineation approach. Output gridded raster TBDEMs are stacked into a file storage system of mosaic datasets within an Esri ArcGIS geodatabase for efficient updating while maintaining current and updated spatially referenced metadata. Topobathymetric data provide a required seamless elevation product for several science application studies, such as shoreline delineation, coastal inundation mapping, sediment transport, sea-level rise, storm surge models, and tsunami impact assessment. These detailed coastal elevation data are critical to depict regions prone to climate change impacts and are essential to planners and managers responsible for mitigating the associated risks and costs to both human communities and ecosystems. The CoNED methodology approach has been used to construct integrated TBDEM models.

  1. Phoenix – A model-based Human Reliability Analysis methodology: Qualitative Analysis Procedure

    International Nuclear Information System (INIS)

    Ekanem, Nsimah J.; Mosleh, Ali; Shen, Song-Hua

    2016-01-01

    The Phoenix method is an attempt to address various issues in the field of Human Reliability Analysis (HRA). Built on a cognitive human response model, Phoenix incorporates strong elements of current HRA good practices, leverages lessons learned from empirical studies, and takes advantage of the best features of existing and emerging HRA methods. Its original framework was introduced in previous publications. This paper reports on the completed methodology, summarizing the steps and techniques of its qualitative analysis phase. The methodology introduces the “Crew Response Tree”, which provides a structure for capturing the context associated with Human Failure Events (HFEs), including errors of omission and commission. It also uses a team-centered version of the Information, Decision and Action cognitive model and “macro-cognitive” abstractions of crew behavior, as well as relevant findings from the cognitive psychology literature and operating experience, to identify potential causes of failures and influencing factors during procedure-driven and knowledge-supported crew-plant interactions. The result is the set of identified HFEs and the likely scenarios leading to each. The methodology itself is generic in the sense that it is compatible with various quantification methods, and can be adapted for use across different environments including nuclear, oil and gas, aerospace, aviation, and healthcare. - Highlights: • Produces a detailed, consistent, traceable, reproducible and properly documented HRA. • Uses the “Crew Response Tree” to capture the context associated with Human Failure Events. • Models dependencies between Human Failure Events and influencing factors. • Provides a human performance model for relating context to performance. • Provides a framework for relating Crew Failure Modes to their influencing factors.

  2. DATA MINING METHODOLOGY FOR DETERMINING THE OPTIMAL MODEL OF COST PREDICTION IN SHIP INTERIM PRODUCT ASSEMBLY

    Directory of Open Access Journals (Sweden)

    Damir Kolich

    2016-03-01

    Full Text Available In order to accurately predict the costs of the thousands of interim products that are assembled in shipyards, it is currently necessary to use skilled engineers to develop detailed Gantt charts for each interim product separately, which takes many hours. It is therefore helpful to develop a prediction tool that estimates the cost of interim products accurately and quickly without the need for skilled engineers; this will drive down shipyard costs and improve competitiveness. Data mining is used extensively for developing prediction models in other industries. Since ships consist of thousands of interim products, it is logical to develop a data mining methodology for a shipyard or any other manufacturing industry where interim products are produced. The methodology involves analysis of existing interim products and data collection. Pre-processing and principal component analysis are performed to make the data “user-friendly” for later prediction processing and the development of models that are both accurate and robust. The support vector machine is demonstrated to be the better model when the number of tuples is low; however, as the number of tuples increases beyond about 10,000, the artificial neural network model is recommended.

  3. Methodology of synchronization among strategy and operation. A standards-based modeling approach

    Directory of Open Access Journals (Sweden)

    VICTOR EDWIN COLLAZOS

    2017-05-01

    Full Text Available Enterprise Architecture (EA) has gained importance in recent years, mainly for its concept of “alignment” between the strategic and operational levels of organizations. Such alignment occurs when Information Technology (IT) is applied correctly and in a timely manner, working in synergy and harmony with strategy and operations to achieve their mutual goals and satisfy organizational needs. Both the strategic and operational levels have standards that help model the elements necessary to obtain the desired results. In this sense, BMM and BPMN were selected because both have the support of the OMG and are fairly well known for modelling the strategic and operational levels, respectively. In addition, i* goal modeling can be used to reduce the gap between these two standards. This proposal may help both with the high-level design of the information system and with the appropriate identification of the business processes that will support it. This paper presents a methodology for aligning strategy and operations based on standards and heuristics. We have made a classification of the elements of the models and, for some specific cases, an extension of the heuristics associated between them. This allows us to propose a methodology that uses the above-mentioned standards and combines mappings, transformations and actions to be considered in the alignment process.

  4. Capturing complexity in work disability research: application of system dynamics modeling methodology.

    Science.gov (United States)

    Jetha, Arif; Pransky, Glenn; Hettinger, Lawrence J

    2016-01-01

    Work disability (WD) is characterized by variable and occasionally undesirable outcomes. The underlying determinants of WD outcomes include patterns of dynamic relationships among health, personal, organizational and regulatory factors that have been challenging to characterize, and inadequately represented by contemporary WD models. System dynamics modeling (SDM) methodology applies a sociotechnical systems thinking lens to view WD systems as comprising a range of influential factors linked by feedback relationships. SDM can potentially overcome limitations in contemporary WD models by uncovering causal feedback relationships, and conceptualizing dynamic system behaviors. It employs a collaborative and stakeholder-based model building methodology to create a visual depiction of the system as a whole. SDM can also enable researchers to run dynamic simulations to provide evidence of anticipated or unanticipated outcomes that could result from policy and programmatic intervention. SDM may advance rehabilitation research by providing greater insights into the structure and dynamics of WD systems while helping to understand inherent complexity. Challenges related to data availability, determining validity, and the extensive time and technical skill requirements for model building may limit SDM's use in the field and should be considered. Contemporary work disability (WD) models provide limited insight into complexity associated with WD processes. System dynamics modeling (SDM) has the potential to capture complexity through a stakeholder-based approach that generates a simulation model consisting of multiple feedback loops. SDM may enable WD researchers and practitioners to understand the structure and behavior of the WD system as a whole, and inform development of improved strategies to manage straightforward and complex WD cases.

  5. Probabilistic risk assessment modeling of digital instrumentation and control systems using two dynamic methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Aldemir, T., E-mail: aldemir.1@osu.ed [Ohio State University, Nuclear Engineering Program, Columbus, OH 43210 (United States); Guarro, S. [ASCA, Inc., 1720 S. Catalina Avenue, Suite 220, Redondo Beach, CA 90277-5501 (United States); Mandelli, D. [Ohio State University, Nuclear Engineering Program, Columbus, OH 43210 (United States); Kirschenbaum, J. [Ohio State University, Department of Computer Science and Engineering, Columbus, OH 43210 (United States); Mangan, L.A. [Ohio State University, Nuclear Engineering Program, Columbus, OH 43210 (United States); Bucci, P. [Ohio State University, Department of Computer Science and Engineering, Columbus, OH 43210 (United States); Yau, M. [ASCA, Inc., 1720 S. Catalina Avenue, Suite 220, Redondo Beach, CA 90277-5501 (United States); Ekici, E. [Ohio State University, Department of Electrical and Computer Engineering, Columbus, OH 43210 (United States); Miller, D.W.; Sun, X. [Ohio State University, Nuclear Engineering Program, Columbus, OH 43210 (United States); Arndt, S.A. [U.S. Nuclear Regulatory Commission, Washington, DC 20555-0001 (United States)

    2010-10-15

    The Markov/cell-to-cell mapping technique (CCMT) and the dynamic flowgraph methodology (DFM) are two system logic modeling methodologies that have been proposed to address the dynamic characteristics of digital instrumentation and control (I and C) systems and provide risk-analytical capabilities that supplement those provided by traditional probabilistic risk assessment (PRA) techniques for nuclear power plants. Both methodologies utilize a discrete-state, multi-valued logic representation of the digital I and C system. For probabilistic quantification purposes, both techniques require the estimation of the probabilities of basic system failure modes, including digital I and C software failure modes, that appear in the prime implicants identified as contributors to a given system event of interest. As in any other system modeling process, the accuracy and predictive value of the models produced by the two techniques depend not only on the intrinsic features of the modeling paradigm, but also, to a considerable extent, on the information and knowledge available to the analyst concerning the system behavior and operation rules under normal and off-nominal conditions, and the associated controlled/monitored process dynamics. The application of the two methodologies is illustrated using a digital feedwater control system (DFWCS) similar to that of an operating pressurized water reactor. This application was carried out to demonstrate how the use of either technique, or both, can facilitate the updating of an existing nuclear power plant PRA model following an upgrade of the instrumentation and control system from analog to digital. Because of scope limitations, the focus of the demonstration of the methodologies was intentionally limited to aspects of digital I and C system behavior for which probabilistic data were on hand or could be generated within the existing project bounds of time and resources. The data used in the probabilistic quantification portion of the

  6. Using a Negative Binomial Regression Model for Early Warning at the Start of a Hand Foot Mouth Disease Epidemic in Dalian, Liaoning Province, China.

    Science.gov (United States)

    An, Qingyu; Wu, Jun; Fan, Xuesong; Pan, Liyang; Sun, Wei

    2016-01-01

    Hand, foot and mouth disease (HFMD) is a human syndrome caused by intestinal viruses such as coxsackievirus A16 and enterovirus 71, and it easily develops into outbreaks in kindergartens and schools. Scientific and accurate early detection of the start time of an HFMD epidemic is a key principle in planning control measures and minimizing the impact of HFMD. The objective of this study was to establish a reliable early detection model for the start timing of HFMD epidemics in Dalian and to evaluate the performance of the model by analyzing its detection sensitivity. A negative binomial regression model was used to estimate the weekly baseline case number of HFMD, and the optimal alerting threshold was identified among the tested threshold values for epidemic and non-epidemic years. The circular distribution method was used to calculate the gold standard for the start timing of an HFMD epidemic. From 2009 to 2014, a total of 62022 HFMD cases were reported (36879 males and 25143 females) in Dalian, Liaoning Province, China, including 15 fatal cases. The median age of the patients was 3 years. The incidence rate in epidemic years ranged from 137.54 to 231.44 per 100,000 population; the incidence rate in non-epidemic years was lower than 112 per 100,000 population. The negative binomial regression model with an AIC value of 147.28 was finally selected to construct the baseline level. Threshold values of 100 for epidemic years and 50 for non-epidemic years had the highest sensitivity (100%) in both retrospective and prospective early warning, and the detection time was 2 weeks before the actual start of the HFMD epidemic. The negative binomial regression model could thus provide early warning of the start of an HFMD epidemic with good sensitivity and appropriate detection time in Dalian.

  7. Methodologies for evaluating performance and assessing uncertainty of atmospheric dispersion models

    Science.gov (United States)

    Chang, Joseph C.

    This thesis describes methodologies to evaluate the performance and to assess the uncertainty of atmospheric dispersion models, tools that predict the fate of gases and aerosols upon their release into the atmosphere. Because of the large economic and public-health impacts often associated with the use of dispersion model results, these models should be properly evaluated, and their uncertainty should be properly accounted for and understood. The CALPUFF, HPAC, and VLSTRACK dispersion modeling systems were applied to the Dipole Pride (DP26) field data (~20 km in scale) in order to demonstrate the evaluation and uncertainty assessment methodologies. Dispersion model performance was found to be strongly dependent on the wind models used to generate gridded wind fields from observed station data. This is because, despite the fact that the test site was a flat area, the observed surface wind fields still showed considerable spatial variability, partly because of the surrounding mountains. It was found that the two components, variability and uncertainty, were comparable for the DP26 field data, with variability more important than uncertainty closer to the source, and less important farther away from the source. Therefore, reducing data errors for input meteorology may not necessarily increase model accuracy, owing to random turbulence. DP26 was a research-grade field experiment, where the source, meteorological, and concentration data were all well measured. Another typical application of dispersion modeling is a forensic study where the data are usually quite scarce. An example would be the modeling of the alleged releases of chemical warfare agents during the 1991 Persian Gulf War, where the source data had to rely on intelligence reports, and where Iraq had stopped reporting weather data to the World Meteorological Organization since the 1981 Iran-Iraq war. Therefore the meteorological fields inside Iraq must be estimated by models such as prognostic mesoscale meteorological models, based on

  8. New methodologies for calculation of flight parameters on reduced scale wings models in wind tunnel =

    Science.gov (United States)

    Ben Mosbah, Abdallah

    In order to improve the quality of wind tunnel tests, and of the tools used to perform aerodynamic tests on aircraft wings in the wind tunnel, new methodologies were developed and tested on rigid and flexible wing models. The flexible wing concept consists in replacing a portion (lower and/or upper) of the skin with a flexible portion whose shape can be changed using an actuation system installed inside the wing. The main purpose of this concept is to improve the aerodynamic performance of the aircraft, and especially to reduce the fuel consumption of the airplane. Numerical and experimental analyses were conducted to develop and test the methodologies proposed in this thesis. To control the flow inside the test section of the Price-Paidoussis wind tunnel at LARCASE, numerical and experimental analyses were performed. Computational fluid dynamics calculations were made in order to obtain a database used to develop a new hybrid methodology for wind tunnel calibration; this approach allows the flow in the test section of the Price-Paidoussis wind tunnel to be controlled. For the fast determination of aerodynamic parameters, new hybrid methodologies were proposed. These methodologies were used to control flight parameters through the calculation of the drag, lift and pitching moment coefficients and of the pressure distribution around an airfoil. These aerodynamic coefficients were calculated from known airflow conditions such as the angle of attack and the Mach and Reynolds numbers. In order to modify the shape of the wing skin, electric actuators were installed inside the wing to obtain the desired shape. These deformations provide optimal profiles for different flight conditions in order to reduce fuel consumption. A controller based on neural networks was implemented to obtain the desired actuator displacements. A metaheuristic algorithm was used in hybridization with neural networks, and support vector machine approaches and their

  9. Frescoed Vaults: Accuracy Controlled Simplified Methodology for Planar Development of Three-Dimensional Textured Models

    Directory of Open Access Journals (Sweden)

    Marco Giorgio Bevilacqua

    2016-03-01

    Full Text Available In the field of documentation and preservation of cultural heritage, there is keen interest in 3D metric viewing and rendering of architecture, for both formal appearance and color. On the other hand, the operative steps of restoration interventions still require full-scale, 2D metric surface representations. The transition from 3D to 2D representation, with the related geometric transformations, has not yet been fully formalized for the planar development of frescoed vaults. Methodologies proposed so far on this subject involve transitioning from point cloud models to ideal mathematical surfaces and projecting textures using software tools. The methodology used for geometry and texture development in the present work does not require any dedicated software. The different processing steps can be individually checked for any error introduced, which can then be quantified. A direct accuracy check of the planar development of the frescoed surface has been carried out by qualified restorers, yielding a result of 3 mm. The proposed methodology, although requiring further studies to improve automation of the different processing steps, allowed extracting 2D drafts fully usable by the operators restoring the vault frescoes.

  10. A novel methodology improves reservoir characterization models using geologic fuzzy variables

    Energy Technology Data Exchange (ETDEWEB)

    Soto B, Rodolfo [DIGITOIL, Maracaibo (Venezuela); Soto O, David A. [Texas A and M University, College Station, TX (United States)

    2004-07-01

    One of the research projects carried out in the Cusiana field to explain its rapid decline during recent years aimed to obtain better permeability models. The reservoir of this field has a complex layered system that is not easy to model using conventional methods. The new technique included the development of porosity and permeability maps from cored wells following the trend of the sand depositions for each facies or layer, according to the sedimentary facies and depositional system models. Then, we used fuzzy logic to reproduce those maps in three dimensions as geologic fuzzy variables. After multivariate statistical and factor analyses, we found independence and a good correlation coefficient between the geologic fuzzy variables and core permeability and porosity. This means that the geologic fuzzy variable could explain the fabric, the grain size and the pore geometry of the reservoir rock throughout the field. Finally, we developed a neural network permeability model using porosity, gamma ray and the geologic fuzzy variable as input variables. This model has a cross-correlation coefficient of 0.873 and an average absolute error of 33%, compared with the previous model's correlation coefficient of 0.511 and absolute error greater than 250%. We tested different methodologies, and this new one proved to be a promising way to obtain better permeability models. The use of the models has had a high impact on the explanation of well performance and workovers, and on reservoir simulation models. (author)

  11. Boolean modeling in systems biology: an overview of methodology and applications

    International Nuclear Information System (INIS)

    Wang, Rui-Sheng; Albert, Réka; Saadatpour, Assieh

    2012-01-01

    Mathematical modeling of biological processes provides deep insights into complex cellular systems. While quantitative and continuous models such as differential equations have been widely used, their use is obstructed in systems wherein the knowledge of mechanistic details and kinetic parameters is scarce. On the other hand, a wealth of molecular level qualitative data on individual components and interactions can be obtained from the experimental literature and high-throughput technologies, making qualitative approaches such as Boolean network modeling extremely useful. In this paper, we build on our research to provide a methodology overview of Boolean modeling in systems biology, including Boolean dynamic modeling of cellular networks, attractor analysis of Boolean dynamic models, as well as inferring biological regulatory mechanisms from high-throughput data using Boolean models. We finally demonstrate how Boolean models can be applied to perform the structural analysis of cellular networks. This overview aims to acquaint life science researchers with the basic steps of Boolean modeling and its applications in several areas of systems biology. (paper)

  12. Modeling companion diagnostics in economic evaluations of targeted oncology therapies: systematic review and methodological checklist.

    Science.gov (United States)

    Doble, Brett; Tan, Marcus; Harris, Anthony; Lorgelly, Paula

    2015-02-01

    The successful use of a targeted therapy is intrinsically linked to the ability of a companion diagnostic to correctly identify the patients most likely to benefit from treatment. The aim of this study was to review the characteristics of companion diagnostics that are important to include in an economic evaluation. Approaches for including these characteristics in model-based economic evaluations are compared with the intent of describing best-practice methods. Five databases and government agency websites were searched to identify model-based economic evaluations comparing a companion diagnostic and subsequent treatment strategy to another alternative treatment strategy, with model parameters for the sensitivity and specificity of the companion diagnostic (primary synthesis). Economic evaluations that limited the model parameters for the companion diagnostic to only its cost were also identified (secondary synthesis). Quality was assessed using the Quality of Health Economic Studies instrument. Thirty studies were included in the review (primary synthesis n = 12; secondary synthesis n = 18). Incremental cost-effectiveness ratios may be lower when the only parameter for the companion diagnostic included in a model is the cost of testing; incorporating the test's accuracy in addition to its cost may be a more appropriate methodological approach. Altering the prevalence of the genetic biomarker, the specific population tested, the type of test, the test accuracy and the timing/sequence of multiple tests can all impact overall model results. The impact of altering a test's threshold for positivity is unknown, as it was not addressed in any of the included studies. Additional quality criteria, as outlined in our methodological checklist, should be considered due to the shortcomings of standard quality assessment tools in differentiating studies that incorporate important test-related characteristics from those that do not. There is a need to refine methods for incorporating the characteristics

  13. Modeling collective animal behavior with a cognitive perspective: a methodological framework.

    Directory of Open Access Journals (Sweden)

    Sebastian Weitz

    Full Text Available The last decades have seen an increasing interest in modeling collective animal behavior. Some studies try to reproduce as accurately as possible the collective dynamics and patterns observed in several animal groups with biologically plausible, individual behavioral rules. The objective is then essentially to demonstrate that the observed collective features may be the result of self-organizing processes involving quite simple individual behaviors. Other studies concentrate on the objective of establishing or enriching links between collective behavior research and cognitive or physiological research, which then requires that each individual rule be carefully validated. Here we discuss the methodological consequences of this additional requirement. Using the example of corpse clustering in ants, we first illustrate that it may be impossible to discriminate among alternative individual rules by considering only observational data collected at the group level. Six individual behavioral models are described: they are clearly distinct in terms of individual behaviors, they all reproduce satisfactorily the collective dynamics and distribution patterns observed in experiments, and we show theoretically that it is strictly impossible to discriminate two of these models even in the limit of an infinite amount of data, whatever the accuracy level. A set of methodological steps are then listed and discussed as practical ways to partially overcome this problem. They involve complementary experimental protocols specifically designed to address the behavioral rules successively, conserving group-level data for the overall model validation. In this context, we highlight the importance of maintaining a sharp distinction between model enunciation, with explicit references to validated biological concepts, and the formal translation of these concepts in terms of quantitative state variables and fittable functional dependences. Illustrative examples are provided of the

  14. Methodology for geometric modelling. Presentation and administration of site descriptive models; Metodik foer geometrisk modellering. Presentation och administration av platsbeskrivande modeller

    Energy Technology Data Exchange (ETDEWEB)

    Munier, Raymond [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Hermanson, Jan [Golder Associates (Sweden)

    2001-03-01

    This report presents a methodology to construct, visualise and present geoscientific descriptive models based on data from the site investigations which SKB currently performs in order to build an underground nuclear waste disposal facility in Sweden. It is designed for interaction with SICADA (SKB's site characterisation database) and RVS (SKB's Rock Visualisation System). However, the concepts of the methodology are general and can be used with other tools capable of handling 3D geometries and parameters. The descriptive model is intended to be an instrument in which site investigation data from all disciplines are put together to form a comprehensive visual interpretation of the studied rock mass. The methodology has four main components: 1. Construction of a geometrical model of the interpreted main structures at the site. 2. Description of the geoscientific characteristics of the structures. 3. Description and geometrical implementation of the geometric uncertainties in the interpreted model structures. 4. A quality system for the handling of the geometrical model, its associated database and some aspects of the technical auditing. The geometrical model forms a basis for understanding the main elements and structures of the investigated site. Once the interpreted geometries are in place in the model, the system allows descriptive and quantitative data to be added to each modelled object through a system of intuitive menus. The associated database gives each geometrical object a complete quantitative description covering all geoscientific disciplines, variabilities, uncertainties in interpretation and a full version history. The complete geometrical model and its associated database of object descriptions are recorded in a central quality system. Official, new and old versions of the model are administered centrally in order to have complete quality assurance of each step in the interpretation process. The descriptive model is a cornerstone in the understanding of the

  15. A rigorous methodology for development and uncertainty analysis of group contribution based property models

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    ) assessment of property model prediction errors, (iii) effect of outliers and data pre-treatment, (iv) formulation of the parameter estimation problem (e.g. weighted least squares, ordinary least squares, robust regression, etc.). In this study a comprehensive methodology is developed to perform a rigorous... ) weighted-least-squares regression. 3) Initialization of the estimation by use of linear algebra, providing a first guess. 4) Sequential parameter estimation and simultaneous GC parameter estimation using 4 different minimization algorithms. 5) Thorough uncertainty analysis: a) based on asymptotic approximation of parameter...

  16. Optimization of the propulsion cycles for advanced shuttles. I - Propulsion mass model methodology

    Science.gov (United States)

    Manski, Detlef; Martin, James A.

    1989-01-01

    During the past year NASA and DLR computer codes were combined in order to analyze and optimize advanced rocket launcher systems. Previous optimization results led to relatively low chamber pressures and nozzle area ratios for the different cycles. Additional effort has been made to improve the verification of propulsive parameters such as chamber pressure, nozzle extension, kind of cycle, tripropellant systems, variable mixture ratio rocket motors, new technology effects, etc. The emphasis of this paper is to describe the methodology of the improved propulsion mass model.

  17. A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2017-01-01

    Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is in high demand. A first-principles kinetic model was developed to describe and understand the biological, physical, and chemical... mechanisms in a lactic acid bacteria fermentation. We present here a consistent approach to methodology-based parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge-based guess of the parameters was available, and an initial parameter estimation of the complete set...

  18. Do Methodological Choices in Environmental Modeling Bias Rebound Effects? A Case Study on Electric Cars.

    Science.gov (United States)

    Font Vivanco, David; Tukker, Arnold; Kemp, René

    2016-10-18

    Improvements in resource efficiency often underperform because of rebound effects. Calculations of the size of rebound effects are subject to various types of bias, among which methodological choices have received particular attention. Modellers have primarily focused on choices related to changes in demand; choices related to modeling the environmental burdens of such changes have received less attention. In this study, we analyze choices in the environmental assessment methods (life cycle assessment (LCA) and hybrid LCA) and environmental input-output databases (E3IOT, Exiobase and WIOD) as a source of bias. The analysis is done for a case study on battery electric and hydrogen cars in Europe. The results describe moderate rebound effects for both technologies in the short term. Additionally, long-run scenarios are calculated by simulating the total cost of ownership; these describe notable rebound effect sizes, from 26 to 59% and from 18 to 28%, respectively, depending on the methodological choices, under favorable economic conditions. Relevant sources of bias are found to be related to incomplete background systems, technology assumptions and sectorial aggregation. These findings highlight the importance of the method setup and of sensitivity analyses of choices related to environmental modeling in rebound effect assessments.

  19. A new methodology to determine kinetic parameters for one- and two-step chemical models

    Science.gov (United States)

    Mantel, T.; Egolfopoulos, F. N.; Bowman, C. T.

    1996-01-01

    In this paper, a new methodology to determine kinetic parameters for the simple chemical models and simple transport properties classically used in DNS of premixed combustion is presented. First, a one-dimensional code is used to compute a steady unstrained laminar methane-air flame in order to verify intrinsic features of laminar flames such as the burning velocity and the temperature and concentration profiles. Second, the flame response to steady and unsteady strain in the opposed-jet configuration is numerically investigated. It appears that, for a well-determined set of parameters, one- and two-step mechanisms reproduce the extinction limit of a laminar flame submitted to a steady strain. Computations with the GRI-Mech mechanism (177 reactions, 39 species) and multicomponent transport properties are used to validate these simplified models. A sensitivity analysis of the preferential diffusion of heat and reactants when the Lewis number is close to unity indicates that the response of the flame to an oscillating strain is very sensitive to this number. As an application of this methodology, the interaction between a two-dimensional vortex pair and a premixed laminar flame is computed by Direct Numerical Simulation (DNS) using the one- and two-step mechanisms. Comparison with the experimental results of Samaniego et al. (1994) shows a significant improvement in the description of the interaction when the two-step model is used.

  20. Methodological aspects of modeling household solid waste generation in Japan: Evidence from Okayama and Otsu cities.

    Science.gov (United States)

    Gu, Binxian; Fujiwara, Takeshi; Jia, Renfu; Duan, Ruiyang; Gu, Aijun

    2017-12-01

    This paper presents a quantitative methodology and two empirical case studies in Japan on modeling household solid waste (HSW) generation based on individual consumption expenditure (ICE) and local waste policy effects, using coupled estimation model systems. The results indicate that ICE on food, miscellaneous commodities and services, and education, cultural, and recreation services is mainly associated with the changes in HSW generation and its components in Okayama and Otsu from 1980 to 2014. The effects of waste policy measures were also identified. HSW generation in Okayama will increase from 11.60 million tons (mt) in 1980 to 25.02 mt in 2025; the corresponding figures are 6.82 mt (in 1980) and 14.00 mt (in 2025) for Otsu. To better manage local HSW, several possible and appropriate measures, such as promoting a green lifestyle, extending producer responsibility, intensifying recycling and source separation, generalizing composting, and establishing flexible measures and sustainable policies, should be adopted. The results of this study would facilitate the management of low waste generation by consumers and support effective HSW policy design in the two case cities. Success could lead to emulation by other Japanese cities seeking to build and maintain a sustainable, eco-friendly society. Moreover, the methodology of establishing coupled estimation model systems could be extended to China and other global cities.

  1. An efficient hysteresis modeling methodology and its implementation in field computation applications

    Energy Technology Data Exchange (ETDEWEB)

    Adly, A.A., E-mail: adlyamr@gmail.com [Electrical Power and Machines Dept., Faculty of Engineering, Cairo University, Giza 12613 (Egypt); Abd-El-Hafiz, S.K. [Engineering Mathematics Department, Faculty of Engineering, Cairo University, Giza 12613 (Egypt)

    2017-07-15

    Highlights: • An approach to simulate hysteresis while taking shape anisotropy into consideration. • Utilizing an ensemble of triangular sub-region hysteresis models in field computation. • A novel tool capable of carrying out field computation while keeping track of hysteresis losses. • The approach may be extended to 3D tetrahedral sub-volumes. - Abstract: Field computation in media exhibiting hysteresis is crucial to a variety of applications, such as magnetic recording processes and the accurate determination of core losses in power devices. Recently, Hopfield neural networks (HNNs) have been successfully configured to construct scalar and vector hysteresis models. This paper presents an efficient hysteresis modeling methodology and its implementation in field computation applications. The methodology is based on the application of the integral equation approach to discretized triangular magnetic sub-regions. Within every triangular sub-region, hysteresis properties are realized using a 3-node HNN. Details of the approach and sample computation results are given in the paper.

  2. A model for overview of student learning: a matrix of educational outcomes versus methodologies.

    Science.gov (United States)

    Johnsen, David C; Marshall, Teresa A; Finkelstein, Michael W; Cunningham-Ford, Marsha A; Straub-Morarend, Cheryl L; Holmes, David C; Armstrong, Steven R; Aquilino, Steven A; Sharp, Helen M; Solow, Catherine M; McQuistan, Michelle R

    2011-02-01

    A concise overview of an institution's aspirations for its students becomes increasingly elusive because dental education has evolving emphases on priorities like critical thinking and adapting to new technology. The purpose of this article is to offer a learner-oriented matrix that gives a focus for discussion and an overview of an institution's educational outcomes. On one axis of the matrix, common educational outcomes are listed: knowledge, technical skills, critical thinking, ethical and professional values, patient and practice management, and social responsibility awareness. On the other axis, methodologies are listed: definition, cultivation strategies, measures (summative/formative, objective/subjective), institutional coordination, and competency determination. By completing the matrix, an overview of the process by which students reach these outcomes emerges. Each institution would likely complete the matrix differently and, ideally, with active discussion. While the matrix can first be used to establish "Where are we now?" for an institution, it can also be a starting point for more extensive matrices and further discussion. Vertical and horizontal analyses of the matrix provide a unique lens for viewing the institution's learning environment.

  3. METHODOLOGY FOR THE ESTIMATION OF PARAMETERS, OF THE MODIFIED BOUC-WEN MODEL

    Directory of Open Access Journals (Sweden)

    Tomasz HANISZEWSKI

    2015-03-01

    Full Text Available The Bouc-Wen model is a theoretical formulation that can reflect the real hysteresis loop of a modeled object. One such object is a wire rope, as found in the equipment of a crane lifting mechanism. The adopted modified version of the model has nine parameters, and determining such a number of parameters is a complex and problematic issue. This article presents the identification methodology and sample results of numerical simulations. The results were compared with data obtained from laboratory tests of ropes [3], and on this basis it was found that the results are in agreement and that the model can be applied to dynamic systems containing wire ropes in their structures [4].

  4. Bridging the Gap of Standardized Animals Models for Blast Neurotrauma: Methodology for Appropriate Experimental Testing.

    Science.gov (United States)

    VandeVord, Pamela J; Leonardi, Alessandra Dal Cengio; Ritzel, David

    2016-01-01

    Recent military combat has heightened awareness to the complexity of blast-related traumatic brain injuries (bTBI). Experiments using animal, cadaver, or biofidelic physical models remain the primary measures to investigate injury biomechanics as well as validate computational simulations, medical diagnostics and therapies, or protection technologies. However, blast injury research has seen a range of irregular and inconsistent experimental methods for simulating blast insults generating results which may be misleading, cannot be cross-correlated between laboratories, or referenced to any standard for exposure. Both the US Army Medical Research and Materiel Command and the National Institutes of Health have noted that there is a lack of standardized preclinical models of TBI. It is recommended that the blast injury research community converge on a consistent set of experimental procedures and reporting of blast test conditions. This chapter describes the blast conditions which can be recreated within a laboratory setting and methodology for testing in vivo models within the appropriate environment.

  5. The epistemology of mathematical and statistical modeling: a quiet methodological revolution.

    Science.gov (United States)

    Rodgers, Joseph Lee

    2010-01-01

    A quiet methodological revolution, a modeling revolution, has occurred over the past several decades, almost without discussion. In contrast, the 20th century ended with contentious argument over the utility of null hypothesis significance testing (NHST). The NHST controversy may have been at least partially irrelevant, because in certain ways the modeling revolution obviated the NHST argument. I begin with a history of NHST and modeling and their relation to one another. Next, I define and illustrate principles involved in developing and evaluating mathematical models. Following, I discuss the difference between using statistical procedures within a rule-based framework and building mathematical models from a scientific epistemology. Only the former is treated carefully in most psychology graduate training. The pedagogical implications of this imbalance and the revised pedagogy required to account for the modeling revolution are described. To conclude, I discuss how attention to modeling implies shifting statistical practice in certain progressive ways. The epistemological basis of statistics has moved away from being a set of procedures, applied mechanistically, and moved toward building and evaluating statistical and scientific models. Copyright 2009 APA, all rights reserved.

  6. Methodology for Training Small Domain-specific Language Models and Its Application in Service Robot Speech Interface

    Directory of Open Access Journals (Sweden)

    ONDAS Stanislav

    2014-05-01

    Full Text Available This paper introduces a novel methodology for training small domain-specific language models from the domain vocabulary alone. The proposed methodology is intended for situations when no training data are available and preparing an appropriate deterministic grammar is not a trivial task. The methodology consists of two phases. In the first phase, a “random” deterministic grammar, which can generate all possible combinations of unigrams and bigrams, is constructed from the vocabulary. The prepared random grammar then serves to generate a training corpus, and a “random” n-gram model is trained from the generated corpus; this model can be adapted in the second phase. Evaluation of the proposed approach has shown the usability of the methodology for small domains, and the results of the assessment favor the designed method over constructing an appropriate deterministic grammar.

  7. Prototype methodology for obtaining cloud seeding guidance from HRRR model data

    Science.gov (United States)

    Dawson, N.; Blestrud, D.; Kunkel, M. L.; Waller, B.; Ceratto, J.

    2017-12-01

    Weather model data, along with real-time observations, are critical for determining whether atmospheric conditions are prime for supercooled liquid water during cloud seeding operations. Cloud seeding groups can either use operational forecast models or run their own model on a computer cluster. A custom weather model provides the most flexibility, but is also expensive. For programs with smaller budgets, openly available operational forecasting models are the de facto method for obtaining forecast data. The new High-Resolution Rapid Refresh (HRRR) model (3 x 3 km grid size), developed by the Earth System Research Laboratory (ESRL), provides hourly model runs with 18 forecast hours per run. While the model cannot be fine-tuned for a specific area or edited to provide cloud-seeding-specific output, the model output is openly available on a near-real-time basis. This presentation focuses on a prototype methodology for using HRRR model data to create maps which aid in near-real-time cloud seeding decision making. The R programming language is used to run a script on a Windows® desktop/laptop computer either on a schedule (such as every half hour) or manually. The latest HRRR model run is downloaded from NOAA's Operational Model Archive and Distribution System (NOMADS). A GRIB-filter service, provided by NOMADS, is used to obtain surface and mandatory pressure-level data for a subset domain, which greatly cuts down on the amount of data transferred. Then, a set of criteria identified by the Idaho Power Atmospheric Science Group is used to create guidance maps. These criteria include atmospheric stability (lapse rates), dew point depression, air temperature, and wet bulb temperature. The maps highlight potential areas where supercooled liquid water may exist, reasons why cloud seeding should not be attempted, and wind speed at flight level.

  8. A new methodology for dynamic modelling of health risks arising from wastewater influenced urban flooding

    Science.gov (United States)

    Jørgensen, Claus; Mark, Ole; Djordjevic, Slobodan; Hammond, Michael; Khan, David M.; Erichsen, Anders; Dorrit Enevoldsen, Ann; Heinicke, Gerald; Helwigh, Birgitte

    2015-04-01

    Introduction: Urban flooding due to rainfall exceeding the design capacity of drainage systems is a global problem, and it has significant economic and social consequences. While the cost of the direct damages of urban flooding is well understood, the indirect damages, such as water-borne diseases, are in general still poorly understood. Climate change is expected to increase the frequency of urban flooding in many countries, which is likely to increase water-borne diseases. Diarrheal diseases are most prevalent in developing countries, where poor sanitation, poor drinking water and poor surface water quality cause a high disease burden and mortality, especially during floods. Water-borne diarrhea in countries with well-developed water and wastewater infrastructure has been reduced to an acceptable level, and the population in general does not consider wastewater a health risk. However, exposure to wastewater-influenced urban flood water still has the potential to cause transmission of diarrheal diseases. When managing urban flooding and planning urban climate change adaptations, health risks are rarely taken into consideration. This paper outlines a novel methodology for linking dynamic urban flood modelling with Quantitative Microbial Risk Assessment (QMRA). This provides a unique possibility for understanding the interaction between urban flooding and the health risks caused by direct human contact with flood water, and provides an option for reducing the burden of disease in the population through intelligent urban flood risk management. Methodology: We have linked hydrodynamic urban flood modelling with quantitative microbial risk assessment (QMRA) to determine the risk of infection caused by exposure to wastewater-influenced urban flood water. The deterministic model MIKE Flood, which integrates the sewer network model in MIKE Urban and the 2D surface model MIKE 21, was used to calculate the concentration of pathogens in the

  9. A system-of-systems modeling methodology for strategic general aviation design decision-making

    Science.gov (United States)

    Won, Henry Thome

    General aviation has long been studied as a means of providing an on-demand "personal air vehicle" that bypasses the traffic at major commercial hubs. This thesis continues this research through the development of a system-of-systems modeling methodology applicable to the selection of synergistic product concepts, market segments, and business models. From the perspective of the conceptual design engineer, the design and selection of future general aviation aircraft is complicated by the definition of constraints and requirements, and by the tradeoffs among performance and cost aspects. Qualitative problem definition methods have been utilized, although their accuracy in determining specific requirement and metric values is uncertain. In industry, customers are surveyed, and business plans are created through a lengthy, iterative process. In recent years, techniques have been developed for predicting the characteristics of US travel demand based on travel mode attributes, such as door-to-door time and ticket price. As of yet, these models treat the contributing systems (aircraft manufacturers and service providers) as independently variable assumptions. In this research, a methodology is developed which seeks to build a strategic design decision-making environment through the construction of a system-of-systems model. The demonstrated implementation brings together models of the aircraft and manufacturer, the service provider, and, most importantly, the travel demand. Thus represented is the behavior of the consumers and the reactive behavior of the suppliers (the manufacturers and transportation service providers) in a common modeling framework. The results indicate an ability to guide the design process, specifically the selection of design requirements, through the optimization of "capability" metrics. Additionally, the results indicate the ability to find synergistic solutions, that is, solutions in which two systems might collaborate to achieve a better result than acting

  10. A MAINTENANCE STRATEGY MODEL FOR STATIC EQUIPMENT USING INSPECTION METHODOLOGIES AND RISK MANAGEMENT

    Directory of Open Access Journals (Sweden)

    J.K. Visser

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Mechanical equipment used on process plants can be categorised into two main types, namely static and rotating equipment. A brief survey at a number of chemical process plants indicated that a number of maintenance strategies exist and are used for rotating equipment. However, some of these strategies are not directly applicable to static equipment, although the risk-based inspection (RBI) methodology has been developed for pressure vessels. A generalised risk-based maintenance strategy for all types of static equipment does not currently exist. This paper describes the development of an optimised model of inspection methodologies, maintenance strategies, and risk management principles that are generically applicable to static equipment. It enables maintenance managers and engineers to select an applicable maintenance strategy and inspection methodology, based on the operational and business risks posed by the individual pieces of equipment.

    AFRIKAANSE OPSOMMING (translated): Mechanical equipment used on process plants can be divided into two categories, namely static and rotating equipment. A brief survey at a number of chemical process plants indicated that a number of strategies are used for the maintenance of rotating equipment, while the risk-based inspection methodology is indeed used for pressure vessels. A general risk-based maintenance strategy for all types of static equipment is, however, not currently available. This article describes the development of an optimised model of inspection methodologies, maintenance strategies, and risk management principles that can be used generically for static equipment. It enables maintenance managers and engineers to choose a maintenance strategy and inspection methodology based on the operational and business risks of the individual equipment.

  11. Modelling and optimization of process variables for the solution polymerization of styrene using response surface methodology

    Directory of Open Access Journals (Sweden)

    Rasheed Uthman Owolabi

    2018-01-01

    Full Text Available A satisfactory model for predicting monomer conversion in free radical polymerization has been a challenge due to the complexity and rigour associated with classical kinetic models. This renders the use of such models an exciting endeavour in academia but less so in industrial practice. In this study, the individual and interactive effects of three processing conditions (reaction temperature, reaction time and initiator concentration) on monomer conversion in the solution polymerization of styrene, using acetone as solvent, were investigated in a batch reactor through the central composite design (CCD) of response surface methodology (RSM) for experimental design, modelling and process optimization. The modelled optimum conditions are: reaction time of 30 min, reaction temperature of 120 °C, and initiator concentration of 0.1135 mol/l, with a corresponding monomer conversion of 76.82%, compared to the observed conversion of 70.86%. A robust model for predicting monomer conversion that is very suitable for routine industrial usage is thus obtained.
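
    A minimal sketch of the RSM workflow described above: fit a second-order (quadratic) response surface to conversion data over a coded design, then search the design cube for the predicted optimum. The design runs, coefficients, and responses below are illustrative placeholders, not the paper's fitted model.

```python
import numpy as np

# Coded factor levels (temperature, time, initiator conc.) for a
# face-centred, central-composite-style design; illustrative runs only.
levels = [-1, 0, 1]
X = np.array([(a, b, c) for a in levels for b in levels for c in levels])

def quad_terms(x):
    """Expand (x1, x2, x3) into the full second-order RSM basis."""
    x1, x2, x3 = x
    return [1, x1, x2, x3, x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2]

A = np.array([quad_terms(x) for x in X])
# Synthetic conversions standing in for the measured responses.
rng = np.random.default_rng(0)
y = 70 + 3*X[:, 0] - 2*X[:, 1]**2 + 4*X[:, 2] + rng.normal(0, 0.5, len(X))

beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # fitted RSM coefficients

# Search the coded design cube for the predicted optimum conversion.
g = np.linspace(-1, 1, 41)
pts = np.array([(a, b, c) for a in g for b in g for c in g])
pred = np.array([quad_terms(p) for p in pts]) @ beta
best = pts[pred.argmax()]
print("coded optimum:", best, "predicted conversion: %.1f%%" % pred.max())
```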

  12. Methodology for the Incorporation of Passive Component Aging Modeling into the RAVEN/ RELAP-7 Environment

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua; Alfonsi, Andrea; Guler, Askin; Aldemir, Tunc

    2014-11-01

    Passive systems, structures and components (SSCs) will degrade over their operating life, and this degradation may cause a reduction in the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data, and the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs in the traditional PRA methodology, [1] considers physics-based models that account for the operating conditions in the plant; however, [1] does not include the effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program), also currently under development at INL [3], as well as to RELAP5 [4]. The overall methodology aims to: • Address multiple aging mechanisms involving large numbers of components in a computationally feasible manner where sequencing of events is conditioned on the physical conditions predicted in a simulation

  13. Modeling and Analysis of The Pressure Die Casting Using Response Surface Methodology

    International Nuclear Information System (INIS)

    Kittur, Jayant K.; Herwadkar, T. V.; Parappagoudar, M. B.

    2010-01-01

    Pressure die casting is successfully used in the manufacture of aluminum alloy components for the automobile and many other industries. Die casting is a process involving many process parameters that have a complex relationship with the quality of the cast product. Though various process parameters influence the quality of a die-cast component, the major influence is seen from the die casting machine parameters and their proper settings. In the present work, non-linear regression models have been developed for making predictions and analyzing the effect of die casting machine parameters on the performance characteristics of the die casting process. Design of Experiments (DOE) with Response Surface Methodology (RSM) has been used to analyze the effects of the input parameters and their interactions on the response, and further used to develop nonlinear input-output relationships. Die casting machine parameters, namely fast shot velocity, slow shot to fast shot changeover point, intensification pressure and holding time, have been considered as the input variables. The quality characteristics of the cast product were determined by porosity, hardness and surface roughness (outputs/responses). Design of experiments has been used to plan the experiments and analyze the impact of the variables on the quality of the casting. On the other hand, Response Surface Methodology (Central Composite Design) is utilized to develop the non-linear input-output relationships (regression models). The developed regression models have been tested for their statistical adequacy through the ANOVA test. The practical usefulness of these models has been tested with some test cases. These models can be used to make predictions about the different quality characteristics, for a known set of die casting machine parameters, without conducting the experiments.

  14. HIERARCHICAL METHODOLOGY FOR MODELING HYDROGEN STORAGE SYSTEMS. PART I: SCOPING MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, B.; Anton, D.L.

    2008-12-22

    Detailed models for hydrogen storage systems provide essential design information about flow and temperature distributions, as well as the utilization of the hydrogen storage media. However, before constructing a detailed model it is necessary to know the geometry and length scales of the system, along with its heat transfer requirements, which depend on the limiting reaction kinetics. More fundamentally, before committing significant time and resources to the development of a detailed model, it is necessary to know whether a conceptual storage system design is viable. For this reason, a hierarchical system of models progressing from scoping models to detailed analyses was developed. This paper, which discusses the scoping models, is the first in a two-part series that presents a collection of hierarchical models for the design and evaluation of hydrogen storage systems.

  15. PAGIS summary report of phase 1: a common methodological approach based on European data and models

    International Nuclear Information System (INIS)

    Cadelli, N.; Cottone, G.; Bertozzi, G.; Girardi, F.

    1984-01-01

    In 1982 a joint study was launched by the CEC with the participation of national institutions in the E.C., aiming at a Performance Assessment of Geological Isolation Systems (PAGIS) for HLW disposal. This document is a summary of the first phase of the study, which was devoted to the collection of data and models and to the choice of an appropriate methodology. To this purpose, real or national sites have been chosen which are representative of three types of continental geological formations in the E.C.: clay, granite and salt (although the choices imply no commitment of any kind about their final use). Moreover, sub-seabed areas have also been identified. The study covers the following items: - basic data on waste characteristics, site data and repository designs; - methodology, which allows sensitivity and uncertainty analyses to be performed, as well as the assessment of radiation doses to individuals and populations; - preliminary modelling of radionuclide release and migration through the geosphere (near- and far-field) and the biosphere, following their various pathways to man; - selection of the most relevant radionuclide release scenarios and their probability of occurrence. Reference values have been selected for the basic data, as well as variants covering the various options which are under consideration in the different countries of the E.C.

  16. Model-Driven Methodology for Rapid Deployment of Smart Spaces Based on Resource-Oriented Architectures

    Directory of Open Access Journals (Sweden)

    José R. Casar

    2012-07-01

    Full Text Available Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as the Internet of Things (IoT) and the Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have a solid understanding of specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at the hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.

  17. Model-Driven Methodology for Rapid Deployment of Smart Spaces Based on Resource-Oriented Architectures

    Science.gov (United States)

    Corredor, Iván; Bernardos, Ana M.; Iglesias, Josué; Casar, José R.

    2012-01-01

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym. PMID:23012544

  18. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are used and documented in hundreds of peer-reviewed publications each year, and likely applied even more widely in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined the impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus improved data and enhanced assumptions, on model outcomes and thus, ultimately, on study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  19. Methodology for Outdoor Water Savings Model and Spreadsheet Tool for U.S. and Selected States

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Alison A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Yuting [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dunham, Camilla [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fuchs, Heidi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Price, Sarah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Stratton, Hannah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-07-31

    Green lawns and landscaping are archetypical of the populated American landscape, and typically require irrigation, which corresponds to a significant fraction of residential, commercial, and institutional water use. In North American cities, the estimated portion of residential water used for outdoor purposes ranges from 22-38% in cooler climates up to 59-67% in dry and hot environments, while turfgrass coverage within the United States spans 11.1-20.2 million hectares (Milesi et al. 2009). One national estimate uses satellite and aerial photography data to develop a relationship between impervious surface and lawn surface area, yielding a conservative estimate of 16.4 (± 3.6) million hectares of lawn surface area in the United States—an area three times larger than that devoted to any irrigated crop (Milesi et al. 2005). One approach that holds promise for cutting unnecessary outdoor water use is the increased deployment of “smart” irrigation controllers to increase the water efficiency of irrigation systems. This report describes the methodology and inputs employed in a mathematical model that quantifies the effects of the U.S. Environmental Protection Agency’s WaterSense labeling program for one such type of controller, weather-based irrigation controllers (WBIC). This model builds off that described in “Methodology for National Water Savings Model and Spreadsheet Tool–Outdoor Water Use” and uses a two-tiered approach to quantify outdoor water savings attributable to the WaterSense program for WBIC, as well as net present value (NPV) of that savings. While the first iteration of the model assessed national impacts using averaged national values, this version begins by evaluating impacts in three key large states that make up a sizable portion of the irrigation market: California, Florida, and Texas. These states are considered to be the principal market of “smart” irrigation controllers that may result in the bulk of national savings. Modeled
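
    The two-tiered accounting the report describes pairs annual water savings from shipped controllers with a net-present-value (NPV) calculation. A minimal sketch of that NPV step follows; the shipment counts, per-unit savings, water price, and discount rate are invented for illustration and are not the report's inputs.

```python
# Hypothetical inputs: WBIC shipments per year, per-unit annual water
# savings (m^3), marginal water price ($/m^3), and a real discount rate.
shipments = {2017: 100_000, 2018: 120_000, 2019: 150_000}
savings_per_unit = 60.0      # m^3/year per installed controller, illustrative
price = 1.2                  # $/m^3, illustrative
rate = 0.03                  # real discount rate
base_year = 2017

npv = 0.0
installed = 0
for year in sorted(shipments):
    installed += shipments[year]          # stock of controllers in service
    annual_dollars = installed * savings_per_unit * price
    npv += annual_dollars / (1 + rate) ** (year - base_year)
print(f"NPV of water savings: ${npv:,.0f}")
```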

  20. State-space models for bio-loggers: A methodological road map

    DEFF Research Database (Denmark)

    Jonsen, I.D.; Basson, M.; Bestley, S.

    2012-01-01

    Ecologists have an unprecedented array of bio-logging technologies available to conduct in situ studies of the horizontal and vertical movement patterns of marine animals. These tracking data provide key information about foraging, migratory, and other behaviours that can be linked with bio-physical data. The development of state-space modelling approaches for animal movement data provides statistical rigor for inferring hidden behavioural states, relating these states to bio-physical data, and ultimately for predicting the potential impacts of climate change. Despite the widespread utility, and current popularity, of state-space models for the analysis of animal tracking data, these tools are not simple and require considerable care in their use. Here we develop a methodological "road map" for ecologists by reviewing currently available state-space implementations. We discuss the appropriate use of state-space methods...
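
    A minimal sketch of the simplest member of this model family: a linear-Gaussian state-space model (position plus velocity) for a 1-D track, filtered with a Kalman filter. Real bio-logging applications typically add behavioural switching and non-Gaussian observation errors; all numbers here are illustrative.

```python
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])      # state transition: position, velocity
H = np.array([[1.0, 0.0]])           # we only observe position
Q = np.diag([0.01, 0.01])            # process noise (movement variability)
R = np.array([[1.0]])                # observation noise (tag location error)

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.5, 0.2, 100))   # synthetic animal track
obs = truth + rng.normal(0, 1.0, 100)          # noisy location fixes

x = np.zeros(2)                      # state estimate [position, velocity]
P = np.eye(2) * 10.0                 # state covariance
filtered = []
for z in obs:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the new fix
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    filtered.append(x[0])
print("RMSE raw: %.2f  filtered: %.2f" % (
    np.sqrt(np.mean((obs - truth) ** 2)),
    np.sqrt(np.mean((np.array(filtered) - truth) ** 2))))
```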

  1. Introduction of a methodology for visualization and graphical interpretation of Bayesian classification models.

    Science.gov (United States)

    Balfer, Jenny; Bajorath, Jürgen

    2014-09-22

    Supervised machine learning models are widely used in chemoinformatics, especially for the prediction of new active compounds or targets of known actives. Bayesian classification methods are among the most popular machine learning approaches for the prediction of activity from chemical structure. Much work has focused on predicting structure-activity relationships (SARs) on the basis of experimental training data. By contrast, only a few efforts have thus far been made to rationalize the performance of Bayesian or other supervised machine learning models and better understand why they might succeed or fail. In this study, we introduce an intuitive approach for the visualization and graphical interpretation of naïve Bayesian classification models. Parameters derived during supervised learning are visualized and interactively analyzed to gain insights into model performance and identify features that determine predictions. The methodology is introduced in detail and applied to assess Bayesian modeling efforts and predictions on compound data sets of varying structural complexity. Different classification models and features determining their performance are characterized in detail. A prototypic implementation of the approach is provided.
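
    The core observation behind such a visualization is that a naïve Bayes prediction decomposes into additive per-feature log-odds contributions, which can then be plotted and inspected feature by feature. A minimal sketch of that decomposition for binary (fingerprint-like) features, using a toy dataset rather than the study's compound sets:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 5))              # binary fingerprints
y = (X[:, 0] | X[:, 3]).astype(int)                # toy activity label

# Laplace-smoothed conditional probabilities P(feature = 1 | class)
p = {c: (X[y == c].sum(0) + 1) / ((y == c).sum() + 2) for c in (0, 1)}
prior_logodds = np.log(y.mean() / (1 - y.mean()))

def feature_contributions(x):
    """Per-feature additive contribution to the log-odds of class 1."""
    on = np.log(p[1] / p[0])                 # contribution if the bit is set
    off = np.log((1 - p[1]) / (1 - p[0]))    # contribution if it is unset
    return np.where(x == 1, on, off)

x_new = np.array([1, 0, 1, 0, 0])
contrib = feature_contributions(x_new)
print("log-odds =", prior_logodds + contrib.sum())
print("per-feature contributions:", np.round(contrib, 2))
# Bars of `contrib` are what a per-feature visualization would render.
```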

  2. A generalized methodology for identification of threshold for HRU delineation in SWAT model

    Science.gov (United States)

    M, J.; Sudheer, K.; Chaubey, I.; Raj, C.

    2016-12-01

    The distributed hydrological model Soil and Water Assessment Tool (SWAT) is a comprehensive hydrologic model widely used to support a variety of water resources decisions. The simulation accuracy of a distributed hydrological model depends on the mechanism used to subdivide the watershed. SWAT subdivides the watershed and its sub-basins into small computing units known as hydrologic response units (HRUs). HRUs are delineated from unique combinations of land use, soil type, and slope within the sub-watersheds, and are not spatially contiguous. Computations in SWAT are performed at the HRU level and then aggregated to the sub-basin outlet, which is routed through the stream system. Generally, the modeler specifies threshold percentages of land use, soil, and slope to limit the number of HRUs and so reduce the computation time of the model. The thresholds constrain the minimum area for constructing an HRU. In the current HRU delineation practice in SWAT, land use, soil, and slope classes within a sub-basin that fall below the predefined thresholds are absorbed into the dominant classes, which introduces some ambiguity into the process simulations through an inappropriate representation of the area. The information lost as threshold values vary depends strongly on the purpose of the study. This research therefore studies the effects of HRU delineation threshold values on SWAT sediment simulations and suggests guidelines for selecting appropriate threshold values with sediment simulation accuracy in mind. A preliminary study was conducted on an Illinois watershed by assigning different thresholds for land use and soil. A general methodology was proposed for identifying an appropriate threshold for HRU delineation in the SWAT model that considered computational time and accuracy of the simulation
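
    A minimal sketch of the thresholding step itself, the behaviour described above: classes whose area fraction falls below the threshold are dropped and their area is redistributed proportionally over the surviving classes. The land-use fractions are illustrative.

```python
def apply_hru_threshold(area_frac, threshold):
    """Drop classes below `threshold` and renormalize the survivors,
    mimicking how SWAT absorbs minor land-use/soil/slope classes."""
    kept = {k: v for k, v in area_frac.items() if v >= threshold}
    total = sum(kept.values())
    return {k: v / total for k, v in kept.items()}

# Illustrative land-use fractions in one sub-basin.
landuse = {"corn": 0.55, "soybean": 0.30, "forest": 0.10, "urban": 0.05}
print(apply_hru_threshold(landuse, 0.10))   # urban absorbed by the rest
# {'corn': 0.578..., 'soybean': 0.315..., 'forest': 0.105...}
```

    Raising the threshold shrinks the HRU count (and run time) at the cost of exactly the representational ambiguity the abstract describes, which is why the choice should be tied to the simulation target.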

  3. Application of fault tree methodology to modeling of the AP1000 plant digital reactor protection system

    International Nuclear Information System (INIS)

    Teolis, D.S.; Zarewczynski, S.A.; Detar, H.L.

    2012-01-01

    The reactor trip system (RTS) and engineered safety features actuation system (ESFAS) in nuclear power plants utilize instrumentation and control (I&C) to provide automatic protection against unsafe and improper reactor operation during steady-state and transient power operations. During normal operating conditions, various plant parameters are continuously monitored to assure that the plant is operating in a safe state. In response to deviations of these parameters from pre-determined set points, the protection system initiates the actions required to maintain the reactor in a safe state. These actions may include shutting down the reactor by opening the reactor trip breakers and actuating safety equipment based on the situation. The RTS and ESFAS are represented in probabilistic risk assessments (PRAs) to reflect the impact of their contribution to core damage frequency (CDF). The reactor protection systems (RPS) in existing nuclear power plants are generally analog based, and there is general consensus within the PRA community on fault tree modeling of these systems. In new plants, such as the AP1000 plant, the RPS is based on digital technology. Digital systems are more complex combinations of hardware components and software. This combination of complex hardware and software can result in the presence of faults and failure modes unique to a digital RPS. The United States Nuclear Regulatory Commission (NRC) is currently performing research on the development of probabilistic models for digital systems for inclusion in PRAs; however, no consensus methodology exists at this time. Westinghouse is currently updating the AP1000 plant PRA to support initial operation of plants currently under construction in the United States. The digital RPS is modeled using fault tree methodology similar to that used for analog-based systems. This paper presents high-level descriptions of a typical analog-based RPS and of the AP1000 plant digital RPS. Application of current fault
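
    A minimal sketch of the fault-tree arithmetic underlying such models: basic-event failure probabilities combined through AND/OR gates, assuming independent events. The events and probabilities are invented for illustration and do not describe the AP1000 RPS.

```python
from functools import reduce

def AND(*ps):
    """All inputs must fail (independent events): prod(p_i)."""
    return reduce(lambda a, b: a * b, ps)

def OR(*ps):
    """Any input failing fails the gate: 1 - prod(1 - p_i)."""
    return 1 - reduce(lambda a, b: a * (1 - b), ps, 1.0)

# Hypothetical basic events for one protection division.
sensor = 1e-3          # sensor channel fails to trip
processor = 5e-4       # logic processor failure (hardware + software)
breaker = 2e-4         # reactor trip breaker fails to open

division = OR(sensor, processor)          # either failure disables a division
both_divisions = AND(division, division)  # both redundant divisions fail
top = OR(both_divisions, breaker)         # or the trip breakers fail
print(f"P(failure to trip) ~ {top:.2e}")
```

    A real digital RPS model would also carry common-cause terms for shared software, which is precisely where the digital-specific modeling difficulties discussed above concentrate.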

  4. External validation of multivariable prediction models: a systematic review of methodological conduct and reporting

    Science.gov (United States)

    2014-01-01

    Background: Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models. Methods: We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models, and predictive performance measures. Results: 11,826 articles were identified and 78 were included for full review, which described the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration, whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models. Conclusions: The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported, with key details frequently not presented. The validation studies were characterised by poor design, inappropriate handling
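
    The two performance measures the review centres on, discrimination (the c-statistic/AUC) and calibration (agreement between predicted and observed risk), can both be computed in a few lines. A minimal sketch on synthetic data, using the rank-based (Mann-Whitney) form of the c-statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
p_pred = rng.uniform(0.01, 0.99, 500)       # model's predicted risks
y = rng.binomial(1, p_pred * 0.8)           # outcomes (model is miscalibrated)

# Discrimination: c-statistic = P(pred_case > pred_noncase), via ranks.
order = np.argsort(p_pred)
ranks = np.empty(len(p_pred))
ranks[order] = np.arange(1, len(p_pred) + 1)
n1, n0 = y.sum(), (1 - y).sum()
auc = (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# Calibration-in-the-large: mean observed vs mean predicted risk.
print(f"c-statistic: {auc:.3f}")
print(f"mean predicted {p_pred.mean():.3f} vs observed {y.mean():.3f}")
```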

  5. Methodological challenges to bridge the gap between regional climate and hydrology models

    Science.gov (United States)

    Bozhinova, Denica; José Gómez-Navarro, Juan; Raible, Christoph; Felder, Guido

    2017-04-01

    The frequency and severity of floods worldwide, together with their impacts, are expected to increase under climate change scenarios. It is therefore very important to gain insight into the physical mechanisms responsible for such events in order to constrain the associated uncertainties. Model simulations of climate and hydrological processes are important tools that can provide insight into the underlying physical processes and thus enable an accurate assessment of the risks. Coupled together, they can provide a physically consistent picture that allows the phenomenon to be assessed in a comprehensive way. However, climate and hydrological models work at different temporal and spatial scales, so there are a number of methodological challenges that need to be carefully addressed. An important issue pertains to the presence of biases in the simulation of precipitation. Climate models in general, and Regional Climate Models (RCMs) in particular, are affected by a number of systematic biases that limit their reliability. In many studies, most prominently the assessment of changes due to climate change, such biases are minimised by applying the so-called delta approach, which focuses on changes and disregards absolute values that are more affected by biases. However, this approach is not suitable here, as the absolute value of precipitation, rather than the change, is fed into the hydrological model. The bias therefore has to be removed beforehand, this being a complex matter for which various methodologies have been proposed. In this study, we apply and discuss the advantages and caveats of two different methodologies that correct the simulated precipitation to minimise differences with respect to an observational dataset: a linear fit (FIT) of the accumulated distributions and Quantile Mapping (QM). The target region is Switzerland, and the observational dataset is therefore provided by MeteoSwiss. The RCM is the Weather Research and Forecasting model (WRF), driven at the
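
    A minimal sketch of empirical quantile mapping, the second of the two correction methods mentioned: each simulated value is located on the model's empirical CDF and replaced by the observed value at the same quantile. The data below are synthetic stand-ins for the WRF and MeteoSwiss series.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, 5000)      # "observed" daily precipitation
sim = rng.gamma(2.0, 4.0, 5000)      # biased model precipitation (too wet)

def quantile_map(x, sim_ref, obs_ref, n_q=101):
    """Map x through the empirical CDF of sim_ref onto that of obs_ref."""
    q = np.linspace(0, 1, n_q)
    sim_q = np.quantile(sim_ref, q)
    obs_q = np.quantile(obs_ref, q)
    # position of x in the model climate, then read off the observed climate
    return np.interp(np.interp(x, sim_q, q), q, obs_q)

corrected = quantile_map(sim, sim, obs)
print("means  obs %.2f  sim %.2f  corrected %.2f"
      % (obs.mean(), sim.mean(), corrected.mean()))
```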

  6. Statistical methodology for discrete fracture model - including fracture size, orientation uncertainty together with intensity uncertainty and variability

    Energy Technology Data Exchange (ETDEWEB)

    Darcel, C. (Itasca Consultants SAS (France)); Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O. (Geosciences Rennes, UMR 6118 CNRS, Univ. de Rennes, Rennes (France))

    2009-11-15

    Investigations conducted over several years at Laxemar and Forsmark reveal the large heterogeneity of geological formations and associated fracturing. This project aims at reinforcing the statistical DFN modeling framework adapted to a site scale, and therefore at developing quantitative characterization methods suited to the nature of the fracturing and to data availability. We start with the hypothesis that the maximum likelihood DFN model is a power-law model with a density term depending on orientations. This is supported both by the literature and, specifically here, by former analyses of the SKB data. This assumption is nevertheless thoroughly tested by analyzing the fracture trace and lineament maps. Fracture traces range roughly between 0.5 m and 10 m, i.e. the usual extent of the sampled outcrops. Between the raw data and the final data used to compute the fracture size distribution, from which the size distribution model will arise, several steps are necessary in order to correct the data for finite-size, topographical and sampling effects. More precisely, particular attention is paid to fracture segmentation status and fracture linkage consistent with the expected DFN model. The fracture scaling trend observed over both sites finally displays a shape parameter k_t close to 1.2 with a density term (alpha_2d) between 1.4 and 1.8. Only two outcrops clearly display a different trend, with k_t close to 3 and a density term (alpha_2d) between 2 and 3.5. The fracture lineaments span the range between 100 meters and a few kilometers. When compared with fracture trace maps, these datasets are already interpreted, and the linkage process developed previously does not have to be repeated. Except for the subregional lineament map from Forsmark, lineaments display a clear power-law trend with a shape parameter k_t equal to 3 and a density term between 2 and 4.5. The apparent variation in scaling exponent, from the outcrop scale (k_t = 1.2) on one side, to
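
    A minimal sketch of estimating the power-law shape parameter from trace-length data by maximum likelihood (the continuous Pareto/Hill estimator), under the paper's hypothesis of a power-law size model. The trace lengths are synthetic; k_t here denotes the same shape parameter as in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
l_min = 0.5                                   # lower cutoff (m)
k_true = 1.2
# Pareto-distributed trace lengths: P(L > l) = (l_min / l)^k
traces = l_min * rng.uniform(size=2000) ** (-1 / k_true)

def powerlaw_mle(lengths, l_min):
    """MLE of the shape parameter k for a Pareto tail above l_min."""
    x = lengths[lengths >= l_min]
    return len(x) / np.log(x / l_min).sum()

print("k_hat = %.3f (true %.1f)" % (powerlaw_mle(traces, l_min), k_true))
```

    In practice the finite-size, topographical, and censoring corrections the abstract mentions must be applied before this fit, otherwise the estimated exponent is biased.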

  7. A Goal based methodology for HAZOP analysis

    DEFF Research Database (Denmark)

    Rossing, Netta Liin; Lind, Morten; Jensen, Niels

    2010-01-01

    This paper presents a goal based methodology for HAZOP studies in which a functional model of the plant is used to assist in a functional decomposition of the plant starting from the purpose of the plant and continuing down to the function of a single node, e.g. a pipe section. This approach lead...

  8. Study on dynamic team performance evaluation methodology based on team situation awareness model

    International Nuclear Information System (INIS)

    Kim, Suk Chul

    2005-02-01

    The purpose of this thesis is to provide a theoretical framework, and its evaluation methodology, for the dynamic task performance of an operating team at a nuclear power plant under a dynamic and tactical environment such as a radiological accident. The thesis suggests a team dynamic task performance evaluation model, the so-called team crystallization model, stemming from Endsley's situation awareness model and comprising four elements: state, information, organization, and orientation, together with quantification methods using a system dynamics approach and a communication process model based on a receding horizon control approach. The team crystallization model is a holistic approach for evaluating team dynamic task performance in conjunction with team situation awareness, considering physical system dynamics and team behavioral dynamics for a tactical and dynamic task at a nuclear power plant. The model provides a systematic measure to evaluate time-dependent team effectiveness or performance affected by multiple agents such as plant states, communication quality in terms of transferring situation-specific information and strategies for achieving the team task goal at a given time, and organizational factors. To demonstrate the applicability of the proposed model and its quantification method, a case study was carried out using data obtained from a full-scope power plant simulator for 1,000 MWe pressurized water reactors, with four on-the-job operating groups and one expert group who knew the accident sequences. Simulated results for team dynamic task performance, referenced against key plant parameter behavior, the team-specific organizational center of gravity, and the cue-and-response matrix, showed good agreement with observed values. The team crystallization model will be a useful and effective tool for evaluating team effectiveness, for example when recruiting new operating teams for new plants in a cost-effective manner. Also, this model can be utilized as a systematic analysis tool for

  9. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
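
    A minimal sketch of the convective part of such a scheme: a one-dimensional finite-volume update with the Godunov numerical flux, which handles the non-monotone hindered-settling flux correctly. Compression, dispersion, the feed source, and the bulk flows are omitted (so this is batch settling with closed boundaries); the flux function and all parameters are illustrative, not a calibrated SST model.

```python
import numpy as np

def godunov_flux(fl, ul, ur, samples=50):
    """Godunov flux for a scalar conservation law: min of f over [ul, ur]
    if ul <= ur, max over [ur, ul] otherwise (sampled; fine for a sketch)."""
    u = np.linspace(min(ul, ur), max(ul, ur), samples)
    return fl(u).min() if ul <= ur else fl(u).max()

v0, u_max, n_exp = 1e-3, 0.02, 3      # illustrative Vesilind-type parameters
flux = lambda u: v0 * u * (1 - u / u_max) ** n_exp   # hindered settling

N, depth = 100, 4.0
dz = depth / N
u = np.full(N, 0.005)                 # initial solids concentration profile
dt = 0.4 * dz / v0                    # CFL-limited time step
for _ in range(500):
    F = np.zeros(N + 1)               # zero flux through tank top and bottom
    for i in range(1, N):
        F[i] = godunov_flux(flux, u[i - 1], u[i])
    u -= dt / dz * (F[1:] - F[:-1])   # conservative finite-volume update
print("total solids mass (conserved): %.6f" % (u.sum() * dz))
```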

  10. Accounting for methodological, structural, and parameter uncertainty in decision-analytic models: a practical guide.

    Science.gov (United States)

    Bilcke, Joke; Beutels, Philippe; Brisson, Marc; Jit, Mark

    2011-01-01

    Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations, such as when a source of uncertainty is difficult to parameterize, when resources are limited for an ideal exploration of uncertainty, or when evidence to inform the model is not available or not reliable. Methods for identifying the sources of uncertainty that influence results most are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.
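
    Parameter uncertainty is usually propagated through such models by Monte Carlo simulation (probabilistic sensitivity analysis): draw each parameter from its distribution, re-run the model, and summarise the distribution of results. A minimal sketch for a hypothetical two-strategy decision model; every distribution and value below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical parameter distributions (second-order uncertainty).
p_event = rng.beta(20, 80, n)                    # event risk, no intervention
rel_risk = rng.lognormal(np.log(0.7), 0.1, n)    # intervention effect
cost_int = rng.gamma(100, 5, n)                  # intervention cost
cost_event = rng.gamma(50, 100, n)               # cost of an event
qaly_loss = rng.beta(2, 8, n)                    # QALYs lost per event

# Incremental cost and effect of intervening vs. not intervening.
avoided = p_event * (1 - rel_risk)               # events prevented per person
d_cost = cost_int - avoided * cost_event
d_effect = avoided * qaly_loss

wtp = 50_000                                     # willingness to pay per QALY
inb = wtp * d_effect - d_cost                    # incremental net benefit
print("P(intervention cost-effective) = %.2f" % (inb > 0).mean())
```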

  11. An optimisation methodology of artificial neural network models for predicting solar radiation: a case study

    Science.gov (United States)

    Rezrazi, Ahmed; Hanini, Salah; Laidi, Maamar

    2016-02-01

    The right design and high efficiency of solar energy systems require accurate information on the availability of solar radiation. Due to the cost of purchasing and maintaining radiometers, these data are not readily available, so there is a need to develop alternative ways of generating them. Artificial neural networks (ANNs) are excellent and effective tools for learning, pinpointing or generalising data regularities, as they have the ability to model nonlinear functions; they can also cope with complex 'noisy' data. The main objective of this paper is to show how to reach an optimal ANN model for application in the prediction of solar radiation. Measured data for the year 2007 in the city of Ghardaïa (Algeria) are used to demonstrate the optimisation methodology. The performance evaluation and the comparison of the ANN model results with measured data are made on the basis of the mean absolute percentage error (MAPE). It is found that the MAPE of the optimal ANN model reaches 1.17 %. This model also yields a root mean square error (RMSE) of 14.06 % and an MBE of 0.12. The accuracy of the outputs exceeded 97 % and reached up to 99.29 %. The results obtained indicate that the optimisation strategy satisfies practical requirements; it can successfully be generalised for any location in the world and be used in fields other than solar radiation estimation.
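
    The three error measures used in the evaluation, MAPE, RMSE, and MBE, are quick to compute. A minimal sketch with synthetic measured/predicted series; note the paper reports RMSE as a percentage (i.e., normalised), whereas plain physical units are used here.

```python
import numpy as np

measured = np.array([450., 520., 610., 580., 495.])   # W/m^2, illustrative
predicted = np.array([442., 533., 605., 569., 501.])

err = predicted - measured
mape = np.mean(np.abs(err / measured)) * 100   # mean absolute % error
rmse = np.sqrt(np.mean(err ** 2))              # root mean square error
mbe = np.mean(err)                             # mean bias error (sign matters)
print(f"MAPE {mape:.2f}%  RMSE {rmse:.2f} W/m^2  MBE {mbe:.2f} W/m^2")
```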

  12. Development of a new damage function model for power plants: Methodology and applications

    International Nuclear Information System (INIS)

    Levy, J.I.; Hammitt, J.K.; Yanagisawa, Y.; Spengler, J.D.

    1999-01-01

    Recent models have estimated the environmental impacts of power plants, but differences in assumptions and analytical methodologies have led to diverging findings. In this paper, the authors present a new damage function model that synthesizes previous efforts and refines components that have been associated with variations in impact estimates. Their model focuses on end-use emissions and quantifies the direct human health impacts of criteria air pollutants. To compare their model to previous efforts and to evaluate potential policy applications, the authors assess the impacts of an oil- and natural gas-fueled cogeneration power plant in Boston, MA. Impacts under baseline assumptions are estimated to be $0.007/kWh of electricity, $0.23/klb of steam, and $0.004/ton-h of chilled water (representing 2--9% of the market value of outputs). Impacts are largely related to ozone (48%) and particulate matter (42%). Addition of upstream emissions and non-public-health impacts increases externalities by as much as 50%. Sensitivity analyses demonstrate the importance of plant siting, meteorological conditions, epidemiological assumptions, and the monetary value placed on premature mortality, as well as the potential influence of global warming. Comparative analyses demonstrate that their model provides reasonable impact estimates and would therefore be applicable in a broad range of policy settings

  13. MODELING OF EXTRUSION PROCESS USING RESPONSE SURFACE METHODOLOGY AND ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    NEELAM SHIHANI

    2006-06-01

    Full Text Available Artificial neural networks are a powerful tool for modeling the extrusion processing of food materials. Wheat flour and a wheat-black soybean blend (95:5) were extruded in a single-screw Brabender extruder with varying temperature (120 and 140 °C), dry-basis moisture content (18 and 20%) and screw speed (156, 168, 180, 192 and 204 rpm). The specific mechanical energy, water absorption index, water solubility index, expansion ratio and sensory characteristics (crispness, hardness, appearance and overall acceptability) were measured. Well-expanded products could be obtained from wheat flour as well as from the wheat-black soybean blend. The results showed that the artificial neural network (ANN) models performed better than the response surface methodology (RSM) models in describing the extrusion process and the characteristics of the extruded product in terms of specific mechanical energy requirement, expansion ratio, water absorption index and water solubility index, as well as the sensory characteristics. The ANN models were better than the RSM models both for the individual and for the pooled data of wheat flour and wheat-black soybean extrusion.

  14. Testing the methodology for site descriptive modelling. Application for the Laxemar area

    International Nuclear Information System (INIS)

    Andersson, Johan; Berglund, Johan; Follin, Sven; Hakami, Eva; Halvarson, Jan; Hermanson, Jan; Laaksoharju, Marcus; Rhen, Ingvar; Wahlgren, C.H.

    2002-08-01

    A special project has been conducted where the currently available data from the Laxemar area, which is part of the Simpevarp site, have been evaluated and interpreted into a Site Descriptive Model covering: geology, hydrogeology, hydrogeochemistry and rock mechanics. Description of the surface ecosystem has been omitted, since it was re-characterised in another, parallel, project. Furthermore, there has been no evaluation of transport properties. The project is primarily a methodology test. The lessons learnt will be implemented in the Site Descriptive Modelling during the coming site investigation. The intent of the project has been to explore whether available methodology for Site Descriptive Modelling based on surface and borehole data is adequate and to identify potential needs for development and improvement in the methodology. The project has developed, with limitations in scope, a Site Descriptive Model in local scale, corresponding to the situation after completion of the Initial Site Investigations for the Laxemar area (i.e. 'version 1.2' using the vocabulary of the general execution program for the site investigations). The Site Descriptive Model should be reasonable, but should not be regarded as a 'real' model. There are limitations both in input data and in the scope of the analysis. The measured (primary) data constitute a wide range of different measurement results including data from two deep core drilled boreholes. These data both need to be checked for consistency and to be interpreted into a format more amenable for three-dimensional modelling. Examples of such evaluations are estimation of surface geology, lineament interpretation, geological single hole interpretation, hydrogeological single hole interpretation and assessment of hydrogeochemical data. Furthermore, while cross discipline interpretation is encouraged there is also a need for transparency. This means that the evaluations first are made within each discipline and after this

  15. A Model-Based Prognostics Methodology For Electrolytic Capacitors Based On Electrical Overstress Accelerated Aging

    Data.gov (United States)

    National Aeronautics and Space Administration — A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical...
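
    A minimal sketch of the idea behind Kalman-filter-based prognostics: track a degradation state (here, percentage capacitance loss following an illustrative empirical exponential-growth model), then extrapolate the filtered state to a failure threshold to obtain the remaining useful life (RUL). All parameters are invented for illustration and are not the study's empirical model.

```python
import numpy as np

# Illustrative empirical degradation model: loss grows by factor a per step.
a, q, r = 1.02, 1e-4, 0.25          # growth rate, process & measurement noise
threshold = 20.0                    # % capacitance loss defined as failure

rng = np.random.default_rng(0)
true = 1.0
x, P = 1.0, 1.0                     # filter state (loss %) and its variance
for t in range(60):                 # accelerated-aging measurements
    true *= a * (1 + rng.normal(0, 0.005))
    z = true + rng.normal(0, np.sqrt(r))
    # scalar Kalman filter: predict with the degradation model, then update
    x, P = a * x, a * a * P + q
    K = P / (P + r)
    x, P = x + K * (z - x), (1 - K) * P

# RUL: aging steps until the extrapolated state crosses the threshold.
rul = np.log(threshold / x) / np.log(a)
print(f"loss now {x:.2f}%, predicted RUL ~ {rul:.0f} aging steps")
```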

  16. Integrated Methodology for Information System Change Control Based on Enterprise Architecture Models

    Directory of Open Access Journals (Sweden)

    Pirta Ruta

    2015-12-01

    Full Text Available The information system (IS) change management and governance are, according to best practices, defined and described in several international methodologies, standards, and frameworks (ITIL, COBIT, ValIT, etc.). These methodologies describe IS change management aspects from the viewpoint of their particular enterprise resource management area. The areas are mainly viewed in a partly isolated environment, and the integration of the existing methodologies is insufficient for providing unified and controlled methodological support for holistic IS change management. In this paper, an integrated change management methodology is introduced. The methodology consists of guidelines for IS change control that integrate the following significant resource management areas – information technology (IT) governance, change management and enterprise architecture (EA) change management. In addition, the methodology includes lists of controls applicable at different phases. The approach is based on the re-use and fusion of principles used by related methodologies as well as on empirical observations about typical IS change management mistakes in enterprises.

  17. A Model-based Prognostics Methodology for Electrolytic Capacitors Based on Electrical Overstress Accelerated Aging

    Data.gov (United States)

    National Aeronautics and Space Administration — A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical...

  18. Starting with ABC and Finishing with XYZ: What Financial Reporting Model Best Fits a Faculty and Why?

    Science.gov (United States)

    Berry, Prudence Jane

    2014-01-01

    This article looks at the range of financial reporting models available for use in the Australian higher education sector, the possible application of activity-based costing (ABC) in faculties and the eventual rejection of ABC in favour of a more qualitative model designed specifically for use in one institution, in a particular Faculty. The…

  19. Incorporation of ice sheet models into an Earth system model: Focus on methodology of coupling

    Science.gov (United States)

    Rybak, Oleg; Volodin, Evgeny; Morozova, Polina; Nevecherja, Artiom

    2018-03-01

    Elaboration of a modern Earth system model (ESM) requires incorporation of ice sheet dynamics. Coupling of an ice sheet model (ICM) to an AOGCM is complicated by essential differences in the spatial and temporal scales of the cryospheric, atmospheric and oceanic components. To overcome this difficulty, we apply two different approaches for the incorporation of ice sheets into an ESM. Coupling of the Antarctic ice sheet model (AISM) to the AOGCM is accomplished via procedures of resampling, interpolation and assignment to the AISM grid points of annually averaged values of the air surface temperature and precipitation fields generated by the AOGCM. Surface melting, which takes place mainly on the margins of the Antarctic Peninsula and on the ice shelves fringing the continent, is currently ignored. The AISM returns anomalies of surface topography back to the AOGCM. To couple the Greenland ice sheet model (GrISM) to the AOGCM, we use a simple buffer energy- and water-balance model (EWBM-G) to account for orographically driven precipitation and other sub-grid AOGCM-generated quantities. The output of the EWBM-G consists of the surface mass balance and air surface temperature to force the GrISM, and freshwater run-off to force thermohaline circulation in the oceanic block of the AOGCM. Because the coupling procedure for the GrIS is rather complex compared to that for the AIS, the paper mostly focuses on Greenland.
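
    A minimal sketch of the AISM-side coupling step described above: resample a coarse annual-mean AOGCM field onto a finer ice-sheet grid by bilinear interpolation. The grids and the temperature field are illustrative, not the model's actual configuration.

```python
import numpy as np

def bilinear_regrid(field, src_x, src_y, dst_x, dst_y):
    """Bilinearly interpolate field[src_y, src_x] onto (dst_y, dst_x)."""
    out = np.empty((len(dst_y), len(dst_x)))
    for j, y in enumerate(dst_y):
        jy = np.clip(np.searchsorted(src_y, y) - 1, 0, len(src_y) - 2)
        ty = (y - src_y[jy]) / (src_y[jy + 1] - src_y[jy])
        for i, x in enumerate(dst_x):
            ix = np.clip(np.searchsorted(src_x, x) - 1, 0, len(src_x) - 2)
            tx = (x - src_x[ix]) / (src_x[ix + 1] - src_x[ix])
            out[j, i] = ((1-ty)*(1-tx)*field[jy, ix]
                         + (1-ty)*tx*field[jy, ix+1]
                         + ty*(1-tx)*field[jy+1, ix]
                         + ty*tx*field[jy+1, ix+1])
    return out

# Coarse AOGCM annual-mean surface air temperature (illustrative, K).
src_x, src_y = np.linspace(0, 1000, 6), np.linspace(0, 1000, 6)   # km
T_coarse = 250 + 0.01 * np.add.outer(src_y, src_x)
dst = np.linspace(0, 1000, 51)                # finer ice-sheet model grid
T_fine = bilinear_regrid(T_coarse, src_x, src_y, dst, dst)
print(T_fine.shape, float(T_fine.min()), float(T_fine.max()))
```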

  20. Review of Project SAFE: Comments on biosphere conceptual model description and risk assessment methodology

    International Nuclear Information System (INIS)

    Klos, Richard; Wilmot, Roger

    2002-09-01

    The Swedish Nuclear Fuel and Waste Management Company's (SKB's) most recent assessment of the safety of the Forsmark repository for low-level and intermediate-level waste (Project SAFE) is currently undergoing review by the Swedish regulators. As part of its review, the Swedish Radiation Protection Institute (SSI) identified that two components of SAFE require more detailed review: (i) the conceptual model description of the biosphere system, and (ii) SKB's risk assessment methodology. We have reviewed the biosphere system interaction matrix and how this has been used in the identification, justification and description of biosphere models for radiological assessment purposes. The risk assessment methodology has been reviewed considering in particular issues associated with scenario selection, assessment timescale, and the probability and risk associated with the well scenario. There is an extensive range of supporting information on which biosphere modelling in Project SAFE is based. However, the link between this material and the biosphere models themselves is not clearly set out. This leads to some contradictions and mis-matches between description and implementation. One example concerns the representation of the geosphere-biosphere interface. The supporting description of lakes indicates that interaction between groundwaters entering the biosphere through lake bed sediments could lead to accumulations of radionuclides in sediments. These sediments may become agricultural areas at some time in the future. In the numerical modelling of the biosphere carried out in Project SAFE, the direct accumulation of contaminants in bed sediments is not represented. Application of a more rigorous procedure to ensure numerical models are fit for purpose is recommended, paying more attention to issues associated with the geosphere-biosphere interface. A more structured approach to risk assessment would be beneficial, with a better explanation of the difference between

  1. Review of Project SAFE: Comments on biosphere conceptual model description and risk assessment methodology

    Energy Technology Data Exchange (ETDEWEB)

    Klos, Richard; Wilmot, Roger [Galson Sciences Ltd (United Kingdom)

    2002-09-01

    The Swedish Nuclear Fuel and Waste Management Company's (SKB's) most recent assessment of the safety of the Forsmark repository for low-level and intermediate-level waste (Project SAFE) is currently undergoing review by the Swedish regulators. As part of its review, the Swedish Radiation Protection Institute (SSI) identified that two components of SAFE require more detailed review: (i) the conceptual model description of the biosphere system, and (ii) SKB's risk assessment methodology. We have reviewed the biosphere system interaction matrix and how this has been used in the identification, justification and description of biosphere models for radiological assessment purposes. The risk assessment methodology has been reviewed considering in particular issues associated with scenario selection, assessment timescale, and the probability and risk associated with the well scenario. There is an extensive range of supporting information on which biosphere modelling in Project SAFE is based. However, the link between this material and the biosphere models themselves is not clearly set out. This leads to some contradictions and mis-matches between description and implementation. One example concerns the representation of the geosphere-biosphere interface. The supporting description of lakes indicates that interaction between groundwaters entering the biosphere through lake bed sediments could lead to accumulations of radionuclides in sediments. These sediments may become agricultural areas at some time in the future. In the numerical modelling of the biosphere carried out in Project SAFE, the direct accumulation of contaminants in bed sediments is not represented. Application of a more rigorous procedure to ensure numerical models are fit for purpose is recommended, paying more attention to issues associated with the geosphere-biosphere interface. A more structured approach to risk assessment would be beneficial, with a better explanation of the difference

  2. Digital System Categorization Methodology to Support Integration of Digital Instrumentation and Control Models into PRAs

    International Nuclear Information System (INIS)

    Arndt, Steven A.

    2011-01-01

    It has been suggested that by categorizing the various digital systems used in safety-critical applications in nuclear power plants, it would be possible to determine which systems should be modeled in the analysis of the larger plant-wide PRA, at what level of detail the digital system should be modeled, and using which methods. The research reported in this paper develops a categorization method using system attributes to permit a modeler to more effectively model the systems that will likely have the most critical contributions to overall plant safety and to more effectively model system interactions for those digital systems where the interactions are most important to the overall accuracy and completeness of the plant PRA. The proposed methodology categorizes digital systems based on certain attributes of the systems themselves and how they will be used in the specific application. This will help determine which digital systems need to be modeled and at what level of detail, and can be used to guide PRA analysis and regulatory reviews. The three-attribute categorization strategy that was proposed by Arndt is used as the basis for the categorization methodology developed here. The first attribute, digital system complexity, is based on Type II interactions defined by Aldemir and an overall digital system size and complexity index. The size and complexity indices used are previously defined software complexity metrics. Potential sub-attributes of digital system complexity include design complexity, software complexity, hardware complexity, system function complexity and testability. The second attribute, digital system interactions/inter-connectivity, is a combination of Rushby's coupling and Aldemir's Type I interactions. Digital systems that are loosely coupled and/or have very few Type I interactions would not interact dynamically with the overall system and would have a low interactions/inter-connectivity score. Potential sub-attributes of digital system

  3. Digital System Categorization Methodology to Support Integration of Digital Instrumentation and Control Models into PRAs

    Energy Technology Data Exchange (ETDEWEB)

    Arndt, Steven A. [U.S. Nuclear Regulatory Commission, Washington D.C. (United States)

    2011-08-15

    It has been suggested that by categorizing the various digital systems used in safety-critical applications in nuclear power plants, it would be possible to determine which systems should be modeled in the analysis of the larger plant-wide PRA, at what level of detail the digital system should be modeled, and using which methods. The research reported in this paper develops a categorization method using system attributes to permit a modeler to more effectively model the systems that will likely have the most critical contributions to overall plant safety and to more effectively model system interactions for those digital systems where the interactions are most important to the overall accuracy and completeness of the plant PRA. The proposed methodology categorizes digital systems based on certain attributes of the systems themselves and how they will be used in the specific application. This will help determine which digital systems need to be modeled and at what level of detail, and can be used to guide PRA analysis and regulatory reviews. The three-attribute categorization strategy that was proposed by Arndt is used as the basis for the categorization methodology developed here. The first attribute, digital system complexity, is based on Type II interactions defined by Aldemir and an overall digital system size and complexity index. The size and complexity indices used are previously defined software complexity metrics. Potential sub-attributes of digital system complexity include design complexity, software complexity, hardware complexity, system function complexity and testability. The second attribute, digital system interactions/inter-connectivity, is a combination of Rushby's coupling and Aldemir's Type I interactions. Digital systems that are loosely coupled and/or have very few Type I interactions would not interact dynamically with the overall system and would have a low interactions/inter-connectivity score. Potential sub-attributes of

  4. Risk methodology for geologic disposal of radioactive waste: asymptotic properties of the environmental transport model

    International Nuclear Information System (INIS)

    Helton, J.C.; Brown, J.B.; Iman, R.L.

    1981-02-01

    The Environmental Transport Model is a compartmental model developed to represent the surface movement of radionuclides. The purpose of the present study is to investigate the asymptotic behavior of the model and to acquire insight with respect to such behavior and the variables which influence it. For four variations of a hypothetical river receiving a radionuclide discharge, the following properties are considered: predicted asymptotic values for environmental radionuclide concentrations and time required for environmental radionuclide concentrations to reach 90% of their predicted asymptotic values. Independent variables of two types are used to define each variation of the river: variables which define physical properties of the river system (e.g., soil depth, river discharge and sediment resuspension) and variables which summarize radionuclide properties (i.e., distribution coefficients). Sensitivity analysis techniques based on stepwise regression are used to determine the dominant variables influencing the behavior of the model. This work constitutes part of a project at Sandia National Laboratories funded by the Nuclear Regulatory Commission to develop a methodology to assess the risk associated with geologic disposal of radioactive waste
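
    A minimal sketch of the asymptotic analysis for a linear compartmental model dx/dt = A x + b: for a stable transfer matrix A, the asymptotic concentrations are x* = -A⁻¹ b, and the time to reach 90% of x* can be read off a simulation. The three-compartment system (water, sediment, soil) and all rates below are invented for illustration.

```python
import numpy as np

# Hypothetical transfer-rate matrix (1/yr) for water/sediment/soil
# compartments, and a constant discharge vector b (source into water).
A = np.array([[-1.2,  0.05,  0.00],
              [ 0.8, -0.10,  0.00],
              [ 0.1,  0.00, -0.02]])
b = np.array([5.0, 0.0, 0.0])

x_star = -np.linalg.solve(A, b)       # predicted asymptotic concentrations

# Time for each compartment to reach 90% of its asymptotic value.
x, dt, t = np.zeros(3), 0.01, 0.0
t90 = [None] * 3
while not all(t90):
    x = x + dt * (A @ x + b)          # explicit Euler step
    t += dt
    for i in range(3):
        if t90[i] is None and x[i] >= 0.9 * x_star[i]:
            t90[i] = t
print("asymptotic:", x_star.round(2), " t90 (yr):",
      [round(v, 1) for v in t90])
```

    The slowest compartment (soil, loss rate 0.02/yr here) dominates the time to asymptote, which is the kind of sensitivity the stepwise-regression analysis in the study is designed to expose.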

  5. Mathematical model of marine diesel engine simulator for a new methodology of self propulsion tests

    Science.gov (United States)

    Izzuddin, Nur; Sunarsih; Priyanto, Agoes

    2015-05-01

    As a vessel operates in the open seas, a marine diesel engine simulator whose engine rotation is controlled and transmitted through the propeller shaft offers a new methodology for self-propulsion tests that tracks fuel saving in real time. In this context, this paper presents a real-time marine diesel engine simulator system that tracks the real performance of a ship through a computer-simulated model. A mathematical model of the marine diesel engine and the propeller is used in the simulation to estimate the fuel rate, engine rotating speed, and the thrust and torque of the propeller, and thus achieve the target vessel speed. The inputs and outputs form a real-time control system of fuel saving rate and propeller rotating speed representing the marine diesel engine characteristics. Self-propulsion tests in calm water were conducted using a vessel model to validate the marine diesel engine simulator. The simulator was then used to evaluate the fuel saving by employing a new mathematical model of turbochargers for the marine diesel engine simulator. The control system developed will be beneficial for users in analyzing different vessel speed conditions to obtain better characteristics and hence optimize the fuel saving rate.

  6. Mathematical model of marine diesel engine simulator for a new methodology of self propulsion tests

    International Nuclear Information System (INIS)

    Izzuddin, Nur; Sunarsih; Priyanto, Agoes

    2015-01-01

    As a vessel operates in the open seas, a marine diesel engine simulator whose engine rotation is controlled and transmitted through the propeller shaft offers a new methodology for self-propulsion tests that tracks fuel saving in real time. Accordingly, this paper presents a real-time marine diesel engine simulator system that tracks the performance of a ship through a computer-simulated model. A mathematical model of the marine diesel engine and the propeller is used in the simulation to estimate the fuel rate, the engine rotating speed, and the thrust and torque of the propeller, and thus achieve the target vessel speed. The inputs and outputs form a real-time control system of fuel saving rate and propeller rotating speed representing the marine diesel engine characteristics. Self-propulsion tests in calm water were conducted using a vessel model to validate the marine diesel engine simulator. The simulator was then used to evaluate the fuel saving by employing a new mathematical model of turbochargers for the marine diesel engine simulator. The control system developed will be beneficial for users analyzing different vessel-speed conditions to obtain better characteristics and hence optimize the fuel saving rate.

  7. Methodological issues in cardiovascular epidemiology: the risk of determining absolute risk through statistical models

    Directory of Open Access Journals (Sweden)

    Demosthenes B Panagiotakos

    2006-09-01

    Full Text Available Demosthenes B Panagiotakos, Vassilis Stavrinos, Office of Biostatistics, Epidemiology, Department of Dietetics, Nutrition, Harokopio University, Athens, Greece. Abstract: During the past years there has been increasing interest in the development of cardiovascular disease functions that predict future events at the individual level. However, this effort has so far not been very successful, since several investigators have reported large differences in the estimation of absolute risk among different populations. For example, it seems that predictive models derived from US or north European populations overestimate the incidence of cardiovascular events in south European and Japanese populations. A potential explanation could be attributed to several factors, such as geographical, cultural, social, behavioral, as well as genetic variations between the investigated populations, in addition to various methodological and statistical issues relating to the estimation of these predictive models. Based on the current literature it can be concluded that, while risk prediction of future cardiovascular events is a useful tool and might be valuable in controlling the burden of the disease in a population, further work is required to improve the accuracy of the present predictive models. Keywords: cardiovascular disease, risk, models
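
    One commonly proposed remedy for the transportability problem described above is to recalibrate the imported model's baseline risk to the local incidence. The sketch below does this for a logistic model; recalibration is a standard technique rather than necessarily the authors' recommendation, and all coefficients and data are invented:

```python
import numpy as np

# Schematic recalibration of an "imported" logistic risk model to a local
# population; coefficients and data are invented for illustration.
rng = np.random.default_rng(0)
beta = np.array([0.05, 0.02, 0.6])        # age, SBP, smoking (assumed model)
b0_original = -9.0                        # intercept from the source cohort

X = np.column_stack([rng.normal(55, 10, 5000),     # age
                     rng.normal(130, 15, 5000),    # systolic BP
                     rng.integers(0, 2, 5000)])    # smoking

local_incidence = 0.05                    # observed event rate locally

def mean_risk(b0):
    return np.mean(1.0 / (1.0 + np.exp(-(b0 + X @ beta))))

# Bisection on the intercept so mean predicted risk matches local incidence.
lo, hi = -15.0, -3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_risk(mid) < local_incidence else (lo, mid)

print(f"original mean risk {mean_risk(b0_original):.3f}, "
      f"recalibrated intercept {mid:.2f}")
```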

  8. Development of a system dynamics model based on Six Sigma methodology

    Directory of Open Access Journals (Sweden)

    José Jovani Cardiel Ortega

    2017-01-01

    Full Text Available A dynamic model to analyze the complexity associated with manufacturing systems and to improve process performance through the Six Sigma philosophy is proposed. The research focuses on the implementation of the system dynamics tool to comply with each of the phases of the DMAIC methodology. In the first phase, define, the problem is articulated by collecting data, selecting the variables, and representing them in a mental map that helps build the dynamic hypothesis. In the second phase, measure, the model is formulated, equations are developed, and the Forrester diagram is drawn to carry out the simulation. In the third phase, analyze, the simulation results are studied. In the fourth phase, improve, the model is validated through a sensitivity analysis. Finally, in the control phase, operation policies are proposed. This paper presents the development of a dynamic model of a knitted textile production system; the implementation was done in a textile company in southern Guanajuato. The results show an improvement in process performance by increasing the sigma level, allowing the validation of the proposed approach.
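
    A stock-and-flow fragment of the kind a Forrester diagram encodes can be simulated in a few lines. The production quantities below are invented, not the published textile model:

```python
import numpy as np

# Minimal stock-and-flow fragment (Forrester-diagram style), illustrative
# of the 'measure' phase equations; not the published textile model.
dt, T = 0.25, 120.0                     # step (days) and horizon
steps = int(T / dt)

wip = 50.0                              # stock: work in process (units)
demand = 40.0                           # inflow of new orders (units/day)
capacity = 45.0                         # maximum production (units/day)
defect_rate = 0.08                      # fraction of output reworked

for _ in range(steps):
    production = min(capacity, wip / 1.0)        # outflow, min. 1-day cycle
    good_output = production * (1.0 - defect_rate)
    rework = production * defect_rate            # defects re-enter WIP
    wip += (demand + rework - production) * dt   # stock integrates net flow

print(f"steady WIP {wip:.1f} units, throughput {good_output:.1f} units/day")
```

    Lowering `defect_rate` in this loop is the sensitivity experiment that corresponds to raising the sigma level in the improve phase.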

  9. Mathematical model of marine diesel engine simulator for a new methodology of self propulsion tests

    Energy Technology Data Exchange (ETDEWEB)

    Izzuddin, Nur; Sunarsih; Priyanto, Agoes [Faculty of Mechanical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor (Malaysia)]

    2015-05-15

    As a vessel operates in the open seas, a marine diesel engine simulator whose engine rotation is controlled and transmitted through the propeller shaft offers a new methodology for self-propulsion tests that tracks fuel saving in real time. Accordingly, this paper presents a real-time marine diesel engine simulator system that tracks the performance of a ship through a computer-simulated model. A mathematical model of the marine diesel engine and the propeller is used in the simulation to estimate the fuel rate, the engine rotating speed, and the thrust and torque of the propeller, and thus achieve the target vessel speed. The inputs and outputs form a real-time control system of fuel saving rate and propeller rotating speed representing the marine diesel engine characteristics. Self-propulsion tests in calm water were conducted using a vessel model to validate the marine diesel engine simulator. The simulator was then used to evaluate the fuel saving by employing a new mathematical model of turbochargers for the marine diesel engine simulator. The control system developed will be beneficial for users analyzing different vessel-speed conditions to obtain better characteristics and hence optimize the fuel saving rate.

  10. ABOUT THE RELEVANCE AND METHODOLOGY ASPECTS OF TEACHING THE MATHEMATICAL MODELING TO PEDAGOGICAL STUDENTS

    Directory of Open Access Journals (Sweden)

    Y. A. Perminov

    2014-01-01

    Full Text Available The paper substantiates the need for profile training in mathematical modeling for pedagogical students, caused by the total penetration of mathematics into different sciences, including the humanities; the fast development of information and communications technologies; and the growing importance of mathematical modeling, combining the informal scientific and formal mathematical languages with the unique opportunities of computer programming. The author singles out the reasons for teachers in every discipline to master and use the mathematical apparatus. Indeed, among all the modern mathematical methods and ideas, mathematical modeling retains its priority in all professional spheres. Therefore, the discipline of "Mathematical Modeling" can play an important role in integrating different components of specialist training in various profiles. By mastering the basics of mathematical modeling, students acquire skills of methodological thinking; learn the principles of analysis, synthesis, and generalization of ideas and methods in different disciplines and scientific spheres; and acquire general cultural competences. In conclusion, the author recommends incorporating the "Methods of Profile Training in Mathematical Modeling" into the pedagogical magistracy curricula.

  11. Partial least squares path modeling basic concepts, methodological issues and applications

    CERN Document Server

    Noonan, Richard

    2017-01-01

    This edited book presents the recent developments in partial least squares-path modeling (PLS-PM) and provides a comprehensive overview of the current state of the most advanced research related to PLS-PM. The first section of this book emphasizes the basic concepts and extensions of the PLS-PM method. The second section discusses the methodological issues that are the focus of the recent development of the PLS-PM method. The third part discusses the real world application of the PLS-PM method in various disciplines. The contributions from expert authors in the field of PLS focus on topics such as the factor-based PLS-PM, the perfect match between a model and a mode, quantile composite-based path modeling (QC-PM), ordinal consistent partial least squares (OrdPLSc), non-symmetrical composite-based path modeling (NSCPM), modern view for mediation analysis in PLS-PM, a multi-method approach for identifying and treating unobserved heterogeneity, multigroup analysis (PLS-MGA), the assessment of the common method b...

  12. A case study in data audit and modelling methodology-Australia

    International Nuclear Information System (INIS)

    Apelbaum, John

    2009-01-01

    The purpose of the paper is to outline a rigorous, spatially consistent and cost-effective transport planning tool that projects travel demand, energy and emissions for all modes associated with domestic and international transport. The planning tool (Aus e Tran) is a multi-modal, multi-fuel and multi-regional macroeconomic and demographic-based computational model of the Australian transport sector that overcomes some of the gaps associated with existing strategic level transport emission models. The paper also identifies a number of key data issues that need to be resolved prior to model development with particular reference to the Australian environment. The strategic model structure endogenously derives transport demand, energy and emissions by jurisdiction, vehicle type, emission type and transport service for both freight and passenger transport. Importantly, the analytical framework delineates the national transport task, energy consumed and emissions according to region, state/territory of origin and jurisdictional protocols, provides an audit mechanism for the evaluation of the methodological framework, integrates a mathematical protocol to derive time series FFC emission factors and allows for the impact of non-registered road vehicles on transport, fuel and emissions.
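
    The core accounting step of such a strategic model can be sketched as activity times energy intensity times full-fuel-cycle (FFC) emission factor, summed over jurisdictions, modes and fuels. All figures below are invented, not Australian data:

```python
import numpy as np

# Schematic bottom-up transport accounting:
#   emissions = activity x energy intensity x FFC emission factor.
modes = ["road_freight", "rail_freight", "domestic_air"]
activity = np.array([120e9, 60e9, 8e9])        # tonne-km per year (invented)
intensity = np.array([1.8, 0.3, 9.0])          # MJ per tonne-km (invented)
ef = np.array([69.9, 69.9, 71.1])              # g CO2 per MJ, full fuel cycle

energy = activity * intensity                  # MJ per year
emissions = energy * ef / 1e12                 # Mt CO2 per year

for m, e in zip(modes, emissions):
    print(f"{m:>14}: {e:6.1f} Mt CO2")
print(f"{'total':>14}: {emissions.sum():6.1f} Mt CO2")
```

    The audit mechanism the paper describes amounts to checking that these mode-level totals reconcile with jurisdictional fuel sales and registration data.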

  13. Modeling and process optimization of electrospinning of chitosan-collagen nanofiber by response surface methodology

    Science.gov (United States)

    Amiri, Nafise; Moradi, Ali; Abolghasem Sajjadi Tabasi, Sayyed; Movaffagh, Jebrail

    2018-04-01

    Chitosan-collagen composite nanofiber is of great interest to researchers in biomedical fields. Since electrospinning is the most popular method for nanofiber production, a comprehensive knowledge of the electrospinning process is beneficial. Modeling techniques are precious tools for managing variables in the electrospinning process, prior to the more time-consuming and expensive experimental techniques. In this study, a central composite design of response surface methodology (RSM) was employed to develop a statistical model as well as to define the optimum condition for fabrication of chitosan-collagen nanofiber with minimum diameter. The individual and interaction effects of applied voltage (10-25 kV), flow rate (0.5-1.5 mL/h), and needle-to-collector distance (15-25 cm) on the fiber diameter were investigated. ATR-FTIR and a cell study were done to evaluate the optimized nanofibers. According to the RSM, a two-factor interaction (2FI) model was the most suitable model. The high regression coefficient value (R2 >= 0.9666) of the fitted regression model and the insignificant lack of fit (P = 0.0715) indicated that the model was highly adequate in predicting chitosan-collagen nanofiber diameter. The optimization process showed that a chitosan-collagen nanofiber diameter of 156.05 nm could be obtained at 9 kV, 0.2 mL/h, and 25 cm, which was confirmed by experiment (155.92 +/- 18.95 nm). The ATR-FTIR and cell study confirmed the structure and biocompatibility of the optimized membrane. The presented model could assist researchers in fabricating chitosan-collagen electrospun scaffolds with a predictable fiber diameter, and the optimized chitosan-collagen nanofibrous mat could be a potential candidate for wound healing and tissue engineering.
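
    A two-factor-interaction surface of this kind can be fitted and minimized as in the following sketch, which uses synthetic data rather than the study's measurements:

```python
import numpy as np
from itertools import combinations

# Fit a 2FI model  y = b0 + sum_i bi*xi + sum_{i<j} bij*xi*xj  to invented
# runs in coded units, then minimize the fitted surface on a grid.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 3))       # coded voltage, flow, distance
y = (200 - 15*X[:, 0] + 25*X[:, 1] - 10*X[:, 2]
     + 8*X[:, 0]*X[:, 1] + rng.normal(0, 5, 20))   # synthetic diameters (nm)

def design_matrix(X):
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
    return np.column_stack(cols)

b, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Brute-force minimum of the fitted surface over the coded region [-1, 1]^3.
g = np.linspace(-1, 1, 41)
grid = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
pred = design_matrix(grid) @ b
print("min predicted diameter %.1f nm at coded point %s"
      % (pred.min(), grid[pred.argmin()].round(2)))
```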

  14. An Innovative Structural Mode Selection Methodology: Application for the X-33 Launch Vehicle Finite Element Model

    Science.gov (United States)

    Hidalgo, Homero, Jr.

    2000-01-01

    An innovative methodology for determining structural target mode selection, based on a specific criterion, is presented. An effective approach to single out modes which interact with specific locations on a structure has been developed for the X-33 Launch Vehicle Finite Element Model (FEM). The Root-Sum-Square (RSS) displacement method presented here computes the resultant modal displacement for each mode at selected degrees of freedom (DOF) and sorts the results to locate the modes with the highest values. This method was used to determine the modes which most influenced specific locations/points on the X-33 flight vehicle, such as avionics control components, aero-surface control actuators, propellant valve and engine points, for use in flight control stability analysis and in flight POGO stability analysis. Additionally, the modal RSS method allows primary or global target vehicle modes to be identified in an accurate and efficient manner.
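
    The RSS ranking itself reduces to a few array operations. The mode shapes below are random stand-ins for the X-33 FEM eigenvectors:

```python
import numpy as np

# Sketch of the RSS mode-selection idea: for each mode, take the
# root-sum-square of its eigenvector entries at the DOFs of interest
# and rank the modes by that score.
rng = np.random.default_rng(2)
n_dof, n_modes = 300, 40
phi = rng.normal(size=(n_dof, n_modes))     # columns = mode shapes (stand-ins)

target_dofs = [12, 13, 14, 200, 201]        # e.g. an actuator attachment point

rss = np.sqrt((phi[target_dofs, :] ** 2).sum(axis=0))   # one score per mode
ranking = np.argsort(rss)[::-1]

print("most influential modes at the target DOFs:", ranking[:5])
```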

  15. Cognitive models of executive functions development: methodological limitations and theoretical challenges

    Directory of Open Access Journals (Sweden)

    Florencia Stelzer

    2014-01-01

    Full Text Available Executive functions (EF have been defined as a series of higher-order cognitive processes which allow the control of thought, behavior and affect according to the achievement of a goal. Such processes show a lengthy postnatal development which matures completely only by the end of adolescence. In this article we review some of the main models of EF development during childhood. The aim of this work is to describe the state of the art on the topic, identifying the main theoretical difficulties and methodological limitations associated with the different proposed paradigms. Finally, some suggestions are given for coping with such difficulties, emphasizing that the development of an ontology of EF could be a viable alternative to counter them. We believe that future research should be directed toward the development of that ontology.

  16. Investigating Communicative Models in French as a Foreign Language Classroom: Methodological Issues

    Directory of Open Access Journals (Sweden)

    Yiboe Kofi Tsivanyo

    2013-10-01

    Full Text Available This paper outlines some methodological challenges in investigating the communicative models of teachers and students in French language classrooms in some Senior High Schools in the Cape Coast metropolis, Ghana. The data collection procedure for this study focused on natural settings and on the use of objective views of Ghanaian belief systems in the investigation process, in order to structure the research and to avoid manipulating the study variables. The database consisted of classroom activities as well as extensive interviews with some former students on a year-abroad linguistic programme at the University of Strasbourg, France. The results showed that language usage in the French classroom was controlled by teachers. However, the strategies used by teachers could contribute to effective language teaching if cultural dimensions were taken into consideration.

  17. Testing the methodology for site descriptive modelling. Application for the Laxemar area

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Johan [JA Streamflow AB, Aelvsjoe (Sweden); Berglund, Johan [SwedPower AB, Stockholm (Sweden); Follin, Sven [SF Geologic AB, Stockholm (Sweden); Hakami, Eva [Itasca Geomekanik AB, Stockholm (Sweden); Halvarson, Jan [Swedish Nuclear Fuel and Waste Management Co, Stockholm (Sweden); Hermanson, Jan [Golder Associates AB, Stockholm (Sweden); Laaksoharju, Marcus [Geopoint (Sweden); Rhen, Ingvar [Sweco VBB/VIAK, Stockholm (Sweden); Wahlgren, C.H. [Sveriges Geologiska Undersoekning, Uppsala (Sweden)

    2002-08-01

    A special project has been conducted where the currently available data from the Laxemar area, which is part of the Simpevarp site, have been evaluated and interpreted into a Site Descriptive Model covering: geology, hydrogeology, hydrogeochemistry and rock mechanics. Description of the surface ecosystem has been omitted, since it was re-characterised in another, parallel, project. Furthermore, there has been no evaluation of transport properties. The project is primarily a methodology test. The lessons learnt will be implemented in the Site Descriptive Modelling during the coming site investigation. The intent of the project has been to explore whether the available methodology for Site Descriptive Modelling based on surface and borehole data is adequate, and to identify potential needs for development and improvement in the methodology. The project has developed, with limitations in scope, a Site Descriptive Model at local scale, corresponding to the situation after completion of the Initial Site Investigations for the Laxemar area (i.e. 'version 1.2' using the vocabulary of the general execution programme for the site investigations). The Site Descriptive Model should be reasonable, but should not be regarded as a 'real' model. There are limitations both in input data and in the scope of the analysis. The measured (primary) data constitute a wide range of different measurement results, including data from two deep core-drilled boreholes. These data both need to be checked for consistency and interpreted into a format more amenable to three-dimensional modelling. Examples of such evaluations are estimation of surface geology, lineament interpretation, geological single-hole interpretation, hydrogeological single-hole interpretation and assessment of hydrogeochemical data. Furthermore, while cross-discipline interpretation is encouraged, there is also a need for transparency. This means that the evaluations first are made within each discipline

  18. Integrated active and passive control design methodology for the LaRC CSI evolutionary model

    Science.gov (United States)

    Voth, Christopher T.; Richards, Kenneth E., Jr.; Schmitz, Eric; Gehling, Russel N.; Morgenthaler, Daniel R.

    1994-01-01

    A general design methodology to integrate active control with passive damping was demonstrated on the NASA LaRC CSI Evolutionary Model (CEM), a ground testbed for future large, flexible spacecraft. Vibration suppression controllers designed for Line-of-Sight (LOS) minimization were successfully implemented on the CEM. A frequency-shaped H2 methodology was developed, allowing the designer to specify the roll-off of the MIMO compensator. A closed-loop bandwidth of 4 Hz, including the six rigid-body modes and the first three dominant elastic modes of the CEM, was achieved. Good agreement was demonstrated between experimental data and analytical predictions for the closed-loop frequency response and random tests. Using the Modal Strain Energy (MSE) method, a passive damping treatment consisting of 60 viscoelastically damped struts was designed, fabricated and implemented on the CEM. Damping levels for the targeted modes were more than an order of magnitude larger than for the undamped structure. Using measured loss and stiffness data for the individual damped struts, analytical predictions of the damping levels were very close to the experimental values in the 1-10 Hz frequency range, where the open-loop model matched the experimental data. An integrated active/passive controller was successfully implemented on the CEM and was evaluated against an active-only controller. A two-fold increase in the effective control bandwidth and further reductions of 30 percent to 50 percent in the LOS RMS outputs were achieved compared to an active-only controller. Superior performance was also obtained compared to a High-Authority/Low-Authority (HAC/LAC) controller.

  19. Site-conditions map for Portugal based on VS measurements: methodology and final model

    Science.gov (United States)

    Vilanova, Susana; Narciso, João; Carvalho, João; Lopes, Isabel; Quinta Ferreira, Mario; Moura, Rui; Borges, José; Nemser, Eliza; Pinto, Carlos

    2017-04-01

    In this paper we present a statistically significant site-condition model for Portugal based on shear-wave velocity (VS) data and surface geology. We also evaluate the performance of commonly used Vs30 proxies based on exogenous data and analyze the implications of using those proxies for calculating site amplification in seismic hazard assessment. The dataset contains 161 Vs profiles acquired in Portugal in the context of research projects, technical reports, academic theses and academic papers. The methodologies involved in characterizing the Vs structure at the sites in the database include seismic refraction, multichannel analysis of surface waves and refraction microtremor. Invasive measurements were performed at selected locations in order to compare the Vs profiles obtained from both invasive and non-invasive techniques. In general there was good agreement in the subsurface Vs structure obtained from the different methodologies. The database flat-file includes information on Vs30, surface geology at 1:50.000 and 1:500.000 scales, and elevation and topographic slope based on the SRTM30 topographic dataset. The procedure used to develop the site-conditions map is based on a three-step process that includes defining a preliminary set of geological units based on the literature, performing statistical tests to assess whether or not the differences in the distributions of Vs30 are statistically significant, and merging the geological units accordingly. The dataset was, to some extent, affected by clustering and/or preferential sampling, and therefore a declustering algorithm was applied. The final model includes three geological units: 1) Igneous, metamorphic and old (Paleogene and Mesozoic) sedimentary rocks; 2) Neogene and Pleistocene formations, and 3) Holocene formations. The evaluation of proxies indicates that although geological analogues and topographic slope are in general unbiased, the latter shows significant bias for particular geological units and
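
    The statistical-testing step of the three-step process can be sketched with a nonparametric two-sample test. The abstract does not name the test used, so the Mann-Whitney U below is an assumption, and the Vs30 samples are invented:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Sketch of the unit-merging step: test whether two candidate geological
# units have distinguishable Vs30 distributions; merge them if not.
rng = np.random.default_rng(3)
vs30_neogene = rng.lognormal(np.log(350), 0.25, 40)    # m/s, invented
vs30_holocene = rng.lognormal(np.log(200), 0.25, 30)   # m/s, invented

stat, p = mannwhitneyu(vs30_neogene, vs30_holocene)
print("merge units" if p >= 0.05 else "keep units separate", f"(p = {p:.3g})")
```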

  20. A methodological proposal to contribute to the development of research skills in science education to start the design of a didactic unit built on foundations of scientific and technological literacy

    Directory of Open Access Journals (Sweden)

    Andrés Felipe Velásquez Mosquera

    2013-10-01

    Full Text Available This paper seeks to open a discussion of the need to promote the training of investigative skills in students of the natural sciences through a methodology structured around the design of the course plan, including a didactic unit, built on foundations of scientific and technological literacy. It is the result of several years of the author's teaching and research experience in the field of science didactics.

  1. Application of Binomial Model and Market Asset Declaimer Methodology for Valuation of Abandon and Expand Options. The Case Study

    Directory of Open Access Journals (Sweden)

    Paweł Mielcarz

    2007-06-01

    Full Text Available The article presents a case study of the valuation of real options included in an investment project. The main goal of the article is to present the computational and methodological issues of applying the methodology for real option valuation. For this purpose, the binomial model and the Market Asset Declaimer methodology are used. The project presented in the article concerns the introduction of a radio station to a new market. It includes two valuable real options: to abandon the project and to expand.
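
    The backward induction with both options can be sketched in a Cox-Ross-Rubinstein tree as below. All inputs are invented, not the article's case data; in the Market Asset Declaimer spirit, the project value itself serves as the underlying:

```python
import numpy as np

# CRR binomial valuation with abandon and expand options (illustrative).
V0, sigma, r, T, n = 100.0, 0.35, 0.05, 3.0, 36
salvage = 70.0                      # value recovered on abandonment
expand_cost, expand_k = 30.0, 1.5   # pay 30 to scale the project by 1.5x

dt = T / n
u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
p = (np.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
disc = np.exp(-r * dt)

j = np.arange(n + 1)
V = V0 * u**j * d**(n - j)          # terminal project values
W = np.maximum.reduce([V, np.full_like(V, salvage), expand_k*V - expand_cost])

for step in range(n - 1, -1, -1):   # backward induction with early exercise
    j = np.arange(step + 1)
    V = V0 * u**j * d**(step - j)
    cont = disc * (p * W[1:] + (1 - p) * W[:-1])
    W = np.maximum.reduce([cont, np.full_like(V, salvage),
                           expand_k * V - expand_cost])

print(f"project with options: {W[0]:.2f} (option premium {W[0] - V0:.2f})")
```

    Refining the tree (larger n) converges to the continuous-time option value; the premium W[0] - V0 is the combined worth of the abandon and expand flexibilities.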

  2. A novel methodology to model the cooling processes of packed horticultural produce using 3D shape models

    Science.gov (United States)

    Gruyters, Willem; Verboven, Pieter; Rogge, Seppe; Vanmaercke, Simon; Ramon, Herman; Nicolai, Bart

    2017-10-01

    Freshly harvested horticultural produce require proper temperature management to maintain their high economic value. To this end, low-temperature storage is of crucial importance for maintaining high product quality. Optimizing both the package design of packed produce and the different steps in the postharvest cold chain can be achieved by numerical modelling of the relevant transport phenomena. This work presents a novel methodology to accurately model both the random filling of produce in a package and the subsequent cooling process. First, a cultivar-specific database of more than 100 realistic CAD models of apple and pear fruit is built with a validated geometrical 3D shape model generator. To obtain an accurate representation of a realistic picking season, the model generator also takes into account the biological variability of the produce shape. Next, a discrete element model (DEM) randomly chooses surface-meshed bodies from the database to simulate the gravitational filling process of produce in a box or bin, using actual mechanical properties of the fruit. A computational fluid dynamics (CFD) model is then developed with the final stacking arrangement of the produce to study the cooling efficiency of packages under several conditions and configurations. Here, a typical precooling operation is simulated to demonstrate the large differences between using actual 3D shapes of the fruit and an equivalent-spheres approach that simplifies the problem drastically. From this study, it is concluded that using a simplified representation of the actual fruit shape may lead to a severe overestimation of the cooling behaviour.

  3. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    Science.gov (United States)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problems of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low
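
    An off-the-shelf analogue of the mapping F can be obtained with a generic Isomap. This is only a sketch of the dimension-reduction step on random stand-in data, not the paper's own graph-theoretic construction:

```python
import numpy as np
from sklearn.manifold import Isomap

# Each row stands for one microstructure realization, e.g. a flattened
# property field; the data here lie on a hidden 1-parameter manifold.
rng = np.random.default_rng(4)
n_samples, n_pixels = 200, 1024
theta = rng.uniform(0, 2 * np.pi, n_samples)      # hidden low-dim coordinate
base = rng.normal(size=(2, n_pixels))
M = np.outer(np.cos(theta), base[0]) + np.outer(np.sin(theta), base[1])

F = Isomap(n_neighbors=8, n_components=2)         # F : M -> A, with d << n
A = F.fit_transform(M)
print(A.shape)    # (200, 2): low-dimensional stochastic input coordinates
```

    The low-dimensional coordinates in A are what would then be sampled (e.g. at sparse-grid collocation points) as stochastic inputs to the governing PDEs.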

  4. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    International Nuclear Information System (INIS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-01-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problems of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d << n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology

  5. Towards a sharp-interface volume-of-fluid methodology for modeling evaporation

    Science.gov (United States)

    Pathak, Ashish; Raessi, Mehdi

    2017-11-01

    In modeling evaporation, the diffuse-interface (one-domain) formulation yields inaccurate results. Recent efforts approaching the problem via a sharp-interface (two-domain) formulation have shown significant improvements; the reasons behind their better performance are discussed in the present work. All available sharp-interface methods, however, exclusively employ the level-set method. In the present work, we develop a sharp-interface evaporation model in a volume-of-fluid (VOF) framework in order to leverage its mass-conserving property as well as its ability to handle large topological changes. We start with a critical review of the assumptions underlying the mathematical equations governing evaporation. For example, it is shown that the assumption of incompressibility can only be applied in special circumstances. The famous D2 law used for benchmarking applies exclusively to steady-state test problems, while transient effects persist over a significant part of the lifetime of a micron-sized droplet. Therefore, a 1D spherical, fully transient model is developed to provide a benchmark transient solution. Finally, a 3D Cartesian Navier-Stokes evaporation solver is developed. Some preliminary validation test cases are presented for static and moving drop evaporation. This material is based upon work supported by the Department of Energy, Office of Energy Efficiency and Renewable Energy and the Department of Defense, Tank and Automotive Research, Development, and Engineering Center, under Award Number DEEE0007292.
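
    For reference, the D2 law mentioned above states that the droplet surface area shrinks linearly in time. In its classical quasi-steady form (symbols per the standard derivation, not taken from this abstract):

```latex
% Classical quasi-steady d-squared law used as a benchmark:
D^2(t) \;=\; D_0^2 \;-\; K\,t,
\qquad
K \;=\; \frac{8\,k_g}{\rho_l\,c_{p,g}}\,\ln\!\left(1 + B_T\right)
```

    where D_0 is the initial diameter, k_g and c_{p,g} are the gas-phase conductivity and specific heat, rho_l is the liquid density, and B_T is the Spalding heat transfer number. A fully transient solver must instead resolve the approach to this regime, which is the point of the 1D benchmark above.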

  6. The dynamics of structures - Necessity and methodology for amendment by comparing the calculated model with experimental model

    International Nuclear Information System (INIS)

    Caneparo, B.; Zirilli, S.

    1987-01-01

    In this work, relating to support structures for seismic tests, the authors present a mixed procedure requiring the experimental measurement of natural frequencies, dampings, and the response to impulse excitation (in the case of a seismic excitation, the subject of this study, a single impulse is sufficient) in the zone in question. The experimental measurements are used to adjust the finite element model, which may then be used for later studies. In the presence of interaction with structures not included in the model, such as the fixtures used for the actual test, the model cannot be adjusted by the methods proposed, and it is up to the experienced analyst to introduce the modifications judged appropriate to account for everything that is not part of the model. The authors have, however, developed a program based on the local modification of Young's modulus, which uses only natural frequencies, useful in the adjustment process. Once a poorly modelled zone has been found, this program enables the value of E to be optimized as a function of the experimental data, while also providing an estimate of the residual differences. Dynamic tests have shown that the model thus obtained can be refined by the forced response to an impulse excitation. In addition to setting out the theories and formulae used, the authors report on the verification of the methodology using a plate, and on its application to a frame-shaped support structure for seismic tests. The appendices include both experimental measurements and tests. The modal analysis was carried out with particular care in view of the methodology verification phase.
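
    The adjustment idea reduces to a small least-squares problem. The sketch below exploits the fact that the natural frequencies of a uniform structure scale with sqrt(E); the "measured" data are invented and this is not the authors' program:

```python
import numpy as np
from scipy.optimize import least_squares

# Tune a zone's Young's modulus so model frequencies match measured ones.
f_measured = np.array([12.1, 75.4, 210.3])    # Hz, invented test data
f_model_E0 = np.array([11.2, 70.1, 196.0])    # Hz at nominal modulus E0

def residuals(scale):
    # frequencies scale as sqrt(E / E0) for a uniform stiffness change
    return f_model_E0 * np.sqrt(scale) - f_measured

sol = least_squares(residuals, x0=[1.0], bounds=(0.1, 10.0))
print(f"optimal E/E0 = {sol.x[0]:.3f}, residual RMS = "
      f"{np.sqrt(np.mean(sol.fun**2)):.2f} Hz")
```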

  7. A Model-Based Methodology for Simultaneous Design and Control of a Bioethanol Production Process

    DEFF Research Database (Denmark)

    Alvarado-Morales, Merlin; Abd.Hamid, Mohd-Kamaruddin; Sin, Gürkan

    2010-01-01

    In this work, a framework for the simultaneous solution of design and control problems is presented. Within this framework, two methodologies are presented: the integrated process design and controller design (IPDC) methodology and the process-group contribution (PGC) methodology. The concepts of attainable region (AR), driving force (DF), process-group (PG) and reverse simulation are used within these methodologies. The IPDC methodology is used to find the optimal design-control strategy of a process by locating the maximum point in the AR and DF diagrams for reactor and separator, respectively. The PGC methodology is used to generate more efficient separation designs in terms of energy consumption by targeting the separation task at the largest DF. Both methodologies are highlighted through the application of two case studies, a bioethanol production process and a succinic acid production...

  8. MODELING AND STRUCTURING OF ENTERPRISE MANAGEMENT SYSTEM RESORT SPHERE BASED ON ELEMENTS OF NEURAL NETWORK THEORY: THE METHODOLOGICAL BASIS

    Directory of Open Access Journals (Sweden)

    Rena R. Timirualeeva

    2015-01-01

    Full Text Available The article describes the methodological basis for modeling and structuring enterprise management systems based on elements of neural network theory. It accounts for environmental factors at the mega-, macro- and meso-levels, the internal state of the managed system, and errors in the execution of management commands by the control system. The proposed methodology can improve the quality of management of resort-sphere enterprises through a more flexible response to changes in the parameters of the internal and external environments.

  9. Methodology study for documentation and 3D modelling of blast induced fractures

    Energy Technology Data Exchange (ETDEWEB)

    Olsson, Mats (Swebrec - Swedish Blasting Research Centre, Luleaa (Sweden)); Markstroem, Ingemar; Pettersson, Anders (Golder Associates (Sweden))

    2008-05-15

    The purpose of this activity, as part of the Zuse project, was to test whether it is possible to produce a 3D model of blast-induced fractures around a tunnel and also to find a methodology suitable for large-scale studies. The purpose of the studies is to increase the understanding of the excavation damage zone (EDZ) and the possibility of an existing continuous EDZ along the tunnel. For the investigation, an old test area in the Q tunnel at the Aespoe Hard Rock Laboratory was selected, where slabs were excavated in 2003 to investigate the fracture pattern around the contour holes of a blasted tunnel. The rock walls of the excavated niche were studied and documented in the tunnel, while the excavated rock slabs were documented above ground. The work flow included photo documentation of both sides. The photos taken in the tunnel had to be rectified, and then the fractures were vectorized automatically in a vectorization program, generating AutoCad DWG files as output. The vectorized fractures were then moved to MicroStation/RVS, where they were interpreted and connected into continuous line strings. The digitized slab and rock sides were then moved to the correct position in 3D space. Finally, a 3D model was made in RVS, where the fracture traces were connected into undulating fracture planes in 3D. The conclusion is that it is possible to build a 3D model; the model is presented in Chapter 3.5. However, the age and condition of the slabs may have influenced the quality of the model in this study. The quality of a model built in a future investigation should be much better if the surveys are adapted to the investigation at hand and the slabs and rock sides are fresh and in better condition. The validity of a model depends on the density of the investigation data. There is also always a risk of over-interpretation; the wish to identify a fracture from one section to the next can lead to an interpretation of the fractures as more persistent than they actually

  10. Methodology study for documentation and 3D modelling of blast induced fractures

    International Nuclear Information System (INIS)

    Olsson, Mats; Markstroem, Ingemar; Pettersson, Anders

    2008-05-01

    The purpose of this activity, as part of the Zuse project, was to test whether it is possible to produce a 3D model of blast-induced fractures around a tunnel and also to find a methodology suitable for large-scale studies. The purpose of the studies is to increase the understanding of the excavation damage zone (EDZ) and the possibility of an existing continuous EDZ along the tunnel. For the investigation, an old test area in the Q tunnel at the Aespoe Hard Rock Laboratory was selected, where slabs were excavated in 2003 to investigate the fracture pattern around the contour holes of a blasted tunnel. The rock walls of the excavated niche were studied and documented in the tunnel, while the excavated rock slabs were documented above ground. The work flow included photo documentation of both sides. The photos taken in the tunnel had to be rectified, and then the fractures were vectorized automatically in a vectorization program, generating AutoCad DWG files as output. The vectorized fractures were then moved to MicroStation/RVS, where they were interpreted and connected into continuous line strings. The digitized slab and rock sides were then moved to the correct position in 3D space. Finally, a 3D model was made in RVS, where the fracture traces were connected into undulating fracture planes in 3D. The conclusion is that it is possible to build a 3D model; the model is presented in Chapter 3.5. However, the age and condition of the slabs may have influenced the quality of the model in this study. The quality of a model built in a future investigation should be much better if the surveys are adapted to the investigation at hand and the slabs and rock sides are fresh and in better condition. The validity of a model depends on the density of the investigation data. There is also always a risk of over-interpretation; the wish to identify a fracture from one section to the next can lead to an interpretation of the fractures as more persistent than they actually

  11. Study on an ISO 15926 based data modeling methodology for nuclear power industry

    Energy Technology Data Exchange (ETDEWEB)

    Cheon, Yang Ho; Park, Byeong Ho; Park, Seong Chan; Kim, Eun Kee [KEPCO E-C, Yongin (Korea, Republic of)

    2014-10-15

    The scope is therefore data integration and data to support the whole life of a plant. This representation is specified by a generic, conceptual Data Model (DM) that is independent of any particular application, but that is able to record data from the applications used in plant design, fabrication and operation. The data model is designed to be used in conjunction with Reference Data (RD): standard instances of the DM that represent information common to a number of users, plants, or both. This paper introduces a high-level description of the structure of ISO 15926 and how it can be adapted to the nuclear power plant industry in particular, including how to extend the existing Reference Data Library (RDL) for the nuclear power industry. As the ISO 15926 representation is independent of applications, interfaces to existing or future applications have to be developed. Such interfaces are provided by Templates that take input from external sources and 'lift' it into an ISO 15926 repository, and/or 'lower' the data into other applications, similar to the process defined by the W3C. Data exchange can be done using e.g. XML messages, but the modelling is independent of the technology used for the exchange.

  12. The Cultural Analysis of Soft Systems Methodology and the Configuration Model of Organizational Culture

    Directory of Open Access Journals (Sweden)

    Jürgen Staadt

    2015-06-01

    Full Text Available Organizations that find themselves within a problematic situation connected with cultural issues such as politics and power require adaptable research and corresponding modeling approaches so as to grasp the arrangements of that situation and their impact on the organizational development. This article originates from an insider-ethnographic intervention into the problematic situation of the leading public housing provider in Luxembourg. Its aim is to describe how the more action-oriented cultural analysis of soft systems methodology and the theory-driven configuration model of organizational culture are mutually beneficial rather than contradictory. The data collected between 2007 and 2013 were analyzed manually as well as by means of ATLAS.ti. Results demonstrate that the cultural analysis enables an in-depth understanding of the power-laden environment within the organization bringing about the so-called “socio-political system” and that the configuration model makes it possible to depict the influence of that system on the whole organization. The overall research approach thus contributes toward a better understanding of the influence and the impact of oppressive social environments and evolving power relations on the development of an organization.

  13. Climate Change Modeling Methodology Selected Entries from the Encyclopedia of Sustainability Science and Technology

    CERN Document Server

    2012-01-01

    The Earth's average temperature has risen by 1.4°F over the past century, and computer models project that it will rise much more over the next hundred years, with significant impacts on weather, climate, and human society. Many climate scientists attribute these increases to the buildup of greenhouse gases produced by the burning of fossil fuels and to the anthropogenic production of short-lived climate pollutants. Climate Change Modeling Methodologies: Selected Entries from the Encyclopedia of Sustainability Science and Technology provides readers with an introduction to the tools and analysis techniques used by climate change scientists to interpret the role of these forcing agents on climate.  Readers will also gain a deeper understanding of the strengths and weaknesses of these models and how to test and assess them.  The contributions include a glossary of key terms and a concise definition of the subject for each topic, as well as recommendations for sources of more detailed information. Features au...

  14. Alcohol, psychomotor-stimulants and behaviour: methodological considerations in preclinical models of early-life stress.

    Science.gov (United States)

    McDonnell-Dowling, Kate; Miczek, Klaus A

    2018-04-01

    In order to assess the risk associated with early-life stress, there has been an increase in the number of preclinical studies investigating early-life stress. There are many challenges associated with investigating early-life stress in animal models and with ensuring that such models are appropriate and clinically relevant. The purpose of this review is to highlight the methodological considerations in the design of preclinical studies investigating the effects of early-life stress on alcohol and psychomotor-stimulant intake and behaviour. The protocols employed for exploring early-life stress were investigated and summarised. Experimental variables include the animals, stress models, and endpoints employed. The findings in this paper suggest that there is little consistency among these studies, and so the interpretation of these results may not be as clinically relevant as previously thought. The standardisation of these simple stress procedures means that results will be more comparable between studies and will give us a more robust understanding of what may be happening in the human and veterinary clinic.

  15. Processing of the GALILEO fuel rod code model uncertainties within the AREVA LWR realistic thermal-mechanical analysis methodology

    International Nuclear Information System (INIS)

    Mailhe, P.; Barbier, B.; Garnier, C.; Landskron, H.; Sedlacek, R.; Arimescu, I.; Smith, M.; Bellanger, P.

    2013-01-01

    The availability of reliable tools and an associated methodology able to accurately predict LWR fuel behavior in all conditions is of great importance for safe and economical fuel usage. For that purpose, AREVA has developed its new global fuel rod performance code GALILEO along with its associated realistic thermal-mechanical analysis methodology. This realistic methodology is based on a Monte Carlo type random sampling of all relevant input variables. After having outlined the AREVA realistic methodology, this paper focuses on the GALILEO code benchmarking process, on its extended experimental database and on the assessment of the GALILEO model uncertainties. The propagation of these model uncertainties through the AREVA realistic methodology is also presented. This processing of the GALILEO model uncertainties is of the utmost importance for accurate fuel design margin evaluation, as illustrated by some application examples. With the submittal of the GALILEO Topical Report to the U.S. NRC in 2013, GALILEO and its methodology are on the way to being used industrially in a wide range of irradiation conditions. (authors)
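
    The random-sampling step of such a realistic methodology can be caricatured in a few lines. The response function and distributions below are invented stand-ins, not GALILEO models:

```python
import numpy as np

# Monte Carlo propagation of model/input uncertainties (illustrative only).
rng = np.random.default_rng(5)
n = 10_000

# Sample uncertain parameters from assumed distributions.
conductivity = rng.normal(1.00, 0.05, n)     # relative fuel conductivity
gap_h = rng.lognormal(0.0, 0.10, n)          # relative gap conductance
power = rng.normal(1.00, 0.02, n)            # relative local power

def fuel_temperature(k, h, q):               # toy response, not a fuel code
    return 400.0 + 600.0 * q / k + 150.0 * q / h

T = fuel_temperature(conductivity, gap_h, power)
print(f"best estimate {np.median(T):.0f} C, "
      f"upper 95th percentile {np.percentile(T, 95):.0f} C")
```

    The spread between the median and the upper percentile is what sets the design margin in this style of analysis.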

  16. Starting physiology: bioelectrogenesis.

    Science.gov (United States)

    Baptista, Vander

    2015-12-01

    From a Cartesian perspective of rational analysis, the electric potential difference across the cell membrane is one of the fundamental concepts for the study of physiology. Unfortunately, undergraduate students often struggle to understand the genesis of this energy gradient, which makes the teaching activity a hard task for the instructor. The topic of bioelectrogenesis encompasses multidisciplinary concepts, involves several mechanisms, and is a dynamic process, i.e., it never turns off during the lifetime of the cell. Therefore, to improve the transmission and acquisition of knowledge in this field, I present an alternative didactic model. The design of the model assumes that it is possible to build, in a series of sequential steps, an assembly of proteins within the membrane of an isolated cell in a simulated electrophysiology experiment. Initially, no proteins are inserted in the membrane and the cell is at a baseline energy state; the extracellular and intracellular fluids are at thermodynamic equilibrium. Students are guided through a sequence of four steps that add key membrane transport proteins to the model cell. The model is simple at the start and becomes progressively more complex, finally producing transmembrane chemical and electrical gradients. I believe that this didactic approach helps instructors with a more efficient tool for the teaching of the mechanisms of resting membrane potential while helping students avoid common difficulties that may be encountered when learning this topic. Copyright © 2015 The American Physiological Society.
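
    A natural companion calculation to the didactic model is the Nernst potential for each ion, and then a GHK resting potential once all the transport proteins are in place. The sketch uses generic textbook-style concentrations and permeability ratios, not values from the article:

```python
import numpy as np

# Nernst potentials per ion, then a GHK resting potential.
R, T, F = 8.314, 310.0, 96485.0
conc = {"K":  (140.0, 5.0),      # (intracellular, extracellular) in mM
        "Na": (10.0, 145.0),
        "Cl": (10.0, 110.0)}

def nernst(ci, co, z=1):
    return 1000.0 * (R * T) / (z * F) * np.log(co / ci)   # mV

for ion, (ci, co) in conc.items():
    z = -1 if ion == "Cl" else 1
    print(f"E_{ion} = {nernst(ci, co, z):6.1f} mV")

# GHK equation with relative permeabilities pK : pNa : pCl (assumed ratios);
# note chloride concentrations swap sides because of its negative charge.
pK, pNa, pCl = 1.0, 0.05, 0.45
num = pK * conc["K"][1] + pNa * conc["Na"][1] + pCl * conc["Cl"][0]
den = pK * conc["K"][0] + pNa * conc["Na"][0] + pCl * conc["Cl"][1]
vm = 1000.0 * (R * T) / F * np.log(num / den)
print(f"GHK resting potential = {vm:.1f} mV")
```

    Re-running the GHK line after each of the model's four steps, with only the permeabilities added so far, mirrors how the membrane potential emerges as channels are inserted.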

  17. Prediction model for adult height of small for gestational age children at the start of growth hormone treatment

    NARCIS (Netherlands)

    M.A.J. de Ridder (Maria); Th. Stijnen (Theo); A.C.S. Hokken-Koelega (Anita)

    2008-01-01

    Context: GH treatment is approved for short children born small for gestational age (SGA). The optimal dose is not yet established. Objective: Our objective was to develop a model for prediction of height at the onset of puberty and of adult height (AH). Design and Setting: Two GH

  18. Prediction model for adult height of small for gestational age children at the start of growth hormone treatment

    NARCIS (Netherlands)

    de Ridder, Maria A. J.; Stijnen, Theo; Hokken-Koelega, Anita C. S.

    Context: GH treatment is approved for short children born small for gestational age (SGA). The optimal dose is not yet established. Objective: Our objective was to develop a model for prediction of height at the onset of puberty and of adult height (AH). Design and Setting: Two GH studies were

  19. A systematic methodology to extend the applicability of a bioconversion model for the simulation of various co-digestion scenarios

    DEFF Research Database (Denmark)

    Kovalovszki, Adam; Alvarado-Morales, Merlin; Fotidis, Ioannis

    2017-01-01

    Detailed simulation of anaerobic digestion (AD) requires complex mathematical models and the optimization of numerous model parameters. By performing a systematic methodology and identifying parameters with the highest impact on process variables in a well-established AD model, its applicability was extended to various co-digestion scenarios. More specifically, the application of the step-by-step methodology led to the estimation of a general and reduced set of parameters, for the simulation of scenarios where either manure or wastewater were co-digested with different organic substrates. Validation...

  20. Methodological approach for the estimation of a new velocity model for continental Ecuador

    Science.gov (United States)

    Luna, Marco P.; Staller, Alejandra; Toulkeridis, Theofilos; Parra, Humberto

    2017-12-01

    We used 33 stations belonging to the Ecuador Continuous Monitoring GNSS Network (REGME) during the period 2008-2014, with the aim of contributing a methodological approach for the estimation of a new velocity model for continental Ecuador. We used daily solutions to perform the analysis of the GNSS time series and to obtain the series models that best fit, taking into account the trend, the seasonal variations and the type of noise. The sum of all these components represents the real time series, giving a better estimation of the velocity parameter and its uncertainty. The velocities were calculated by introducing the trend, seasonality and noise present in each series into the overall model, which improved the uncertainty by 12% and changed the magnitudes by up to 1.7 mm/yr and 2.5 mm/yr in the horizontal and vertical components, respectively, with respect to the initial velocities. The velocity field describes the crustal movement of the REGME stations in mainland Ecuador with uncertainties of 1 mm/yr and 2 mm/yr for the horizontal and vertical components, respectively. Finally, a velocity model has been developed using the kriging technique, whose geostatistical approach is based on identifying the spatial characteristics of the data by examining the observations in pairs. The root mean square error (rms) of prediction obtained with this method is 1.78 mm/yr and 1.95 mm/yr in the east and north components, respectively. The vertical component could not be modeled due to its chaotic behavior.
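
    The per-station series model (trend plus annual and semi-annual terms) can be fitted by least squares as sketched below on synthetic data. Note that the white-noise uncertainty printed here is exactly the kind of bound the paper improves on by also modeling the noise type:

```python
import numpy as np

# Fit trend + annual + semi-annual terms to a synthetic daily east series.
rng = np.random.default_rng(6)
t = np.arange(0, 6.0, 1 / 365.25)                 # time in years
east = 5.2 * t + 1.5 * np.sin(2*np.pi*t + 0.3) + rng.normal(0, 1.0, t.size)

G = np.column_stack([np.ones_like(t), t,
                     np.sin(2*np.pi*t), np.cos(2*np.pi*t),
                     np.sin(4*np.pi*t), np.cos(4*np.pi*t)])
m, *_ = np.linalg.lstsq(G, east, rcond=None)
resid = east - G @ m

# OLS velocity uncertainty under a white-noise assumption (optimistic).
vel_sigma = np.sqrt(np.var(resid) * np.linalg.inv(G.T @ G)[1, 1])
print(f"velocity = {m[1]:.2f} +/- {vel_sigma:.2f} mm/yr (white-noise bound)")
```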

  1. FINANCIRANJE START - UP PODJETIJ

    OpenAIRE

    Kraner, Simona

    2016-01-01

    In this master's thesis we examine the specifics of start-up companies and their influence on the accessibility and suitability of individual sources of financing. After identifying and elaborating the principal specifics that generally apply to start-up companies, we turn to the study of the financing stages through which start-ups pass. Each of the seven typical financing stages of start-up companies is characterized in terms of the features that apply to start-ups in that specific development phase. The first half of the theoretical part concludes...

  2. A Methodological Demonstration of Set-theoretical Approach to Social Media Maturity Models Using Necessary Condition Analysis

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Despite being widely accepted and applied across research domains, maturity models have been criticized for lacking academic rigor; in particular, methodologically rigorous and empirically grounded or tested maturity models are quite rare. Attempting to close this gap, we adopt a set-theoretic approach...

  3. Integrating Social Activity Theory and Critical Discourse Analysis: A Multilayered Methodological Model for Examining Knowledge Mediation in Mentoring

    Science.gov (United States)

    Becher, Ayelet; Orland-Barak, Lily

    2016-01-01

    This study suggests an integrative qualitative methodological framework for capturing complexity in mentoring activity. Specifically, the model examines how historical developments of a discipline direct mentors' mediation of professional knowledge through the language that they use. The model integrates social activity theory and a framework of…

  4. The Industry 4.0 Journey: Start the Learning Journey with the Reference Architecture Model Industry 4.0

    DEFF Research Database (Denmark)

    Nardello, Marco; Møller, Charles; Gøtze, John

    2017-01-01

    The wave of the fourth industrial revolution (Industry 4.0) is breaking on manufacturing companies. In manufacturing, one of the buzzwords of the moment is "Smart production". Smart production involves manufacturing equipment with many sensors that can generate and transmit large amounts of data ... Reference Architecture Model Industry 4.0 (RAMI4.0) standard for Smart production. The instantiation contributed to organizational learning in the laboratory by collecting and sharing up-to-date information concerning manufacturing equipment.

  5. A methodology for the design and testing of atmospheric boundary layer models for wind energy applications

    Directory of Open Access Journals (Sweden)

    J. Sanz Rodrigo

    2017-02-01

    Full Text Available The GEWEX Atmospheric Boundary Layer Studies (GABLS 1, 2 and 3) are used to develop a methodology for the design and testing of Reynolds-averaged Navier–Stokes (RANS) atmospheric boundary layer (ABL) models for wind energy applications. The first two GABLS cases are based on idealized boundary conditions and are suitable for verification purposes by comparing with results from higher-fidelity models based on large-eddy simulation. Results from three single-column RANS models, of 1st, 1.5th and 2nd turbulence closure order, show high consistency in predicting the mean flow. The third GABLS case is suitable for the study of these ABL models under realistic forcing, such that validation against observations from the Cabauw meteorological tower is possible. The case consists of a diurnal cycle that leads to a nocturnal low-level jet and addresses fundamental questions related to the definition of the large-scale forcing, the interaction of the ABL with the surface and the evaluation of model results with observations. The simulations are evaluated in terms of surface-layer fluxes and wind energy quantities of interest: rotor equivalent wind speed, hub-height wind direction, wind speed shear and wind direction veer. The characterization of mesoscale forcing is based on spatially and temporally averaged momentum budget terms from Weather Research and Forecasting (WRF) simulations. These mesoscale tendencies are used to drive single-column models, which were verified previously in the first two GABLS cases, to first demonstrate that they can produce wind profile characteristics similar to the WRF simulations even though the physics are more simplified. The added value of incorporating different forcing mechanisms into microscale models is quantified by systematically removing forcing terms in the momentum and heat equations. This mesoscale-to-microscale modeling approach is affected, to a large extent, by the input uncertainties of the mesoscale
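
    Among the quantities of interest, the rotor equivalent wind speed (REWS) is a cube-weighted average of the wind speed over rotor-segment areas. The sketch below assumes the usual IEC-style definition, with illustrative geometry and profile values:

```python
import numpy as np

# Rotor-equivalent wind speed: cube-weight multi-height wind speeds by the
# rotor-disc area of the horizontal segment each measurement represents.
hub, R = 100.0, 50.0                               # hub height, rotor radius (m)
z = np.array([60.0, 80.0, 100.0, 120.0, 140.0])    # measurement heights (m)
u = np.array([6.8, 7.4, 7.9, 8.2, 8.5])            # wind speeds (m/s)
bounds = np.array([50.0, 70.0, 90.0, 110.0, 130.0, 150.0])  # segment edges

def disc_area_below(h):
    """Area of the rotor disc below height h (circular-segment formula)."""
    x = np.clip((h - hub) / R, -1.0, 1.0)
    return R**2 * (np.arcsin(x) + x * np.sqrt(1 - x**2)) + np.pi * R**2 / 2

A = np.diff([disc_area_below(b) for b in bounds])  # segment areas
rews = (np.sum(A / A.sum() * u**3)) ** (1.0 / 3.0)
print(f"REWS = {rews:.2f} m/s vs hub-height speed {u[2]:.2f} m/s")
```

    The cube weighting reflects that available power scales with the cube of wind speed, which is why shear and veer across the rotor matter for the evaluation.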

  6. Flying personal planes: modeling the airport choices of general aviation pilots using stated preference methodology.

    Science.gov (United States)

    Camasso, M J; Jagannathan, R

    2001-01-01

    This study employed stated preference (SP) models to determine why general aviation pilots choose to base and operate their aircraft at some airports and not others. Thirteen decision variables identified in pilot focus groups and in the general aviation literature were incorporated into a series of hypothetical choice tasks or scenarios. The scenarios were offered within a fractional factorial design to establish orthogonality and to preclude dominance in any combination of variables. Data from 113 pilots were analyzed for individual differences across pilots using conditional logit regression with and without controls. The results demonstrate that some airport attributes (e.g., full-range hospitality services, paved parallel taxiway, and specific types of runway lighting and landing aids) increase pilot utility. Heavy airport congestion and airport landing fees, on the other hand, decrease pilot utility. The importance of SP methodology as a vehicle for modeling choice behavior and as an input into the planning and prioritization process is discussed. Actual or potential applications include the development of structured decision-making instruments in the behavioral sciences and in human service programs.
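
    The core of the conditional logit machinery used in such studies is compact enough to sketch. In the hedged example below, the probability that a pilot chooses airport j is a softmax over linear utilities of airport attributes; the attribute matrix and taste coefficients are invented for illustration and are not the paper's estimates.

        # Conditional-logit choice probabilities over three hypothetical airports.
        # Attribute columns: hospitality services (0/1), paved parallel taxiway
        # (0/1), congestion index, landing fee ($). All numbers are made up.
        import numpy as np

        X = np.array([[1, 1, 0.2, 10.0],
                      [0, 1, 0.8,  0.0],
                      [1, 0, 0.5, 25.0]])
        beta = np.array([0.9, 0.6, -1.2, -0.05])  # assumed taste weights

        utility = X @ beta
        p = np.exp(utility - utility.max())        # numerically stable softmax
        p /= p.sum()
        print("choice probabilities:", p.round(3))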

  7. What Can Be Learned From a Laboratory Model of Conceptual Change? Descriptive Findings and Methodological Issues

    Science.gov (United States)

    Ohlsson, Stellan; Cosejo, David G.

    2014-07-01

    The problem of how people process novel and unexpected information—deep learning (Ohlsson in Deep learning: how the mind overrides experience. Cambridge University Press, New York, 2011)—is central to several fields of research, including creativity, belief revision, and conceptual change. Researchers have not converged on a single theory for conceptual change, nor has any one theory been decisively falsified. One contributing reason is the difficulty of collecting informative data in this field. We propose that the commonly used methodologies of historical analysis, classroom interventions, and developmental studies, although indispensable, can be supplemented with studies of laboratory models of conceptual change. We introduce re-categorization, an experimental paradigm in which learners transition from one definition of a categorical concept to another, incompatible definition of the same concept, a simple form of conceptual change. We describe a re-categorization experiment, report some descriptive findings pertaining to the effects of category complexity, the temporal unfolding of learning, and the nature of the learner's final knowledge state. We end with a brief discussion of ways in which the re-categorization model can be improved.

  8. Modeling and optimization of ammonia treatment by acidic biochar using response surface methodology

    Directory of Open Access Journals (Sweden)

    Narong Chaisongkroh

    2012-09-01

    Full Text Available Emission of ammonia (NH3) contaminated waste air to the atmosphere without treatment has affected humans and the environment, so eliminating NH3 in waste air emitted from industries is considered an environmental requisite. In this study, optimization of NH3 adsorption time using acidic rubber wood biochar (RWBs) impregnated with sulfuric acid (H2SO4) was investigated. The central composite design (CCD) in response surface methodology (RSM), implemented in the Design Expert software, was used for designing the experiments as well as for the full response surface estimation. The RSM was used to evaluate the effect of adsorption parameters in a continuous fixed-bed column, including waste air flow rate, inlet NH3 concentration in the waste air stream, and H2SO4 concentration for adsorbent surface modification. Based on statistical analysis, the model for NH3 symmetric adsorption time (at 50% NH3 removal efficiency) proved to be very highly significant (p<0.0001). The optimum conditions obtained were 300 ppmv inlet NH3 concentration, 72% H2SO4, and 2.1 l/min waste air flow rate. This resulted in 219 minutes of NH3 adsorption time as obtained from the predicted model, which fitted well with the laboratory verification result; this was supported by the high coefficient of determination (R2=0.9137). (NH4)2SO4, a nitrogen fertilizer for planting, was the by-product of the chemical adsorption between NH3 and H2SO4.
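
    The response-surface step itself reduces to fitting a full quadratic model in the coded factors by least squares. The sketch below uses the three factors named above but synthetic data, so it shows the mechanics of an RSM fit rather than the paper's CCD results.

        # Fit a full quadratic response surface y = b0 + sum(bi*xi) +
        # sum(bij*xi*xj) + sum(bii*xi^2) to synthetic data in coded units.
        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(20, 3))   # flow rate, NH3 conc., H2SO4 level
        y = 200 - 15 * X[:, 0] + 30 * X[:, 1] + rng.normal(0, 5, 20)  # fake times

        def quadratic_design(X):
            cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
            cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
            cols += [X[:, i] ** 2 for i in range(X.shape[1])]
            return np.column_stack(cols)

        coef, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
        print("fitted RSM coefficients:", coef.round(2))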

  9. Prognosis-a wearable health-monitoring system for people at risk: methodology and modeling.

    Science.gov (United States)

    Pantelopoulos, Alexandros; Bourbakis, Nikolaos G

    2010-05-01

    Wearable health-monitoring systems (WHMSs) represent the new generation of healthcare by providing real-time unobtrusive monitoring of patients' physiological parameters through the deployment of several on-body and even intrabody biosensors. Although several technological issues regarding WHMSs still need to be resolved before they become more applicable in real-life scenarios, it is expected that continuous ambulatory monitoring of vital signs will enable proactive personal health management and better treatment of patients suffering from chronic diseases, of the elderly population, and of emergency situations. In this paper, we present a physiological data fusion model for multisensor WHMSs called Prognosis. The proposed methodology is based on a fuzzy regular language for the generation of the prognoses of the health conditions of the patient, whereby the current state of the corresponding fuzzy finite-state machine signifies the current estimated health state and context of the patient. The operation of the proposed scheme is explained via detailed examples in hypothetical scenarios. Finally, a stochastic Petri net model of the human-device interaction is presented, which illustrates how additional health status feedback can be obtained from the WHMS user.

  10. A Solution Methodology and Computer Program to Efficiently Model Thermodynamic and Transport Coefficients of Mixtures

    Science.gov (United States)

    Ferlemann, Paul G.

    2000-01-01

    A solution methodology has been developed to efficiently model multi-specie, chemically frozen, thermally perfect gas mixtures. The method relies on the ability to generate a single (composite) set of thermodynamic and transport coefficients prior to beginning a CFD solution. While not fundamentally a new concept, this capability is unfamiliar to many applied CFD users, who lack a mechanism to easily and confidently generate new coefficients. A database of individual specie property coefficients has been created for 48 species. The seven-coefficient form of the thermodynamic functions is currently used rather than the ten-coefficient form due to the similarity of the calculated properties, low-temperature behavior and reduced CPU requirements. Sutherland laminar viscosity and thermal conductivity coefficients were computed in a consistent manner from available reference curves. A computer program has been written to provide CFD users with a convenient method to generate composite specie coefficients for any mixture. Mach 7 forebody/inlet calculations demonstrated nearly equivalent results and significant CPU time savings compared to a multi-specie solution approach. Results from high-speed combustor analysis also illustrate the ability to model inert test gas contaminants without additional computational expense.
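
    The two property forms named in the abstract are standard and easy to evaluate. Below, cp is computed from the seven-coefficient polynomial form (only the first five coefficients enter cp; the last two enter enthalpy and entropy), and viscosity from the Sutherland law. The N2 polynomial coefficients and air Sutherland constants are widely published values, quoted here for illustration.

        # Seven-coefficient thermodynamic form: cp/R = a1 + a2*T + a3*T^2
        # + a4*T^3 + a5*T^4. Sutherland law: mu = mu_ref*(T/T_ref)^1.5
        # * (T_ref + S)/(T + S).
        R = 8.314462618  # J/(mol K)

        def cp_seven_coeff(T, a):
            return R * (a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4)

        def mu_sutherland(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
            return mu_ref * (T / T_ref)**1.5 * (T_ref + S) / (T + S)

        # N2, 300-1000 K range (a1..a5):
        a_N2 = [3.298677, 1.4082404e-3, -3.963222e-6, 5.641515e-9, -2.444854e-12]
        print(cp_seven_coeff(300.0, a_N2), "J/(mol K);", mu_sutherland(300.0), "Pa s")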

  11. Methodologies for Verification and Validation of Space Launch System (SLS) Structural Dynamic Models

    Science.gov (United States)

    Coppolino, Robert N.

    2018-01-01

    Responses to challenges associated with verification and validation (V&V) of Space Launch System (SLS) structural dynamics models are presented in this paper. Four methodologies addressing specific requirements for V&V are discussed. (1) Residual Mode Augmentation (RMA), which has gained acceptance by various principals in the NASA community, defines efficient and accurate FEM modal sensitivity models that are useful in test-analysis correlation and reconciliation, and in parametric uncertainty studies. (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976), developed to remedy difficulties encountered with the widely used Classical Guyan Reduction (CGR) method, are presented. MGR and HR are particularly relevant for estimation of "body dominant" target modes of shell-type SLS assemblies that have numerous "body", "breathing" and local component constituents. Realities associated with configuration features and "imperfections" cause "body" and "breathing" mode characteristics to mix, resulting in a lack of clarity in the understanding and correlation of FEM- and test-derived modal data. (3) Mode Consolidation (MC) is a newly introduced procedure designed to effectively "de-feature" FEM and experimental modes of detailed structural shell assemblies for unambiguous estimation of "body" dominant target modes. Finally, (4) Experimental Mode Verification (EMV) is a procedure that addresses ambiguities associated with experimental modal analysis of complex structural systems. Specifically, EMV directly separates well-defined modal data from spurious and poorly excited modal data employing newly introduced graphical and coherence metrics.
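
    For readers unfamiliar with the reduction methods being improved upon, Classical Guyan Reduction condenses a stiffness matrix onto a retained ("master") set of degrees of freedom via K_red = K_mm - K_ms * K_ss^{-1} * K_sm. The sketch below applies it to a toy spring chain; it illustrates CGR only, not the paper's MGR or HR variants.

        # Classical Guyan (static) condensation onto master DOFs.
        import numpy as np

        def guyan_reduce(K, masters):
            n = K.shape[0]
            slaves = [i for i in range(n) if i not in masters]
            Kmm = K[np.ix_(masters, masters)]
            Kms = K[np.ix_(masters, slaves)]
            Ksm = K[np.ix_(slaves, masters)]
            Kss = K[np.ix_(slaves, slaves)]
            return Kmm - Kms @ np.linalg.solve(Kss, Ksm)

        k = 1000.0  # spring stiffness, N/m
        K = k * np.array([[ 2, -1,  0,  0],
                          [-1,  2, -1,  0],
                          [ 0, -1,  2, -1],
                          [ 0,  0, -1,  2]], dtype=float)
        print(guyan_reduce(K, [0, 3]))  # condensed 2x2 stiffness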

  12. Methodology for Measurement the Energy Efficiency Involving Solar Heating Systems Using Stochastic Modelling

    Directory of Open Access Journals (Sweden)

    Bruno G. Menita

    2017-01-01

    Full Text Available The purpose of the present study is to evaluate gains through a measurement and verification methodology adapted from the International Performance Measurement and Verification Protocol, using case studies involving Energy Efficiency Projects in Goias State, Brazil. This paper also presents stochastic modelling for the generation of future scenarios of the electricity savings resulting from these Energy Efficiency Projects. The model is developed using the Geometric Brownian Motion stochastic process with mean reversion, associated with the Monte Carlo simulation technique. Results show that the electricity saved by replacing electric showers with solar water heating systems in the homes of low-income families has great potential to bring financial benefits to such families, and that the reduction in peak demand obtained from this Energy Efficiency Action is advantageous to the Brazilian electrical system. The results also contemplate future scenarios of electricity savings and a sensitivity analysis to verify how the values of some parameters influence the results, since no historical data are available for obtaining these values.
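
    The stochastic engine described above can be sketched compactly. The code below simulates a mean-reverting geometric process (an exponential Ornstein-Uhlenbeck, one common reading of "Geometric Brownian Motion with mean reversion") by Monte Carlo; all parameter values are placeholders, not estimates from the study.

        # Monte Carlo paths of a mean-reverting geometric process, using the
        # exact Ornstein-Uhlenbeck update in log-space.
        import numpy as np

        kappa, sigma = 0.5, 0.2            # reversion speed, volatility (assumed)
        theta = np.log(100.0)              # long-run log-level (assumed)
        x = np.full(10_000, np.log(90.0))  # initial log-level for all paths
        dt, n_steps = 1 / 12, 120          # monthly steps over ten years

        rng = np.random.default_rng(42)
        for _ in range(n_steps):
            mean = theta + (x - theta) * np.exp(-kappa * dt)
            std = sigma * np.sqrt((1 - np.exp(-2 * kappa * dt)) / (2 * kappa))
            x = mean + std * rng.standard_normal(x.size)

        saving = np.exp(x)  # back to the level scale (e.g., MWh saved)
        print(saving.mean(), np.percentile(saving, [5, 95]))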

  13. Modeling of the ORNL PCA Benchmark Using SCALE6.0 Hybrid Deterministic-Stochastic Methodology

    Directory of Open Access Journals (Sweden)

    Mario Matijević

    2013-01-01

    Full Text Available Revised guidelines with the support of computational benchmarks are needed for the regulation of the allowed neutron irradiation to reactor structures during a power plant's lifetime. Currently, US NRC Regulatory Guide 1.190 is the effective guideline for reactor dosimetry calculations. The well-known international shielding database SINBAD contains a large selection of models for benchmarking neutron transport methods. In this paper, a PCA benchmark has been chosen from SINBAD for qualification of our methodology for pressure vessel neutron fluence calculations, as required by Regulatory Guide 1.190. The SCALE6.0 code package, developed at Oak Ridge National Laboratory, was used for modeling of the PCA benchmark. The CSAS6 criticality sequence of the SCALE6.0 code package, which includes the KENO-VI Monte Carlo code, as well as the MAVRIC/Monaco hybrid shielding sequence, was utilized for calculation of equivalent fission fluxes. The shielding analysis was performed using the multigroup shielding library v7_200n47g derived from the general purpose ENDF/B-VII.0 library. As a source of response functions for reaction rate calculations with MAVRIC we used international reactor dosimetry libraries (IRDF-2002 and IRDF-90.v2) and appropriate cross-sections from the transport library v7_200n47g. The comparison of calculational results and benchmark data showed good agreement between the calculated and measured equivalent fission fluxes.

  14. Effective Swimmer's Action during the Grab Start Technique.

    Directory of Open Access Journals (Sweden)

    Luis Mourão

    Full Text Available The external forces applied in swimming starts have often been studied, but mostly with direct analysis and simple data interpretation. This study aimed to develop a tool for vertical and horizontal force assessment based on the swimmers' propulsive and structural forces (passive forces due to dead weight) applied during the block phase. Four methodological pathways were followed: the experimental fall of a rigid body, the swimmers' inertia effect, the development of a mathematical model to describe the outcome of the rigid body fall and its generalization to include the effects of inertia, and the experimental swimmers' starting protocol analysed with the inclusion of the developed mathematical tool. The first three methodological steps resulted in the description and computation of the passive force components. At the fourth step, six well-trained swimmers performed three 15 m maximal grab start trials, and three-dimensional (3D) kinetic data were obtained using a six-degrees-of-freedom force plate. The passive force contribution to the start performance obtained from the model was subtracted from the experimental force due to the swimmers, resulting in the swimmers' active forces. As expected, the swimmers' vertical and horizontal active forces accounted for the maximum variability contribution of the experimental forces. It was found that the active force profiles for the vertical and horizontal components resembled one another. These findings should be considered in clarifying the variability of swimmers' active forces and the respective geometrical profile as indicators to redefine steering strategies.

  15. Genetic Algorithm-Based Optimization Methodology of Bézier Curves to Generate a DCI Microscale-Model

    Directory of Open Access Journals (Sweden)

    Jesus A. Basurto-Hurtado

    2017-11-01

    Full Text Available The aim of this article is to develop a methodology capable of generating micro-scale models of Ductile Cast Irons that preserve the smoothness of the graphite nodule contours, which is otherwise lost to discretization errors when the contours are extracted using image processing. The proposed methodology uses image processing to extract the graphite nodule contours and a genetic algorithm-based optimization strategy to select the optimal degree of the Bézier curve that best approximates each graphite nodule contour. To validate the proposed methodology, a Finite Element Analysis (FEA) was carried out using models obtained through three methods: (a) using a fixed Bézier degree for all of the graphite nodule contours, (b) the present methodology, and (c) using commercial software. The results were compared using the relative error of the equivalent stresses computed by the FEA, with the proposed methodology's results used as a reference. The present paper does not aim to define which models are correct and which are not. However, it shows that the errors generated in the discretization process should not be ignored when developing geometric models, since they can produce relative errors of up to 35.9% when the mechanical behavior is estimated.
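
    One building block of this methodology, evaluating a Bézier curve of arbitrary degree from its control points, is shown below in Bernstein form. The genetic algorithm's outer loop, which searches over the degree and control points to fit each nodule contour, is omitted; the control points here are hypothetical.

        # Evaluate a degree-n Bézier curve at parameters t in [0, 1]:
        # B(t) = sum_i C(n, i) * t^i * (1-t)^(n-i) * P_i
        import numpy as np
        from math import comb

        def bezier(control_points, t):
            P = np.asarray(control_points, dtype=float)  # shape (n+1, 2)
            n = len(P) - 1
            t = np.asarray(t)[:, None]
            return sum(comb(n, i) * t**i * (1 - t)**(n - i) * P[i]
                       for i in range(n + 1))

        pts = [(0, 0), (0.3, 1.2), (0.9, 1.0), (1, 0)]  # hypothetical segment
        print(bezier(pts, np.linspace(0, 1, 50))[:3])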

  16. Scenario Methodology for Modelling of Future Landscape Developments as Basis for Assessing Ecosystem Services

    Directory of Open Access Journals (Sweden)

    Matthias Rosenberg

    2014-04-01

    Full Text Available The ecosystems of our intensively used European landscapes produce a variety of natural goods and services for the benefit of humankind, and secure the basics and quality of life. Because these ecosystems are still undergoing fundamental changes, society has an interest in knowing more about future developments and their ecological impacts. To describe and analyze these changes, scenarios can be developed, and an assessment of the ecological changes can be carried out subsequently. In the project „Landscape Saxony 2050“, a methodology for the construction of exploratory scenarios was worked out. The presented methodology provides a way to identify the driving forces (socio-cultural, economic and ecological conditions) of landscape development. It allows possible future paths to be indicated which lead to a change of structures and processes in the landscape and can influence the capability to provide ecosystem services. One essential component of the applied technique is that an approach for assessing the effects of landscape changes on ecosystem services is integrated into the developed scenario methodology. Another is that the methodology is strongly participatory in design, i.e. stakeholders are actively integrated. The method is a seven-phase model which provides the option of integrating stakeholder participation at all levels of scenario development. The scenario framework was applied to the district of Görlitz, an area of 2100 sq km located at the eastern border of Germany. The region is affected by strong demographic as well as economic changes. The core issue focused on the examination of landscape change in terms of biodiversity. Together with stakeholders, a trend scenario and two alternative scenarios were developed. The changes of the landscape structure are represented in story lines, maps and tables. On the basis of the driving forces of the issue areas „cultural / social values“ and

  17. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Laxemar

    International Nuclear Information System (INIS)

    Aneljung, Maria; Sassner, Mona; Gustafsson, Lars-Goeran

    2007-11-01

    This report describes modelling where the hydrological modelling system MIKE SHE has been used to describe surface hydrology, near-surface hydrogeology, advective transport mechanisms, and the contact between groundwater and surface water within the SKB site investigation area at Laxemar. In the MIKE SHE system, surface water flow is described with the one-dimensional modelling tool MIKE 11, which is fully and dynamically integrated with the groundwater flow module in MIKE SHE. In early 2008, a supplementary data set will be available and a process of updating, rebuilding and calibrating the MIKE SHE model based on this data set will start. Before the calibration on the new data begins, it is important to gather as much knowledge as possible on calibration methods, and to identify critical calibration parameters and areas within the model that require special attention. In this project, the MIKE SHE model has been further developed. The model area has been extended, and the present model also includes an updated bedrock model and a more detailed description of the surface stream network. The numerical model has been updated and optimized, especially regarding the modelling of evapotranspiration and the unsaturated zone, and the coupling between the surface stream network in MIKE 11 and the overland flow in MIKE SHE. An initial calibration has been made and a base case has been defined and evaluated. In connection with the calibration, the most important changes made in the model were the following: The evapotranspiration was reduced. The infiltration capacity was reduced. The hydraulic conductivities of the Quaternary deposits in the water-saturated part of the subsurface were reduced. Data from one surface water level monitoring station, four surface water discharge monitoring stations and 43 groundwater level monitoring stations (SSM series boreholes) have been used to evaluate and calibrate the model. The base case simulations showed a reasonable agreement

  18. A data-driven multi-model methodology with deep feature selection for short-term wind forecasting

    International Nuclear Information System (INIS)

    Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias; Zhang, Jie

    2017-01-01

    Highlights: • An ensemble model is developed to produce both deterministic and probabilistic wind forecasts. • A deep feature selection framework is developed to optimally determine the inputs to the forecasting methodology. • The developed ensemble methodology has improved forecasting accuracy by up to 30%. - Abstract: With growing wind penetration into power systems worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated for 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves forecasting accuracy by up to 30%.
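
    The two-layer structure maps directly onto off-the-shelf stacking tools. The hedged sketch below uses scikit-learn's StackingRegressor as the layer-two blender over two layer-one learners, trained on synthetic lagged wind speeds; it illustrates the architecture, not the paper's specific algorithms, features, or data.

        # Two-layer ensemble: base learners plus a blending (stacking) model.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor, StackingRegressor
        from sklearn.linear_model import Ridge
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(1)
        w = 8 + np.cumsum(rng.normal(0, 0.3, 500))        # fake wind speed series
        X = np.column_stack([w[0:-3], w[1:-2], w[2:-1]])  # three lagged inputs
        y = w[3:]                                         # 1-step-ahead target

        model = StackingRegressor(
            estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
                        ("knn", KNeighborsRegressor(n_neighbors=5))],
            final_estimator=Ridge())                      # layer-two blender
        model.fit(X[:400], y[:400])
        rmse = np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2))
        print("held-out RMSE (m/s):", rmse)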

  19. Optimization of Maillard Reaction in Model System of Glucosamine and Cysteine Using Response Surface Methodology

    Science.gov (United States)

    Arachchi, Shanika Jeewantha Thewarapperuma; Kim, Ye-Joo; Kim, Dae-Wook; Oh, Sang-Chul; Lee, Yang-Bong

    2017-01-01

    Sulfur-containing amino acids play important roles in good flavor generation in Maillard reaction of non-enzymatic browning, so aqueous model systems of glucosamine and cysteine were studied to investigate the effects of reaction temperature, initial pH, reaction time, and concentration ratio of glucosamine and cysteine. Response surface methodology was applied to optimize the independent reaction parameters of cysteine and glucosamine in Maillard reaction. Box-Behnken factorial design was used with 30 runs of 16 factorial levels, 8 axial levels and 6 central levels. The degree of Maillard reaction was determined by reading absorption at 425 nm in a spectrophotometer and Hunter’s L, a, and b values. ΔE was consequently set as the fifth response factor. In the statistical analyses, determination coefficients (R2) for their absorbance, Hunter’s L, a, b values, and ΔE were 0.94, 0.79, 0.73, 0.96, and 0.79, respectively, showing that the absorbance and Hunter’s b value were good dependent variables for this model system. The optimum processing parameters were determined to yield glucosamine-cysteine Maillard reaction product with higher absorbance and higher colour change. The optimum estimated absorbance was achieved at the condition of initial pH 8.0, 111°C reaction temperature, 2.47 h reaction time, and 1.30 concentration ratio. The optimum condition for colour change measured by Hunter’s b value was 2.41 h reaction time, 114°C reaction temperature, initial pH 8.3, and 1.26 concentration ratio. These results can provide the basic information for Maillard reaction of aqueous model system between glucosamine and cysteine. PMID:28401086
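
    The fifth response, total colour change, is the Euclidean distance in Hunter L, a, b space, ΔE = sqrt(ΔL^2 + Δa^2 + Δb^2). A minimal illustration with made-up readings:

        # Total colour difference between a sample and a reference in
        # Hunter L, a, b coordinates (values below are hypothetical).
        import math

        def delta_E(lab_sample, lab_reference):
            return math.dist(lab_sample, lab_reference)

        print(delta_E((62.1, 8.4, 21.7), (71.5, 2.0, 14.3)))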

  20. Modelling of aflatoxin G1 reduction by kefir grain using response surface methodology.

    Science.gov (United States)

    Ansari, Farzaneh; Khodaiyan, Faramarz; Rezaei, Karamatollah; Rahmani, Anosheh

    2015-01-01

    Aflatoxin G1 (AFG1) is one of the main toxic contaminants in pistachio nuts and causes potential health hazards. Hence, AFG1 reduction is one of the main concerns in food safety. Kefir grains contain a symbiotic association of microorganisms well known for their aflatoxin decontamination effects. In this study, a central composite design (CCD) using response surface methodology (RSM) was applied to develop a model to predict AFG1 reduction in pistachio nuts by kefir-grain (previously heated at 70 and 110°C). The independent variables were: toxin concentration (X1: 5, 10, 15, 20 and 25 ng/g), kefir-grain level (X2: 5, 10, 20, 10 and 25%), contact time (X3: 0, 2, 4, 6 and 8 h), and incubation temperature (X4: 20, 30, 40, 50 and 60°C). A significant reduction in AFG1 was observed with the kefir-grain used. The variables X1 and X3, as well as the interactions between X2-X4 and X3-X4, had significant effects on AFG1 reduction. The model provided a good prediction of AFG1 reduction under the assay conditions. Optimization was used to enhance the efficiency of kefir-grain in AFG1 reduction. The optimum conditions for the highest AFG1 reduction (96.8%) were predicted by the model as follows: toxin concentration = 20 ng/g, kefir-grain level = 10%, contact time = 6 h, and incubation temperature = 30°C, which was validated experimentally in six replications.

  1. Adsorption of cellulase on cereal brans: a simple functional model from response surface methodology

    Directory of Open Access Journals (Sweden)

    Rui Sergio F. da Silva

    1980-11-01

    Full Text Available A functional model based on Langmuirian adsorption as a limiting mechanism was proposed to explain the effect of cellulase during the enzymatic pretreatment of bran, conducted prior to the extraction of proteins by a wet alkaline process from wheat and buckwheat bran materials. The proposed model provides a good fit (r = 0.99) to the data generated through the predictive model taken from the response surface methodology, permitting calculation of an affinity constant (b) and a capacity constant (k) for wheat bran (b = 0.255 g/IU and k = 17.42%) and buckwheat bran (b = 0.066 g/IU and k = 78.74%), using an equation analogous to the Langmuir adsorption isotherm. The results indicated that buckwheat bran has a higher capacity to adsorb cellulase and that, consequently, a greater response to pretreatment with this enzyme can be expected.
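
    Fitting such a Langmuir-type model, q = k*b*C / (1 + b*C), is a small nonlinear least-squares problem. The sketch below uses synthetic points standing in for the response-surface-derived values; only the functional form comes from the record above.

        # Nonlinear fit of the Langmuir-form response to (synthetic) data.
        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(C, k, b):
            return k * b * C / (1 + b * C)

        C = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # cellulase dose (made up)
        q = np.array([3.8, 6.9, 12.0, 14.5, 16.2])  # response, % (made up)

        (k, b), _ = curve_fit(langmuir, C, q, p0=(20.0, 0.1))
        print(f"capacity k = {k:.2f}, affinity b = {b:.3f}")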


  3. A Comparison Study of a Generic Coupling Methodology for Modeling Wake Effects of Wave Energy Converter Arrays

    Directory of Open Access Journals (Sweden)

    Tim Verbrugghe

    2017-10-01

    Full Text Available Wave Energy Converters (WECs) need to be deployed in large numbers in an array layout in order to have a significant power production. Each WEC has an impact on the incoming wave field, by diffracting, reflecting and radiating waves. Simulating the wave transformations within and around a WEC array is complex; it is difficult, or in some cases impossible, to simulate both these near-field and far-field wake effects using a single numerical model in a time- and cost-efficient way. Within this research, a generic coupling methodology is developed to model both near-field and far-field wake effects caused by floating (e.g., WECs, platforms) or fixed offshore structures. The methodology is based on the coupling of a wave-structure interaction solver (Nemoh) and a wave propagation model. In this paper, this methodology is applied to two wave propagation models (OceanWave3D and MILDwave), which are compared to each other in a wide spectrum of tests. Additionally, the Nemoh-OceanWave3D model is validated by comparing it to experimental wave basin data. The methodology proves to be a reliable instrument for modelling wake effects of WEC arrays; results demonstrate a high degree of agreement for the numerical simulations, with relative errors lower than 5%, and to a lesser extent for the experimental data, where errors range from 4% to 17%.

  4. Engine Cold Start

    Science.gov (United States)

    2015-09-01

    Interim Report TFLRF No. 469, "Engine Cold Start", by Douglas M. Yost and Gregory A. T. Hansen; Contract No. W56HZV-09-C-0100.

  5. Optimization and Modeling of Process Variables of Biodiesel Production from Marula Oil using Response Surface Methodology

    International Nuclear Information System (INIS)

    Enweremadu, C. C.; Rutto, H. L.

    2015-01-01

    This paper presents an optimization study of biodiesel production from Marula oil. The study was carried out using a central composite design of experiments under response surface methodology. A mathematical model was developed to correlate the transesterification process variables to biodiesel yield. The transesterification reaction variables were methanol-to-oil ratio, x1 (10-50 wt%), reaction time, x2 (30-90 min), reaction temperature, x3 (30-90°C), stirring speed, x4 (100-400 rpm) and amount of catalyst, x5 (0.5-1.5 g). The optimum conditions for the production of the biodiesel were found to be a methanol-to-oil ratio of 29.43 wt%, reaction time of 59.17 minutes, reaction temperature of 58.80°C, stirring speed of 325 rpm and amount of catalyst of 1.02 g, giving an optimum biodiesel yield of 95%. The results revealed that the crucial fuel properties of the biodiesel produced at the optimum conditions met the ASTM biodiesel specifications. (author)
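
    Once such a quadratic response surface has been fitted, the reported optimum is found by maximizing the fitted model over the coded factor ranges. A toy two-factor version is sketched below; the coefficients are invented, not the paper's model.

        # Maximize a fitted quadratic yield surface over coded factors in [-1, 1].
        from scipy.optimize import minimize

        def neg_yield(x):
            x1, x2 = x  # e.g., coded methanol-to-oil ratio and temperature
            return -(90 + 4*x1 + 3*x2 - 5*x1**2 - 4*x2**2 + 1.5*x1*x2)

        res = minimize(neg_yield, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
        print("optimal coded factors:", res.x.round(3), "yield:", -res.fun)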

  6. Modeling and Design Analysis Methodology for Tailoring of Aircraft Structures with Composites

    Science.gov (United States)

    Rehfield, Lawrence W.

    2004-01-01

    Composite materials provide design flexibility in that fiber placement and orientation can be specified and a variety of material forms and manufacturing processes are available. It is possible, therefore, to 'tailor' the structure to a high degree in order to meet specific design requirements in an optimum manner. Common industrial practices, however, have limited the choices designers make. One of the reasons for this is that there is a dearth of conceptual/preliminary design analysis tools specifically devoted to identifying structural concepts for composite airframe structures. Large-scale finite element simulations are not suitable for such purposes. The present project has been devoted to creating modeling and design analysis methodology for use in the tailoring process of aircraft structures. Emphasis has been given to creating bend-twist elastic coupling in high aspect ratio wings or other lifting surfaces. The direction of our work was in concert with the overall NASA effort Twenty-First Century Aircraft Technology (TCAT). A multi-disciplinary team was assembled by Dr. Damodar Ambur to work on wing technology, which included our project.

  7. Methodological application so as to obtain digital elevation models DEM in wetland areas

    International Nuclear Information System (INIS)

    Quintero, Deiby A; Montoya V, Diana M; Betancur, Teresita

    2009-01-01

    In order to understand hydrological systems and describe the flow processes that occur among their components, it is essential to have a physiographic description that includes morphometric and relief characteristics. When local studies are performed, the basic cartography available, at best at 1:25,000 scale, tends not to meet the requirements for representing the water dynamics that characterize the interactions between streams, aquifers and lenticular water bodies in flat zones, particularly those with wetlands located in ancient flood plains of rivers. A lack of financial resources is the principal obstacle to acquiring information that is current and sufficient for the scale of the project. The geomorphologic conditions of flat relief zones are a good alternative for the construction of new data. Using the available basic cartography and the new data, it is possible to obtain DEMs that are improved and consistent with the dynamics of surface and groundwater flows in the hydrological system. To accomplish this, one must use spatial modeling tools coupled with a Geographic Information System (GIS). This article presents a methodological application for the region surrounding the catchment of the Cienaga Colombia wetland in the Bajo Cauca region of Antioquia.

  8. Multiphysics Simulation of Welding-Arc and Nozzle-Arc System: Mathematical-Model, Solution-Methodology and Validation

    Science.gov (United States)

    Pawar, Sumedh; Sharma, Atul

    2018-01-01

    This work presents a mathematical model and solution methodology for a multiphysics engineering problem on arc formation during welding and inside a nozzle. A general-purpose commercial CFD solver, ANSYS FLUENT 13.0.0, is used in this work. Arc formation involves strongly coupled gas dynamics and electrodynamics, simulated by solution of the coupled Navier-Stokes equations, Maxwell's equations and the radiation heat-transfer equation. Validation of the present numerical methodology is demonstrated by an excellent agreement with the published results. The developed mathematical model and the user defined functions (UDFs) are independent of the geometry and are applicable to any system that involves arc formation in a 2D axisymmetric coordinate system. The high-pressure flow of SF6 gas in the nozzle-arc system resembles the arc chamber of an SF6 gas circuit breaker; thus, this methodology can be extended to simulate the arcing phenomenon during current interruption.

  9. A comparative review of multi-risk modelling methodologies for climate change adaptation in mountain regions

    Science.gov (United States)

    Terzi, Stefano; Torresan, Silvia; Schneiderbauer, Stefan

    2017-04-01

    Keywords: Climate change, mountain regions, multi-risk assessment, climate change adaptation. Climate change has already led to a wide range of impacts on the environment, the economy and society. Adaptation actions are needed to cope with the impacts that have already occurred (e.g. storms, glacier melting, floods, droughts) and to prepare for future scenarios of climate change. The mountain environment is particularly vulnerable to climate change due to its exposure to recent climate warming (e.g. water regime changes, thawing of permafrost) and due to the high degree of specialization of both natural and human systems (e.g. alpine species, valley population density, tourism-based economy). As a consequence, local mountain governments are encouraged to undertake territorial governance policies for climate change, considering multi-risks and opportunities for the mountain economy and identifying the best portfolio of adaptation strategies. This study aims to provide a literature review of available qualitative and quantitative tools, methodological guidelines and best practices for conducting multi-risk assessments in the mountain environment within the context of climate change. We analyzed multi-risk modelling and assessment methods applied in alpine regions (e.g. event trees, Bayesian Networks, Agent Based Models) in order to identify key concepts (exposure, resilience, vulnerability, risk, adaptive capacity), climatic drivers, cause-effect relationships and socio-ecological systems to be integrated in a comprehensive framework. The main outcomes of the review, including a comparison of existing techniques based on different criteria (e.g. scale of analysis, targeted questions, level of complexity) and a snapshot of the developed multi-risk framework for climate change adaptation, are presented and discussed here.

  10. A METHODOLOGICAL MODEL FOR INTEGRATING CHARACTER WITHIN CONTENT AND LANGUAGE INTEGRATED LEARNING IN SOCIOLOGY OF RELIGION

    Directory of Open Access Journals (Sweden)

    Moh Yasir Alimi

    2013-01-01

    Full Text Available In this article, I describe a methodological model I used in an experimental study on how to integrate character within the practice of Content and Language Integrated Learning (CLIL) in higher education in Indonesia. This research adds to research about character education and CLIL in tertiary education, giving nuance to the practice of CLIL, so far predominantly a practice in primary and secondary schools. The research was conducted at Semarang State University, in the Department of Sociology and Anthropology, in a Sociology of Religion bilingual class. The research indicates that the integration of character within CLIL enriches the perspective of CLIL by strengthening its use for intellectual growth and moral development. On the other side, the use of CLIL with character education gives methods and perspectives to the practice of character education, which so far has emphasised content reforms without learning-method reforms. The research also reveals that the weakness of CLIL in using text for classroom learning can be overcome by the use of specific reading and writing strategies. I develop a practical text strategy which can be used effectively in highly conceptual subjects such as sociology of religion.

  11. An integrated empirical and modeling methodology for analyzing solar reflective roof technologies on commercial buildings

    Energy Technology Data Exchange (ETDEWEB)

    Jo, J.H. [School of Sustainability, Arizona State University, P.O. Box 875502 21 E. 6th Street, Suite. 120, Tempe, AZ 85287-5502 (United States); Carlson, J.D. [National Center of Excellence on SMART Innovations for Urban Climate and Energy, Arizona State University, Tempe, AZ (United States); Golden, J.S. [School of Sustainability, Arizona State University, P.O. Box 875502 21 E. 6th Street, Suite. 120, Tempe, AZ 85287-5502 (United States); National Center of Excellence on SMART Innovations for Urban Climate and Energy, Arizona State University, Tempe, AZ (United States); Bryan, H. [School of Architecture, Arizona State University, Tempe, AZ (United States)

    2010-02-15

    Buildings impact the environment in many ways as a result of both their energy use and material consumption. In urban areas, the emission of greenhouse gases and the creation of microclimates are among their most prominent impacts, so the adoption of building design strategies and materials that address both these issues will lead to significant reductions in a building's overall environmental impact. This report documents the energy savings and surface temperature reduction achieved by replacing an existing commercial building's flat roof with a more reflective 'cool roof' surface material. The research methodology gathered data on-site (surface temperatures and reflectivity) and used this in conjunction with the as-built drawings to construct a building energy simulation model. A 20-year cost benefit analysis (CBA) was conducted to determine the return on investment (ROI) for the new cool roof construction based on the energy simulation results. The results of the EnergyPlus simulation modeling revealed that reductions of 1.3-1.9% and 2.6-3.8% of the total monthly electricity consumption can be achieved from the 50% cool roof replacement already implemented and a future 100% roof replacement, respectively. This corresponds to a saving of approximately $22,000 per year in energy costs at current prices and a consequent 9-year payback period for the added cost of installing the 100% cool roof. The environmental benefits associated with these electricity savings, particularly the reductions in environmental damage and peak-time electricity demand, represent the indirect benefits of the cool roof system. (author)
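
    The 9-year figure is a simple payback (added cost divided by annual savings); the 20-year CBA additionally discounts future savings. A minimal discounted version is sketched below, with the up-front cost backed out from the reported payback and an assumed discount rate.

        # Net present value of the cool-roof savings over a 20-year horizon.
        def npv(cost, annual_saving, years=20, rate=0.05):
            return -cost + sum(annual_saving / (1 + rate) ** t
                               for t in range(1, years + 1))

        annual_saving = 22_000    # $/year, from the abstract
        cost = 9 * annual_saving  # implied by the 9-year simple payback
        print(f"20-year NPV at 5%: ${npv(cost, annual_saving):,.0f}")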

  12. A Modeling methodology for NoSQL Key-Value databases

    Directory of Open Access Journals (Sweden)

    Gerardo ROSSEL

    2017-08-01

    Full Text Available In recent years, there has been an increasing interest in the field of non-relational databases. However, far too little attention has been paid to design methodology. Key-value data stores are an important component of a class of non-relational technologies that are grouped under the name of NoSQL databases. The aim of this paper is to propose a design methodology for this type of database that allows overcoming the limitations of the traditional techniques. The proposed methodology leads to a clean design that also allows for better data management and consistency.
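
    One tactic such a design methodology typically formalizes is encoding access paths into composite keys, so that common queries become key-prefix scans instead of joins. The sketch below is an invented example of the idea, using a plain dictionary as a stand-in for a key-value store; it is not the paper's notation.

        # Composite-key design: all orders of a customer share a key prefix.
        store = {}  # stand-in for a key-value database client

        def put_order(customer_id, order_id, order):
            store[f"customer:{customer_id}:order:{order_id}"] = order

        def orders_for(customer_id):
            prefix = f"customer:{customer_id}:order:"
            return [v for k, v in store.items() if k.startswith(prefix)]

        put_order("c42", "o1", {"total": 99.5})
        put_order("c42", "o2", {"total": 12.0})
        print(orders_for("c42"))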

  13. Methodologies Related to Computational models in View of Developing Anti-Alzheimer Drugs: An Overview.

    Science.gov (United States)

    Baheti, Kirtee; Kale, Mayura Ajay

    2018-04-17

    carried out on various heterocyclic scaffolds that can serve as lead compounds for designing anti-Alzheimer drugs in the future. Molecular modeling methods can thus become a better alternative for the discovery of newer anti-Alzheimer agents. This methodology is extremely useful for designing drugs in minimum time, with enhanced activity, while keeping ethical considerations balanced. Thus, researchers are opting for this improved process over conventional methods, hoping to bring relief to people affected by Alzheimer's disease, among other conditions. Copyright © Bentham Science Publishers.

  14. Model-based Organization Manning, Strategy, and Structure Design via Team Optimal Design (TOD) Methodology

    National Research Council Canada - National Science Library

    Levchuk, Georgiy; Chopra, Kari; Paley, Michael; Levchuk, Yuri; Clark, David

    2005-01-01

    This paper describes a quantitative Team Optimal Design (TOD) methodology and its application to the design of optimized manning for E-10 Multi-sensor Command and Control Aircraft. The E-10 (USAF, 2002...

  15. Operation room tool handling and miscommunication scenarios: an object-process methodology conceptual model.

    Science.gov (United States)

    Wachs, Juan P; Frenkel, Boaz; Dori, Dov

    2014-11-01

    Errors in the delivery of medical care are the principal cause of inpatient mortality and morbidity, accounting for around 98,000 deaths in the United States of America (USA) annually. Ineffective team communication, especially in the operation room (OR), is a major root of these errors. This miscommunication can be reduced by analyzing and constructing a conceptual model of communication and miscommunication in the OR. We introduce the principles underlying Object-Process Methodology (OPM)-based modeling of the intricate interactions between the surgeon and the surgical technician while handling surgical instruments in the OR. This model is a software- and hardware-independent description of the agents engaged in communication events, their physical activities, and their interactions. The model enables assessing whether the task-related objectives of the surgical procedure were achieved and completed successfully and what errors can occur during the communication. The facts used to construct the model were gathered from observations of various types of operation-room miscommunications and their outcomes. The model takes advantage of the compact ontology of OPM, which comprises stateful objects - things that exist physically or informatically, and processes - things that transform objects by creating them, consuming them or changing their state. The modeled communication modalities are verbal and non-verbal, and errors are modeled as processes that deviate from the "sunny day" scenario. Using OPM's in-zooming refinement mechanism, key processes are drilled into and elaborated, along with the objects that are required as agents or instruments, or objects that these processes transform. The model was developed through an iterative process of observation, modeling, group discussions, and simplification. The model faithfully represents the processes related to tool handling that take place in an OR during an operation. The specification is at

  16. Development and application of a statistical methodology to evaluate the predictive accuracy of building energy baseline models

    Energy Technology Data Exchange (ETDEWEB)

    Granderson, Jessica [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). Energy Technologies Area Div.; Price, Phillip N. [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). Energy Technologies Area Div.

    2014-03-01

    This paper documents the development and application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole-building energy savings. The methodology complements the principles addressed in resources such as ASHRAE Guideline 14 and the International Performance Measurement and Verification Protocol. It requires fitting a baseline model to data from a "training period" and using the model to predict total electricity consumption during a subsequent "prediction period." We illustrate the methodology by evaluating five baseline models using data from 29 buildings. The training period and prediction period were varied, and model predictions of daily, weekly, and monthly energy consumption were compared to meter data to determine model accuracy. Several metrics were used to characterize the accuracy of the predictions, and in some cases the best-performing model as judged by one metric was not the best performer when judged by another metric.
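
    Two of the accuracy metrics commonly used for this purpose (both appear in ASHRAE Guideline 14) are the coefficient of variation of the RMSE and the normalized mean bias error. The arrays below are placeholder meter and model values; sign conventions for NMBE vary, so the one used here is noted in the code.

        # Baseline-model accuracy metrics against meter data.
        import numpy as np

        def cvrmse(measured, predicted):
            return np.sqrt(np.mean((measured - predicted) ** 2)) / np.mean(measured)

        def nmbe(measured, predicted):
            # convention: positive when the model under-predicts consumption
            return np.sum(measured - predicted) / (measured.size * np.mean(measured))

        m = np.array([120.0, 135.0, 118.0, 150.0])  # daily kWh, meter
        p = np.array([116.0, 140.0, 121.0, 147.0])  # daily kWh, model
        print(f"CV(RMSE) = {cvrmse(m, p):.1%}, NMBE = {nmbe(m, p):.1%}")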

  17. Epilepsy Therapy Development: Technical and Methodological Issues in Studies with Animal Models

    Science.gov (United States)

    Galanopoulou, Aristea S.; Kokaia, Merab; Loeb, Jeffrey A.; Nehlig, Astrid; Pitkänen, Asla; Rogawski, Michael A.; Staley, Kevin J.; Whittemore, Vicky H.; Dudek, F. Edward

    2013-01-01

    SUMMARY The search for new treatments for seizures, epilepsies and their comorbidities faces considerable challenges. Partly, this is due to gaps in our understanding of the etiology and pathophysiology of most forms of epilepsy. An additional challenge is the difficulty of predicting the efficacy, tolerability and impact of potential new treatments on epilepsies and comorbidities in humans, using the available resources. Here we provide a summary of the discussions and proposals of Working Group 2 as presented in the Joint American Epilepsy Society and International League Against Epilepsy Translational Workshop in London (September 2012). We propose methodological and reporting practices that will enhance the uniformity, reliability and reporting of early-stage preclinical studies with animal seizure and epilepsy models that aim to develop and evaluate new therapies for seizures or epilepsies, using multi-disciplinary approaches. The topics considered include: (a) implementation of better study design and reporting practices, (b) incorporation in the study design and analysis of covariates that may impact outcomes (including species, age, sex), (c) utilization of approaches to document target relevance, exposure and engagement by the tested treatment, (d) utilization of clinically relevant treatment protocols, (e) optimization of the use of video-EEG recordings to best meet the study goals, and (f) inclusion of outcome measures that address the tolerability of the treatment or study endpoints apart from seizures. We further discuss the different expectations for studies aiming to meet regulatory requirements to obtain approval for clinical testing in humans. Implementation of the rigorous practices discussed in this report will require considerable investment in time, funds and other research resources, which may create challenges for academic researchers seeking to contribute to epilepsy therapy discovery and development. We propose several infrastructure

  18. Precarious Rock Methodology for Seismic Hazard: Physical Testing, Numerical Modeling and Coherence Studies

    Energy Technology Data Exchange (ETDEWEB)

    Anooshehpoor, Rasool; Purvance, Matthew D.; Brune, James N.; Preston, Leiph A.; Anderson, John G.; Smith, Kenneth D.

    2006-09-29

    This report covers the following projects: shake table tests of the precarious rock methodology, field tests of precarious rocks at Yucca Mountain and comparison of the results with PSHA predictions, a study of the coherence of the wave field in the ESF, and a limited survey of precarious rocks south of the proposed repository footprint. A series of shake table experiments has been carried out at the University of Nevada, Reno Large Scale Structures Laboratory. The bulk of the experiments involved scaling acceleration time histories (uniaxial forcing) from 0.1g to the point where the objects on the shake table overturned a specified number of times. The results of these experiments have been compared with numerical overturning predictions. Numerical predictions for toppling of large objects with simple contact conditions (e.g., I-beams with sharp basal edges) agree well with shake-table results. The numerical model slightly underpredicts the overturning of small rectangular blocks. It overpredicts the overturning PGA for asymmetric granite boulders with complex basal contact conditions. In general the results confirm the approximate predictions of previous studies. Field testing of several rocks at Yucca Mountain has approximately confirmed the preliminary results from previous studies, suggesting that the PSHA predictions are too high, possibly because of the uncertainty in the mean of the attenuation relations. Study of the coherence of wavefields in the ESF has provided results which will be very important in the design of the canister distribution, in particular a preliminary estimate of the wavelengths at which the wavefields become incoherent. No evidence was found for extreme focusing by lens-like inhomogeneities. A limited survey for precarious rocks confirmed that they extend south of the repository, and one of these has been field tested.
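
    A rough quasi-static version of the overturning criterion that such shake-table and field tests probe: a freestanding block topples when the peak ground acceleration exceeds g*tan(alpha), with alpha the angle between the vertical and the line from the basal pivot to the center of mass. Real precarious rocks require the full dynamic rocking analysis, so treat this purely as an order-of-magnitude sketch.

        # Quasi-static toppling threshold for a symmetric rectangular block.
        import math

        def toppling_pga(width, height, g=9.81):
            alpha = math.atan2(width / 2, height / 2)
            return g * math.tan(alpha)  # m/s^2

        print(toppling_pga(1.0, 3.0) / 9.81, "g for a 1 m x 3 m block")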

  19. Starting an aphasia center?

    Science.gov (United States)

    Elman, Roberta J

    2011-08-01

    Starting an aphasia center can be an enormous challenge. This article provides initial issues to review and consider when deciding whether starting a new organization is right for you. Determining the need for the program in your community, the best size and possible affiliation for the organization, and available resources, as well as developing a business plan, marketing the program, and building awareness in the community, are some of the factors that are discussed. Specific examples related to starting the Aphasia Center of California are provided. © Thieme Medical Publishers.

  20. Design of an Emulsion-based Personal Detergent through a Model-based Chemical Product Design Methodology

    DEFF Research Database (Denmark)

    Mattei, Michele; Hill, Michael; Kontogeorgis, Georgios

    2013-01-01

    ...one-by-one the different classes of chemicals, until a formulation is obtained, the stability of which as an emulsion is finally checked with appropriate models. Structured databases, appropriate pure component as well as mixture property models, rule-based selection criteria and CAMD techniques are employed together...... to obtain one or more candidate formulations. A conceptual case study representing a personal detergent is presented to highlight the methodology....

  1. Methodology for the preclinical screening of anti-herpesvirus activity starting from natural products

    Directory of Open Access Journals (Sweden)

    Gloria del Barrio Alonso

    2008-08-01

    Full Text Available Herpesviridae is one of the viral families with the greatest impact on human and animal health. Herpes simplex viruses are the etiological agents of very common infections associated with a broad range of clinical manifestations. The latent nature of the infection, which allows the virus to escape the effectors of the immune response, has prevented the development of efficient anti-herpes vaccines. This has motivated the search for new anti-herpetic products and, with it, the need to establish a system for the preclinical evaluation of natural and synthetic products through a rapid in vitro screening methodology. In this work, we describe the methodology and rationale of the assay system used by the Natural Antivirals Group of the Faculty of Biology (University of Havana) for the screening of products with anti-herpetic properties. This methodology includes the primary evaluation and several secondary assays aimed at determining mechanisms of action.

  2. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Forsmark

    International Nuclear Information System (INIS)

    Aneljung, Maria; Gustafsson, Lars-Goeran

    2007-04-01

    The hydrological modelling system MIKE SHE has been used to describe near-surface groundwater flow, transport mechanisms and the contact between ground- and surface water at the Forsmark site. The surface water system at Forsmark is described with the 1D modelling tool MIKE 11, which is fully and dynamically integrated with MIKE SHE. In spring 2007, a new data freeze will be available and a process of updating, rebuilding and calibrating the MIKE SHE model will start, based on the latest data set. Prior to this, it is important to gather as much knowledge as possible on calibration methods and to define critical calibration parameters and areas within the model. In this project, an optimization of the numerical description and an initial calibration of the MIKE SHE model have been made, and an updated base case has been defined. Data from 5 surface water level monitoring stations, 4 surface water discharge monitoring stations and 32 groundwater level monitoring stations (SFM soil boreholes) have been used for model calibration and evaluation. The base case simulations generally show a good agreement between calculated and measured water levels and discharges, indicating that the total runoff from the area is well described by the model. Moreover, with two exceptions (SFM0012 and SFM0022) the base case results show very good agreement between calculated and measured groundwater head elevations for boreholes installed below lakes. The model also shows a reasonably good agreement between calculated and measured groundwater head elevations or depths to phreatic surfaces at many other points. The following major types of calculation-measurement differences can be noted: differences in groundwater level amplitudes due to transpiration processes; differences in absolute mean groundwater head due to differences between borehole casing levels and the interpolated DEM; and differences in absolute mean head elevations due to local errors in hydraulic conductivity values.

  3. Head Start Impact Study

    Data.gov (United States)

    U.S. Department of Health & Human Services — Nationally representative, longitudinal information from an evaluation where children were randomly assigned to Head Start or community services as usual; direct...

  4. Early Head Start Evaluation

    Data.gov (United States)

    U.S. Department of Health & Human Services — Longitudinal information from an evaluation where children were randomly assigned to Early Head Start or community services as usual; direct assessments and...

  5. FEMA DFIRM Station Start

    Data.gov (United States)

    Minnesota Department of Natural Resources — This table contains information about station starting locations. These locations indicate the reference point that was used as the origin for distance measurements...

  6. A METHODOLOGICAL MODEL FOR INTEGRATING CHARACTER WITHIN CONTENT AND LANGUAGE INTEGRATED LEARNING IN SOCIOLOGY OF RELIGION

    Directory of Open Access Journals (Sweden)

    Moh Yasir Alimi

    2014-02-01

    Full Text Available Abstract: In this article, I describe a methodological model I used in an experimental study on how to integrate character within the practice of Content and Language Integrated Learning (CLIL) in higher education in Indonesia. This research adds to the research on character education and CLIL in tertiary education, giving nuance to the practice of CLIL, so far predominantly a practice in primary and secondary schools. The research was conducted at Semarang State University, in the Department of Sociology and Anthropology, in a bilingual Sociology of Religion class of 25 students. The research indicates that the integration of character within CLIL enriches the perspective of CLIL by strengthening its use for intellectual growth and moral development. Conversely, the use of CLIL with character education brings methods and perspectives to the practice of character education, which so far has emphasised content reforms without reforming learning methods. The research also reveals that the weakness of CLIL in using text for classroom learning can be overcome by the use of specific reading and writing strategies. I develop a practical text strategy which can be used effectively in a highly conceptual subject such as sociology of religion.

  7. Integrated methodological frameworks for modelling agent-based advanced supply chain planning systems: A systematic literature review

    Directory of Open Access Journals (Sweden)

    Luis Antonio Santa-Eulalia

    2011-12-01

    Full Text Available Purpose: The objective of this paper is to provide a systematic literature review of recent developments in methodological frameworks for the modelling and simulation of agent-based advanced supply chain planning systems. Design/methodology/approach: A systematic literature review is provided to identify, select and make an analysis and a critical summary of all suitable studies in the area. It is organized into two blocks: the first one covers agent-based supply chain planning systems in general terms, while the second one specializes the previous search to identify those works explicitly containing methodological aspects. Findings: Among sixty suitable manuscripts identified in the primary literature search, only seven explicitly considered the methodological aspects. In addition, we noted that, in general, the notion of advanced supply chain planning is not considered unambiguously, that the social and individual aspects of the agent society are not taken into account in a clear manner in several studies, and that a significant part of the works are of a theoretical nature, with few real-scale industrial applications. An integrated framework covering all phases of the modelling and simulation process is still lacking in the literature visited. Research limitations/implications: The main research limitations are related to the period covered (the last four years), the selected scientific databases, the selected language (i.e., English) and the use of only one assessment framework for the descriptive evaluation part. Practical implications: The identification of recent works in the domain and discussion concerning their limitations can help pave the way for new and innovative research towards a complete methodological framework for agent-based advanced supply chain planning systems. Originality/value: As there are no recent state-of-the-art reviews in the domain of methodological frameworks for agent-based supply chain planning, this paper contributes to

  8. Getting started with Unity

    CERN Document Server

    Felicia, Patrick

    2013-01-01

    Getting Started with Unity is written in an easy-to-follow tutorial format. It is for 3D game developers who would like to learn how to use Unity3D and become familiar with its core features. This book is also suitable for intermediate users who would like to improve their skills. No prior knowledge of Unity3D is required.

  9. Proposition of a modeling and an analysis methodology of integrated reverse logistics chain in the direct chain

    International Nuclear Information System (INIS)

    Mimouni, F.; Abouabdellah, A.

    2016-01-01

    This work proposes a modeling and analysis methodology for reverse logistics integrated into the direct supply chain, based on the combination of Bayesian networks and Petri nets. The Bayesian network model is complemented with a Petri net in order to break the cycle problem in the Bayesian network. The model assumes that demands are independent from returns and can only be used for nonperishable products. Legislative aspects considered include recycling laws, protection of the environment, and client satisfaction via after-sale service. (Author)

  10. A review and synthesis of late Pleistocene extinction modeling: progress delayed by mismatches between ecological realism, interpretation, and methodological transparency.

    Science.gov (United States)

    Yule, Jeffrey V; Fournier, Robert J; Jensen, Christopher X J; Yang, Jinyan

    2014-06-01

    Late Pleistocene extinctions occurred globally over a period of about 50,000 years, primarily affecting mammals of ≥ 44 kg body mass (i.e., megafauna) first in Australia, continuing in Eurasia and, finally, in the Americas. Polarized debate about the cause(s) of the extinctions centers on the role of climate change and anthropogenic factors (especially hunting). Since the late 1960s, investigators have developed mathematical models to simulate the ecological interactions that might have contributed to the extinctions. Here, we provide an overview of the various methodologies used and conclusions reached in the modeling literature, addressing both the strengths and weaknesses of modeling as an explanatory tool. Although late Pleistocene extinction models now provide a solid foundation for viable future work, we conclude, first, that single models offer less compelling support for their respective explanatory hypotheses than many realize; second, that disparities in methodology (both in terms of model parameterization and design) prevent meaningful comparison between models and, more generally, progress from model to model in increasing our understanding of these extinctions; and third, that recent models have been presented and possibly developed without sufficient regard for the transparency of design that facilitates scientific progress.

  11. Methodological guidelines

    International Nuclear Information System (INIS)

    Halsnaes, K.; Callaway, J.M.; Meyer, H.J.

    1999-01-01

    The guideline document establishes a general overview of the main components of climate change mitigation assessment. This includes an outline of key economic concepts, scenario structure, common assumptions, modelling tools and country study assumptions. The guidelines are supported by Handbook Reports that contain more detailed specifications of calculation standards, input assumptions and available tools. The major objectives of the project have been to provide a methodology, an implementing framework and a reporting system which countries can follow in meeting their future reporting obligations under the FCCC and for GEF enabling activities. The project builds upon the methodology development and application in the UNEP National Abatement Costing Studies (UNEP, 1994a). The various elements provide countries with a road map for conducting climate change mitigation studies and submitting national reports as required by the FCCC. (au) 121 refs

  12. Methodological guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Halsnaes, K.; Callaway, J.M.; Meyer, H.J.

    1999-04-01

    The guideline document establishes a general overview of the main components of climate change mitigation assessment. This includes an outline of key economic concepts, scenario structure, common assumptions, modelling tools and country study assumptions. The guidelines are supported by Handbook Reports that contain more detailed specifications of calculation standards, input assumptions and available tools. The major objectives of the project have been to provide a methodology, an implementing framework and a reporting system which countries can follow in meeting their future reporting obligations under the FCCC and for GEF enabling activities. The project builds upon the methodology development and application in the UNEP National Abatement Costing Studies (UNEP, 1994a). The various elements provide countries with a road map for conducting climate change mitigation studies and submitting national reports as required by the FCCC. (au) 121 refs.

  13. Artificial neural network and response surface methodology modeling in mass transfer parameters predictions during osmotic dehydration of Carica papaya L.

    Directory of Open Access Journals (Sweden)

    J. Prakash Maran

    2013-09-01

    Full Text Available In this study, a comparative approach was made between artificial neural network (ANN) and response surface methodology (RSM) to predict the mass transfer parameters of osmotic dehydration of papaya. The effects of process variables such as temperature, osmotic solution concentration and agitation speed on water loss, weight reduction, and solid gain during osmotic dehydration were investigated using a three-level three-factor Box-Behnken experimental design. The same design was utilized to train a feed-forward multilayered perceptron (MLP) ANN with a back-propagation algorithm. The predictive capabilities of the two methodologies were compared in terms of root mean square error (RMSE), mean absolute error (MAE), standard error of prediction (SEP), model predictive error (MPE), chi-square statistic (χ2), and coefficient of determination (R2) based on the validation data set. The results showed that a properly trained ANN model is more accurate in prediction than the RSM model.
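
    For readers reproducing this kind of comparison, the metrics named above can be computed for any pair of observed/predicted vectors. The sketch below assumes commonly used definitions (e.g. SEP as RMSE relative to the observed mean, MPE as mean absolute relative error); the cited study's exact conventions may differ.

        import numpy as np

        def comparison_metrics(y_obs, y_pred):
            """Goodness-of-fit metrics often used to compare RSM and ANN
            predictions; definitions follow common conventions and may
            differ in detail from the cited study."""
            y_obs = np.asarray(y_obs, float)
            y_pred = np.asarray(y_pred, float)
            resid = y_obs - y_pred
            rmse = np.sqrt(np.mean(resid ** 2))
            return {
                "RMSE": rmse,
                "MAE": np.mean(np.abs(resid)),
                "SEP": 100.0 * rmse / np.mean(y_obs),           # % of observed mean
                "MPE": 100.0 * np.mean(np.abs(resid) / y_obs),  # mean relative error, %
                "chi2": np.sum(resid ** 2 / y_pred),
                "R2": 1.0 - np.sum(resid ** 2) / np.sum((y_obs - y_obs.mean()) ** 2),
            }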

  14. Methodology Development for Passive Component Reliability Modeling in a Multi-Physics Simulation Environment

    Energy Technology Data Exchange (ETDEWEB)

    Aldemir, Tunc [The Ohio State Univ., Columbus, OH (United States); Denning, Richard [The Ohio State Univ., Columbus, OH (United States); Catalyurek, Umit [The Ohio State Univ., Columbus, OH (United States); Unwin, Stephen [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-01-23

    Reduction in safety margin can be expected as passive structures and components undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the impact of aging management strategies in maintaining safety margins. A methodology has been developed to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) within the PRA framework in a computationally feasible manner when the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as the New Generation System Code (NGSC) concept. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework and maintenance can be accounted for in a coherent fashion. The framework accommodates the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. The methodology is illustrated with several examples.

  15. A multiscale approach to blast neurotrauma modeling:Part II: Methodology for inducing blast injury to in vitro models

    Directory of Open Access Journals (Sweden)

    Gwen B. Effgen

    2012-02-01

    Full Text Available Due to the prominent role of improvised explosive devices (IEDs) in the wounding patterns of U.S. war-fighters in Iraq and Afghanistan, blast injury has risen to a new level of importance and is recognized to be a major cause of injuries to the brain. However, an injury risk-function for microscopic, macroscopic, behavioral, and neurological deficits has yet to be defined. While operational blast injuries can be very complex and thus difficult to analyze, a simplified blast injury model would facilitate studies correlating biological outcomes with blast biomechanics to define tolerance criteria. Blast-induced traumatic brain injury (bTBI) results from the translation of a shock wave in air, such as that produced by an IED, into a pressure wave within the skull-brain complex. Our blast injury methodology recapitulates this phenomenon in vitro, allowing for control of the injury biomechanics via a compressed-gas shock tube used in conjunction with a custom-designed, fluid-filled receiver that contains the living culture. The receiver converts the air shock wave into a fast-rising pressure transient with minimal reflections, mimicking the intracranial pressure history in blast. We have developed an organotypic hippocampal slice culture model that exhibits cell death when exposed to a 530 ± 17.7 kPa peak overpressure with a 1.026 ± 0.017 ms duration and 190 ± 10.7 kPa-ms impulse in-air. We have also injured a simplified in vitro model of the blood-brain barrier, which exhibits disrupted integrity immediately following exposure to a 581 ± 10.0 kPa peak overpressure with a 1.067 ± 0.006 ms duration and 222 ± 6.9 kPa-ms impulse in-air. To better prevent and treat bTBI, both the initiating biomechanics and the ensuing pathobiology must be understood in greater detail. A well-characterized, in vitro model of bTBI, in conjunction with animal models, will be a powerful tool for developing strategies to mitigate the risks of bTBI.
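
    A note on the reported quantities: the impulse (kPa-ms) is the time integral of overpressure over the positive phase. The sketch below integrates an idealized Friedlander waveform (a standard blast-wave shape assumed here for illustration; the record does not name the waveform) with the quoted peak and duration, and recovers an impulse of the same order as the reported 190 ± 10.7 kPa-ms.

        import numpy as np

        p_peak = 530.0   # peak overpressure, kPa (from the record)
        t_dur = 1.026    # positive-phase duration, ms (from the record)
        b = 1.0          # dimensionless decay constant (assumed)

        t = np.linspace(0.0, t_dur, 2000)                        # time, ms
        p = p_peak * (1.0 - t / t_dur) * np.exp(-b * t / t_dur)  # Friedlander wave
        impulse = float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))  # trapezoid rule
        print(f"impulse ~ {impulse:.0f} kPa-ms")   # about 200 kPa-ms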

  16. A Methodology for Calculating EGS Electricity Generation Potential Based on the Gringarten Model for Heat Extraction From Fractured Rock

    Energy Technology Data Exchange (ETDEWEB)

    Augustine, Chad

    2017-05-01

    Existing methodologies for estimating the electricity generation potential of Enhanced Geothermal Systems (EGS) assume thermal recovery factors of 5% or less, resulting in relatively low volumetric electricity generation potentials for EGS reservoirs. This study proposes and develops a methodology for calculating EGS electricity generation potential based on the Gringarten conceptual model and analytical solution for heat extraction from fractured rock. The electricity generation potential of a cubic kilometer of rock as a function of temperature is calculated assuming limits on the allowed produced-water temperature decline and reservoir lifetime based on surface power plant constraints. The resulting estimates of EGS electricity generation potential can be one to nearly two orders of magnitude larger than those from existing methodologies. The flow per unit fracture surface area from the Gringarten solution is found to be a key term in describing the conceptual reservoir behavior. The methodology can be applied to aid in the design of EGS reservoirs by giving minimum reservoir volume, fracture spacing, number of fractures, and flow requirements for a target reservoir power output. Limitations of the idealized model compared to actual reservoir performance and the implications on reservoir design are discussed.
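
    For orientation, the volumetric bookkeeping behind such estimates can be sketched in a few lines. This is not the Gringarten analytical solution itself, only the heat-in-place arithmetic with a thermal recovery factor; every parameter value below is an illustrative assumption.

        # Back-of-envelope electric potential of 1 km^3 of hot rock.
        rho_rock = 2700.0    # rock density, kg/m^3 (assumed)
        cp_rock = 1000.0     # rock specific heat, J/(kg K) (assumed)
        volume = 1.0e9       # 1 km^3, in m^3
        t_rock = 200.0       # initial rock temperature, deg C (assumed)
        t_reject = 80.0      # abandonment/reject temperature, deg C (assumed)
        recovery = 0.14      # thermal recovery factor, above the ~5% of older methods
        eta_plant = 0.12     # thermal-to-electric conversion efficiency (assumed)
        lifetime_s = 30 * 365.25 * 24 * 3600.0   # 30-year reservoir life, s

        heat_j = rho_rock * cp_rock * volume * (t_rock - t_reject) * recovery
        power_mwe = heat_j * eta_plant / lifetime_s / 1.0e6
        print(f"sustained output ~ {power_mwe:.1f} MWe over 30 years")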

  17. A methodological framework to support the initiation, design and institutionalization of participatory modeling processes in water resources management

    Science.gov (United States)

    Halbe, Johannes; Pahl-Wostl, Claudia; Adamowski, Jan

    2018-01-01

    Multiple barriers constrain the widespread application of participatory methods in water management, including the more technical focus of most water agencies, additional cost and time requirements for stakeholder involvement, as well as institutional structures that impede collaborative management. This paper presents a stepwise methodological framework that addresses the challenges of context-sensitive initiation, design and institutionalization of participatory modeling processes. The methodological framework consists of five successive stages: (1) problem framing and stakeholder analysis, (2) process design, (3) individual modeling, (4) group model building, and (5) institutionalized participatory modeling. The Management and Transition Framework is used for problem diagnosis (Stage One), context-sensitive process design (Stage Two) and analysis of requirements for the institutionalization of participatory water management (Stage Five). Conceptual modeling is used to initiate participatory modeling processes (Stage Three) and ensure a high compatibility with quantitative modeling approaches (Stage Four). This paper describes the proposed participatory model building (PMB) framework and provides a case study of its application in Québec, Canada. The results of the Québec study demonstrate the applicability of the PMB framework for initiating and designing participatory model building processes and analyzing barriers towards institutionalization.

  18. Methodological advances

    Directory of Open Access Journals (Sweden)

    Lebreton, J.-D.

    2004-06-01

    Full Text Available The study of population dynamics has long depended on methodological progress. Among many striking examples, continuous time models for populations structured in age (Sharpe & Lotka, 1911) were made possible by progress in the mathematics of integral equations. Therefore the relationship between population ecology and mathematical and statistical modelling in the broad sense raises a challenge in interdisciplinary research. After the impetus given in particular by Seber (1982), the regular biennial EURING conferences became a major vehicle to achieve this goal. It is thus not surprising that EURING 2003 included a session entitled “Methodological advances”. Even if at risk of heterogeneity in the topics covered and of overlap with other sessions, such a session was a logical way of ensuring that recent and exciting new developments were made available for discussion, further development by biometricians and use by population biologists. The topics covered included several to which full sessions were devoted at EURING 2000 (Anderson, 2001), such as: individual covariates, Bayesian methods, and multi-state models. Some other topics (heterogeneity models, exploited populations and integrated modelling) had been addressed by contributed talks or posters. Their presence among “methodological advances”, as well as in other sessions of EURING 2003, was intended as a response to their rapid development and potential relevance to biological questions. We briefly review all talks here, including those not published in the proceedings. In the plenary talk, Pradel et al. (in prep.) developed GOF tests for multi-state models. Until recently, the only goodness-of-fit procedures for multistate models were ad hoc, and non-optimal, involving use of standard tests for single-state models (Lebreton & Pradel, 2002). Pradel et al. (2003) proposed a general approach based in particular on mixtures of multinomial distributions. Pradel et al. (in prep.) showed

  19. Reference methodologies for radioactive controlled discharges an activity within the IAEA's Program Environmental Modelling for Radiation Safety II (EMRAS II)

    International Nuclear Information System (INIS)

    Stocki, T.J.; Bergman, L.; Tellería, D.M.; Proehl, G.; Amado, V.; Curti, A.; Bonchuk, I.; Boyer, P.; Mourlon, C.; Chyly, P.; Heling, R.; Sági, L.; Kliaus, V.; Krajewski, P.; Latouche, G.; Lauria, D.C.; Newsome, L.; Smith, J.

    2011-01-01

    In January 2009, the IAEA EMRAS II (Environmental Modelling for Radiation Safety II) program was launched. The goal of the program is to develop, compare and test models for the assessment of radiological impacts to the public and the environment due to radionuclides being released or already existing in the environment; to help countries build and harmonize their capabilities; and to model the movement of radionuclides in the environment. Within EMRAS II, nine working groups are active; this paper will focus on the activities of Working Group 1: Reference Methodologies for Controlling Discharges of Routine Releases. Within this working group, environmental transfer and dose assessment models are tested under different scenarios by participating countries and the results compared. This process allows each participating country to identify characteristics of their models that need to be refined. The goal of this working group is to identify reference methodologies for the assessment of exposures to the public due to routine discharges of radionuclides to the terrestrial and aquatic environments. Several different models are being applied to estimate the transfer of radionuclides in the environment for various scenarios. The first phase of the project involves a scenario of a nuclear power reactor at a coastal location which routinely (continuously) discharges 60Co, 85Kr, 131I, and 137Cs to the atmosphere and 60Co, 137Cs, and 90Sr to the marine environment. In this scenario, many of the parameters and characteristics of the representative group were given to the modelers and cannot be altered. Various models have been used by the different participants in this inter-comparison (PC-CREAM, CROM, IMPACT, CLRP POSEIDON, SYMBIOSE and others). This first scenario is intended to enable a comparison of the radionuclide transport and dose modelling. These scenarios will facilitate the development of reference methodologies for controlled discharges. (authors)

  20. TH-CD-202-07: A Methodology for Generating Numerical Phantoms for Radiation Therapy Using Geometric Attribute Distribution Models

    International Nuclear Information System (INIS)

    Dolly, S; Chen, H; Mutic, S; Anastasio, M; Li, H

    2016-01-01

    Purpose: A persistent challenge for the quality assessment of radiation therapy treatments (e.g. contouring accuracy) is the absence of the known, ground truth for patient data. Moreover, assessment results are often patient-dependent. Computer simulation studies utilizing numerical phantoms can be performed for quality assessment with a known ground truth. However, previously reported numerical phantoms do not include the statistical properties of inter-patient variations, as their models are based on only one patient. In addition, these models do not incorporate tumor data. In this study, a methodology was developed for generating numerical phantoms which encapsulate the statistical variations of patients within radiation therapy, including tumors. Methods: Based on previous work in contouring assessment, geometric attribute distribution (GAD) models were employed to model both the deterministic and stochastic properties of individual organs via principal component analysis. Using pre-existing radiation therapy contour data, the GAD models are trained to model the shape and centroid distributions of each organ. Then, organs with different shapes and positions can be generated by assigning statistically sound weights to the GAD model parameters. Organ contour data from 20 retrospective prostate patient cases were manually extracted and utilized to train the GAD models. As a demonstration, computer-simulated CT images of generated numerical phantoms were calculated and assessed subjectively and objectively for realism. Results: A cohort of numerical phantoms of the male human pelvis was generated. CT images were deemed realistic both subjectively and objectively in terms of image noise power spectrum. Conclusion: A methodology has been developed to generate realistic numerical anthropomorphic phantoms using pre-existing radiation therapy data. The GAD models guarantee that generated organs span the statistical distribution of observed radiation therapy patients.
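
    The core idea, training a principal component model on corresponding contour points and then sampling statistically sound weights, can be sketched as follows. This is an illustrative reimplementation of the general approach, not the authors' code; the Gaussian sampling of mode weights is an assumption.

        import numpy as np

        def train_shape_model(contours):
            """contours: (n_patients, 2 * n_points) array of corresponding
            contour points. Returns the mean shape, principal modes, and the
            standard deviation of the training data along each mode."""
            x = np.asarray(contours, float)
            mean = x.mean(axis=0)
            u, s, modes = np.linalg.svd(x - mean, full_matrices=False)
            std = s / np.sqrt(max(x.shape[0] - 1, 1))
            return mean, modes, std

        def sample_shape(mean, modes, std, n_modes=5, rng=None):
            """Generate a new organ contour by drawing Gaussian weights
            within the training variance for the leading modes."""
            rng = rng if rng is not None else np.random.default_rng()
            w = rng.normal(0.0, std[:n_modes])
            return mean + w @ modes[:n_modes]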

  1. A KBE genetic-causal cost modelling methodology for manufacturing cost contingency management

    NARCIS (Netherlands)

    Curran, R.; Gilmour, M.; McAlleean, C.; Kelly, P.

    2009-01-01

    The paper provides validated evidence of a robust methodology for the management of lean manufacturing cost contingency, with a particular focus on contingency regarding recurring work content. A truly concurrent engineering process is established by capturing a range of knowledge from the design,

  2. Coupling 2D Finite Element Models and Circuit Equations Using a Bottom-Up Methodology

    Science.gov (United States)

    2002-11-01

    Coupling 2D finite element models and circuit equations using a bottom-up methodology. E. Gómez¹, J. Roger-Folch², A. Gabaldón¹ and A. Molina¹. ¹Dpto. de Ingeniería Eléctrica, Universidad Polit… ²Dpto. de Ingeniería Eléctrica, ETSII, Universidad Politécnica de Valencia, PO Box 22012, 46071 Valencia, Spain. E-mail: jroger@die.upv.es

  3. Physiologically Based Pharmacokinetic Modeling: Methodology, Applications, and Limitations with a Focus on Its Role in Pediatric Drug Development

    Directory of Open Access Journals (Sweden)

    Feras Khalil

    2011-01-01

    Full Text Available The concept of physiologically based pharmacokinetic (PBPK) modeling was introduced years ago, but it has not been practiced significantly. However, interest in and implementation of this modeling technique have grown, as evidenced by the increased number of publications in this field. This paper briefly demonstrates the methodology, applications, and limitations of PBPK modeling, with special attention given to the use of PBPK models in pediatric drug development; some examples are described in detail. Although PBPK models do have some limitations, the potential benefit from the PBPK modeling technique is huge. PBPK models can be applied to investigate drug pharmacokinetics under different physiological and pathological conditions or in different age groups, to support decision-making during drug discovery, to provide, perhaps most importantly, data that can save time and resources, especially in early drug development phases and in pediatric clinical trials, and potentially to help clinical trials become more “confirmatory” rather than “exploratory”.
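
    To make the concept concrete, below is a minimal perfusion-limited PBPK sketch with blood, liver and a lumped rest-of-body compartment. Real PBPK models resolve many more organs and use curated physiological parameter sets; all values here are illustrative placeholders, not a validated set.

        from scipy.integrate import solve_ivp

        Q_li, Q_rb = 90.0, 210.0            # blood flows, L/h (assumed)
        V_bl, V_li, V_rb = 5.0, 1.8, 60.0   # compartment volumes, L (assumed)
        P_li, P_rb = 4.0, 1.5               # tissue:blood partition coefficients (assumed)
        CL_int = 30.0                       # hepatic intrinsic clearance, L/h (assumed)

        def rhs(t, y):
            c_bl, c_li, c_rb = y            # concentrations, mg/L
            dc_bl = (Q_li * (c_li / P_li - c_bl) + Q_rb * (c_rb / P_rb - c_bl)) / V_bl
            dc_li = (Q_li * (c_bl - c_li / P_li) - CL_int * c_li / P_li) / V_li
            dc_rb = Q_rb * (c_bl - c_rb / P_rb) / V_rb
            return [dc_bl, dc_li, dc_rb]

        dose_mg = 100.0                     # intravenous bolus (assumed)
        sol = solve_ivp(rhs, (0.0, 24.0), [dose_mg / V_bl, 0.0, 0.0], dense_output=True)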

  4. Setting priorities in health research using the model proposed by the World Health Organization: development of a quantitative methodology using tuberculosis in South Africa as a worked example.

    Science.gov (United States)

    Hacking, Damian; Cleary, Susan

    2016-02-09

    Setting priorities is important in health research given the limited resources available for research. Various guidelines exist to assist in the priority setting process; however, priority setting still faces significant challenges such as the clear ranking of identified priorities. The World Health Organization (WHO) proposed a Disability Adjusted Life Year (DALY)-based model to rank priorities by research area (basic, health systems and biomedical) by dividing the DALYs into 'unavertable with existing interventions', 'avertable with improved efficiency' and 'avertable with existing but non-cost-effective interventions', respectively. However, the model has conceptual flaws and no clear methodology for its construction. Therefore, the aim of this paper was to amend the model to address these flaws, and develop a clear methodology by using tuberculosis in South Africa as a worked example. An amended model was constructed to represent total DALYs as the product of DALYs per person and absolute burden of disease. These figures were calculated for all countries from WHO datasets. The lowest figures achieved by any country were assumed to represent 'unavertable with existing interventions' if extrapolated to South Africa. The ratio of 'cost per patient treated' (adjusted for purchasing power and outcome weighted) between South Africa and the best country was used to calculate the 'avertable with improved efficiency' section. Finally, 'avertable with existing but non-cost-effective interventions' was calculated using Disease Control Priorities Project efficacy data, and the ratio between the best intervention and South Africa's current intervention, irrespective of cost. The amended model shows that South Africa has a tuberculosis burden of 1,009,837.3 DALYs; 0.009% of DALYs are unavertable with existing interventions and 96.3% of DALYs could be averted with improvements in efficiency. Of the remaining DALYs, a further 56.9% could be averted with existing but non-cost-effective interventions.
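
    The partition arithmetic of the amended model can be reproduced in a few lines. In the sketch below, the total burden and the unavertable share come from the abstract; the two ratio values are assumptions chosen so that the outputs match the quoted 96.3% and 56.9% figures, since the paper's actual ratios are not given here.

        def partition_dalys(total, floor_frac, efficiency_ratio, efficacy_ratio):
            """Split a disease burden into the amended model's categories.
            floor_frac: share unavertable even at the best country's level;
            efficiency_ratio: outcome-weighted cost-per-patient ratio vs. the
            best country; efficacy_ratio: best-available vs. current efficacy."""
            unavertable = floor_frac * total
            by_efficiency = (1.0 - 1.0 / efficiency_ratio) * (total - unavertable)
            remaining = total - unavertable - by_efficiency
            by_new_interventions = (1.0 - 1.0 / efficacy_ratio) * remaining
            return unavertable, by_efficiency, by_new_interventions

        # South Africa TB burden from the abstract; ratio values assumed.
        parts = partition_dalys(1_009_837.3, 0.00009, 27.0, 2.32)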

  5. A Systematic Methodology for Design of Emulsion Based Chemical Products

    DEFF Research Database (Denmark)

    Mattei, Michele; Kontogeorgis, Georgios; Gani, Rafiqul

    2012-01-01

    A systematic methodology for emulsion based chemical product design is presented. The methodology employs a model-based product synthesis/design stage and a model-experiment based further refinement and/or validation stage. In this paper only the first stage is presented. The methodology employs a hierarchical approach starting with the identification of the needs to be satisfied by the emulsified product and then building up the formulation by adding one-by-one the different classes of chemicals. A structured database together with dedicated property prediction models and evaluation criteria…

  6. South Africa's nuclear model: A small and innovative reactor is seen as the model for new electricity plants. The project is nearing the starting blocks

    International Nuclear Information System (INIS)

    Ferreira, Tom

    2004-01-01

    Although nuclear power generation has by far the best safety and environmental record of any technology in general use, it has for many years been unable to make any meaningful inroads into the wall of negative perceptions that have arisen against it. But sentiments are changing rapidly on a global scale. The flare-up of oil prices is a sobering reminder of the volatility in the energy market, the exhaustibility of fossil fuels and the urgent need for stable, reliable, non-polluting sources of electrical power that are indispensable to a modern industrial economy. Today, new types of nuclear plants are prized, and South Africa is moving ahead. The State energy provider, Eskom, is internationally regarded as the leader in the field of Pebble Bed Modular Reactor (PBMR) technology, a 'new generation' nuclear power plant. A decision on the PBMR project's future is on the near horizon. Should approvals be received in the coming months to proceed to the project's next phase, construction of the PBMR demonstration plant will start in 2006, in which case the reactor will start up in 2010 and be handed over to the client, Eskom, in 2011. Eskom has conditionally undertaken to purchase the first commercial units. Pebble bed reactors are small, about one-sixth the size of most current nuclear plants. Multiple PBMRs can share a common control center and occupy an area of no more than three football fields. More specifically, the PBMR is a helium-cooled, graphite-moderated high-temperature reactor (HTR). The concept is based on experience in the UK, United States and particularly Germany, where prototype reactors were operated successfully between the late 1960s and 1980s. Although it is not the only high-temperature, gas-cooled nuclear reactor being developed in the world, the South African project is internationally regarded as a front-runner. The South African PBMR includes unique and patented technological innovations which make it particularly competitive. The Chief Executive

  7. Lean start-up

    DEFF Research Database (Denmark)

    Rasmussen, Erik Stavnsager; Tanev, Stoyan

    2016-01-01

    The risk of launching new products and starting new firms is known to be extremely high. The Lean Start-up approach is a way of reducing these risks and enhancing the chances for success by validating the products and services in the market with customers before launching them in full scale. The main point is to develop a Minimum Viable Product that can be tested by potential customers, and then to pivot the idea if necessary based on these customer evaluations. This iterative process goes through a number of stages with the purpose of validating the customers' problems, the suggested solution

  8. The effects of magmatic redistribution of heat producing elements on the lunar mantle evolution inferred from numerical models that start from various initial states

    Science.gov (United States)

    Ogawa, Masaki

    2018-02-01

    To discuss how redistribution of heat producing elements (HPEs) by magmatism affects the lunar mantle evolution depending on the initial condition, I present two-dimensional numerical models of magmatism in convecting mantle internally heated by incompatible HPEs. Mantle convection occurs beneath a stagnant lithosphere that inhibits recycling of the HPE-enriched crustal materials to the mantle. Magmatism is modeled by a permeable flow of magma generated by decompression melting through matrix. Migrating magma transports heat, mass, and HPEs. When the deep mantle is initially hot with the temperature TD around 1800 K at its base, magmatism starts from the beginning of the calculated history to extract HPEs from the mantle. The mantle is monotonously cooled, and magmatism ceases within 2 Gyr, accordingly. When the deep mantle is initially colder with TD around 1100 K, HPEs stay in the deep mantle for a longer time to let the planet be first heated up and then cooled only slightly. If, in addition, there is an HPE-enriched domain in the shallow mantle at the beginning of the calculation, magma continues ascending to the surface through the domain for more than 3 Gyr. The low TD models fit in with the thermal and magmatic history of the Moon inferred from spacecraft observations, although it is not clear if the models are consistent with the current understanding of the origin of the Moon and its magnetic field. Redistribution of HPEs by magmatism is a crucial factor that must be taken into account in future studies of the evolution of the Moon.

  9. Development of a cross-section methodology and a real-time core model for VVER-1000 simulator application

    Energy Technology Data Exchange (ETDEWEB)

    Georgieva, Emiliya Lyudmilova

    2016-06-06

    The novel academic contributions are summarized as follows. A) A cross-section modelling methodology and a cycle-specific cross-section update procedure are developed to meet fidelity requirements applicable to a cycle-specific reactor core simulation, as well as particular customer needs and practices supporting VVER-1000 operation and safety. B) A real-time version of the Nodal Expansion Method code is developed and implemented into Kozloduy 6 full-scope replica control room simulator.

  10. Home Start Evaluation Study.

    Science.gov (United States)

    High/Scope Educational Research Foundation, Ypsilanti, MI.

    Case studies of eight Home Start programs are given as the third section of an evaluation study. Communities involved are Binghamton, New York; Franklin, North Carolina; Cleveland, Ohio; Harrogate, Tennessee; Houston, Texas; Weslaco, Texas; Millville, Utah; Parkersburg, West Virginia. Although each study varies in format, each describes in detail…

  11. Getting started with Go

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    No, not the Chinese boardgame, the programming language that ironically Google made difficult to google for. You may have heard of Golang, and are wondering whether you should learn it. The answer is that of course you should, and this talk should explain why and point you at the best resources to get started.

  12. Blogs: Getting Started

    Science.gov (United States)

    Dyrud, Marilyn A.; Worley, Rebecca B.; Schultz, Benjamin

    2005-01-01

    Blogs are communication tools; they serve as vehicles to transmit messages. Before deciding to blog, one needs to devise a strategy for how this medium will fit in with his or her communication needs. This will also help later in deciding which features one will need to include in his or her blog. This article discusses ways to start and…

  13. Getting started in stereology.

    Science.gov (United States)

    West, Mark J

    2013-04-01

    Stereology involves sampling structural features in sections of tissue with geometrical probes. This article discusses some practical issues that must be dealt with when getting started in stereology, including tissue preparation methods and determining how many tissue sections and probes are needed to make a stereological estimate.

  14. ATLAS starts moving in

    CERN Document Server

    2004-01-01

    The first large active detector component was lowered into the ATLAS cavern on 1 March. It consisted of the 8 modules forming the lower part of the central barrel of the tile hadronic calorimeter. The work of assembling the barrel, which comprises 64 modules, started the following day.

  15. Getting started with UDOO

    CERN Document Server

    Palazzetti, Emanuele

    2015-01-01

    If you are an Android developer who wants to learn how to use UDOO to build Android applications that are capable of interacting with their surrounding environment, then this book is ideal for you. Learning UDOO is the next great step to start building your first real-world prototypes powered by the Android operating system.

  16. Stumbling before the start

    NARCIS (Netherlands)

    Lex Herweijer

    2008-01-01

    Original title: Gestruikeld voor de start. Reducing school dropout rates is high on the education policy agenda in the Netherlands. Too many young people are leaving school without an initial qualification, and in response the government has set itself the target of halving the number of

  17. Theoretical and Methodological issues concerning managers' mental models of competitive industry structures

    OpenAIRE

    Daniels, Kevin; Johnson, Gerry; De Chernatony, Leslie

    1992-01-01

    The methodology traditionally employed by strategic groups theorists categorizes companies on the basis of objective economic variables such as industry supply characteristics. Other lines of research have suggested that this economic approach is limited, and that a more cognitive approach is needed. Strategic groups theory proposes one way in which companies may be categorized, but it is not clear to what extent managers categorize their competitors. To be presented at the British Acade...

  18. Efficient surrogate construction by combining response surface methodology and reduced order modeling

    OpenAIRE

    Gogu , Christian; Passieux , Jean-Charles

    2012-01-01

    Response surface methodology is an efficient method for approximating the output of complex, computationally expensive codes. Challenges remain however in decreasing their construction cost as well as in approximating high dimensional output instead of scalar values. We propose a novel approach addressing both these challenges simultaneously for cases where the expensive code solves partial differential equations involving the resolution of a large system of equations,...

  19. Modeling of Bisphenol A (BPA) Removal from Aqueous Solutions by Adsorption Using Response Surface Methodology (RSM)

    OpenAIRE

    Mohammad Ali Zazouli; Farzaneh Veisi; Amir Veisi

    2016-01-01

    Bisphenol A (BPA) is an organic synthetic compound that has many applications in various industries and is known as a persistent pollutant. The aim of this research was to evaluate the efficiency of bone ash and banana peel as adsorbents for BPA adsorption from aqueous solution by using Response Surface Methodology. The effects of variables such as sorbent dose, detention time, solution pH, and BPA concentration on the sorption efficiency were examined. All analyses were carried out accordi...

  20. Modeling and optimization of ammonia treatment by acidic biochar using response surface methodology

    OpenAIRE

    Narong Chaisongkroh; Juntima Chungsiriporn; Charun Bunyakan

    2012-01-01

    Emission of ammonia (NH3)-contaminated waste air to the atmosphere without treatment has affected humans and the environment. Eliminating NH3 in waste air emitted from industries is considered an environmental requisite. In this study, optimization of NH3 adsorption time using acidic rubber wood biochar (RWBs) impregnated with sulfuric acid (H2SO4) was investigated. The central composite design (CCD) in response surface methodology (RSM) by the Design Expert software was used for designing the experi...
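
    For readers unfamiliar with the design referenced, a central composite design in coded units consists of 2^k factorial corners, 2k axial points at ±α and replicated center points. The sketch below generates such points generically; the study's actual factors and levels are not reproduced.

        import itertools
        import numpy as np

        def central_composite(k, alpha=None, n_center=6):
            """Coded design points for a central composite design in k factors:
            2^k factorial corners, 2k axial points at +/-alpha, and center
            replicates. The default alpha follows the rotatability criterion."""
            alpha = alpha if alpha is not None else (2.0 ** k) ** 0.25
            corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
            axial = np.vstack([sign * alpha * np.eye(k)[i]
                               for i in range(k) for sign in (-1.0, 1.0)])
            center = np.zeros((n_center, k))
            return np.vstack([corners, axial, center])

        design = central_composite(3)   # e.g. three factors in coded units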

  1. The Development of Marine Accidents Human Reliability Assessment Approach: HEART Methodology and MOP Model

    OpenAIRE

    Ludfi Pratiwi Bowo; Wanginingastuti Mutmainnah; Masao Furusho

    2017-01-01

    Humans are one of the important factors in the assessment of accidents, particularly marine accidents. Hence, studies are conducted to assess the contribution of human factors in accidents. Two generations of Human Reliability Assessment (HRA) methodologies have been developed, classified as the first and second generation according to their differing viewpoints on problem-solving. The accident analysis can be conducted using three techniques of analysis: sequen...

  2. Rapid Dialogue Prototyping Methodology

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.; Sojka, P.; Rajman, M.; Kopecek, I.; Melichar, M.; Pala, K.

    2004-01-01

    This paper is about the automated production of dialogue models. The goal is to propose and validate a methodology that allows the production of finalized dialogue models (i.e. dialogue models specific for given applications) in a few hours. The solution we propose for such a methodology, called the

  3. Trip Energy Estimation Methodology and Model Based on Real-World Driving Data for Green Routing Applications: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Holden, Jacob [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Van Til, Harrison J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wood, Eric W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gonder, Jeffrey D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhu, Lei [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-02-09

    A data-informed model to predict energy use for a proposed vehicle trip has been developed in this paper. The methodology leverages nearly 1 million miles of real-world driving data to generate the estimation model. Driving is categorized at the sub-trip level by average speed, road gradient, and road network geometry, then aggregated by category. An average energy consumption rate is determined for each category, creating an energy rates look-up table. Proposed vehicle trips are then categorized in the same manner, and estimated energy rates are appended from the look-up table. The methodology is robust and applicable to almost any type of driving data. The model has been trained on vehicle global positioning system data from the Transportation Secure Data Center at the National Renewable Energy Laboratory and validated against on-road fuel consumption data from testing in Phoenix, Arizona. The estimation model has demonstrated an error range of 8.6% to 13.8%. The model results can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations to reduce energy consumption. This work provides a highly extensible framework that allows the model to be tuned to a specific driver or vehicle type.
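
    The categorize-and-look-up step described above can be sketched generically: bin observed driving segments, average the energy rate per bin, then price a proposed trip from the same bins. For brevity only two of the categorization dimensions (average speed and road gradient) are shown, and the bin edges are illustrative assumptions, not the study's actual categories.

        import numpy as np

        speed_edges = np.array([0.0, 30.0, 60.0, 90.0, 130.0])  # km/h (assumed)
        grade_edges = np.array([-10.0, -2.0, 2.0, 10.0])        # percent (assumed)

        def _bins(speeds, grades):
            si = np.clip(np.digitize(speeds, speed_edges) - 1, 0, len(speed_edges) - 2)
            gi = np.clip(np.digitize(grades, grade_edges) - 1, 0, len(grade_edges) - 2)
            return si, gi

        def build_table(speeds, grades, rates):
            """Mean observed energy rate (e.g. kWh/km) per speed-gradient category."""
            total = np.zeros((len(speed_edges) - 1, len(grade_edges) - 1))
            count = np.zeros_like(total)
            si, gi = _bins(speeds, grades)
            np.add.at(total, (si, gi), rates)
            np.add.at(count, (si, gi), 1.0)
            return total / np.maximum(count, 1.0)

        def trip_energy(table, seg_speeds, seg_grades, seg_km):
            """Estimated energy for a proposed trip from its binned segments."""
            si, gi = _bins(seg_speeds, seg_grades)
            return float(np.sum(table[si, gi] * np.asarray(seg_km)))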

  4. Response Surface Methodology

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.

    2014-01-01

    Abstract: This chapter first summarizes Response Surface Methodology (RSM), which started with Box and Wilson’s article in 1951 on RSM for real, non-simulated systems. RSM is a stepwise heuristic that uses first-order polynomials to approximate the response surface locally. An estimated polynomial
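
    The first-order step summarized above can be condensed to a few lines: fit a local first-order polynomial by least squares, then move in the estimated direction of steepest ascent. A minimal sketch, with the step size as an assumed tuning parameter:

        import numpy as np

        def rsm_step(x_design, y, step=1.0):
            """One RSM iteration: fit y ~ b0 + b.x on a local design by least
            squares and return a move of length `step` along the gradient."""
            X = np.column_stack([np.ones(len(y)), x_design])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            grad = beta[1:]                    # first-order coefficients
            return step * grad / np.linalg.norm(grad)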

  5. Filling the gap between geophysics and geotechnics in landslide process understanding: a data fusion methodology to integrate multi-source information in hydro-mechanical modeling

    Science.gov (United States)

    Bernadie, S.; Gance, J.; Grandjean, G.; Malet, J.

    2013-12-01

    The population increase and the rising issue of climate change impact the long-term stability of mountain slopes. So far, it is not possible to assess in all cases the conditions for failure, reactivation or rapid surges of slopes. The main reason identified by Van Asch et al. (2007) is the excessive conceptualization of the slope in the models. Therefore, to improve our forecasting capability, considering local information such as the local slope geometry, the soil material variability, hydrological processes and the presence of fissures is of first importance. Geophysical imaging, combined with geotechnical tests, is an adapted tool to obtain such detailed information. The development of near-surface geophysics in the last three decades encourages the use of multiple geophysical methods for slope investigations. However, fusion of real data is little used in this domain and a gap still exists between the data processed by the geophysicists and the slope hydro-mechanical models developed by the geotechnical engineers. Starting from this statement, we propose a methodological flowchart of multi-source geophysical and geotechnical data integration to construct a slope hydro-mechanical model of a selected profile at the Super-Sauze landslide. Based on data fusion concepts, the methodology aims at integrating various data in order to create a geological and a geotechnical model of the slope profile. The input data consist of seismic and geoelectrical tomographies (that give access to spatially distributed information on the soil physical state) supplemented by punctual geotechnical tests (dynamic penetration tests). The tomograms and the geotechnical tests are combined into a unique interpreted model characterized by different geotechnical domains. We use the fuzzy logic clustering method in order to take into account the uncertainty coming from each input data source. Then an unstructured finite element mesh, adapted to the resolution of the different input data and
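
    The fuzzy-logic clustering step mentioned above is commonly fuzzy c-means, which assigns each sample a membership degree per cluster rather than a hard label, a natural carrier for the uncertainty in each input. A plain-NumPy sketch (the study's actual feature set and cluster count are not reproduced):

        import numpy as np

        def fuzzy_c_means(x, n_clusters, m=2.0, n_iter=100, seed=0):
            """x: (n_samples, n_features), e.g. one column per tomogram
            attribute. Returns cluster centers and the membership matrix."""
            rng = np.random.default_rng(seed)
            u = rng.random((len(x), n_clusters))
            u /= u.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                um = u ** m
                centers = (um.T @ x) / um.sum(axis=0)[:, None]
                d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                u = d ** (-2.0 / (m - 1.0))
                u /= u.sum(axis=1, keepdims=True)
            return centers, u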

  6. Find-rate methodology and resource base estimates of the Hydrocarbon Supply Model (1990 update). Topical report

    International Nuclear Information System (INIS)

    Woods, T.

    1991-02-01

    The Hydrocarbon Supply Model is used to develop long-term trends in Lower-48 gas production and costs. The model utilizes historical find-rate patterns to predict the discovery rate and size distribution of future oil and gas field discoveries. The report documents the methodologies used to quantify historical oil and gas field find-rates and to project those discovery patterns for future drilling. It also explains the theoretical foundations for the find-rate approach. The new field and reserve growth resource base is documented and compared to other published estimates. The report has six sections. Section 1 provides background information and an overview of the model. Sections 2, 3, and 4 describe the theoretical foundations of the model, the databases, and specific techniques used. Section 5 presents the new field resource base by region and depth. Section 6 documents the reserve growth model components

  7. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    Science.gov (United States)

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  8. Methodology for deriving hydrogeological input parameters for safety-analysis models - application to fractured crystalline rocks of Northern Switzerland

    International Nuclear Information System (INIS)

    Vomvoris, S.; Andrews, R.W.; Lanyon, G.W.; Voborny, O.; Wilson, W.

    1996-04-01

    Switzerland is one of many nations with nuclear power that is seeking to identify rock types and locations that would be suitable for the underground disposal of nuclear waste. A common challenge among these programs is to provide engineering designers and safety analysts with a reasonably representative hydrogeological input dataset that synthesizes the relevant information from direct field observations as well as inferences and model results derived from those observations. Needed are estimates of the volumetric flux through a volume of rock and the distribution of that flux into discrete pathways between the repository zones and the biosphere. These fluxes are not directly measurable but must be derived based on understandings of the range of plausible hydrogeologic conditions expected at the location investigated. The methodology described in this report utilizes conceptual and numerical models at various scales to derive the input dataset. The methodology incorporates an innovative approach, called the geometric approach, in which field observations and their associated uncertainty, together with a conceptual representation of those features that most significantly affect the groundwater flow regime, were rigorously applied to generate alternative possible realizations of hydrogeologic features in the geosphere. In this approach, the ranges in the output values directly reflect uncertainties in the input values. As a demonstration, the methodology is applied to the derivation of the hydrogeological dataset for the crystalline basement of Northern Switzerland. (author) figs., tabs., refs

  9. Final Report, Nuclear Energy Research Initiative (NERI) Project: An Innovative Reactor Analysis Methodology Based on a Quasidiffusion Nodal Core Model

    International Nuclear Information System (INIS)

    Anistratov, Dmitriy Y.; Adams, Marvin L.; Palmer, Todd S.; Smith, Kord S.; Clarno, Kevin; Hikaru Hiruta; Razvan Nes

    2003-01-01

    OAK (B204) Final Report, NERI Project: “An Innovative Reactor Analysis Methodology Based on a Quasidiffusion Nodal Core Model”. The present generation of reactor analysis methods uses few-group nodal diffusion approximations to calculate full-core eigenvalues and power distributions. The cross sections, diffusion coefficients, and discontinuity factors (collectively called “group constants”) in the nodal diffusion equations are parameterized as functions of many variables, ranging from the obvious (temperature, boron concentration, etc.) to the more obscure (spectral index, moderator temperature history, etc.). These group constants, and their variations as functions of the many variables, are calculated by assembly-level transport codes. The current methodology has two main weaknesses that this project addressed. The first weakness is the diffusion approximation in the full-core calculation; this can be significantly inaccurate at interfaces between different assemblies. This project used the nodal diffusion framework to implement nodal quasidiffusion equations, which can capture transport effects to an arbitrary degree of accuracy. The second weakness is in the parameterization of the group constants; current models do not always perform well, especially at interfaces between unlike assemblies. The project developed a theoretical foundation for parameterization and homogenization models and used that theory to devise improved models. The new models were extended to tabulate information that the nodal quasidiffusion equations can use to capture transport effects in full-core calculations.

  10. In what root-zone N concentration does nitrate start to leach significantly? A reasonable answer from modeling Mediterranean field data and closed root-zone experiments

    Science.gov (United States)

    Kurtzman, D.; Kanner, B.; Levy, Y.; Shapira, R. H.; Bar-Tal, A.

    2017-12-01

    Closed-root-zone experiments (e.g. pots, lysimeters) reveal in many cases a mineral-nitrogen (N) concentration from which the root-N-uptake efficiency reduces significantly and nitrate leaching below the root zone increases dramatically. A less direct way to reveal this threshold concentration in agricultural fields is to calibrate N-transport models of the unsaturated zone to nitrate data from deep samples (under the root zone) by fitting the threshold concentration of the nitrate-uptake function. Independent research efforts of these two types in light soils, where nitrate problems in underlying aquifers are common, revealed: 1) that the threshold exists for most crops (field crops, vegetables and orchards); 2) close agreement on the threshold value between the two very different research methodologies; and 3) that the threshold lies within 20-50 mg-N/L. Focusing on staying below the threshold is a relatively simple aim on the way to maintaining intensive agriculture with limited effects on the nitrate concentration in the underlying water resource. Our experience shows that in some crops this threshold coincides with the end-of-rise of the N-yield curve (e.g. corn); in this case, it is relatively easy to convince farmers to fertilize below the threshold. In other crops, although significant N is lost to leaching, the crop can still use a higher N concentration to increase yield (e.g. potato).
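
    The threshold behavior described can be written as a simple piecewise uptake-efficiency function for use in an unsaturated-zone N-transport model. The functional form and all parameter values below are illustrative assumptions, with the threshold placed inside the reported 20-50 mg-N/L range.

        import math

        def n_uptake_efficiency(c_n, threshold=35.0, eff_below=0.9,
                                eff_floor=0.3, k=0.05):
            """Root-zone N-uptake efficiency vs. mineral-N concentration
            (mg-N/L): near-constant below the threshold, decaying toward a
            floor above it, so the leachable fraction rises sharply."""
            if c_n <= threshold:
                return eff_below
            return eff_floor + (eff_below - eff_floor) * math.exp(-k * (c_n - threshold))

        leach_fraction = 1.0 - n_uptake_efficiency(60.0)  # fraction prone to leach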

  11. Getting started with Simulink

    CERN Document Server

    Zamboni, Luca

    2013-01-01

    This practical and easy-to-understand learning tutorial is one big exciting exercise for students and engineers who are always short on their schedules and want to regain some lost time with the help of Simulink. This book is aimed at students and engineers who need a quick start with Simulink. Though it is not required in order to understand how Simulink works, knowledge of physics will help the reader to understand the exercises described.

  12. Getting started with JUCE

    CERN Document Server

    Robinson, Martin

    2013-01-01

    This book is a fast-paced, practical guide full of step-by-step examples which are easy to follow and implement. This book is for programmers with a basic grasp of C++. The examples start at a basic level, making few assumptions beyond fundamental C++ concepts. Those without any experience with C++ should be able to follow and construct the examples, although you may need further support to understand the fundamental concepts.

  13. Getting started with Hazelcast

    CERN Document Server

    Johns, Mat

    2013-01-01

    Written as a step-by-step guide, Getting Started with Hazelcast will teach you all you need to know to make your application data scalable. This book is a great introduction for Java developers, software architects, or developers looking to enable scalable and agile data within their applications. You should have programming knowledge of Java and a general familiarity with concepts like data caching and clustering.

  14. Jump Starting Entrepreneurship

    DEFF Research Database (Denmark)

    Burcharth, Ana; Smith, Pernille; Frederiksen, Lars

    How do laid-off employees become entrepreneurs after receiving a dream start into self-employment? This question is relevant for policy makers and entrepreneurship researchers alike since it raises the possibility of a reverse entrepreneurial opportunity, in which the chance of becoming an entrepreneur emerges before the discovery of a profitable opportunity. We empirically examine this question in the unique setting of a corporate entrepreneurship program. In the midst of a corporate crisis, Nokia supported laid-off employees to start their own ventures under favorable conditions. We … persevered in their endeavors and eventually became comfortable with their new career prospects. We discuss the psychological factors that impact career transition after organizational closure and theorize whether they encourage or discourage entrepreneurship.

  15. Predictive Model and Methodology for Heat Treatment Distortion Final Report CRADA No. TC-298-92

    Energy Technology Data Exchange (ETDEWEB)

    Nikkel, D. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McCabe, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-10-16

    This project was a multi-lab, multi-partner CRADA involving LLNL, Los Alamos National Laboratory, Sandia National Laboratories, Oak Ridge National Laboratory, Martin Marietta Energy Systems and the industrial partner, the National Center for Manufacturing Sciences (NCMS). A number of member companies of NCMS participated, including General Motors Corporation, Ford Motor Company, The Torrington Company, Gear Research, the Illinois Institute of Technology Research Institute, and Deformation Control Technology. LLNL was the lead laboratory for the metrology technology used for validation of the computational tool/methodology. LLNL was also the lead laboratory for the development of the software user interface for the computational tool. This report focuses on the participation of LLNL and NCMS. The purpose of the project was to develop a computational tool/methodology that engineers would use to predict the effects of heat treatment on the size and shape of industrial parts made of quench-hardenable alloys. Initially, the target application of the tool was gears for automotive power trains.

  16. BPLOM: BPM Level-Oriented Methodology for Incremental Business Process Modeling and Code Generation on Mobile Platforms

    Directory of Open Access Journals (Sweden)

    Jaime Solis Martines

    2013-06-01

    The requirements engineering phase is the departure point for the development process of any kind of computer application; it determines the functionality needed in the working scenario of the program. Although this is a crucial point in application development, as incorrect requirement definition leads to costly errors in later stages of the development process, the involvement of application domain experts remains minor. In order to correct this scenario, business process modeling notations were introduced to favor business expert involvement in this phase, but notation complexity prevents this participation from reaching its ideal state. Hence, we promote the definition of a level-oriented business process methodology, which encourages the adaptation of the modeling notation to the modeling and technical knowledge shown by the expert. This approach reduces the complexity found by domain experts and enables them to model their processes completely, with a level of technical detail directly proportional to their knowledge.

  17. Methodology to carry out a sensitivity and uncertainty analysis for cross sections using a coupled model Trace-Parcs

    International Nuclear Information System (INIS)

    Reyes F, M. C.; Del Valle G, E.; Gomez T, A. M.; Sanchez E, V.

    2015-09-01

    A methodology was implemented to carry out a sensitivity and uncertainty analysis for cross sections used in a coupled Trace/Parcs model of a control-rod-drop transient in a BWR-5. A model of the reactor core was built for the neutronic code Parcs, in which the assemblies located in the core are described. The thermal-hydraulic model in Trace was a simple one: a single component of type Chan represented all the core assemblies, placed within a single vessel with prescribed boundary conditions. The thermal-hydraulic part was coupled with the neutronic part, first for the steady state, and then the control-rod-drop transient was run for the sensitivity and uncertainty analysis. To analyze the cross sections used in the coupled Trace/Parcs model during the transient, probability density functions were generated for 22 parameters selected from the full set of neutronic parameters used by Parcs, yielding 100 different cases for the coupled model, each with a different cross-section database. All these cases were executed with the coupled model, producing 100 different output files for the control-rod-drop transient, with emphasis on the nominal power, for which an uncertainty analysis was performed and the uncertainty band generated. With this analysis it is possible to observe the ranges of the selected responses as the chosen uncertainty parameters vary. The sensitivity analysis complements the uncertainty analysis by identifying the parameter or parameters with the most influence on the results, so that attention can focus on these parameters in order to better understand their effects. Since the model is not based on real operating data, the main value of this work is in demonstrating how the methodology for sensitivity and uncertainty analyses is applied. (Author)
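
    The sample-and-propagate loop described (assign PDFs to the parameters, run many perturbed cases, extract a percentile band) can be sketched in Python; run_coupled_model below is a hypothetical stand-in for an actual Trace/Parcs execution, and the PDFs are illustrative:

        import numpy as np

        rng = np.random.default_rng(42)
        N_CASES, N_PARAMS = 100, 22          # as in the study: 100 cases, 22 parameters

        # Illustrative PDFs: relative perturbations, normal with mean 1 and 3% sd
        samples = rng.normal(loc=1.0, scale=0.03, size=(N_CASES, N_PARAMS))

        def run_coupled_model(perturbations):
            """Placeholder for one coupled transient run; returns a
            nominal-power time trace for this cross-section perturbation set."""
            t = np.linspace(0.0, 10.0, 200)
            return (1.0 + 0.1 * (perturbations.mean() - 1.0)) * np.exp(-0.05 * t)

        traces = np.array([run_coupled_model(s) for s in samples])

        # Uncertainty band: pointwise 5th/95th percentiles across the 100 runs
        band_lo = np.percentile(traces, 5, axis=0)
        band_hi = np.percentile(traces, 95, axis=0)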

  18. mRNA translation and protein synthesis: an analysis of different modelling methodologies and a new PBN based approach.

    Science.gov (United States)

    Zhao, Yun-Bo; Krishnan, J

    2014-02-27

    mRNA translation involves simultaneous movement of multiple ribosomes on the mRNA and is also subject to regulatory mechanisms at different stages. Translation can be described by various codon-based models, including ODE, TASEP, and Petri net models. Although such models have been extensively used, the overlap and differences between these models, and the implications of each model's assumptions, have not been systematically elucidated. The selection of the most appropriate modelling framework, and the most appropriate way to develop coarse-grained/fine-grained models in different contexts, is not clear. We systematically analyze and compare how different modelling methodologies can be used to describe translation. We define various statistically equivalent codon-based simulation algorithms and analyze the importance of the update rule in determining the steady state, an aspect often neglected. Then a novel probabilistic Boolean network (PBN) model is proposed for modelling translation, which enjoys an exact numerical solution. This solution matches those of numerical simulation from other methods and acts as a complementary tool to analytical approximations and simulations. The advantages and limitations of various codon-based models are compared and illustrated by examples with real biological complexities such as slow codons, premature termination and feedback regulation. Our studies reveal that while different models give broadly similar trends in many cases, important differences also arise and can be clearly seen in the dependence of the translation rate on different parameters. Furthermore, the update rule affects the steady state solution. The codon-based models are based on different levels of abstraction. Our analysis suggests that a multiple-model approach to understanding translation allows one to ascertain which aspects of the conclusions are robust with respect to the choice of modelling methodology, and when (and why) important differences may arise.
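
    As a concrete illustration of a codon-based stochastic simulation, and of the kind of update rule whose choice the paper argues matters, here is a minimal random-sequential TASEP sketch (rates and lattice length are illustrative; the paper's PBN formulation is not reproduced here):

        import random

        def tasep_step(lattice, alpha=0.3, beta=0.5):
            """One random-sequential update sweep of a TASEP ribosome lattice.
            lattice[i] == 1 means codon i is occupied by a ribosome."""
            L = len(lattice)
            for _ in range(L + 1):
                i = random.randint(-1, L - 1)        # -1 encodes the entry step
                if i == -1:                           # initiation
                    if random.random() < alpha and lattice[0] == 0:
                        lattice[0] = 1
                elif i == L - 1:                      # termination
                    if lattice[i] == 1 and random.random() < beta:
                        lattice[i] = 0
                elif lattice[i] == 1 and lattice[i + 1] == 0:
                    lattice[i], lattice[i + 1] = 0, 1  # elongation by one codon
            return lattice

        sites = [0] * 50
        for _ in range(10_000):
            tasep_step(sites)
        print("steady-state ribosome density ≈", sum(sites) / len(sites))

    Replacing the random-sequential sweep with a fully parallel update would, per the paper's point, generally yield a different steady state.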

  19. An Improved Unsupervised Modeling Methodology For Detecting Fraud In Vendor Payment Transactions

    National Research Council Canada - National Science Library

    Rouillard, Gregory

    2003-01-01

    ...) vendor payment transactions through Unsupervised Modeling (cluster analysis). Clementine Data Mining software is used to construct unsupervised models of vendor payment data using the K-Means, Two Step, and Kohonen algorithms...
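
    A minimal scikit-learn stand-in for the clustering step (Clementine is proprietary, and the feature columns here are invented for illustration): cluster the payments, then flag the records farthest from their own cluster centre for manual fraud review.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        # Hypothetical payment features: amount, days-to-pay, line items, vendor age
        X = np.random.default_rng(0).lognormal(size=(1000, 4))

        Xs = StandardScaler().fit_transform(X)           # K-Means needs scaled inputs
        km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(Xs)

        # Distance of each transaction to its assigned cluster centre
        dist = np.linalg.norm(Xs - km.cluster_centers_[km.labels_], axis=1)

        suspects = np.argsort(dist)[-20:]                # most isolated payments
        print(suspects)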

  20. Methodological notes on model comparisons and strategy classification: A falsificationist proposition

    OpenAIRE

    Morten Moshagen; Benjamin E. Hilbig

    2011-01-01

    Taking a falsificationist perspective, the present paper identifies two major shortcomings of existing approaches to comparative model evaluations in general and strategy classifications in particular. These are (1) failure to consider systematic error and (2) neglect of global model fit. Using adherence measures to evaluate competing models implicitly makes the unrealistic assumption that the error associated with the model predictions is entirely random. By means of simple schematic example...

  1. Housing Value Projection Model Related to Educational Planning: The Feasibility of a New Methodology. Final Report.

    Science.gov (United States)

    Helbock, Richard W.; Marker, Gordon

    This study concerns the feasibility of a Markov chain model for projecting housing values and racial mixes. Such projections could be used in planning the layout of school districts to achieve desired levels of socioeconomic heterogeneity. Based upon the concepts and assumptions underlying a Markov chain model, it is concluded that such a model is…
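
    A Markov-chain projection of this kind reduces to repeated multiplication of a state vector by a transition matrix; a minimal sketch with invented housing-value bands (the study's actual states and probabilities are not given here):

        import numpy as np

        # Hypothetical annual transition probabilities between three
        # housing-value bands (rows: current band, columns: next band)
        P = np.array([[0.85, 0.12, 0.03],
                      [0.10, 0.80, 0.10],
                      [0.02, 0.13, 0.85]])

        v = np.array([0.3, 0.5, 0.2])    # current share of dwellings per band
        for year in range(10):           # project a decade ahead
            v = v @ P
        print(v)                         # projected distribution after 10 years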

  2. Modelling extrudate expansion in a twin-screw food extrusion cooking process through dimensional analysis methodology

    DEFF Research Database (Denmark)

    Cheng, Hongyuan; Friis, Alan

    2010-01-01

    and temperature, are formed to model the extrusion process from dimensional analysis. The model is evaluated with experimental data for extrusion of whole wheat flour and fish feed. The average deviations of the model correlations are 5.9% and 9% based on experimental data for the whole wheat flour and fish feed...

  3. Homogenized modeling methodology for 18650 lithium-ion battery module under large deformation

    Science.gov (United States)

    Tang, Liang; Cheng, Pengle

    2017-01-01

    Effective lithium-ion battery module modeling has become a bottleneck for full-size electric vehicle crash safety numerical simulation. Modeling every single cell in detail would be costly. However, computational accuracy could be lost if the module is modeled by using a simple bulk material or rigid body. To solve this critical engineering problem, a general method to establish a computational homogenized model for the cylindrical battery module is proposed. A single battery cell model is developed and validated through radial compression and bending experiments. To analyze the homogenized mechanical properties of the module, a representative unit cell (RUC) is extracted with the periodic boundary condition applied on it. An elastic–plastic constitutive model is established to describe the computational homogenized model for the module. Two typical packing modes, i.e., cubic dense packing and hexagonal packing for the homogenized equivalent battery module (EBM) model, are targeted for validation compression tests, as well as the models with detailed single cell description. Further, the homogenized EBM model is confirmed to agree reasonably well with the detailed battery module (DBM) model for different packing modes with a length scale of up to 15 × 15 cells and 12% deformation where the short circuit takes place. The suggested homogenized model for battery module makes way for battery module and pack safety evaluation for full-size electric vehicle crashworthiness analysis. PMID:28746390

  4. Homogenized modeling methodology for 18650 lithium-ion battery module under large deformation.

    Directory of Open Access Journals (Sweden)

    Liang Tang

    Effective lithium-ion battery module modeling has become a bottleneck for full-size electric vehicle crash safety numerical simulation. Modeling every single cell in detail would be costly. However, computational accuracy could be lost if the module is modeled by using a simple bulk material or rigid body. To solve this critical engineering problem, a general method to establish a computational homogenized model for the cylindrical battery module is proposed. A single battery cell model is developed and validated through radial compression and bending experiments. To analyze the homogenized mechanical properties of the module, a representative unit cell (RUC) is extracted with the periodic boundary condition applied on it. An elastic-plastic constitutive model is established to describe the computational homogenized model for the module. Two typical packing modes, i.e., cubic dense packing and hexagonal packing for the homogenized equivalent battery module (EBM) model, are targeted for validation compression tests, as well as the models with detailed single cell description. Further, the homogenized EBM model is confirmed to agree reasonably well with the detailed battery module (DBM) model for different packing modes with a length scale of up to 15 × 15 cells and 12% deformation where the short circuit takes place. The suggested homogenized model for battery module makes way for battery module and pack safety evaluation for full-size electric vehicle crashworthiness analysis.
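
    The elastic-plastic constitutive idea can be illustrated in one dimension with a classic return-mapping step (linear isotropic hardening; all moduli are illustrative, not the paper's calibrated module properties):

        def return_map_1d(strain_inc, state, E=2.0e9, H=1.0e8, sigma_y0=5.0e6):
            """One strain-driven step of 1-D elastoplasticity with linear
            isotropic hardening. state = (stress, plastic_strain, alpha)."""
            sigma, eps_p, alpha = state
            sigma_trial = sigma + E * strain_inc               # elastic predictor
            f = abs(sigma_trial) - (sigma_y0 + H * alpha)      # yield check
            if f <= 0.0:
                return sigma_trial, eps_p, alpha               # purely elastic step
            dgamma = f / (E + H)                               # plastic corrector
            sign = 1.0 if sigma_trial > 0 else -1.0
            sigma = sigma_trial - E * dgamma * sign
            return sigma, eps_p + dgamma * sign, alpha + dgamma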

  5. Modeling and optimizing inhibitory activities of Nelumbinis folium extract on xanthine oxidase using response surface methodology.

    Science.gov (United States)

    Sang, Mangmang; Du, Guangyan; Hao, Jia; Wang, Linlin; Liu, Erwei; Zhang, Yi; Wang, Tao; Gao, Xiumei; Han, Lifeng

    2017-05-30

    Xanthine oxidase (XOD), which oxidizes hypoxanthine to xanthine and then to uric acid, is a key enzyme in the pathogenesis of hyperuricemia and a well-known target for drug development to treat gout. In our study, the total alkaloids of Nelumbinis folium markedly inhibited XOD activity, with an IC50 value of 3.313 μg/mL. UHPLC-Q-TOF-MS and 3D docking analysis indicated that roemerine was a potential active ingredient. A response surface methodology combined with a central composite design experiment was further developed and validated for the optimization of the reaction conditions between the total alkaloids of Nelumbinis folium and XOD, which can be considered a meaningful approach for the rapid and sensitive development of XOD inhibitors. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. A Model-Based Methodology for Integrated Design and Operation of Reactive Distillation Processes

    DEFF Research Database (Denmark)

    Mansouri, Seyed Soheil; Sales-Cruz, Mauricio; Huusom, Jakob Kjøbsted

    2015-01-01

    calculation of reactive bubble points. For an energy-efficient design, the driving-force approach (to determine the optimal feed location) for a reactive system has been employed. For both the reactive McCabe-Thiele and the driving-force method, vapor-liquid equilibrium data are based on elements. The reactive bubble point algorithm is used to compute the reactive vapor-liquid equilibrium data set. The operation of the RDC at the highest driving force and other candidate points is compared through open-loop and closed-loop analysis. By application of this methodology it is shown that designing the process at the maximum driving force results in an energy-efficient and operable design. It is verified that the reactive distillation design option is less sensitive to disturbances in the feed at the highest driving force and has the inherent ability to reject disturbances.
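
    For a binary mixture with constant relative volatility, the driving-force calculation underlying the feed-location choice is short enough to sketch; this is a generic illustration of the driving-force idea, not the element-based reactive VLE used in the paper:

        import numpy as np

        alpha = 2.5                                 # illustrative relative volatility
        x = np.linspace(0.0, 1.0, 501)              # liquid mole fraction
        y = alpha * x / (1.0 + (alpha - 1.0) * x)   # equilibrium vapour fraction
        F = y - x                                   # driving force

        i = np.argmax(F)
        print(f"max driving force {F[i]:.3f} at x = {x[i]:.3f}")
        # Analytical check for the binary case: x* = (sqrt(alpha) - 1) / (alpha - 1)
        print((alpha ** 0.5 - 1.0) / (alpha - 1.0))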

  7. Getting Started with Netduino

    CERN Document Server

    Walker, Chris

    2012-01-01

    Start building electronics projects with Netduino, the popular open source hardware platform that's captured the imagination of makers and hobbyists worldwide. This easy-to-follow book provides the step-by-step guidance you need to experiment with Netduino and the .NET Micro Framework. Through a set of simple projects, you'll learn how to create electronic gadgets, including networked devices that communicate over TCP/IP. Along the way, hobbyists will pick up the basics of .NET programming, and programmers will discover how to work with electronics and microcontrollers. Follow the projects in

  8. Getting Started with Processing

    CERN Document Server

    Reas, Casey

    2010-01-01

    Learn computer programming the easy way with Processing, a simple language that lets you use code to create drawings, animation, and interactive graphics. Programming courses usually start with theory, but this book lets you jump right into creative and fun projects. It's ideal for anyone who wants to learn basic programming, and serves as a simple introduction to graphics for people with some programming skills. Written by the founders of Processing, this book takes you through the learning process one step at a time to help you grasp core programming concepts. You'll learn how to sketch wi

  9. Getting Started with Roo

    CERN Document Server

    Long, Josh

    2011-01-01

    Spring Roo goes a step beyond the Spring Framework by bringing true Rapid Application Development to Java-just as Grails has done with Groovy. This concise introduction shows you how to build applications with Roo, using the framework's shell as an intelligent and timesaving code-completion tool. It's an ideal RAD tool because Roo does much of the tedious code maintenance. You'll get started by building a simple customer relationship management application, complete with step-by-step instructions and code examples. Learn how to control any part of the application with Roo's opt-in feature, w

  10. Getting started with Arduino

    CERN Document Server

    Banzi, Massimo

    2011-01-01

    Arduino is the open-source electronics prototyping platform that's taken the design and hobbyist world by storm. This thorough introduction, updated for Arduino 1.0, gives you lots of ideas for projects and helps you work with them right away. From getting organized to putting the final touches on your prototype, all the information you need is here! Inside, you'll learn about: interaction design and physical computing; the Arduino hardware and software development environment; basics of electricity and electronics; prototyping on a solderless breadboard; drawing a schematic diagram. Getting started

  11. En god start

    DEFF Research Database (Denmark)

    Sievertsen, Hans Henrik

    In Denmark it is possible to deviate from the rule that a child must start school in the calendar year in which the child turns six. This is done by 10-15 percent of a cohort, while 80-90 percent of children follow the norm, and 2-3 percent start school a year earlier than the norm, according to an analysis based on children born in...

  12. Methodological Development of the Probabilistic Model of the Safety Assessment of Hontomin P.D.T

    International Nuclear Information System (INIS)

    Hurtado, A.; Eguilior, S.; Recreo, F.

    2011-01-01

    In the framework of CO2 capture and geological storage, risk analysis plays an important role, because it is an essential input to the local, national and supranational definition and planning of carbon injection strategies. Every project carries a risk of failure; even from the early stages, the possible causes of this risk should be taken into account and corrective methods proposed along the process, i.e., the risk should be managed. Proper risk management reduces the negative consequences arising from the project. The main route to reducing or neutralizing risk is its identification, measurement and evaluation, together with the development of decision rules. This report presents the methodology developed for risk analysis and the results of its application. The risk assessment requires determination of the random variables that will influence the functioning of the system. It is very difficult to set up the probability distribution of a random variable in the classical sense (objective probability) when a particular event occurs rarely or is still incompletely characterized. In this situation, we have to determine subjective probabilities, especially at an early stage of a project, when not enough information about the system is available. These subjective probabilities are constructed from expert judgement, estimating the chance that certain random events could happen depending on the geological features of the area of application. The proposed methodology is based on the application of Bayesian probabilistic networks for estimating the probability of leakage. These networks graphically define the dependence relations between the variables and represent the joint probability function through a local factorization of probability functions. (Author) 98 refs.
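
    To make the local factorization concrete, here is a toy three-node network for leakage risk evaluated by brute-force enumeration; the structure and numbers are invented for illustration and are not Hontomin results:

        # Toy Bayesian network: FaultZone -> Leak <- SealQuality
        P_fault = {True: 0.2, False: 0.8}      # prior on a conductive fault zone
        P_seal = {True: 0.9, False: 0.1}       # prior on good caprock seal
        # P(leak | fault, seal): conditional probability table
        P_leak = {(True, True): 0.05, (True, False): 0.60,
                  (False, True): 0.01, (False, False): 0.20}

        # Joint factorization: P(f, s, leak) = P(f) * P(s) * P(leak | f, s)
        p_leak = sum(P_fault[f] * P_seal[s] * P_leak[(f, s)]
                     for f in (True, False) for s in (True, False))
        print(f"P(leak) = {p_leak:.4f}")       # 0.0442 with these toy numbers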

  13. Modelling marine sediment biogeochemistry: Current knowledge gaps, challenges, and some methodological advice for advancement

    DEFF Research Database (Denmark)

    Lessin, Gennadi; Artioli, Yuri; Almroth-Rosell, Elin

    2018-01-01

    A three-pronged approach for the advancement of benthic and benthic-pelagic modelling, essential for improved understanding, management and prediction of the marine environment. This includes: (A) development of a traceable and hierarchical framework for benthic-pelagic models, which will facilitate integration among models, reduce risk of bias, and clarify model limitations; (B) an extended cross-disciplinary approach to promote effective collaboration between modelling and empirical scientists of various backgrounds and better involvement of stakeholders and end-users; (C) a common vocabulary for terminology used...

  14. CONCEPTUAL AND METHODOLOGICAL MISTAKES IN PSYCHOLOGY AND HEALTH: A CASE STUDY ON THE USE AND ABUSE OF STRUCTURAL EQUATION MODELLING

    Directory of Open Access Journals (Sweden)

    Julio Alfonso Piña López

    2016-09-01

    In this article, a research paper is analysed which was justified on the basis of the theory of developmental psychopathology, protective factors, self-regulation, resilience, and quality of life among individuals living with type 2 diabetes and hypertension. Structural equation modelling (SEM) was used for the data analysis. Although the authors conclude that the data fit the theory tested, they commit errors of logic, concept, methodology and interpretation which, taken together, demonstrate a flagrant rupture between the theory and the data.

  15. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements

    KAUST Repository

    Zambri, Brian

    2015-11-05

    Our aim is to propose a numerical strategy for accurately and efficiently retrieving the biophysiological parameters, as well as the external stimulus characteristics, of the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.

  16. Modeling of the effect of freezer conditions on the hardness of ice cream using response surface methodology.

    Science.gov (United States)

    Inoue, K; Ochi, H; Habara, K; Taketsuka, M; Saito, H; Ichihashi, N; Iwatsuki, K

    2009-12-01

    The effect of conventional continuous freezer parameters [mix flow (L/h), overrun (%), drawing temperature (°C), cylinder pressure (kPa), and dasher speed (rpm)] on the hardness of ice cream under varying measured temperatures (-5, -10, and -15°C) was investigated systematically using response surface methodology (central composite face-centered design), and the relationships were expressed as statistical models. The range (maximum and minimum values) of each freezer parameter was set according to the actual capability of the conventional freezer and applicability to the manufacturing process. Hardness was measured using a penetrometer. These models showed that overrun and drawing temperature had significant effects on hardness. The models can be used to optimize freezer conditions to make ice cream of the least possible hardness under the highest overrun (120%) and a drawing temperature of approximately -5.5°C (slightly warmer than the lowest drawing temperature of -6.5°C) within the range of this study. With reference to the structural elements of the ice cream, we suggest that the volume of overrun and the ice crystal content, ice crystal size, and fat globule destabilization affect the hardness of ice cream. In addition, the combination of a simple instrumental parameter and response surface methodology allows us to show the relation between freezer conditions and one of the most important properties, hardness, visually and quantitatively on the practical level.
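
    The statistical models in such a design are typically second-order polynomials fitted by least squares; a minimal two-factor sketch in coded variables, with synthetic data rather than the paper's measurements or coefficients:

        import numpy as np

        rng = np.random.default_rng(1)
        # Coded levels (-1, 0, +1) for two factors, e.g. overrun and drawing temperature
        X1, X2 = np.meshgrid([-1, 0, 1], [-1, 0, 1])
        x1, x2 = X1.ravel(), X2.ravel()
        hardness = 50 - 8*x1 + 6*x2 + 2*x1*x2 + 3*x1**2 + rng.normal(0, 0.5, x1.size)

        # Design matrix for the full quadratic response surface model
        A = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
        coef, *_ = np.linalg.lstsq(A, hardness, rcond=None)
        print(dict(zip(["b0", "b1", "b2", "b12", "b11", "b22"], coef.round(2))))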

  17. Towards A Model-based Prognostics Methodology for Electrolytic Capacitors: A Case Study Based on Electrical Overstress Accelerated Aging

    Directory of Open Access Journals (Sweden)

    Gautam Biswas

    2012-12-01

    This paper presents a model-driven methodology for predicting the remaining useful life of electrolytic capacitors. This methodology adopts a Kalman filter approach in conjunction with an empirical state-based degradation model to predict the degradation of capacitor parameters through the life of the capacitor. Electrolytic capacitors are important components of systems that range from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability and, given their critical role in the system, they are good candidates for component-level prognostics and health management. Prognostics provides a way to assess the remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. This paper proposes an empirical degradation model and discusses experimental results for an accelerated aging test performed on a set of identical capacitors subjected to electrical stress. The data form the basis for developing the Kalman-filter-based remaining-life prediction algorithm.
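
    A minimal scalar sketch of the filter-plus-degradation-model idea (a drifting degradation state tracked by a Kalman filter on synthetic measurements; the drift, noise levels and failure threshold are illustrative, not the paper's empirical model):

        import numpy as np

        rng = np.random.default_rng(3)
        Q, R = 1e-6, 1e-4           # process / measurement noise (illustrative)
        drift = -5e-4               # assumed capacitance loss per step
        x_est, P = 1.0, 1e-3        # normalized capacitance estimate and variance
        FAIL = 0.8                  # end-of-life threshold (20% capacitance loss)

        truth = 1.0
        for k in range(200):
            truth += drift + rng.normal(0, Q ** 0.5)
            z = truth + rng.normal(0, R ** 0.5)      # noisy capacitance measurement
            x_pred, P_pred = x_est + drift, P + Q    # Kalman predict
            K = P_pred / (P_pred + R)                # Kalman gain
            x_est = x_pred + K * (z - x_pred)        # Kalman correct
            P = (1 - K) * P_pred

        # Remaining useful life: steps until the estimate crosses the threshold
        rul = max(0.0, (x_est - FAIL) / -drift)
        print(f"estimated RUL ≈ {rul:.0f} steps")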

  18. Non-LTE modeling of the radiative properties of high-Z plasma using linear response methodology

    Science.gov (United States)

    Foord, Mark; Harte, Judy; Scott, Howard

    2017-10-01

    Non-local thermodynamic equilibrium (NLTE) atomic processes play a key role in the radiation flow and energetics of the highly ionized, high-temperature plasmas encountered in inertial confinement fusion (ICF) and astrophysical applications. Modeling complex high-Z atomic systems, such as the gold used in ICF hohlraums, is particularly challenging given the complexity and intractable number of atomic states involved. Practical considerations, i.e. speed and memory, in large radiation-hydrodynamic simulations further limit model complexity. We present here a methodology for utilizing tabulated NLTE radiative and EOS properties in our radiation-hydrodynamic codes. This approach uses tabulated data, previously calculated with complex atomic models, modified to include a general non-Planckian radiation field using a linear response methodology. This approach extends the near-LTE response method to conditions far from LTE. Comparisons of this tabular method with in-line NLTE simulations of a laser-heated 1-D hohlraum will be presented, which show good agreement in the time evolution of the plasma conditions. This work was performed under the auspices of the U.S. Dept. of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  19. New statistical methodology, mathematical models, and data bases relevant to the assessment of health impacts of energy technologies

    International Nuclear Information System (INIS)

    Ginevan, M.E.; Collins, J.J.; Brown, C.D.; Carnes, B.A.; Curtiss, J.B.; Devine, N.

    1981-01-01

    The present research develops new statistical methodology, mathematical models, and data bases of relevance to the assessment of health impacts of energy technologies, and uses these to identify, quantify, and predict adverse health effects of energy-related pollutants. Efforts are in five related areas including: (1) evaluation and development of statistical procedures for the analysis of death rate data, disease incidence data, and large scale data sets; (2) development of dose response and demographic models useful in the prediction of the health effects of energy technologies; (3) application of our methods and models to analyses of the health risks of energy production; (4) a reanalysis of the Tri-State leukemia survey data, focusing on the relationship between myelogenous leukemia risk and diagnostic x-ray exposure; and (5) investigation of human birth weights as a possible early warning system for the effects of environmental pollution

  20. Analysis of methodology for designing education and training model for professional development in the field of radiation technology

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kon Wuk; Lee, Jae Hun; Park, Tai Jin; Song, Myung Jae [Korean Association for Radiation Application, Seoul (Korea, Republic of)

    2015-02-15

    Domestic radiation technology is integrated into and utilized in various areas and is closely related to industrial growth in Korea. The domestic use of radiation and radioisotopes (RI) increases in quantity every year; however, the level of technology is poor compared to other developed countries. Manpower training is essential for the development of radiation technology. Therefore, this study aimed to propose a methodology for designing a systematic education and training model in the field of measurement and analysis of radiation. A survey was conducted to design the education and training model, and the training program for measurement and analysis of radiation was developed based on the survey results. The education and training program designed in this study will be utilized as a model for evaluating professional development and effective recruitment of the professional workforce, and can be further applied to other radiation-related fields.

  1. Analysis of methodology for designing education and training model for professional development in the field of radiation technology

    International Nuclear Information System (INIS)

    Kim, Kon Wuk; Lee, Jae Hun; Park, Tai Jin; Song, Myung Jae

    2015-01-01

    Domestic radiation technology is integrated into and utilized in various areas and is closely related to industrial growth in Korea. The domestic use of radiation and radioisotopes (RI) increases in quantity every year; however, the level of technology is poor compared to other developed countries. Manpower training is essential for the development of radiation technology. Therefore, this study aimed to propose a methodology for designing a systematic education and training model in the field of measurement and analysis of radiation. A survey was conducted to design the education and training model, and the training program for measurement and analysis of radiation was developed based on the survey results. The education and training program designed in this study will be utilized as a model for evaluating professional development and effective recruitment of the professional workforce, and can be further applied to other radiation-related fields.

  2. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine; Freestate, David; Riley, Cameron; Hobbs, William

    2016-11-01

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.
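
    For context, the simplest of the transposition models being bypassed is the isotropic-sky (Liu-Jordan) form, which splits POA irradiance into beam, sky-diffuse and ground-reflected terms; a sketch (the angle-of-incidence cosine is assumed precomputed by the caller):

        import math

        def poa_isotropic(dni, dhi, ghi, cos_aoi, tilt_deg, albedo=0.2):
            """Isotropic-sky transposition: beam + sky-diffuse + ground-reflected.
            Returns plane-of-array irradiance in the same units as the inputs."""
            tilt = math.radians(tilt_deg)
            beam = dni * max(cos_aoi, 0.0)
            sky = dhi * (1.0 + math.cos(tilt)) / 2.0
            ground = ghi * albedo * (1.0 - math.cos(tilt)) / 2.0
            return beam + sky + ground

    Using measured POA data directly, as the paper describes, removes the error this kind of model introduces.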

  3. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine; Freestate, David; Hobbs, William; Riley, Cameron

    2016-11-21

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.

  4. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine; Freestate, David; Hobbs, William; Riley, Cameron

    2016-06-05

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.

  5. Modeling of the re-starting of waxy crude oil flows in pipelines; Modelisation du redemarrage des ecoulements de bruts paraffiniques dans les conduites petrolieres

    Energy Technology Data Exchange (ETDEWEB)

    Vinay, G.

    2005-11-15

    Pipelining crude oils that contain large proportions of paraffins can cause many specific difficulties. These oils, known as waxy crude oils, usually exhibit a high 'pour point', often above the external temperature conditions surrounding the pipeline. During a shutdown, as the temperature decreases in the pipeline, a gel-like structure builds up, and the main difficulty concerns restarting the flow. This PhD thesis attempts to improve the understanding of waxy crude oil behaviour through experiments, modelling and numerical simulation, in order to predict more accurately the time and pressure required to restart the flow. Drawing on various contributions to the literature, waxy crude oils are described as viscoplastic, thixotropic and compressible fluids. Strong temperature-history dependence plays a prevailing role in the whole shutdown and restart process. Thus, waxy crude oils under flowing conditions correspond to the non-isothermal flow of a viscoplastic material with temperature-dependent rheological properties. The restart of a waxy crude oil is then simulated as the isothermal transient flow of a weakly compressible thixotropic fluid in an axisymmetric pipe geometry. We retain the Houska model to describe the thixotropic/viscoplastic features of the fluid, and compressibility is introduced in the continuity equation. The viscoplastic constitutive equation is handled using an augmented Lagrangian method, and the resulting equivalent saddle-point problem is solved with an Uzawa-like algorithm. The governing equations are discretized using a Finite Volume method and the convection terms are treated with a TVD (Total Variation Diminishing) scheme. The Lagrangian functional technique usually used for incompressible viscoplastic flows is adapted to compressible situations. Several numerical results attest to the good convergence properties of the proposed transient algorithm. The non-isothermal results highlight the strong sensitivity of
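
    The Houska model referred to couples a Herschel-Bulkley-type stress law to a structure parameter λ; in its commonly cited form (stated here from the general rheology literature, not taken from the thesis itself):

        \tau = \left(\tau_{y0} + \tau_{y1}\,\lambda\right)
             + \left(K + \Delta K\,\lambda\right)\dot{\gamma}^{\,n},
        \qquad
        \frac{d\lambda}{dt} = a\,(1-\lambda) - b\,\lambda\,\dot{\gamma}^{\,m},

    where λ ∈ [0, 1] tracks the gel structure: λ ≈ 1 for a fully structured gel after a long shutdown, decaying toward an equilibrium value under steady shear, which is what makes the restart pressure history-dependent.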

  6. Improved Conceptual Models Methodology (ICoMM) for Validation of Non-Observable Systems

    Science.gov (United States)

    2015-12-01

    engineering (SE) and systems architecture (SA) methods during the model development process (MDP). An MDP is used to ensure that the models are...validated and represent the real world as accurately as possible. There are several varieties of MDPs presented in the literature, but all share the...early in the MDP for face validation. A well-constructed CoM supports model exploration of NOS when operational validation is not feasible. This

  7. Quantitative Developments of Biomolecular Databases, Measurement Methodology, and Comprehensive Transport Models for Bioanalytical Microfluidics

    Science.gov (United States)

    2006-10-01

    chemistry models (beads and surfaces)[38]; M11. Biochemistry database integrated with electrochemistry; M12. Hydrogel models for surface biochemistry[30]; M13. Least square-based engine for extraction of kinetic coefficients[38]; M14. Rapid ANN ...bacteria and λ-phage DNA. This device relies on the balance between electroosmotic flow and DEP force on suspended particles. In another application...

  8. Development of trip coverage analysis methodology - CATHENA trip coverage analysis model

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jong Ho; Ohn, M. Y.; Cho, C. H.; Huh, J. Y.; Na, Y. H.; Lee, S. Y.; Kim, B. G.; Kim, H. H.; Kim, S. W.; Bae, C. J.; Kim, T. M.; Kim, S. R.; Han, B. S.; Moon, B. J.; Oh, M. T. [Korea Power Engineering Co., Yongin (Korea)

    2001-05-01

    This report describes the CATHENA model for trip coverage analysis. This model is prepared based on the Wolsong 2 design data and consists of the primary heat transport system, shutdown system, steam and feedwater system, reactor regulating system, heat transport pressure and inventory control system, and steam generator level and pressure control system. The new features and the parts modified from the Wolsong 2 CATHENA LOCA model that are required for trip coverage analysis are described. This model is tested by simulation of steady state at 100% FP and at several low powers. Also, the cases of power rundown and power runup are tested. 17 refs., 124 figs., 19 tabs. (Author)

  9. When to Start Antiretroviral Therapy in Children Aged 2–5 Years: A Collaborative Causal Modelling Analysis of Cohort Studies from Southern Africa

    Science.gov (United States)

    Schomaker, Michael; Egger, Matthias; Ndirangu, James; Phiri, Sam; Moultrie, Harry; Technau, Karl; Cox, Vivian; Giddy, Janet; Chimbetete, Cleophas; Wood, Robin; Gsponer, Thomas; Bolton Moore, Carolyn; Rabie, Helena; Eley, Brian; Muhe, Lulu; Penazzato, Martina; Essajee, Shaffiq; Keiser, Olivia; Davies, Mary-Ann

    2013-01-01

    Background: There is limited evidence on the optimal timing of antiretroviral therapy (ART) initiation in children 2–5 y of age. We conducted a causal modelling analysis using the International Epidemiologic Databases to Evaluate AIDS–Southern Africa (IeDEA-SA) collaborative dataset to determine the difference in mortality when starting ART in children aged 2–5 y immediately (irrespective of CD4 criteria), as recommended in the World Health Organization (WHO) 2013 guidelines, compared to deferring to lower CD4 thresholds, for example, the WHO 2010 recommended threshold of CD4 count <750 cells/mm3 or CD4 percentage (CD4%) <25%. Methods and Findings: ART-naïve children enrolling in HIV care at IeDEA-SA sites who were between 24 and 59 mo of age at first visit and with ≥1 visit prior to ART initiation and ≥1 follow-up visit were included. We estimated mortality for ART initiation at different CD4 thresholds for up to 3 y using g-computation, adjusting for measured time-dependent confounding of CD4 percent, CD4 count, and weight-for-age z-score. Confidence intervals were constructed using bootstrapping. The median (first; third quartile) age at first visit of 2,934 children (51% male) included in the analysis was 3.3 y (2.6; 4.1), with a median (first; third quartile) CD4 count of 592 cells/mm3 (356; 895) and median (first; third quartile) CD4% of 16% (10%; 23%). The estimated cumulative mortality after 3 y for ART initiation at different CD4 thresholds ranged from 3.4% (95% CI: 2.1–6.5) (no ART) to 2.1% (95% CI: 1.3%–3.5%) (ART irrespective of CD4 value). Estimated mortality was overall higher when initiating ART at lower CD4 values or not at all. There was no mortality difference between starting ART immediately, irrespective of CD4 value, and ART initiation at the WHO 2010 recommended threshold of CD4 count <750 cells/mm3 or CD4% <25%, with mortality estimates of 2.1% (95% CI: 1.3%–3.5%) and 2.2% (95% CI: 1.4%–3.5%) after 3 y, respectively. The
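
    The bootstrap construction of the confidence intervals mentioned can be sketched generically: resample children with replacement and recompute the estimate each time. Here data is assumed to be a NumPy array of per-child records and estimate_mortality is a hypothetical stand-in for the full g-computation estimator:

        import numpy as np

        def bootstrap_ci(data, estimate_mortality, n_boot=1000, seed=0):
            """Percentile bootstrap 95% CI for a cumulative-mortality estimator."""
            rng = np.random.default_rng(seed)
            n = len(data)
            stats = [estimate_mortality(data[rng.integers(0, n, n)])
                     for _ in range(n_boot)]
            return np.percentile(stats, [2.5, 97.5])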

  10. When to start antiretroviral therapy in children aged 2-5 years: a collaborative causal modelling analysis of cohort studies from southern Africa.

    Science.gov (United States)

    Schomaker, Michael; Egger, Matthias; Ndirangu, James; Phiri, Sam; Moultrie, Harry; Technau, Karl; Cox, Vivian; Giddy, Janet; Chimbetete, Cleophas; Wood, Robin; Gsponer, Thomas; Bolton Moore, Carolyn; Rabie, Helena; Eley, Brian; Muhe, Lulu; Penazzato, Martina; Essajee, Shaffiq; Keiser, Olivia; Davies, Mary-Ann

    2013-11-01

    There is limited evidence on the optimal timing of antiretroviral therapy (ART) initiation in children 2-5 y of age. We conducted a causal modelling analysis using the International Epidemiologic Databases to Evaluate AIDS-Southern Africa (IeDEA-SA) collaborative dataset to determine the difference in mortality when starting ART in children aged 2-5 y immediately (irrespective of CD4 criteria), as recommended in the World Health Organization (WHO) 2013 guidelines, compared to deferring to lower CD4 thresholds, for example, the WHO 2010 recommended threshold of CD4 count <750 cells/mm(3) or CD4 percentage (CD4%) <25%. ART-naïve children enrolling in HIV care at IeDEA-SA sites who were between 24 and 59 mo of age at first visit and with ≥1 visit prior to ART initiation and ≥1 follow-up visit were included. We estimated mortality for ART initiation at different CD4 thresholds for up to 3 y using g-computation, adjusting for measured time-dependent confounding of CD4 percent, CD4 count, and weight-for-age z-score. Confidence intervals were constructed using bootstrapping. The median (first; third quartile) age at first visit of 2,934 children (51% male) included in the analysis was 3.3 y (2.6; 4.1), with a median (first; third quartile) CD4 count of 592 cells/mm(3) (356; 895) and median (first; third quartile) CD4% of 16% (10%; 23%). The estimated cumulative mortality after 3 y for ART initiation at different CD4 thresholds ranged from 3.4% (95% CI: 2.1-6.5) (no ART) to 2.1% (95% CI: 1.3%-3.5%) (ART irrespective of CD4 value). Estimated mortality was overall higher when initiating ART at lower CD4 values or not at all. There was no mortality difference between starting ART immediately, irrespective of CD4 value, and ART initiation at the WHO 2010 recommended threshold of CD4 count <750 cells/mm(3) or CD4% <25%, with mortality estimates of 2.1% (95% CI: 1.3%-3.5%) and 2.2% (95% CI: 1.4%-3.5%) after 3 y, respectively. The analysis was limited by loss to follow

  11. When to start antiretroviral therapy in children aged 2-5 years: a collaborative causal modelling analysis of cohort studies from southern Africa.

    Directory of Open Access Journals (Sweden)

    Michael Schomaker

    2013-11-01

    There is limited evidence on the optimal timing of antiretroviral therapy (ART) initiation in children 2-5 y of age. We conducted a causal modelling analysis using the International Epidemiologic Databases to Evaluate AIDS-Southern Africa (IeDEA-SA) collaborative dataset to determine the difference in mortality when starting ART in children aged 2-5 y immediately (irrespective of CD4 criteria), as recommended in the World Health Organization (WHO) 2013 guidelines, compared to deferring to lower CD4 thresholds, for example, the WHO 2010 recommended threshold of CD4 count <750 cells/mm(3) or CD4 percentage (CD4%) <25%. ART-naïve children enrolling in HIV care at IeDEA-SA sites who were between 24 and 59 mo of age at first visit and with ≥1 visit prior to ART initiation and ≥1 follow-up visit were included. We estimated mortality for ART initiation at different CD4 thresholds for up to 3 y using g-computation, adjusting for measured time-dependent confounding of CD4 percent, CD4 count, and weight-for-age z-score. Confidence intervals were constructed using bootstrapping. The median (first; third quartile) age at first visit of 2,934 children (51% male) included in the analysis was 3.3 y (2.6; 4.1), with a median (first; third quartile) CD4 count of 592 cells/mm(3) (356; 895) and median (first; third quartile) CD4% of 16% (10%; 23%). The estimated cumulative mortality after 3 y for ART initiation at different CD4 thresholds ranged from 3.4% (95% CI: 2.1-6.5) (no ART) to 2.1% (95% CI: 1.3%-3.5%) (ART irrespective of CD4 value). Estimated mortality was overall higher when initiating ART at lower CD4 values or not at all. There was no mortality difference between starting ART immediately, irrespective of CD4 value, and ART initiation at the WHO 2010 recommended threshold of CD4 count <750 cells/mm(3) or CD4% <25%, with mortality estimates of 2.1% (95% CI: 1.3%-3.5%) and 2.2% (95% CI: 1.4%-3.5%) after 3 y, respectively. The analysis was limited by loss to follow-up and

  12. Numerical modeling of the groundwater contaminant transport for the Lake Karachai Area: The methodological approach and the basic two- dimensional regional model

    International Nuclear Information System (INIS)

    Petrov, A.V.; Samsonova, L.M.; Vasil'kova, N.A.; Zinin, A.I.; Zinina, G.A.

    1994-06-01

    Methodological aspects of the numerical modeling of groundwater contaminant transport for the Lake Karachay area are discussed. The main features of the problem are the high degree of non-uniformity of the aquifer in the fractured rock massif, the high density of the waste solutions, and the large volume of input data, both for the parameters of the aquifer (numerous pump tests) and for observations of the processes (long-term observations from the monitoring well grid). The modeling process for constructing the two-dimensional regional model is described, and this model is presented as the basic model for subsequent full three-dimensional modeling in sub-areas of interest. An original, powerful mathematical apparatus and computer codes for finite-difference numerical modeling are used.
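
    As a schematic of the finite-difference machinery involved, here is a 1-D advection-dispersion step with an explicit upwind scheme; the parameters are illustrative, and the actual codes are multi-dimensional and handle density-dependent flow:

        import numpy as np

        nx, dx, dt = 200, 1.0, 0.1       # grid cells, cell size (m), time step (d)
        v, D = 0.5, 1.0                  # pore velocity (m/d), dispersion (m^2/d)
        assert v*dt/dx <= 1 and D*dt/dx**2 <= 0.5   # explicit stability limits

        c = np.zeros(nx)
        c[0] = 1.0                       # fixed-concentration source at the inlet
        for _ in range(1000):
            adv = -v * (c[1:-1] - c[:-2]) / dx               # upwind advection
            disp = D * (c[2:] - 2*c[1:-1] + c[:-2]) / dx**2  # central dispersion
            c[1:-1] += dt * (adv + disp)
            c[0], c[-1] = 1.0, c[-2]     # inlet and open outlet boundary conditions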

  13. Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans

    Science.gov (United States)

    Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj

    2016-06-01

    This work deals with the development of algorithms for physical replication of patient-specific human bones and construction of the corresponding implant/insert RP models, using a reverse engineering approach from non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e. voxel and triangular-facet based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other side, recent advances in Computer Aided Design (CAD) technology provide additional facilities/functions for design, prototyping and manufacturing of any object having freeform surfaces based on boundary representation techniques. This work presents a process for physical replication of 3D rapid prototyping (RP) models of human bone using CAD modeling techniques developed on 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. This point cloud data is used for construction of a 3D CAD model by fitting B-spline curves through the points and then fitting surfaces between these curve networks using swept blend techniques. The same result can also be achieved by generating the triangular mesh directly from the 3D point cloud data, without developing any surface model in commercial CAD software. The STL file generated from the 3D point cloud data is used as the basic input for the RP process. The Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. CT scan data of a metacarpus (human bone) is used as the case study for the generation of the 3D RP model. A 3D physical model of the human bone is generated on a rapid prototyping machine and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in replication of human bone in the medical field.
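
    One lightweight way to see the point-cloud-to-STL step is via a convex hull; this is a deliberate simplification of the Delaunay-based surface extraction described (real anatomy is non-convex and needs a proper surface reconstruction), with a random stand-in point cloud:

        import numpy as np
        from scipy.spatial import ConvexHull

        pts = np.random.default_rng(7).normal(size=(500, 3))   # stand-in point cloud
        hull = ConvexHull(pts)

        with open("bone_hull.stl", "w") as f:                  # minimal ASCII STL writer
            f.write("solid hull\n")
            for tri in hull.simplices:                         # triangle vertex indices
                a, b, c = pts[tri]
                n = np.cross(b - a, c - a)
                n = n / (np.linalg.norm(n) or 1.0)             # unit facet normal
                f.write(f"facet normal {n[0]} {n[1]} {n[2]}\n outer loop\n")
                for v in (a, b, c):
                    f.write(f"  vertex {v[0]} {v[1]} {v[2]}\n")
                f.write(" endloop\nendfacet\n")
            f.write("endsolid hull\n")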

  14. Biokinetics and dosimetry of a hybrid formulation of 99mTc-BN and 99mTc-RGD2 starting from optical images in a murine model

    International Nuclear Information System (INIS)

    Cornejo A, L. G.

    2015-01-01

    This work evaluates the biokinetics and absorbed radiation dose of the hybrid formulation 99m Tc-BN / 99m Tc-RGD 2 in a murine model by optical imaging techniques, using the Xtreme multimodal preclinical in vivo imaging system. Methods: the 99m Tc-BN, 99m Tc-RGD 2 and 99m Tc-BN/ 99m Tc-RGD 2 formulations, with specific recognition for GRPr and the integrins α(v)β(3) and α(v)β(5) respectively, were injected into the tail vein of three nude mice with induced breast cancer tumors (cell line T-47D), and optical images were acquired at different times (5, 10, 20 min, 2 and 24 h) with the Xtreme (Bruker) preclinical multimodal imaging system. Using the Image Processing Toolbox of MATLAB, these images were transformed from RGB format to gray scale and sectioned into five independent images corresponding to the heart, kidneys, bladder and tumor areas. The intensity of each image was computed in counts per pixel, and these intensities were then corrected for background, attenuation and scattering, using factors previously calculated for each phenomenon. Finally, the quantified activity-versus-time values were fitted to a biokinetic model to obtain the number of disintegrations and cumulated activities in each organ. With these data the radiation absorbed dose was calculated using the MIRD methodology. Results: The number of disintegrations and absorbed dose, in MBq h/MBq and mGy/MBq respectively, for the mouse injected with the 99m Tc-BN/ 99m Tc-RGD 2 formulation were: 0.035 ± 0.65E-02 and 0.25E-05 ± 0.46E-07 (heart); 0.393 ± 0.51E-01 and 2.85E-05 ± 3.7E-06 (kidneys); 0.306 ± 0.21E-01 and 2.11E-05 ± 1.45E-06 (bladder); and 0.151 ± 0.19E-01 and 1.09E-05 ± 1.42E-06 (tumor). The number of disintegrations obtained in the kidneys is comparable to that reported by Trinidad B. (2014). Conclusions: Our results demonstrate that, using optical images and an image-analysis code developed in MATLAB, quantitative results can be achieved comparable to the conventional
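
    The last computational step described (fit the time-activity data, integrate to get the cumulated activity, multiply by an S factor) is compact enough to sketch; the activity values and the S factor below are placeholders, not the study's data:

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.array([5/60, 10/60, 20/60, 2.0, 24.0])        # h post-injection
        A = np.array([0.9, 0.8, 0.65, 0.35, 0.02])           # MBq (illustrative)

        mono_exp = lambda t, A0, lam: A0 * np.exp(-lam * t)  # effective clearance
        (A0, lam), _ = curve_fit(mono_exp, t, A, p0=(1.0, 0.5))

        cumulated = A0 / lam        # MBq·h: integral of A(t) from 0 to infinity
        S_VALUE = 1.1e-4            # placeholder S factor, mGy/(MBq·h)
        print(f"cumulated activity = {cumulated:.3f} MBq·h; "
              f"dose ≈ {cumulated * S_VALUE:.2e} mGy")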

  15. Developing a Validation Methodology for Expert-Informed Bayesian Network Models Supporting Nuclear Nonproliferation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    White, Amanda M.; Gastelum, Zoe N.; Whitney, Paul D.

    2014-05-13

    Under the auspices of Pacific Northwest National Laboratory’s Signature Discovery Initiative (SDI), the research team developed a series of Bayesian Network models to assess multi-source signatures of nuclear programs. A Bayesian network is a mathematical model that can be used to marshal evidence to assess competing hypotheses. The purpose of the models was to allow non-expert analysts to benefit from the use of expert-informed mathematical models to assess nuclear programs, because such assessments require significant technical expertise ranging from the nuclear fuel cycle, construction and engineering, imagery analysis, and so forth. One such model developed under this research was aimed at assessing the consistency of open-source information about a nuclear facility with the facility’s declared use. The model incorporates factors such as location, security and safety features among others identified by subject matter experts as crucial to their assessments. The model includes key features, observables and their relationships. The model also provides documentation, which serves as training materials for the non-experts.

  16. Towards a Model and Methodology for Assessing Student Learning Outcomes and Satisfaction

    Science.gov (United States)

    Duque, Lola C.; Weeks, John R.

    2010-01-01

    Purpose: The purpose of this paper is threefold: first, to introduce a conceptual model for assessing undergraduate student learning outcomes and satisfaction that involves concepts drawn from the services marketing and assessment literatures; second, to illustrate the utility of the model as implemented in an academic department (geography)…

  17. The Plumbing of Land Surface Models: Is Poor Performance a Result of Methodology or Data Quality?

    Science.gov (United States)

    Haughton, Ned; Abramowitz, Gab; Pitman, Andy J.; Or, Dani; Best, Martin J.; Johnson, Helen R.; Balsamo, Gianpaolo; Boone, Aaron; Cuntz, Matthias; Decharme, Bertrand

    2016-01-01

    The PALS Land sUrface Model Benchmarking Evaluation pRoject (PLUMBER) illustrated the value of prescribing a priori performance targets in model intercomparisons. It showed that the performance of turbulent energy flux predictions from different land surface models, at a broad range of flux tower sites using common evaluation metrics, was on average worse than that of relatively simple empirical models. For sensible heat fluxes, all land surface models were outperformed by a linear regression against downward shortwave radiation. For latent heat flux, all land surface models were outperformed by a regression against downward shortwave radiation, surface air temperature and relative humidity. These results are explored here in greater detail and possible causes are investigated. We examine whether particular metrics or sites unduly influence the collated results, whether results change according to time-scale aggregation, and whether a lack of energy conservation in flux tower data gives the empirical models an unfair advantage in the intercomparison. We demonstrate that energy conservation in the observational data is not responsible for these results. We also show that the partitioning between sensible and latent heat fluxes in LSMs, rather than the calculation of available energy, is the cause of the original findings. Finally, we present evidence suggesting that the nature of this partitioning problem is likely shared among all contributing LSMs. While we do not find a single candidate explanation for why land surface models perform poorly relative to empirical benchmarks in PLUMBER, we do exclude multiple possible explanations and provide guidance on where future research should focus.
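
    The empirical benchmarks at the centre of PLUMBER are simple regressions against tower forcings. The sketch below reproduces that idea on synthetic data (PLUMBER fitted its regressions out-of-sample across sites, a detail omitted here); every variable and value is illustrative.

```python
# Benchmark sketch: regress a turbulent flux on meteorological forcings.
# Synthetic data stand in for flux-tower observations.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
sw_down = rng.uniform(0, 1000, n)        # downward shortwave (W m-2)
t_air = rng.uniform(270, 310, n)         # air temperature (K)
rel_hum = rng.uniform(10, 100, n)        # relative humidity (%)
qh_obs = 0.15 * sw_down + rng.normal(0, 20, n)   # "observed" sensible heat

def fit_benchmark(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# 1-predictor benchmark for QH (PLUMBER's sensible-heat result)
coef_qh = fit_benchmark(sw_down[:, None], qh_obs)
pred_qh = coef_qh[0] + coef_qh[1] * sw_down
rmse = np.sqrt(np.mean((pred_qh - qh_obs) ** 2))
print(f"QH ~ SWdown: coef = {np.round(coef_qh, 3)}, RMSE = {rmse:.1f} W m-2")

# The 3-predictor benchmark for QLE would stack all three forcings:
X3 = np.column_stack([sw_down, t_air, rel_hum])
```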

  18. Distribution of the Object Oriented Databases. A Viewpoint of the MVDB Model's Methodology and Architecture

    Directory of Open Access Journals (Sweden)

    Marian Pompiliu CRISTESCU

    2008-01-01

    Full Text Available In databases, much work has been done towards extending models with advanced tools such as view technology, schema evolution support, multiple classification, role modeling and viewpoints. Over the past years, most of the research dealing with multiple object representation and evolution has proposed enriching the monolithic vision of the classical object approach, in which an object belongs to one class hierarchy. In particular, integrating the viewpoint mechanism into the conventional object-oriented data model gives it flexibility and improves the modeling power of objects. The viewpoint paradigm refers to the multiple descriptions, the distribution, and the evolution of objects. It can also be an undeniable contribution to the distributed design of complex databases. The motivation of this paper is to define an object data model integrating viewpoints in databases and to present a federated database architecture integrating multiple viewpoint sources following a local-as-extended-view data integration approach.

  19. Study on uncertainty evaluation methodology related to hydrological parameter of regional groundwater flow analysis model

    International Nuclear Information System (INIS)

    Sakai, Ryutaro; Munakata, Masahiro; Ohoka, Masao; Kameya, Hiroshi

    2009-11-01

    In the safety assessment for geological disposal of radioactive waste, it is important to develop a methodology for the long-term estimation of regional groundwater flow, from data acquisition through to numerical analysis. The uncertainties associated with estimating regional groundwater flow concern both the parameters and the hydrogeological evolution. Parameter uncertainties include measurement errors and heterogeneity. The authors discuss the uncertainties of hydraulic conductivity as a significant parameter for regional groundwater flow analysis. This study suggests that the hydraulic conductivity of a rock mass is controlled by rock characteristics such as fractures and porosity and by test conditions such as hydraulic gradient, water quality and water temperature, and that hydraulic conductivity can vary by more than a factor of ten depending on differences in test conditions such as hydraulic gradient, or on rock characteristics such as fracturing and porosity. In addition, this study demonstrates that confining pressure changes caused by uplift and subsidence, and changes of hydraulic gradient under the long-term evolution of the hydrogeological environment, could produce variations of more than a factor of ten in hydraulic conductivity. It is also shown that the effect of water quality changes on hydraulic conductivity is not negligible, and that the replacement of fresh water and saline water caused by sea level change could reduce current hydraulic conductivities to about 0.6 times their value in the case of the Horonobe site. (author)

  20. A MODEL OF ANALYSIS IN ANALYTICAL METHODOLOGY FOR BIOPHARMACEUTICAL QUALITY CONTROL.

    Science.gov (United States)

    Andrade, Cleyton; de la O Herrera, Miguel; Lemes, Elezer

    2018-02-14

    One key quality control parameter for biopharmaceutical products is the analysis of residual cellular DNA (rcDNA). To determine the small amounts of DNA (around 100 pg) that may be present in a biologically derived drug substance, an analytical method should be sensitive, robust, reliable and accurate. In principle, three techniques have the ability to measure rcDNA: radioactive dot-blot (a type of hybridization), Threshold, and quantitative polymerase chain reaction (qPCR). Quality Risk Management (QRM) is a systematic process for evaluating, controlling and reporting risks that may affect method capabilities, and it supports a scientific and practical approach to decision making. This paper evaluates, by QRM, an alternative approach to assessing the performance risks associated with quality control methods used with biopharmaceuticals, using the Hazard Analysis and Critical Control Points (HACCP) tool. HACCP makes it possible to find the steps in an analytical procedure with the highest impact on method performance. By applying these principles to DNA analysis methods, we concluded that the radioactive dot-blot assay has the largest number of critical control points (CCPs), followed by qPCR and Threshold. From the analysis of hazards (i.e., points of method failure) and the associated procedural CCPs, we concluded that the analytical methodology with the lowest risk of performance failure for rcDNA testing is qPCR. Copyright © 2018, Parenteral Drug Association.

  1. The Healthy Start project

    DEFF Research Database (Denmark)

    Olsen, Nanna J; Buch-Andersen, Tine; Händel, Mina N

    2012-01-01

    …and to intervene not only by improving diet and physical activity, but also by reducing stress and improving sleep quality and quantity. METHODS: Based on information from the Danish national birth registry and administrative birth forms, children were selected based on having either high birth weight, a mother who… …on-going, but it is estimated that 394 children will be included. The intervention took place over on average 1½ years, between 2009 and 2011, and consisted of optional individual guidance on optimizing diet and physical activity habits, reducing chronic stress and stressful events, and improving sleep quality and quantity… The intervention also included participation in cooking classes and play arrangements. Information on dietary intake, meal habits, physical activity, sleep habits, and overall stress level was obtained by 4-7 day questionnaire diaries and objective measurements. DISCUSSION: If the Healthy Start project…

  2. Understanding leachate flow in municipal solid waste landfills by combining time-lapse ERT and subsurface flow modelling - Part II: Constraint methodology of hydrodynamic models.

    Science.gov (United States)

    Audebert, M; Oxarango, L; Duquennoi, C; Touze-Foltz, N; Forquet, N; Clément, R

    2016-09-01

    Leachate recirculation is a key process in the operation of municipal solid waste landfills as bioreactors. To ensure optimal water content distribution, bioreactor operators need tools to design leachate injection systems. Prediction of leachate flow by subsurface flow modelling can provide useful information for the design of such systems. However, hydrodynamic models require additional data to constrain them and to assess hydrodynamic parameters. Electrical resistivity tomography (ERT) is a suitable method for studying leachate infiltration at the landfill scale: it provides spatially distributed information that is useful for constraining hydrodynamic models. However, this geophysical method does not allow ERT users to measure water content in waste directly. The MICS (multiple inversions and clustering strategy) methodology was proposed to delineate the infiltration area precisely during time-lapse ERT surveys, in order to avoid the use of empirical petrophysical relationships, which are not adapted to a medium as heterogeneous as waste. The infiltration shapes and hydrodynamic information extracted with MICS were used to constrain hydrodynamic models in assessing parameters. The constraint methodology developed in this paper was tested on two hydrodynamic models: an equilibrium model, where flow within the waste medium is estimated using a single-continuum approach, and a non-equilibrium model, where flow is estimated using a dual-continuum approach that represents leachate flow in fractures. Finally, this methodology provides insight into the advantages and limitations of hydrodynamic models. Furthermore, we suggest an explanation for the large volume detected by MICS when a small volume of leachate is injected. Copyright © 2016 Elsevier Ltd. All rights reserved.
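
    The clustering step at the heart of a MICS-like workflow can be sketched as follows: classify inverted model cells by their time-lapse resistivity change rather than converting resistivity to water content through a petrophysical law. The data, feature choice and two-cluster setup below are synthetic assumptions, not the authors' implementation.

```python
# Delineate an infiltration bulb by clustering time-lapse resistivity ratios.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic inverted resistivities before/after injection for 500 cells
rho_before = rng.lognormal(mean=3.0, sigma=0.3, size=500)
ratio = np.ones(500)
infiltrated = rng.choice(500, size=80, replace=False)
ratio[infiltrated] = rng.uniform(0.4, 0.7, size=80)   # resistivity drop
rho_after = rho_before * ratio

# Feature: log resistivity ratio (negative where leachate arrived)
feature = np.log(rho_after / rho_before).reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feature)

# The cluster with the lower mean ratio is read as the infiltration area
infil_cluster = np.argmin([feature[labels == k].mean() for k in (0, 1)])
print(f"{(labels == infil_cluster).sum()} cells flagged as infiltrated")
```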

  3. Optimization of Multi-Omic Genome-Scale Models: Methodologies, Hands-on Tutorial, and Perspectives.

    Science.gov (United States)

    Vijayakumar, Supreeta; Conway, Max; Lió, Pietro; Angione, Claudio

    2018-01-01

    Genome-scale metabolic models are valuable tools for assessing the metabolic potential of living organisms. Being downstream of gene expression, metabolism is increasingly being used as an indicator of the phenotypic outcome for drugs and therapies. We here present a review of the principal methods used for constraint-based modelling in systems biology, and explore how the integration of multi-omic data can be used to improve phenotypic predictions of genome-scale metabolic models. We believe that the large-scale comparison of the metabolic response of an organism to different environmental conditions will be an important challenge for genome-scale models. Therefore, within the context of multi-omic methods, we describe a tutorial for multi-objective optimization using the metabolic and transcriptomics adaptation estimator (METRADE), implemented in MATLAB. METRADE uses microarray and codon usage data to model bacterial metabolic response to environmental conditions (e.g., antibiotics, temperatures, heat shock). Finally, we discuss key considerations for the integration of multi-omic networks into metabolic models, towards automatically extracting knowledge from such models.
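
    As a pointer to the constraint-based machinery the review covers, the sketch below solves a toy flux balance analysis (FBA) problem: maximise a "biomass" flux subject to steady-state mass balance S·v = 0 and flux bounds. The three-reaction network is invented for illustration and is unrelated to METRADE itself.

```python
# Toy FBA: one metabolite A, reactions v0 (uptake -> A), v1 (A -> biomass),
# v2 (A -> byproduct). Steady state requires v0 - v1 - v2 = 0.
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, -1.0, -1.0]])            # stoichiometry of metabolite A
c = np.array([0.0, -1.0, 0.0])               # linprog minimises, so -v_biomass
bounds = [(0, 10.0), (0, None), (0, None)]   # uptake capped at 10 units

res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds, method="highs")
v_uptake, v_biomass, v_byproduct = res.x
print(f"optimal biomass flux = {v_biomass:.2f}")  # hits the uptake limit, 10

# Multi-omic extensions like METRADE additionally rescale the flux bounds
# with expression data and optimise several objectives at once.
```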

  4. EFFICIENCY OF DIFFERENT METHODOLOGICAL MODELS OF SWIMMING PRACTICE WITH PRE-SCHOOL CHILDREN

    Directory of Open Access Journals (Sweden)

    Dragan Krivokapić

    2006-06-01

    Full Text Available On a sample of 68 preschool boys and girls aged five to six years, two models of swimming instruction were implemented in order to investigate their efficacy. All subjects were previously established to be non-swimmers, and they were divided into two similar groups according to basic motor and cognitive abilities. The first model of swimming instruction, termed time-distributed learning, was carried out at an indoor swimming pool with 36 subjects who exercised twice a week for three months. The second model, termed time-concentrated learning, was carried out as a two-week course with 32 subjects who exercised at the seaside. Two control assessments of swimming knowledge were made during the experimental process, and a final assessment was made at the end of the experiment; a scaling technique was used for the assessments. Analysis of the obtained data led to the following conclusions: both models of swimming instruction were effective, and the majority of children acquired swimming knowledge. The results of the time-concentrated model were statistically significantly better than those of the time-distributed model only in the control assessments; swimming knowledge did not differ in the final assessment. This shows that the time-concentrated model is more effective at the beginning of instruction, and the time-distributed model in the later period.

  5. Parameters Investigation of Mathematical Model of Productivity for Automated Line with Availability by DMAIC Methodology

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2014-01-01

    Full Text Available Automated lines are widely applied in industry, especially for mass production with low product variety. Productivity is one of the important criteria for an automated line, as for industry in general, because it directly reflects outputs and profits. Productivity must be forecast accurately in order to meet customer demand, and the forecast is calculated using a mathematical model. A mathematical model of productivity with availability for automated lines has been introduced to express productivity in terms of a single level of reliability for stations and mechanisms. Since this model cannot reproduce actual productivity closely enough, owing to the parameters it leaves out, it must be enhanced with the loss parameters not considered in the current model. This paper presents the productivity-loss parameters investigated using the DMAIC (Define, Measure, Analyze, Improve, Control) concept and the PACE prioritization matrix (Priority, Action, Consider, Eliminate). The investigated parameters are important for further improvement of the mathematical model of productivity with availability, towards a robust mathematical model of productivity for automated lines.
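
    A hedged numeric sketch of a generic "productivity with availability" calculation for a serial line follows. It uses the textbook form (ideal rate times combined station availability) and is not necessarily the exact model the paper enhances; all figures are invented.

```python
# Serial automated line: every station failure stops the line, so station
# availabilities multiply (assuming independent failures).
CYCLE_TIME_S = 12.0                    # ideal time per part (s)
# (MTBF, MTTR) in hours for each station and the transport mechanism
stations = [(80.0, 0.5), (120.0, 0.8), (60.0, 0.4), (200.0, 1.0)]

availability = 1.0
for mtbf, mttr in stations:
    availability *= mtbf / (mtbf + mttr)

ideal_rate = 3600.0 / CYCLE_TIME_S     # parts per hour at 100% availability
actual_rate = ideal_rate * availability
print(f"availability = {availability:.3f}, output = {actual_rate:.0f} parts/h")

# Loss parameters found via DMAIC (setup, defects, micro-stops, ...) would
# enter as further multiplicative or subtractive terms on this baseline.
```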

  6. Development in methodologies for modelling of human and ecotoxic impacts in LCA

    DEFF Research Database (Denmark)

    Hauschild, Michael Zwicky; Huijbregts, Mark; Jolliet, Olivier

    2009-01-01

    …harmonized in an iterative way, removing those identified differences which were unintentional or unnecessary and thereby reducing the inter-model variation. A parsimonious (as simple as possible but as complex as needed) and transparent consensus model, USEtox™, was created containing only the most… The USEtox™ model has been used to calculate characterization factors for several thousand substances and is currently under review with the intention that it shall form the basis of the recommendations from the UNEP-SETAC Life Cycle Initiative regarding characterization of toxic impacts in Life Cycle…

  7. Towards a generic, reliable CFD modelling methodology for waste-fired grate boilers

    DEFF Research Database (Denmark)

    Rajh, Boštjan; Yin, Chungen; Samec, Niko

    Computational Fluid Dynamics (CFD) is increasingly used in industry for detailed understanding of the combustion process and for appropriate design and optimization of Waste-to-Energy (WtE) plants. In this paper, CFD modelling of waste wood combustion in a 13 MW grate-fired boiler in a WtE plant… …the appropriate inlet boundary condition for the freeboard 3D CFD simulation. Additionally, a refined WSGGM (weighted sum of gray gases model) of greater accuracy, completeness and applicability is proposed and implemented into the CFD model via user-defined functions (UDF) to better address the impacts…
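
    The WSGGM mentioned above expresses total gas emissivity as a weighted sum of a few gray gases plus a transparent one. A generic sketch follows; the weights and absorption coefficients are placeholders, not the refined model's fitted values (real models also make the weights temperature-dependent).

```python
# Generic weighted-sum-of-gray-gases emissivity:
#   eps = sum_i a_i * (1 - exp(-k_i * p * L))
import math

# (weight a_i, absorption coefficient k_i in 1/(atm*m)) for 3 gray gases
GRAY_GASES = [(0.35, 0.45), (0.25, 4.0), (0.15, 50.0)]  # hypothetical values

def wsgg_emissivity(p_atm: float, path_m: float) -> float:
    """Total emissivity over a mean beam length for partial pressure p."""
    pl = p_atm * path_m
    return sum(a * (1.0 - math.exp(-k * pl)) for a, k in GRAY_GASES)

# e.g. combustion products at 0.25 atm H2O+CO2 over a 3 m beam length
print(f"eps = {wsgg_emissivity(0.25, 3.0):.3f}")
```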

  8. Hyperalgesia in a human model of acute inflammatory pain: a methodological study

    DEFF Research Database (Denmark)

    Pedersen, J L; Kehlet, H

    1998-01-01

    …sensitive to heat pain on their left side (P… The measurements themselves evoked hyperalgesia to heat and mechanical stimuli on the arm, but only to mechanical stimuli on the legs, including secondary hyperalgesia. Hyperalgesia evoked by the measurements was significantly less intense than that induced by injury. Habituation to the painful stimuli… was demonstrated by significantly higher pain thresholds and lower pain responses on the second and third day of the study. The burn model is a sensitive psychophysical model of acute inflammatory pain when cross-over designs and within-day comparisons are used, and the model is suitable for double-blind, placebo…

  9. Study on the methodology for hydrogeological site descriptive modelling by discrete fracture networks

    International Nuclear Information System (INIS)

    Tanaka, Tatsuya; Ando, Kenichi; Hashimoto, Shuuji; Saegusa, Hiromitsu; Takeuchi, Shinji; Amano, Kenji

    2007-01-01

    This study aims to establish comprehensive techniques for site descriptive modelling that account for the hydraulic heterogeneity due to water-conducting features (WCFs) in fractured rocks. The WCFs were defined by interpreting and integrating geological and hydrogeological data obtained from the deep borehole investigation campaign in the Mizunami URL project and the Regional Hydrogeological Study. As a result of the surface-based investigation phase, a block-scale hydrogeological descriptive model was generated using hydraulic discrete fracture networks. Uncertainties and remaining issues associated with the assumptions made in interpreting the data and in the modelling were addressed in a systematic way. (author)

  10. Development and application of compact models of packages based on DELPHI methodology

    CERN Document Server

    Parry, J; Shidore, S

    1997-01-01

    The accurate prediction of the temperatures of critical electronic parts at the package-, board- and system-level is seriously hampered by the lack of reliable, standardised input data for the characterisation of the thermal behaviour of these parts. The recently completed collaborative European project, DELPHI, has been concerned with the creation and experimental validation of thermal models (both detailed and compact) of a range of electronic parts, including mono-chip packages. This paper demonstrates the reliable performance of thermal compact models in a range of applications, by comparison with the detailed models from which they were derived. (31 refs).
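
    Compact models of this kind reduce a detailed package model to a small network of thermal resistances. The sketch below solves the simplest star-shaped example (two heat-flow paths from the junction, combined in series/parallel); the resistance values are illustrative, not validated DELPHI data.

```python
# Two-path compact thermal model: junction -> case -> ambient and
# junction -> board -> ambient, solved by series/parallel combination.
P_W = 2.0          # dissipated power (W)
T_AMB = 25.0       # ambient temperature (degC)

R_JC, R_CA = 8.0, 15.0    # junction->case, case->ambient (K/W), hypothetical
R_JB, R_BA = 20.0, 30.0   # junction->board, board->ambient (K/W), hypothetical

r_top = R_JC + R_CA                              # series along each path
r_bottom = R_JB + R_BA
r_total = 1.0 / (1.0 / r_top + 1.0 / r_bottom)   # paths act in parallel

t_junction = T_AMB + P_W * r_total
print(f"T_junction = {t_junction:.1f} degC")

# Full DELPHI models use more surface nodes and fit the resistances so the
# compact network reproduces the detailed model across boundary conditions.
```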

  11. Analysis of Wind Turbine Simulation Models: Assessment of Simplified versus Complete Methodologies: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Honrubia-Escribano, A.; Jimenez-Buendia, F.; Molina-Garcia, A.; Fuentes-Moreno, J. A.; Muljadi, Eduard; Gomez-Lazaro, E.

    2015-09-14

    This paper presents the current status of simplified wind turbine models used for power system stability analysis. It is based on ongoing work within IEC 61400-27. This international standard, for which a technical committee was convened in October 2009, focuses on defining generic (also known as simplified) simulation models for both wind turbines and wind power plants. The results of the paper provide an improved understanding of the usability of generic models for conducting power system simulations.

  12. A GIS-based methodology for drought vulnerability modelling: application at the region of el Hodna, central Algeria

    Directory of Open Access Journals (Sweden)

    Meriem Boultif

    2017-06-01

    Full Text Available Boultif, M. and Benmessaoud, H. 2017. A GIS-based methodology for drought vulnerability modelling: application at the region of El Hodna, central Algeria. Lebanese Science Journal, 18(1): 53-72. Desert covers 80% of the Algerian territory, while the remaining area is covered by Mediterranean forests and arid-climate steppe characterized by severe vulnerability to stresses such as drought, especially with increasing harmful human impact and the overuse of natural resources. The objective of this study is to analyse and assess drought vulnerability in the El Hodna area of central Algeria. The methodology is based on GIS tools and multi-criteria analysis (Analytical Hierarchy Process) to develop a vulnerability-mapping model. The results showed that 35.67% of the study area is very vulnerable, 32.77% is in a fragile situation, 19.72% is potentially vulnerable, and only 11.83% of the surface is not affected. The drought-vulnerability map provides a basis from which it will be possible to prevent and prepare for a drought response.
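
    The AHP step of such a GIS workflow derives criterion weights from a pairwise comparison matrix via its principal eigenvector, with Saaty's consistency check; the vulnerability map is then the weighted overlay of the criterion layers. The three-criterion matrix below is invented for illustration.

```python
# AHP criterion weights from a pairwise comparison matrix.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])        # hypothetical pairwise judgements

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # normalised criterion weights

# Saaty consistency check: CI = (lambda_max - n)/(n - 1), CR = CI/RI
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58                         # RI = 0.58 for n = 3 (Saaty's table)
print(f"weights = {np.round(w, 3)}, CR = {cr:.3f}")  # CR < 0.1 is acceptable

# Vulnerability is then mapped as the weighted overlay sum_i w_i * layer_i.
```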

  13. Computing elastic-rebound-motivated earthquake probabilities in unsegmented fault models: a new methodology supported by physics-based simulators

    Science.gov (United States)

    Field, Edward H.

    2015-01-01

    A methodology is presented for computing elastic-rebound-based probabilities in an unsegmented fault or fault system, which involves computing along-fault averages of renewal-model parameters. The approach is less biased and more self-consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude-dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long-term system behavior, which is generally found to be consistent with that of physics-based earthquake simulators. Results cast doubt on whether recurrence-interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long-term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first-order elastic-rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
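
    The core renewal-model calculation behind such probabilities is the conditional chance of rupture in the coming ΔT years given the time already elapsed since the last event. In the sketch below a lognormal distribution stands in for the renewal model, and all parameters are illustrative; the paper's methodology additionally averages such parameters along fault sections.

```python
# Conditional rupture probability under a lognormal renewal model:
#   P(rupture in [t, t+dt] | quiet through t) = (F(t+dt) - F(t)) / (1 - F(t))
import numpy as np
from scipy.stats import lognorm

MEAN_RI = 150.0      # mean recurrence interval (years), hypothetical
APERIODICITY = 0.5   # coefficient of variation (alpha), hypothetical

# Lognormal parameterised to have the requested mean and COV
sigma = np.sqrt(np.log(1.0 + APERIODICITY**2))
scale = MEAN_RI / np.sqrt(1.0 + APERIODICITY**2)   # = exp(mu)
dist = lognorm(s=sigma, scale=scale)

def cond_prob(t_elapsed: float, dt: float) -> float:
    F = dist.cdf
    return (F(t_elapsed + dt) - F(t_elapsed)) / (1.0 - F(t_elapsed))

for t in (50, 150, 250):
    print(f"t = {t:3d} yr: 30-yr probability = {cond_prob(t, 30.0):.3f}")
# Probability grows with elapsed time: the elastic-rebound signature.
```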

  14. A methodology proposal for collaborative business process elaboration using a model-driven approach

    Science.gov (United States)

    Mu, Wenxin; Bénaben, Frédérick; Pingaud, Hervé

    2015-05-01

    Business process management (BPM) principles are commonly used to improve processes within an organisation. But they can equally be applied to supporting the design of an Information System (IS). In a collaborative situation involving several partners, this type of BPM approach may be useful to support the design of a Mediation Information System (MIS), which would ensure interoperability between the partners' ISs (which are assumed to be service oriented). To achieve this objective, the first main task is to build a collaborative business process cartography. The aim of this article is to present a method for bringing together collaborative information and elaborating collaborative business processes from the information gathered (by using a collaborative situation framework, an organisational model, an informational model, a functional model and a metamodel and by using model transformation rules).

  15. A Methodology to Adapt Photogrammetric Models to Virtual Reality for Oculus Gear VR

    Science.gov (United States)

    Colmenero Fdez, A.

    2017-11-01

    In this paper, we present the process of adapting a high-resolution model (laser scanning and photogrammetry) into a virtual reality application for mobile phones. It is a virtual archeology project carried out on the site of Lugo's Mitreo, Spain.

  16. The methodology of choice Cam-Clay model parameters for loess subsoil

    Science.gov (United States)

    Nepelski, Krzysztof; Błazik-Borowa, Ewa

    2018-01-01

    The paper deals with the calibration method of an FEM subsoil model described by the constitutive Cam-Clay model. A four-storey residential building and the solid substrate are modelled. Identification of the substrate is made using research drilling, CPT static tests, the DMT Marchetti dilatometer, and laboratory tests. The latter are performed on intact soil specimens taken from a wide planning trench at the foundation depth. The real building settlements were measured as the vertical displacement of benchmarks; these measurements were carried out periodically during the erection of the building and its operation. Initially, the Cam-Clay model parameters were determined on the basis of the laboratory tests; later, they were corrected by taking into consideration the results of numerical analyses (of the whole building and its parts) and the real building settlements.
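
    A small worked conversion often used when seeding Cam-Clay parameters from oedometer data: the log10-based compression and swelling indices Cc and Cs map to the model's λ and κ by division by ln 10, and the critical-state slope M follows from the effective friction angle. The soil values below are illustrative, not the paper's measured loess data.

```python
# First-estimate Cam-Clay parameters from standard laboratory indices.
import math

Cc = 0.12   # compression index from the oedometer curve (hypothetical)
Cs = 0.02   # swelling (recompression) index (hypothetical)

lam = Cc / math.log(10)      # Cam-Clay lambda (natural-log slope)
kappa = Cs / math.log(10)    # Cam-Clay kappa
print(f"lambda = {lam:.4f}, kappa = {kappa:.4f}")

# Critical-state slope from the effective friction angle (triaxial
# compression): M = 6 sin(phi') / (3 - sin(phi'))
phi = math.radians(28.0)     # hypothetical effective friction angle
M = 6 * math.sin(phi) / (3 - math.sin(phi))
print(f"M = {M:.3f}")
```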

  17. Comparing photo modeling methodologies and techniques: the instance of the Great Temple of Abu Simbel

    Directory of Open Access Journals (Sweden)

    Sergio Di Tondo

    2013-10-01

    Full Text Available Fifty years after the salvage of the Abu Simbel temples, it has been possible to experiment with contemporary photo-modeling tools, starting from the original data of the photogrammetric survey carried out in the 1950s. This prompted a reflection on image-based methods and modeling techniques, comparing strict 3D digital photogrammetry with the latest Structure from Motion (SFM) systems. The topographic survey data, the original photogrammetric stereo pairs, the point coordinates and their representation in contour lines made it possible to obtain a model of the monument in its configuration before the relocation of the temples. The impossibility of carrying out a direct survey led to the use of tourist photographs to create SFM models for geometric comparisons.

  18. Efficient uncertainty quantification methodologies for high-dimensional climate land models

    Energy Technology Data Exchange (ETDEWEB)

    Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Berry, Robert Dan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Debusschere, Bert J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2011-11-01

    In this report, we proposed, examined and implemented approaches for performing efficient uncertainty quantification (UQ) in climate land models. Specifically, we applied a Bayesian compressive sensing framework to polynomial chaos spectral expansions, enhanced it with an iterative basis-reduction algorithm, and investigated the results on test models as well as on the Community Land Model (CLM). Furthermore, we discussed the construction of efficient quadrature rules for the forward propagation of uncertainties from a high-dimensional, constrained input space to output quantities of interest. The work lays the groundwork for efficient forward UQ for high-dimensional, strongly non-linear and computationally costly climate models. Moreover, to investigate parameter inference approaches, we applied two variants of the Markov chain Monte Carlo (MCMC) method to a soil moisture dynamics submodel of the CLM. The evaluation of these algorithms gave us a good foundation for further building out the Bayesian calibration framework towards the goal of robust component-wise calibration.
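
    The report's combination of polynomial chaos with compressive sensing can be illustrated compactly: recover a sparse Legendre expansion of a toy model from few runs using an l1-penalised regression (Lasso here stands in for the Bayesian variant). Basis, sample sizes and penalty are assumptions for demonstration.

```python
# Sparse polynomial-chaos surrogate via l1 regression on a toy "model".
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
P1 = lambda x: x                      # Legendre polynomials on [-1, 1]
P2 = lambda x: 0.5 * (3 * x**2 - 1)

def pc_basis(X):
    """Total-order-2 Legendre basis in two inputs."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), P1(x1), P1(x2),
                            P2(x1), P2(x2), P1(x1) * P1(x2)])

X = rng.uniform(-1, 1, size=(40, 2))             # 40 "model runs"
y = 1.0 + 2.0 * P1(X[:, 0]) + 0.5 * P2(X[:, 1])  # sparse true expansion
y += rng.normal(0, 0.01, len(y))                 # observation noise

fit = Lasso(alpha=1e-3, fit_intercept=False).fit(pc_basis(X), y)
print(np.round(fit.coef_, 2))   # recovers approximately [1, 2, 0, 0, 0.5, 0]
```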

  19. Development of Customer Knowledge Management (CKM) Models in Purbalingga Hospitality Using Soft Systems Methodology (SSM)

    OpenAIRE

    Chasanah, Nur; Sensuse, Dana Indra; Lusa, Jonathan Sofian

    2014-01-01

    Development of the tourism sector is part of the national development efforts being implemented in Indonesia. This research was conducted to produce an overview of customer knowledge management models that address the existing problems of the hospitality sector in Purbalingga, in support of Purbalingga tourism. The model depicts a series of problem-solving activities for the hospitality sector, especially in Purbalingga. This research was action research using the methods of Soft Systems Methodology (SSM)…

  20. BWR MARK I pressure suppression pool mixing and stratification analysis using GOTHIC lumped parameter modeling methodology

    International Nuclear Information System (INIS)

    Ozdemir, Ozkan Emre; George, Thomas L.

    2015-01-01

    As a part of the GOTHIC (GOTHIC incorporates technology developed for the electric power industry under the sponsorship of EPRI.) Fukushima Technical Evaluation project (EPRI, 2014a, b, 2015), GOTHIC (EPRI, 2014c) has been benchmarked against test data for pool stratification (EPRI, 2014a, b; Ozdemir and George, 2013). These tests confirmed GOTHIC's ability to simulate pool mixing and stratification under a variety of anticipated suppression pool operating conditions. Multidimensional modeling requires long simulation times for events that occur over a period of hours or days; for these scenarios a lumped model of the pressure suppression chamber is desirable to maintain reasonable simulation times. However, a lumped model of the pool cannot predict the effects of pool stratification that can influence the overall containment response. The main objective of this work is the development of a correlation that can be used to estimate pool mixing and stratification effects in a lumped modeling approach. A simplified lumped GOTHIC model was constructed that includes a two-zone model of the suppression pool with controlled circulation between the upper and lower zones; a pump and associated flow connections provide mixing between the upper and lower pool volumes. Using numerically generated data from a multidimensional GOTHIC model of the suppression pool, a correlation was developed for the mixing rate between the upper and lower pool volumes in the two-zone lumped model. The mixing rate depends on the pool subcooling, the steam injection rate and the injection depth.
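
    The two-zone lumped scheme described above can be sketched as a pair of energy balances exchanged by a mixing flow, with steam injection heating the lower zone. The mixing-rate "correlation" below is a crude invented stand-in; the paper's contribution is precisely a data-driven correlation in subcooling, injection rate and depth. All numbers are illustrative.

```python
# Two-zone lumped pool: upper/lower water volumes coupled by a mixing flow.
CP = 4186.0                 # water specific heat (J/kg/K)
M_UP, M_LOW = 1.0e5, 1.5e5  # zone masses (kg), hypothetical
Q_STEAM = 2.0e6             # heat deposited by steam in lower zone (W)

def mixing_rate(subcooling_K: float) -> float:
    """Hypothetical stand-in correlation: more subcooling, stronger mixing."""
    return 50.0 + 10.0 * subcooling_K          # kg/s

t_up, t_low, dt = 30.0, 30.0, 1.0              # degC, time step (s)
for _ in range(3600):                          # integrate one hour
    w = mixing_rate(max(100.0 - t_low, 0.0))
    dT_low = (Q_STEAM / CP + w * (t_up - t_low)) / M_LOW
    dT_up = w * (t_low - t_up) / M_UP
    t_low += dT_low * dt
    t_up += dT_up * dt
print(f"after 1 h: T_upper = {t_up:.1f} degC, T_lower = {t_low:.1f} degC")
```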