WorldWideScience

Sample records for modeling approach results

  1. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    Science.gov (United States)

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

    Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to the analysis of medical time series data: (1) classical statistical approaches, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three methods (Kaplan-Meier, Cox regression, and dynamic Bayesian networks). In addition, our analysis discusses several other aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach (1) is much more flexible in terms of modeling effort, and (2) offers individualized risk assessment, which is more cumbersome to obtain with classical statistical approaches.
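
    As a minimal sketch of the classical baseline named above, the following implements the Kaplan-Meier (product-limit) estimator; the survival times and event flags are hypothetical, and the code is illustrative rather than the study's implementation.

        # Minimal Kaplan-Meier (product-limit) estimator; data hypothetical.
        import numpy as np

        def kaplan_meier(times, events):
            """times: follow-up time per subject; events: 1 = event, 0 = censored.
            Returns (distinct event times, survival probabilities)."""
            times = np.asarray(times, dtype=float)
            events = np.asarray(events, dtype=int)
            s = 1.0
            out_t, out_s = [], []
            for t in np.unique(times[events == 1]):        # sorted event times
                at_risk = np.sum(times >= t)               # still under observation
                d = np.sum((times == t) & (events == 1))   # events at time t
                s *= 1.0 - d / at_risk                     # product-limit update
                out_t.append(t)
                out_s.append(s)
            return out_t, out_s

        t, s = kaplan_meier([2, 3, 3, 5, 8, 8, 9], [1, 1, 0, 1, 1, 0, 1])
        print(list(zip(t, s)))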

  2. Technical note: Comparison of methane ebullition modelling approaches used in terrestrial wetland models

    Science.gov (United States)

    Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo

    2018-02-01

    Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth and release in the peat-water matrix is challenging; as a consequence these processes are relatively poorly known and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. All the approaches were based on a threshold: either on CH4 pore-water concentration (ECT), pressure (EPT) or free-phase gas volume (EBG). The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes. Modelled annual CH4 emissions were largely unaffected by the different ebullition modelling approaches; however, temporal variability in CH4 emissions varied by an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily scale) model comparison and calibration against measurements. The modelling approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed best with the measured fluxes (R2 = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and the model results (single horizontally homogeneous peat column). This approach should be favoured over the two other, more widely used ebullition modelling approaches, and researchers are encouraged to implement it into their CH4 emission models.
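
    As a sketch of the simplest of the three threshold ideas, the following applies an ECT-style concentration threshold to a pore-water CH4 profile; the threshold, release fraction, and profile values are all hypothetical.

        # Concentration-threshold (ECT-style) ebullition sketch; values hypothetical.
        import numpy as np

        def ebullition_ect(ch4_conc, threshold=0.5, release_fraction=0.1):
            """Release a fraction of pore-water CH4 above a concentration threshold.
            ch4_conc: CH4 concentration per peat layer (mol m-3)."""
            excess = np.maximum(ch4_conc - threshold, 0.0)
            flux = release_fraction * excess        # bubble flux leaving each layer
            return ch4_conc - flux, flux.sum()      # updated profile, column ebullition

        profile = np.array([0.2, 0.6, 0.9])          # hypothetical peat-column profile
        profile, column_flux = ebullition_ect(profile)
        print(profile, column_flux)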

  3. CFD approach to modelling, hydrodynamic analysis and motion characteristics of a laboratory underwater glider with experimental results

    Directory of Open Access Journals (Sweden)

    Yogang Singh

    2017-06-01

    Underwater gliders are buoyancy-propelled vehicles that use buoyancy for vertical movement and wings to generate forward motion. Autonomous underwater gliders are a patented technology and are manufactured and marketed by corporations. In this study, we validate the experimental lift and drag characteristics of a glider from the literature using a computational fluid dynamics (CFD) approach. This approach is then used to assess the steady-state characteristics of a laboratory glider designed at the Indian Institute of Technology (IIT) Madras. Flow behaviour and the lift and drag force distribution at different angles of attack are studied for Reynolds numbers varying from 10^5 to 10^6 for NACA0012 wing configurations. The state variables of the glider are the velocity, gliding angle and angle of attack, which are simulated using the hydrodynamic drag and lift coefficients obtained from CFD. The effect of variable buoyancy is examined in terms of the gliding angle, velocity and angle of attack. A laboratory model of the glider is developed from the final design confirmed by CFD. This model is used to determine the static and dynamic properties of the glider, which were validated against an equivalent CAD model and against simulation results obtained from the equations of motion of the glider in the vertical plane, respectively. In the literature, only empirical approaches have been adopted to estimate the hydrodynamic coefficients of the AUG that are required for its trajectory simulation. In this work, a CFD approach has been proposed to estimate the hydrodynamic coefficients and validated with experimental data. A two-mass variable buoyancy engine has been designed and implemented. The equations of motion for this two-mass engine have been obtained by modifying the single-mass version of the equations described in the literature. The objectives of the present study are to understand the glider dynamics by adopting a CFD approach
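
    The steady-glide relations implied by the state variables above follow from a force balance in which net buoyancy carries the resultant of lift and drag; a minimal sketch, with all coefficients and dimensions hypothetical.

        # Steady-glide sketch from a lift/drag force balance; values hypothetical.
        import math

        def steady_glide(B, rho, A, CL, CD):
            """Net buoyancy B (N) balances the resultant hydrodynamic force.
            Returns glide-path angle (deg from horizontal) and speed (m/s)."""
            gamma = math.atan2(CD, CL)               # tan(gamma) = D/L = CD/CL
            V = math.sqrt(2.0 * B / (rho * A * math.hypot(CL, CD)))
            return math.degrees(gamma), V

        print(steady_glide(B=5.0, rho=1000.0, A=0.03, CL=0.6, CD=0.05))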

  4. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    International Nuclear Information System (INIS)

    Gustafson Jr., William I; Berg, Larry K; Easter, Richard C; Ghan, Steven J

    2008-01-01

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization.

  5. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson Jr., William I; Berg, Larry K; Easter, Richard C; Ghan, Steven J [Atmospheric Science and Global Change Division, Pacific Northwest National Laboratory, PO Box 999, MSIN K9-30, Richland, WA (United States)], E-mail: William.Gustafson@pnl.gov

    2008-04-15

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization.
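
    A schematic single-column illustration of the general idea described above, tracer transport driven by a prescribed cloud mass-flux profile with compensating environmental subsidence; this is not the ECPP code, and the profiles and discretization are hypothetical.

        # Schematic mass-flux tracer transport in a single column; values hypothetical.
        import numpy as np

        def mass_flux_step(q, M, rho, dz, dt):
            """q: tracer mixing ratio per level (index 0 = surface); M: updraft mass
            flux per level (kg m-2 s-1); rho: air density per level (kg m-3)."""
            w_sub = M / rho                              # compensating subsidence (m/s)
            dq = np.zeros_like(q)
            # downward advection of environmental air (upwind difference)
            dq[:-1] += w_sub[:-1] * (q[1:] - q[:-1]) / dz * dt
            # updraft lifts surface-layer air and detrains it at the top level
            dq[-1] += M[-1] / (rho[-1] * dz) * (q[0] - q[-1]) * dt
            return q + dq

        q = np.array([1.0, 0.2, 0.0])                    # hypothetical tracer profile
        M = np.array([0.05, 0.05, 0.05])                 # hypothetical mass-flux profile
        rho = np.array([1.1, 1.0, 0.9])
        print(mass_flux_step(q, M, rho, dz=500.0, dt=60.0))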

  6. Seeking for the rational basis of the Median Model: the optimal combination of multi-model ensemble results

    Directory of Open Access Journals (Sweden)

    A. Riccio

    2007-12-01

    In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides.

    We first introduce the theoretical basis (with its roots sinking into the Bayes theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results, supporting the heuristic approach called "median model", originally introduced in Galmarini et al. (2004a, b).

    This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.
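
    A minimal sketch of the median-model heuristic referenced above: the combined prediction is the pointwise median across ensemble members, which is robust to a single outlying model; the forecast values are hypothetical.

        # Median-model combination of a multi-model ensemble; values hypothetical.
        import numpy as np

        forecasts = np.array([        # rows: models, columns: receptor locations
            [1.2, 0.0, 3.5],
            [0.8, 0.1, 2.9],
            [5.0, 0.0, 3.1],          # one outlying member
        ])
        median_model = np.median(forecasts, axis=0)   # robust to the outlier
        ensemble_mean = forecasts.mean(axis=0)        # pulled toward the outlier
        print(median_model, ensemble_mean)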

  7. An overview of CFD modelling of small-scale fixed-bed biomass pellet boilers with preliminary results from a simplified approach

    International Nuclear Information System (INIS)

    Chaney, Joel; Liu Hao; Li Jinxing

    2012-01-01

    Highlights: ► Overview of the overall approach of modelling fixed-bed biomass boilers in CFD. ► Bed sub-models of moisture evaporation, devolatilisation and char combustion reviewed. ► A method of embedding a combustion model in discrete fuel zones within the CFD is suggested. ► Includes a sample of preliminary results for a 50 kW pellet boiler. ► Clear physical trends predicted. - Abstract: The increasing global energy demand and mounting pressures for CO2 mitigation call for increased efficient utilization of biomass, particularly for heating domestic and commercial buildings. The authors of the present paper are investigating the optimization of the combustion performance and NOx emissions of a 50 kW biomass pellet boiler fabricated by a UK manufacturer. The boiler has a number of adjustable parameters, including the ratio of the air flow split between the primary and secondary supplies and the orientation, height, direction and number of the secondary inlets. The optimization of these parameters provides opportunities to improve both combustion efficiency and NOx emissions. When used carefully in conjunction with experiments, Computational Fluid Dynamics (CFD) modelling is a useful tool for examining, rapidly and at minimum cost, the combustion performance and emissions of a boiler with multiple variable parameters. However, modelling the combustion and emissions of a small-scale biomass pellet boiler is not trivial, and appropriate fixed-bed models that can be coupled with the CFD code are required. This paper reviews previous approaches specifically relevant to simulating fixed-bed biomass boilers. In the first part it considers approaches to modelling the heterogeneous solid phase and coupling this with the gas phase. The essential components of the sub-models are then overviewed. Importantly, for the optimization process a model is required that has a good balance between accuracy in predicting physical trends and low computational run time. Finally, a

  8. Development of CANDU prototype fuel handling simulator - concept and some simulation results with physical network modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Xu, X.P. [Candu Energy Inc, Mississauga, Ontario (Canada)]

    2012-07-01

    This paper reviewed the need for a fuel handling (FH) simulator in training operators and maintenance personnel, in FH design enhancement based on operating experience (OPEX), and the potential application of Virtual Reality (VR) based simulation technology. Modeling and simulation of the fuelling machine (FM) magazine drive plant (one of the CANDU FH sub-systems) was described. The work established the feasibility of modeling and simulating a physical FH drive system using the physical network approach and computer software tools. The concept and approach can be applied similarly to create the other FH subsystem plant models, which are expected to be integrated with control modules to develop a master FH control model and, further, to create a virtual FH system. (author)

  9. Development of CANDU prototype fuel handling simulator - concept and some simulation results with physical network modeling approach

    International Nuclear Information System (INIS)

    Xu, X.P.

    2012-01-01

    This paper reviewed the need for a fuel handling (FH) simulator in training operators and maintenance personnel, in FH design enhancement based on operating experience (OPEX), and the potential application of Virtual Reality (VR) based simulation technology. Modeling and simulation of the fuelling machine (FM) magazine drive plant (one of the CANDU FH sub-systems) was described. The work established the feasibility of modeling and simulating a physical FH drive system using the physical network approach and computer software tools. The concept and approach can be applied similarly to create the other FH subsystem plant models, which are expected to be integrated with control modules to develop a master FH control model and, further, to create a virtual FH system. (author)

  10. Nonperturbative approach to the attractive Hubbard model

    International Nuclear Information System (INIS)

    Allen, S.; Tremblay, A.-M. S.

    2001-01-01

    A nonperturbative approach to the single-band attractive Hubbard model is presented in the general context of functional-derivative approaches to many-body theories. As in previous work on the repulsive model, the first step is based on a local-field-type ansatz, with enforcement of the Pauli principle and a number of crucial sum rules. The Mermin-Wagner theorem in two dimensions is automatically satisfied. At this level, two-particle self-consistency has been achieved. In the second step of the approximation, an improved expression for the self-energy is obtained by using the results of the first step in an exact expression for the self-energy, where the high- and low-frequency behaviors appear separately. The result is a cooperon-like formula. The required vertex corrections are included in this self-energy expression, as required by the absence of a Migdal theorem for this problem. Other approaches to the attractive Hubbard model are critically compared. Physical consequences of the present approach and agreement with Monte Carlo simulations are demonstrated in the accompanying paper (following this one).
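
    For reference, the single-band attractive Hubbard Hamiltonian discussed above, in standard notation (hopping t, on-site interaction U < 0 for attraction, n the number operator):

        H = -t \sum_{\langle i,j\rangle,\sigma} \left( c_{i\sigma}^{\dagger} c_{j\sigma} + \mathrm{h.c.} \right)
            + U \sum_{i} n_{i\uparrow} n_{i\downarrow}, \qquad U < 0,
        \qquad n_{i\sigma} = c_{i\sigma}^{\dagger} c_{i\sigma}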

  11. Mathematical Modeling Approaches in Plant Metabolomics.

    Science.gov (United States)

    Fürtauer, Lisa; Weiszmann, Jakob; Weckwerth, Wolfram; Nägele, Thomas

    2018-01-01

    The experimental analysis of a plant metabolome typically results in a comprehensive and multidimensional data set. To interpret metabolomics data in the context of biochemical regulation and environmental fluctuation, various approaches of mathematical modeling have been developed and have proven useful. In this chapter, a general introduction to mathematical modeling is presented and discussed in the context of plant metabolism. A particular focus is placed on the suitability of mathematical approaches to functionally integrate plant metabolomics data in a metabolic network and combine them with other biochemical or physiological parameters.

  12. Evaporator modeling - A hybrid approach

    International Nuclear Information System (INIS)

    Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun

    2009-01-01

    In this paper, a hybrid modeling approach is proposed to model two-phase flow evaporators. The main procedure for hybrid modeling includes: (1) formulate the fundamental governing equations of the process from energy and material balances and thermodynamic principles; (2) select measurable and controllable input/output (I/O) variables responsible for the system performance; (3) represent variables that appear in the original equations but are not measurable as simple functions of the selected I/Os or as constants; (4) obtain a single equation that correlates system inputs and outputs; and (5) identify unknown parameters by linear or nonlinear least-squares methods. The method takes advantage of both physical and empirical modeling approaches, can accurately predict performance over a wide operating range and in real time, significantly reduces the computational burden, and increases prediction accuracy. The model is verified with experimental data taken from a testing system. The testing results show that the proposed model can accurately predict the performance of the real-time operating evaporator with a maximum error of ±8%. The developed models will have wide applications in operational optimization, performance assessment, and fault detection and diagnosis.
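
    Step (5) above reduces to ordinary least squares when the single correlating equation is linear in the unknown parameters; a minimal sketch with hypothetical I/O data.

        # Linear least-squares parameter identification sketch; data hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        u = rng.uniform(0.0, 1.0, size=(50, 2))              # measured inputs
        theta_true = np.array([2.0, -0.5])                   # unknown parameters
        y = u @ theta_true + 0.01 * rng.standard_normal(50)  # measured output

        theta_hat, *_ = np.linalg.lstsq(u, y, rcond=None)    # identification step
        print(theta_hat)                                     # close to [2.0, -0.5]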

  13. METHODOLOGICAL APPROACHES FOR MODELING THE RURAL SETTLEMENT DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Gorbenkova Elena Vladimirovna

    2017-10-01

    Subject: the paper describes research results on the validation of a rural settlement development model. The basic methods and approaches for solving the problem of assessing the efficiency of urban and rural settlement development are considered. Research objectives: determination of methodological approaches to modeling and creating a model for the development of rural settlements. Materials and methods: domestic and foreign experience in modeling the territorial development of urban and rural settlements and settlement structures was generalized. The motivation for using the Pentagon-model for solving similar problems was demonstrated. Based on a systematic analysis of existing development models of urban and rural settlements, as well as the authors' method for assessing the level of agro-town development, the systems/factors that are necessary for the sustainable development of a rural settlement are identified. Results: we created a rural development model which consists of five major systems that include critical factors essential for achieving the sustainable development of a settlement system: the ecological, economic, administrative, anthropogenic (physical) and social (supra-structure) systems. The methodological approaches for creating an evaluation model of rural settlement development were revealed; the basic motivating factors that provide the interrelations of the systems were determined; the critical factors for each subsystem were identified and substantiated. Such an approach is justified by the composition of territorial planning tasks at the local and state administration levels. The feasibility of applying the basic Pentagon-model, which has been successfully used for solving analogous problems of sustainable development, was shown. Conclusions: the resulting model can be used for identifying and substantiating the critical factors for rural sustainable development and also become the basis of

  14. Hand-assisted Approach as a Model to Teach Complex Laparoscopic Hepatectomies: Preliminary Results.

    Science.gov (United States)

    Makdissi, Fabio F; Jeismann, Vagner B; Kruger, Jaime A P; Coelho, Fabricio F; Ribeiro-Junior, Ulysses; Cecconello, Ivan; Herman, Paulo

    2017-08-01

    Currently, models for teaching complex laparoscopic liver resections are limited and scarce. The aim of this study is to present a hand-assisted technique for teaching complex laparoscopic hepatectomies to fellows in liver surgery. A laparoscopic hand-assisted approach for resection of liver lesions located in posterosuperior segments (7, 6/7, 7/8, 8) was performed by the trainees with guidance and intermittent intervention of a senior surgeon. The following data were evaluated: (1) percentage of time that the senior surgeon acted as main surgeon; (2) need for the senior surgeon to finish the procedure; (3) necessity of conversion; (4) bleeding with hemodynamic instability; (5) need for transfusion; (6) oncological surgical margins. In total, 12 complex laparoscopic liver resections were performed by the trainees. All cases included deep lesions situated in liver segments 7 or 8. Senior surgeon intervention occurred for a mean of 20% of the total surgical time (range, 0% to 50%). A senior intervention >20% was necessary in 2 cases. There was no need for conversion or reoperation. Neither major bleeding nor complications resulted from the teaching program. All surgical margins were clear. This preliminary report shows that hand-assistance is a safe way to teach complex liver resections without compromising patient safety or oncological results. More cases are still necessary to draw definitive conclusions about this teaching method.

  15. A piecewise modeling approach for climate sensitivity studies: Tests with a shallow-water model

    Science.gov (United States)

    Shao, Aimei; Qiu, Chongjian; Niu, Guo-Yue

    2015-10-01

    In model-based climate sensitivity studies, model errors may grow during continuous long-term integrations in both the "reference" and "perturbed" states, and hence in the climate sensitivity (defined as the difference between the two states). To reduce the errors, we propose a piecewise modeling approach that splits the continuous long-term simulation into subintervals of sequential short-term simulations and updates the modeled states through re-initialization at the end of each subinterval. In the re-initialization process, this approach updates the reference state with analysis data and updates the perturbed state with the sum of the analysis data and the difference between the perturbed and reference states, thereby improving the credibility of the modeled climate sensitivity. We conducted a series of experiments with a shallow-water model to evaluate the advantages of the piecewise approach over the conventional continuous modeling approach. We then investigated the impacts of the analysis data error and of the subinterval length used in the piecewise approach on the simulations of the reference and perturbed states as well as the resulting climate sensitivity. The experiments show that the piecewise approach reduces the errors produced by the conventional continuous modeling approach, more effectively when the analysis data error is smaller and the subinterval length is shorter. In addition, we employed a nudging assimilation technique to solve possible spin-up problems caused by re-initialization with analysis data that contain inconsistent errors between mass and velocity. The nudging technique can effectively diminish the spin-up problem, resulting in higher modeling skill.
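
    The re-initialization rule described above (reference state reset to the analysis; perturbed state reset to analysis plus the perturbed-minus-reference difference) can be sketched with a toy one-variable model; the dynamics and analysis values are hypothetical.

        # Piecewise re-initialization sketch with a toy one-variable "model".
        def step(x, forcing=0.0):
            return x + 0.1 * (forcing - x)     # placeholder dynamics

        x_ref, x_pert = 1.0, 1.2               # reference and perturbed states
        for k in range(4):                     # subintervals
            for _ in range(10):                # short-term integration
                x_ref = step(x_ref)
                x_pert = step(x_pert, forcing=0.5)
            analysis = 0.0                     # hypothetical analysis value
            x_pert = analysis + (x_pert - x_ref)   # keep the perturbation, drop drift
            x_ref = analysis
        print(x_pert - x_ref)                  # modeled sensitivity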

  16. Eutrophication Modeling Using Variable Chlorophyll Approach

    International Nuclear Information System (INIS)

    Abdolabadi, H.; Sarang, A.; Ardestani, M.; Mahjoobi, E.

    2016-01-01

    In this study, eutrophication was investigated in Lake Ontario to identify the interactions among effective drivers. The complexity of the phenomenon was modeled using a system dynamics approach based on a consideration of constant and variable stoichiometric ratios. The system dynamics approach is a powerful tool for developing object-oriented models to simulate complex phenomena that involve feedback effects. Utilizing stoichiometric ratios is a method for converting the concentrations of state variables. For the physical segmentation of the model, Lake Ontario was divided into two layers, i.e., the epilimnion and hypolimnion, and differential equations were developed for each layer. The model structure included 16 state variables related to phytoplankton, herbivorous zooplankton, carnivorous zooplankton, ammonium, nitrate, dissolved phosphorus, and particulate and dissolved carbon in the epilimnion and hypolimnion over a time horizon of one year. Several verification tests, a Nash-Sutcliffe coefficient close to 1 (0.98), a high data correlation coefficient (0.98), and low standard errors (0.96) indicated that the model performs well. The results revealed significant differences in the concentrations of the state variables between the constant and variable stoichiometry simulations. Consequently, the consideration of variable stoichiometric ratios in algae and nutrient concentration simulations may be applied in future modeling studies to enhance the accuracy of the results and reduce the likelihood of inefficient control policies.

  17. A moving approach for the Vector Hysteron Model

    Energy Technology Data Exchange (ETDEWEB)

    Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy)]; Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy)]; Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)]; Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy)]; Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)]

    2016-04-01

    A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. Using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, providing a real improvement with respect to the previous approach.

  18. Modeling healthcare authorization and claim submissions using the openEHR dual-model approach

    Science.gov (United States)

    2011-01-01

    Background The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems and the current tools must be adapted to deal with it. Conclusions Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing processes. A complete

  19. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions, which gives rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  20. An Alternative Approach to the Extended Drude Model

    Science.gov (United States)

    Gantzler, N. J.; Dordevic, S. V.

    2018-05-01

    The original Drude model, proposed over a hundred years ago, is still used today for the analysis of optical properties of solids. Within this model, both the plasma frequency and quasiparticle scattering rate are constant, which makes the model rather inflexible. In order to circumvent this problem, the so-called extended Drude model was proposed, which allowed for the frequency dependence of both the quasiparticle scattering rate and the effective mass. In this work we will explore an alternative approach to the extended Drude model. Here, one also assumes that the quasiparticle scattering rate is frequency dependent; however, instead of the effective mass, the plasma frequency becomes frequency-dependent. This alternative model is applied to the high Tc superconductor Bi2Sr2CaCu2O8+δ (Bi2212) with Tc = 92 K, and the results are compared and contrasted with the ones obtained from the conventional extended Drude model. The results point to several advantages of this alternative approach to the extended Drude model.
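
    In standard notation, the conventional extended Drude form and the alternative explored above can be contrasted as follows (a sketch of the standard formulas; normalization conventions for the memory function vary between authors):

        \sigma(\omega) = \frac{\omega_p^{2}/4\pi}{1/\tau(\omega) - i\,\omega\, m^{*}(\omega)/m_b}
        \quad\text{(conventional extended Drude)}

        \sigma(\omega) = \frac{\omega_p^{2}(\omega)/4\pi}{1/\tau(\omega) - i\,\omega}
        \quad\text{(alternative: frequency-dependent plasma frequency)}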

  1. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust biological responses are with respect to changes in biological parameters, and about which model inputs are the key factors affecting the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models, and the caveats in the interpretation of sensitivity analysis results.
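
    A minimal sketch of the local approach described above, central finite differences around a nominal parameter set; the model and its parameters are hypothetical.

        # Local sensitivity via central finite differences; model hypothetical.
        import numpy as np

        def model(p):
            k1, k2 = p
            return k1 / (k1 + k2)              # toy steady-state response

        def local_sensitivity(f, p, h=1e-6):
            p = np.asarray(p, dtype=float)
            s = np.zeros_like(p)
            for i in range(p.size):
                dp = np.zeros_like(p)
                dp[i] = h * max(abs(p[i]), 1.0)              # scaled perturbation
                s[i] = (f(p + dp) - f(p - dp)) / (2.0 * dp[i])
            return s

        print(local_sensitivity(model, [1.0, 2.0]))   # analytic: [2/9, -1/9]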

  2. Banking Crisis Early Warning Model based on a Bayesian Model Averaging Approach

    Directory of Open Access Journals (Sweden)

    Taha Zaghdoudi

    2016-08-01

    The succession of banking crises, most of which have resulted in huge economic and financial losses, has prompted several authors to study their determinants. These authors constructed early warning models to prevent their occurrence, and it is in this same vein that our study takes its inspiration. In particular, we have developed a banking crisis warning model based on a Bayesian approach. The results of this approach allowed us to identify declining bank profitability, deterioration of the competitiveness of traditional intermediation, banking concentration, and higher real interest rates as factors involved in triggering banking crises.

  3. Percentile-Based ETCCDI Temperature Extremes Indices for CMIP5 Model Output: New Results through Semiparametric Quantile Regression Approach

    Science.gov (United States)

    Li, L.; Yang, C.

    2017-12-01

    Climate extremes often manifest as rare events, in terms of surface air temperature and precipitation, with an annual reoccurrence period. To represent the manifold characteristics of climate extremes for monitoring and analysis, the Expert Team on Climate Change Detection and Indices (ETCCDI) worked out a set of 27 core indices based on daily temperature and precipitation data, describing extreme weather and climate events on an annual basis. The CLIMDEX project (http://www.climdex.org) has produced public-domain datasets of such indices for data from a variety of sources, including output from global climate models (GCM) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). Among the 27 ETCCDI indices, there are six percentile-based temperature extremes indices that fall into two groups: exceedance rates (ER) (TN10p, TN90p, TX10p and TX90p) and durations (CSDI and WSDI). Percentiles must be estimated prior to the calculation of the indices, and could be more or less biased by the adopted algorithm. Such biases will in turn be propagated to the final results of the indices. CLIMDEX used an empirical quantile estimator combined with a bootstrap resampling procedure to reduce the inhomogeneity in the annual series of the ER indices. However, some problems remain in the CLIMDEX datasets, namely overestimated climate variability due to unaccounted autocorrelation in the daily temperature data, seasonally varying biases, and inconsistency between the algorithms applied to the ER indices and to the duration indices. We now present new results for the six indices through a semiparametric quantile regression approach for the CMIP5 model output. By using the base-period data as a whole and taking seasonality and autocorrelation into account, this approach successfully addresses the aforementioned issues and produces consistent results. The new datasets cover the historical and three projected (RCP2.6, RCP4.5 and RCP
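
    As a sketch of an exceedance-rate index, the following computes a TX90p-style percentage from calendar-day 90th percentiles; the full ETCCDI recipe additionally uses 5-day windows and the bootstrap mentioned above, and the data here are synthetic.

        # TX90p-style exceedance rate sketch; synthetic data, simplified percentiles.
        import numpy as np

        rng = np.random.default_rng(1)
        base = rng.normal(20.0, 5.0, size=(30, 365))   # 30-year daily Tmax base period
        year = rng.normal(21.0, 5.0, size=365)         # one target year

        q90 = np.percentile(base, 90.0, axis=0)        # calendar-day 90th percentile
        tx90p = 100.0 * np.mean(year > q90)            # % of days above the threshold
        print(round(tx90p, 1))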

  4. Numeric, Agent-based or System dynamics model? Which modeling approach is the best for vast population simulation?

    Science.gov (United States)

    Cimler, Richard; Tomaskova, Hana; Kuhnova, Jitka; Dolezal, Ondrej; Pscheidl, Pavel; Kuca, Kamil

    2018-02-01

    Alzheimer's disease is one of the most common mental illnesses. It is posited that more than 25% of the population is affected by some mental disease during their lifetime. Treatment of each patient draws resources from the economy concerned. Therefore, it is important to quantify the potential economic impact. Agent-based, system dynamics and numerical approaches to dynamic modeling of the population of the European Union and its patients with Alzheimer's disease are presented in this article. Simulations, their characteristics, and the results from the different modeling tools are compared. The results of these approaches are compared with EU population growth predictions from Eurostat, the statistical office of the EU. The methodology for creating the models is described and all three modeling approaches are compared. The suitability of each modeling approach for population modeling is discussed. In this case study, all three approaches gave results consistent with the EU population prediction. Moreover, we were able to predict the number of patients with AD and, depending on the modeling method, we were also able to monitor different characteristics of the population.

  5. Modeling healthcare authorization and claim submissions using the openEHR dual-model approach

    Directory of Open Access Journals (Sweden)

    Freire Sergio M

    2011-10-01

    Abstract Background The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems and the current tools must be adapted to deal with it. Conclusions Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing

  6. Multi-model approach to characterize human handwriting motion.

    Science.gov (United States)

    Chihi, I; Abdelkrim, A; Benrejeb, M

    2016-02-01

    This paper deals with the characterization and modelling of human handwriting motion from two forearm muscle activity signals, called electromyography (EMG) signals. In this work, an experimental approach was used to record the coordinates of a pen tip moving on the (x, y) plane and the EMG signals during the handwriting act. The main purpose is to design a new mathematical model that characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between the predicted results and the recorded data.
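
    A minimal sketch of the Recursive Least Squares update used to estimate sub-model parameters; the regressors and true parameters are hypothetical stand-ins for features derived from the EMG signals.

        # Recursive Least Squares (RLS) sketch; dimensions and values hypothetical.
        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.99):
            """One RLS step: regressor phi, observation y, forgetting factor lam."""
            k = P @ phi / (lam + phi @ P @ phi)       # gain vector
            theta = theta + k * (y - phi @ theta)     # parameter update
            P = (P - np.outer(k, phi @ P)) / lam      # covariance update
            return theta, P

        rng = np.random.default_rng(4)
        theta_true = np.array([0.7, -0.3])
        theta, P = np.zeros(2), 1000.0 * np.eye(2)
        for _ in range(200):
            phi = rng.standard_normal(2)              # e.g., past signal samples
            y = phi @ theta_true + 0.01 * rng.standard_normal()
            theta, P = rls_update(theta, P, phi, y)
        print(theta)                                  # close to [0.7, -0.3]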

  7. A new approach to Naturalness in SUSY models

    CERN Document Server

    Ghilencea, D M

    2013-01-01

    We review recent results that provide a new approach to the old problem of naturalness in supersymmetric models, without relying on subjective definitions for the fine-tuning associated with fixing the EW scale (to its measured value) in the presence of quantum corrections. The approach can address in a model-independent way many questions related to this problem. The results show that naturalness and its measure (fine-tuning) are an intrinsic part of the likelihood to fit the data that includes the EW scale. One important consequence is that the additional constraint of fixing the EW scale, usually not imposed in the data fits of the models, impacts on their overall likelihood to fit the data (or chi^2/ndf, where ndf is the number of degrees of freedom). This has negative implications for the viability of currently popular supersymmetric extensions of the Standard Model.

  8. High dimensions - a new approach to fermionic lattice models

    International Nuclear Information System (INIS)

    Vollhardt, D.

    1991-01-01

    The limit of high spatial dimensions d, which is well-established in the theory of classical and localized spin models, is shown to be a fruitful approach also to itinerant fermion systems, such as the Hubbard model and the periodic Anderson model. Many investigations which are prohibitively difficult in finite dimensions become tractable in d=∞. At the same time, essential features of systems in d=3 and even lower dimensions are very well described by the results obtained in d=∞. A wide range of applications of this new concept (e.g., in perturbation theory, Fermi liquid theory, variational approaches, exact results, etc.) is discussed and the state of the art is reviewed. (orig.)

  9. Polynomial Chaos Expansion Approach to Interest Rate Models

    Directory of Open Access Journals (Sweden)

    Luca Di Persio

    2015-01-01

    The Polynomial Chaos Expansion (PCE) technique allows us to recover a finite second-order random variable by exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ) method which aims at providing reliable numerical estimates for some uncertain physical quantities defining the dynamics of certain engineering models and their related simulations. In the present paper, we use the PCE approach to analyze some equity and interest rate models. In particular, we take into consideration models based on, for example, the geometric Brownian motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.
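
    A minimal sketch of the PCE projection step for a scalar standard normal germ: the target f(ξ) = exp(ξ) is the lognormal building block of geometric Brownian motion, and the coefficients are computed by Gauss-Hermite quadrature; the truncation order is arbitrary.

        # 1-D Hermite PCE of f(xi) = exp(xi), xi ~ N(0,1), by quadrature projection.
        import numpy as np
        from math import factorial
        from numpy.polynomial.hermite_e import hermegauss, hermeval

        N = 8                                    # PCE truncation order
        x, w = hermegauss(40)                    # probabilists' Gauss-Hermite rule
        w = w / w.sum()                          # normalize: expectation under N(0,1)
        f = np.exp(x)                            # target random variable samples
        # c_k = E[f(xi) He_k(xi)] / k!  (He_k orthogonal with E[He_k^2] = k!)
        coef = np.array([(w * f * hermeval(x, np.eye(N + 1)[k])).sum() / factorial(k)
                         for k in range(N + 1)])
        print(coef[0], np.exp(0.5))              # PCE mean vs exact E[exp(xi)] ~ 1.6487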

  10. A multiscale modeling approach for biomolecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)

    2015-04-15

    This paper presents a new multiscale molecular dynamics model for investigating the effects of external interactions, such as contact and impact, during the stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton's second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein's movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.

  11. Global energy modeling - A biophysical approach

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy return on investment (EROI) due to declining resource quality or the capital-intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply, especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
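
    A minimal sketch of a dynamic EROI function with technological learning, assuming a fixed gain per doubling of cumulative output; all parameter values are hypothetical.

        # Dynamic EROI with a learning-curve assumption; parameters hypothetical.
        import math

        def eroi(cumulative_output, eroi0=10.0, learning_rate=0.2):
            """EROI rises by `learning_rate` per doubling of cumulative output."""
            doublings = math.log2(max(cumulative_output, 1.0))
            return eroi0 * (1.0 + learning_rate) ** doublings

        for q in (1.0, 2.0, 4.0, 8.0):
            print(q, round(eroi(q), 2))        # 10.0, 12.0, 14.4, 17.28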

  12. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The data implemented have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited, or when many measurements are difficult or costly to obtain; the lack of data can then be compensated by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, an area containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to any other sourced DSMs.

  13. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The data implemented have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited, or when many measurements are difficult or costly to obtain; the lack of data can then be compensated by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, an area containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to any other sourced DSMs.
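
    A minimal sketch of the probabilistic core of such merging, precision-weighted (Gaussian) fusion of two height grids; the paper's full model additionally uses entropy-based priors and check points, and all values here are hypothetical.

        # Precision-weighted fusion of two DSM height grids; values hypothetical.
        import numpy as np

        z1 = np.array([[10.2, 10.4], [10.1, 10.3]])   # DSM 1 heights (m)
        z2 = np.array([[10.6, 10.5], [10.2, 10.8]])   # DSM 2 heights (m)
        v1, v2 = 0.25, 0.49                           # per-sensor height variances (m^2)

        w1, w2 = 1.0 / v1, 1.0 / v2                   # precisions as Bayesian weights
        z_merged = (w1 * z1 + w2 * z2) / (w1 + w2)    # posterior mean (Gaussian model)
        v_merged = 1.0 / (w1 + w2)                    # posterior variance
        print(z_merged, v_merged)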

  14. Experimental Validation of Various Temperature Models for Semi-Physical Tyre Model Approaches

    Science.gov (United States)

    Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia

    2017-10-01

    With the increasing level of complexity and automation in the area of automotive engineering, the simulation of safety-relevant Advanced Driver Assistance Systems (ADAS) leads to increasing accuracy demands in the description of tyre contact forces. In recent years, with improvements in tyre simulation, the need to cope with tyre temperatures and the resulting changes in tyre characteristics has risen significantly. Therefore, experimental validation of three different temperature model approaches is carried out, discussed and compared in the scope of this article. To evaluate the range of application of the presented approaches, with a view to further implementation in semi-physical tyre models, the main focus lies on a physical parameterisation. Aside from good modelling accuracy, focus is placed on computational time and the complexity of the parameterisation process. To evaluate this process and discuss the results, measurements of a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 from an industrial flat test bench are used. Finally, the simulation results are compared with the measurement data.

  15. Implementing Generalized Additive Models to Estimate the Expected Value of Sample Information in a Microsimulation Model: Results of Three Case Studies.

    Science.gov (United States)

    Rabideau, Dustin J; Pei, Pamela P; Walensky, Rochelle P; Zheng, Amy; Parker, Robert A

    2018-02-01

    The expected value of sample information (EVSI) can help prioritize research, but its application is hampered by computational infeasibility, especially for complex models. We investigated an approach by Strong and colleagues to estimate EVSI by applying generalized additive models (GAM) to results generated from a probabilistic sensitivity analysis (PSA). For 3 potential HIV prevention and treatment strategies, we estimated life expectancy and lifetime costs using the Cost-effectiveness of Preventing AIDS Complications (CEPAC) model, a complex patient-level microsimulation model of HIV progression. We fitted a GAM, a flexible regression model that estimates the functional form as part of the model fitting process, to the incremental net monetary benefits obtained from the CEPAC PSA. For each case study, we calculated the expected value of partial perfect information (EVPPI) using both the conventional nested Monte Carlo approach and the GAM approach. EVSI was calculated using the GAM approach. For all 3 case studies, the GAM approach consistently gave estimates of EVPPI similar to those of the conventional approach. The EVSI behaved as expected: it increased and converged to EVPPI for larger sample sizes. For each case study, generating the PSA results for the GAM approach required 3 to 4 days on a shared cluster, after which EVPPI and EVSI across a range of sample sizes were evaluated in minutes. The conventional approach required approximately 5 weeks for the EVPPI calculation alone. Estimating EVSI using the GAM approach with results from a PSA dramatically reduced the time required to conduct a computationally intense project, which would otherwise have been impractical. Using the GAM approach, we can efficiently provide policy makers with EVSI estimates, even for complex patient-level microsimulation models.
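
    A sketch of the regression-based EVPPI estimator underlying this approach, for the two-strategy case; a cubic polynomial stands in for the GAM so the example stays dependency-free, and all simulated PSA values are hypothetical.

        # Regression-based EVPPI sketch (two strategies); synthetic PSA draws.
        import numpy as np

        rng = np.random.default_rng(3)
        theta = rng.normal(0.0, 1.0, 5000)        # PSA draws of parameter of interest
        inb = 200.0 * theta - 50.0 + rng.normal(0.0, 300.0, 5000)  # incremental NMB

        coeffs = np.polyfit(theta, inb, deg=3)    # flexible fit of E[INB | theta]
        g = np.polyval(coeffs, theta)             # fitted conditional expectations

        # value of learning theta: expected max-per-draw minus max of expectations
        evppi = np.mean(np.maximum(g, 0.0)) - max(np.mean(g), 0.0)
        print(round(evppi, 1))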

  16. Unraveling the Mechanisms of Manual Therapy: Modeling an Approach.

    Science.gov (United States)

    Bialosky, Joel E; Beneciuk, Jason M; Bishop, Mark D; Coronado, Rogelio A; Penza, Charles W; Simon, Corey B; George, Steven Z

    2018-01-01

    Synopsis Manual therapy interventions are popular among individual health care providers and their patients; however, systematic reviews do not strongly support their effectiveness. Small treatment effect sizes of manual therapy interventions may result from a "one-size-fits-all" approach to treatment. Mechanistic-based treatment approaches to manual therapy offer an intriguing alternative for identifying patients likely to respond to manual therapy. However, the current lack of knowledge of the mechanisms through which manual therapy interventions inhibit pain limits such an approach. The nature of manual therapy interventions further confounds such an approach, as the related mechanisms are likely a complex interaction of factors related to the patient, the provider, and the environment in which the intervention occurs. Therefore, a model to guide both study design and the interpretation of findings is necessary. We have previously proposed a model suggesting that the mechanical force from a manual therapy intervention results in systemic neurophysiological responses leading to pain inhibition. In this clinical commentary, we provide a narrative appraisal of the model and recommendations to advance the study of manual therapy mechanisms. J Orthop Sports Phys Ther 2018;48(1):8-18. doi:10.2519/jospt.2018.7476.

  17. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: multilevel modeling (hierarchical linear modeling) and structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how the two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. Multilevel modeling and structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  18. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  19. A Multi-Model Approach for System Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad; Bækgaard, Mikkel Ask Buur

    2007-01-01

    A multi-model approach for system diagnosis is presented in this paper. The relation with fault diagnosis as well as performance validation is considered. The approach is based on testing a number of pre-described models and finding which one is the best. It is based on an active approach, i.e. an auxiliary input to the system is applied. The multi-model approach is applied to a wind turbine system.

  20. Hypercompetitive Environments: An Agent-based model approach

    Science.gov (United States)

    Dias, Manuel; Araújo, Tanya

    Information technology (IT) environments are characterized by complex changes and rapid evolution. Globalization and the spread of technological innovation have increased the need for new strategic information resources, both for individual firms and for management environments. Improvements in multidisciplinary methods and, particularly, the availability of powerful computational tools are giving researchers an increasing opportunity to investigate management environments in their true complex nature. The adoption of a complex systems approach allows for modeling business strategies from a bottom-up perspective — understood as resulting from repeated and local interaction of economic agents — without disregarding the consequences of the business strategies themselves for the individual behavior of enterprises, the emergence of interaction patterns between firms, and management environments. Agent-based models are the leading approach in this attempt.

  1. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    Science.gov (United States)

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of the seventeen objects is represented with more than 80 site-specific parameters, of which about 22 are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding in the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of

  2. Towards a 3d Spatial Urban Energy Modelling Approach

    Science.gov (United States)

    Bahu, J.-M.; Koch, A.; Kremers, E.; Murshed, S. M.

    2013-09-01

    Today's needs to reduce the environmental impact of energy use impose dramatic changes for energy infrastructure and existing demand patterns (e.g. buildings) corresponding to their specific context. In addition, future energy systems are expected to integrate a considerable share of fluctuating power sources and equally a high share of distributed generation of electricity. Energy system models capable of describing such future systems and allowing the simulation of the impact of these developments thus require a spatial representation in order to reflect the local context and the boundary conditions. This paper describes two recent research approaches developed at EIFER in the fields of (a) geo-localised simulation of heat energy demand in cities based on 3D morphological data and (b) spatially explicit Agent-Based Models (ABM) for the simulation of smart grids. 3D city models were used to assess solar potential and heat energy demand of residential buildings, which enables cities to target building refurbishment potentials. Distributed energy systems require innovative modelling techniques where individual components are represented and can interact. With this approach, several smart grid demonstrators were simulated, where heterogeneous models are spatially represented. Coupling 3D geodata with energy system ABMs holds different advantages for both approaches. On the one hand, energy system models can be enhanced with high-resolution data from 3D city models and their semantic relations. Furthermore, they allow for spatial analysis and visualisation of the results, with emphasis on spatial and structural correlations among the different layers (e.g. infrastructure, buildings, administrative zones), providing an integrated approach. On the other hand, 3D models can benefit from a more detailed system description of energy infrastructure, representing dynamic phenomena and high-resolution models for energy use at component level. The proposed modelling strategies

  3. A hybrid modeling approach for option pricing

    Science.gov (United States)

    Hajizadeh, Ehsan; Seifi, Abbas

    2011-11-01

    The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations, one of which is the controversial assumption that the underlying probability distribution is lognormal. We propose a couple of hybrid models to reduce these limitations and enhance the ability of option pricing. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. Then, we develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options for the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and the neuro-fuzzy network models outperform the Black-Scholes model. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
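
    As a concrete illustration of the volatility-input step, the sketch below fits a GARCH(1,1) to synthetic returns with the Python arch package and produces the variance forecast that such a hybrid pricing model would consume. The data and settings are invented for illustration and do not reproduce the paper's three GARCH specifications.

    ```python
    # Hedged sketch of the volatility-estimation step with the `arch` package.
    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(3)
    returns = 100 * rng.normal(0.0, 0.01, 1000)   # toy daily returns, percent

    am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
    res = am.fit(disp="off")
    forecast = res.forecast(horizon=5)            # feeds the pricing model
    print("5-day-ahead variance forecast:")
    print(forecast.variance.iloc[-1])
    ```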

  4. Dynamics and control of quadcopter using linear model predictive control approach

    Science.gov (United States)

    Islam, M.; Okasha, M.; Idres, M. M.

    2017-12-01

    This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom that include disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories, ranging from simple ones such as circular trajectories to complex helical ones. In this control technique, a linearized model is derived and the receding horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective in dealing with different types of nonlinearities and constraints, such as actuator saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach using the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective in tracking a given reference trajectory.
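
    The receding-horizon idea can be sketched on a toy linearized model. The snippet below solves one MPC step for a single translational axis approximated as a double integrator, using cvxpy; the horizon, weights, and saturation limit are assumed values, and the authors' full six-degree-of-freedom model is not reproduced.

    ```python
    # Hedged sketch of one receding-horizon step, posed as a QP with cvxpy.
    import numpy as np
    import cvxpy as cp

    dt, N = 0.1, 20
    A = np.array([[1.0, dt], [0.0, 1.0]])          # discretized dynamics
    B = np.array([[0.5 * dt ** 2], [dt]])
    Q = np.diag([10.0, 1.0])                       # state weights (pos, vel)
    R = np.array([[0.1]])                          # input weight
    u_max = 5.0                                    # actuator saturation

    x0 = np.array([0.0, 0.0])
    x_ref = np.array([1.0, 0.0])                   # step reference in position

    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constraints = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k] - x_ref, Q) + cp.quad_form(u[:, k], R)
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.abs(u[:, k]) <= u_max]
    cp.Problem(cp.Minimize(cost), constraints).solve()

    # only the first move is applied; the problem is re-solved at the next step
    print("first control move:", float(u.value[0, 0]))
    ```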

  5. Validation of Slosh Modeling Approach Using STAR-CCM+

    Science.gov (United States)

    Benson, David J.; Ng, Wanyi

    2018-01-01

    Without an adequate understanding of propellant slosh, the spacecraft attitude control system may be inadequate to control the spacecraft, or there may be an unexpected loss of science observation time due to higher slosh settling times. Computational fluid dynamics (CFD) is used to model propellant slosh. STAR-CCM+ is a commercially available CFD code. This paper seeks to validate the CFD modeling approach via a comparison between STAR-CCM+ liquid slosh modeling results and experimentally, empirically, and analytically derived results. The geometries examined are a bare right cylinder tank and a right cylinder with a single ring baffle.

  6. Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricane Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  7. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  8. Feedback structure based entropy approach for multiple-model estimation

    Institute of Scientific and Technical Information of China (English)

    Shen-tu Han; Xue Anke; Guo Yunfei

    2013-01-01

    The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, the full Markov chains are used to achieve optimal solutions. Secondly, the myopic method, together with a particle filter (PF) and the challenge match algorithm, is used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.

  9. An integrated modeling approach to age invariant face recognition

    Science.gov (United States)

    Alvi, Fahad Bashir; Pears, Russel

    2015-03-01

    This research study proposes a novel method for face recognition based on anthropometric features, making use of an integrated approach comprising a global and a personalized model. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers the individual aging patterns while a global model captures general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database test and training sets. We used the k-nearest-neighbor approach for building the personalized and global models, and regression analysis was applied to build the models. During the test phase, we resort to voting on different features. We used the FG-NET database for checking the results of our technique and achieved a 65% rank-1 identification rate.

  10. Predicting ecosystem functioning from plant traits: Results from a multi-scale ecophysiological modeling approach

    NARCIS (Netherlands)

    Wijk, van M.T.

    2007-01-01

    Ecosystem functioning is the result of processes working at a hierarchy of scales. The representation of these processes in a model that is mathematically tractable and ecologically meaningful is a big challenge. In this paper I describe an individual-based model (PLACO: PLAnt COmpetition) that

  11. Popularity Modeling for Mobile Apps: A Sequential Approach.

    Science.gov (United States)

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on a hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity-based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite-based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend-based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
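
    A rough feel for the sequential-modeling step can be had from the sketch below, which fits a plain Gaussian HMM (via hmmlearn) to a synthetic one-dimensional rank series. The paper's PHMM additionally handles heterogeneous observations and bipartite pre-clustering, which this sketch omits.

    ```python
    # Illustrative stand-in for the PHMM idea: a Gaussian HMM over a toy
    # daily chart-rank series with a "rising" and a "fading" regime.
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(1)
    rising = 50 - np.cumsum(rng.uniform(0.5, 1.5, 40))
    fading = rising[-1] + np.cumsum(rng.uniform(0.2, 1.0, 40))
    ranks = np.concatenate([rising, fading]).reshape(-1, 1)

    model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                            n_iter=200, random_state=1)
    model.fit(ranks)
    states = model.predict(ranks)    # latent popularity regime per day
    print("inferred regime switches at:", np.flatnonzero(np.diff(states)) + 1)
    ```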

  12. An approach to multiscale modelling with graph grammars.

    Science.gov (United States)

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-09-01

    Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.

  13. Results of a Flipped Classroom Teaching Approach in Anesthesiology Residents.

    Science.gov (United States)

    Martinelli, Susan M; Chen, Fei; DiLorenzo, Amy N; Mayer, David C; Fairbanks, Stacy; Moran, Kenneth; Ku, Cindy; Mitchell, John D; Bowe, Edwin A; Royal, Kenneth D; Hendrickse, Adrian; VanDyke, Kenneth; Trawicki, Michael C; Rankin, Demicha; Guldan, George J; Hand, Will; Gallagher, Christopher; Jacob, Zvi; Zvara, David A; McEvoy, Matthew D; Schell, Randall M

    2017-08-01

    In a flipped classroom approach, learners view educational content prior to class and engage in active learning during didactic sessions. We hypothesized that a flipped classroom improves knowledge acquisition and retention for residents compared to traditional lecture, and that residents prefer this approach. We completed 2 iterations of a study in 2014 and 2015. Institutions were assigned to either flipped classroom or traditional lecture for 4 weekly sessions. The flipped classroom consisted of reviewing a 15-minute video, followed by 45-minute in-class interactive sessions with audience response questions, think-pair-share questions, and case discussions. The traditional lecture approach consisted of a 55-minute lecture given by faculty with 5 minutes for questions. Residents completed 3 knowledge tests (pretest, posttest, and 4-month retention) and surveys of their perceptions of the didactic sessions. A linear mixed model was used to compare the effect of both formats on knowledge acquisition and retention. Of 182 eligible postgraduate year 2 anesthesiology residents, 155 (85%) participated in the entire intervention, and 142 (78%) completed all tests. The flipped classroom approach improved knowledge retention after 4 months (adjusted mean = 6%; P = .014; d = 0.56), and residents preferred the flipped classroom format (pre = 46%; post = 82%). The flipped classroom approach to didactic education resulted in a small improvement in knowledge retention and was preferred by anesthesiology residents.

  14. Agent-based modeling: a new approach for theory building in social psychology.

    Science.gov (United States)

    Smith, Eliot R; Conrey, Frederica R

    2007-02-01

    Most social and psychological phenomena occur not as the result of isolated decisions by individuals but rather as the result of repeated interactions between multiple individuals over time. Yet the theory-building and modeling techniques most commonly used in social psychology are less than ideal for understanding such dynamic and interactive processes. This article describes an alternative approach to theory building, agent-based modeling (ABM), which involves simulation of large numbers of autonomous agents that interact with each other and with a simulated environment and the observation of emergent patterns from their interactions. The authors believe that the ABM approach is better able than prevailing approaches in the field, variable-based modeling (VBM) techniques such as causal modeling, to capture types of complex, dynamic, interactive processes so important in the social world. The article elaborates several important contrasts between ABM and VBM and offers specific recommendations for learning more and applying the ABM approach.
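
    For readers new to ABM, the following minimal sketch shows the flavor of the approach with a bounded-confidence opinion model (a standard Deffuant-style toy, not taken from the article): agents repeatedly interact locally, and macro-level clusters emerge from those interactions.

    ```python
    # Minimal agent-based sketch: repeated local interactions produce an
    # emergent macro-level pattern (opinion clustering).
    import numpy as np

    rng = np.random.default_rng(42)
    n_agents, steps, mu, threshold = 200, 20000, 0.3, 0.2
    opinions = rng.uniform(0, 1, n_agents)

    for _ in range(steps):
        i, j = rng.integers(0, n_agents, 2)          # random pair of agents
        if abs(opinions[i] - opinions[j]) < threshold:  # bounded confidence
            shift = mu * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift

    # emergent outcome: distinct opinion clusters after local interaction
    print("opinion clusters:", np.unique(np.round(opinions, 1)))
    ```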

  15. Melt coolability modeling and comparison to MACE test results

    International Nuclear Information System (INIS)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.

    1992-01-01

    An important question in the assessment of severe accidents in light water nuclear reactors is the ability of water to quench a molten corium-concrete interaction and thereby terminate the accident progression. As part of the Melt Attack and Coolability Experiment (MACE) Program, phenomenological models of the corium quenching process are under development. The modeling approach considers both bulk cooldown and crust-limited heat transfer regimes, as well as criteria for the pool thermal hydraulic conditions which separate the two regimes. The model is then compared with results of the MACE experiments

  16. Lattice Hamiltonian approach to the Schwinger model. Further results from the strong coupling expansion

    International Nuclear Information System (INIS)

    Szyniszewski, Marcin; Manchester Univ.; Cichy, Krzysztof; Poznan Univ.; Kujawa-Cichy, Agnieszka

    2014-10-01

    We employ exact diagonalization with strong coupling expansion to the massless and massive Schwinger model. New results are presented for the ground state energy and scalar mass gap in the massless model, which improve the precision to nearly 10^-9 %. We also investigate the chiral condensate and compare our calculations to previous results available in the literature. Oscillations of the chiral condensate which are present while increasing the expansion order are also studied and are shown to be directly linked to the presence of flux loops in the system.

  17. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can realize a combination of statistics and dynamics to a certain extent.
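
    A minimal sketch of the experimental setup described above (an imperfect prediction model versus a "reality" run containing a periodic term) can be written with SciPy. The periodic error term and its amplitude are placeholders, and the evolutionary-modeling correction itself is not implemented here.

    ```python
    # "Reality" = Lorenz (1963) with a hypothetical periodic perturbation;
    # prediction model = the classic equations (eps = 0). The difference
    # between the two runs is the model error to be estimated.
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0, eps=0.0):
        x, y, z = s
        # eps scales an assumed time-periodic model error term
        return [sigma * (y - x),
                x * (rho + eps * np.sin(0.1 * t) - z) - y,
                x * y - beta * z]

    s0, t_span = [1.0, 1.0, 1.0], (0.0, 10.0)
    t_eval = np.linspace(*t_span, 1001)
    truth = solve_ivp(lorenz, t_span, s0, t_eval=t_eval,
                      args=(10.0, 28.0, 8.0 / 3.0, 2.0))
    model = solve_ivp(lorenz, t_span, s0, t_eval=t_eval)  # imperfect model
    err = np.linalg.norm(truth.y - model.y, axis=0)
    print(f"mean state error over window: {err.mean():.2f}")
    ```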

  18. HEDR modeling approach

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1992-07-01

    This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and projects an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered.

  19. A novel approach of modeling continuous dark hydrogen fermentation.

    Science.gov (United States)

    Alexandropoulou, Maria; Antonopoulou, Georgia; Lyberatos, Gerasimos

    2018-02-01

    In this study a novel modeling approach for describing fermentative hydrogen production in a continuous stirred tank reactor (CSTR) was developed, using the Aquasim modeling platform. This model accounts for the key metabolic reactions taking place in a fermentative hydrogen producing reactor, using fixed stoichiometry but different reaction rates. Biomass yields are determined based on bioenergetics. The model is capable of describing very well the variation in the distribution of metabolic products for a wide range of hydraulic retention times (HRT). The modeling approach is demonstrated using the experimental data obtained from a CSTR, fed with food industry waste (FIW), operating at different HRTs. The kinetic parameters were estimated through fitting to the experimental results. Hydrogen and total biogas production rates were predicted very well by the model, validating the basic assumptions regarding the implicated stoichiometric biochemical reactions and their kinetic rates. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. An integrated approach to permeability modeling using micro-models

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)

    2008-10-15

    An important factor in predicting the performance of steam assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship impossible include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modeling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modeling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; extended power law formalism (EPLF); and the application of micro-modeling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results. There was close agreement between the model and simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of McMurray oil sands. The model results were in good agreement with the experimental data. 8 refs., 17 figs.

  1. A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model

    Energy Technology Data Exchange (ETDEWEB)

    Pasqualini, Donatella [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-11

    This manuscript briefly describes a statistical approach to generate synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and implementations of adaptation strategies to extreme weather. In the literature there are mainly two approaches to model hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls into the second category, adopting a purely stochastic approach.

  2. Application of various FLD modelling approaches

    Science.gov (United States)

    Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.

    2005-07-01

    This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well-known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse necking approach (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.

  3. The Generalised Ecosystem Modelling Approach in Radiological Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Klos, Richard

    2008-03-15

    An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment.

  5. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we ...

  6. System Behavior Models: A Survey of Approaches

    Science.gov (United States)

    2016-06-01

    A spiral model was chosen for researching and structuring this thesis (Figure 1: Spiral Model), allowing multiple iterations over the source material and refinement through iteration. The scope of the research is limited to a literature review.

  7. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and empirically demonstrating equifinal paths to maturity. Specifically, it develops methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches for maturity model research and provides demonstrations of their application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper ...

  8. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fitting, assessing, and adjusting a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
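
    The flavor of an a priori strategy comparison can be sketched with scikit-learn: two candidate logistic strategies (near-unpenalized maximum likelihood versus cross-validated ridge shrinkage) are fitted on synthetic development data and compared on a held-out Brier score. The data, strategies, and split are illustrative assumptions; the paper's likelihood-based wrapper framework is not reproduced here.

    ```python
    # Hedged sketch: compare two logistic modelling strategies out-of-sample.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
    from sklearn.metrics import brier_score_loss
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                               random_state=0)
    X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

    strategies = {
        "maximum likelihood (no shrinkage)": LogisticRegression(C=1e6,
                                                                max_iter=1000),
        "ridge shrinkage (CV-tuned)": LogisticRegressionCV(Cs=10, cv=5,
                                                           max_iter=1000),
    }
    for name, estimator in strategies.items():
        p = estimator.fit(X_dev, y_dev).predict_proba(X_val)[:, 1]
        print(f"{name}: Brier score = {brier_score_loss(y_val, p):.4f}")
    ```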

  9. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
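
    The two-step workflow (Morris screening, then variance-based indices) can be sketched with the SALib package on a toy function standing in for the Xin'anjiang model. Parameter names, bounds, and sample sizes below are invented, and the RSM meta-model used by RSMSobol is omitted: Sobol indices are computed directly on the toy function instead.

    ```python
    # Hedged sketch: Morris screening followed by Sobol indices with SALib.
    import numpy as np
    from SALib.sample import morris as morris_sample
    from SALib.sample import saltelli
    from SALib.analyze import morris as morris_analyze
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["K", "B", "C"],                 # hypothetical parameters
        "bounds": [[0.1, 1.0], [0.0, 2.0], [0.5, 1.5]],
    }

    def model(X):
        # toy response, not the Xin'anjiang model
        return X[:, 0] * np.sin(X[:, 1]) + X[:, 2] ** 2

    # Step 1: Morris screening for a qualitative parameter ranking
    Xm = morris_sample.sample(problem, N=100, num_levels=4)
    Sm = morris_analyze.analyze(problem, Xm, model(Xm), num_levels=4)
    print("Morris mu*:", dict(zip(problem["names"], Sm["mu_star"])))

    # Step 2: variance-based Sobol indices for the retained parameters
    Xs = saltelli.sample(problem, 1024)
    Ss = sobol.analyze(problem, model(Xs))
    print("first-order S1:", dict(zip(problem["names"], Ss["S1"])))
    ```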

  10. Model unspecific search in CMS. Results at 8 TeV

    Energy Technology Data Exchange (ETDEWEB)

    Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Knutzen, Simon; Lieb, Jonas; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    In the year 2012, CMS collected a total data set of approximately 20 fb^-1 in proton-proton collisions at √(s) = 8 TeV. Dedicated searches for physics beyond the standard model are commonly designed with the signatures of a given theoretical model in mind. While this approach allows for an optimised sensitivity to the sought-after signal, it may cause unexpected phenomena to be overlooked. In a complementary approach, the Model Unspecific Search in CMS (MUSiC) analyses CMS data in a general way. Depending on the reconstructed final state objects (e.g. electrons), collision events are sorted into classes. In each of the classes, the distributions of selected kinematic variables are compared to standard model simulation. An automated statistical analysis is performed to quantify the agreement between data and prediction. In this talk, the analysis concept is introduced and selected results of the analysis of the 2012 CMS data set are presented.

  11. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Background: Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results: Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions: The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also

  12. Challenges and opportunities for integrating lake ecosystem modelling approaches

    Science.gov (United States)

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  13. Models of galaxies - The modal approach

    International Nuclear Information System (INIS)

    Lin, C.C.; Lowe, S.A.

    1990-01-01

    The general viability of the modal approach to the spiral structure in normal spirals and the barlike structure in certain barred spirals is discussed. The usefulness of the modal approach in the construction of models of such galaxies is examined, emphasizing the adoption of a model appropriate to observational data for both the spiral structure of a galaxy and its basic mass distribution. 44 refs

  14. Approaches for Reduced Order Modeling of Electrically Actuated von Karman Microplates

    KAUST Repository

    Saghir, Shahid

    2016-07-25

    This article presents and compares different approaches to developing reduced-order models for nonlinear von Karman rectangular microplates actuated by nonlinear electrostatic forces. The reduced-order models aim to investigate the static and dynamic behavior of the plate under small and large actuation forces. A fully clamped microplate is considered. Different types of basis functions are used in conjunction with the Galerkin method to discretize the governing equations. First we investigate the convergence with the number of modes retained in the model. Then, for validation purposes, a comparison of the static results is made with the results calculated by a nonlinear finite element model. The linear eigenvalue problem for the plate under the electrostatic force is solved for a wide range of voltages up to pull-in. Results among the various reduced-order models are compared and are also validated by comparing to results of the finite-element model. Further, the reduced-order models are employed to capture the forced dynamic response of the microplate under small and large vibration amplitudes. Comparisons of the different approaches are made for this case. Keywords: electrically actuated microplates, static analysis, dynamics of microplates, diaphragm vibration, large amplitude vibrations, nonlinear dynamics

  15. Reconstructing plateau icefields: Evaluating empirical and modelled approaches

    Science.gov (United States)

    Pearce, Danni; Rea, Brice; Barr, Iestyn

    2013-04-01

    Glacial landforms are widely utilised to reconstruct former glacier geometries, with a common aim of estimating the Equilibrium Line Altitudes (ELAs) and, from these, inferring palaeoclimatic conditions. Such inferences may be studied on a regional scale and used to correlate climatic gradients across large distances (e.g., Europe). In Britain, the traditional approach uses geomorphological mapping with hand contouring to derive the palaeo-ice surface. More recently, ice surface modelling has enabled an equilibrium profile reconstruction tuned using the geomorphology. Both methods permit derivation of palaeo-climate, but no study has compared the two methods for the same ice-mass. This is important because either approach may result in differences in glacier limits, ELAs and palaeo-climate. This research uses both methods to reconstruct a plateau icefield and quantifies the results from a cartographic and geometrical aspect. Detailed geomorphological mapping of the Tweedsmuir Hills in the Southern Uplands, Scotland (c. 320 km2), was conducted to examine the extent of Younger Dryas (YD; 12.9-11.7 cal. ka BP) glaciation. Landform evidence indicates a plateau icefield configuration of two separate ice-masses during the YD, covering areas of c. 45 km2 and 25 km2. The interpreted age is supported by new radiocarbon dating of basal stratigraphies and Terrestrial Cosmogenic Nuclide Analysis (TCNA) of in situ boulders. Both techniques produce similar configurations; however, the model results in a coarser resolution requiring further processing if a cartographic map is required. When landforms are absent or fragmentary (e.g., trimlines and lateral moraines), as in many accumulation zones on plateau icefields, the geomorphological approach increasingly relies on extrapolation between lines of evidence and on the individual's perception of how the ice-mass ought to look. In some locations this results in an underestimation of the ice surface compared to the modelled surface, most likely due to
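
    For the modelled reconstruction, a common building block is a perfect-plasticity equilibrium profile. The sketch below computes such a profile over a flat bed under an assumed basal yield stress; the scheme is a generic textbook one (a Benn/Hulton-style profile), not necessarily the specific model used in this study, and all numbers are illustrative.

    ```python
    # Hedged sketch: parabolic equilibrium ice-surface profile, flat bed.
    import numpy as np

    RHO_ICE, G = 900.0, 9.81       # ice density kg/m^3, gravity m/s^2
    TAU_Y = 50e3                   # basal yield stress, Pa (assumed)

    def ice_surface(x, bed_elev):
        """Surface elevation at distance x (m) from the former margin."""
        thickness = np.sqrt(2.0 * TAU_Y * x / (RHO_ICE * G))
        return bed_elev + thickness

    x = np.linspace(0.0, 5000.0, 6)
    print(ice_surface(x, bed_elev=300.0).round(0))   # m above datum
    ```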

  16. Box-wing model approach for solar radiation pressure modelling in a multi-GNSS scenario

    Science.gov (United States)

    Tobias, Guillermo; Jesús García, Adrián

    2016-04-01

    The solar radiation pressure force is the largest orbital perturbation after the gravitational effects and the major error source affecting GNSS satellites. A wide range of approaches have been developed over the years for modelling this non-gravitational effect as part of the orbit determination process. These approaches are commonly divided into empirical, semi-analytical and analytical, where their main difference lies in the amount of a-priori physical information used about the properties of the satellites (materials and geometry) and their attitude. It has been shown in the past that pre-launch analytical models fail to achieve the desired accuracy, mainly due to difficulties in the extrapolation of the in-orbit optical and thermal properties, perturbations in the nominal attitude law and the aging of the satellite's surfaces, whereas the accuracy of empirical models strongly depends on the amount of tracking data used for deriving them, and their performance degrades as the area-to-mass ratio of the GNSS satellites increases, as happens for upcoming constellations such as BeiDou and Galileo. This paper proposes to use a basic box-wing model for Galileo complemented with empirical parameters, based on the limited available information about the Galileo satellites' geometry. The satellite is modelled as a box, representing the satellite bus, and a wing representing the solar panel. The performance of the model is assessed for the GPS, GLONASS and Galileo constellations. The results of the proposed approach have been analyzed over a one-year period. In order to assess the results, two different SRP models have been used: firstly, the proposed box-wing model and secondly, the new CODE empirical model, ECOM2. The orbit performances of both models are assessed using Satellite Laser Ranging (SLR) measurements, together with the evaluation of the orbit prediction accuracy. This comparison shows the advantages and disadvantages of
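
    The basic box-wing computation can be sketched as a sum of flat-plate SRP contributions using the standard absorbed/specular/diffuse plate formula; the geometry, mass, and optical coefficients below are placeholders, not calibrated Galileo values.

    ```python
    # Hedged sketch: total SRP acceleration as the sum of flat-plate
    # contributions from the bus faces and the solar wing.
    import numpy as np

    P_SUN = 4.56e-6          # solar radiation pressure at 1 AU, N/m^2
    MASS = 700.0             # satellite mass, kg (assumed)

    def plate_accel(area, normal, sun_dir, spec, diff):
        """SRP acceleration of one flat plate; sun_dir points to the Sun."""
        cos_t = float(np.dot(normal, sun_dir))
        if cos_t <= 0.0:                     # face not illuminated
            return np.zeros(3)
        absorbed = 1.0 - spec - diff
        force = -P_SUN * area * cos_t * (
            (absorbed + diff) * sun_dir
            + (2.0 * spec * cos_t + 2.0 * diff / 3.0) * normal
        )
        return force / MASS

    sun_dir = np.array([1.0, 0.0, 0.0])
    plates = [                               # (area m^2, normal, spec, diff)
        (1.5, np.array([1.0, 0.0, 0.0]), 0.2, 0.1),    # bus +X face
        (1.5, np.array([-1.0, 0.0, 0.0]), 0.2, 0.1),   # bus -X face (shadow)
        (3.0, np.array([0.0, 1.0, 0.0]), 0.2, 0.1),    # bus +Y face
        (10.0, np.array([1.0, 0.0, 0.0]), 0.1, 0.2),   # wing, Sun-pointing
    ]
    a_srp = sum(plate_accel(a, n, sun_dir, s, d) for a, n, s, d in plates)
    print("SRP acceleration [m/s^2]:", a_srp)
    ```

    In an operational box-wing model, these pre-computed accelerations are then complemented with estimated empirical parameters, as the abstract describes.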

  17. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to get multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  18. Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling

    OpenAIRE

    Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.

    2016-01-01

    The "interpretation through synthesis" approach to analyze face images, particularly Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly depended on the training sets and inherently on the genera...

  19. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given

  20. FEM-based neural-network approach to nonlinear modeling with application to longitudinal vehicle dynamics control.

    Science.gov (United States)

    Kalkkuhl, J; Hunt, K J; Fritz, H

    1999-01-01

    A finite-element method (FEM)-based neural-network approach to Nonlinear AutoRegressive with eXogenous input (NARX) modeling is presented. The method uses multilinear interpolation functions on C0 rectangular elements. The local and global structure of the resulting model is analyzed. It is shown that the model can be interpreted both as a local model network and as a single-layer feedforward neural network. The main aim is to use the model for nonlinear control design. The proposed FEM NARX description is easily accessible to feedback linearizing control techniques. Its use with a two-degrees-of-freedom nonlinear internal model controller is discussed. The approach is applied to modeling the nonlinear longitudinal dynamics of an experimental lorry, using measured data. The modeling results are compared with local model network and multilayer perceptron approaches. A nonlinear speed controller was designed based on the identified FEM model. The controller was implemented in a test vehicle, and several experimental results are presented.

  1. A probabilistic approach to the drag-based model

    Science.gov (United States)

    Napoletano, Gianluca; Forte, Roberta; Moro, Dario Del; Pietropaolo, Ermanno; Giovannelli, Luca; Berrilli, Francesco

    2018-02-01

    The forecast of the time of arrival (ToA) of a coronal mass ejection (CME) at Earth is of critical importance for our high-technology society and for any future manned exploration of the Solar System. As critical as the forecast accuracy is the knowledge of its precision, i.e. the error associated with the estimate. We propose a statistical approach to the computation of the ToA using the drag-based model, introducing probability distributions, rather than exact values, as input parameters, thus allowing the evaluation of the uncertainty on the forecast. We test this approach using a set of CMEs whose transit times are known, and obtain extremely promising results: the average value of the absolute differences between measured and forecast ToAs is 9.1 h, and half of these residuals are within the estimated errors. These results suggest that this approach deserves further investigation. We are working to realize a real-time implementation which ingests the outputs of automated CME tracking algorithms as inputs, to create a database of events useful for further validation of the approach.
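    A minimal Monte Carlo version of this idea can be sketched from the analytic solution of the drag-based model, propagating input distributions into a ToA distribution; the distributions below are purely illustrative, not the calibrated ones used by the authors.

```python
import numpy as np
from scipy.optimize import brentq

AU = 1.496e8  # km

def dbm_distance(t, r0, v0, w, gamma):
    """Heliocentric distance from the analytic drag-based-model solution
    (deceleration branch, valid for v0 > w)."""
    return np.log1p(gamma * (v0 - w) * t) / gamma + w * t + r0

def time_of_arrival(r0, v0, w, gamma, target=AU):
    return brentq(lambda t: dbm_distance(t, r0, v0, w, gamma) - target, 1.0, 3e6)

rng = np.random.default_rng(0)
n = 5000
# illustrative input distributions (not the paper's calibrated ones)
v0 = rng.normal(800.0, 50.0, n)              # initial CME speed, km/s
w = rng.normal(400.0, 30.0, n)               # solar wind speed, km/s
gamma = rng.lognormal(np.log(2e-8), 0.3, n)  # drag parameter, km^-1
r0 = 20.0 * 6.96e5                           # starting at 20 solar radii, km

toa = np.array([time_of_arrival(r0, v, wi, g) for v, wi, g in zip(v0, w, gamma)])
print(f"ToA = {toa.mean() / 3600:.1f} +/- {toa.std() / 3600:.1f} h")
```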

  2. Behavioral facilitation: a cognitive model of individual differences in approach motivation.

    Science.gov (United States)

    Robinson, Michael D; Meier, Brian P; Tamir, Maya; Wilkowski, Benjamin M; Ode, Scott

    2009-02-01

    Approach motivation consists of the active, engaged pursuit of one's goals. The purpose of the present three studies (N = 258) was to examine whether approach motivation could be cognitively modeled, thereby providing process-based insights into personality functioning. Behavioral facilitation was assessed in terms of faster (or facilitated) reaction time with practice. As hypothesized, such tendencies predicted higher levels of approach motivation, higher levels of positive affect, and lower levels of depressive symptoms, and did so across cognitive, behavioral, self-reported, and peer-reported outcomes. Tendencies toward behavioral facilitation, on the other hand, did not correlate with self-reported traits (Study 1) and did not predict avoidance motivation or negative affect (all studies). The results indicate a systematic relationship between behavioral facilitation in cognitive tasks and approach motivation in daily life. Results are discussed in terms of the benefits of modeling the cognitive processes hypothesized to underlie individual differences in motivation, affect, and depression. (c) 2009 APA, all rights reserved

  3. Towards a CPN-Based Modelling Approach for Reconciling Verification and Implementation of Protocol Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2013-01-01

    Formal modelling of protocols is often aimed at one specific purpose, such as verification or automatically generating an implementation. This leads to models that are useful for one purpose but not for others. Being able to derive models for verification and implementation from a single model is beneficial both in terms of reduced total modelling effort and of confidence that the verification results are valid also for the implementation model. In this paper we introduce the concept of a descriptive specification model and an approach based on refining a descriptive model to target both verification and implementation, and we show how such a model can be refined to target both purposes.

  4. An approach to ductile fracture resistance modelling in pipeline steels

    Energy Technology Data Exchange (ETDEWEB)

    Pussegoda, L.N.; Fredj, A. [BMT Fleet Technology Ltd., Kanata (Canada)

    2009-07-01

    Ductile fracture resistance studies of high-grade steels in the pipeline industry often include analyses of the crack tip opening angle (CTOA) parameter using 3-point bend steel specimens. The CTOA is a function of specimen ligament size in high-grade materials. Other resistance measurements may include steady-state fracture propagation energy, critical fracture strain, and the adoption of damage mechanisms. Modelling approaches for crack propagation were discussed. Tension tests were used to calibrate damage model parameters. Results from the tests were then applied to crack propagation in a 3-point bend specimen using 1980s-vintage steels. Limitations, and approaches to overcome the difficulties associated with crack propagation modelling, were discussed.

  5. CFD Modeling of Wall Steam Condensation: Two-Phase Flow Approach versus Homogeneous Flow Approach

    International Nuclear Information System (INIS)

    Mimouni, S.; Mechitoua, N.; Foissac, A.; Hassanaly, M.; Ouraou, M.

    2011-01-01

    The present work is focused on the condensation heat transfer that plays a dominant role in many accident scenarios postulated to occur in the containment of nuclear reactors. The study compares a general multiphase approach implemented in NEPTUNE_CFD with a homogeneous model, of widespread use for engineering studies, implemented in Code_Saturne. The model implemented in NEPTUNE_CFD assumes that liquid droplets form along the wall within nucleation sites. Vapor condensation on the droplets makes them grow. Once the droplet diameter reaches a critical value, gravitational forces overcome the surface tension force and the droplets slide over the wall and form a liquid film. This approach makes it possible to take into account simultaneously the mechanical drift between the droplets and the gas, the heat and mass transfer on droplets in the core of the flow, and the condensation/evaporation phenomena at the walls. In the homogeneous approach, the motion of the liquid film due to gravitational forces is neglected, as is the volume occupied by the liquid. Both condensation models and compressible procedures are validated and compared to experimental data provided by the TOSQAN ISP47 experiment (IRSN Saclay). Computational results compare favorably with the experimental data, particularly for the helium and steam volume fractions.

  6. Benchmarking novel approaches for modelling species range dynamics.

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to modelling range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development; indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five (dynamic) range models of varying complexity, including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have a great impact on model accuracy, but prior system knowledge of important processes can reduce these uncertainties considerably. Our results affirm the clear merit of using dynamic approaches for modelling species' response to climate change, but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through the combination of multiple models.

  7. A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.

    Science.gov (United States)

    Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C

    2017-07-01

    Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines which model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework for finding the identifiable combinations of further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes. © 2016 Society for Risk Analysis.

  8. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Many different approaches have been developed to model and simulate gene regulatory networks. We propose the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models, and we describe examples of each category. We study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding network dynamics, we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called the Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
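    To make the discrete end of this modelling spectrum concrete, a minimal synchronous Boolean network can be written in a few lines; the toy three-gene circuit and its update rules below are illustrative, not taken from the paper.

```python
# Minimal synchronous Boolean network: a toy three-gene circuit.
rules = {
    "A": lambda s: not s["C"],          # C represses A
    "B": lambda s: s["A"],              # A activates B
    "C": lambda s: s["A"] and s["B"],   # A and B jointly activate C
}

def step(state):
    """Synchronous update: all genes read the previous state."""
    return {gene: rule(state) for gene, rule in rules.items()}

state = {"A": True, "B": False, "C": False}
for t in range(6):
    print(t, state)
    state = step(state)   # the trajectory settles into a cycle (attractor)
```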

  9. Evaluation of approaches focused on modelling of organic carbon stocks using the RothC model

    Science.gov (United States)

    Koco, Štefan; Skalský, Rastislav; Makovníková, Jarmila; Tarasovičová, Zuzana; Barančíková, Gabriela

    2014-05-01

    The aim of current efforts in Europe is the protection of soil organic matter, which is included in all relevant documents related to the protection of soil. Modelling of organic carbon stocks under anticipated climate change, or under particular land management, can significantly help in short- and long-term forecasting of the state of soil organic matter. The RothC model can be applied over time periods of several years to centuries and has been tested in long-term experiments within a large range of soil types and climatic conditions in Europe. For the initialization of the RothC model, knowledge of the carbon pool sizes is essential. Pool size characterization can be obtained from equilibrium model runs, but this approach is time consuming and tedious, especially for larger-scale simulations. Because of this complexity, we look for ways to simplify and accelerate the process. The paper presents a comparison of two approaches to SOC stock modelling in the same area. The modelling was carried out on the basis of unique land use, management and soil data inputs for each simulation unit separately. We modelled 1617 simulation units on a 1x1 km grid over the territory of the agroclimatic region Žitný ostrov in the southwest of Slovakia. The first approach creates groups of simulation units based on the evaluation of results for simulation units with similar input values. The groups were created after testing and validating the modelling results for individual simulation units against the results of modelling with the average input values for the whole group. Tests of the equilibrium model for intervals of 5 t.ha-1 of initial SOC stock showed minimal differences between the results and the result for the average value of the whole interval. Management input data on plant residues and farmyard manure for the carbon turnover modelling were also the same for several simulation units.
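    The mechanics behind such simulations reduce to first-order decay of a few conceptual carbon pools. The sketch below uses the standard published RothC rate constants but is otherwise a strong simplification: the real model also partitions decomposed carbon into CO2/BIO/HUM and applies climate- and cover-dependent rate modifiers, and all pool sizes and inputs here are illustrative.

```python
import numpy as np

K = {"DPM": 10.0, "RPM": 0.3, "BIO": 0.66, "HUM": 0.02}  # rate constants, 1/yr

def step(pools, inputs, rate_modifier, dt=1.0 / 12.0):
    """One monthly step: exponential first-order decay plus fresh C inputs."""
    out = {}
    for name, c in pools.items():
        decayed = c * (1.0 - np.exp(-K[name] * rate_modifier * dt))
        out[name] = c - decayed + inputs.get(name, 0.0)
    return out

pools = {"DPM": 1.0, "RPM": 8.0, "BIO": 1.5, "HUM": 40.0}  # t C/ha, illustrative
for month in range(120):  # ten years of monthly steps
    pools = step(pools, inputs={"DPM": 0.1, "RPM": 0.1}, rate_modifier=0.8)
print({k: round(v, 2) for k, v in pools.items()})
```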

  10. A Probabilistic Model for Diagnosing Misconceptions by a Pattern Classification Approach.

    Science.gov (United States)

    Tatsuoka, Kikumi K.

    A probabilistic approach is introduced to classify and diagnose erroneous rules of operation resulting from a variety of misconceptions ("bugs") in a procedural domain of arithmetic. The model is contrasted with the deterministic approach which has commonly been used in the field of artificial intelligence, and the advantage of treating the…

  11. A hybrid agent-based approach for modeling microbiological systems.

    Science.gov (United States)

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt differential equation or agent-based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the multi-agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10³ cells and 1.2×10⁶ molecules. The model produces cell migration patterns that are comparable to laboratory observations.
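    The essence of the hybridization, discrete cell agents moving over a continuously valued molecular field, can be sketched as follows; this 1-D toy chemotaxis setup and its parameters are illustrative, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hybrid representation: cells are discrete agents, the chemoattractant is
# a continuous quantity on a 1-D grid.
n_cells, n_bins = 200, 100
cells = rng.uniform(0, n_bins, n_cells)        # agent positions
attr = np.linspace(0.0, 1.0, n_bins)           # attractant concentration

def step(cells, attr, dt=0.1, chi=50.0, noise=1.0, decay=0.01):
    # agent rule: biased random walk up the local attractant gradient
    grad = np.gradient(attr)
    idx = np.clip(cells.astype(int), 0, n_bins - 1)
    cells = cells + chi * grad[idx] * dt \
                  + noise * rng.normal(0.0, np.sqrt(dt), cells.size)
    cells = np.clip(cells, 0.0, n_bins - 1e-9)
    # quantity rule: simple first-order attractant decay (one ODE per bin)
    attr = attr * (1.0 - decay * dt)
    return cells, attr

for _ in range(500):
    cells, attr = step(cells, attr)
print("mean cell position:", cells.mean())  # cells drift towards high attractant
```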

  12. Numerical modeling of hydrodynamics and sediment transport—an integrated approach

    Science.gov (United States)

    Gic-Grusza, Gabriela; Dudkowska, Aleksandra

    2017-10-01

    Point measurement-based estimation of bedload transport in the coastal zone is very difficult. The only way to assess the magnitude and direction of bedload transport in larger areas, particularly those characterized by complex bottom topography and hydrodynamics, is to use a holistic approach. This requires modeling of waves, currents, the critical bed shear stress and the bedload transport magnitude, with due consideration of the realistic bathymetry and the distribution of surface sediment types. Such a holistic approach is presented in this paper, which describes modeling of bedload transport in the Gulf of Gdańsk. Extreme storm conditions defined on the basis of 138 years of NOAA data were assumed. The SWAN model (Booij et al. 1999) was used to define wind-wave fields, wave-induced currents were calculated using the Kołodko and Gic-Grusza (2015) model, and the magnitude of bedload transport was estimated using the modified Meyer-Peter and Müller (1948) formula. The calculations were performed using a GIS model. The results obtained are innovative, and the approach presented appears to be a valuable source of information on bedload transport in the coastal zone.
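    For reference, the classical Meyer-Peter and Müller (1948) relation that serves as the starting point can be coded in a few lines; the sketch shows the original, unmodified form (the paper applies a modified variant), with illustrative input values.

```python
import numpy as np

def mpm_bedload(tau_b, d50, rho_s=2650.0, rho=1000.0, g=9.81, theta_c=0.047):
    """Volumetric bedload transport rate per unit width (m^2/s) from the
    classical Meyer-Peter & Mueller (1948) formula."""
    theta = tau_b / ((rho_s - rho) * g * d50)   # Shields parameter
    excess = max(theta - theta_c, 0.0)          # no transport below threshold
    phi = 8.0 * excess ** 1.5                   # dimensionless transport rate
    return phi * np.sqrt((rho_s / rho - 1.0) * g * d50 ** 3)

print(mpm_bedload(tau_b=2.0, d50=0.0005))  # 2 Pa bed shear stress, 0.5 mm sand
```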

  13. Circulation in the Gulf of Trieste: measurements and model results

    International Nuclear Information System (INIS)

    Bogunovici, B.; Malacic, V.

    2008-01-01

    The study presents the seasonal variability of currents in the southern part of the Gulf of Trieste. A time series analysis of currents and wind stress for the period 2003-2006, measured by the coastal oceanographic buoy, was conducted. A comparison between these data and the results obtained from a numerical model of circulation in the Gulf was performed to validate the model results. Three different approaches were applied to the wind data to determine the wind stress. Similarities were found between the Kondo and Smith approaches, while the method of Vera shows differences that are particularly noticeable for lower (≤ 1 m/s) and higher (≥ 15 m/s) wind speeds. Mean currents in the surface layer are generally outflow currents from the Gulf due to wind forcing (bora), whereas in all other depth layers inflow currents are dominant. With principal component analysis (PCA), major and minor axes were determined for all seasons. The major axis of maximum variance in the years 2003-2006 prevails in the NE-SW direction, parallel to the coastline. Comparison of observations and model results shows that the currents are similar (in direction) for the surface and bottom layers but significantly different for the middle layer (5-13 m). At depths between 14-21 m the velocities are comparable in direction as well as in magnitude, even though the model values are higher. The higher values of the modelled currents at the surface and near the bottom are explained by the higher values of wind stress used in the model as driving input, with respect to the stress calculated from the measured winds. The larger values of the modelled currents near the bottom are related to the larger inflow needed to compensate for the larger modelled outflow at the surface. However, inspection of the vertical structure of temperature, salinity and density shows that the model reproduces a weaker density gradient, which enables the penetration of the outflow surface currents to larger depths.
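    The wind-stress step underlying such comparisons is the bulk aerodynamic formula; the minimal sketch below uses a constant drag coefficient, whereas Kondo- and Smith-type schemes differ precisely in making the coefficient depend on wind speed and stability. The numbers are illustrative.

```python
import numpy as np

RHO_AIR = 1.25  # air density, kg/m^3

def wind_stress(u10, cd=1.3e-3):
    """Bulk aerodynamic wind stress tau = rho_air * Cd * |U| * U (N/m^2),
    with a constant drag coefficient as the simplest choice."""
    return RHO_AIR * cd * np.abs(u10) * u10

for u in (1.0, 5.0, 15.0):   # m/s
    print(u, "m/s ->", round(float(wind_stress(u)), 4), "N/m^2")
```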

  14. An analytical model for nanoparticles concentration resulting from infusion into poroelastic brain tissue.

    Science.gov (United States)

    Pizzichelli, G; Di Michele, F; Sinibaldi, E

    2016-02-01

    We consider the infusion of a diluted suspension of nanoparticles (NPs) into poroelastic brain tissue, in view of relevant biomedical applications such as intratumoral thermotherapy. Indeed, the high impact of the related pathologies motivates the development of advanced therapeutic approaches, whose design also benefits from theoretical models. This study provides an analytical expression for the time-dependent NP concentration during infusion into poroelastic brain tissue, which also accounts for particle binding onto cells (by recalling relevant results from colloid filtration theory). Our model is computationally inexpensive and, compared to fully numerical approaches, makes it possible to explicitly elucidate the role of the physical aspects involved (tissue poroelasticity, infusion parameters, NP physico-chemical properties, and the NP-tissue interactions underlying binding). We also present illustrative results based on parameters taken from the literature, considering clinically relevant ranges for the infusion parameters. Moreover, we thoroughly assess the model's working assumptions and discuss its limitations. While laying no claim to generality, our model can be used to support the development of more ambitious numerical approaches towards the preliminary design of novel therapies based on NP infusion into brain tissue. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Service creation: a model-based approach

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis

    1999-01-01

    This paper presents a model-based approach to support service creation. In this approach, services are assumed to be created from (available) software components. The creation process may involve multiple design steps in which the requested service is repeatedly decomposed into more detailed services.

  16. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches to, and the development and validation processes of, risk prediction models. A qualitative review was conducted of the aims, methods and significant main outcomes of nineteen published articles from numerous fields that developed risk prediction models. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  17. Econometric modelling of Serbian current account determinants: Jackknife Model Averaging approach

    Directory of Open Access Journals (Sweden)

    Petrović Predrag

    2014-01-01

    This research aims to model the determinants of the Serbian current account for the period Q1 2002 - Q4 2012. Taking into account the majority of relevant determinants and using the Jackknife Model Averaging approach, 48 different models have been estimated, with 1254 equations needing to be estimated and averaged for each of the models. The results of selected representative models indicate moderate persistence of the CA and a positive influence of the fiscal balance, oil trade balance, terms of trade, relative income and real effective exchange rate, where we should emphasise: (i) a rather strong influence of relative income; (ii) the fact that a worsening of the oil trade balance results in a worsening of other components (probably the non-oil trade balance) of the CA; and (iii) that the positive influence of terms of trade reveals the functionality of the Harberger-Laursen-Metzler effect in Serbia. On the other hand, a negative influence is evident for relative economic growth, gross fixed capital formation, net foreign assets and trade openness. What particularly stands out is the strong effect of relative economic growth, which most likely reveals citizens' high expectations of future income growth, and which has a negative impact on the CA.

  18. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  19. Approaches to learning as predictors of academic achievement: Results from a large scale, multi-level analysis

    DEFF Research Database (Denmark)

    Herrmann, Kim Jesper; McCune, Velda; Bager-Elsborg, Anna

    2017-01-01

    The relationships between university students' academic achievement and their approaches to learning and studying continuously attract scholarly attention. We report the results of an analysis in which multilevel linear modelling was used to analyse data from 3,626 Danish university students. Controlling for the effects of age, gender, and progression, we found that the students' end-of-semester grade point averages were related negatively to a surface approach and positively to organised effort. Interestingly, the effect of the surface approach on academic achievement varied across programmes. While there has been considerable interest in the ways in which academic programmes shape learning and teaching, the effects of these contexts on the relationship between approaches to learning and academic outcomes are under-researched. The results are discussed in relation to findings from recent meta-analyses.

  20. Pesticide fate at regional scale: Development of an integrated model approach and application

    Science.gov (United States)

    Herbst, M.; Hardelauf, H.; Harms, R.; Vanderborght, J.; Vereecken, H.

    As a result of agricultural practice, many soils and aquifers are contaminated with pesticides. In order to quantify the side effects of these anthropogenic impacts on groundwater quality at the regional scale, a process-based, integrated model approach was developed. The Richards-equation-based numerical model TRACE calculates three-dimensional saturated/unsaturated water flow. For the modeling of regional-scale pesticide transport we linked TRACE with the plant module SUCROS and with 3DLEWASTE, a hybrid Lagrangian/Eulerian approach to solving the convection/dispersion equation. We used measurements, standard methods like pedotransfer functions, and parameters from the literature to derive the model input for the process model. A first-step application of TRACE/3DLEWASTE to the 20 km² test area 'Zwischenscholle' for the period 1983-1993 reveals the behaviour of the pesticide isoproturon. The selected test area is characterised by intense agricultural use and shallow groundwater, resulting in a high vulnerability of the groundwater to pesticide contamination. The model results stress the importance of the unsaturated zone for the occurrence of pesticides in groundwater. Remarkable isoproturon concentrations in groundwater are predicted for locations with thin, layered and permeable soils. For four selected locations we used measured piezometric heads to validate predicted groundwater levels. In general, the model results are consistent and reasonable. The developed integrated model approach is thus seen as a promising tool for quantifying the impact of agricultural practice on groundwater quality.
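    The transport step that 3DLEWASTE solves in three dimensions reduces, in one dimension, to the convection/dispersion equation dc/dt = D d²c/dx² - v dc/dx. An explicit finite-difference sketch is shown below; the grid, dispersion coefficient and pore velocity are illustrative, not the Zwischenscholle parameterization.

```python
import numpy as np

nx, dx, dt = 200, 0.05, 10.0   # grid points, spacing (m), time step (s)
D, v = 1e-6, 1e-5              # dispersion (m^2/s), pore-water velocity (m/s)
assert D * dt / dx**2 < 0.5 and v * dt / dx < 1.0   # explicit-scheme stability

c = np.zeros(nx)
c[0] = 1.0                     # constant-concentration inlet boundary
for _ in range(5000):
    diff = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    adv = -v * (c - np.roll(c, 1)) / dx     # first-order upwind advection
    c[1:-1] += dt * (diff + adv)[1:-1]
    c[0] = 1.0                 # re-impose the inlet boundary condition
print("front position ~", np.argmax(c < 0.5) * dx, "m")
```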

  1. Implementation of the Receptive Aesthetic Approach Model in Learning to Read Literature

    Directory of Open Access Journals (Sweden)

    Titin Nurhayatin

    2017-03-01

    Research on the implementation of the receptive aesthetic approach model in learning to read literature is motivated by the low quality of the results and process of learning the Indonesian language, especially the study of literature. Students, as prospective teachers of Indonesian, are expected to have balanced abilities in language, literature, and their teaching, in accordance with curriculum demands. This study examines the effectiveness, quality, acceptability, and sustainability of the receptive aesthetic approach in improving students' literary skills. Based on these problems, this study is expected to produce a learning model that contributes substantially to improving the quality of the results and process of learning literature. The research was conducted on students of the Language, Indonesian Literature and Regional Education Program of FKIP, Pasundan University. The research method used is an experiment with a randomized pretest-posttest control group design. In the experimental class the average preliminary test score was 55.86 and the average final test score was 76.75; in the control class the average preliminary test score was 55.07 and the average final test score was 68.76. These data show a greater increase in grades in the experimental class using the receptive aesthetic approach than in the control class using a conventional approach. The results show that the receptive aesthetic approach is more effective than the conventional approach in literary reading. Based on observations, acceptance, and views on sustainability, the receptive aesthetic approach to literary learning is expected to be an alternative and a solution for overcoming the problems of literary learning and improving the quality of Indonesian learning outcomes and processes.

  2. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 to 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals of each model were tested. The best-fitting model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
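    A local level plus seasonal specification of this kind can be fitted directly with off-the-shelf tools. The sketch below uses the UnobservedComponents class from statsmodels on synthetic monthly data that merely stand in for the Malaysian accident counts.

```python
import numpy as np
import statsmodels.api as sm

# synthetic monthly series: random-walk level + annual cycle + noise
rng = np.random.default_rng(0)
t = np.arange(144)
y = (500 + np.cumsum(rng.normal(0, 2, t.size))
         + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, t.size))

# structural time series: local level with a period-12 seasonal component
model = sm.tsa.UnobservedComponents(y, level="local level", seasonal=12)
fit = model.fit(disp=False)
print(fit.summary().tables[0])
forecast = fit.forecast(12)   # one-year-ahead prediction for validation
```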

  3. Reduced modeling of signal transduction – a modular approach

    Directory of Open Access Journals (Sweden)

    Ederer Michael

    2007-09-01

    Background: Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results: We introduce a new reduction technique which allows building modularized and highly reduced models. Compared to existing approaches, further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows the model to be dissected into smaller modules, called layers, that can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly, without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show the performance and limitations of the method. For physiologically relevant parameter domains the transient as well as the stationary errors caused by the reduction are negligible. Conclusion: The new layer-based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be directly formulated and are intuitively interpretable.

  4. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging, to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
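    One common recipe for such model weights approximates posterior model probabilities from BIC values; the minimal sketch below illustrates this standard device (not necessarily the exact weighting scheme used in the paper), with made-up BIC values.

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values
    (w_i proportional to exp(-BIC_i / 2), assuming equal prior model odds)."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (b - b.min()))   # subtract the minimum for stability
    return w / w.sum()

# e.g. latent class analysis vs. grade of membership fits (illustrative BICs)
print(bma_weights([10234.5, 10237.1]))   # -> roughly [0.79, 0.21]
```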

  5. Classical Michaelis-Menten and system theory approach to modeling metabolite formation kinetics.

    Science.gov (United States)

    Popović, Jovan

    2004-01-01

    For single doses of drug with linear kinetics, techniques based on the compartment approach and on the linear system theory approach are proposed for modeling the formation of a metabolite from the parent drug. Unlike the purpose-specific compartment approach, the methodical, conceptual and computational uniformity in modeling various linear biomedical systems is the dominant characteristic of the linear system approach. Saturation of the metabolic reaction results in nonlinear kinetics according to the Michaelis-Menten equation. The two-compartment open model with Michaelis-Menten elimination kinetics is the theoretical basis when single doses of drug are administered. To simulate data or to fit real data using this model, one must resort to numerical integration. A biomathematical model for multiple dosage regimen calculations of nonlinear metabolic systems in steady state, together with a worked example using phenytoin, is presented. A high correlation was found between phenytoin steady-state serum levels calculated from individual Km and Vmax values in 15 adult epileptic outpatients and the levels observed at the third adjustment of the phenytoin daily dose (r=0.961, p<0.01).
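    The steady-state calculation follows directly from the Michaelis-Menten balance: at steady state the dosing rate R equals Vmax·Css/(Km + Css), so Css = Km·R/(Vmax - R). A sketch with illustrative patient values (not the study's data):

```python
def css_phenytoin(dose_rate, vmax, km):
    """Steady-state concentration for Michaelis-Menten elimination:
    R = Vmax*Css/(Km+Css)  =>  Css = Km*R/(Vmax - R).
    dose_rate and vmax in mg/day, km in mg/L (typical phenytoin units)."""
    if dose_rate >= vmax:
        raise ValueError("dose rate must stay below Vmax (no steady state)")
    return km * dose_rate / (vmax - dose_rate)

# illustrative values: Vmax = 500 mg/day, Km = 5 mg/L; note the nonlinearity
for dose in (200, 300, 400):
    print(dose, "mg/day ->", round(css_phenytoin(dose, 500, 5), 1), "mg/L")
```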

  6. Modeling energy fluxes in heterogeneous landscapes employing a mosaic approach

    Science.gov (United States)

    Klein, Christian; Thieme, Christoph; Priesack, Eckart

    2015-04-01

    Recent studies show that uncertainties in regional and global climate and weather simulations are partly due to inadequate descriptions of the energy flux exchanges between the land surface and the atmosphere. One major shortcoming is the limitation of the grid-cell resolution, which is recommended to be at least about 3x3 km² in most models due to limitations in the model physics. To represent each individual grid cell, most models select one dominant soil type and one dominant land use type. This resolution, however, is often too coarse in regions where the spatial diversity of soil and land use types is high, e.g. in Central Europe. An elegant method of avoiding this shortcoming of the grid-cell resolution is the so-called mosaic approach, which is part of the recently developed ecosystem model framework Expert-N 5.0. The aim of this study was to analyze the impact of the characteristics of two managed fields, planted with winter wheat and potato, on the near-surface soil moisture and on the energy flux exchanges at the soil-plant-atmosphere interface. The simulated energy fluxes were compared with eddy flux tower measurements taken between the respective fields at the research farm Scheyern, northwest of Munich, Germany. To perform these simulations, we coupled the ecosystem model Expert-N 5.0 to an analytical footprint model. The coupled model system can calculate the mixing ratio of the surface energy fluxes at a given point within one grid cell (in this case at the flux tower between the two fields). This approach accounts for the differences between the two soil types, the land use managements, and the canopy properties due to footprint size dynamics. Our preliminary simulation results show that a mosaic approach can improve the modeling and analysis of energy fluxes when the land surface is heterogeneous. In this case our method is a promising approach for extending weather and climate models on the regional and global scales.

  7. Global Risk Evolution and Diversification: a Copula-DCC-GARCH Model Approach

    Directory of Open Access Journals (Sweden)

    Marcelo Brutti Righi

    2012-12-01

    In this paper we estimate a dynamic portfolio composed of the U.S., German, British, Brazilian, Hong Kong and Australian markets for the period from September 2001 to September 2011. We ran the Copula-DCC-GARCH model to obtain the conditional covariance matrix of daily returns. The results allow us to conclude that there were changes in portfolio composition, occasioned by modifications in volatility and in the dependence between markets. The dynamic approach significantly reduced the portfolio risk compared to the traditional static approach, especially in turbulent periods. Furthermore, we verified that the estimated copula model outperformed the conventional DCC model for the sample studied.

  8. Policy harmonized approach for the EU agricultural sector modelling

    Directory of Open Access Journals (Sweden)

    G. SALPUTRA

    2008-12-01

    The policy harmonized (PH) approach allows for the quantitative assessment of the impact of various elements of the EU CAP direct support schemes, where the production effects of direct payments are accounted for through reaction prices formed by the producer price and policy price add-ons. Using the AGMEMOD model, the impacts on beef production of two possible EU agricultural policy scenarios have been analysed: full decoupling with a switch from the historical to the regional Single Payment scheme, or alternatively with a re-distribution of country direct payment envelopes via the introduction of an EU-wide flat area payment. The PH approach, by systematizing and harmonizing the management and use of policy data, ensures that projected differential policy impacts arising from changes in common EU policies reflect the likely actual differential impact, as opposed to differences in how "common" policies are implemented within analytical models. In the second section of the paper the AGMEMOD model's structure is explained. The policy harmonized evaluation method is presented in the third section. Results from an application of the PH approach are presented and discussed in the paper's penultimate section, while section 5 concludes.

  9. Scientific Approach and Inquiry Learning Model in the Topic of Buffer Solution: A Content Analysis

    Science.gov (United States)

    Kusumaningrum, I. A.; Ashadi, A.; Indriyanti, N. Y.

    2017-09-01

    Many concepts related to buffer solutions cause student misconceptions. Understanding science concepts requires applying the scientific approach, and one learning model suited to this approach is inquiry. Content analysis was used to determine textbook compatibility with the scientific approach and the inquiry learning model for the concept of buffer solutions. Using scientific indicator tools (SIT) and inquiry indicator tools (IIT), we analyzed three grade-11 senior high school chemistry textbooks, labeled P, Q, and R, and described their compatibility with the scientific approach and the inquiry learning model for the concept of buffer solutions. The results show that textbooks P and Q were very poor, and textbook R only sufficient, because the textbooks remain at a procedural level. Chemistry textbooks used at school need to be improved in terms of the scientific approach and the inquiry learning model. The results of these analyses might be of interest for writing future textbooks.

  10. Variational approach to thermal masses in compactified models

    Energy Technology Data Exchange (ETDEWEB)

    Dominici, Daniele [Dipartimento di Fisica e Astronomia Università di Firenze and INFN - Sezione di Firenze,Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Roditi, Itzhak [Centro Brasileiro de Pesquisas Físicas - CBPF/MCT,Rua Dr. Xavier Sigaud 150, 22290-180, Rio de Janeiro, RJ (Brazil)

    2015-08-20

    We investigate by means of a variational approach the effective potential of a 5D U(1) scalar model at finite temperature, compactified on S^1 and S^1/Z_2, as well as the corresponding 4D model obtained through a trivial dimensional reduction. We are particularly interested in the behavior of the thermal masses of the scalar field with respect to the Wilson line phase, and the results obtained are compared with those coming from a one-loop effective potential calculation. We also explore the nature of the phase transition.

  11. Consequence Based Design. An approach for integrating computational collaborative models (Integrated Dynamic Models) in the building design phase

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer

    The approach relies on various advancements in the area of integrated dynamic models, and on the application and testing of the approach in practice to evaluate Consequence based design and the use of integrated dynamic models. As a result, the Consequence based design approach has been applied in five case studies and used to define new ways to implement integrated dynamic models for the following project. In parallel, seven different developments of new methods, tools and algorithms have been performed to support the application of the approach. The developments concern, among others, decision diagrams to clarify goals and the ability to affect the design process and collaboration between building designers and simulationists, within the limits of applying the approach to the five case studies, followed by documentation based on interviews, surveys and project-related documentation derived from internal reports.

  12. On a model-based approach to radiation protection

    International Nuclear Information System (INIS)

    Waligorski, M.P.R.

    2002-01-01

    There is a preoccupation with linearity and absorbed dose as the basic quantifiers of radiation hazard. An alternative is the fluence approach, whereby radiation hazard may be evaluated, at least in principle, via an appropriate action cross section. In order to compare these approaches, it may be useful to discuss them as quantitative descriptors of survival and transformation-like endpoints in cell cultures in vitro - a system thought to be relevant to modelling radiation hazard. If absorbed dose is used to quantify these biological endpoints, then non-linear dose-effect relations have to be described, and, e.g. after doses of densely ionising radiation, dose-correction factors as high as 20 are required. In the fluence approach only exponential effect-fluence relationships can be readily described. Neither approach alone exhausts the scope of experimentally observed dependencies of effect on dose or fluence. Two-component models, incorporating a suitable mixture of the two approaches, are required. An example of such a model is the cellular track structure theory developed by Katz over thirty years ago. The practical consequences of modelling radiation hazard using this mixed two-component approach are discussed. (author)

  13. Meta-analysis a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences, yet they are often treated as two unrelated topics in the literature. This book presents a unified framework for analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment.

  14. Modeling the cometary environment using a fluid approach

    Science.gov (United States)

    Shou, Yinsi

    Comets are believed to have preserved the building material of the early solar system and to hold clues to the origin of life on Earth. Abundant remote observations of comets by telescopes and in-situ measurements by a handful of space missions reveal that cometary environments are complicated by various physical and chemical processes among the neutral gases and dust grains released from comets, cometary ions, and the solar wind in interplanetary space. Therefore, physics-based numerical models are in demand to interpret the observational data and to deepen our understanding of the cometary environment. In this thesis, three models using a fluid approach, which include important physical and chemical processes underlying the cometary environment, have been developed to study the plasma, the neutral gas, and the dust grains, respectively. Although models based on the fluid approach have limitations in capturing all of the correct physics for certain applications, especially for very low gas density environments, they are computationally much more efficient than the alternatives. In simulations of comet 67P/Churyumov-Gerasimenko at various heliocentric distances with a wide range of production rates, our multi-fluid cometary neutral gas model and multi-fluid cometary dust model have achieved results comparable to those of the Direct Simulation Monte Carlo (DSMC) model, which is based on a kinetic approach that is valid in all collisional regimes. Therefore, our model is a powerful alternative to particle-based models, especially for computationally intensive simulations. Capable of accounting for the varying heating efficiency under various physical conditions in a self-consistent way, the multi-fluid cometary neutral gas model is a good tool for studying the dynamics of the cometary coma at different production rates and heliocentric distances. The modeled H2O expansion speeds reproduce the general trend and the speed's nonlinear dependence on production rate.

  15. Modeling the Relations among Students' Epistemological Beliefs, Motivation, Learning Approach, and Achievement

    Science.gov (United States)

    Kizilgunes, Berna; Tekkaya, Ceren; Sungur, Semra

    2009-01-01

    The authors proposed a model to explain how epistemological beliefs, achievement motivation, and learning approach related to achievement. The authors assumed that epistemological beliefs influence achievement indirectly through their effect on achievement motivation and learning approach. Participants were 1,041 6th-grade students. Results of the…

  16. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
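    The structure of the approach, replacing η(⋅) with a cheap emulator fitted to an ensemble of runs and then running MCMC against the emulator only, can be sketched in a few lines. Here a toy scalar model and a polynomial fit stand in for the density functional theory model and the paper's statistical response surface; everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def eta(theta):                       # "expensive" model (toy stand-in)
    return 0.5 * theta + 0.3 * np.sin(theta)

design = np.linspace(-3.0, 3.0, 15)   # small ensemble of model runs
coef = np.polyfit(design, eta(design), 5)   # the emulator
eta_hat = lambda th: np.polyval(coef, th)

y_obs, sigma = eta(1.3) + 0.05, 0.1   # synthetic measurement and its error

def log_post(theta):                  # flat prior on [-3, 3]
    if not -3.0 <= theta <= 3.0:
        return -np.inf
    return -0.5 * ((y_obs - eta_hat(theta)) / sigma) ** 2

# Metropolis sampling touches only the cheap emulator, never eta itself
theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + 0.5 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
burned = chain[5000:]
print("posterior mean/sd:", np.mean(burned), np.std(burned))
```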

  17. Modified multiblock partial least squares path modeling algorithm with backpropagation neural networks approach

    Science.gov (United States)

    Yuniarto, Budi; Kurniawan, Robert

    2017-03-01

    PLS Path Modeling (PLS-PM) differs from covariance-based SEM in that it uses an approach based on variance, or components; PLS-PM is therefore also known as component-based SEM. Multiblock Partial Least Squares (MBPLS) is a PLS regression method that can be used in PLS Path Modeling, known as Multiblock PLS Path Modeling (MBPLS-PM). This method uses an iterative procedure in its algorithm. This research aims to modify MBPLS-PM with a Back Propagation Neural Network approach. The result is that the MBPLS-PM algorithm can be modified using the Back Propagation Neural Network approach to replace the iterative backward and forward steps used to obtain the matrices t and u in the algorithm. With this modification, the model parameters obtained do not differ significantly from those obtained by the original MBPLS-PM algorithm.

  18. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    This thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic agents are assumed to be boundedly rational.

  19. A novel approach for runoff modelling in ungauged catchments by Catchment Morphing

    Science.gov (United States)

    Zhang, J.; Han, D.

    2017-12-01

    Runoff prediction in ungauged catchments has been one of the major challenges of the past decades. However, due to the tremendous heterogeneity of hydrological catchments, obstacles exist in deducing model parameters for ungauged catchments from gauged ones. We propose a novel approach to predicting ungauged runoff with Catchment Morphing (CM) using a fully distributed model. CM means changing the catchment characteristics (here, area and slope) of a baseline model built for a gauged catchment in order to model ungauged catchments. The advantages of CM are: (a) a lower demand for similarity between the baseline catchment and the ungauged catchment, (b) a lower demand for available data, and (c) potential applicability across varied catchments. A case study of seven catchments in the UK has been used to demonstrate the proposed scheme. To comprehensively examine the CM approach, distributed rainfall inputs are utilised in the model, and fractal landscapes are used to morph the land surface from the baseline model to the target model. The preliminary results demonstrate the feasibility of the approach, which is promising for runoff simulation in ungauged catchments. Clearly, more work beyond this pilot study is needed to explore and develop this new approach further to maturity within the hydrological community.

  20. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    Science.gov (United States)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e. g. metal bearings, rigid fiberglass torso, flexible cloth limbs and rubber coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken. The HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry such as chemical or fire protective clothing. In summary, the approach provides a moderate fidelity, usable tool which will run on current notebook computers.

  1. The elastic body model: a pedagogical approach integrating real time measurements and modelling activities

    International Nuclear Information System (INIS)

    Fazio, C; Guastella, I; Tarantino, G

    2007-01-01

    In this paper, we describe a pedagogical approach to elastic body movement based on measurements of the contact times between a metallic rod and small bodies colliding with it, and on modelling of the experimental results using a microcomputer-based laboratory and simulation tools. The experiments and modelling activities have been built in the context of the laboratory on mechanical wave propagation of the two-year graduate teacher education programme of the University of Palermo. Some considerations about observed changes in trainee teachers' attitudes towards using experiments and modelling are discussed.

  2. Multivariate determinants of self-management in Health Care: assessing Health Empowerment Model by comparison between structural equation and graphical models approaches

    Directory of Open Access Journals (Sweden)

    Filippo Trentini

    2015-03-01

    Full Text Available Background. In public health, one debated issue relates to the consequences of improper self-management in health care. Some theoretical models proposed in Health Communication theory highlight how components such as general literacy and specific knowledge of the disease might be very important for effective actions in the healthcare system. Methods. This paper aims at investigating the consistency of the Health Empowerment Model by means of both a graphical models approach, which is a "data driven" method, and a Structural Equation Modeling (SEM) approach, which is instead "theory driven", showing the different information patterns that can be revealed in a health care research context. The analyzed dataset provides data on the relationship between the Health Empowerment Model constructs and the behavioral and health status of 263 chronic low back pain (cLBP) patients. We used the graphical models approach to evaluate the dependence structure in a "blind" way, thus learning the structure from the data. Results. The estimated dependence structure confirms the links designed in the SEM approach directly by the researchers, thus validating the hypotheses which generated the Health Empowerment Model constructs. Conclusions. This model comparison helps in avoiding confirmation bias. For Structural Equation Modeling, we used the SPSS AMOS 21 software. Graphical modeling algorithms were implemented in the R software environment.

  3. MODELING OF INNOVATION EDUCATIONAL ENVIRONMENT OF GENERAL EDUCATIONAL INSTITUTION: THE SCIENTIFIC APPROACHES

    OpenAIRE

    Anzhelika D. Tsymbalaru

    2010-01-01

    In the paper, scientific approaches to modeling the innovation educational environment of a general educational institution are considered: the system approach (analysis of the object, process and result of modeling as system objects), the activity approach (organizational and psychological structure) and the synergetic approach (aspects and principles).

  4. Object-Oriented Approach to Modeling Units of Pneumatic Systems

    Directory of Open Access Journals (Sweden)

    Yu. V. Kyurdzhiev

    2014-01-01

    Full Text Available The article shows the relevance of object-oriented programming approaches when modeling pneumatic units (PU). Based on the analysis of the calculation schemes of aggregate pneumatic systems, two basic objects, namely a flow cavity and a material point, were highlighted. Basic interactions of the objects are defined. Cavity-cavity interaction: exchange of matter and energy with mass flows. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and interactions, and the models and interactions of elements are implemented in object-oriented programming. Mathematical models of the elements of the PU design scheme are implemented in classes derived from the base classes. These classes implement the models of the flow cavity, piston, diaphragm, short channel, diaphragm opened by a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations of the mathematical models of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs one tact of integration, i.e. the calculation of the method's coefficients. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a unidirectional class list. An iterator loop initiates the integration tact of all the objects in the list. Every fourth iteration makes the transition to the next step of integration. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional approaches to modeling, the authors' method features easy enhancement, code reuse, and high reliability.
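
    As an illustration of this object pattern, here is a minimal sketch: a list of objects is swept by an integrator that asks each object for its state derivative and advances all of them together with a classical fourth-order Runge-Kutta step. The class names, the toy single-cavity dynamics and the step sizes are illustrative assumptions, not the authors' implementation.

        import numpy as np

        class Cavity:
            """Flow cavity: one pressure state relaxing toward supply (toy dynamics)."""
            def __init__(self, p0, k_in):
                self.state = np.array([p0])
                self.k_in = k_in
            def deriv(self, t):
                return np.array([self.k_in * (1.0 - self.state[0])])

        class MaterialPoint:
            """Material point: position and velocity driven by cavity pressure (toy force law)."""
            def __init__(self, cavity, mass):
                self.state = np.array([0.0, 0.0])       # [x, v]
                self.cavity, self.mass = cavity, mass
            def deriv(self, t):
                force = self.cavity.state[0] - 0.5      # pressure minus load
                return np.array([self.state[1], force / self.mass])

        def rk4_step(objects, t, dt):
            """Advance every object in the list by one classical RK4 step,
            evaluating all derivatives at consistent trial states."""
            y0 = [o.state.copy() for o in objects]
            k = [[None] * 4 for _ in objects]
            c = (0.0, 0.5, 0.5, 1.0)
            for s in range(4):
                for i, o in enumerate(objects):         # set trial states first...
                    o.state = y0[i] if s == 0 else y0[i] + c[s] * dt * k[i][s - 1]
                for i, o in enumerate(objects):         # ...then evaluate derivatives
                    k[i][s] = o.deriv(t + c[s] * dt)
            for i, o in enumerate(objects):
                o.state = y0[i] + dt / 6.0 * (k[i][0] + 2 * k[i][1] + 2 * k[i][2] + k[i][3])

        cavity = Cavity(p0=0.2, k_in=3.0)
        piston = MaterialPoint(cavity, mass=1.0)
        objects, t, dt = [cavity, piston], 0.0, 0.01
        for _ in range(400):                            # one integration tact per step
            rk4_step(objects, t, dt)
            t += dt
        print(cavity.state, piston.state)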

  5. A practical approach to model checking Duration Calculus using Presburger Arithmetic

    DEFF Research Database (Denmark)

    Hansen, Michael Reichhardt; Dung, Phan Anh; Brekling, Aske Wiid

    2014-01-01

    This paper investigates the feasibility of reducing a model-checking problem K ⊧ ϕ for discrete time Duration Calculus to the decision problem for Presburger Arithmetic. Theoretical results point at severe limitations of this approach: (1) the reduction in Fränzle and Hansen (Int J Softw Inform 3...... limits of the approach are illustrated by a family of examples....

  6. Data and Dynamics Driven Approaches for Modelling and Forecasting the Red Sea Chlorophyll

    KAUST Repository

    Dreano, Denis

    2017-05-31

    Phytoplankton is at the basis of the marine food chain and therefore plays a fundamental role in the ocean ecosystem. However, the large-scale phytoplankton dynamics of the Red Sea are not yet well understood, mainly due to the lack of historical in situ measurements. As a result, our knowledge in this area relies mostly on remotely sensed observations and large-scale numerical marine ecosystem models. Models are very useful for identifying the mechanisms driving the variations in chlorophyll concentration and have practical applications for fisheries operation and harmful algal bloom monitoring. Modelling approaches can be divided between physics-driven (dynamical) approaches and data-driven (statistical) approaches. Dynamical models are based on a set of differential equations representing the transfer of energy and matter between different subsets of the biota, whereas statistical models identify relationships between variables based on statistical relations within the available data. The goal of this thesis is to develop, implement and test novel dynamical and statistical modelling approaches for studying and forecasting the variability of chlorophyll concentration in the Red Sea. These new models are evaluated in terms of their ability to efficiently forecast and explain the regional chlorophyll variability. We also propose innovative synergistic strategies to combine data-driven and physics-driven approaches to further enhance chlorophyll forecasting capabilities and efficiency.

  7. A modal approach to modeling spatially distributed vibration energy dissipation.

    Energy Technology Data Exchange (ETDEWEB)

    Segalman, Daniel Joseph

    2010-08-01

    The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.

  8. Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors

    KAUST Repository

    Simpson, Daniel

    2017-04-06

    In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.

  9. Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors

    KAUST Repository

    Simpson, Daniel; Rue, Haavard; Riebler, Andrea; Martins, Thiago G.; Sørbye, Sigrunn H.

    2017-01-01

    In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.

  10. Preliminary time-phased TWRS process model results

    International Nuclear Information System (INIS)

    Orme, R.M.

    1995-01-01

    This report documents the first phase of efforts to model the retrieval and processing of Hanford tank waste within the constraints of an assumed tank farm configuration. This time-phased approach simulates a first try at a retrieval sequence, the batching of waste through retrieval facilities, the batching of retrieved waste through enhanced sludge washing, the batching of liquids through pretreatment and low-level waste (LLW) vitrification, and the batching of pretreated solids through high-level waste (HLW) vitrification. The results reflect the outcome of an assumed retrieval sequence that has not been tailored with respect to accepted measures of performance. The batch data, composition variability, and final waste volume projections in this report should be regarded as tentative. Nevertheless, the results provide interesting insights into time-phased processing of the tank waste. Inspection of the composition variability, for example, suggests modifications to the retrieval sequence that will further improve the uniformity of feed to the vitrification facilities. This model will be a valuable tool for evaluating suggested retrieval sequences and establishing a time-phased processing baseline. An official recommendation on the tank retrieval sequence will be made in September 1995.

  11. A Discrete Monetary Economic Growth Model with the MIU Approach

    Directory of Open Access Journals (Sweden)

    Wei-Bin Zhang

    2008-01-01

    Full Text Available This paper proposes an alternative approach to economic growth with money. The production side is the same as in the Solow model, the Ramsey model, and the Tobin model, but we deal with the behavior of consumers differently from the traditional approaches. The model is influenced by the money-in-the-utility (MIU) approach in monetary economics. It provides a mechanism of endogenous saving which the Solow model lacks, and it avoids the assumption of adding up utility over a period of time upon which the Ramsey approach is based.

  12. Non-extensitivity vs. informative moments for financial models —A unifying framework and empirical results

    Science.gov (United States)

    Herrmann, K.

    2009-11-01

    Information-theoretic approaches still play a minor role in financial market analysis. Nonetheless, two very similar approaches have evolved during the last years, one in so-called econophysics and the other in econometrics. Both generalize the notion of GARCH processes in an information-theoretic sense and are able to capture kurtosis better than traditional models. In this article we present both approaches in a more general framework, which allows the derivation of a wide range of new models. We choose a third model using an entropy measure suggested by Kapur. In an application to financial market data, we find that all considered models - with similar flexibility in terms of skewness and kurtosis - lead to very similar results.

  13. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    Science.gov (United States)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and
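
    To illustrate the flavor of such a distribution-function approach, the sketch below estimates daily infiltration-excess runoff from a daily rainfall total by integrating over an assumed within-day intensity distribution instead of applying the daily mean intensity directly. The exponential intensity distribution, the fixed infiltration capacity and all parameter values are illustrative assumptions, not the authors' model.

        import numpy as np

        def daily_runoff_cdf(rain_mm, wet_hours, f_mm_per_h, n=100_000, seed=0):
            """Infiltration-excess runoff for one day, assuming within-day
            rainfall intensity is exponential with the observed mean."""
            mean_i = rain_mm / wet_hours                   # mean intensity (mm/h)
            rng = np.random.default_rng(seed)
            i = rng.exponential(mean_i, n)                 # sampled intensities
            runoff_rate = np.maximum(i - f_mm_per_h, 0.0)  # excess over infiltration
            return wet_hours * runoff_rate.mean()          # mm of runoff per day

        rain, hours, f = 24.0, 6.0, 5.0                    # 24 mm over 6 wet hours
        print("lumped daily estimate:", hours * max(rain / hours - f, 0.0))
        print("cdf-scaled estimate  :", daily_runoff_cdf(rain, hours, f))
        # For the exponential cdf the scaled estimate has the closed form
        # wet_hours * mean_i * exp(-f / mean_i), which the sample mean approaches.
        print("closed form          :", hours * (rain / hours) * np.exp(-f / (rain / hours)))

    The lumped estimate is zero because the mean intensity never exceeds the infiltration capacity, while the distribution-based estimate correctly captures the runoff produced by short, intense bursts, which is exactly the nonlinearity the abstract describes.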

  14. Innovation Networks New Approaches in Modelling and Analyzing

    CERN Document Server

    Pyka, Andreas

    2009-01-01

    The science of graphs and networks has become by now a well-established tool for modelling and analyzing a variety of systems with a large number of interacting components. Starting from the physical sciences, applications have spread rapidly to the natural and social sciences, as well as to economics, and are now further extended, in this volume, to the concept of innovations, viewed broadly. In an abstract, systems-theoretical approach, innovation can be understood as a critical event which destabilizes the current state of the system, and results in a new process of self-organization leading to a new stable state. The contributions to this anthology address different aspects of the relationship between innovation and networks. The various chapters incorporate approaches in evolutionary economics, agent-based modeling, social network analysis and econophysics and explore the epistemic tension between insights into economics and society-related processes, and the insights into new forms of complex dynamics.

  15. MODELS OF TECHNOLOGY ADOPTION: AN INTEGRATIVE APPROACH

    Directory of Open Access Journals (Sweden)

    Andrei OGREZEANU

    2015-06-01

    Full Text Available The interdisciplinary study of information technology adoption has developed rapidly over the last 30 years. Various theoretical models have been developed and applied, such as the Technology Acceptance Model (TAM), Innovation Diffusion Theory (IDT), the Theory of Planned Behavior (TPB), etc. The result of these many years of research is thousands of contributions to the field, which, however, remain highly fragmented. This paper develops a theoretical model of technology adoption by integrating major theories in the field: primarily IDT, TAM, and TPB. To do so in an orderly way, an approach is proposed that goes back to basics in the development of independent variable types, emphasizing: (1) the logic of classification, and (2) the psychological mechanisms behind variable types. Once developed, these types are then populated with variables originating in empirical research. Conclusions are drawn on which types are underpopulated and present potential for future research. I end with a set of methodological recommendations for future applications of the model.

  16. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to modeling tree crown development: experimental (i.e. regression), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. Their common assumption is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. The different stages of the above-mentioned approaches are described in the paper. The experimental data for spruce, a description of the computer system for modeling, and a variant of the computer model are presented. (author). 9 refs, 4 figs

  17. Evaluation of various modelling approaches in flood routing simulation and flood area mapping

    Science.gov (United States)

    Papaioannou, George; Loukas, Athanasios; Vasiliades, Lampros; Aronica, Giuseppe

    2016-04-01

    An essential process in flood hazard analysis and mapping is floodplain modelling. The selection of the modelling approach, especially in complex riverine topographies such as urban and suburban areas and ungauged watersheds, may affect the accuracy of the outcomes in terms of flood depths and flood inundation area. In this study, a sensitivity analysis was implemented using several hydraulic-hydrodynamic modelling approaches (1D, 2D, 1D/2D), and the effect of the modelling approach on flood modelling and flood mapping was investigated. The digital terrain model (DTM) used in this study was generated from Terrestrial Laser Scanning (TLS) point cloud data. The modelling approaches included 1-dimensional (1D), 2-dimensional (2D) and coupled 1D/2D hydraulic-hydrodynamic models. The 1D models used were HECRAS, MIKE11, LISFLOOD and XPSTORM; the 2D models were MIKE21, MIKE21FM, HECRAS (2D), XPSTORM, LISFLOOD and FLO2d; and the coupled 1D/2D models were HECRAS (1D/2D), MIKE11/MIKE21 (MIKE FLOOD platform), MIKE11/MIKE21 FM (MIKE FLOOD platform) and XPSTORM (1D/2D). The validation of the flood extent was performed with 2x2 contingency tables between the simulated and observed flooded areas for an extreme historical flash flood event, using the Critical Success Index skill score. The modelling approaches were also evaluated for simulation time and required computing power. The methodology was implemented in a suburban ungauged watershed of the Xerias river at Volos, Greece. The results of the analysis indicate the necessity of applying sensitivity analysis with different hydraulic-hydrodynamic modelling approaches, especially for areas with complex terrain.

  18. Modelling an industrial anaerobic granular reactor using a multi-scale approach.

    Science.gov (United States)

    Feldman, H; Flores-Alsina, X; Ramin, P; Kjellberg, K; Jeppsson, U; Batstone, D J; Gernaey, K V

    2017-12-01

    The objective of this paper is to show the results of an industrial project dealing with the modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1), extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and the data frequency increased using the Benchmark Simulation Model No 2 (BSM2) influent generator. All models are tested using two plant data sets corresponding to different operational periods (#D1, #D2). Simulation results reveal that the proposed approach can satisfactorily describe the transformation of organics, nutrients and minerals, the production of methane, carbon dioxide and sulfide, and the potential formation of precipitates within the bulk (the average deviation between computer simulations and measurements for both #D1 and #D2 is around 10%). Model predictions suggest a stratified structure within the granule, which is the result of: 1) the applied loading rates, 2) mass transfer limitations, and 3) specific (bacterial) affinity for substrate. Hence, inerts (X_I) and methanogens (X_ac) are situated in the inner zone, and this fraction lowers as the radius increases, favouring the presence of acidogens (X_su, X_aa, X_fa) and acetogens (X_c4, X_pro). Additional simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at different times and reactor heights. Finally, the possibilities and opportunities offered by the proposed approach for conducting engineering optimization projects are discussed.

  19. Risk assessment of oil price from static and dynamic modelling approaches

    DEFF Research Database (Denmark)

    Mi, Zhi-Fu; Wei, Yi-Ming; Tang, Bao-Jun

    2017-01-01

    …and a GARCH model on the basis of the generalized error distribution (GED). The results show that EVT is a powerful approach to capture the risk in the oil markets. On the contrary, the traditional variance–covariance (VC) and Monte Carlo (MC) approaches tend to overestimate risk when the confidence level is 95%, but underestimate risk at the confidence level of 99%. The VaR of WTI returns is larger than that of Brent returns at identical confidence levels. Moreover, the GED-GARCH model can estimate the downside dynamic VaR accurately for WTI and Brent oil returns….
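
    As a simple illustration of the static approaches being compared (historical simulation versus the variance–covariance rule), VaR at the 95% and 99% confidence levels can be computed as below. The synthetic heavy-tailed return series and all parameter values are assumptions for the sketch, not the paper's data.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        # Heavy-tailed stand-in for daily oil returns (Student-t, 4 dof, ~2% scale)
        returns = 0.02 * rng.standard_t(df=4, size=2500)

        for level in (0.95, 0.99):
            # Historical simulation: empirical quantile of the return distribution
            var_hist = -np.quantile(returns, 1 - level)
            # Variance-covariance: normal quantile scaled by the sample moments
            var_vc = -(returns.mean() + norm.ppf(1 - level) * returns.std(ddof=1))
            print(f"{level:.0%}  historical VaR = {var_hist:.4f}   VC VaR = {var_vc:.4f}")
        # With heavy tails, the VC rule typically overstates the 95% VaR and
        # understates the 99% VaR relative to the empirical quantile, which is
        # the qualitative pattern the abstract reports.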

  20. A fuzzy compromise programming approach for the Black-Litterman portfolio selection model

    Directory of Open Access Journals (Sweden)

    Mohsen Gharakhani

    2013-01-01

    Full Text Available In this paper, we examine an advanced optimization approach for the portfolio problem introduced by Black and Litterman, which addresses the shortcomings of Markowitz's standard mean-variance optimization. Black and Litterman propose a new approach to estimating asset returns, presenting a way to incorporate the investor's views into the asset pricing process. Since the investor's view about future asset returns is always subjective and imprecise, we represent it using fuzzy numbers, and the resulting model is a multi-objective linear program. The proposed model is therefore analyzed through a fuzzy compromise programming approach using appropriate membership functions. For this purpose, we introduce the fuzzy ideal solution concept based on investor preference and indifference relationships, using the canonical representation of the proposed fuzzy numbers by means of their corresponding α-cuts. A real-world numerical example is presented in which the MSCI (Morgan Stanley Capital International) Index is chosen as the target index. The results are reported for a portfolio consisting of six national indices. The performance of the proposed models is compared using several financial criteria.

  1. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
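
    The screening step (Morris elementary effects) can be sketched in a few lines; the toy model, number of trajectories and grid level below are illustrative assumptions, not the study's vascular access model.

        import numpy as np

        def denorm(x, bounds):
            """Map a point from the unit cube to the physical parameter ranges."""
            lo, hi = np.array(bounds, dtype=float).T
            return lo + x * (hi - lo)

        def morris_mu_star(model, bounds, r=100, levels=4, seed=0):
            """Morris screening: mu* = mean |elementary effect| per parameter."""
            rng = np.random.default_rng(seed)
            k = len(bounds)
            delta = levels / (2.0 * (levels - 1))           # standard Morris step
            starts = np.arange(levels) / (levels - 1)
            starts = starts[starts + delta <= 1.0 + 1e-12]  # keep x + delta in [0, 1]
            effects = np.zeros((r, k))
            for t in range(r):
                x = rng.choice(starts, size=k)              # random grid base point
                y = model(denorm(x, bounds))
                for j in rng.permutation(k):                # one-at-a-time trajectory
                    x[j] += delta
                    y_new = model(denorm(x, bounds))
                    effects[t, j] = (y_new - y) / delta
                    y = y_new
            return np.abs(effects).mean(axis=0)

        # Toy model with one dominant parameter; parameter 0 should get the top mu*.
        model = lambda p: 5.0 * p[0] + 0.5 * p[1] + p[0] * p[2]
        bounds = [(0.0, 1.0)] * 3
        print(morris_mu_star(model, bounds))

    Each trajectory costs only k+1 model runs, which is why Morris screening is attractive as a cheap first pass before the more expensive variance-based gPCE analysis of the retained parameters.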

  2. Stochastic approaches to inflation model building

    International Nuclear Information System (INIS)

    Ramirez, Erandy; Liddle, Andrew R.

    2005-01-01

    While inflation gives an appealing explanation of observed cosmological data, there are a wide range of different inflation models, providing differing predictions for the initial perturbations. Typically models are motivated either by fundamental physics considerations or by simplicity. An alternative is to generate large numbers of models via a random generation process, such as the flow equations approach. The flow equations approach is known to predict a definite structure to the observational predictions. In this paper, we first demonstrate a more efficient implementation of the flow equations exploiting an analytic solution found by Liddle (2003). We then consider alternative stochastic methods of generating large numbers of inflation models, with the aim of testing whether the structures generated by the flow equations are robust. We find that while typically there remains some concentration of points in the observable plane under the different methods, there is significant variation in the predictions amongst the methods considered

  3. A Simulation Modeling Approach Method Focused on the Refrigerated Warehouses Using Design of Experiment

    Science.gov (United States)

    Cho, G. S.

    2017-09-01

    For performance optimization of refrigerated warehouses, design parameters are selected based on physical parameters, such as the number of pieces of equipment and aisles or the speeds of forklifts, for ease of modification. This paper provides a comprehensive framework for the system design of refrigerated warehouses. We propose a modeling approach which aims at simulation optimization so as to meet the required design specifications using Design of Experiments (DOE), and we analyze the simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of refrigerated warehouse operations.
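
    A minimal sketch of the DOE idea is a full-factorial sweep over assumed design factors of a toy throughput model; the factors, levels and response function are illustrative, not the paper's warehouse model.

        import itertools

        def throughput(n_forklifts, n_aisles, speed):
            """Toy response: more forklifts/speed help, more aisles add travel time."""
            return n_forklifts * speed * 60.0 / (5.0 + 2.0 * n_aisles)

        # Two or three levels per factor, full-factorial design
        factors = {
            "n_forklifts": [2, 4, 6],
            "n_aisles": [8, 12],
            "speed": [1.0, 1.5, 2.0],   # m/s
        }
        runs = [dict(zip(factors, combo))
                for combo in itertools.product(*factors.values())]
        results = [(cfg, throughput(**cfg)) for cfg in runs]
        best = max(results, key=lambda r: r[1])
        print(f"{len(runs)} runs, best design: {best[0]} -> {best[1]:.1f} pallets/h")

    In practice the response function would be a discrete-event simulation run, and a fractional-factorial or response-surface design would replace the exhaustive sweep as the number of factors grows.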

  4. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  5. Modeling individual differences in randomized experiments using growth models: Recommendations for design, statistical analysis and reporting of results of internet interventions

    Directory of Open Access Journals (Sweden)

    Hugo Hesser

    2015-05-01

    Full Text Available Growth models (also known as linear mixed effects models, multilevel models, and random coefficients models) have the capability of studying change at the group as well as the individual level. In addition, these methods have documented advantages over traditional data-analytic approaches in the analysis of repeated-measures data. These advantages include, but are not limited to, the ability to incorporate time-varying predictors, to handle dependence among repeated observations in a very flexible manner, and to provide accurate estimates with missing data under fairly unrestrictive missing-data assumptions. The flexibility of the growth curve modeling approach to the analysis of change makes it the preferred choice in the evaluation of direct, indirect and moderated intervention effects. Although offering many benefits, growth models present challenges in terms of design, analysis and reporting of results. This paper provides a nontechnical overview of growth models in the analysis of change in randomized experiments and advocates for their use in the field of internet interventions. Practical recommendations for the design, analysis and reporting of results from growth models are provided.
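
    A minimal sketch of such a growth model, using the mixed-models interface of the statsmodels Python package on synthetic trial-like data (the variable names, effect sizes and data layout are assumptions for illustration, not the paper's analysis):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n_subj, n_weeks = 40, 5
        subj = np.repeat(np.arange(n_subj), n_weeks)
        week = np.tile(np.arange(n_weeks), n_subj)
        # Each participant gets a random intercept and a random slope of change
        intercepts = rng.normal(20, 3, n_subj)[subj]
        slopes = rng.normal(-1.5, 0.5, n_subj)[subj]
        symptom = intercepts + slopes * week + rng.normal(0, 1, n_subj * n_weeks)
        df = pd.DataFrame({"subject": subj, "week": week, "symptom": symptom})

        # Random-intercept, random-slope growth model of change over time
        model = smf.mixedlm("symptom ~ week", df, groups=df["subject"],
                            re_formula="~week")
        fit = model.fit()
        print(fit.summary())   # fixed effect of 'week' estimates the average change

    The random-slope variance quantifies the individual differences in change that the abstract emphasizes, which a repeated-measures ANOVA would not estimate directly.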

  6. A Cluster-based Approach Towards Detecting and Modeling Network Dictionary Attacks

    Directory of Open Access Journals (Sweden)

    A. Tajari Siahmarzkooh

    2016-12-01

    Full Text Available In this paper, we provide an approach to detecting network dictionary attacks using a data set collected as flows, from which a clustered graph is derived. These flows provide an aggregated view of the network traffic, in which the packets exchanged in the network are considered so that more internally connected nodes are clustered together. We show that dictionary attacks can be detected through parameters such as the number and the weight of clusters in time series and their evolution over time. Additionally, a Markov model based on the average weight of clusters is created. Finally, by means of the suggested model, we demonstrate that artificial clusters of the flows are created for normal and malicious traffic. The results of the proposed approach on the CAIDA 2007 data set suggest a high accuracy for the model; therefore, it provides a proper method for detecting dictionary attacks.
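
    A minimal sketch of the Markov-model step: discretize a time series of average cluster weights into states and estimate a transition matrix. The state boundaries and the synthetic series are assumptions for illustration, not the paper's data.

        import numpy as np

        def transition_matrix(states, n_states):
            """Row-normalized count matrix of state-to-state transitions."""
            counts = np.zeros((n_states, n_states))
            for a, b in zip(states[:-1], states[1:]):
                counts[a, b] += 1
            rows = counts.sum(axis=1, keepdims=True)
            return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

        rng = np.random.default_rng(0)
        avg_weight = rng.gamma(4.0, 1.0, size=500)      # toy per-window cluster weights
        bins = np.quantile(avg_weight, [0.33, 0.66])    # low / medium / high states
        states = np.digitize(avg_weight, bins)
        P = transition_matrix(states, n_states=3)
        print(np.round(P, 2))
        # A window sequence whose likelihood under P is very low can then be
        # flagged as anomalous (potential dictionary-attack) traffic.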

  7. River Export of Plastic from Land to Sea: A Global Modeling Approach

    Science.gov (United States)

    Siegfried, Max; Gabbert, Silke; Koelmans, Albert A.; Kroeze, Carolien; Löhr, Ansje; Verburg, Charlotte

    2016-04-01

    Plastic is increasingly considered a serious cause of water pollution. It is a threat to aquatic ecosystems, including rivers, coastal waters and oceans. Rivers transport considerable amounts of plastic from land to sea, but the quantity and its main sources are not well known. Assessing the amount of macro- and microplastic transported from river to sea is therefore important for understanding the dimension and the patterns of plastic pollution of aquatic ecosystems. In addition, it is crucial for assessing the short- and long-term impacts caused by plastic pollution. Here we present a global modelling approach to quantify river export of plastic from land to sea. Our approach accounts for different types of plastic, including both macro- and microplastics. Moreover, we distinguish point sources and diffuse sources of plastic in rivers. Our modelling approach is inspired by global nutrient models, which include more than 6000 river basins. In this paper, we present our modelling approach as well as first model results for microplastic pollution in European rivers. Important sources of microplastics include personal care products, laundry, household dust and car tyre wear. We combine information on these sources with information on sewage management and plastic retention during river transport for the largest European rivers. Our modelling approach may help to better understand and prevent water pollution by plastic, and at the same time serves as a 'proof of concept' for future application at the global scale.

  8. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Moges, Edom [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Demissie, Yonas [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Li, Hong-Yi [Hydrology Group, Pacific Northwest National Laboratory, Richland Washington USA

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in the displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are witnessed through diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad has a homogeneous catchment response, making the single model adequate to capture that response.

  9. Metric-based approach and tool for modeling the I and C system using Markov chains

    International Nuclear Information System (INIS)

    Butenko, Valentyna; Kharchenko, Vyacheslav; Odarushchenko, Elena; Butenko, Dmitriy

    2015-01-01

    Markov chains (MC) are well known and widely applied in the dependability and performability analysis of safety-critical systems because of their flexible representation of system component dependencies and synchronization. There are a few roadblocks to greater application of MCs: accounting for additional system components increases the model state space and complicates analysis, and a non-numerically sophisticated user may find it difficult to decide which of the variety of numerical methods is the most suitable and accurate for their application. Obtaining highly accurate and trusted modeling results thus becomes a nontrivial task. In this paper, we present a metric-based approach for selecting the applicable solution approach, based on analysis of the MC's stiffness, decomposability, sparsity and fragmentedness. Using this selection procedure, the modeler can verify earlier obtained results. The presented approach was implemented in the utility MSMC, which supports MC construction, metric-based analysis, the shaping of recommendations and model solution. The model can be exported to well-known off-the-shelf mathematical packages for verification. The paper presents a case study of an industrial NPP I and C system manufactured by RPC Radiy, showing the application of the metric-based approach and the MSMC tool for dependability and safety analysis of the RTS, and the procedure for verifying the results. (author)
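
    To make the modeling target concrete, here is a minimal sketch of solving a small continuous-time Markov availability model for its steady-state probabilities; the two-component system, the failure/repair rates and the state layout are illustrative assumptions, not the RPC Radiy case study.

        import numpy as np

        # States: 0 = both components up, 1 = one up, 2 = both down (system failed)
        lam, mu = 1e-3, 1e-1          # failure and repair rates (per hour)

        # Generator matrix Q: off-diagonal transition rates, rows sum to zero
        Q = np.array([
            [-2 * lam,      2 * lam,  0.0],
            [      mu, -(mu + lam),   lam],
            [     0.0,          mu,   -mu],
        ])

        # Steady state: pi @ Q = 0 with sum(pi) = 1. Replace one balance
        # equation by the normalization condition and solve the linear system.
        A = Q.T.copy()
        A[-1, :] = 1.0
        b = np.zeros(3)
        b[-1] = 1.0
        pi = np.linalg.solve(A, b)
        print("steady-state probabilities:", pi)
        print("availability (not in state 2):", 1.0 - pi[2])

    Stiffness in the sense the abstract uses arises when rates such as lam and mu differ by many orders of magnitude, which is exactly when the choice of numerical method starts to matter.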

  10. A Conceptual Modeling Approach for OLAP Personalization

    Science.gov (United States)

    Garrigós, Irene; Pardillo, Jesús; Mazón, Jose-Norberto; Trujillo, Juan

    Data warehouses rely on multidimensional models in order to provide decision makers with appropriate structures to intuitively analyze data with OLAP technologies. However, data warehouses may be potentially large and multidimensional structures become increasingly complex to be understood at a glance. Even if a departmental data warehouse (also known as data mart) is used, these structures would be also too complex. As a consequence, acquiring the required information is more costly than expected and decision makers using OLAP tools may get frustrated. In this context, current approaches for data warehouse design are focused on deriving a unique OLAP schema for all analysts from their previously stated information requirements, which is not enough to lighten the complexity of the decision making process. To overcome this drawback, we argue for personalizing multidimensional models for OLAP technologies according to the continuously changing user characteristics, context, requirements and behaviour. In this paper, we present a novel approach to personalizing OLAP systems at the conceptual level based on the underlying multidimensional model of the data warehouse, a user model and a set of personalization rules. The great advantage of our approach is that a personalized OLAP schema is provided for each decision maker contributing to better satisfy their specific analysis needs. Finally, we show the applicability of our approach through a sample scenario based on our CASE tool for data warehouse development.

  11. Wave Resource Characterization Using an Unstructured Grid Modeling Approach

    Directory of Open Access Journals (Sweden)

    Wei-Cheng Wu

    2018-03-01

    Full Text Available This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization, using the unstructured grid Simulating WAve Nearshore (SWAN) model coupled with a nested grid WAVEWATCH III® (WWIII) model. The flexibility of models with various spatial resolutions and the effects of open boundary conditions simulated by a nested grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured grid-modeling approach for flexible model resolution and good model skill in simulating the six wave resource parameters recommended by the International Electrotechnical Commission, in comparison to the data observed in 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the ST2 physics package's ability to predict wave power density for large waves, which is important for wave resource assessment, load calculation of devices, and risk management. In addition, bivariate distributions show that the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than with the ST2 physics package. This study demonstrated that the unstructured grid wave modeling approach, driven by regional nested grid WWIII outputs along with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, i.e. O(10^2) km.

  12. A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING

    NARCIS (Netherlands)

    ANTOULAS, AC; WILLEMS, JC

    1993-01-01

    The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both

  13. A new approach for developing adjoint models

    Science.gov (United States)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and
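
    To illustrate the "model as a sequence of linear solves" abstraction, here is a toy two-solve model whose adjoint replays the tape backwards with one transposed solve per forward solve; the operators, the quadratic functional and the finite-difference check are assumptions for the sketch, not the library's API.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 4
        A1 = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned operators
        A2 = rng.normal(size=(n, n)) + n * np.eye(n)
        B = rng.normal(size=(n, n))
        c = rng.normal(size=n)
        b = rng.normal(size=n)

        def forward(b):
            """Model as a sequence of linear solves (the 'tape' has two entries)."""
            x1 = np.linalg.solve(A1, b)          # solve 1: A1 x1 = b
            x2 = np.linalg.solve(A2, B @ x1)     # solve 2: A2 x2 = B x1
            return c @ x2                        # scalar misfit functional J

        def adjoint_gradient():
            """Replay the tape backwards: one transposed solve per forward solve."""
            lam2 = np.linalg.solve(A2.T, c)             # adjoint of solve 2
            lam1 = np.linalg.solve(A1.T, B.T @ lam2)    # adjoint of solve 1
            return lam1                                 # dJ/db

        g = adjoint_gradient()
        eps = 1e-6                                      # finite-difference check
        g_fd = np.array([(forward(b + eps * e) - forward(b - eps * e)) / (2 * eps)
                         for e in np.eye(n)])
        print(np.allclose(g, g_fd, atol=1e-6))          # True

    The point of the higher-level abstraction is that only the operators on the tape (here A1, A2, B) need differentiating, not the whole codebase that assembled them.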

  14. On Approaches to Analyze the Sensitivity of Simulated Hydrologic Fluxes to Model Parameters in the Community Land Model

    Directory of Open Access Journals (Sweden)

    Jie Bao

    2015-12-01

    Full Text Available Effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash–Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
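
    One of the four approaches, standardized regression coefficients, can be sketched in a few lines; the toy flux response and the sampling ranges are illustrative assumptions, not the Community Land Model.

        import numpy as np

        rng = np.random.default_rng(0)
        n, names = 2000, ["porosity", "k_sat", "slope"]
        X = rng.uniform(0.0, 1.0, size=(n, 3))   # normalized parameter samples
        # Toy response standing in for a simulated latent heat flux or runoff
        y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.2, n)

        # Standardize inputs and output, then fit ordinary least squares;
        # the resulting coefficients rank the parameters' linear influence.
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        ys = (y - y.mean()) / y.std()
        src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        for name, coef in zip(names, src):
            print(f"{name:10s} SRC = {coef:+.3f}")

    SRC assumes a roughly linear response; the study's variance-based and spline-based alternatives relax that assumption, which is why the rankings of secondary parameters can differ across approaches.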

  15. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log K_M values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  16. Application of the SWAT model to an endorheic watershed in the Central Spanish Pre-Pyrenees: Methodological approach and preliminary results

    Science.gov (United States)

    Gaspar, Leticia; White, Sue; Navas, Ana; López-Vicente, Manuel; Palazón, Leticia

    2013-04-01

    Modelling runoff and sediment transport at the watershed scale is a key tool for predicting hydrological and sediment processes, identifying sediment sources and estimating sediment yield, with the purpose of better managing soil and water resources. This study aims to apply the SWAT model in an endorheic watershed in the Central Spanish Pre-Pyrenees, where there have been a number of previous field-based studies on sediment sources and transfers. The Soil and Water Assessment Tool (SWAT) is a process-based, semi-distributed, watershed-scale hydrologic model which can provide a high level of spatial detail by allowing the watershed to be divided into sub-basins. This study addresses the challenge of applying the SWAT model to an endorheic watershed that drains to a central lake, without external output and without a network of permanent rivers. In this case it has been shown that the SWAT model does not correctly reproduce the stream network when using automatic watershed delineation, even with a high-resolution Digital Elevation Model (5 x 5 metres). Different approaches therefore needed to be considered, such as i) user-defined watersheds and streams, ii) burning in a stream network, or iii) modelling each sub-watershed separately. The objective of this study was to develop a new methodological approach for correctly simulating the main hydrological processes in an endorheic and complex karst watershed of the Spanish Pre-Pyrenees. The Estanque de Arriba Lake watershed (74 ha) is an endorheic system located in the Spanish Central Pre-Pyrenees. This watershed holds a small, permanent freshwater lake (1.7 ha) and is a Site of Community Importance (European NATURA 2000 network). The study area is characterized by abrupt topography, with an altitude range between 679 and 862 m and an average slope gradient of 24 %. Steep slopes (> 24 %) occupy the northern part of the watershed, whereas gentle slopes (

  17. The experimental and shell model approach to 100Sn

    International Nuclear Information System (INIS)

    Grawe, H.; Maier, K.H.; Fitzgerald, J.B.; Heese, J.; Spohr, K.; Schubart, R.; Gorska, M.; Rejmund, M.

    1995-01-01

    The present status of the experimental approach to 100Sn and its shell-model structure is given. New developments in experimental techniques, such as low-background isomer spectroscopy and charged-particle detection in 4π, are surveyed. Based on recent experimental data, shell-model calculations are used to predict the structure of the single- and two-nucleon neighbours of 100Sn. The results are compared to the systematics of Coulomb energies and spin-orbit splittings and discussed with respect to future experiments. (author). 51 refs, 11 figs, 1 tab

  18. Continuous dynamic assimilation of the inner region data in hydrodynamics modelling: optimization approach

    Directory of Open Access Journals (Sweden)

    F. I. Pisnitchenko

    2008-11-01

    Full Text Available In meteorological and oceanological studies, the classical approach for finding the numerical solution of a regional model consists in formulating and solving a Cauchy-Dirichlet problem. The boundary conditions are obtained by linear interpolation of coarse-grid data provided by a global model. Errors in the boundary conditions due to interpolation may cause large deviations from the correct regional solution. The methods developed to reduce these errors deal with continuous dynamic assimilation of known global data available inside the regional domain. One approach to this assimilation procedure performs a nudging of the large-scale components of the regional model solution toward the large-scale global data components by introducing relaxation forcing terms into the regional model equations. As a result, the obtained solution is not a valid numerical solution of the original regional model. Another approach is the use of a four-dimensional variational data assimilation procedure, which is free from the above-mentioned shortcoming. In this work we formulate the joint problem of finding the regional model solution and assimilating the data as a PDE-constrained optimization problem. Three simple model examples (the ODE Burgers equation, the Rossby-Oboukhov equation, and the Korteweg-de Vries equation) are considered in this paper. Numerical experiments indicate that the optimization approach can significantly improve the precision of the regional solution.

  19. A website evaluation model by integration of previous evaluation models using a quantitative approach

    Directory of Open Access Journals (Sweden)

    Ali Moeini

    2015-01-01

    Full Text Available Regarding ecommerce growth, websites play an essential role in business success; therefore, many authors have offered website evaluation models since 1995. However, the multiplicity and diversity of evaluation models make it difficult to integrate them into a single comprehensive model. In this paper, a quantitative method has been used to integrate previous models into a comprehensive model that is compatible with them. In this approach, the researcher's judgment has no role in the integration of models, and the new model takes its validity from 93 previous models and a systematic quantitative approach.

  2. A Bayesian approach for quantification of model uncertainty

    International Nuclear Information System (INIS)

    Park, Inseok; Amarchinta, Hemanth K.; Grandhi, Ramana V.

    2010-01-01

    In most engineering problems, more than one model can be created to represent an engineering system's behavior. Uncertainty is inevitably involved in selecting the best model from among the possible models. Uncertainty in model selection cannot be ignored, especially when the differences between the predictions of competing models are significant. In this research, a methodology is proposed to quantify model uncertainty using measured differences between experimental data and model outcomes under a Bayesian statistical framework. The adjustment factor approach is used to propagate model uncertainty into the prediction of a system response. A nonlinear vibration system is used to demonstrate the process of implementing the adjustment factor approach. Finally, the methodology is applied to quantify the engineering benefits of a laser peening process, and a confidence band for residual stresses is established to indicate the reliability of model prediction.
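
    The adjustment-factor combination lends itself to a compact numerical sketch. The following fragment is illustrative only: it assumes Gaussian measurement noise, flat model priors, and invented predictions from three hypothetical models M1-M3; none of the numbers come from the study.

        import numpy as np

        # Hypothetical measurements and predictions from three competing models
        data = np.array([1.02, 0.98, 1.05, 0.97])       # measured response
        preds = {"M1": 1.00, "M2": 1.10, "M3": 0.90}    # model predictions
        sigma = 0.05                                    # assumed noise std

        # Posterior model probabilities: P(Mi|D) ~ P(D|Mi) * P(Mi), flat prior
        logL = np.array([-0.5 * np.sum((data - y) ** 2) / sigma**2
                         for y in preds.values()])
        post = np.exp(logL - logL.max())
        post /= post.sum()

        # Additive adjustment factor: shift the best model's prediction by a
        # probability-weighted adjustment toward the competing models
        y = np.array(list(preds.values()))
        best = np.argmax(post)
        mean = y[best] + np.sum(post * (y - y[best]))
        std = np.sqrt(np.sum(post * (y - mean) ** 2))

        print(dict(zip(preds, post.round(3))))
        print(f"adjusted prediction: {mean:.3f} +/- {std:.3f}")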

  3. Results from flamelet and non-flamelet models for supersonic combustion

    Science.gov (United States)

    Ladeinde, Foluso; Li, Wenhai

    2017-11-01

    Air-breathing propulsion systems (scramjets) have been identified as a viable alternative to rocket engines for improved efficiency. A scramjet engine, which operates at flight Mach numbers around 7 or above, is characterized by the existence of supersonic flow conditions in the combustor. In a dual-mode scramjet, this phenomenon is possible because of the relatively low value of the equivalence ratio and the high stagnation temperature, which together inhibit thermal choking downstream of transverse injectors. The flamelet method has been our choice for turbulence-combustion interaction modeling, and we have extended the basic approach in several dimensions, with a focus on the way the pressure and progress variable are modeled. Improved results have been obtained. We have also examined non-flamelet models, including laminar chemistry (QL), the eddy dissipation concept (EDC), and the partially-stirred reactor (PaSR). The pressure/progress variable-corrected simulations give better results than the original model, with reaction rates that are lower than those from EDC and PaSR. In general, QL tends to over-predict the reaction rate for the supersonic combustion problems investigated in our work.

  4. A modeling approach to predict acoustic nonlinear field generated by a transmitter with an aluminum lens.

    Science.gov (United States)

    Fan, Tingbo; Liu, Zhenbo; Chen, Tao; Li, Faqi; Zhang, Dong

    2011-09-01

    In this work, the authors propose a modeling approach to compute the nonlinear acoustic field generated by a flat piston transmitter with an attached aluminum lens. In this approach, the geometrical parameters (radius and focal length) of a virtual source are initially determined by Snell's refraction law and then adjusted based on the Rayleigh integral result in the linear case. This virtual source is then used with the nonlinear spheroidal beam equation (SBE) model to predict the nonlinear acoustic field in the focal region. To examine the validity of this approach, the calculated nonlinear result is compared with those from the Westervelt and Khokhlov-Zabolotskaya-Kuznetsov (KZK) equations for a focal intensity of 7 kW/cm2. Results indicate that this approach accurately describes the nonlinear acoustic field in the focal region while requiring significantly less computation time than the Westervelt equation. It might also be applicable to the widely used concave focused transmitter with a large aperture angle.

  5. Analyzing Unsaturated Flow Patterns in Fractured Rock Using an Integrated Modeling Approach

    International Nuclear Information System (INIS)

    Y.S. Wu; G. Lu; K. Zhang; L. Pan; G.S. Bodvarsson

    2006-01-01

    Characterizing percolation patterns in unsaturated fractured rock has posed a greater challenge to modeling investigations than comparable saturated zone studies, because of the heterogeneous nature of unsaturated media and the great number of variables impacting unsaturated flow. This paper presents an integrated modeling methodology for quantitatively characterizing percolation patterns in the unsaturated zone of Yucca Mountain, Nevada, a proposed underground repository site for storing high-level radioactive waste. The modeling approach integrates a wide variety of moisture, pneumatic, thermal, and isotopic geochemical field data into a comprehensive three-dimensional numerical model for modeling analyses. It takes into account the coupled processes of fluid and heat flow and chemical isotopic transport in Yucca Mountain's highly heterogeneous, unsaturated fractured tuffs. Modeling results are examined against different types of field-measured data and then used to evaluate different hydrogeological conceptualizations and the flow patterns they imply for the unsaturated zone. In particular, this model provides a much clearer understanding of percolation patterns and flow behavior through the unsaturated zone, both crucial issues in assessing repository performance. The integrated approach for quantifying Yucca Mountain's flow system is demonstrated to provide a practical modeling tool for characterizing flow and transport processes in complex subsurface systems.

  6. A novel approach to the automatic control of scale model airplanes

    OpenAIRE

    Hua , Minh-Duc; Pucci , Daniele; Hamel , Tarek; Morin , Pascal; Samson , Claude

    2014-01-01

    This paper explores a new approach to the control of scale model airplanes as an extension of previous studies addressing the case of vehicles presenting a symmetry of revolution about the thrust axis. The approach is intrinsically nonlinear and, with respect to other contributions on aircraft nonlinear control, no small attack angle assumption is made in order to enlarge the controller's operating domain. Simulation results conducted on a simplified, but not overly ...

  7. An algebraic approach to modeling in software engineering

    International Nuclear Information System (INIS)

    Loegel, C.J.; Ravishankar, C.V.

    1993-09-01

    Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form.

  8. A multi-model ensemble approach to seabed mapping

    Science.gov (United States)

    Diesing, Markus; Stephens, David

    2015-06-01

    Seabed habitat mapping based on swath acoustic data and ground-truth samples is an emergent and active marine science discipline. Significant progress could be achieved by transferring techniques and approaches that have been successfully developed and employed in fields such as terrestrial land cover mapping. One such promising approach is the multiple classifier system, which aims at improving classification performance by combining the outputs of several classifiers. Here we present results of a multi-model ensemble applied to multibeam acoustic data covering more than 5000 km2 of seabed in the North Sea, with the aim of deriving accurate spatial predictions of seabed substrate. A suite of six machine learning classifiers (k-Nearest Neighbour, Support Vector Machine, Classification Tree, Random Forest, Neural Network and Naïve Bayes) was trained with ground-truth sample data classified into seabed substrate classes, and their prediction accuracy was assessed with an independent set of samples. The three and five best performing models were combined into classifier ensembles. Both ensembles led to increased prediction accuracy as compared to the best performing single classifier. The improvements were, however, not statistically significant at the 5% level. Although the three-model ensemble did not perform significantly better than its individual component models, we noticed that the five-model ensemble did perform significantly better than three of its five component models. A classifier ensemble might therefore be an effective strategy to improve classification performance. Another advantage is the fact that the agreement in predicted substrate class between the individual models of the ensemble can be used as a measure of confidence. We propose a simple and spatially explicit measure of confidence that is based on model agreement and prediction accuracy.
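
    The voting-and-agreement idea is easy to prototype. Below is a hedged sketch using scikit-learn on synthetic stand-in data (real inputs would be multibeam backscatter/bathymetry features with ground-truth substrate labels); it uses five of the six classifier families named above and treats per-sample agreement as the confidence measure.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.naive_bayes import GaussianNB

        # Synthetic stand-in for acoustic features and substrate classes
        X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                                   n_classes=3, n_clusters_per_class=1, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        members = [KNeighborsClassifier(), SVC(), DecisionTreeClassifier(random_state=0),
                   RandomForestClassifier(random_state=0), GaussianNB()]
        preds = np.array([m.fit(Xtr, ytr).predict(Xte) for m in members])

        # Majority vote over the member classifiers
        vote = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, preds)

        # Per-sample agreement doubles as a spatially explicit confidence measure
        agreement = (preds == vote).mean(axis=0)

        for m, pr in zip(members, preds):
            print(f"{type(m).__name__:24s} accuracy = {(pr == yte).mean():.3f}")
        print(f"ensemble accuracy = {(vote == yte).mean():.3f}, "
              f"mean agreement = {agreement.mean():.2f}")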

  9. Hybrid approach for the assessment of PSA models by means of binary decision diagrams

    International Nuclear Information System (INIS)

    Ibanez-Llano, Cristina; Rauzy, Antoine; Melendez, Enrique; Nieto, Francisco

    2010-01-01

    Binary decision diagrams (BDDs) are a well-known alternative to the minimal cutsets (MCS) approach for assessing Boolean reliability models. They have been applied successfully to improve the assessment of fault tree models. However, their application to large models, and in particular to the event trees coming from the PSA studies of the nuclear industry, remains to date out of reach of an exact evaluation. For many real PSA models it may not be possible to compute the BDD within a reasonable amount of time and memory without truncating or simplifying the model. This paper presents a new approach to estimate the exact probabilistic quantification results (probability/frequency) by combining the calculation of the MCS and the truncation limits with the BDD approach, in order to gain better control over the reduction of the model and to properly account for the success branches. The added value of this methodology is that it is possible to ensure a real confidence interval for the exact value and therefore an explicit knowledge of the error bound. Moreover, it can be used to measure the acceptability of the results obtained with traditional techniques. The new method was applied to a real-life PSA study, and the results obtained confirm the applicability of the methodology and open a new viewpoint for further developments.
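
    The bounding logic can be demonstrated on a toy fault tree. The sketch below assumes independent basic events with invented probabilities; the inclusion-exclusion step stands in for the exact BDD evaluation of the truncated model, and the sum over dropped cutsets gives the explicit error bound the record describes.

        from itertools import combinations

        # Invented basic-event probabilities and minimal cutsets
        p = {"A": 1e-3, "B": 2e-3, "C": 5e-4, "D": 1e-2, "E": 3e-3}
        cutsets = [{"A", "B"}, {"C"}, {"D", "E"}, {"A", "E"}]

        def prob(events):                      # independent events assumed
            out = 1.0
            for e in events:
                out *= p[e]
            return out

        cutoff = 1e-5                          # truncation limit on cutset probability
        kept = [c for c in cutsets if prob(c) >= cutoff]
        dropped = [c for c in cutsets if prob(c) < cutoff]

        # Exact probability of the union of kept cutsets (inclusion-exclusion);
        # a BDD computes this quantity efficiently on the truncated model
        exact_kept = 0.0
        for k in range(1, len(kept) + 1):
            for combo in combinations(kept, k):
                exact_kept += (-1) ** (k + 1) * prob(set().union(*combo))

        # The truncated mass bounds the error on the true top-event probability
        lower = exact_kept
        upper = exact_kept + sum(prob(c) for c in dropped)
        print(f"top-event probability in [{lower:.4e}, {upper:.4e}]")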

  11. Atomistic approach for modeling metal-semiconductor interfaces

    DEFF Research Database (Denmark)

    Stradi, Daniele; Martinez, Umberto; Blom, Anders

    2016-01-01

    We present a general framework for simulating interfaces using an atomistic approach based on density functional theory and non-equilibrium Green's functions. The method includes all the relevant ingredients, such as doping and an accurate value of the semiconductor band gap, required to model realistic metal-semiconductor interfaces, and allows for a direct comparison between theory and experiments via the I–V curve. In particular, it will be demonstrated how doping and bias modify the Schottky barrier, and how finite size models (the slab approach) are unable to describe these interfaces.

  12. Nonlinear Modeling of the PEMFC Based On NNARX Approach

    OpenAIRE

    Shan-Jen Cheng; Te-Jen Chang; Kuang-Hsiung Tan; Shou-Ling Kuo

    2015-01-01

    A Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system whose structure is hard to estimate correctly with traditional linear modeling approaches. For this reason, this paper presents nonlinear modeling of the PEMFC using the Neural Network Auto-Regressive model with eXogenous inputs (NNARX) approach. A multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accuracy...

  13. Modeling alcohol use disorder severity: an integrative structural equation modeling approach

    Directory of Open Access Journals (Sweden)

    Nathasha R Moallem

    2013-07-01

    Full Text Available Background: Alcohol dependence is a complex psychological disorder whose phenomenology changes as the disorder progresses. Neuroscience has provided a variety of theories and evidence for the development, maintenance, and severity of addiction; however, clinically, it has been difficult to evaluate alcohol use disorder (AUD) severity. Objective: This study seeks to evaluate and validate a data-driven approach to capturing alcohol severity in a community sample. Method: Participants were non-treatment-seeking problem drinkers (n = 283). A structural equation modeling (SEM) approach was used to (a) verify the latent factor structure of the indices of AUD severity and (b) test the relationship between the AUD severity factor and measures of alcohol use, affective symptoms, and motivation to change drinking. Results: The model was found to fit well, with all chosen indices of AUD severity loading significantly and positively onto the severity factor. In addition, the paths from the alcohol use, motivation, and affective factors accounted for 68% of the variance in AUD severity. Greater AUD severity was associated with greater alcohol use, increased affective symptoms, and higher motivation to change. Conclusions: Unlike the categorical diagnostic criteria, the AUD severity factor is comprised of multiple quantitative dimensions of impairment observed across the progression of the disorder. The AUD severity factor was validated by testing it in relation to other outcomes such as alcohol use, affective symptoms, and motivation for change. Clinically, this approach to AUD severity can be used to inform treatment planning and ultimately to improve outcomes.

  14. A Modeling Approach for Plastic-Metal Laser Direct Joining

    Science.gov (United States)

    Lutey, Adrian H. A.; Fortunato, Alessandro; Ascari, Alessandro; Romoli, Luca

    2017-09-01

    Laser processing has been identified as a feasible approach to the direct joining of metal and plastic components without the need for adhesives or mechanical fasteners. The present work develops a modeling approach for conduction and transmission laser direct joining of these materials based on multi-layer optical propagation theory and numerical heat flow simulation. The aim of this methodology is to predict process outcomes based on the calculated joint interface and upper surface temperatures. Three representative cases are considered for model verification, including conduction joining of PBT and aluminum alloy, transmission joining of optically transparent PET and stainless steel, and transmission joining of semi-transparent PA 66 and stainless steel. Conduction direct laser joining experiments are performed on black PBT and 6082 anticorodal aluminum alloy, achieving shear loads of over 2000 N with specimens of 2 mm thickness and 25 mm width. Comparison with simulation results shows that consistently high strength is achieved where the peak interface temperature is above the plastic degradation temperature. Comparison of transmission joining simulations and published experimental results confirms these findings and highlights the influence of plastic layer optical absorption on process feasibility.
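
    The thermal half of such a model reduces, in its simplest form, to transient conduction with an absorbed laser flux at one surface and a feasibility check on the interface temperature. The sketch below is a minimal 1D explicit finite-difference stand-in, not the authors' multi-layer optical/thermal model; every material and process value is an illustrative placeholder.

        import numpy as np

        L, n = 2e-3, 100                 # metal thickness [m], grid nodes
        dx = L / (n - 1)
        alpha, k = 4e-6, 16.0            # diffusivity [m2/s], conductivity [W/mK]
        q = 5e6                          # absorbed laser flux [W/m2]
        dt = 0.4 * dx**2 / alpha         # stable explicit time step
        T = np.full(n, 25.0)             # initial temperature [deg C]
        T_degrade = 400.0                # hypothetical plastic degradation limit

        for _ in range(int(0.5 / dt)):   # simulate 0.5 s of laser heating
            Tn = T.copy()
            Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
            Tn[0] = Tn[1] + q * dx / k   # laser-heated surface (flux boundary)
            Tn[-1] = Tn[-2]              # adiabatic plastic side (worst case)
            T = Tn

        print(f"peak interface temperature ~ {T[-1]:.0f} C "
              f"({'above' if T[-1] > T_degrade else 'below'} degradation limit)")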

  15. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    Science.gov (United States)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that continue until the difference between subsequent solutions satisfies a pre-determined termination criterion. The effectiveness of the proposed approach is illustrated by an example, which yields near-optimal results with a much shorter solving time than the conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing real supply chain networks. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
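
    The iterative analytical-simulation loop can be illustrated with a deliberately small stand-in: a linear program plans production against an assumed effective capacity, a stochastic "simulation" reveals the realized capacity, and the loop repeats until the estimate stops changing. Everything below (profit vector, capacities, disruption model) is invented for illustration.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        profit = np.array([4.0, 3.0])           # profit per unit at two plants
        nominal_cap = np.array([100.0, 80.0])
        eff = np.ones(2)                        # analytical belief in capacity efficiency

        def simulate(x, n=500):
            """Stand-in for a discrete-event run: disruptions shrink usable capacity."""
            usable = nominal_cap * rng.uniform(0.6, 1.0, size=(n, 2))
            return np.minimum(x, usable).mean(axis=0)

        for it in range(20):
            # Analytical step: LP with the current belief about effective capacity
            x = linprog(-profit, A_ub=np.eye(2), b_ub=eff * nominal_cap,
                        bounds=[(0, None)] * 2).x
            # Simulation step: evaluate the plan and correct the efficiency estimate
            new_eff = np.clip(simulate(x) / nominal_cap, 0.1, 1.0)
            if np.max(np.abs(new_eff - eff)) < 1e-3:     # termination criterion
                break
            eff = 0.5 * eff + 0.5 * new_eff              # damped update
        print(f"converged after {it + 1} iterations; plan = {x.round(1)}")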

  16. Understanding Gulf War Illness: An Integrative Modeling Approach

    Science.gov (United States)

    2017-10-01

    using a novel mathematical model. The computational biology approach will enable the consortium to quickly identify targets of dysfunction and find ... Reported milestones include developing computer/mathematical paradigms for the evaluation of treatment strategies (months 12-30, 50% complete) and developing pilot clinical trials on the basis of animal studies (months 24-36, 60% complete). The immune and autonomic biomarkers will be tested using a computational modeling approach allowing for a ...

  17. Heat transfer modeling an inductive approach

    CERN Document Server

    Sidebotham, George

    2015-01-01

    This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but in a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, gaining a deeper, intrinsic grasp of the models behind heat transfer. Developed from over twenty-five years of lecture notes used to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering disciplines.

  18. Parameter uncertainty and model predictions: a review of Monte Carlo results

    International Nuclear Information System (INIS)

    Gardner, R.H.; O'Neill, R.V.

    1979-01-01

    Monte Carlo studies of parameter variability, in which a model is repeatedly simulated with randomly selected parameter values, are reviewed. At the beginning of each simulation, parameter values are chosen from specified frequency distributions. This process is continued for a number of iterations sufficient to converge on an estimate of the frequency distribution of the output variables. The purpose was to explore the general properties of error propagation in models. Testing the implicit assumptions of analytical methods and pointing out counter-intuitive results produced by the Monte Carlo approach are additional points covered.
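
    A minimal propagation loop of the kind reviewed takes only a few lines. The sketch below assumes an illustrative exponential-decay model with two uncertain parameters; the last line shows one counter-intuitive effect such studies surface, namely that the mean output of a nonlinear model differs from the output at the mean parameter values.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 10_000

        # Parameters drawn from assumed frequency distributions
        A = rng.normal(1.0, 0.1, n)                 # amplitude ~ N(1.0, 0.1)
        k = rng.lognormal(np.log(0.2), 0.3, n)      # decay rate, lognormal

        y = A * np.exp(-k * 5.0)                    # model output at t = 5

        print(f"mean = {y.mean():.3f}, std = {y.std():.3f}")
        print(f"5th-95th percentiles: [{np.percentile(y, 5):.3f}, "
              f"{np.percentile(y, 95):.3f}]")
        print(f"output at mean parameters = {1.0 * np.exp(-0.2 * 5.0):.3f}")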

  19. Modeling of annular two-phase flow using a unified CFD approach

    Energy Technology Data Exchange (ETDEWEB)

    Li, Haipeng, E-mail: haipengl@kth.se; Anglart, Henryk, E-mail: henryk@kth.se

    2016-07-15

    Highlights: • Annular two-phase flow has been modeled using a unified CFD approach. • Liquid film was modeled based on a two-dimensional thin film assumption. • Both Eulerian and Lagrangian methods were employed for the gas core flow modeling. - Abstract: A mechanistic model of annular flow with an evaporating liquid film has been developed using computational fluid dynamics (CFD). The model employs a separate solver with two-dimensional conservation equations to predict the propagation of a thin boiling liquid film on solid walls. The liquid film model is coupled to a solver of three-dimensional conservation equations describing the gas core, which is assumed to contain a saturated mixture of vapor and liquid droplets. Both the Eulerian–Eulerian and the Eulerian–Lagrangian approach are used to describe the droplet and vapor motion in the gas core. All the major interaction phenomena between the liquid film and the gas core flow have been accounted for, including liquid film evaporation as well as droplet deposition and entrainment. The resulting unified framework for annular flow has been applied to steam-water flow with conditions typical of a Boiling Water Reactor (BWR). The simulation results for the liquid film flow rate show good agreement with the experimental data, with the potential to predict the dryout occurrence based on criteria of critical film thickness or critical film flow rate.

  1. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices made in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets.
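
    A Pareto frontier over calibration targets is straightforward to compute with a dominance filter. The sketch below uses random stand-in error scores (rows = candidate input sets, columns = targets; lower is better) and contrasts the frontier with two arbitrary weighted-sum GOF rankings, mirroring the paper's comparison in miniature.

        import numpy as np

        rng = np.random.default_rng(2)
        errors = rng.random((200, 3))     # 200 input sets, 3 calibration targets

        def pareto_mask(E):
            """True where no other row is <= everywhere and < somewhere."""
            mask = np.ones(len(E), dtype=bool)
            for i in range(len(E)):
                dominates = np.all(E <= E[i], axis=1) & np.any(E < E[i], axis=1)
                mask[i] = not dominates.any()
            return mask

        frontier = np.where(pareto_mask(errors))[0]
        print(f"{len(frontier)} of {len(errors)} input sets are Pareto-optimal")

        # A weighted-sum GOF score instead singles out one "best" set whose
        # identity depends entirely on the (arbitrary) weights chosen
        for w in ([1, 1, 1], [5, 1, 1]):
            print(f"weights {w}: best set = {np.argmin(errors @ np.array(w))}")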

  2. Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach.

    Science.gov (United States)

    Senior, Alistair M; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J

    2016-01-01

    Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs) shows how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First, we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.

  3. BUSINESS MODEL IN ELECTRICITY INDUSTRY USING BUSINESS MODEL CANVAS APPROACH; THE CASE OF PT. XYZ

    Directory of Open Access Journals (Sweden)

    Achmad Arief Wicaksono

    2017-01-01

    Full Text Available The magnitude of opportunities and project values in Indonesia's electricity system encourages PT. XYZ to develop its business in the electrical sector, which requires business development strategies. This study aims to identify the company's business model using the Business Model Canvas approach, formulate business development strategy alternatives, and determine the prioritized business development strategy appropriate to the manufacturing business model of PT. XYZ. This study utilized a descriptive approach and the nine elements of the Business Model Canvas. Alternative formulation and priority determination of the strategies were obtained by using Strengths, Weaknesses, Opportunities, Threats (SWOT) analysis and pairwise comparison. The results of this study are improvements to the Business Model Canvas in the elements of key resources, key activities, key partners and customer segments. In terms of SWOT analysis on the nine elements of the Business Model Canvas, for the first business development the results show an expansion into power plant construction projects as the main contractor and an increase in sales in its core business of supporting equipment for the oil and gas industry; the second business development is an investment in the electricity sector as an independent renewable energy-based power producer. For its first business development, PT. XYZ selected three Business Model Canvas elements as the company's priorities: key resources (weight 0.252), key activities (weight 0.240), and key partners (weight 0.231). For its second business development, the company selected three elements as its priorities: key partners (weight 0.225), customer segments (weight 0.217), and key resources (weight 0.215). Keywords: business model canvas, SWOT, pairwise comparison, business model

  4. A model-driven approach to information security compliance

    Science.gov (United States)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be approached holistically, combining assets that support corporate systems in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in it the mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  5. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the

  6. Recent developments of the quantum chemical cluster approach for modeling enzyme reactions.

    Science.gov (United States)

    Siegbahn, Per E M; Himo, Fahmi

    2009-06-01

    The quantum chemical cluster approach for modeling enzyme reactions is reviewed. Recent applications have used cluster models much larger than before, which has given new modeling insights. One important and rather surprising feature is the fast convergence with cluster size of the energetics of the reactions. Even for reactions with significant charge separation it has in some cases been possible to obtain full convergence, in the sense that dielectric cavity effects from outside the cluster do not contribute to any significant extent. Direct comparisons between quantum mechanics (QM)-only and QM/molecular mechanics (MM) calculations for quite large clusters, in a case where the results differ significantly, have shown that care has to be taken when using the QM/MM approach in the presence of strong charge polarization. Insights from the methods used, generally hybrid density functional methods, have also made it possible to place reasonable error limits on the results. Examples are finally given from the most extensive study using the cluster model, that of oxygen formation at the oxygen-evolving complex in photosystem II.

  7. Lightweight approach to model traceability in a CASE tool

    Science.gov (United States)

    Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita

    2017-07-01

    A term "model-driven" is not at all a new buzzword within the ranks of system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers forward to research and develop new and more effective ways to system development. With the increasing complexity, model traceability, and model management as a whole, becomes indispensable activities of model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.

  8. A novel approach to pipeline tensioner modeling

    Energy Technology Data Exchange (ETDEWEB)

    O' Grady, Robert; Ilie, Daniel; Lane, Michael [MCS Software Division, Galway (Ireland)

    2009-07-01

    As subsea pipeline developments continue to move into deep and ultra-deep water locations, there is an increasing need for accurate prediction of expected pipeline fatigue life. A significant factor that must be considered as part of this process is the fatigue damage sustained by the pipeline during installation. The magnitude of this installation-related damage is governed by a number of different agents, one of which is the dynamic behavior of the tensioner systems during pipe-laying operations. There are a variety of traditional finite element methods for representing dynamic tensioner behavior. These existing methods, while basic in nature, have been proven to provide adequate forecasts in terms of the dynamic variation in typical installation parameters such as top tension and sagbend/overbend strain. However, due to the simplicity of these current approaches, some of them tend to over-estimate the frequency of tensioner pay out/in under dynamic loading. This excessive level of pay out/in motion results in the prediction of additional stress cycles at certain roller beds, which in turn leads to the prediction of unrealistic fatigue damage to the pipeline. This unwarranted fatigue damage then equates to an over-conservative value for the accumulated damage experienced by a pipeline weld during installation, and so leads to a reduction in the estimated fatigue life of the pipeline. This paper describes a novel approach to tensioner modeling which allows for greater control over the velocity of dynamic tensioner pay out/in and so provides a more accurate estimation of the fatigue damage experienced by the pipeline during installation. The paper reports on a case study, as outlined in the following section, in which a comparison is made between results from this new tensioner model and from a more conventional approach. The comparison considers typical installation parameters as well as an in-depth look at the predicted fatigue damage for the two methods.

  9. Effective use of integrated hydrological models in basin-scale water resources management: surrogate modeling approaches

    Science.gov (United States)

    Zheng, Y.; Wu, B.; Wu, X.

    2015-12-01

    Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo simulation or simulation-optimization analysis), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered: (1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; (2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and (3) the Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin, employing the GSFLOW (Coupled Ground-Water and Surface-Water Flow) model. Two decision problems were discussed: one is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our study results highlight the value of incorporating an IHM in making decisions about water resources management and hydrological data collection. An IHM like GSFLOW provides great flexibility in formulating proper objective functions and constraints for various optimization problems. On the other hand, it has been demonstrated that surrogate modeling approaches can pave the path for such incorporation in real-world situations.
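
    Stripped of the DYCORS/SOIM/PCM specifics, the surrogate idea is: fit a cheap statistical emulator to a few expensive model runs, optimize the emulator, and spend new expensive runs only at promising points. The sketch below uses a Gaussian-process surrogate from scikit-learn and a toy stand-in for the IHM; it is a simplified, exploitation-only loop, not a reproduction of the algorithms named above.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(3)

        def expensive_model(x):
            """Stand-in for a full IHM run (minutes to hours in practice)."""
            return float((x[0] - 0.3)**2 + (x[1] - 0.7)**2 + 0.1 * np.sin(8 * x[0]))

        X = rng.random((8, 2))                      # initial design of full runs
        y = np.array([expensive_model(x) for x in X])

        for _ in range(15):
            gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)
            # Optimize the cheap surrogate instead of the expensive model
            best = min((minimize(lambda z: gp.predict(z.reshape(1, -1))[0],
                                 rng.random(2), bounds=[(0, 1)] * 2)
                        for _ in range(5)), key=lambda r: r.fun)
            X = np.vstack([X, best.x])
            y = np.append(y, expensive_model(best.x))  # one expensive run per iteration

        print(f"best input found: {X[np.argmin(y)].round(3)}, value {y.min():.4f}")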

  10. Model predictive control approach for a CPAP-device

    Directory of Open Access Journals (Sweden)

    Scheel Mathias

    2017-09-01

    Full Text Available The obstructive sleep apnoea syndrome (OSAS) is characterized by a collapse of the upper respiratory tract, resulting in a reduction of the blood oxygen concentration and an increase of the carbon dioxide (CO2) concentration, which causes repeated sleep disruptions. The gold standard for treating OSAS is continuous positive airway pressure (CPAP) therapy. The continuous pressure keeps the upper airway open and prevents the collapse of the upper respiratory tract and the pharynx. Most of the available CPAP devices cannot maintain the pressure reference [1]. In this work a model predictive control approach is provided. This control approach has the possibility to include the patient's breathing effort in the calculation of the control variable. Therefore a patient-individualized control strategy can be developed.
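
    A receding-horizon controller of this kind can be sketched in a few lines, assuming a one-state linear mask-pressure model p[k+1] = a*p[k] + b*u[k] + d[k] with a predicted breathing-effort disturbance d. The coefficients, horizon, and limits below are invented for illustration and are not from the paper.

        import numpy as np
        from scipy.optimize import minimize

        a, b = 0.9, 0.5                          # assumed pressure dynamics
        p_ref, H, lam = 10.0, 10, 0.01           # target [cmH2O], horizon, input penalty

        def mpc_step(p0, d_pred):
            def cost(u):
                p, J = p0, 0.0
                for k in range(H):
                    p = a * p + b * u[k] + d_pred[k]
                    J += (p - p_ref)**2 + lam * u[k]**2
                return J
            u = minimize(cost, np.zeros(H), bounds=[(0, 30)] * H).x
            return u[0]                          # receding horizon: apply first move

        p, log = 4.0, []
        for t in range(60):                      # simulate two breath cycles
            d_pred = -1.5 * np.sin(2 * np.pi * (t + np.arange(H)) / 30)
            u = mpc_step(p, d_pred)
            p = a * p + b * u + d_pred[0]
            log.append(p)
        print(f"steady tracking error ~ {abs(np.mean(log[30:]) - p_ref):.2f} cmH2O")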

  11. Smeared crack modelling approach for corrosion-induced concrete damage

    DEFF Research Database (Denmark)

    Thybo, Anna Emilie Anusha; Michel, Alexander; Stang, Henrik

    2017-01-01

    In this paper a smeared crack modelling approach is used to simulate corrosion-induced damage in reinforced concrete. The presented modelling approach utilizes a thermal analogy to mimic the expansive nature of solid corrosion products, while taking into account the penetration of corrosion products into the surrounding concrete, non-uniform precipitation of corrosion products, and creep. To demonstrate the applicability of the presented modelling approach, numerical predictions in terms of corrosion-induced deformations as well as the formation and propagation of micro- and macrocracks were compared to experimental observations, and the approach was found to capture characteristic corrosion-induced damage phenomena in reinforced concrete. Moreover, good agreements were also found between experimental and numerical data for corrosion-induced deformations along the circumference of the reinforcement.

  12. A Nonparametric Operational Risk Modeling Approach Based on Cornish-Fisher Expansion

    Directory of Open Access Journals (Sweden)

    Xiaoqian Zhu

    2014-01-01

    Full Text Available It is generally accepted that the choice of severity distribution in the loss distribution approach has a significant effect on operational risk capital estimation. However, the commonly used parametric approaches with predefined distribution assumptions might not be able to fit the severity distribution accurately. The objective of this paper is to propose a nonparametric operational risk modeling approach based on the Cornish-Fisher expansion. In this approach, samples of the severity are generated by the Cornish-Fisher expansion and then used in a Monte Carlo simulation to sketch the annual operational loss distribution. In the experiment, the proposed approach is employed to calculate the operational risk capital charge for the Chinese banking sector as a whole. The experiment dataset is the most comprehensive operational risk dataset in China as far as we know. The results show that the proposed approach is able to use the information of higher-order moments and might be more effective and stable than the commonly used parametric approaches.
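
    The pipeline (Cornish-Fisher severity sampling inside a Monte Carlo loss distribution approach) follows directly from the standard fourth-order Cornish-Fisher expansion. All moments and the Poisson frequency below are illustrative, not the banking dataset; a production implementation would also check the expansion's validity (monotonicity) domain.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(4)
        mu, sd, skew, kurt_ex = 1.0e5, 2.0e5, 3.5, 20.0  # illustrative severity moments

        def cf_quantile(u):
            """Cornish-Fisher: adjust standard-normal quantiles for skew/kurtosis."""
            z = norm.ppf(u)
            z_cf = (z + (z**2 - 1) * skew / 6
                      + (z**3 - 3*z) * kurt_ex / 24
                      - (2*z**3 - 5*z) * skew**2 / 36)
            return np.maximum(mu + sd * z_cf, 0.0)   # severities are non-negative

        # Loss distribution approach: Poisson frequency compounded with CF severities
        lam, years = 25, 20_000
        annual = np.array([cf_quantile(rng.random(n)).sum()
                           for n in rng.poisson(lam, years)])

        print(f"mean annual loss: {annual.mean():,.0f}")
        print(f"99.9% VaR (capital charge): {np.percentile(annual, 99.9):,.0f}")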

  13. Lattice Hamiltonian approach to the massless Schwinger model. Precise extraction of the mass gap

    International Nuclear Information System (INIS)

    Cichy, Krzysztof; Poznan Univ.; Kujawa-Cichy, Agnieszka; Szyniszewski, Marcin; Manchester Univ.

    2012-12-01

    We present results of applying the Hamiltonian approach to the massless Schwinger model. A finite basis is constructed using the strong coupling expansion to a very high order. Using exact diagonalization, the continuum limit can be reliably approached. This allows us to reproduce the analytical results for the ground state energy, as well as the vector and scalar mass gaps, to an outstanding precision better than 10^-6 %.

  14. An object-oriented approach to energy-economic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wise, M.A.; Fox, J.A.; Sands, R.D.

    1993-12-01

    In this paper, the authors discuss their experience in creating an object-oriented economic model of the U.S. energy and agriculture markets. After a discussion of some central concepts, they provide an overview of the model, focusing on the methodology of designing an object-oriented class hierarchy specification based on standard microeconomic production functions. The evolution of the model from the class definition stage to its programming in C++, a standard object-oriented programming language, is detailed. The authors then discuss the main differences between writing the object-oriented program and a procedure-oriented program of the same model. Finally, they conclude with a discussion of the advantages and limitations of the object-oriented approach, based on their experience building energy-economic models with procedure-oriented approaches and languages.

  15. A Simplified Micromechanical Modeling Approach to Predict the Tensile Flow Curve Behavior of Dual-Phase Steels

    Science.gov (United States)

    Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal

    2017-11-01

    Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels, and the modeling approach developed in this work attempts to overcome specific limitations of both. The approach combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model, and the results matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
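
    The two-step recipe translates into very little code. The sketch below assumes a Bergström-type dislocation-hardening law (one common choice for DP steels; the paper's exact constitutive form may differ) with invented phase parameters, and then blends the two phase curves by an iso-strain rule of mixtures.

        import numpy as np

        alpha, M, mu, b = 0.33, 3.0, 80e3, 2.5e-10   # mu in MPa, b in m (illustrative)

        def phase_flow(eps, sigma0, k, L):
            """sigma = sigma0 + alpha*M*mu*sqrt(b) * sqrt((1 - exp(-M*k*eps))/(k*L))"""
            return sigma0 + alpha * M * mu * np.sqrt(b) * np.sqrt(
                (1 - np.exp(-M * k * eps)) / (k * L))

        eps = np.linspace(1e-4, 0.15, 200)
        # Hypothetical phase parameters: friction stress, recovery rate, mean free path
        sigma_f = phase_flow(eps, sigma0=300.0, k=5.0, L=5e-6)     # ferrite
        sigma_m = phase_flow(eps, sigma0=1100.0, k=40.0, L=4e-8)   # martensite

        f_m = 0.35                                      # martensite volume fraction
        sigma_dp = f_m * sigma_m + (1 - f_m) * sigma_f  # iso-strain rule of mixtures

        print(f"DP flow stress at 5% strain ~ {np.interp(0.05, eps, sigma_dp):.0f} MPa")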

  16. Axial turbomachine modelling with a 1D axisymmetric approach

    International Nuclear Information System (INIS)

    Tauveron, Nicolas; Saez, Manuel; Ferrand, Pascal; Leboeuf, Francis

    2007-01-01

    This work concerns the design and safety analysis of direct-cycle gas-cooled reactors. The estimation of compressor and turbine performance in transient operations is of high importance to the designer. The first goal of this study is to provide a description of compressor behaviour in unstable conditions with a better understanding than models based on performance maps (the 'traditional' 0D approach). A supplementary objective is to provide a coherent description of turbine behaviour. The turbomachine modelling approach consists in the solution of 1D axisymmetric Navier-Stokes equations on an axial grid inside the turbomachine: mass, axial momentum, circumferential momentum and total-enthalpy balances are written. Blade forces are taken into account by using steady compressor or turbine blade cascade correlations. A particular effort has been made to generate or test correlations in low and negative mass flow regimes, based on experimental data. The model is tested on open-literature cases from the aircraft gas turbine community. For both compressor and turbine, steady situations are fairly well described, especially for medium and high mass flow rates. The dynamic behaviour of the compressor is also quite well described, even in unstable operation (surge): qualitative tendencies (role of plenum volume and role of throttle) and some quantitative characteristics (frequency) are in good agreement with experimental data. The application to transient simulations of gas-cooled nuclear reactors concentrates on the hypothetical 10 in. break accident. The results point out the importance of the location of the pipe rupture in a hypothetical break event. In some detailed cases, compressor surge and back flow through the circuit can occur. In order to be used in a design phase, a simplified model of surge has also been developed. This simplified model is applied to the gas fast reactor (GFR) and compares quite favourably with the 1D axisymmetric simulation results.

  17. Multiscale approach to equilibrating model polymer melts

    DEFF Research Database (Denmark)

    Svaneborg, Carsten; Ali Karimi-Varzaneh, Hossein; Hojdis, Nils

    2016-01-01

    We present an effective and simple multiscale method for equilibrating Kremer-Grest model polymer melts of varying stiffness. In our approach, we progressively equilibrate the melt structure above the tube scale, inside the tube and finally at the monomeric scale. We make use of models designed

  18. Electromagnetic forward modelling for realistic Earth models using unstructured tetrahedral meshes and a meshfree approach

    Science.gov (United States)

    Farquharson, C.; Long, J.; Lu, X.; Lelievre, P. G.

    2017-12-01

    Real-life geology is complex, and so, even when allowing for the diffusive, low-resolution nature of geophysical electromagnetic methods, we need Earth models that can accurately represent this complexity when modelling and inverting electromagnetic data. This is particularly the case for the scales, detail and conductivity contrasts involved in mineral and hydrocarbon exploration and development, but also for the larger scale of lithospheric studies. Unstructured tetrahedral meshes provide a flexible means of discretizing a general, arbitrary Earth model. This is important when wanting to integrate a geophysical Earth model with a geological Earth model parameterized in terms of surfaces. Finite-element and finite-volume methods can be derived for computing the electric and magnetic fields in a model parameterized using an unstructured tetrahedral mesh. A number of such variants have been proposed and have proven successful. However, the efficiency and accuracy of these methods can be affected by the "quality" of the tetrahedral discretization, that is, by how many of the tetrahedral cells in the mesh are long, narrow and pointy. This is particularly the case if one wants to use an iterative technique to solve the resulting linear system of equations. One approach to dealing with this issue is to develop sophisticated model and mesh building and manipulation capabilities in order to ensure that any mesh built from geological information is of sufficient quality for the electromagnetic modelling. Another approach is to investigate other methods of synthesizing the electromagnetic fields. One such example is a "meshfree" approach in which the electromagnetic fields are synthesized using a mesh that is distinct from the mesh used to parameterize the Earth model. There are then two meshes, one describing the Earth model and one used for the numerical mathematics of computing the fields. This means that there are no longer any quality requirements on the model mesh, which

  19. Development of a noise prediction model based on advanced fuzzy approaches in typical industrial workrooms.

    Science.gov (United States)

    Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir

    2014-01-01

    Noise prediction is considered to be the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analyzing the complex relationships among the acoustic features affecting noise levels in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, east of Iran. The main acoustic and embroidery process features that influence the noise were used to develop prediction models using MATLAB software. The multiple regression technique was also employed and its results were compared with those of the fuzzy approaches. The prediction errors of all models based on fuzzy approaches were within the acceptable level (lower than one dB). However, the neuro-fuzzy model (RMSE = 0.53 dB and R2 = 0.88) could slightly improve the accuracy of noise prediction compared with the generated fuzzy model. Moreover, the fuzzy approaches provided more accurate predictions than the regression technique. The developed models based on fuzzy approaches are useful prediction tools that give professionals the opportunity to make optimal decisions about the effectiveness of acoustic treatment scenarios in embroidery workrooms.

  20. [New approaches in pharmacology: numerical modelling and simulation].

    Science.gov (United States)

    Boissel, Jean-Pierre; Cucherat, Michel; Nony, Patrice; Dronne, Marie-Aimée; Kassaï, Behrouz; Chabaud, Sylvie

    2005-01-01

    The complexity of pathophysiological mechanisms is beyond the capabilities of traditional approaches. Many of the decision-making problems in public health, such as initiating mass screening, are complex. Progress in genomics and proteomics, and the resulting extraordinary increase in knowledge with regard to interactions between gene expression, the environment and behaviour, the customisation of risk factors and the need to combine therapies that individually have minimal though well documented efficacy, has led doctors to raise new questions: how to optimise choice and the application of therapeutic strategies at the individual rather than the group level, while taking into account all the available evidence? This is essentially a problem of complexity with dimensions similar to the previous ones: multiple parameters with nonlinear relationships between them, varying time scales that cannot be ignored etc. Numerical modelling and simulation (in silico investigations) have the potential to meet these challenges. Such approaches are considered in drug innovation and development. They require a multidisciplinary approach, and this will involve modification of the way research in pharmacology is conducted.

  1. Relationship Marketing results: proposition of a cognitive mapping model

    Directory of Open Access Journals (Sweden)

    Iná Futino Barreto

    2015-12-01

    Full Text Available Objective – This research sought to develop a cognitive model that expresses how marketing professionals understand the relationships between the constructs that define relationship marketing (RM). It also tried to understand, using the obtained model, how objectives in this field are achieved. Design/methodology/approach – Through cognitive mapping, we traced 35 individual mental maps, highlighting how each respondent understands the interactions between RM elements. Based on the views of these individuals, we established an aggregate mental map. Theoretical foundation – The topic is based on a literature review that explores the RM concept and its main elements; based on this review, we listed eleven main constructs. Findings – We established an aggregate mental map that represents the RM structural model. Model analysis identified that customer lifetime value (CLV) is understood as the final result of RM. We also observed that the impact of most of the RM elements on CLV is mediated by loyalty. Personalization and quality, on the other hand, proved to be process input elements, and are the ones that most strongly impact the others. Finally, we highlight that elements that punish customers are much less effective than elements that benefit them. Contributions – The model was able to incorporate core elements of RM that are absent from most formal models: CLV and customization. The analysis allowed us to understand the interactions between the RM elements and how the end result of RM (CLV) is formed. This understanding improves knowledge on the subject and helps guide, assess and correct actions.

  2. A novel approach to modeling and diagnosing the cardiovascular system

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)

    1995-07-01

    A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.

  3. A Network-Based Approach to Modeling and Predicting Product Coconsideration Relations

    Directory of Open Access Journals (Sweden)

    Zhenghui Sha

    2018-01-01

    Full Text Available Understanding customer preferences in consideration decisions is critical to choice modeling in engineering design. While the existing literature has shown that exogenous effects (e.g., product and customer attributes) are deciding factors in customers' consideration decisions, it is not clear how endogenous effects (e.g., the competition among products) influence such decisions. This paper presents a network-based approach, based on Exponential Random Graph Models, to study customers' consideration behaviors in engineering design. Our proposed approach is capable of modeling the endogenous effects among products through various network structures (e.g., stars and triangles), in addition to the exogenous effects, and of predicting whether two products will be considered together. To assess the proposed model, we compare it against a dyadic network model that considers only exogenous effects. Using buyer survey data from the China auto market in 2013 and 2014, we evaluate the goodness of fit and the predictive power of the two models. The results show that our model has a better fit and predictive accuracy than the dyadic network model. This underscores the importance of the endogenous effects on customers' consideration decisions. The insights gained from this research help explain how endogenous effects interact with exogenous effects in affecting customers' decision-making.

  4. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
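
    As a rough illustration of the conversion idea described above (not the authors' code; all data below are invented), a linear model mapping a reduced-set sum to the full 209-congener sum could be fitted and applied like this:

```python
# Sketch of a congener-set conversion model: fit a linear regression mapping
# sums over a reduced congener set to full-sum concentrations, then use it to
# convert legacy results. The arrays are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
sum_209 = rng.lognormal(mean=3.0, sigma=0.8, size=40)   # hypothetical full-sum PCB (ng/g)
sum_119 = 0.93 * sum_209 * rng.normal(1.0, 0.05, 40)    # reduced set capturing ~93%

model = LinearRegression().fit(sum_119.reshape(-1, 1), sum_209)
estimated_209 = model.predict(sum_119.reshape(-1, 1))
rpd = 100 * np.abs(estimated_209 - sum_209) / ((estimated_209 + sum_209) / 2)
print(f"slope = {model.coef_[0]:.3f}, mean relative percent difference = {rpd.mean():.1f}%")
```

    The slope near 1/0.93 reflects the proportional contribution of the reduced congener set, mirroring the paper's finding that the 119-congener set captures most of Σ209PCB.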

  5. Analysis student self efficacy in terms of using Discovery Learning model with SAVI approach

    Science.gov (United States)

    Sahara, Rifki; Mardiyana, S., Dewi Retno Sari

    2017-12-01

    Often students are unable to demonstrate academic achievement that matches their abilities. One reason is that they often feel unsure that they are capable of completing the tasks assigned to them. For students, such beliefs are necessary; this kind of belief is called self-efficacy. Self-efficacy is not innate or a permanent quality of an individual, but the result of cognitive processes, meaning that one's self-efficacy can be stimulated through learning activities. Self-efficacy can be developed and enhanced by a learning model that stimulates students to build confidence in their capabilities. One such model is the Discovery Learning model with the SAVI approach, a learning model that involves the active participation of students in exploring and discovering their own knowledge and using it in problem solving by utilizing all of their senses. This naturalistic qualitative study analyzes student self-efficacy in the context of using the Discovery Learning model with the SAVI approach. The subjects were 30 students, with a focus on eight students with high, medium, and low self-efficacy selected through a purposive sampling technique. Data analysis proceeded in three stages: data reduction, data display, and drawing conclusions. Based on the results of the data analysis, it was concluded that the dimension of self-efficacy that appeared most dominantly in learning with the Discovery Learning model and SAVI approach was the magnitude dimension.

  6. Rate-equation modelling and ensemble approach to extraction of parameters for viral infection-induced cell apoptosis and necrosis

    Energy Technology Data Exchange (ETDEWEB)

    Domanskyi, Sergii; Schilling, Joshua E.; Privman, Vladimir, E-mail: privman@clarkson.edu [Department of Physics, Clarkson University, Potsdam, New York 13676 (United States); Gorshkov, Vyacheslav [National Technical University of Ukraine — KPI, Kiev 03056 (Ukraine); Libert, Sergiy, E-mail: libert@cornell.edu [Department of Biomedical Sciences, Cornell University, Ithaca, New York 14853 (United States)

    2016-09-07

    We develop a theoretical approach that uses physiochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of “stiff” equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.
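
    The "stiff" rate-equation character mentioned above can be illustrated with a toy cell-population system integrated by an implicit solver; the species, rate constants and initial values below are invented and are not those of the paper:

```python
# Illustrative-only rate-equation sketch: healthy cells become infected, and
# infected cells die by apoptosis or necrosis. The system is integrated with
# an implicit (BDF) method of the kind used for stiff equations.
import numpy as np
from scipy.integrate import solve_ivp

k_inf, k_apo, k_nec = 5.0, 0.8, 0.05   # hypothetical rate constants (1/h)

def rhs(t, y):
    healthy, infected, apoptotic, necrotic = y
    return [-k_inf * healthy * infected,
            k_inf * healthy * infected - (k_apo + k_nec) * infected,
            k_apo * infected,
            k_nec * infected]

sol = solve_ivp(rhs, (0.0, 48.0), [1.0, 1e-4, 0.0, 0.0], method="BDF")
print("final population fractions:", np.round(sol.y[:, -1], 4))
```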

  7. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA [DRAFT] DETC2015-46982: Development of a Conservative Model Validation Approach for Reliable Analysis. The aim is to obtain a conservative simulation model for reliable design even with limited experimental data; very little research has taken into account the ... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach, and Section 4 describes how to account ...

  8. A Model-Driven Approach for 3D Modeling of Pylon from Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Qingquan Li

    2015-09-01

    Full Text Available Reconstructing three-dimensional models of pylons from LiDAR (Light Detection And Ranging) point clouds automatically is one of the key techniques for the facilities-management GIS of a nationwide high-voltage transmission smart grid. This paper presents a model-driven three-dimensional pylon modeling (MD3DM) method using airborne LiDAR data. We start by constructing a parametric model of the pylon, based on its actual structure and the characteristics of the point cloud data. In this model, a pylon is divided into three parts: legs, body and head. The modeling approach consists of four main steps. First, point clouds of individual pylons are detected and segmented automatically from massive high-voltage transmission corridor point clouds. Second, an individual pylon is divided into three relatively simple parts so that different parts can be reconstructed with different strategies; the pylon's position and direction are extracted by contour analysis of the pylon body at this stage. Third, the geometric features of the pylon head are extracted, from which the head type is derived with an SVM (Support Vector Machine) classifier; the head is then constructed by retrieving the corresponding model from a pre-built model library. Finally, the body is modeled by fitting the point cloud to planes. Experimental results on several point cloud data sets from the China Southern high-voltage transmission grid, from Yunnan Province to Guangdong Province, show that the proposed approach can achieve automatic three-dimensional modeling of pylons effectively.
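
    The head-type classification step could, for instance, be sketched with scikit-learn's SVC on a few invented geometric features; the paper's actual feature set and head classes are not specified in the abstract, so everything below is a placeholder:

```python
# Hedged sketch of SVM-based head-type classification on synthetic features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# hypothetical features: [head width/height ratio, crossarm count, point density]
X = np.vstack([rng.normal([1.2, 2, 40], 0.2, (20, 3)),    # invented head type A
               rng.normal([2.5, 3, 55], 0.2, (20, 3))])   # invented head type B
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)
print("predicted head type:", clf.predict([[1.3, 2.1, 42]]))
```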

  9. An Adaptive Agent-Based Model of Homing Pigeons: A Genetic Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Francis Oloo

    2017-01-01

    Full Text Available Conventionally, agent-based modelling approaches start from a conceptual model capturing the theoretical understanding of the systems of interest, and simulation outcomes are then used "at the end" to validate the conceptual understanding. In today's data-rich era, there are suggestions that models should be data-driven. Data-driven workflows are common in mathematical models; however, their application to agent-based models is still in its infancy. Integration of real-time sensor data into modelling workflows opens up the possibility of comparing simulations against real data during the model run. Calibration and validation procedures thus become automated processes that are iteratively executed during the simulation. We hypothesize that incorporation of real-time sensor data into agent-based models improves the predictive ability of such models, and in particular that such integration results in increasingly well-calibrated model parameters and rule sets. In this contribution, we explore this question by implementing a flocking model that evolves in real time. Specifically, we use a genetic algorithm approach to simulate representative parameters describing the flight routes of homing pigeons. The navigation parameters of the pigeons are simulated, dynamically evaluated against emulated GPS sensor data streams, and optimised based on the fitness of candidate parameters. As a result, the model was able to accurately simulate the relative turn angles and step distances of homing pigeons. Further, the optimised parameters could replicate loops, which are common patterns in the flight tracks of homing pigeons. Finally, the use of genetic algorithms in this study allowed for simultaneous data-driven optimization and sensitivity analysis.
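
    A minimal sketch of the genetic-algorithm loop described above (selection, crossover, mutation against a fitness derived from emulated sensor data); the two parameters and all numbers are invented, not the paper's:

```python
# Toy GA: evolve two flight parameters (turn angle, step distance) toward an
# "observed" GPS-derived target. Selection keeps the fittest, children are
# formed by averaging parents (crossover) plus Gaussian noise (mutation).
import random

target = (12.0, 85.0)                       # hypothetical turn angle (deg), step (m)

def fitness(ind):
    return -((ind[0] - target[0])**2 + ((ind[1] - target[1]) / 10)**2)

pop = [(random.uniform(0, 90), random.uniform(10, 200)) for _ in range(30)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        children.append(((a[0] + b[0]) / 2 + random.gauss(0, 2),
                         (a[1] + b[1]) / 2 + random.gauss(0, 5)))
    pop = parents + children
print("best candidate parameters:", max(pop, key=fitness))
```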

  10. A multi-objective approach to improve SWAT model calibration in alpine catchments

    Science.gov (United States)

    Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele

    2018-04-01

    Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.
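
    One way to read the multi-objective procedure is as a composite score over discharge and SWE; the sketch below uses a simple weighted Nash-Sutcliffe combination, where the weights, data and aggregation rule are assumptions for illustration, not the paper's calibration setup:

```python
# Hedged sketch of a multi-objective calibration criterion combining
# Nash-Sutcliffe efficiencies for discharge (Q) and snow water equivalent (SWE).
import numpy as np

def nse(sim, obs):
    return 1 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

def multi_objective(q_sim, q_obs, swe_sim, swe_obs, w_q=0.5, w_swe=0.5):
    return w_q * nse(q_sim, q_obs) + w_swe * nse(swe_sim, swe_obs)

rng = np.random.default_rng(1)
q_obs = rng.gamma(2.0, 5.0, 365); q_sim = q_obs * rng.normal(1.0, 0.1, 365)
swe_obs = np.abs(rng.normal(100, 30, 52)); swe_sim = swe_obs * rng.normal(1.0, 0.15, 52)
print(f"combined score: {multi_objective(q_sim, q_obs, swe_sim, swe_obs):.3f}")
```

    A parameter set that fits discharge while misrepresenting snow storage is penalized by the SWE term, which is the intuition behind calibrating against both observations.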

  11. Rescaled Local Interaction Simulation Approach for Shear Wave Propagation Modelling in Magnetic Resonance Elastography

    Directory of Open Access Journals (Sweden)

    Z. Hashemiyan

    2016-01-01

    Full Text Available Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort.

  13. A dual model approach to ground water recovery trench design

    International Nuclear Information System (INIS)

    Clodfelter, C.L.; Crouch, M.S.

    1992-01-01

    The design of trenches for contaminated ground water recovery must consider several variables. This paper presents a dual-model approach for effectively recovering contaminated ground water migrating toward a trench by advection. The approach involves an analytical model to determine the vertical influence of the trench and a numerical flow model to determine the capture zone within the trench and the surrounding aquifer. The analytical model is utilized by varying trench dimensions and head values to design a trench which meets the remediation criteria. The numerical flow model is utilized to select the type of backfill and location of sumps within the trench. The dual-model approach can be used to design a recovery trench which effectively captures advective migration of contaminants in the vertical and horizontal planes

  14. Lattice Hamiltonian approach to the massless Schwinger model. Precise extraction of the mass gap

    Energy Technology Data Exchange (ETDEWEB)

    Cichy, Krzysztof [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Poznan Univ. (Poland). Faculty of Physics; Kujawa-Cichy, Agnieszka [Poznan Univ. (Poland). Faculty of Physics; Szyniszewski, Marcin [Poznan Univ. (Poland). Faculty of Physics; Manchester Univ. (United Kingdom). NOWNano DTC

    2012-12-15

    We present results of applying the Hamiltonian approach to the massless Schwinger model. A finite basis is constructed using the strong coupling expansion to a very high order. Using exact diagonalization, the continuum limit can be reliably approached. This allows us to reproduce the analytical results for the ground state energy, as well as the vector and scalar mass gaps, to an outstanding precision better than 10⁻⁶ %.

  15. A robust quantitative near infrared modeling approach for blend monitoring.

    Science.gov (United States)

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing near-infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by traditional method development (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop near-infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.

  16. An ontology-based approach for modelling architectural styles

    OpenAIRE

    Pahl, Claus; Giesecke, Simon; Hasselbring, Wilhelm

    2007-01-01

    peer-reviewed The conceptual modelling of software architectures is of central importance for the quality of a software system. A rich modelling language is required to integrate the different aspects of architecture modelling, such as architectural styles, structural and behavioural modelling, into a coherent framework. We propose an ontological approach for architectural style modelling based on description logic as an abstract, meta-level modelling instrument.

  17. An Approach to Enforcing Clark-Wilson Model in Role-based Access Control Model

    Institute of Scientific and Technical Information of China (English)

    LIANG Bin; SHI Wenchang; SUN Yufang; SUN Bo

    2004-01-01

    Using one security model to enforce another is a promising approach to multi-policy support. In this paper, an approach to enforcing the Clark-Wilson data integrity model within the Role-based access control (RBAC) model is proposed. A highly feasible enforcement construction is presented. In this construction, a direct way to enforce the Clark-Wilson model is provided, the corresponding relations among users, transformation procedures, and constrained data items are strengthened, and the concepts of task and subtask are introduced to enhance support for least privilege. The proposed approach widens the applicability of RBAC. A theoretical foundation is offered for adopting the Clark-Wilson model in an RBAC system at small cost, meeting the requirements of multi-policy support and policy flexibility.
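
    A toy sketch of the Clark-Wilson "access triple" check routed through roles; the user, role and triple names are invented for illustration and this is not the paper's construction:

```python
# Minimal access-triple check: a user may run a transformation procedure (TP)
# on a constrained data item (CDI) only if the (role, TP, CDI) triple is
# authorized, with the user-to-role mapping supplied by the RBAC layer.
role_of = {"alice": "teller"}
triples = {("teller", "post_transaction", "ledger")}   # authorized (role, TP, CDI)

def may_execute(user: str, tp: str, cdi: str) -> bool:
    return (role_of.get(user), tp, cdi) in triples

print(may_execute("alice", "post_transaction", "ledger"))  # True
print(may_execute("alice", "edit_audit_log", "ledger"))    # False
```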

  18. Modelling approaches to the dewetting of evaporating thin films of nanoparticle suspensions

    International Nuclear Information System (INIS)

    Thiele, U; Vancea, I; Archer, A J; Robbins, M J; Frastia, L; Stannard, A; Pauliac-Vaujour, E; Martin, C P; Blunt, M O; Moriarty, P J

    2009-01-01

    We review recent experiments on dewetting thin films of evaporating colloidal nanoparticle suspensions (nanofluids) and discuss several theoretical approaches to describe the ongoing processes including coupled transport and phase changes. These approaches range from microscopic discrete stochastic theories to mesoscopic continuous deterministic descriptions. In particular, we describe (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model. Models (i) and (ii) are employed to discuss the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor film' that remains behind a mesoscopic dewetting front. We highlight, in particular, the presence of a transverse instability in the evaporative dewetting front, which results in highly branched fingering structures. The subtle interplay of decomposition in the film and contact line motion is discussed. Finally, we discuss a simple thin film model (iii) of the hydrodynamics on the mesoscale. We employ coupled evolution equations for the film thickness profile and mean particle concentration. The model is used to discuss the self-pinning and depinning of a contact line related to the 'coffee-stain' effect. In the course of the review we discuss the advantages and limitations of the different theories, as well as possible future developments and extensions.

  19. A Generalized Hybrid Multiscale Modeling Approach for Flow and Reactive Transport in Porous Media

    Science.gov (United States)

    Yang, X.; Meng, X.; Tang, Y. H.; Guo, Z.; Karniadakis, G. E.

    2017-12-01

    Using emerging understanding of biological and environmental processes at fundamental scales to advance predictions of larger system behavior requires the development of multiscale approaches, and there is strong interest in coupling models at different scales together in a hybrid multiscale simulation framework. A limited number of hybrid multiscale simulation methods have been developed for subsurface applications, mostly using application-specific approaches for model coupling. The proposed generalized hybrid multiscale approach is designed with minimal intrusiveness to the pre-selected at-scale simulators and provides a set of lightweight C++ scripts to manage a complex multiscale workflow utilizing a concurrent coupling approach. The workflow includes the at-scale simulators (using the lattice-Boltzmann method, LBM, at the pore and Darcy scales, respectively), scripts for boundary treatment (coupling and kriging), and a multiscale universal interface (MUI) for data exchange. The current study applies the generalized hybrid multiscale modeling approach to couple pore- and Darcy-scale models for flow and mixing-controlled reaction with precipitation/dissolution in heterogeneous porous media. The model domain is heterogeneously packed, so that the mixing front geometry is more complex and not known a priori. To address these challenges, the generalized hybrid multiscale modeling approach is further developed to (1) adaptively define the locations of pore-scale subdomains, (2) provide a suite of physical boundary coupling schemes, and (3) account for the dynamic change of the pore structures due to mineral precipitation/dissolution. The results are validated and evaluated by comparison with single-scale simulations in terms of velocities, reactive concentrations and computing cost.

  20. A modular approach to numerical human body modeling

    NARCIS (Netherlands)

    Forbes, P.A.; Griotto, G.; Rooij, L. van

    2007-01-01

    The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body model.

  1. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. Disaster tolerance is incorporated into a system by simulating various threats to system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to their large computational complexity requirements. To address this limitation, an axiomatic approach is developed that decomposes a large-scale system into smaller subsystems that can be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to those of the full-system approach, but with greatly reduced simulation time.

  2. Variational approach for the N-state spin and gauge Potts model

    International Nuclear Information System (INIS)

    Masperi, L.; Omero, C.

    1981-05-01

    A Hamiltonian variational treatment is applied both to the spin Potts model and to its gauge version for any number of states N and spatial dimensions d ≥ 2. Regarding the former, we reproduce the correct critical coupling and latent heat for N and d that are not too low. For the latter, our approach turns the gauge theory into an equivalent d-dimensional classical spin model, which, evaluated for d+1 = 4, gives results in agreement with 1/N expansions. (author)

  3. A systemic approach to modelling of radiobiological effects

    International Nuclear Information System (INIS)

    Obaturov, G.M.

    1988-01-01

    Basic principles of the systemic approach to modelling radiobiological effects at different levels of cell organization have been formulated. A methodology is proposed for theoretical modelling of the effects at these levels.

  4. A novel multi-model probability battery state of charge estimation approach for electric vehicles using H-infinity algorithm

    International Nuclear Information System (INIS)

    Lin, Cheng; Mu, Hao; Xiong, Rui; Shen, Weixiang

    2016-01-01

    Highlights:
    • A novel multi-model probability battery SOC fusion estimation approach is proposed.
    • The linear matrix inequality-based H∞ technique is employed to estimate the SOC.
    • Bayes' theorem is employed to obtain the optimal weights for the fusion.
    • The robustness of the proposed approach is verified on different batteries.
    • The results show that the proposed method improves global estimation accuracy.

    Abstract: Due to the strong nonlinearity and complex time-variant properties of batteries, existing state of charge (SOC) estimation approaches based on a single equivalent circuit model (ECM) cannot provide accurate SOC estimates over the entire discharging period. This paper presents a novel SOC estimation approach based on the fusion of multiple ECMs to improve practical application performance. In the proposed approach, three battery ECMs, namely the Thevenin model, the double polarization model and the 3rd-order RC model, are selected to describe the dynamic voltage of lithium-ion batteries, and a genetic algorithm is then used to determine the model parameters. The linear matrix inequality-based H-infinity technique is employed to estimate the SOC from the three models, and a Bayes theorem-based probability method is employed to determine the optimal weights for synthesizing the SOCs estimated from the three models. Two types of lithium-ion batteries are used to verify the feasibility and robustness of the proposed approach. The results indicate that the proposed approach can improve the accuracy and reliability of SOC estimation against uncertain battery materials and inaccurate initial states.
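
    A sketch of the Bayes-weighted fusion step, assuming Gaussian voltage-residual likelihoods; the three SOC estimates, residuals and noise level below are invented placeholders, not values from the paper:

```python
# Probability-weighted fusion of SOC estimates from three hypothetical ECMs:
# each model's weight is its posterior probability given its voltage residual.
import numpy as np

soc = np.array([0.62, 0.60, 0.645])          # SOC estimates from 3 models
residual = np.array([0.012, 0.004, 0.020])   # voltage residuals (V) of each model
sigma = 0.01                                  # assumed measurement-noise std (V)

likelihood = np.exp(-0.5 * (residual / sigma)**2)
prior = np.full(3, 1/3)                       # equal priors over the models
posterior = likelihood * prior / np.sum(likelihood * prior)
print("weights:", np.round(posterior, 3), " fused SOC:", round(float(soc @ posterior), 4))
```

    The model that best explains the measured terminal voltage dominates the fused estimate, which is the intuition behind using Bayes' theorem for the weights.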

  5. Toward a Model-Based Approach to Flight System Fault Protection

    Science.gov (United States)

    Day, John; Murray, Alex; Meakin, Peter

    2012-01-01

    Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system systems engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP-related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated by these analyses effectively constitutes separate models that are only loosely related to the system being designed. Developing approaches that enable modeling of FP concerns in the same model as the system hardware and software design allows the establishment of formal relationships, which has great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in flight software (FSW) engineering. The analyses are small examples, inspired by FP analyses from current real projects.

  6. A model-based approach to adjust microwave observations for operational applications: results of a campaign at Munich Airport in winter 2011/2012

    Directory of Open Access Journals (Sweden)

    J. Güldner

    2013-10-01

    This study describes the use of model-based regression operators to provide unbiased vertical profiles during the campaign at Munich Airport. The results of this algorithm and the retrievals of a neural network specially developed for the site are compared with radiosondes from Oberschleißheim, located about 10 km from the MWRP site. Notable deviations for the lowest levels, between 50 and 100 m, are discussed. Analogously to the airport experiment, a model-based regression operator was calculated for Lindenberg and compared with both radiosondes and operational results of observation-based methods. The bias of the retrievals could be considerably reduced, and the accuracy, as assessed for the airport site, is quite similar to that of the operational radiometer site at Lindenberg above 1 km height. Additional investigations were made to determine the length of the training period necessary for generating best estimates; three months proved to be adequate. The results of the study show that, on the basis of numerical weather prediction (NWP) model data, which are available everywhere at any time, the model-based regression method is capable of providing comparable results at a multitude of sites. Furthermore, the approach offers auspicious conditions for automation and continuous updating.

  7. An integrated approach to model strain localization bands in magnesium alloys

    Science.gov (United States)

    Baxevanakis, K. P.; Mo, C.; Cabal, M.; Kontsos, A.

    2018-02-01

    Strain localization bands (SLBs) that appear at early stages of deformation of magnesium alloys have recently been associated with heterogeneous activation of deformation twinning. Experimental evidence has demonstrated that such "Lüders-type" band formations dominate the overall mechanical behavior of these alloys, resulting in sigmoidal stress-strain curves with a distinct plateau followed by pronounced anisotropic hardening. To evaluate the role of SLB formation on the local and global mechanical behavior of magnesium alloys, an integrated experimental/computational approach is presented. The computational part is developed based on custom subroutines implemented in a finite element method that combine a plasticity model with a stiffness degradation approach. Specific inputs from the characterization and testing measurements to the computational approach are discussed, and the numerical results are validated against the available experimental information, confirming the existence of load drops and the intensification of strain accumulation at the time of SLB initiation.

  8. A parsimonious approach to modeling animal movement data.

    Directory of Open Access Journals (Sweden)

    Yann Tremblay

    Full Text Available Animal tracking is a growing field in ecology, and previous work has shown that simple speed filtering of tracking data is not sufficient and that improvement of tracking location estimates is possible. To date, this has required methods that are complicated and often time-consuming (state-space models), resulting in limited application of this technique and the potential for analysis errors due to poor understanding of the fundamental framework behind the approach. We describe and test an alternative and intuitive approach consisting of bootstrapping random walks biased by forward particles. The model uses recorded data accuracy estimates, and can assimilate other sources of data such as sea-surface temperature, bathymetry and/or physical boundaries. We tested our model using ARGOS and geolocation tracks of elephant seals that also carried GPS tags in addition to PTTs, enabling true validation. Among pinnipeds, elephant seals are extreme divers that spend little time at the surface, which considerably impacts the quality of both ARGOS and light-based geolocation tracks. Despite such low overall track quality, our model provided location estimates within 4.0, 5.5 and 12.0 km of the true location 50% of the time, and within 9.0, 10.5 and 20.0 km 90% of the time, for above-average, average and below-average elephant seal ARGOS track qualities, respectively. With geolocation data, 50% of errors were less than 104.8 km (<0.94°), and 90% were less than 199.8 km (<1.80°). Larger errors were due to a lack of sea-surface temperature gradients. In addition, we show that our model is flexible enough to solve the obstacle avoidance problem by assimilating high-resolution coastline data; this reduced the number of invalid on-land locations by almost an order of magnitude. The method is intuitive, flexible and efficient, promising extensive utilization in future research.
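
    A hedged sketch of the bootstrapped biased random-walk idea between two noisy fixes; the positions, error radii and averaging rule are invented for illustration and do not reproduce the authors' particle scheme:

```python
# Between two noisy fixes, draw many candidate walks biased toward the next
# fix and average their mid-track positions as an improved location estimate.
import numpy as np

rng = np.random.default_rng(7)
fix_a, fix_b = np.array([0.0, 0.0]), np.array([10.0, 4.0])   # consecutive fixes (km)
err_a, err_b = 3.0, 5.0                                       # assumed 1-sigma errors (km)

n_walks, n_steps = 500, 10
mid_points = np.empty((n_walks, 2))
for i in range(n_walks):
    start = fix_a + rng.normal(0, err_a, 2)                   # resample fix locations
    end = fix_b + rng.normal(0, err_b, 2)
    path = np.linspace(start, end, n_steps) + rng.normal(0, 0.5, (n_steps, 2))
    mid_points[i] = path[n_steps // 2]                        # mid-track estimate

print("improved mid-track location:", np.round(mid_points.mean(axis=0), 2))
```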

  9. Model-centric approaches for the development of health information systems.

    Science.gov (United States)

    Tuomainen, Mika; Mykkänen, Juha; Luostarinen, Heli; Pöyhölä, Assi; Paakkanen, Esa

    2007-01-01

    Modeling is used increasingly in healthcare to increase shared knowledge, to improve processes, and to document the requirements of solutions related to health information systems (HIS). There are numerous modeling approaches that aim to support these goals, but a careful assessment of their strengths, weaknesses and deficiencies is needed. In this paper, we compare three model-centric approaches in the context of HIS development: the Model-Driven Architecture, Business Process Modeling with BPMN and BPEL, and the HL7 Development Framework. The comparison reveals that all these approaches are viable candidates for the development of HIS. However, they have distinct strengths and abstraction levels, they require local and project-specific adaptation, and they offer varying levels of automation. In addition, the illustration of the solutions to the end users must be improved.

  10. Endogenous innovation, the economy and the environment: impacts of a technology-based modelling approach for energy-intensive industries in Germany

    International Nuclear Information System (INIS)

    Lutz, C.; Meyer, B.; Nathani, C.; Schleich, J.

    2007-01-01

    Policy simulations in environmental-economic models are influenced by the modelling of technological change; however, environmental-economic models have generally treated technological change as exogenous. This paper presented simulations with a new modelling approach in which technological change is portrayed explicitly and linked to actual production processes in 3 industry sectors in Germany, namely iron and steel; cement; and pulp and paper. Technological choice was modelled via investments in new production process lines. The generic modelling procedure for all 3 industry sectors was presented along with an overview of the relevant sector-specific modelling results and the integration into the macro-economic model PANTA RHEI. The new modelling approach endogenizes technological change and thus considers that policy interventions may affect the rate and direction of technological progress. Carbon tax simulations were also performed to investigate the influence of a tax in both the new and the conventional modelling approach. For the energy- and capital-intensive industries considered in this study, the conventional top-down approach overestimated the short-term possibilities to adapt to higher carbon dioxide (CO2) prices in the early years. Since the new approach includes policy-induced technological change and process shifts, it also captures the long-term effects on CO2 emissions far beyond the initial price impulse. It was concluded that the new modelling approach results in significantly higher emission reductions than the conventional approach; therefore, the estimated costs of the climate policy are lower using the new modelling approach. 37 refs., 2 tabs., 4 figs., 1 appendix

  11. Implementing a stepped-care approach in primary care: results of a qualitative study

    Directory of Open Access Journals (Sweden)

    Franx Gerdien

    2012-01-01

    Full Text Available Abstract Background Since 2004, 'stepped-care models' have been adopted in several international evidence-based clinical guidelines to guide clinicians in the organisation of depression care. To enhance the adoption of this new treatment approach, a Quality Improvement Collaborative (QIC) was initiated in the Netherlands. Methods Alongside the QIC, an intervention study using a controlled before-and-after design was performed. Part of the study was a process evaluation, utilizing semi-structured group interviews, to provide insight into the perceptions of the participating clinicians on the implementation of stepped care for depression into their daily routines. Participants were primary care clinicians, specialist clinicians, and other healthcare staff from eight regions in the Netherlands. Analysis was supported by the Normalisation Process Theory (NPT). Results The introduction of a stepped-care model for depression to primary care teams within the context of a depression QIC was generally well received by participating clinicians. All three elements of the proposed stepped-care model (patient differentiation, stepped-care treatment, and outcome monitoring) were translated and introduced locally. Clinicians reported changes in terms of learning how to differentiate between patient groups and different levels of care, changing antidepressant prescribing routines as a consequence of having a broader treatment package to offer to their patients, and better working relationships with patients and colleagues. A complex range of factors influenced the implementation process. Facilitating factors were the stepped-care model itself, the structured team meetings (part of the QIC method), and the positive reaction from patients to stepped care. The differing views of depression and depression care within multidisciplinary health teams, lack of resources, and poor information systems hindered the rapid introduction of the stepped-care model.

  12. Use of Monte Carlo modeling approach for evaluating risk and environmental compliance

    International Nuclear Information System (INIS)

    Higley, K.A.; Strenge, D.L.

    1988-09-01

    Evaluating compliance with environmental regulations, specifically those regulations that pertain to human exposure, can be a difficult task. Historically, maximum individual or worst-case exposures have been calculated as a basis for evaluating risk or compliance with such regulations. However, these calculations may significantly overestimate exposure and may not provide a clear understanding of the uncertainty in the analysis. The use of Monte Carlo modeling techniques can provide a better understanding of the potential range of exposures and the likelihood of high (worst-case) exposures. This paper compares the results of standard exposure estimation techniques with the Monte Carlo modeling approach. The authors discuss the potential application of this approach for demonstrating regulatory compliance, along with the strengths and weaknesses of the approach. Suggestions on implementing this method as a routine tool in exposure and risk analyses are also presented. 16 refs., 5 tabs
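
    A minimal illustration of the contrast drawn above between distribution-based Monte Carlo estimates and worst-case multiplication; the dose equation and all distributions are invented placeholders, not values from the paper:

```python
# Sample exposure factors from distributions instead of multiplying maxima;
# the worst-case product typically sits far beyond the 95th percentile.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
conc = rng.lognormal(mean=0.0, sigma=0.5, size=n)         # contaminant conc. (Bq/L)
intake = rng.normal(loc=2.0, scale=0.4, size=n).clip(0)   # water intake (L/day)
duration = rng.uniform(200, 365, size=n)                  # exposure days per year

dose = conc * intake * duration                           # hypothetical dose metric
worst_case = conc.max() * intake.max() * duration.max()
print(f"95th percentile: {np.percentile(dose, 95):.0f}, worst case: {worst_case:.0f}")
```

    The gap between the two numbers is precisely the overestimation that the Monte Carlo approach makes visible, along with the full shape of the exposure distribution.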

  13. Tracking LHC Models with Thick Lens Quadrupoles: Results and Comparisons with the Standard Thin Lens Tracking.

    CERN Document Server

    Burkhardt, H; Risselada, T

    2012-01-01

    So far, the massive numerical simulation studies of the LHC dynamic aperture have been performed using thin lens models of the machine. This approach has the clear advantage of speed, but it also has the disadvantage of requiring re-matching of the optics from the real thick configuration to the thin one. The figure of merit for the re-matching is the agreement between the beta functions of the two models. However, the quadrupole gradients are left as free parameters; thus, the impact of the magnetic multipoles might be affected by this approach, and in turn the dynamic aperture computation could change. In this paper the new approach is described and the results for the dynamic aperture are compared with the old approach, including detailed considerations on the CPU-time requirements.

  14. A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming

    International Nuclear Information System (INIS)

    Meier, Horst; Laurischkat, Roman; Zhu Junhong

    2011-01-01

    One main influence on the dimensional accuracy in robot-based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematics results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate for these deviations offline, a model-based approach has been developed, consisting of a finite element model to simulate the sheet forming and a multi-body system to model the compliant robot structure. This paper describes the implementation and experimental verification of the multi-body system model and its included compensation method.

  15. Petroacoustic Modelling of Heterolithic Sandstone Reservoirs: A Novel Approach to Gassmann Modelling Incorporating Sedimentological Constraints and NMR Porosity data

    Science.gov (United States)

    Matthews, S.; Lovell, M.; Davies, S. J.; Pritchard, T.; Sirju, C.; Abdelkarim, A.

    2012-12-01

    Heterolithic or 'shaly' sandstone reservoirs constitute a significant proportion of hydrocarbon resources. Petroacoustic models (a combination of petrophysics and rock physics) enhance the ability to extract reservoir properties from seismic data, providing a connection between seismic and fine-scale rock properties. By incorporating sedimentological observations these models can be better constrained and improved. Petroacoustic modelling is complicated by the unpredictable effects of clay minerals and clay-sized particles on geophysical properties. Such effects are responsible for erroneous results when models developed for "clean" reservoirs - such as Gassmann's equation (Gassmann, 1951) - are applied to heterolithic sandstone reservoirs. Gassmann's equation is arguably the most popular petroacoustic modelling technique in the hydrocarbon industry and is used to model the elastic effects of changing reservoir fluid saturations. Successful implementation of Gassmann's equation requires well-constrained drained rock frame properties, which in heterolithic sandstones are heavily influenced by reservoir sedimentology, particularly clay distribution. The prevalent approach to categorising clay distribution is based on the Thomas-Stieber model (Thomas & Stieber, 1975); this approach is inconsistent with the current understanding of 'shaly sand' sedimentology and omits properties such as sorting and grain size. The novel approach presented here demonstrates that characterising reservoir sedimentology constitutes an important modelling phase. As well as incorporating sedimentological constraints, this novel approach also aims to improve drained frame moduli estimates through more careful consideration of Gassmann's model assumptions and limitations. A key assumption of Gassmann's equation is a pore space in total communication with movable fluids, an assumption often violated by conventional applications in heterolithic sandstone reservoirs.
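
    For reference, Gassmann's fluid-substitution relation takes the standard form below, where K_sat, K_dry, K_min and K_fl are the saturated-rock, drained-frame, mineral-grain and pore-fluid bulk moduli and φ is porosity (notation assumed here, not taken from the abstract):

```latex
K_{\mathrm{sat}} = K_{\mathrm{dry}} +
  \frac{\left(1 - K_{\mathrm{dry}}/K_{\mathrm{min}}\right)^{2}}
       {\phi/K_{\mathrm{fl}} + (1-\phi)/K_{\mathrm{min}} - K_{\mathrm{dry}}/K_{\mathrm{min}}^{2}},
\qquad
\mu_{\mathrm{sat}} = \mu_{\mathrm{dry}}
```

    The pore fluid affects only the bulk modulus, leaving the shear modulus unchanged, which is why well-constrained drained-frame moduli are the critical input the authors seek to constrain sedimentologically.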

  16. Risk Modelling for Passages in Approach Channel

    Directory of Open Access Journals (Sweden)

    Leszek Smolarek

    2013-01-01

    Full Text Available Methods of multivariate statistics, stochastic processes, and simulation are used to identify and assess the risk measures. This paper presents the use of generalized linear models and Markov models to study risks to ships along the approach channel. These models, combined with simulation testing, are used to determine the time required for continuous monitoring of endangered objects or the period at which the level of risk should be verified.
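
    As a toy illustration of the Markov-model component, an absorbing-chain calculation gives the expected number of passages before an accident state is reached, which suggests how often the risk level needs re-verification; the states and transition probabilities below are invented:

```python
# Absorbing Markov chain over passage risk states (safe / endangered / accident).
# The fundamental matrix N = (I - Q)^-1 gives expected visits to transient
# states; its row sums are the expected steps before absorption.
import numpy as np

P = np.array([[0.95, 0.04, 0.01],    # safe
              [0.30, 0.60, 0.10],    # endangered
              [0.00, 0.00, 1.00]])   # accident (absorbing)

Q = P[:2, :2]                                # transient-to-transient block
N = np.linalg.inv(np.eye(2) - Q)             # fundamental matrix
steps_to_absorption = N.sum(axis=1)          # expected passages before accident
print("expected steps from safe/endangered:", np.round(steps_to_absorption, 1))
```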

  17. Towards the best approach for wind wave modelling in the Red Sea

    KAUST Repository

    Langodan, Sabique

    2015-04-01

    While wind and wave modelling is nowadays quite satisfactory in the open oceans, problems remain in enclosed seas. In general, the smaller the basin, the poorer the models perform, especially if the basin is surrounded by complex orography. The Red Sea is an extreme example in this respect, especially because of its long and narrow shape. This deceptively simple domain offers very interesting challenges for wind and wave modeling, not easily, if ever, found elsewhere. Depending on the season, opposite wind regimes, one directed to the southeast, the other to the northwest, are present and may coexist in the most northerly and southerly parts of the Red Sea. Where the two regimes meet, the wave spectra can be rather complicated and crucially dependent on small details of the driving wind fields. We explored how well we could reproduce the general and unusual wind and wave patterns of the Red Sea using different meteorological products. The best results were obtained using two rather opposite approaches: the high-resolution Weather Research and Forecasting (WRF) regional model and the slightly enhanced surface winds from the global European Centre for Medium-Range Weather Forecasts (ECMWF) model. We discuss the reasons why these two approaches produce the best results and the implications for wave modeling in the Red Sea. The unusual wind and wave patterns in the Red Sea suggest that the currently available wave model source functions may not properly represent the evolution of the local fields. However, within limits, the WAVEWATCH III wave model, based on Janssen's and also Ardhuin's wave model physics, provides in many cases very reasonable results. Because surface winds lead to important uncertainties in wave simulation, we also discuss the impact of data assimilation for simulating the most accurate winds, and consequently waves, over the Red Sea.

  18. Time-dependent evolution of rock slopes by a multi-modelling approach

    Science.gov (United States)

    Bozzano, F.; Della Seta, M.; Martino, S.

    2016-06-01

    This paper presents a multi-modelling approach that incorporates contributions from morpho-evolutionary modelling, detailed engineering-geological modelling and time-dependent stress-strain numerical modelling to analyse the rheological evolution of a river valley slope over approximately 10² kyr. The slope is located in a transient, tectonically active landscape in southwestern Tyrrhenian Calabria (Italy), where gravitational processes drive failures in rock slopes. Constraints on the valley profile development were provided by a morpho-evolutionary model based on the correlation of marine and river strath terraces. Rock mass classes were identified through geomechanical parameters derived from engineering-geological surveys and the outputs of a multi-sensor slope monitoring system. The rock mass classes were associated with lithotechnical units to obtain a high-resolution engineering-geological model along a cross section of the valley. Time-dependent stress-strain numerical modelling reproduced the main morpho-evolutionary stages of the valley slopes. The findings demonstrate that a complex combination of eustatism, uplift and Mass Rock Creep (MRC) deformations can lead to first-time failures of rock slopes when unstable conditions are encountered, up to the generation of stress-controlled shear zones. The multi-modelling approach enabled us to determine that such complex combinations may have been sufficient for the first-time failure of the S. Giovanni slope at approximately 140 ka (MIS 7), even without invoking any trigger. Conversely, further reactivations of the landslide must be related to triggers such as earthquakes, rainfall and anthropogenic activities. This failure involved a portion of the slope where a plasticity zone resulted from mass rock creep that evolved with a maximum strain rate of 40% per thousand years, after the formation of a river strath terrace. This study demonstrates that the multi-modelling approach presented herein is a useful tool for analysing the time-dependent evolution of rock slopes.

  19. Continuum and discrete approach in modeling biofilm development and structure: a review.

    Science.gov (United States)

    Mattei, M R; Frunzo, L; D'Acunto, B; Pechaud, Y; Pirozzi, F; Esposito, G

    2018-03-01

    The scientific community has recognized that almost 99% of the microbial life on earth is represented by biofilms. Considering the impacts of their sessile lifestyle on both natural and human activities, extensive experimental work has been carried out to understand how biofilms grow and interact with the environment. Many mathematical models have also been developed to simulate and elucidate the main processes characterizing biofilm growth. Two main mathematical approaches for biomass representation can be distinguished: continuum and discrete. This review explores the main characteristics of each approach. Continuum models can simulate biofilm processes in a quantitative and deterministic way. However, they require a multidimensional formulation to take into account the spatial heterogeneity of biofilms, which makes the models quite complicated and computationally demanding. Discrete models are more recent and can represent the typical multidimensional structural heterogeneity of biofilms, reflecting experimental observations, but their computational results include elements of randomness, introducing stochastic effects into the solutions.

  20. CFD modeling of two-stage ignition in a rapid compression machine: Assessment of zero-dimensional approach

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Gaurav [Department of Mechanical Engineering, The University of Akron, Akron, OH 44325 (United States); Raju, Mandhapati P. [General Motor R and D Tech Center, Warren, MI 48090 (United States); Sung, Chih-Jen [Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269 (United States)

    2010-07-15

    In modeling rapid compression machine (RCM) experiments, a zero-dimensional approach is commonly used along with an associated heat loss model. The adequacy of such an approach has not been validated for hydrocarbon fuels. The existence of multi-dimensional effects inside an RCM due to the boundary layer, roll-up vortex, non-uniform heat release, and piston crevice could result in deviation from the zero-dimensional assumption, particularly for hydrocarbons exhibiting two-stage ignition and strong thermokinetic interactions. The objective of this investigation is to assess the adequacy of the zero-dimensional approach in modeling RCM experiments under conditions of two-stage ignition and negative temperature coefficient (NTC) response. Computational fluid dynamics simulations are conducted for n-heptane ignition in an RCM, and the validity of the zero-dimensional approach is assessed through comparisons over the entire NTC region. Results show that the zero-dimensional model based on the approach of 'adiabatic volume expansion' performs very well in predicting the first-stage ignition delays, although quantitative discrepancy in the prediction of the total ignition delays and the pressure rise of the first-stage ignition is noted even when the roll-up vortex is suppressed and a well-defined homogeneous core is retained within the RCM. Furthermore, the discrepancy is pressure dependent and decreases as the compressed pressure is increased. Also, as the ignition response becomes single-stage at higher compressed temperatures, the discrepancy from the zero-dimensional simulations reduces. Despite some quantitative discrepancy, the zero-dimensional modeling approach is deemed satisfactory from the viewpoint of ignition delay simulation. (author)
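
    The 'adiabatic volume expansion' idea can be sketched as follows: heat loss in the zero-dimensional model is mimicked by expanding the core volume so that an isentropic core reproduces a measured post-compression pressure decay. The specific-heat ratio, pressures and times below are invented for illustration:

```python
# Isentropic core: p * V**gamma = const, so V(t)/V0 = (p0/p(t))**(1/gamma),
# and the corresponding core temperature follows T/T0 = (p/p0)**((gamma-1)/gamma).
import numpy as np

gamma = 1.35                                     # assumed effective specific-heat ratio
t = np.linspace(0.0, 0.05, 6)                    # time after end of compression (s)
p = 20.0e5 * np.exp(-2.0 * t)                    # hypothetical measured pressure (Pa)

v_ratio = (p[0] / p) ** (1.0 / gamma)            # volume expansion V(t)/V0
T = 700.0 * (p / p[0]) ** ((gamma - 1) / gamma)  # core temperature (K), T0 = 700 K assumed
for ti, vi, Ti in zip(t, v_ratio, T):
    print(f"t = {ti:.3f} s   V/V0 = {vi:.3f}   T = {Ti:.0f} K")
```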

  1. A Model-Driven Approach for Hybrid Power Estimation in Embedded Systems Design

    Directory of Open Access Journals (Sweden)

    Ben Atitallah Rabie

    2011-01-01

    Full Text Available As technology scales for increased circuit density and performance, the management of power consumption in systems-on-chip (SoC) is becoming critical. Today, having the appropriate electronic system level (ESL) tools for power estimation in the design flow is mandatory. The main challenge in the design of such dedicated tools is achieving a good tradeoff between accuracy and speed. This paper presents a consumption estimation approach that takes the consumption criterion into account early in the design flow, during system cosimulation. The originality of this approach is that it allows power estimation for both white-box intellectual properties (IPs), using annotated power models, and black-box IPs, using standalone power estimators. In order to obtain accurate power estimates, our simulations were performed at the cycle-accurate bit-accurate (CABA) level, using SystemC. To make our approach fast and not tedious for users, the simulated architectures, including standalone power estimators, were generated automatically using a model driven engineering (MDE) approach. Both annotated power models and standalone power estimators can be used together to estimate the consumption of the same architecture, which makes them complementary. The simulation results showed that the power estimates given by both estimation techniques for a hardware component are very close, with a difference that does not exceed 0.3%. This proves that, even when the IP code is not accessible or not modifiable, our approach yields quite accurate power estimates early in the design flow, thanks to the automation offered by the MDE approach.

  2. Comparative approaches from empirical to mechanistic simulation modelling in Land Evaluation studies

    Science.gov (United States)

    Manna, P.; Basile, A.; Bonfante, A.; Terribile, F.

    2009-04-01

    Land Evaluation (LE) comprises the procedures used to assess the suitability of land for a generic or specific use (e.g. biomass production). From the local to the regional and national scale, land use planning requires a deep knowledge of the processes that drive the functioning of the soil-plant-atmosphere system. In the classical approaches, the suitability assessment is the result of a qualitative comparison between the land/soil physical properties and the land use requirements. These approaches are quick and inexpensive to apply; however, they are based on empirical and qualitative models whose knowledge structure is built for a specific landscape and a specific object of evaluation (e.g. crop). The outcome is great difficulty in extrapolating LE results spatially, and rigidity of the system. Modern techniques, instead, rely on the application of mechanistic and quantitative simulation modelling that allows a dynamic characterisation of the interrelated physical and chemical processes taking place in the soil landscape. Moreover, the insertion of physically based rules in the LE procedure may make it easier both to extend the results spatially and to change the object of the evaluation (e.g. crop species, nitrate dynamics, etc.). On the other hand, these modern approaches require input data of high quality and quantity, which significantly increases costs. In this scenario, the LE expert is asked to choose the best LE methodology considering the costs, the complexity of the procedure, and the benefits for the specific land evaluation at hand. In this work we performed a forage maize land suitability study by comparing 9 different methods of increasing complexity and cost. The study area, of about 2000 ha, is located in northern Italy in the Lodi plain (Po valley). The 9 methods employed ranged from standard LE approaches to

  3. Why we need new approaches to low-dose risk modeling

    International Nuclear Information System (INIS)

    Alvarez, J.L.; Seiler, F.A.

    1996-01-01

    The linear no-threshold model for radiation effects was introduced as a conservative model for the design of radiation protection programs. The model has persisted not only as the basis for such programs, but has come to be treated as a dogma and is often confused with scientific fact. In this examination, a number of serious problems with the linear no-threshold model of radiation carcinogenesis were demonstrated, many of them invalidating the hypothesis. It was shown that the relative risk formalism did not approach 1 as the dose approached zero. When mortality ratios were used instead, the data in the region below 0.3 Sv were systematically below the predictions of the linear model. It was also shown that the data above 0.3 Sv were of little use in formulating a model at low doses. In addition, these data are valid only for doses accumulated at high dose rates, and there is no scientific justification for using the model in low-dose, low-dose-rate extrapolations for purposes of radiation protection. Further examination of model fits to the Japanese survivor data was attempted. Several such models were fit to the data, including an unconstrained linear, a linear-square root, and a Weibull model, all of which fit the data better than the relative risk, linear no-threshold model. These fits were used to demonstrate that the linear model systematically overestimates the risk at low doses in the Japanese survivor data set. It is recommended here that an unbiased re-analysis of the data be undertaken and the results used to construct a new model, based on all pertinent data. This model could then form the basis for managing radiation risks in the appropriate regions of dose and dose rate.

  4. Improving stability of prediction models based on correlated omics data by using network approaches.

    Directory of Open Access Journals (Sweden)

    Renaud Tissier

    Full Text Available Building prediction models based on complex omics datasets such as transcriptomics, proteomics, and metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection in which the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are (1) network construction, (2) clustering to empirically derive modules or pathways, and (3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally, we illustrate the advantages of our approach by applying the methodology to two problems, namely prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome (DILGOM) study and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell line pharmacogenomics dataset.
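
    A minimal sketch of the three-step recipe on synthetic data: build a correlation-based feature network, cut it into modules by hierarchical clustering, and feed module-level summaries to a regularized regression. The module-mean summary and ridge penalty below are simplifications chosen for brevity; the paper also uses Gaussian graphical models, group-based variable selection and group-specific penalties.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                  # synthetic omics matrix (samples x features)
y = X[:, :5].sum(axis=1) + rng.normal(size=100)

# Steps 1-2: correlation-based distance, then hierarchical clustering into modules
corr = np.corrcoef(X, rowvar=False)
condensed = (1.0 - np.abs(corr))[np.triu_indices(X.shape[1], 1)]
modules = fcluster(linkage(condensed, method="average"), t=5, criterion="maxclust")

# Step 3 (simplified): summarize each module by its mean profile, then fit a
# regularized model on the module summaries
Z = np.column_stack([X[:, modules == m].mean(axis=1) for m in np.unique(modules)])
meta = RidgeCV().fit(Z, y)
print("module coefficients:", meta.coef_.round(2))
```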

  5. Health and budget impact of combined HIV prevention - first results of the BELHIVPREV model.

    Science.gov (United States)

    Vermeersch, Sebastian; Callens, Steven; De Wit, Stéphane; Goffard, Jean-Christophe; Laga, Marie; Van Beckhoven, Dominique; Annemans, Lieven

    2018-02-01

    We developed a pragmatic modelling approach to estimate the impact of treatment as prevention (TasP), outreach testing strategies, and pre-exposure prophylaxis (PrEP) on the epidemiology of HIV and its associated pharmaceutical expenses. Our model estimates the incremental health (in terms of new HIV diagnoses) and budget impact of two prevention scenarios (outreach+TasP and outreach+TasP+PrEP) against a 'no additional prevention' scenario. Model parameters were estimated from reported Belgian epidemiology and literature data. The analysis was performed from a healthcare payer perspective with a 15-year time horizon. It considers subpopulation differences, HIV infections diagnosed in Belgium that occurred prior to migration, and the effects of an ageing HIV population. Without additional prevention measures, the annual number of new HIV diagnoses rises to over 1350 in 2030, compared to baseline, resulting in a budget expenditure of €260.5 million. Implementation of outreach+TasP and outreach+TasP+PrEP decreases the number of new HIV diagnoses to 865 and 663 per year, respectively; the respective budget impacts decrease by €20.6 million and €33.7 million. Forgoing additional investments in prevention is not an option. An approach combining TasP, outreach and PrEP is most effective in reducing the number of new HIV diagnoses and the HIV treatment budget. Our model is the first pragmatic HIV model in Belgium to estimate the consequences of a combined preventive approach on HIV epidemiology and its economic burden, assuming other prevention efforts such as condom use and harm reduction strategies remain the same.

  6. Time shift in slope failure prediction between unimodal and bimodal modeling approaches

    Science.gov (United States)

    Ciervo, Fabio; Casini, Francesca; Nicolina Papa, Maria; Medina, Vicente

    2016-04-01

    Together with the need for more appropriate mathematical expressions to describe hydro-mechanical soil processes, a key challenge is to account for the effects induced by terrain heterogeneities on the physical mechanisms; taking into account how the heterogeneities affect time-dependent hydro-mechanical variables would improve the predictive capacity of models, such as those used in early warning systems. The presence of heterogeneities in partially saturated slopes results in irregular propagation of the moisture and suction front. To mathematically represent the "dual implication" generally induced by the heterogeneities in the hydraulic behaviour of the terrain, several bimodal hydraulic models have been presented in the literature to replace the conventional sigmoidal/unimodal functions; this presupposes that the scale of the macrostructure is comparable with the local scale (Darcy scale), so that the Richards model can be assumed adequate to reproduce the processes mathematically. The purpose of this work is to focus on the differences in simulated infiltration processes and slope stability conditions that originate from the preliminary choice of hydraulic model, and contextually on different approaches to evaluating the factor of safety (FoS). In particular, the results of two approaches are compared. The first includes the conventional expression of the FoS under saturated conditions and the widely used van Genuchten-Mualem hydraulic model. The second includes a generalized FoS equation for the infinite-slope model under variably saturated soil conditions (Lu and Godt, 2008) and the bimodal functions of Romano et al. (2011) to describe the hydraulic response. The extension of the above mentioned approach to the bimodal context is based on an analytical method to assess the effects of the hydraulic properties on soil shear developed integrating a bimodal lognormal hydraulic function
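
    The sketch below contrasts a unimodal van Genuchten retention curve with a simple bimodal variant built as a weighted sum of two unimodal curves, and pushes both through a simplified infinite-slope factor-of-safety expression with a suction-stress term (a Lu and Godt-type form). All soil and slope parameters are invented for illustration, and the FoS expression is a simplified stand-in for the generalized equation used in the study.

```python
import numpy as np

def vg_se(psi, alpha, n):
    """Effective saturation for suction head psi (m), van Genuchten form."""
    return (1.0 + (alpha * np.abs(psi)) ** n) ** (-(1.0 - 1.0 / n))

psi = np.logspace(-2, 2, 200)                  # suction head range (m)
se_uni = vg_se(psi, alpha=3.0, n=2.0)          # unimodal curve (assumed parameters)
se_bi = 0.6 * vg_se(psi, 10.0, 2.5) + 0.4 * vg_se(psi, 0.5, 1.8)  # bimodal mix

i = np.argmin(np.abs(psi - 1.0))
print("Se at 1 m suction, unimodal vs bimodal:", se_uni[i].round(2), se_bi[i].round(2))

# Simplified infinite-slope FoS with a suction-stress term (assumed geotechnics)
phi, beta = np.radians(30.0), np.radians(35.0)  # friction angle, slope angle
gamma, z, c = 18e3, 2.0, 2e3                    # unit weight (N/m3), depth (m), cohesion (Pa)
sigma_s = -se_bi * 9.81e3 * psi                 # suction stress (Pa), negative
fos = (np.tan(phi) / np.tan(beta)
       + (c - sigma_s * np.tan(phi)) / (gamma * z * np.sin(beta) * np.cos(beta)))
print("min FoS over the suction range:", round(float(fos.min()), 2))
```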

  7. Use case driven approach to develop simulation model for PCS of APR1400 simulator

    International Nuclear Information System (INIS)

    Dong Wook, Kim; Hong Soo, Kim; Hyeon Tae, Kang; Byung Hwan, Bae

    2006-01-01

    The full-scope simulator is being developed to evaluate specific design features and to support the iterative design and validation in the Man-Machine Interface System (MMIS) design of the Advanced Power Reactor (APR) 1400. The simulator consists of the process model, the control logic model, and the MMI for the APR1400, as well as the Power Control System (PCS). In this paper, a use case driven approach is proposed to develop a simulation model for the PCS. In this approach, a system is considered from the point of view of its users. The user's view of the system is based on interactions with the system and the resultant responses. In the use case driven approach, we initially consider the system as a black box and look at its interactions with the users. From these interactions, use cases of the system are identified. Then the system is modeled using these use cases as functions. Lower levels expand the functionalities of each of these use cases. Hence, starting from the topmost level view of the system, we proceed down to the lowest level (the internal view of the system). The model of the system thus developed is use case driven. This paper introduces the functionality of the PCS simulation model, including a requirement analysis based on use cases and the validation results of the PCS model development. The use-case-based PCS simulation model will be used for the first time during full-scope simulator development for a nuclear power plant and will be supplied to the Shin-Kori 3 and 4 plants. Use-case-based simulation model development can be useful for the design and implementation of simulation models. (authors)

  8. A modified dynamic evolving neural-fuzzy approach to modeling customer satisfaction for affective design.

    Science.gov (United States)

    Kwong, C K; Fung, K Y; Jiang, Huimin; Chan, K Y; Siu, Kin Wai Michael

    2013-01-01

    Affective design is an important aspect of product development for achieving a competitive edge in the marketplace. A neural-fuzzy network approach has recently been attempted for modeling customer satisfaction in affective design; it has proved effective in dealing with the fuzziness and non-linearity of the modeling task and in generating explicit customer satisfaction models. However, such an approach to modeling customer satisfaction has two limitations. First, it is not suitable for modeling problems that involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address the above-mentioned limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests were conducted, and the test results indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to the large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive clustering-based ANFIS model and the fuzzy c-means clustering-based ANFIS model in terms of modeling accuracy and computational effort.

  9. A Bayesian joint probability modeling approach for seasonal forecasting of streamflows at multiple sites

    Science.gov (United States)

    Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.

    2009-05-01

    Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability mean that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
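
    A stripped-down sketch of the BJP idea for one site and one predictor: Box-Cox transform both series toward normality, fit a bivariate normal, and condition on the observed predictor to get forecast quantiles. The synthetic data, fixed transformation parameters and plug-in moment estimates are simplifications; the paper infers all parameters and their uncertainty jointly by MCMC.

```python
import numpy as np
from scipy.stats import boxcox, norm

rng = np.random.default_rng(1)
antecedent = rng.gamma(3.0, 20.0, 500)                  # synthetic predictor flow
future = 0.6 * antecedent + rng.gamma(2.0, 10.0, 500)   # synthetic predictand flow

x, lam_x = boxcox(antecedent)          # transform each margin toward normality
y, lam_y = boxcox(future)
mu = np.array([x.mean(), y.mean()])
cov = np.cov(np.vstack([x, y]))

def forecast(xt):
    """Conditional normal of transformed future flow given transformed predictor."""
    m = mu[1] + cov[1, 0] / cov[0, 0] * (xt - mu[0])
    s = np.sqrt(cov[1, 1] - cov[1, 0] ** 2 / cov[0, 0])
    return m, s

m, s = forecast((100.0 ** lam_x - 1.0) / lam_x)      # condition on a flow of 100
q10, q90 = norm.ppf([0.1, 0.9], m, s)
print([(lam_y * q + 1.0) ** (1.0 / lam_y) for q in (q10, q90)])  # back-transformed
```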

  10. A Modified Dynamic Evolving Neural-Fuzzy Approach to Modeling Customer Satisfaction for Affective Design

    Directory of Open Access Journals (Sweden)

    C. K. Kwong

    2013-01-01

    Full Text Available Affective design is an important aspect of product development to achieve a competitive edge in the marketplace. A neural-fuzzy network approach has been attempted recently to model customer satisfaction for affective design, and it has proved to be an effective way to deal with the fuzziness and non-linearity of the modeling as well as to generate explicit customer satisfaction models. However, such an approach to modeling customer satisfaction has two limitations. First, it is not suitable for modeling problems which involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address the above-mentioned limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests were conducted and the test results indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to a large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive clustering-based ANFIS model and fuzzy c-means clustering-based ANFIS model in terms of their modeling accuracy and computational effort.

  11. Advanced Modeling and Uncertainty Quantification for Flight Dynamics; Interim Results and Challenges

    Science.gov (United States)

    Hyde, David C.; Shweyk, Kamal M.; Brown, Frank; Shah, Gautam

    2014-01-01

    As part of the NASA Vehicle Systems Safety Technologies (VSST) program, Assuring Safe and Effective Aircraft Control Under Hazardous Conditions (Technical Challenge #3), an effort is underway within Boeing Research and Technology (BR&T) to address Advanced Modeling and Uncertainty Quantification for Flight Dynamics (VSST1-7). The scope of the effort is to develop and evaluate advanced multidisciplinary flight dynamics modeling techniques, including integrated uncertainties, to facilitate higher-fidelity response characterization of current and future aircraft configurations approaching and during loss-of-control conditions. The approach is to incorporate multiple flight dynamics modeling methods for aerodynamics, structures, and propulsion, including experimental, computational, and analytical methods. Also to be included are techniques for data integration and uncertainty characterization and quantification. This research shall introduce new and updated multidisciplinary modeling and simulation technologies designed to improve the ability to characterize airplane response in off-nominal flight conditions. The research shall also introduce new techniques for uncertainty modeling that will provide a unified database model comprising multiple sources, as well as an uncertainty-bounds database for each data source, such that a full-vehicle uncertainty analysis is possible even when approaching or beyond loss-of-control boundaries. Methodologies developed as part of this research shall be instrumental in predicting and mitigating loss-of-control precursors and events directly linked to causal and contributing factors, such as stall, failures, damage, or icing. The tasks will include utilizing the BR&T Water Tunnel to collect static and dynamic data to be compared to the GTM extended WT database, characterizing flight dynamics in off-nominal conditions, developing tools for structural load estimation under dynamic conditions, and devising methods for integrating various modeling elements

  12. Employment Effects of Renewable Energy Expansion on a Regional Level—First Results of a Model-Based Approach for Germany

    Directory of Open Access Journals (Sweden)

    Ulrike Lehr

    2012-02-01

    Full Text Available National studies have shown that both the gross and the net effects of the expansion of energy from renewable sources on employment are positive for Germany. These modeling approaches also revealed that this holds true for both present and future perspectives under certain assumptions on the development of exports, fossil fuel prices and national politics. Yet how are employment effects distributed within Germany? What components contribute to growth impacts at a regional level? To answer these questions, new methods of regionalization were explored and developed, taking onshore wind energy in Germany's federal states as an example. The main goal was to develop a methodology applicable to all renewable energy technologies in future research. For the quantification and projection, it was necessary to distinguish between jobs generated by domestic investments and exports on the one hand, and jobs for the operation and maintenance of existing plants on the other. Further, direct and indirect employment are analyzed. The results show that gross employment is particularly high in the northwestern regions of Germany. However, the indirect effects in particular are spread out over the whole country. Regions in the south profit not only from the delivery of specific components, but also from other industry and service inputs.

  13. Modeling drug- and chemical- induced hepatotoxicity with systems biology approaches

    Directory of Open Access Journals (Sweden)

    Sudin Bhattacharya

    2012-12-01

    Full Text Available We provide an overview of computational systems biology approaches as applied to the study of chemical- and drug-induced toxicity. The concept of 'toxicity pathways' is described in the context of the 2007 US National Academies of Science report, Toxicity Testing in the 21st Century: A Vision and A Strategy. Pathway mapping and modeling based on network biology concepts are a key component of the vision laid out in this report for a more biologically based analysis of dose-response behavior and the safety of chemicals and drugs. We focus on toxicity of the liver (hepatotoxicity), a complex phenotypic response with contributions from a number of different cell types and biological processes. We describe three case studies of complementary multi-scale computational modeling approaches to understand perturbation of toxicity pathways in the human liver as a result of exposure to environmental contaminants and specific drugs. One approach involves development of a spatial, multicellular virtual tissue model of the liver lobule that combines molecular circuits in individual hepatocytes with cell-cell interactions and blood-mediated transport of toxicants through hepatic sinusoids, to enable quantitative, mechanistic prediction of hepatic dose-response for activation of the AhR toxicity pathway. Simultaneously, methods are being developed to extract quantitative maps of intracellular signaling and transcriptional regulatory networks perturbed by environmental contaminants, using a combination of gene expression and genome-wide protein-DNA interaction data. A predictive physiological model (DILIsym(TM)) to understand drug-induced liver injury (DILI), the most common adverse event leading to termination of clinical development programs and regulatory actions on drugs, is also described. The model initially focuses on reactive metabolite-induced DILI in response to administration of acetaminophen, and spans multiple biological scales.

  14. Population models for Greater Snow Geese: a comparison of different approaches to assess potential impacts of harvest

    Directory of Open Access Journals (Sweden)

    Gauthier, G.

    2004-06-01

    Full Text Available Demographic models, which are a natural extension of capture-recapture (CR) methodology, are a powerful tool to guide decisions when managing wildlife populations. We compare three different modelling approaches to evaluate the effect of increased harvest on the population growth of Greater Snow Geese (Chen caerulescens atlantica). Our first approach is a traditional matrix model in which survival was reduced to simulate increased harvest. We included environmental stochasticity in the matrix projection model by simulating good, average, and bad years, to account for the large inter-annual variation in fecundity and first-year survival, a common feature of birds nesting in the Arctic. Our second approach is based on the elasticity (or relative sensitivity) of the population growth rate (lambda) to changes in survival, expressed as simple functions of generation time. Generation time was obtained from the mean transition matrix based on the observed proportion of good, average and bad years between 1985 and 1998. If we assume that hunting mortality is additive to natural mortality, then a simple formula predicts changes in lambda as a function of changes in harvest rate. This second approach can be viewed as a simplification of the matrix model because it uses formal sensitivity results derived from the population projection. Our third, and potentially more powerful, approach uses the Kalman filter to combine information on demographic parameters, i.e. the population mechanisms summarized in a transition matrix model, and the census information (i.e. the annual survey) within an overall Gaussian likelihood. The advantage of this approach is that it minimizes the process and measurement uncertainties associated with both the census and the demographic parameters, based on the variance of each estimate. This third approach, in contrast to the second, can be viewed as an extension of the matrix model, combining its results with the independent census information.
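
    A sketch of the first approach only: a two-stage matrix projection with year quality drawn at random (good/average/bad) and adult survival scaled down by an extra harvest rate. The vital rates and year frequencies below are invented, not the Greater Snow Goose estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
year_types = {"good": (0.9, 0.6), "average": (0.6, 0.4), "bad": (0.1, 0.2)}
p_year = [0.3, 0.4, 0.3]           # assumed frequencies of good/average/bad years
s_adult = 0.83                     # assumed adult survival without extra harvest

def project(extra_harvest, n_years=50):
    n = np.array([500.0, 1000.0])                  # juveniles, adults
    for _ in range(n_years):
        fec, s_juv = year_types[rng.choice(list(year_types), p=p_year)]
        A = np.array([[0.0, fec],                  # fecundity into juvenile class
                      [s_juv, s_adult * (1.0 - extra_harvest)]])
        n = A @ n
    return n.sum()

for h in (0.0, 0.05, 0.10):
    print(f"extra harvest {h:.0%}: final population {project(h):.0f}")
```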

  15. Modelling Approach In Islamic Architectural Designs

    Directory of Open Access Journals (Sweden)

    Suhaimi Salleh

    2014-06-01

    Full Text Available Architectural design is one of the main factors that should be considered in minimizing negative impacts in the planning and structural development of buildings such as mosques. In this paper, the ergonomics perspective is revisited, focusing on conditional factors involving organisational, psychological, social and population aspects as a whole. The paper highlights the functional and architectural integration of aesthetic elements in the form of decorative and ornamental layouts, as well as their incorporation into building structures such as walls, domes and gates. It further focuses on the mathematical aspects of the architectural designs, such as polar equations and the golden ratio. These designs are modelled into mathematical equations of various forms, while the golden ratio in mosques is verified using two techniques, namely geometric construction and a numerical method. The exemplary designs are taken from the Sabah Bandaraya Mosque in Likas, Kota Kinabalu and the Sarawak State Mosque in Kuching, while the Universiti Malaysia Sabah Mosque is used for the golden ratio. Results show that Islamic architectural buildings and designs have long had mathematical concepts and techniques underlying their foundation; hence, a modelling approach is needed to rejuvenate these Islamic designs.
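
    As a small numerical companion to the golden-ratio verification, the snippet below checks that consecutive Fibonacci ratios converge to phi and tests a hypothetical facade width/height pair against it; the dimensions are made up, not measurements of the mosques named above.

```python
phi = (1 + 5 ** 0.5) / 2            # the golden ratio, ~1.618

a, b = 1, 1                         # consecutive Fibonacci ratios converge to phi
for _ in range(20):
    a, b = b, a + b
print(b / a, phi)

w, h = 34.0, 21.0                   # hypothetical facade width/height (m)
print("golden within 1%:", abs(w / h - phi) / phi < 0.01)
```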

  16. Magnetic field approaches in dc thermal plasma modelling

    International Nuclear Information System (INIS)

    Freton, P; Gonzalez, J J; Masquere, M; Reichert, Frank

    2011-01-01

    The self-induced magnetic field plays an important role in thermal plasma configurations generated by electric arcs, as it generates velocity through the Lorentz forces. A good representation of the magnetic field is thus necessary in the models. Several approaches exist to calculate the self-induced magnetic field, such as the Maxwell-Ampere formulation, the vector potential approach combined with different kinds of boundary conditions, and the Biot and Savart (B and S) formulation. The calculation of the self-induced magnetic field is in itself a difficult problem, and only a few papers in the thermal plasma community address this subject. In this study, different approaches with different boundary conditions are applied to two geometries in order to compare the methods and their limitations. The calculation time is also one of the criteria for the choice of method, and a compromise must be found between precision and computation time. The study shows the importance of how the current-carrying path in the electrode is represented for the deduced magnetic field. The best compromise consists of using the B and S formulation on the walls and/or edges of the calculation domain to determine the boundary conditions and then solving for the vector potential in a 2D system. This approach provides results identical to those obtained using the B and S formulation over the entire domain, but with a considerable decrease in calculation time.
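
    The building block behind the recommended boundary treatment is a discretized Biot-Savart sum. The sketch below evaluates it for a short straight current segment and compares the result against the long-wire estimate mu0*I/(2*pi*rho); the current, geometry and discretization are arbitrary choices, not the arc configurations of the study.

```python
import numpy as np

mu0 = 4e-7 * np.pi
I = 200.0                                             # assumed current (A)
path = np.linspace([0, 0, -0.05], [0, 0, 0.05], 200)  # straight segment along z (m)

def b_field(r):
    """B at point r as a Biot-Savart sum over the discretized current path."""
    b = np.zeros(3)
    for p0, p1 in zip(path[:-1], path[1:]):
        dl = p1 - p0                                  # current element direction
        rv = r - 0.5 * (p0 + p1)                      # vector from element to r
        b += mu0 * I / (4 * np.pi) * np.cross(dl, rv) / np.linalg.norm(rv) ** 3
    return b

b = b_field(np.array([0.01, 0.0, 0.0]))
print(b, "vs long-wire estimate", mu0 * I / (2 * np.pi * 0.01))
```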

  17. Linear regression metamodeling as a tool to summarize and present simulation model results.

    Science.gov (United States)

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
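
    A minimal sketch of the metamodeling step on a toy decision model: sample inputs as in a PSA, standardize them, and regress the outcome on the standardized inputs so the intercept approximates the base case and the coefficients rank parameter influence. The three-parameter "model" and its coefficients are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 10_000
params = rng.normal([0.7, 0.2, 5000.0], [0.05, 0.03, 500.0], size=(n, 3))
# toy decision model: net benefit from (cure prob, relapse prob, treatment cost)
outcome = (50_000 * params[:, 0] - 20_000 * params[:, 1] - params[:, 2]
           + rng.normal(0.0, 100.0, n))

Z = StandardScaler().fit_transform(params)       # standardized inputs
meta = LinearRegression().fit(Z, outcome)        # the linear metamodel
print("base case (intercept):", round(meta.intercept_))
print("parameter influence (standardized coefficients):", meta.coef_.round(0))
```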

  18. A Review of Accident Modelling Approaches for Complex Critical Sociotechnical Systems

    National Research Council Canada - National Science Library

    Qureshi, Zahid H

    2008-01-01

    .... This report provides a review of key traditional accident modelling approaches and their limitations, and describes new system-theoretic approaches to the modelling and analysis of accidents in safety-critical systems...

  19. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    Science.gov (United States)

    Shapiro, B.; Jin, Q.

    2015-12-01

    Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale, constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and the thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and the COBRA Toolbox. We applied the method to acetotrophic methanogenesis by Methanosarcina barkeri and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted the observations of previous experiments well. In comparison, traditional dynamic-FBA methods constrain acetate uptake on the basis of enzyme kinetics alone, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
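
    The revised Monod rate described above can be sketched as a kinetic factor times a thermodynamic factor F_T = 1 - exp((dG + m*dG_ATP)/(chi*R*T)); the value it returns is the kind of bound handed to FBA as a substrate-exchange constraint. The numbers below are assumed placeholders, not the calibrated Methanosarcina barkeri values.

```python
import numpy as np

R, T = 8.314e-3, 298.15     # kJ/(mol K), K

def respiration_rate(S, vmax=10.0, K=0.5, dG=-30.0, dG_atp=45.0, m=0.5, chi=2.0):
    """Specific rate with a Monod kinetic factor and a thermodynamic factor."""
    f_k = S / (K + S)                                        # kinetic limitation
    f_t = 1.0 - np.exp((dG + m * dG_atp) / (chi * R * T))    # thermodynamic limitation
    return vmax * f_k * max(f_t, 0.0)                        # no net back-reaction

uptake = respiration_rate(S=2.0)    # assumed acetate concentration (mM)
print(uptake)                       # would become the acetate exchange bound in FBA
```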

  20. Approaches to surface complexation modeling of Uranium(VI) adsorption on aquifer sediments

    Science.gov (United States)

    Davis, J.A.; Meece, D.E.; Kohler, M.; Curtis, G.P.

    2004-01-01

    Uranium(VI) adsorption onto aquifer sediments was studied in batch experiments as a function of pH and U(VI) and dissolved carbonate concentrations in artificial groundwater solutions. The sediments were collected from an alluvial aquifer at a location upgradient of contamination from a former uranium mill operation at Naturita, Colorado (USA). The ranges of aqueous chemical conditions used in the U(VI) adsorption experiments (pH 6.9 to 7.9; U(VI) concentration 2.5 × 10⁻⁸ to 1 × 10⁻⁵ M; partial pressure of carbon dioxide gas 0.05 to 6.8%) were based on the spatial variation in chemical conditions observed in 1999-2000 in the Naturita alluvial aquifer. The major minerals in the sediments were quartz, feldspars, and calcite, with minor amounts of magnetite and clay minerals. Quartz grains commonly exhibited coatings that were greater than 10 nm in thickness and composed of an illite-smectite clay with occluded ferrihydrite and goethite nanoparticles. Chemical extractions of quartz grains removed from the sediments were used to estimate the masses of iron and aluminum present in the coatings. Various surface complexation modeling approaches were compared in terms of the ability to describe the U(VI) experimental data and the data requirements for model application to the sediments. Published models for U(VI) adsorption on reference minerals were applied to predict U(VI) adsorption based on assumptions about the sediment surface composition and physical properties (e.g., surface area and electrical double layer). Predictions from these models were highly variable, with results overpredicting or underpredicting the experimental data, depending on the assumptions used to apply the model. Although the models for reference minerals are supported by detailed experimental studies (and in ideal cases, surface spectroscopy), the results suggest that errors are caused in applying the models directly to the sediments by uncertain knowledge of: 1) the proportion and types of

  1. A Direct Adaptive Control Approach in the Presence of Model Mismatch

    Science.gov (United States)

    Joshi, Suresh M.; Tao, Gang; Khong, Thuan

    2009-01-01

    This paper considers the problem of direct model reference adaptive control when the plant-model matching conditions are violated due to abnormal changes in the plant or incorrect knowledge of the plant's mathematical structure. The approach consists of direct adaptation of state feedback gains for state tracking, and simultaneous estimation of the plant-model mismatch. Because of the mismatch, the plant can no longer track the state of the original reference model, but may be able to track a new reference model that still provides satisfactory performance. The reference model is updated if the estimated plant-model mismatch exceeds a bound that is determined via robust stability and/or performance criteria. The resulting controller is a hybrid direct-indirect adaptive controller that offers asymptotic state tracking in the presence of plant-model mismatch as well as parameter deviations.
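
    For orientation, the sketch below simulates the textbook matching case that the paper generalizes: a scalar plant with unknown dynamics, a stable reference model, and gradient adaptive laws for the feedback and feedforward gains (no mismatch estimation or reference-model update, which are the paper's contributions). All gains and plant values are assumed.

```python
dt, a, b = 0.001, 1.5, 1.0           # unknown unstable plant (sign of b known)
am, bm = -2.0, 2.0                   # stable reference model
gx, gr = 50.0, 50.0                  # adaptation gains (assumed)

x = xm = kx = kr = 0.0               # plant/model states and adaptive gains
for k in range(20_000):
    r = 1.0 if (k * dt) % 4.0 < 2.0 else -1.0   # square-wave reference
    u = kx * x + kr * r
    e = x - xm                                  # state tracking error
    kx -= gx * e * x * dt                       # gradient adaptive laws
    kr -= gr * e * r * dt
    x += (a * x + b * u) * dt                   # Euler steps of plant and model
    xm += (am * xm + bm * r) * dt

print(f"gains ({kx:.2f}, {kr:.2f}) vs ideal ({(am - a) / b:.2f}, {bm / b:.2f})")
```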

  2. Simple Heuristic Approach to Introduction of the Black-Scholes Model

    Science.gov (United States)

    Yalamova, Rossitsa

    2010-01-01

    A heuristic approach to explaining the Black-Scholes option pricing model in undergraduate classes is described. The approach draws upon the method of protocol analysis to encourage students to "think aloud" so that their mental models can be surfaced. It also relies upon extensive visualizations to communicate relationships that are…
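
    For reference, the closed-form call price that students' mental models are ultimately compared against: the standard Black-Scholes formula, with illustrative inputs.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    """European call price under Black-Scholes."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# spot 100, strike 105, half a year, 3% rate, 20% volatility (illustrative)
print(round(bs_call(S=100, K=105, T=0.5, r=0.03, sigma=0.2), 2))
```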

  3. Process-Product Approach to Writing: the Effect of Model Essays on EFL Learners’ Writing Accuracy

    Directory of Open Access Journals (Sweden)

    Parastou Gholami Pasand

    2013-01-01

    Full Text Available Writing is one of the most important skills in learning a foreign language. The significance of being able to write in a second or foreign language has become clearer nowadays. Accordingly, different approaches to writing, such as the product approach, the process approach and, more recently, the process-product approach, came into existence, and they have been the concern of SL/FL researchers. The aim of this study is to answer the question of whether using an incomplete model text in the process-product approach to writing, and asking the learners to complete the text rather than copy it, can have a positive impact on EFL learners' accuracy in writing. After training a number of EFL learners in using the process approach, we held a two-session writing class. In the first session students wrote in the process approach, and in the second they were given a model text to continue in the process-product approach. The writing performance of the students in these two sessions was compared in terms of accuracy. Based on the students' writing performance, we came to the conclusion that completing the model text in process-product writing can have a rather positive influence on some aspects of their writing accuracy, such as punctuation, capitalization, spelling, subject-verb agreement, tense, the use of connectors, and the use of correct pronouns and possessives. The results of a paired t-test also indicate that using a model text to continue increased the students' writing accuracy.

  4. Replacement model of city bus: A dynamic programming approach

    Science.gov (United States)

    Arifin, Dadang; Yusuf, Edhi

    2017-06-01

    This paper aims to develop a replacement model for the city bus fleet operated in Bandung City. The study is driven by real cases encountered by the Damri Company in its efforts to improve services to the public. The replacement model propounds two policy alternatives: first, to keep the vehicles; second, to replace them with new ones, taking into account operating costs, revenue, salvage value, and the acquisition cost of a new vehicle. A deterministic dynamic programming approach is used to solve the model. The optimization process was executed heuristically using empirical data from Perum Damri. The output of the model is the replacement schedule and the best policy once a vehicle has passed its economic life. Based on the results, the technical life of a bus is approximately 20 years, while the economic life averages 9 (nine) years; after a bus has been operated for 9 (nine) years, managers should consider replacement.
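
    A sketch of the keep-versus-replace dynamic program over vehicle age for a finite horizon, in the spirit described above. The revenue, operating-cost, salvage and price functions are invented placeholders; the study calibrates these from Perum Damri data.

```python
def replacement_policy(horizon=20, max_age=20, price=150.0,
                       revenue=lambda age: 60.0 - 1.5 * age,
                       op_cost=lambda age: 20.0 + 2.5 * age,
                       salvage=lambda age: max(150.0 - 12.0 * age, 5.0)):
    """Finite-horizon value iteration over vehicle age; returns keep/replace policy."""
    V = [0.0] * (max_age + 1)                   # terminal value per age
    policy = {}
    for t in reversed(range(horizon)):
        Vn = [0.0] * (max_age + 1)
        for age in range(1, max_age + 1):
            keep = revenue(age) - op_cost(age) + V[min(age + 1, max_age)]
            swap = salvage(age) - price + revenue(1) - op_cost(1) + V[2]
            Vn[age] = max(keep, swap)
            policy[(t, age)] = "keep" if keep >= swap else "replace"
        V = Vn
    return policy

pol = replacement_policy()
first = [age for age in range(1, 21) if pol[(0, age)] == "replace"]
print("replace from age:", first[0] if first else "never within horizon")
```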

  5. A Knowledge Model Sharing Based Approach to Privacy-Preserving Data Mining

    OpenAIRE

    Hongwei Tian; Weining Zhang; Shouhuai Xu; Patrick Sharkey

    2012-01-01

    Privacy-preserving data mining (PPDM) is an important problem that is currently studied via three approaches: the cryptographic approach, data publishing, and model publishing. However, each of these approaches has some problems. The cryptographic approach does not protect the privacy of learned knowledge models and may have performance and scalability issues. Data publishing, although popular, may suffer from too much utility loss for certain types of data mining applications. The m...

  6. Principles and Approaches for Numerical Modelling of Sediment Transport in Sewers

    DEFF Research Database (Denmark)

    Mark, Ole; Appelgren, Cecilia; Larsen, Torben

    1995-01-01

    A study has been carried out with the objectives of describing the effect of sediment deposits on the hydraulic capacity of sewer systems and investigating the sediment transport in sewer systems. A result of the study is the mathematical model MOUSE ST, which describes sediment transport in sewers. This paper discusses the applicability and the limitations of various modelling approaches and sediment transport formulations in MOUSE ST. Further, the paper presents a simple application of MOUSE ST to the Rya catchment in Gothenburg, Sweden.

  7. A new approach to integrate seismic and production data in reservoir models

    Energy Technology Data Exchange (ETDEWEB)

    Ouenes, A.; Chawathe, A.; Weiss, W. [New Mexico Tech, Socorro, NM (United States)] [and others]

    1997-08-01

    A great deal of effort is devoted to reducing the uncertainties in reservoir modeling. For example, seismic properties are used to improve the characterization of interwell properties by providing porosity maps constrained to seismic impedance. Another means to reduce uncertainties is to constrain the reservoir model to production data. This paper describes a new approach where the production and seismic data are used simultaneously to reduce the uncertainties. In this new approach, the primary geologic parameter that controls reservoir properties is identified. Next, the geophysical parameter that is sensitive to the dominant geologic parameter is determined. Then the geology and geophysics are linked using analytic correlations. Unfortunately, the initial guess resulted in a reservoir model that did not match the production history. Since the time required for trial-and-error matching of production history is exorbitant, an automatic history matching method based on a fast optimization method was used to find the correlating parameters. This new approach was illustrated with an actual field in the Williston Basin. Upscaling problems do not arise since the scale is imposed by the size of the seismic bin (66 m, 219 ft), which is the size of the simulator gridblocks.

  8. Comprehensive assessment of energy systems: approach and current results of the Swiss activities

    International Nuclear Information System (INIS)

    Hirschberg, S.; Dones, R.; Kypreos, S.

    1994-01-01

    This paper provides an overview of the approaches used and the results obtained to date within the Swiss project GaBE on ''Comprehensive Assessment of Energy Systems''. Based on the ''cradle to grave'' approach, detailed environmental inventories for major fuel cycles have been generated. In comparison to earlier studies, a very broad spectrum of resources and air and water pollutants has been covered. Non-energetic resources such as land depreciation have also been considered. Numerous examples of evaluations are provided in the paper, including comparisons of greenhouse gas emissions, land use, radiation and wastes, illustrating the impact of considering full energy chains. In the part concerning severe accidents, some evaluations based on the database established at the Paul Scherrer Institute are presented, as well as the estimated contribution of hypothetical severe accidents to the external costs associated with a specific Swiss nuclear power plant. Results of applications of the large-scale energy-economy model MARKAL to the Swiss energy system and greenhouse gas scenarios are described. This includes cost-optimal contributions of different technologies to reducing CO2 emissions, and trade-offs at the national and international level. Finally, the content of other GaBE activities, either in progress or planned, is given. (orig.)

  9. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models and provide a demonstration...

  10. Variational approach to chiral quark models

    Energy Technology Data Exchange (ETDEWEB)

    Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira

    1987-03-01

    A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. Whether the pion-quark interaction can be regarded as a perturbation is intimately related to the chiral symmetry breaking radius.

  11. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  12. Learning the Task Management Space of an Aircraft Approach Model

    Science.gov (United States)

    Krall, Joseph; Menzies, Tim; Davies, Misty

    2014-01-01

    Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE), to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also the future work necessary to enable uncertainty quantification.

  13. Gray-box modelling approach for description of storage tunnel

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Jacob

    1999-01-01

    The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a promising new methodology for giving an on-line state description of sewer systems... of the water in the overflow structures. The capacity of a pump draining the storage tunnel is estimated for two different rain events, revealing that the pump was malfunctioning during the first rain event. The proposed modeling approach can be used in automated on-line surveillance and control and implemented...

  14. A study of multidimensional modeling approaches for data warehouse

    Science.gov (United States)

    Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani

    2016-08-01

    A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art overview of multidimensional modeling design.

  15. Reconstructing Holocene climate using a climate model: Model strategy and preliminary results

    Science.gov (United States)

    Haberkorn, K.; Blender, R.; Lunkeit, F.; Fraedrich, K.

    2009-04-01

    An Earth system model of intermediate complexity (Planet Simulator; PlaSim) is used to reconstruct Holocene climate based on proxy data. The Planet Simulator is a user-friendly general circulation model (GCM) suitable for palaeoclimate research. Its easy handling and modular structure allow for fast and problem-dependent simulations. The spectral model is based on the moist primitive equations conserving momentum, mass, energy and moisture. Besides the atmospheric part, a mixed-layer ocean with sea ice and a land surface with biosphere are included. The present-day climate of PlaSim, based on an AMIP II control run (T21/10L resolution), shows reasonable agreement with ERA-40 reanalysis data. Combining PlaSim with a socio-technological model (GLUES; DFG priority project INTERDYNAMIK) provides improved knowledge on the shift from hunting-gathering to agropastoral subsistence societies. This is achieved by a data assimilation approach, incorporating proxy time series into PlaSim to initialize palaeoclimate simulations of the Holocene. For this, the following strategy is applied: the sensitivities of the terrestrial PlaSim climate are determined with respect to sea surface temperature (SST) anomalies. Here, the focus is the impact of regionally varying SST, both in the tropics and in the Northern Hemisphere mid-latitudes. The inverse of these sensitivities is used to determine the SST conditions necessary for nudging the land and coastal proxy climates. Preliminary results indicate the potential, the uncertainty and the limitations of the method.

  16. Modelling efficient innovative work: integration of economic and social psychological approaches

    Directory of Open Access Journals (Sweden)

    Babanova Yulia

    2017-01-01

    Full Text Available The article addresses the relevance of integrating economic and social psychological approaches to enhance the efficiency of innovation management. The content, features and specifics of the modelling methods within each approach are described, and options for integration are considered. The economic approach consists in generating an integrated matrix concept for managing the innovative development of an enterprise, in line with the stages of innovative work, and in using an integrated vector method to evaluate the level of innovative development of the enterprise. The social psychological approach consists in developing a system of psychodiagnostic indexes of activity resources within the scope of a psychological innovation audit of enterprise management, and in developing modelling methods for the balance of activity trends. Modelling of activity resources is based on a system of equations accounting for the type of interaction between psychodiagnostic indexes. The integration of the two approaches covers the methodological level, the level of empirical studies, and the modelling methods. Options are suggested for integrating the economic and psychological approaches in order to analyze the available material and non-material resources of an enterprise's innovative work and to forecast an optimal development option based on the implemented modelling methods.

  17. The Methodical Approach to Assessment of Enterprise Activity on the Basis of its Models, Based on the Balanced Scorecard

    Directory of Open Access Journals (Sweden)

    Minenkova Olena V.

    2017-12-01

    Full Text Available The article proposes a methodical approach to assessing enterprise activity on the basis of enterprise models built on the balanced scorecard. The content of the approach is presented and its components are set out: tasks, input information, the list of methods and models, and the results. Implementation of this methodical approach improves management and increases the results of enterprise activity. The place of assessment models in the management of enterprise activity and in the formation of managerial decisions is defined. Recommendations on decision-making procedures aimed at increasing the efficiency of the enterprise are provided.

  18. A Bayesian network approach for modeling local failure in lung cancer

    International Nuclear Information System (INIS)

    Oh, Jung Hun; Craft, Jeffrey; Al Lozi, Rawan; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O; Bradley, Jeffrey D; El Naqa, Issam

    2011-01-01

    Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, no significant improvement has been reported in their prospective application. Based on recent studies of the role of biomarker proteins in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors within a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively and comprises clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can be used as an efficient way to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had a slightly higher performance compared to the individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.
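
    A toy illustration of the graphical idea: a three-node network (dose and a biomarker as parents of local failure) with hand-set conditional probability tables, queried by direct enumeration. The structure and all probabilities are invented; the paper learns both from the clinical datasets.

```python
p_dose_high = 0.5                 # P(dose = high), assumed
p_marker_high = 0.3               # P(biomarker = high), assumed
p_fail = {(1, 1): 0.15, (1, 0): 0.30,     # P(failure | dose, marker), assumed
          (0, 1): 0.35, (0, 0): 0.55}

def p_failure(dose=None, marker=None):
    """P(local failure | evidence) by enumeration over the two parent nodes."""
    num = den = 0.0
    for d in (0, 1):
        for m in (0, 1):
            if (dose is not None and d != dose) or (marker is not None and m != marker):
                continue
            w = (p_dose_high if d else 1 - p_dose_high) * \
                (p_marker_high if m else 1 - p_marker_high)
            num += w * p_fail[(d, m)]
            den += w
    return num / den

print("marginal risk:", round(p_failure(), 3))
print("risk | high dose, low marker:", p_failure(dose=1, marker=0))
```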

  19. Functional computed tomography imaging of tumor-induced angiogenesis. Preliminary results of new tracer kinetic modeling using a computer discretization approach

    International Nuclear Information System (INIS)

    Kaneoya, Katsuhiko; Ueda, Takuya; Suito, Hiroshi

    2008-01-01

    The aim of this study was to establish functional computed tomography (CT) imaging as a method for assessing tumor-induced angiogenesis. Functional CT imaging was mathematically analyzed for 14 renal cell carcinomas by means of two-compartment modeling using a computer-discretization approach. The model incorporated diffusible kinetics of contrast medium including leakage from the capillary to the extravascular compartment and back-flux to the capillary compartment. The correlations between functional CT parameters [relative blood volume (rbv), permeability 1 (Pm1), and permeability 2 (Pm2)] and histopathological markers of angiogenesis [microvessel density (MVD) and vascular endothelial growth factor (VEGF)] were statistically analyzed. The modeling was successfully performed, showing similarity between the mathematically simulated curve and the measured time-density curve. There were significant linear correlations between MVD grade and Pm1 (r=0.841, P=0.001) and between VEGF grade and Pm2 (r=0.804, P=0.005) by Pearson's correlation coefficient. This method may be a useful tool for the assessment of tumor-induced angiogenesis. (author)
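
    A sketch of a two-compartment kinetic model of the kind described: contrast leaks from plasma to the extravascular space at rate k1 and fluxes back at k2, and the simulated tissue curve mixes the vascular and extravascular signals. The input function and all parameter values are assumed, standing in for the paper's computer-discretization fit.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cp(t):
    """Assumed arterial input function (gamma-variate shape, arbitrary units)."""
    return np.where(t > 5.0, (t - 5.0) ** 2 * np.exp(-(t - 5.0) / 4.0), 0.0)

k1, k2, rbv = 0.25, 0.10, 0.08        # leakage, back-flux, blood volume (assumed)
sol = solve_ivp(lambda t, ce: k1 * cp(t) - k2 * ce, (0.0, 120.0), [0.0],
                dense_output=True, max_step=0.5)

t = np.linspace(0.0, 120.0, 200)
tissue = rbv * cp(t) + sol.sol(t)[0]  # vascular plus extravascular contributions
print("peak of simulated time-density curve:", tissue.max().round(2))
```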

  20. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    Science.gov (United States)

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias, evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting, such that the set of parameters that maximizes the high-flow NS differs from the set of parameters that maximizes the low-flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics (see the sketch below). Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading, since it represents one point on the tradeoff surface of the desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of the selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model is currently under development by Environment Canada.
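
    A small illustration, assuming a median split into high and low flows and arbitrary weights, of the Nash-Sutcliffe metric and the weighted-sum aggregation that the bi-objective approach avoids:

        import numpy as np

        def nash_sutcliffe(obs, sim):
            """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
            obs, sim = np.asarray(obs), np.asarray(sim)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def aggregated_objective(obs, sim, w_high=0.5):
            """Weighted sum of high-flow and low-flow NS (median split)."""
            obs, sim = np.asarray(obs), np.asarray(sim)
            high = obs >= np.median(obs)
            ns_high = nash_sutcliffe(obs[high], sim[high])
            ns_low = nash_sutcliffe(obs[~high], sim[~high])
            return w_high * ns_high + (1.0 - w_high) * ns_low

        # A bi-objective calibration would instead keep (ns_high, ns_low) as a
        # pair and approximate the Pareto tradeoff between the two metrics.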

  1. Atmospheric Deposition Modeling Results

    Data.gov (United States)

    U.S. Environmental Protection Agency — This asset provides data on model results for dry and total deposition of sulfur, nitrogen and base cation species. Components include deposition velocities, dry...

  2. New results in the Dual Parton Model

    International Nuclear Information System (INIS)

    Van, J.T.T.; Capella, A.

    1984-01-01

    In this paper, the similarity between the x distributions for particle production observed in e+e- collisions and the fragmentation functions measured in deep inelastic scattering is presented. Based on this observation, the authors develop a complete approach to multiparticle production which incorporates the most important features and concepts learned about high energy collisions. 1. Topological expansion: the dominant diagram at high energy corresponds to the simplest topology. 2. Unitarity: diagrams of various topologies contribute to the cross sections in a way that unitarity is preserved. 3. Regge behaviour and duality. 4. Partonic structure of hadrons. These general theoretical ideas result from many joint experimental and theoretical efforts on the study of soft hadron physics. The dual parton model is able to explain all the experimental features from FNAL to SPS collider energies. It has all the properties of an S-matrix theory and provides a unified description of hadron-hadron, hadron-nucleus and nucleus-nucleus collisions.

  3. Generalised additive modelling approach to the fermentation process of glutamate.

    Science.gov (United States)

    Liu, Chun-Bo; Li, Yun; Pan, Feng; Shi, Zhong-Ping

    2011-03-01

    In this work, generalised additive models (GAMs) were used for the first time to model the fermentation of glutamate (Glu). It was found that three fermentation parameters, fermentation time (T), dissolved oxygen (DO) and oxygen uptake rate (OUR), could capture 97% of the variance in Glu production during the fermentation process through a GAM model calibrated using online data from 15 fermentation experiments. This model was applied to investigate the individual and combined effects of T, DO and OUR on the production of Glu. Conditions to optimize the fermentation process were proposed based on a simulation study with this model. Results suggested that the production of Glu can reach a high level by controlling the concentration levels of DO and OUR to the proposed optimization conditions during the fermentation process. The GAM approach therefore provides an alternative way to model and optimize the fermentation process of Glu.
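
    A minimal sketch of fitting such an additive model with the pyGAM library; the synthetic data stands in for the online fermentation measurements, and the smooth-term choices are illustrative assumptions, not the study's calibration.

        import numpy as np
        from pygam import LinearGAM, s

        # Hypothetical data: columns are fermentation time (T), dissolved
        # oxygen (DO) and oxygen uptake rate (OUR); y is Glu concentration.
        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, size=(500, 3))
        y = (np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2]
             + rng.normal(scale=0.05, size=500))

        # One smooth (spline) term per parameter, as in an additive model
        # y = f1(T) + f2(DO) + f3(OUR) + error.
        gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, y)
        gam.summary()  # deviance explained plays the role of the 97% figure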

  4. Are individual based models a suitable approach to estimate population vulnerability? - a case study

    Directory of Open Access Journals (Sweden)

    Eva Maria Griebeler

    2011-04-01

    European populations of the Large Blue butterfly Maculinea arion have experienced severe declines in recent decades, especially in the northern part of the species' range. This endangered lycaenid butterfly needs two resources for development: flower buds of specific plants (Thymus spp., Origanum vulgare), on which young caterpillars briefly feed, and red ants of the genus Myrmica, whose nests support caterpillars during a prolonged final instar. I present an analytically solvable deterministic model to estimate the vulnerability of populations of M. arion. Results obtained from the sensitivity analysis of this mathematical model (MM) are contrasted with the respective results that had been derived from a spatially explicit individual based model (IBM) for this butterfly. I demonstrate that details in landscape configuration which are neglected by the MM but are easily taken into consideration by the IBM result in a different degree of intraspecific competition of caterpillars on flower buds and within host ant nests. The resulting differences in caterpillar mortalities lead to erroneous estimates of the extinction risk of a butterfly population living in habitat with low food plant coverage and low abundance of host ant nests. This observation favors the use of an individual based modeling approach over the deterministic approach, at least for the management of this threatened butterfly.

  5. An approach to modelling radiation damage by fast ionizing particles

    International Nuclear Information System (INIS)

    Thomas, G.E.

    1987-01-01

    The paper presents a statistical approach to modelling radiation damage in small biological structures such as enzymes, viruses, and some cells. Irreparable damage is assumed to be caused by the occurrence of ionizations within sensitive regions. For structures containing double-stranded DNA, one or more ionizations occurring within each strand of the DNA will cause inactivation; for simpler structures without double-stranded DNA a single ionization within the structure will be sufficient for inactivation. Damaging ionizations occur along tracks of primary irradiating particles or along tracks of secondary particles released at primary ionizations. An inactivation probability is derived for each damage mechanism, expressed in integral form in terms of the radius of the biological structure (assumed spherical), rate of ionization along primary tracks, and maximum energy for secondary particles. The performance of each model is assessed by comparing results from the model with those derived from data from various experimental studies extracted from the literature. For structures where a single ionization is sufficient for inactivation, the model gives qualitatively promising results; for larger more complex structures containing double-stranded DNA, the model requires further refinements. (author)
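
    As a worked illustration of the single-hit mechanism described above, assume ionizations in the sensitive volume follow a Poisson distribution; the inactivation probabilities then take a simple closed form. The mean counts below are invented for the example.

        import numpy as np

        def p_inactivation_single(mean_ionizations):
            """Single-hit model: inactivation if at least one ionization
            occurs in the sensitive volume, P = 1 - exp(-m)."""
            return 1.0 - np.exp(-mean_ionizations)

        def p_inactivation_double(mean_per_strand):
            """Simplified double-strand variant: one or more ionizations
            required in each strand, strands assumed independent here."""
            p_strand = 1.0 - np.exp(-mean_per_strand)
            return p_strand ** 2

        # The mean count scales with dose and target volume; e.g. a dose
        # giving on average 0.5 ionizations in the target:
        print(p_inactivation_single(0.5))   # ~0.39
        print(p_inactivation_double(0.5))   # ~0.15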

  6. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    Science.gov (United States)

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
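
    A compressed sketch of the two-stage idea on a linear oscillator, using a Savitzky-Golay filter as a stand-in for the GLLA/GOLD/FDA derivative estimators discussed above (the true procedures differ in their weighting schemes): estimate derivatives from the series first, then regress them on the states to recover the ODE parameters.

        import numpy as np
        from scipy.signal import savgol_filter

        # Stage 0: simulate noisy observations of x'' = -eta*x - zeta*x'.
        dt, eta, zeta = 0.1, 1.5, 0.2
        t = np.arange(0.0, 30.0, dt)
        x = np.empty_like(t); v = np.empty_like(t)
        x[0], v[0] = 1.0, 0.0
        for i in range(len(t) - 1):
            a = -eta * x[i] - zeta * v[i]
            v[i + 1] = v[i] + a * dt
            x[i + 1] = x[i] + v[i + 1] * dt
        obs = x + np.random.default_rng(1).normal(scale=0.01, size=len(t))

        # Stage 1: estimate first and second derivatives from the data.
        dx = savgol_filter(obs, 11, 3, deriv=1, delta=dt)
        ddx = savgol_filter(obs, 11, 3, deriv=2, delta=dt)

        # Stage 2: least-squares fit of ddx = -eta*x - zeta*dx.
        A = np.column_stack([-obs, -dx])
        eta_hat, zeta_hat = np.linalg.lstsq(A, ddx, rcond=None)[0]
        print(eta_hat, zeta_hat)  # should approach 1.5 and 0.2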

  7. Quasirelativistic quark model in quasipotential approach

    CERN Document Server

    Matveev, V A; Savrin, V I; Sissakian, A N

    2002-01-01

    The interaction of relativistic particles is described within the framework of the quasipotential approach. The presentation is based on the so-called covariant simultaneous formulation of quantum field theory, whereby the theory is considered on a space-like three-dimensional hypersurface in Minkowski space. Special attention is paid to the methods of constructing various quasipotentials as well as to the applications of the quasipotential approach to describing the characteristics of relativistic particle interactions in quark models, namely: the elastic scattering amplitudes of hadrons, the mass spectra and decay widths of mesons, and the cross sections of deep inelastic lepton scattering on hadrons.

  8. The redshift distribution of cosmological samples: a forward modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina, E-mail: joerg.herbel@phys.ethz.ch, E-mail: tomasz.kacprzak@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch, E-mail: claudio.bruderer@phys.ethz.ch, E-mail: andrina.nicola@phys.ethz.ch [Institute for Astronomy, Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, 8093 Zürich (Switzerland)

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  9. The redshift distribution of cosmological samples: a forward modeling approach

    Science.gov (United States)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  10. The redshift distribution of cosmological samples: a forward modeling approach

    International Nuclear Information System (INIS)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-01-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
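
    A toy sketch of the ABC rejection step underlying the method: draw population parameters from a prior, push them through the forward model, and keep those whose simulated summary statistics land close to the observed ones. The forward model below is a one-number stand-in for the full UFig image simulation.

        import numpy as np

        rng = np.random.default_rng(42)

        def forward_model(theta, n=1000):
            """Stand-in for the image simulator: a summary statistic of a
            simulated galaxy sample given population parameter theta."""
            return rng.normal(theta, 1.0, size=n).mean()

        obs_summary = 0.8          # summary statistic measured on real data
        tolerance = 0.05
        accepted = []

        for _ in range(20000):
            theta = rng.uniform(-2.0, 2.0)        # draw from the prior
            if abs(forward_model(theta) - obs_summary) < tolerance:
                accepted.append(theta)            # model consistent with data

        # The accepted parameter sets play the role of the "acceptable
        # models"; applying the sample cuts to each yields a set of n(z)
        # distributions and hence the posterior on n(z).
        print(len(accepted), np.mean(accepted))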

  11. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    . Their developments, however, are largely due to experiment based trial and error approaches and while they do not require validation, they can be time consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply...... a model-based synthesis method to systematically generate and evaluate alternatives in the first stage and an experiment-model based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved...... for focused validation of only the promising candidates in the second-stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based...

  12. A coupled model approach to reduce nonpoint-source pollution resulting from predicted urban growth: A case study in the Ambos Nogales watershed

    Science.gov (United States)

    Norman, L.M.; Guertin, D.P.; Feller, M.

    2008-01-01

    The development of new approaches for understanding processes of urban development and their environmental effects, as well as strategies for sustainable management, is essential in expanding metropolitan areas. This study illustrates the potential of linking urban growth and watershed models to identify problem areas and support long-term watershed planning. Sediment is a primary source of nonpoint-source pollution in surface waters. In urban areas, sediment is intermingled with other surface debris in transport. In an effort to forecast the effects of development on surface-water quality, changes predicted in urban areas by the SLEUTH urban growth model were applied in the context of erosion-sedimentation models (Universal Soil Loss Equation and Spatially Explicit Delivery Models). The models are used to simulate the effect of excluding hot-spot areas of erosion and sedimentation from future urban growth and to predict the impacts of alternative erosion-control scenarios. Ambos Nogales, meaning 'both Nogaleses,' is a name commonly used for the twin border cities of Nogales, Arizona and Nogales, Sonora, Mexico. The Ambos Nogales watershed has experienced a decrease in water quality as a result of urban development in the twin-city area. Population growth rates in Ambos Nogales are high and the resources set in place to accommodate the rapid population influx will soon become overburdened. Because of its remote location and binational governance, monitoring and planning across the border is compromised. One scenario described in this research portrays an improvement in water quality through the identification of high-risk areas using models that simulate their protection from development and replanting with native grasses, while permitting the predicted and inevitable growth elsewhere. This is meant to add to the body of knowledge about forecasting the impact potential of urbanization on sediment delivery to streams for sustainable development, which can be
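
    For reference, a minimal sketch of the Universal Soil Loss Equation named above, A = R * K * LS * C * P; the factor values are invented, and in a real application they would come from rainfall, soil, terrain and land-cover grids.

        def usle_soil_loss(R, K, LS, C, P):
            """Universal Soil Loss Equation: A = R * K * LS * C * P,
            where A is the average annual soil loss per unit area."""
            return R * K * LS * C * P

        # Hypothetical cell values: rainfall erosivity R, soil erodibility K,
        # slope length/steepness LS, cover factor C, support practice P.
        baseline = usle_soil_loss(R=300.0, K=0.3, LS=1.2, C=0.2, P=1.0)
        replanted = usle_soil_loss(R=300.0, K=0.3, LS=1.2, C=0.05, P=1.0)
        print(baseline, replanted)  # replanting with native grasses lowers C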

  13. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare-event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combine ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Together, these ideas produce a very efficient method that has the right theoretical robustness property, namely Bounded Relative Error. Some examples illustrate the results.
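
    A compact illustration of the rare-event problem and the importance-sampling idea that zero-variance approximations refine: estimate a small tail probability by sampling from a tilted density and reweighting. The densities and threshold below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(7)
        THRESH = 6.0   # rare event: standard exponential sample exceeds 6
        N = 100000

        # Crude Monte Carlo: almost no samples hit the event.
        crude = (rng.exponential(1.0, N) > THRESH).mean()

        # Importance sampling: draw from a heavier exponential (mean 6)
        # and reweight by the likelihood ratio f(x)/g(x).
        x = rng.exponential(6.0, N)
        weights = np.where(x > THRESH,
                           np.exp(-x) / (np.exp(-x / 6.0) / 6.0), 0.0)
        is_est = weights.mean()

        print(crude, is_est, np.exp(-THRESH))  # exact value exp(-6) ~ 0.00248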

  14. Principles and approaches for numerical modelling of sediment transport in sewers

    DEFF Research Database (Denmark)

    Mark, Ole; Larsen, Torben; Appelgren, Cecilia

    1994-01-01

    A study has been carried out at the University of Aalborg, Denmark and VBB VIAK, Sweden with the objectives to describe the effect of sediment deposits on the hydraulic capacity of sewer systems and to investigate the sediment transport in sewer systems. A result of the study is the mathematical model MOUSE ST, which describes the sediment transport in sewers. This paper discusses the applicability and the limitations of various modelling approaches and sediment transport formulations in MOUSE ST. The study was funded by the Swedish Water and Waste Works Association and the Nordic Industrial...

  15. Quantification of uncertainties in turbulence modeling: A comparison of physics-based and random matrix theoretic approaches

    International Nuclear Information System (INIS)

    Wang, Jian-Xun; Sun, Rui; Xiao, Heng

    2016-01-01

    Highlights: • Compared physics-based and random matrix methods to quantify RANS model uncertainty. • Demonstrated applications of both methods in channel flow over periodic hills. • Examined the amount of information introduced in the physics-based approach. • Discussed implications for modeling turbulence in both near-wall and separated regions. - Abstract: Numerical models based on Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, the RANS predictions have large model-form uncertainties for many complex flows, e.g., those with non-parallel shear layers or strong mean flow curvature. Quantification of these large uncertainties originating from the modeled Reynolds stresses has attracted attention in the turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging in the current form of the physics-based approach. Another recently proposed method based on random matrix theory provides prior distributions with maximum entropy, which is an alternative for model-form uncertainty quantification in RANS simulations. This method has better mathematical rigor and provides the most non-committal prior distributions without introducing artificial constraints. On the other hand, the physics-based approach has the advantage of being more flexible in incorporating available physical insights. In this work, we compare and discuss the advantages and disadvantages of the two approaches for model-form uncertainty quantification. In addition, we utilize the random matrix theoretic approach to assess and possibly improve the specification of priors used in the physics-based approach. The comparison is conducted through a test case using a canonical flow, the flow past periodic hills.

  16. It's the parameters, stupid! Moving beyond multi-model and multi-physics approaches to characterize and reduce predictive uncertainty in process-based hydrological models

    Science.gov (United States)

    Clark, Martyn; Samaniego, Luis; Freer, Jim

    2014-05-01

    Multi-model and multi-physics approaches are a popular tool in environmental modelling, with many studies focusing on optimally combining output from multiple model simulations to reduce predictive errors and better characterize predictive uncertainty. However, a careful and systematic analysis of different hydrological models reveals that individual models are simply small permutations of a master modeling template, and inter-model differences are overwhelmed by uncertainty in the choice of the parameter values in the model equations. Furthermore, inter-model differences do not explicitly represent the uncertainty in modeling a given process, leading to many situations where different models provide the wrong results for the same reasons. In other cases, the available morphological data does not support the very fine spatial discretization of the landscape that typifies many modern applications of process-based models. To make the uncertainty characterization problem worse, the uncertain parameter values in process-based models are often fixed (hard-coded), and the models lack the agility necessary to represent the tremendous heterogeneity in natural systems. This presentation summarizes results from a systematic analysis of uncertainty in process-based hydrological models, where we explicitly analyze the myriad of subjective decisions made throughout both the model development and parameter estimation process. Results show that much of the uncertainty is aleatory in nature - given a "complete" representation of dominant hydrologic processes, uncertainty in process parameterizations can be represented using an ensemble of model parameters. Epistemic uncertainty associated with process interactions and scaling behavior is still important, and these uncertainties can be represented using an ensemble of different spatial configurations. Finally, uncertainty in forcing data can be represented using ensemble methods for spatial meteorological analysis. Our systematic

  17. Thermal radiation transfer calculations in combustion fields using the SLW model coupled with a modified reference approach

    Science.gov (United States)

    Darbandi, Masoud; Abrar, Bagher

    2018-01-01

    The spectral-line weighted-sum-of-gray-gases (SLW) model is a modern global model that can be used to predict thermal radiation heat transfer within combustion fields. Past users of the SLW model have mostly employed the reference approach to calculate the local values of the gray gases' absorption coefficients. This classical reference approach assumes that the absorption spectra of gases at different thermodynamic conditions are scalable with the absorption spectrum of the gas at a reference thermodynamic state in the domain. However, this assumption is not reasonable in combustion fields, where the gas temperature can be very different from the reference temperature. Consequently, the results of the SLW model combined with the classical reference approach (hereafter the classical SLW method) are highly sensitive to the magnitude of the reference temperature in non-isothermal combustion fields. To lessen this sensitivity, the current work combines the SLW model with a modified reference approach, a particular one among the eight possible reference approach forms reported recently by Solovjov et al. [DOI: 10.1016/j.jqsrt.2017.01.034, 2017]. The combination is called the "modified SLW method". This work shows that the modified reference approach can provide more accurate total emissivity calculations than the classical reference approach when coupled with the SLW method, which is particularly helpful for accurate calculation of radiation transfer in highly non-isothermal combustion fields. To demonstrate this, we use both the classical and modified SLW methods to calculate the radiation transfer in such fields, and show that the modified SLW method almost eliminates the sensitivity of the results to the chosen reference temperature.
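
    The gray-gas superposition at the heart of SLW-type models can be sketched directly: total emissivity is a weighted sum over gray gases, eps = sum_j a_j * (1 - exp(-kappa_j * p * L)). The weights and absorption coefficients below are invented; in the actual SLW model the weights depend on the local gas temperature, which is exactly where the choice of reference state enters.

        import numpy as np

        def total_emissivity(weights, kappas, p, L):
            """Weighted-sum-of-gray-gases emissivity:
            eps = sum_j a_j * (1 - exp(-kappa_j * p * L))."""
            weights, kappas = np.asarray(weights), np.asarray(kappas)
            return np.sum(weights * (1.0 - np.exp(-kappas * p * L)))

        # Hypothetical 3-gray-gas fit (weights sum to <= 1; the remainder
        # is the transparent "clear gas"); kappa in 1/(atm*m), p in atm, L in m.
        a = [0.3, 0.25, 0.15]
        kappa = [0.5, 5.0, 50.0]
        print(total_emissivity(a, kappa, p=0.2, L=2.0))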

  18. An Application of Bayesian Approach in Modeling Risk of Death in an Intensive Care Unit.

    Science.gov (United States)

    Wong, Rowena Syn Yin; Ismail, Noor Azina

    2016-01-01

    There are not many studies that attempt to model intensive care unit (ICU) risk of death in developing countries, especially in South East Asia. The aim of this study was to propose and describe the application of a Bayesian approach to modeling in-ICU deaths in a Malaysian ICU. This was a prospective study in a mixed medical-surgical ICU in a multidisciplinary tertiary referral hospital in Malaysia. Data collection included variables defined in the Acute Physiology and Chronic Health Evaluation IV (APACHE IV) model. A Bayesian Markov Chain Monte Carlo (MCMC) simulation approach was applied in the development of four multivariate logistic regression predictive models for the ICU, where the main outcome measure was in-ICU mortality risk. The performance of the models was assessed through overall model fit, discrimination and calibration measures. Results from the Bayesian models were also compared against results obtained using the frequentist maximum likelihood method. The study involved 1,286 consecutive ICU admissions between January 1, 2009 and June 30, 2010, of which 1,111 met the inclusion criteria. Patients who were admitted to the ICU were generally younger, predominantly male, with low co-morbidity load and mostly under mechanical ventilation. The overall in-ICU mortality rate was 18.5% and the overall mean Acute Physiology Score (APS) was 68.5. All four models exhibited good discrimination, with area under receiver operating characteristic curve (AUC) values approximately 0.8. Calibration was acceptable (Hosmer-Lemeshow p-values > 0.05) for all models, except for model M3. Model M1 was identified as the model with the best overall performance in this study. Four prediction models were proposed, where the best model was chosen based on its overall performance in this study. This study has also demonstrated the promising potential of the Bayesian MCMC approach as an alternative in the analysis and modeling of in-ICU mortality outcomes.
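
    A minimal sketch of a Bayesian logistic regression fitted by MCMC, the kind of model developed in the study, using the PyMC library (v4+ API); the predictors (age and APS) and the simulated data are illustrative, not the study's.

        import numpy as np
        import pymc as pm

        rng = np.random.default_rng(3)
        n = 500
        age = rng.normal(55.0, 15.0, n)
        aps = rng.normal(68.5, 20.0, n)          # Acute Physiology Score
        logit = -6.0 + 0.02 * age + 0.05 * aps
        died = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        with pm.Model():
            b0 = pm.Normal("intercept", 0.0, 10.0)
            b1 = pm.Normal("b_age", 0.0, 1.0)
            b2 = pm.Normal("b_aps", 0.0, 1.0)
            p = pm.math.sigmoid(b0 + b1 * age + b2 * aps)
            pm.Bernoulli("in_icu_death", p=p, observed=died)
            idata = pm.sample(1000, tune=1000, chains=2)  # MCMC posterior

        # Posterior means of the coefficients give the mortality model;
        # discrimination (AUC) and calibration (Hosmer-Lemeshow) would then
        # be checked as in the paper.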

  19. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    Science.gov (United States)

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.

  20. An interdisciplinary approach for earthquake modelling and forecasting

    Science.gov (United States)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquakes are among the most serious disasters, causing heavy casualties and economic losses; in the past two decades especially, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the event at present is controlled by the event itself (self-exciting) and all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is that of a time-varying Poisson process with rate λ(t), composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
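
    A sketch of such a conditional intensity with an exponential self-exciting kernel and a generic external-excitation term; the parameter values and the external signal are placeholders, not fitted quantities.

        import numpy as np

        MU, ALPHA, BETA, GAMMA = 0.1, 0.5, 1.2, 0.3  # hypothetical rates

        def intensity(t, event_times, external):
            """lambda(t) = background + self-excitation from past events
            + external excitation from past non-seismic observations."""
            past = np.asarray([s for s in event_times if s < t])
            self_term = np.sum(ALPHA * np.exp(-BETA * (t - past)))
            return MU + self_term + GAMMA * external(t)

        # Toy external signal, e.g. an electromagnetic precursor index.
        external = lambda t: 0.5 if 8.0 < t < 12.0 else 0.0
        events = [1.0, 3.5, 9.2]
        print(intensity(10.0, events, external))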

  1. Constructing a justice model based on Sen's capability approach

    OpenAIRE

    Yüksel, Sevgi

    2008-01-01

    The thesis provides a possible justice model based on Sen's capability approach. For this goal, we first analyze the general structure of a theory of justice, identifying the main variables and issues. Furthermore, based on Sen (2006) and Kolm (1998), we look at 'transcendental' and 'comparative' approaches to justice and concentrate on the sufficiency condition for the comparative approach. Then, taking Rawls' theory of justice as a starting point, we present how Sen's capability approach em...

  2. Biotic interactions in the face of climate change: a comparison of three modelling approaches.

    Directory of Open Access Journals (Sweden)

    Anja Jaeschke

    Climate change is expected to alter biotic interactions, and may lead to temporal and spatial mismatches of interacting species. Although the importance of interactions for climate change risk assessments is increasingly acknowledged in observational and experimental studies, biotic interactions are still rarely incorporated in species distribution models. We assessed the potential impacts of climate change on the obligate interaction between Aeshna viridis and its egg-laying plant Stratiotes aloides in Europe, based on an ensemble modelling technique. We compared three different approaches for incorporating biotic interactions in distribution models: (1) We separately modelled each species based on climatic information, and intersected the future range overlap ('overlap approach'). (2) We modelled the potential future distribution of A. viridis with the projected occurrence probability of S. aloides as a further predictor in addition to climate ('explanatory variable approach'). (3) We calibrated the model of A. viridis in the current range of S. aloides and multiplied the future occurrence probabilities of both species ('reference area approach'). Subsequently, all approaches were compared to a single species model of A. viridis without interactions. All approaches projected a range expansion for A. viridis. Model performance on test data and amount of range gain differed depending on the biotic interaction approach. All interaction approaches yielded lower range gains (up to 667% lower) than the model without interaction. Regarding the contribution of algorithm and approach to the overall uncertainty, the main part of the explained variation stems from the modelling algorithm, and only a small part is attributed to the modelling approach. The comparison of the no-interaction model with the three interaction approaches emphasizes the importance of including obligate biotic interactions in projective species distribution modelling. We recommend the use of

  3. Top-down approach to unified supergravity models

    International Nuclear Information System (INIS)

    Hempfling, R.

    1994-03-01

    We introduce a new approach for studying unified supergravity models. In this approach all the parameters of the grand unified theory (GUT) are fixed by imposing the corresponding number of low energy observables. This determines the remaining particle spectrum whose dependence on the low energy observables can now be investigated. We also include some SUSY threshold corrections that have previously been neglected. In particular the SUSY threshold corrections to the fermion masses can have a significant impact on the Yukawa coupling unification. (orig.)

  4. A robust Bayesian approach to modeling epistemic uncertainty in common-cause failure models

    International Nuclear Information System (INIS)

    Troffaes, Matthias C.M.; Walter, Gero; Kelly, Dana

    2014-01-01

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model
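
    A small numerical sketch of the imprecise Dirichlet update described above: prior lower and upper expectations for an alpha-factor, together with the learning parameter s, bound the posterior expectation after observing data. The counts and prior bounds below are invented.

        def posterior_bounds(n_k, n_total, s, lower_prior, upper_prior):
            """Imprecise Dirichlet model: posterior lower/upper expectations
            for a multinomial probability, with learning parameter s."""
            lower = (s * lower_prior + n_k) / (s + n_total)
            upper = (s * upper_prior + n_k) / (s + n_total)
            return lower, upper

        # Hypothetical: 3 common-cause events among 50 failures; prior
        # expectation for this alpha-factor between 0.01 and 0.2; s = 2.
        lo, hi = posterior_bounds(n_k=3, n_total=50, s=2,
                                  lower_prior=0.01, upper_prior=0.2)
        print(lo, hi)   # bounds narrow as n grows; a larger s widens them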

  5. Evaluation of different approaches for modeling effects of acid rain on soils in China

    International Nuclear Information System (INIS)

    Larssen, T.; Schnoor, J.L.; Seip, H.M.; Dawei, Z.

    2000-01-01

    Acid deposition is an environmental problem of increasing concern in China. Acidic soils are common in the southern part of the country, and soil acidification caused by acid deposition is expected to occur. Here we test and apply two different approaches for modeling effects of acid deposition and compare results with observed data from sites throughout southern China. The dynamic model MAGIC indicates that, during the last few decades, soil acidification rates have increased considerably due to acid deposition. This acidification will continue if sulfur deposition is not reduced, and also if base cation deposition declines more rapidly than sulfur deposition. With the Steady State Mass Balance model (SSMB), and assuming that a molar ratio of Ca2+/Al3+ < 1 in soil water is harmful to vegetation, we estimate a slight probability of exceedance of the critical load at present deposition rates. Results from both modeling approaches show a strong dependence on deposition of base cations as well as sulfur. Hence, according to the models, controlling emissions of alkaline particulate matter before sulfur dioxide would be detrimental to the environment. Model calculations are, however, uncertain, particularly because available data on base cation deposition fluxes are scarce and because the model formulation of aluminum chemistry does not fully reproduce observations. An effort should be made to improve our present knowledge of deposition fluxes. Improvements to the model are suggested. Our work indicates that the critical loads presented in the regional acid deposition assessment model RAINS-Asia are too stringent. We find weaknesses in the SSMB approach, developed for northern European conditions, when applying it to Chinese conditions. We suggest an improved effort to revise the risk parameters for use in critical load estimates in China.

  6. A Model-Driven Approach to e-Course Management

    Science.gov (United States)

    Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana

    2018-01-01

    This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…

  7. Modelling the Heat Consumption in District Heating Systems using a Grey-box approach

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Madsen, Henrik

    2006-01-01

    identification of an overall model structure followed by data-based modelling, whereby the details of the model are identified. This approach is sometimes called grey-box modelling, but the specific approach used here does not require states to be specified. Overall, the paper demonstrates the power of the grey-box approach.

  8. Regional-scale brine migration along vertical pathways due to CO2 injection - Part 1: The participatory modeling approach

    Science.gov (United States)

    Scheer, Dirk; Konrad, Wilfried; Class, Holger; Kissinger, Alexander; Knopf, Stefan; Noack, Vera

    2017-06-01

    Saltwater intrusion into potential drinking water aquifers due to the injection of CO2 into deep saline aquifers is one of the potential hazards associated with the geological storage of CO2. Thus, in a site selection process, models for predicting the fate of the displaced brine are required, for example, for a risk assessment or the optimization of pressure management concepts. From the very beginning, this research on brine migration aimed at involving expert and stakeholder knowledge and assessment in simulating the impacts of injecting CO2 into deep saline aquifers by means of a participatory modeling process. The involvement exercise made use of two approaches. First, guideline-based interviews were carried out, aiming at eliciting expert and stakeholder knowledge and assessments of geological structures and mechanisms affecting CO2-induced brine migration. Second, a stakeholder workshop including the World Café format yielded evaluations and judgments of the numerical modeling approach, scenario selection, and preliminary simulation results. The participatory modeling approach gained several results covering brine migration in general, the geological model sketch, scenario development, and the review of the preliminary simulation results. These results were included in revised versions of both the geological model and the numerical model, helping to improve the analysis of regional-scale brine migration along vertical pathways due to CO2 injection.

  9. Modelling approach for anisotropic inter-ply slippage in finite element forming simulation of thermoplastic UD-tapes

    Science.gov (United States)

    Dörr, Dominik; Faisst, Markus; Joppich, Tobias; Poppe, Christian; Henning, Frank; Kärger, Luise

    2018-05-01

    Finite Element (FE) forming simulation offers the possibility of a detailed analysis of thermoforming processes by means of constitutive modelling of intra- and inter-ply deformation mechanisms, which makes manufacturing defects predictable. Inter-ply slippage is a deformation mechanism which influences the forming behaviour and which has so far usually been assumed to be isotropic in FE forming simulation; that is, the relative (fibre) orientation between the slipping plies is neglected in modelling frictional behaviour. Characterization results, however, reveal a dependency of frictional behaviour on the relative orientation of the slipping plies. In this work, an anisotropic model for inter-ply slippage is presented, based on an FE forming simulation approach implemented within several user subroutines of the commercially available FE solver Abaqus. This approach accounts for the relative orientation between the slipping plies in modelling frictional behaviour. For this purpose, the relative orientation of the slipping plies is consecutively evaluated, since it changes during forming due to inter-ply slippage and intra-ply shearing. The presented approach is parametrized based on characterization results with and without relative orientation for a thermoplastic UD-tape (PA6-CF) and applied to forming simulation of a generic geometry. Forming simulation results reveal an influence of considering relative fibre orientation on the simulation results; this influence, however, is small for the considered geometry.
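
    A sketch of the core ingredient, orientation-dependent friction: compute the relative fibre angle between two slipping plies and interpolate the friction coefficient between values characterized for parallel and perpendicular plies. The coefficients and the interpolation law below are illustrative assumptions, not the paper's parametrization.

        import numpy as np

        MU_PARALLEL, MU_PERPENDICULAR = 0.20, 0.35   # hypothetical values

        def relative_angle(dir_a, dir_b):
            """Angle between fibre directions of two plies, in [0, 90] deg."""
            a, b = np.asarray(dir_a, float), np.asarray(dir_b, float)
            cos = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))

        def friction_coefficient(dir_a, dir_b):
            """Smooth interpolation between parallel and perpendicular friction."""
            theta = np.radians(relative_angle(dir_a, dir_b))
            w = np.sin(theta) ** 2
            return (1.0 - w) * MU_PARALLEL + w * MU_PERPENDICULAR

        # Fibre directions rotate during forming (inter-ply slip, intra-ply
        # shear), so the coefficient must be re-evaluated at each increment.
        print(friction_coefficient([1.0, 0.0], [1.0, 1.0]))  # 45 deg case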

  10. Using the mean approach in pooling cross-section and time series data for regression modelling

    International Nuclear Information System (INIS)

    Nuamah, N.N.N.N.

    1989-12-01

    The mean approach is one of the methods for pooling cross-section and time series data for mathematical-statistical modelling. Though simple, the approach sometimes yields results that are paradoxical in nature; researchers nevertheless continue to use it for its simplicity. This paper investigates the nature and source of such unwanted phenomena. (author). 7 refs

  11. Leader communication approaches and patient safety: An integrated model.

    Science.gov (United States)

    Mattson, Malin; Hellgren, Johnny; Göransson, Sara

    2015-06-01

    Leader communication is known to influence a number of employee behaviors. When it comes to the relationship between leader communication and safety, the evidence is more scarce and ambiguous. The aim of the present study is to investigate whether, and in what way, leader communication relates to safety outcomes. The study examines two leader communication approaches: leader safety priority communication and feedback to subordinates. These approaches were assumed to affect safety outcomes via different employee behaviors. Questionnaire data, collected from 221 employees at two hospital wards, were analyzed using structural equation modeling. The two examined communication approaches were both positively related to safety outcomes, although leader safety priority communication was mediated by employee compliance and feedback communication by organizational citizenship behaviors. The findings suggest that leader communication plays a vital role in improving organizational and patient safety and that different communication approaches seem to positively affect different but equally essential employee safety behaviors. The results highlight the necessity for leaders to engage in one-way communication of safety values as well as in more relational feedback communication with their subordinates in order to enhance patient safety.

  12. Evaluating the improvements of the BOLAM meteorological model operational at ISPRA: A case study approach - preliminary results

    Science.gov (United States)

    Mariani, S.; Casaioli, M.; Lastoria, B.; Accadia, C.; Flavoni, S.

    2009-04-01

    A fully updated serial version of the BOLAM code has recently been acquired. Code improvements include a more precise advection scheme (Weighted Average Flux), explicit advection of five hydrometeors, and state-of-the-art parameterization schemes for radiation, convection, boundary layer turbulence and soil processes (with a possible choice among different available schemes). The operational implementation of the new code into the SIMM model chain, which requires the development of a parallel version, will be achieved during 2009. In view of this goal, comparative verification of the skill of the different model versions is a fundamental task. For this purpose, it was decided to evaluate the performance improvement of the new BOLAM code (in the available serial version, hereinafter BOLAM 2007) with respect to the version with the Kain-Fritsch scheme (hereinafter KF version) and to the older one employing the Kuo scheme (hereinafter Kuo version). In the present work, verification of precipitation forecasts from the three BOLAM versions is carried out in a case study approach. The intense rainfall episode that occurred on 10th-17th December 2008 over Italy is considered; this event produced severe damage in Rome and its surrounding areas. Objective and subjective verification methods have been employed to evaluate model performance against an observational dataset including rain gauge observations and satellite imagery. Subjective comparison of observed and forecast precipitation fields is suitable for giving an overall description of forecast quality. Spatial errors (e.g., shifting and pattern errors) and rainfall volume error can be assessed quantitatively by means of object-oriented methods. By comparing satellite images with model forecast fields, it is possible to investigate the differences between the evolution of the observed weather system and the predicted one, and its sensitivity to the improvements in the model code.

  13. A Partial Join Approach for Mining Co-Location Patterns: A Summary of Results

    National Research Council Canada - National Science Library

    Yoo, Jin S; Shekhar, Shashi

    2005-01-01

    .... They propose a novel partial-join approach for mining co-location patterns efficiently. It transactionizes continuous spatial data while keeping track of the spatial information not modeled by transactions...

  14. A new approach for modeling the peak utility impacts from a proposed CUAC standard

    Energy Technology Data Exchange (ETDEWEB)

    LaCommare, Kristina Hamachi; Gumerman, Etan; Marnay, Chris; Chan, Peter; Coughlin, Katie

    2004-08-01

    This report describes a new Berkeley Lab approach for modeling the likely peak electricity load reductions from proposed energy efficiency programs in the National Energy Modeling System (NEMS). This method is presented in the context of the commercial unitary air conditioning (CUAC) energy efficiency standards. A previous report investigating the residential central air conditioning (RCAC) load shapes in NEMS revealed that the peak reduction results were lower than expected. This effect was believed to be due in part to the presence of the squelch, a program algorithm designed to ensure changes in the system load over time are consistent with the input historic trend. The squelch applies a system load-scaling factor that scales any differences between the end-use bottom-up and system loads to maintain consistency with historic trends. To obtain more accurate peak reduction estimates, a new approach for modeling the impact of peaky end uses in NEMS-BT has been developed. The new approach decrements the system load directly, reducing the impact of the squelch on the final results. This report also discusses a number of additional factors, in particular non-coincidence between end-use loads and system loads as represented within NEMS, and their impacts on the peak reductions calculated by NEMS. Using Berkeley Lab's new double-decrement approach reduces the conservation load factor (CLF) on an input load decrement from 25% down to 19% for a SEER 13 CUAC trial standard level, as seen in NEMS-BT output. About 4 GW more in peak capacity reduction results from this new approach as compared to Berkeley Lab's traditional end-use decrement approach, which relied solely on lowering end use energy consumption. The new method has been fully implemented and tested in the Annual Energy Outlook 2003 (AEO2003) version of NEMS and will routinely be applied to future versions. This capability is now available for use in future end-use efficiency or other policy analysis

  15. Modeling and forecasting energy consumption for heterogeneous buildings using a physical–statistical approach

    International Nuclear Information System (INIS)

    Lü, Xiaoshu; Lu, Tao; Kibert, Charles J.; Viljanen, Martti

    2015-01-01

    Highlights: • This paper presents a new modeling method to forecast energy demands. • The model is based on physical–statistical approach to improving forecast accuracy. • A new method is proposed to address the heterogeneity challenge. • Comparison with measurements shows accurate forecasts of the model. • The first physical–statistical/heterogeneous building energy modeling approach is proposed and validated. - Abstract: Energy consumption forecasting is a critical and necessary input to planning and controlling energy usage in the building sector which accounts for 40% of the world’s energy use and the world’s greatest fraction of greenhouse gas emissions. However, due to the diversity and complexity of buildings as well as the random nature of weather conditions, energy consumption and loads are stochastic and difficult to predict. This paper presents a new methodology for energy demand forecasting that addresses the heterogeneity challenges in energy modeling of buildings. The new method is based on a physical–statistical approach designed to account for building heterogeneity to improve forecast accuracy. The physical model provides a theoretical input to characterize the underlying physical mechanism of energy flows. Then stochastic parameters are introduced into the physical model and the statistical time series model is formulated to reflect model uncertainties and individual heterogeneity in buildings. A new method of model generalization based on a convex hull technique is further derived to parameterize the individual-level model parameters for consistent model coefficients while maintaining satisfactory modeling accuracy for heterogeneous buildings. The proposed method and its validation are presented in detail for four different sports buildings with field measurements. The results show that the proposed methodology and model can provide a considerable improvement in forecasting accuracy

  16. An Associative Index Model for the Results List Based on Vannevar Bush's Selection Concept

    Science.gov (United States)

    Cole, Charles; Julien, Charles-Antoine; Leide, John E.

    2010-01-01

    Introduction: We define the results list problem in information search and suggest the "associative index model", an ad-hoc, user-derived indexing solution based on Vannevar Bush's description of an associative indexing approach for his memex machine. We further define what selection means in indexing terms with reference to Charles…

  17. Exact results for quantum chaotic systems and one-dimensional fermions from matrix models

    International Nuclear Information System (INIS)

    Simons, B.D.; Lee, P.A.; Altshuler, B.L.

    1993-01-01

    We demonstrate a striking connection between the universal parametric correlations of the spectra of quantum chaotic systems and a class of integrable quantum hamiltonians. We begin by deriving a non-perturbative expression for the universal m-point correlation function of the spectra of random matrix ensembles in terms of a non-linear supermatrix σ-model. These results are shown to coincide with those from previous studies of weakly disordered metallic systems. We then introduce a continuous matrix model which describes the quantum mechanics of the Sutherland hamiltonian describing particles interacting through an inverse-square pairwise potential. We demonstrate that a field theoretic approach can be employed to determine exact analytical expressions for correlations of the quantum hamiltonian. The results, which are expressed in terms of a non-linear σ-model, are shown to coincide with those for analogous correlation functions of random matrix ensembles after an appropriate change of variables. We also discuss possible generalizations of the matrix model to higher dimensions. These results reveal a common mathematical structure which underlies branches of theoretical physics ranging from continuous matrix models to strongly interacting quantum hamiltonians, and universalities in the spectra of quantum chaotic systems. (orig.)

  18. The thermoballistic transport model a novel approach to charge carrier transport in semiconductors

    CERN Document Server

    Lipperheide, Reinhard

    2014-01-01

    The book presents a comprehensive survey of the thermoballistic approach to charge carrier transport in semiconductors. This semi-classical approach, which the authors have developed over the past decade, bridges the gap between the opposing drift-diffusion and ballistic models of carrier transport. While incorporating basic features of the latter two models, the physical concept underlying the thermoballistic approach constitutes a novel, unifying scheme. It is based on the introduction of "ballistic configurations" arising from a random partitioning of the length of a semiconducting sample into ballistic transport intervals. Stochastic averaging of the ballistic carrier currents over the ballistic configurations results in a position-dependent thermoballistic current, which is the key element of the thermoballistic concept and forms the point of departure for the calculation of all relevant transport properties. In the book, the thermoballistic concept and its implementation are developed in great detail...

  19. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems

    Science.gov (United States)

    Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.

    2013-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887
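
    To make the rule concept concrete, the toy sketch below (loosely inspired by rule-based languages such as BNGL, but not their actual syntax) shows how a single binding rule expands implicitly into multiple concrete reactions, each inheriting the rule's rate law:

```python
# Minimal illustration (assumed toy example, not BioNetGen/Kappa syntax): one
# binding rule applied to a molecule with multiple free sites expands
# implicitly into several concrete reactions, all sharing the rule's rate.
from itertools import product

# A "molecule type" is a name plus its binding sites (simplified abstraction)
molecules = {"R": ["site1", "site2"], "L": ["b"]}

def expand_binding_rule(mols, rate):
    """Rule: any free site on R may bind the free site on L, at one rate."""
    reactions = []
    for r_site, l_site in product(mols["R"], mols["L"]):
        reactions.append({
            "reactants": (f"R({r_site})", f"L({l_site})"),
            "product": f"R({r_site}!1).L({l_site}!1)",  # bond labelled !1
            "rate": rate,  # every generated reaction inherits the rule's rate
        })
    return reactions

for rxn in expand_binding_rule(molecules, rate=1e-3):
    print(rxn)
```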

  20. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems.

    Science.gov (United States)

    Chylek, Lily A; Harris, Leonard A; Tung, Chang-Shung; Faeder, James R; Lopez, Carlos F; Hlavacek, William S

    2014-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and posttranslational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). © 2013 Wiley Periodicals, Inc.

  1. A Bayesian approach for the stochastic modeling error reduction of magnetic material identification of an electromagnetic device

    International Nuclear Information System (INIS)

    Abdallh, A; Crevecoeur, G; Dupré, L

    2012-01-01

    Magnetic material properties of an electromagnetic device can be recovered by solving an inverse problem where measurements are adequately interpreted by a mathematical forward model. The accuracy of these forward models dramatically affects the accuracy of the material properties recovered by the inverse problem: the more accurate the forward model is, the more accurate the recovered data are. However, more accurate ‘fine’ models demand high computational time and memory storage. Alternatively, less accurate ‘coarse’ models can be used, at the cost of higher expected recovery errors. This paper uses the Bayesian approximation error approach to improve the inverse problem results when coarse models are utilized. The proposed approach adapts the objective function to be minimized with the a priori misfit between fine and coarse forward model responses. In this paper, two different electromagnetic devices, namely a switched reluctance motor and an EI core inductor, are used as case studies. The proposed methodology is validated on both purely numerical and real experimental results. The results show a significant reduction in the recovery error within an acceptable computational time. (paper)
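
    A hedged sketch of the approximation-error idea follows: sample the prior, tabulate the misfit between a fine and a coarse forward model, and fold its mean and covariance into the coarse-model objective. The toy forward models and all parameters are hypothetical stand-ins for the electromagnetic models in the record.

```python
# Sketch of the Bayesian approximation-error idea under assumed toy models.
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 20)

def fine_model(theta):      # expensive, accurate forward model (toy)
    return theta[0] * np.sin(2 * np.pi * grid) + theta[1] * grid ** 2

def coarse_model(theta):    # cheap, biased forward model (toy)
    return theta[0] * np.sin(2 * np.pi * grid)

# Offline: characterize the approximation error e = fine - coarse over the prior
samples = rng.normal([1.0, 0.5], 0.2, size=(500, 2))
errors = np.array([fine_model(t) - coarse_model(t) for t in samples])
e_mean, e_cov = errors.mean(axis=0), np.cov(errors, rowvar=False)

noise_cov = 0.01 * np.eye(len(grid))
total_cov_inv = np.linalg.inv(noise_cov + e_cov)

def enhanced_objective(theta, measurement):
    # Coarse-model misfit, re-centred and re-weighted by the error statistics
    r = measurement - coarse_model(theta) - e_mean
    return r @ total_cov_inv @ r

truth = np.array([1.1, 0.6])
meas = fine_model(truth) + rng.normal(0, 0.1, len(grid))
print(enhanced_objective(truth, meas))
```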

  2. Designing water demand management schemes using a socio-technical modelling approach.

    Science.gov (United States)

    Baki, Sotiria; Rozos, Evangelos; Makropoulos, Christos

    2018-05-01

    Although it is now widely acknowledged that urban water systems (UWSs) are complex socio-technical systems and that a shift towards a socio-technical approach is critical in achieving sustainable urban water management, still, more often than not, UWSs are designed using a segmented modelling approach. As such, either the analysis focuses on the description of the purely technical sub-system, without explicitly taking into account the system's dynamic socio-economic processes, or a more interdisciplinary approach is followed, but delivered through relatively coarse models, which often fail to provide a thorough representation of the urban water cycle and hence cannot deliver accurate estimations of the hydrosystem's responses. In this work we propose an integrated modelling approach for the study of the complete socio-technical UWS that also takes into account socio-economic and climatic variability. We have developed an integrated model, which is used to investigate the diffusion of household water conservation technologies and its effects on the UWS, under different socio-economic and climatic scenarios. The integrated model is formed by coupling a System Dynamics model that simulates the water technology adoption process, and the Urban Water Optioneering Tool (UWOT) for the detailed simulation of the urban water cycle. The model and approach are tested and demonstrated in an urban redevelopment area in Athens, Greece under different socio-economic scenarios and policy interventions. It is suggested that the proposed approach can establish quantifiable links between socio-economic change and UWS responses and therefore assist decision makers in designing more effective and resilient long-term strategies for water conservation. Copyright © 2017 Elsevier B.V. All rights reserved.
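
    As an illustration of the adoption side only, a Bass-type diffusion model is one common System Dynamics formulation for technology uptake; the sketch below uses it as a hypothetical stand-in and does not reproduce the paper's coupling to UWOT. All parameter values are assumed.

```python
# Illustrative Bass-type adoption model feeding a simple demand-reduction
# estimate; parameters and the per-household saving are hypothetical.
import numpy as np

N = 10000             # households in the redevelopment area (assumed)
p, q = 0.01, 0.35     # innovation and imitation coefficients (assumed)
per_hh_saving = 30.0  # litres/day saved by an adopting household (assumed)

adopters = np.zeros(20)  # yearly time steps
for t in range(1, 20):
    a = adopters[t - 1]
    # Bass diffusion: external influence plus word-of-mouth imitation
    adopters[t] = a + (p + q * a / N) * (N - a)

demand_reduction = adopters * per_hh_saving / 1000.0  # m3/day
print(demand_reduction.round(1))
```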

  3. Results of the naive quark model

    International Nuclear Information System (INIS)

    Gignoux, C.

    1987-10-01

    The hypotheses and limits of the naive quark model are recalled, and results on nucleon-nucleon scattering and possible multiquark states are presented. The results show that Roper resonances do not emerge in this model. For hadron-hadron interactions, the model predicts Van der Waals forces that the resonance group method does not allow. Known many-body forces are not found in the model. The lack of mesons shows up in the absence of a far-reaching force. However, the model does have strengths: it is free from spurious center-of-mass motion, it allows a democratic handling of flavor, it has few parameters, and its predictions are very good [fr]

  4. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    International Nuclear Information System (INIS)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-01-01

    Purpose: Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
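
    The following is a generic sketch of MAP estimation minimized with a conjugate-gradient method, in the spirit of the record. The toy linear projector, the quadratic smoothness prior (the paper's cost is nonquadratic), and all sizes are assumptions.

```python
# Generic MAP-by-conjugate-gradient sketch, not the authors' DECT code.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 50
A = rng.normal(size=(80, n))          # toy linear projector (stand-in)
x_true = np.clip(np.sin(np.linspace(0, 3, n)), 0, None)
y = A @ x_true + rng.normal(0, 0.05, 80)

lam = 1.0
D = np.diff(np.eye(n), axis=0)        # first-difference operator

def neg_log_posterior(x):
    data = 0.5 * np.sum((A @ x - y) ** 2) / 0.05 ** 2   # Gaussian data term
    prior = 0.5 * lam * np.sum((D @ x) ** 2)            # smoothness prior
    return data + prior

res = minimize(neg_log_posterior, np.zeros(n), method="CG")
print("converged:", res.success, " cost:", neg_log_posterior(res.x).round(2))
```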

  5. Modeling for mechanical response of CICC by hierarchical approach and ABAQUS simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y.X.; Wang, X.; Gao, Y.W., E-mail: ywgao@lzu.edu.cn; Zhou, Y.H.

    2013-11-15

    Highlights: • We develop an analytical model based on the hierarchical approach of classical wire rope theory. • A numerical model is set up in ABAQUS to verify and enhance the theoretical model. • We calculate two mechanical responses of interest: the global displacement–load curve and the local axial strain distribution. • Elastic–plasticity dominates the loading curve, and the friction between adjacent strands plays a significant role in the strain distribution. -- Abstract: An unexpected degradation frequently occurs in superconducting cable-in-conduit conductors (CICC) due to the mechanical response (deformation) under electromagnetic and thermal loads during operation. Because of the cable's hierarchical twisted configuration, it is difficult to model the mechanical response quantitatively. In addition, local mechanical characteristics such as the strain distribution can hardly be monitored experimentally. To address this issue, we develop an analytical model based on the hierarchical approach of classical wire rope theory. This approach follows an algorithm advancing successively from stage n + 1 (e.g. a 3 × 3 × 5 subcable) to stage n (e.g. a 3 × 3 subcable). No complicated numerical procedures are required in this model. Meanwhile, a numerical model is set up in ABAQUS to verify and enhance the theoretical model. Subsequently, we calculate two mechanical responses of interest: the global displacement–load curve and the local axial strain distribution. We find that elastic–plasticity dominates the global displacement–load curve, with higher-level cables showing enhanced nonlinear characteristics, while in the local strain distribution the friction among adjacent strands plays a significant role: its magnitude strongly influences the regularity of the distribution at the different twisted stages. More detailed results are presented in this paper.

  6. Modeling for mechanical response of CICC by hierarchical approach and ABAQUS simulation

    International Nuclear Information System (INIS)

    Li, Y.X.; Wang, X.; Gao, Y.W.; Zhou, Y.H.

    2013-01-01

    Highlights: • We develop an analytical model based on the hierarchical approach of classical wire rope theory. • A numerical model is set up in ABAQUS to verify and enhance the theoretical model. • We calculate two mechanical responses of interest: the global displacement–load curve and the local axial strain distribution. • Elastic–plasticity dominates the loading curve, and the friction between adjacent strands plays a significant role in the strain distribution. -- Abstract: An unexpected degradation frequently occurs in superconducting cable-in-conduit conductors (CICC) due to the mechanical response (deformation) under electromagnetic and thermal loads during operation. Because of the cable's hierarchical twisted configuration, it is difficult to model the mechanical response quantitatively. In addition, local mechanical characteristics such as the strain distribution can hardly be monitored experimentally. To address this issue, we develop an analytical model based on the hierarchical approach of classical wire rope theory. This approach follows an algorithm advancing successively from stage n + 1 (e.g. a 3 × 3 × 5 subcable) to stage n (e.g. a 3 × 3 subcable). No complicated numerical procedures are required in this model. Meanwhile, a numerical model is set up in ABAQUS to verify and enhance the theoretical model. Subsequently, we calculate two mechanical responses of interest: the global displacement–load curve and the local axial strain distribution. We find that elastic–plasticity dominates the global displacement–load curve, with higher-level cables showing enhanced nonlinear characteristics, while in the local strain distribution the friction among adjacent strands plays a significant role: its magnitude strongly influences the regularity of the distribution at the different twisted stages. More detailed results are presented in this paper.

  7. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…
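
    A minimal sketch of the constructive idea, in the spirit of cascade-correlation: hidden units are recruited one at a time, each chosen for its correlation with the current residual error, after which the output weights are refit. This is an illustration only, not Shultz's developmental models.

```python
# Simplified constructive network: grow hidden units one at a time.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]          # toy nonlinear target

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = np.ones((len(X), 1))                   # start with only a bias "unit"
for step in range(8):                      # grow the network unit by unit
    w, *_ = np.linalg.lstsq(H, y, rcond=None)   # refit output weights
    resid = y - H @ w
    print(f"hidden units={step}, rmse={np.sqrt(np.mean(resid**2)):.3f}")
    # Candidate pool: keep the random unit best correlated with the residual
    cands = [sigmoid(X @ rng.normal(size=2) + rng.normal()) for _ in range(25)]
    best = max(cands, key=lambda h: abs(np.corrcoef(h, resid)[0, 1]))
    H = np.column_stack([H, best])
```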

  8. Modular Modelling and Simulation Approach - Applied to Refrigeration Systems

    DEFF Research Database (Denmark)

    Sørensen, Kresten Kjær; Stoustrup, Jakob

    2008-01-01

    This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...

  9. Test of a simplified modeling approach for nitrogen transfer in agricultural subsurface-drained catchments

    Science.gov (United States)

    Henine, Hocine; Julien, Tournebize; Jaan, Pärn; Ülo, Mander

    2017-04-01

    In agricultural areas, the nitrogen (N) pollution load to surface waters depends on land use, agricultural practices, harvested N output, and the hydrology and climate of the catchment. Most N transfer models require large, complex data sets, which are generally difficult to collect at larger scales (>km2). The main objective of this study is to carry out hydrological and geochemical modeling using a simplified data set (land use/crop, fertilizer input, N losses from plots). The modelling approach was tested in the subsurface-drained Orgeval catchment (Paris Basin, France) based on the following assumptions: subsurface tile drains are considered as a giant lysimeter system, and the N concentration at drain outlets is representative of agricultural practices upstream. Analysis of the observed N load (90% of total N) shows that 62% of the export occurs during winter. We considered the prewinter nitrate (NO3) pool (PWNP) in soils at the beginning of the hydrological drainage season as a driving factor for N losses. The PWNP results from the part of the NO3 not used by crops and from the mineralization of organic matter during the preceding summer and autumn. Under these assumptions, we used the PWNP as simplified input data for the modelling of N transport; NO3 losses are then mainly influenced by the denitrification capacity of soils and stream water. The well-known HYPE model was used to model water and N losses. The hydrological simulation was calibrated against observation data at different sub-catchments. We performed a hydrograph separation validated against thermal and isotopic tracer studies and general knowledge of the behaviour of the Orgeval catchment. Our results show a good correlation between the model and the observations (a Nash-Sutcliffe coefficient of 0.75 for water discharge and 0.7 for N flux). Likewise, comparison of the calibrated PWNP values with the results of a field survey (annual PWNP campaign) showed a significant positive correlation. One can conclude that

  10. Comparison of two novel approaches to model fibre reinforced concrete

    NARCIS (Netherlands)

    Radtke, F.K.F.; Simone, A.; Sluys, L.J.

    2009-01-01

    We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity

  11. Merits of a Scenario Approach in Dredge Plume Modelling

    DEFF Research Database (Denmark)

    Pedersen, Claus; Chu, Amy Ling Chu; Hjelmager Jensen, Jacob

    2011-01-01

    Dredge plume modelling is a key tool for quantification of potential impacts to inform the EIA process. There are, however, significant uncertainties associated with the modelling at the EIA stage, when both dredging methodology and schedule are likely to be a guess at best, as the dredging contractor would rarely have been appointed. Simulation of a few variations of an assumed full dredge period programme will generally not provide a good representation of the overall environmental risks associated with the programme. An alternative dredge plume modelling strategy that attempts to encapsulate uncertainties associated with preliminary dredging programmes by using a scenario-based modelling approach is presented. The approach establishes a set of representative and conservative scenarios for key factors controlling the spill and plume dispersion and simulates all combinations of e.g. dredge, climatic...

  12. Engineering approach to model and compute electric power markets settlements

    International Nuclear Information System (INIS)

    Kumar, J.; Petrov, V.

    2006-01-01

    Back-office accounting settlement activities are an important part of market operations in Independent System Operator (ISO) organizations. A potential way to measure ISO market design correctness is to analyze how well market price signals create incentives or penalties for creating an efficient market to achieve market design goals. Market settlement rules are an important tool for implementing price signals, which are fed back to participants via the settlement activities of the ISO. ISOs are currently faced with the challenge of high volumes of data resulting from the increasing size of markets and ever-changing market designs, as well as the growing complexity of wholesale energy settlement business rules. This paper analyzed the problem and presented a practical engineering solution using an approach based on mathematical formulation and modeling of large-scale calculations. The paper also presented critical comments on differences among settlement design approaches in electric power market design, as well as further areas of development. The paper provided a brief introduction to wholesale energy market settlement systems and discussed the problem formulation. An actual settlement implementation framework and a discussion of the results and conclusions were also presented. It was concluded that a proper engineering approach to this domain can yield satisfying results by formalizing wholesale energy settlements. Significant improvements were observed in the initial preparation phase, scoping and effort estimation, implementation and testing. 5 refs., 2 figs

  13. Approach of regional gravity field modeling from GRACE data for improvement of geoid modeling for Japan

    Science.gov (United States)

    Kuroishi, Y.; Lemoine, F. G.; Rowlands, D. D.

    2006-12-01

    The latest gravimetric geoid model for Japan, JGEOID2004, suffers from errors at long wavelengths (around 1000 km) in a range of +/- 30 cm. The model was developed by combining surface gravity data with a global marine altimetric gravity model, using EGM96 as a foundation, and the errors at long wavelength are presumably attributed to EGM96 errors. The Japanese islands and their vicinity are located in a region of plate convergence boundaries, producing substantial gravity and geoid undulations in a wide range of wavelengths. Because of the geometry of the islands and trenches, precise information on gravity in the surrounding oceans should be incorporated in detail, even if the geoid model is required to be accurate only over land. The Kuroshio Current, which runs south of Japan, causes high sea surface variability, making altimetric gravity field determination complicated. To reduce the long-wavelength errors in the geoid model, we are investigating GRACE data for regional gravity field modeling at long wavelengths in the vicinity of Japan. Our approach is based on exclusive use of inter-satellite range-rate data with calibrated accelerometer data and attitude data, for regional or global gravity field recovery. In the first step, we calibrate accelerometer data in terms of scales and biases by fitting dynamically calculated orbits to GPS-determined precise orbits. The calibration parameters of accelerometer data thus obtained are used in the second step to recover a global/regional gravity anomaly field. This approach is applied to GRACE data obtained for the year 2005 and resulting global/regional gravity models are presented and discussed.

  14. Intercomparison of hydrological model structures and calibration approaches in climate scenario impact projections

    Science.gov (United States)

    Vansteenkiste, Thomas; Tavakoli, Mohsen; Ntegeka, Victor; De Smedt, Florimond; Batelaan, Okke; Pereira, Fernando; Willems, Patrick

    2014-11-01

    calibration was tested by comparing the manual calibration approach with automatic calibrations of the VHM model based on different objective functions. The calibration approach did not significantly alter the model results for peak flow, but the low flow projections were again highly influenced. Model choice as well as calibration strategy hence have a critical impact on low flows, more than on peak flows. These results highlight the high uncertainty in low flow modelling, especially in a climate change context.

  15. A preliminary comparison of hydrodynamic approaches for flood inundation modeling of urban areas in Jakarta Ciliwung river basin

    Science.gov (United States)

    Rojali, Aditia; Budiaji, Abdul Somat; Pribadi, Yudhistira Satya; Fatria, Dita; Hadi, Tri Wahyu

    2017-07-01

    This paper addresses numerical modeling approaches for flood inundation in urban areas. A decisive strategy for choosing between a 1D, a 2D, or even a hybrid 1D-2D model is important for optimizing flood inundation analyses. Finding a cost-effective yet robust and accurate model has been our priority and motivation in the absence of available High Performance Computing facilities. The application of 1D, 1D/2D and full 2D modeling approaches to a river flood study in the Jakarta Ciliwung river basin, and a comparison of the approaches benchmarked for the inundation study, are presented. This study demonstrates the successful use of 1D/2D and 2D systems to model the Jakarta Ciliwung river basin in terms of inundation results and computational aspects. The findings of the study provide an interesting comparison between modeling approaches, HEC-RAS 1D, 1D-2D, 2D, and ANUGA, when benchmarked against the Manggarai water level measurements.

  16. A fuzzy approach for modelling radionuclide in lake system

    International Nuclear Information System (INIS)

    Desai, H.K.; Christian, R.A.; Banerjee, J.; Patra, A.K.

    2013-01-01

    Radioactive liquid waste is generated during the operation and maintenance of Pressurised Heavy Water Reactors (PHWRs). Generally, low level liquid waste is diluted and then discharged into the nearby water body through the blowdown water discharge line, as per standard waste management practice. The effluents from nuclear installations are treated adequately and then released in a controlled manner under strict compliance with discharge criteria. An attempt was made to predict the concentration of ³H released from Kakrapar Atomic Power Station at Ratania Regulator, about 2.5 km away from the discharge point, where human exposure is expected. Scarcity of data and the complex geometry of the lake prompted the use of a heuristic approach. Under these conditions, a fuzzy rule-based approach was adopted to develop a model that could predict the ³H concentration at Ratania Regulator. Three hundred data points were generated for developing the fuzzy rules, in which the input parameters were the water flow from the lake and the ³H concentration at the discharge point, and the output was the ³H concentration at Ratania Regulator. These data points were generated by multiple regression analysis of the original data. Using the same methodology, one hundred data points were generated for the validation of the model and compared against the predicted output of the fuzzy rule-based approach. The root mean square error of the model came out to be 1.95, which shows good agreement of the fuzzy model with the natural ecosystem. -- Highlights: • Uncommon approach (fuzzy rule base) to modelling radionuclide dispersion in a lake. • Predicts ³H released from Kakrapar Atomic Power Station at a point of human exposure. • RMSE of the fuzzy model is 1.95, which means it imitates the natural ecosystem well
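
    A schematic fuzzy-rule-based predictor is sketched below: triangular membership functions, a small rule base, and weighted-average defuzzification. The membership breakpoints, rules, and consequent values are hypothetical, not the rule base fitted in the record.

```python
# Schematic fuzzy-rule-based predictor with assumed memberships and rules.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def predict_concentration(flow, conc_at_discharge):
    # Degrees of membership for the two inputs (assumed breakpoints)
    flow_low, flow_high = tri(flow, 0, 10, 50), tri(flow, 10, 50, 100)
    conc_low, conc_high = tri(conc_at_discharge, 0, 2, 10), tri(conc_at_discharge, 2, 10, 20)

    # Rule base: (firing strength, consequent concentration at the regulator)
    rules = [
        (min(flow_low, conc_high), 8.0),   # low dilution, high input -> high
        (min(flow_high, conc_high), 4.0),
        (min(flow_low, conc_low), 2.0),
        (min(flow_high, conc_low), 0.5),   # high dilution, low input -> low
    ]
    w = sum(s for s, _ in rules)
    # Weighted-average (Sugeno-style) defuzzification
    return sum(s * c for s, c in rules) / w if w > 0 else 0.0

print(predict_concentration(flow=30.0, conc_at_discharge=12.0))
```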

  17. Peer Review of Clinical Information Models: A Web 2.0 Crowdsourced Approach.

    Science.gov (United States)

    Leslie, Heather; Ljosland Bakke, Silje

    2017-01-01

    Over the past 8 years the openEHR Clinical Model program has been developing a Web 2.0 approach and tooling to support the development, review and governance of atomic clinical information models, known as archetypes. This paper describes the background and review process, and provides a practical example where cross-standards-organisation collaboration resulted in jointly agreed clinical content which was subsequently represented in different implementation formalisms that were effectively semantically aligned. The discussion and conclusions highlight some of the socio-technical benefits and challenges facing organisations who seek to govern atomic clinical information models in a global and collaborative online community.

  18. A dynamic programming approach for quickly estimating large network-based MEV models

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2017-01-01

    We propose a way to estimate a family of static Multivariate Extreme Value (MEV) models with large choice sets in short computational time. The resulting model is also straightforward and fast to use for prediction. Following Daly and Bierlaire (2006), the correlation structure is defined by a ro...... to converge (4.3 h on an Intel(R) 3.2 GHz machine using a non-parallelized code). We also show that our approach allows to estimate a cross-nested logit model of 111 nests with a real data set of more than 100,000 observations in 14 h....

  19. Evaluation of Different Modeling Approaches to Simulate Contaminant Transport in a Fractured Limestone Aquifer

    Science.gov (United States)

    Mosthaf, K.; Rosenberg, L.; Balbarini, N.; Broholm, M. M.; Bjerg, P. L.; Binning, P. J.

    2014-12-01

    It is important to understand the fate and transport of contaminants in limestone aquifers because they are a major drinking water resource. This is challenging because such aquifers are highly heterogeneous, with micro-porous grains, flint inclusions, and heavy fracturing. Several modeling approaches have been developed to describe contaminant transport in fractured media, such as discrete fracture models (with various fracture geometries), equivalent porous media models (with and without anisotropy), and dual porosity models. However, these modeling concepts are not well tested for limestone geologies. Given the available field data and the model purpose, this paper therefore aims to develop, examine and compare modeling approaches for transport of contaminants in fractured limestone aquifers. The model comparison was conducted for a contaminated site in Denmark, where a plume of a dissolved contaminant (PCE) has migrated through a fractured limestone aquifer. Multilevel monitoring wells have been installed at the site, and the available data include information on spill history, extent of contamination, geology and hydrogeology. To describe the geology and fracture network, data from borehole logs were combined with an analysis of heterogeneities and fractures from a nearby excavation (analog site). Methods for translating the geological information and fracture mapping into each of the model concepts were examined. Each model was compared with available field data, considering both model fit and measures of model suitability. An analysis of model parameter identifiability and sensitivity is presented. Results show that there are considerable differences between modeling approaches, and that it is important to identify the right one for the actual scale and model purpose. A challenge in the use of field data is the determination of relevant hydraulic properties and the interpretation of aqueous and solid phase contaminant concentration sampling data. Traditional water sampling has a bias

  20. Plant Growth Modeling Using L-System Approach and Its Visualization

    Directory of Open Access Journals (Sweden)

    Atris Suyantohadi

    2011-05-01

    Full Text Available The visualization of plant growth modeling using computer simulation has rarely been conducted with the Lindenmayer System (L-System) approach. L-System is generally used as a framework for improving and designing realistic models of plant growth; it is a tool for representing plant growth based on grammar syntax and mathematical formulation. This research aimed to design and visualize plant growth structures generated using L-System. The modeling environment used three-dimensional graphics in the standard OpenGL format. The visualization was developed from several L-System grammars, and the resulting three-dimensional graphics depict plant growth as a virtual plant growth system. Using sample L-System grammar rules describing the characteristics of plant growth, the visualization of plant growth structure was produced and demonstrated.
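
    The core L-System mechanism is compact enough to show directly: parallel rewriting of a string by production rules, here with a classic bracketed plant grammar. The 3-D OpenGL turtle interpretation used in the original work is only indicated in comments.

```python
# Compact L-System generator: parallel rewriting of a string by rules.
def lsystem(axiom, rules, iterations):
    """Apply every production rule in parallel, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic bracketed L-System for a branching plant (Lindenmayer):
#   F: grow forward, +/-: turn the drawing turtle, [ ]: push/pop state (branch)
rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
print(lsystem("X", rules, iterations=2))
```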

  1. Data Analysis A Model Comparison Approach, Second Edition

    CERN Document Server

    Judd, Charles M; Ryan, Carey S

    2008-01-01

    This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques.

  2. Hawaii Solar Integration Study: Solar Modeling Developments and Study Results; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Orwig, K.; Corbus, D.; Piwko, R.; Schuerger, M.; Matsuura, M.; Roose, L.

    2012-12-01

    The Hawaii Solar Integration Study (HSIS) is a follow-up to the Oahu Wind Integration and Transmission Study completed in 2010. HSIS focuses on the impacts of higher penetrations of solar energy on the electrical grid and on other generation. HSIS goes beyond the island of Oahu and investigates Maui as well. The study examines reserve strategies, impacts on thermal unit commitment and dispatch, utilization of energy storage, renewable energy curtailment, and other aspects of grid reliability and operation. For the study, high-frequency (2-second) solar power profiles were generated using a new combined Numerical Weather Prediction model/stochastic-kinematic cloud model approach, which represents the 'sharp-edge' effects of clouds passing over solar facilities. As part of the validation process, the solar data was evaluated using a variety of analysis techniques including wavelets, power spectral densities, ramp distributions, extreme values, and cross correlations. This paper provides an overview of the study objectives, results of the solar profile validation, and study results.

  3. Alternative policy impacts on US GHG emissions and energy security: A hybrid modeling approach

    International Nuclear Information System (INIS)

    Sarica, Kemal; Tyner, Wallace E.

    2013-01-01

    This study addresses the possible impacts of energy and climate policies, namely the corporate average fuel economy (CAFE) standard, renewable fuel standard (RFS) and clean energy standard (CES), and an economy-wide equivalent carbon tax on GHG emissions in the US to the year 2045. Bottom–up and top–down modeling approaches find widespread use in energy economic modeling and policy analysis, and differ mainly with respect to the emphasis placed on technology of the energy system and/or the comprehensiveness of endogenous market adjustments. For this study, we use a hybrid energy modeling approach, MARKAL–Macro, that combines the characteristics of these two divergent approaches, in order to investigate and quantify the cost of climate policies for the US and an equivalent carbon tax. The approach incorporates macro-economic feedbacks through a single sector neoclassical growth model while maintaining the sectoral and technological detail of the bottom–up optimization framework with endogenous aggregated energy demand. Our analysis is done for two important objectives of US energy policy: GHG reduction and increased energy security. Our results suggest that the emission tax achieves results quite similar to the CES policy but very different results in the transportation sector. The CAFE standard and RFS are more expensive than a carbon tax for emission reductions. However, the CAFE standard and RFS are much more efficient at achieving crude oil import reductions. The GDP losses are 2.0% and 1.2% relative to the base case for the policy case and carbon tax. That difference may be perceived as being small given the increased energy security gained from the CAFE and RFS policy measures and the uncertainty inherent in this type of analysis. - Highlights: • Evaluates US impacts of three energy/climate policies and a carbon tax (CT) • Analysis done with bottom–up MARKAL model coupled with a macro model • Electricity clean energy standard very close to

  4. Optimized Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.

    Science.gov (United States)

    Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe

    2017-10-01

    Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. In this paper, we propose a novel model, the stacked autoencoder Levenberg-Marquardt model, a deep neural network architecture aimed at improving forecasting accuracy. The proposed model is designed using the Taguchi method to develop an optimized structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an optimized structure of a traffic flow forecasting model with a deep learning approach has been presented. The evaluation results demonstrate that the proposed model with an optimized structure has superior performance in traffic flow forecasting.
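
    A schematic of greedy layer-wise pretraining for a stacked autoencoder forecaster is sketched below in tf.keras. Layer sizes, data, and training settings are hypothetical, and standard gradient optimizers stand in for the paper's Taguchi-optimized structure and Levenberg-Marquardt fine-tuning.

```python
# Greedy layer-wise pretraining of a stacked autoencoder for 1-step
# traffic-flow forecasting; all sizes and data are hypothetical.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(4)
flow = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)
window = 12
X = np.stack([flow[i:i + window] for i in range(len(flow) - window - 1)])
y = flow[window:-1]

codes, encoders = X, []
for units in (10, 8):                           # two stacked autoencoder layers
    inp = tf.keras.Input(shape=(codes.shape[1],))
    enc = tf.keras.layers.Dense(units, activation="sigmoid")(inp)
    dec = tf.keras.layers.Dense(codes.shape[1])(enc)
    ae = tf.keras.Model(inp, dec)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(codes, codes, epochs=5, verbose=0)   # unsupervised pretraining
    encoder = tf.keras.Model(inp, enc)
    encoders.append(encoder)
    codes = encoder.predict(codes, verbose=0)   # feed codes to the next layer

# Supervised fine-tuning: stack the encoders and add a regression head
model = tf.keras.Sequential(encoders + [tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print("MSE:", model.evaluate(X, y, verbose=0))
```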

  5. Validation of an employee satisfaction model: A structural equation model approach

    OpenAIRE

    Ophillia Ledimo; Nico Martins

    2015-01-01

    The purpose of this study was to validate an employee satisfaction model and to determine the relationships between the different dimensions of the concept, using the structural equation modelling (SEM) approach. A cross-sectional quantitative survey design was used to collect data from a random sample (n = 759) of permanent employees of a parastatal organisation. Data were collected using the Employee Satisfaction Survey (ESS) to measure employee satisfaction dimensions. Following the steps of ...

  6. Addressing dependability by applying an approach for model-based risk assessment

    International Nuclear Information System (INIS)

    Gran, Bjorn Axel; Fredriksen, Rune; Thunem, Atoosa P.-J.

    2007-01-01

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied for addressing different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects, such as system functions, component behaviours and intercommunications, must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast to this, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and provided through a series of seven trials a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of trust case development

  7. Addressing dependability by applying an approach for model-based risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjorn Axel [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: bjorn.axel.gran@hrp.no; Fredriksen, Rune [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: rune.fredriksen@hrp.no; Thunem, Atoosa P.-J. [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: atoosa.p-j.thunem@hrp.no

    2007-11-15

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied for addressing different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects, such as system functions, component behaviours and intercommunications, must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast to this, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and provided through a series of seven trials a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of trust case development.

  8. Data and Dynamics Driven Approaches for Modelling and Forecasting the Red Sea Chlorophyll

    KAUST Repository

    Dreano, Denis

    2017-01-01

    concentration and have practical applications for fisheries operations and harmful algal bloom monitoring. Modelling approaches can be divided between physics-driven (dynamical) approaches and data-driven (statistical) approaches. Dynamical models are based

  9. A model-based approach to predict muscle synergies using optimization: application to feedback control

    Directory of Open Access Journals (Sweden)

    Reza eSharif Razavian

    2015-10-01

    Full Text Available This paper presents a new model-based method to define muscle synergies. Unlike the conventional factorization approach, which extracts synergies from electromyographic data, the proposed method employs a biomechanical model and formally defines the synergies as the solution of an optimal control problem. As a result, the number of required synergies is directly related to the dimensions of the operational space. The estimated synergies are posture-dependent, which correlate well with the results of standard factorization methods. Two examples are used to showcase this method: a two-dimensional forearm model, and a three-dimensional driver arm model. It has been shown here that the synergies need to be task-specific (i.e. they are defined for the specific operational spaces: the elbow angle and the steering wheel angle in the two systems). This functional definition of synergies results in a low-dimensional control space, in which every force in the operational space is accurately created by a unique combination of synergies. As such, there is no need for extra criteria (e.g., minimizing effort) in the process of motion control. This approach is motivated by the need for fast and bio-plausible feedback control of musculoskeletal systems, and can have important implications in engineering, motor control, and biomechanics.

  10. A model-based approach to predict muscle synergies using optimization: application to feedback control.

    Science.gov (United States)

    Sharif Razavian, Reza; Mehrabi, Naser; McPhee, John

    2015-01-01

    This paper presents a new model-based method to define muscle synergies. Unlike the conventional factorization approach, which extracts synergies from electromyographic data, the proposed method employs a biomechanical model and formally defines the synergies as the solution of an optimal control problem. As a result, the number of required synergies is directly related to the dimensions of the operational space. The estimated synergies are posture-dependent, which correlate well with the results of standard factorization methods. Two examples are used to showcase this method: a two-dimensional forearm model, and a three-dimensional driver arm model. It has been shown here that the synergies need to be task-specific (i.e., they are defined for the specific operational spaces: the elbow angle and the steering wheel angle in the two systems). This functional definition of synergies results in a low-dimensional control space, in which every force in the operational space is accurately created by a unique combination of synergies. As such, there is no need for extra criteria (e.g., minimizing effort) in the process of motion control. This approach is motivated by the need for fast and bio-plausible feedback control of musculoskeletal systems, and can have important implications in engineering, motor control, and biomechanics.

  11. A Bayesian Approach for Structural Learning with Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cen Li

    2002-01-01

    Full Text Available Hidden Markov Models (HMMs) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on learning the parameter values of the model to fit the given data sequences. However, when one considers other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism is available for automatically deriving the model structure from the data. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.
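
    The K-means initialization idea can be sketched simply: cluster the observations to seed per-state emission parameters, and use consecutive cluster labels to seed the transition matrix. The Bayesian structure-selection step is not shown; data and sizes are illustrative.

```python
# K-means seeding of HMM emission and transition parameters (illustrative).
import numpy as np

def kmeans(obs, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = obs[rng.choice(len(obs), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.abs(obs[:, None] - centers[None, :]), axis=1)
        centers = np.array([obs[labels == j].mean() for j in range(k)])
    return centers, labels

rng = np.random.default_rng(5)
obs = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])

k = 2  # one Gaussian emission per hidden state
means, labels = kmeans(obs, k)
variances = np.array([obs[labels == j].var() for j in range(k)])

# Transition-matrix seed: row-stochastic counts of consecutive cluster labels
trans = np.ones((k, k))                     # add-one smoothing
for a, b in zip(labels[:-1], labels[1:]):
    trans[a, b] += 1
trans /= trans.sum(axis=1, keepdims=True)
print("emission means:", means.round(2), "\ntransitions:\n", trans.round(2))
```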

  12. Microscopic calculation of level densities: the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Alhassid, Yoram

    2012-01-01

    The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in recent analysis of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension approximately 10²⁹. We find good agreement with experimental results for both state densities and ⟨J²⟩ (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities (where levels are counted without including their spin degeneracies). A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of ¹⁶²Dy and found it to agree well with experiments

  13. Synthesis of industrial applications of local approach to fracture models

    International Nuclear Information System (INIS)

    Eripret, C.

    1993-03-01

    This report gathers different applications of local approach to fracture models in various industrial configurations, such as nuclear pressure vessel steels, cast duplex stainless steels, and primary circuit welds such as bimetallic welds. Because these models are developed on the basis of microstructural observations, damage mechanism analyses, and the fracture process, the local approach to fracture proves able to solve problems where classical fracture mechanics concepts fail. The local approach therefore appears to be a powerful tool, which complements the standard fracture criteria used in the nuclear industry by exhibiting where and why those classical concepts become invalid. (author). 1 tab., 18 figs., 25 refs

  14. Optimizing nitrogen fertilizer use: Current approaches and simulation models

    International Nuclear Information System (INIS)

    Baethgen, W.E.

    2000-01-01

    Nitrogen (N) is the most common limiting nutrient in agricultural systems throughout the world. Crops need sufficient available N to achieve optimum yields and adequate grain-protein content. Consequently, sub-optimal rates of N fertilizers typically result in lower economic benefits for farmers. On the other hand, excessive N fertilizer use may result in environmental problems such as nitrate contamination of groundwater and emission of N₂O and NO. In spite of the economic and environmental importance of good N fertilizer management, the development of optimum fertilizer recommendations is still a major challenge in most agricultural systems. This article reviews the approaches most commonly used for making N recommendations: expected yield level, soil testing and plant analysis (including quick tests). The paper introduces the application of simulation models that complement traditional approaches, and includes some examples of current applications in Africa and South America. (author)

  15. Anomalous superconductivity in the tJ model; moment approach

    DEFF Research Database (Denmark)

    Sørensen, Mads Peter; Rodriguez-Nunez, J.J.

    1997-01-01

    By extending the moment approach of Nolting (Z. Phys. 225 (1972) 25) to the superconducting phase, we have constructed the one-particle spectral functions (diagonal and off-diagonal) for the tJ model in any dimension. We propose that both the diagonal and the off-diagonal spectral functions...... Hartree shift, which in the end enlarges the bandwidth of the free carriers, allowing us to take relatively high values of J/t and allowing superconductivity to live in the T_c–ρ phase diagram, in agreement with numerical calculations in a cluster. We have calculated the static spin susceptibility, χ(T), and the specific heat, C_v(T), within the moment approach. We find that all the relevant physical quantities show the signature of superconductivity at T_c in the form of kinks (anomalous behavior) or jumps, for low density, in agreement with recently published literature, showing a generic

  16. Application of declarative modeling approaches for external events

    International Nuclear Information System (INIS)

    Anoba, R.C.

    2005-01-01

    Probabilistic Safety Assessments (PSAs) are increasingly being used as a tool for supporting the acceptability of design, procurement, construction, operation, and maintenance activities at Nuclear Power Plants. Since the issuance of Generic Letter 88-20 and subsequent IPE/IPEEE assessments, the NRC has issued several Regulatory Guides, such as RG 1.174, to describe the use of PSA in risk-informed regulation activities. Most PSAs have the capability to address internal events, including internal floods. As more demands are placed on the PSA to support risk-informed applications, there has been a growing need to integrate other external events (seismic, fire, etc.) into the logic models. Most external events involve spatial dependencies and usually impact the logic models at the component level. Therefore, manual insertion of external event impacts into a complex integrated fault tree model may be too cumbersome for routine uses of the PSA. Within the past year, a declarative modeling approach has been developed to automate the injection of external events into the PSA. The intent of this paper is to introduce the concept of declarative modeling in the context of external event applications. A declarative modeling approach involves the definition of rules for the injection of external event impacts into the fault tree logic. A software tool such as EPRI's XInit program can be used to interpret the pre-defined rules and automatically inject external event elements into the PSA. The injection process can easily be repeated, as required, to address plant changes, sensitivity issues, changes in boundary conditions, etc. External event elements may include fire initiating events, seismic initiating events, seismic fragilities, fire-induced hot short events, special human failure events, etc. This approach has been applied at a number of US nuclear power plants, as well as a nuclear power plant in Romania. (authors)

  17. A classical Master equation approach to modeling an artificial protein motor

    International Nuclear Information System (INIS)

    Kuwada, Nathan J.; Blab, Gerhard A.; Linke, Heiner

    2010-01-01

    Inspired by biomolecular motors, as well as by theoretical concepts for chemically driven nanomotors, there is significant interest in constructing artificial molecular motors. One driving force is the opportunity to create well-controlled model systems that are simple enough to be modeled in detail. A remaining challenge is the fact that such models need to take into account processes on many different time scales. Here we describe use of a classical Master equation approach, integrated with input from Langevin and molecular dynamics modeling, to stochastically model an existing artificial molecular motor concept, the Tumbleweed, across many time scales. This enables us to study how interdependencies between motor processes, such as center-of-mass diffusion and track binding/unbinding, affect motor performance. Results from our model help guide the experimental realization of the proposed motor, and potentially lead to insights that apply to a wider class of molecular motors.
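
    A minimal classical master equation sketch follows: a rate matrix K over discrete motor states, propagated as dP/dt = K·P. The three-state cycle and rate values are hypothetical, not the Tumbleweed's actual binding and diffusion rates.

```python
# Classical master equation over discrete motor states (toy three-state cycle).
import numpy as np
from scipy.linalg import expm

# Off-diagonal K[i, j] = rate of j -> i transitions (1/s, assumed values)
K = np.array([[0.0, 2.0, 0.5],
              [1.0, 0.0, 3.0],
              [0.5, 1.0, 0.0]])
np.fill_diagonal(K, -K.sum(axis=0))   # columns sum to zero: probability conserved

P0 = np.array([1.0, 0.0, 0.0])        # start in state 0
for t in (0.1, 1.0, 10.0):
    P = expm(K * t) @ P0              # formal solution P(t) = exp(Kt) P0
    print(f"t={t:5.1f} s  P={P.round(3)}")
```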

  18. BioModels: expanding horizons to include more modelling approaches and formats.

    Science.gov (United States)

    Glont, Mihai; Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Malik-Sheriff, Rahuman S; Chelliah, Vijayalakshmi; Le Novère, Nicolas; Hermjakob, Henning

    2018-01-04

    BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Modeling Dyadic Processes Using Hidden Markov Models: A Time Series Approach to Mother-Infant Interactions during Infant Immunization

    Science.gov (United States)

    Stifter, Cynthia A.; Rovine, Michael

    2015-01-01

    The present longitudinal study examined mother-infant interaction during the administration of immunizations at 2 and 6 months of age using hidden Markov modelling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…
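
    To make the latent-state machinery concrete, here is a sketch of the forward algorithm for a discrete-observation hidden Markov model; the two states and all probabilities are invented, not estimates from the study.

    ```python
    import numpy as np

    pi = np.array([0.6, 0.4])                 # initial state distribution
    A = np.array([[0.8, 0.2],                 # A[i, j]: P(state j at t+1 | state i at t)
                  [0.3, 0.7]])
    B = np.array([[0.9, 0.1],                 # B[i, k]: P(observation k | state i)
                  [0.2, 0.8]])

    obs = [0, 0, 1, 1, 0]                     # coded mother/infant behaviour sequence

    # Forward pass: alpha[i] = P(observations so far, current state = i)
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]

    print("sequence likelihood:", alpha.sum())
    ```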

  20. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely more are applied in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined the impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions, on model outcomes and thus, ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  1. Ordered LOGIT Model approach for the determination of financial distress.

    Science.gov (United States)

    Kinay, B

    2010-01-01

    Nowadays, as a result of global competition, numerous companies face financial distress. Predicting such problems and taking proactive measures against them is quite important; the prediction of crisis and financial distress is thus essential for revealing the financial condition of companies. In this study, financial ratios of 156 industrial firms quoted on the Istanbul Stock Exchange are used, and probabilities of financial distress are predicted by means of an ordered logit regression model. The dependent variable is constructed by scaling the level of risk using Altman's Z score. Thus, a model that can serve as an early warning system and predict financial distress is proposed.
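
    A hedged sketch of the core technique: an ordered logit fit on simulated financial ratios, with a three-level risk coding standing in for the Altman Z-score scaling. It assumes statsmodels >= 0.12 for OrderedModel; none of the data reflect the study's firms.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(0)
    n = 156
    X = pd.DataFrame({
        "liquidity": rng.normal(1.5, 0.5, n),   # e.g. a current-ratio-like measure
        "leverage": rng.normal(0.5, 0.2, n),    # e.g. debt / assets
    })
    # Latent distress rises with leverage, falls with liquidity (illustrative).
    latent = -1.2 * X["liquidity"] + 2.0 * X["leverage"] + rng.logistic(size=n)
    risk = pd.cut(latent, bins=[-np.inf, -1.0, 0.5, np.inf],
                  labels=["safe", "grey", "distressed"])

    model = OrderedModel(risk, X, distr="logit")
    result = model.fit(method="bfgs", disp=False)
    print(result.summary())
    ```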

  2. The Subject-Modeling Approach to Developing the Methodology Competence of the Future Teachers-Psychologists

    Directory of Open Access Journals (Sweden)

    S. A. Gilmanov

    2012-01-01

    Full Text Available Any practical activity of a modern specialist requires methodology knowledge – the ability to set and transform goals, to operate with summarized approaches and plans, and to create and reconstruct them, etc. The paper deals with the role of methodology competence in professional thinking; the subject-modeling approach to developing professional thinking and methodology competence is considered with regard to future teachers-psychologists. The substantiation of the above approach is given concerning the content and structure of the competence in question, as well as the students' personal characteristics. The psychological mechanism of gaining experience and professional thinking ability is described as the interaction of two models – the cognitive and the dynamic-emotional one – both reflecting professional activity. The results of experimental work, based on such methods as analysis, theoretical modeling, observation and polling, demonstrated the main developing factors of methodology competence: the general culture level, interest in the future profession, and the presentation of professional activity in the educational process. The research results imply the conclusion that purposeful pedagogic activity based on methodology competence is necessary for developing the future specialists' professional thinking.

  3. Computationally efficient and flexible modular modelling approach for river and urban drainage systems based on surrogate conceptual models

    Science.gov (United States)

    Wolfs, Vincent; Willems, Patrick

    2015-04-01

    Water managers rely increasingly on mathematical simulation models that represent individual parts of the water system, such as the river, sewer system or waste water treatment plant. The current evolution towards integral water management requires the integration of these distinct components, leading to an increased model scale and scope. Besides this growing model complexity, certain applications have gained interest and importance, such as uncertainty and sensitivity analyses, auto-calibration of models and real time control. All these applications share the need for models with a very limited calculation time, either for performing a large number of simulations, or for a long term simulation followed by a statistical post-processing of the results. The use of the commonly applied detailed models that solve (part of) the de Saint-Venant equations is infeasible for these applications and for such integrated modelling for several reasons, chief among them overly long simulation times and the inability to couple submodels made in different software environments. Instead, practitioners must use simplified models for these purposes. These models are characterized by empirical relationships and sacrifice model detail and accuracy for increased computational efficiency. The presented research discusses the development of a flexible integral modelling platform that complies with the following three key requirements: (1) include a modelling approach for water quantity predictions for rivers, floodplains, sewer systems and rainfall runoff routing that requires a minimal calculation time; (2) a fast and semi-automatic model configuration, thereby making maximum use of data from existing detailed models and measurements; (3) a calculation scheme based on open source code to allow for future extensions or the coupling with other models. First, a novel and flexible modular modelling approach based on the storage cell concept was developed. This approach divides each

  4. Mathematical Modeling in Mathematics Education: Basic Concepts and Approaches

    Science.gov (United States)

    Erbas, Ayhan Kürsat; Kertil, Mahmut; Çetinkaya, Bülent; Çakiroglu, Erdinç; Alacaci, Cengiz; Bas, Sinem

    2014-01-01

    Mathematical modeling and its role in mathematics education have been receiving increasing attention in Turkey, as in many other countries. The growing body of literature on this topic reveals a variety of approaches to mathematical modeling and related concepts, along with differing perspectives on the use of mathematical modeling in teaching and…

  5. Integrating UML, the Q-model and a Multi-Agent Approach in Process Specifications and Behavioural Models of Organisations

    Directory of Open Access Journals (Sweden)

    Raul Savimaa

    2005-08-01

    Full Text Available Efficient estimation and representation of an organisation's behaviour requires specification of business processes and modelling of actors' behaviour. Therefore the existing classical approaches that concentrate only on planned processes are not suitable and an approach that integrates process specifications with behavioural models of actors should be used instead. The present research indicates that a suitable approach should be based on interactive computing. This paper examines the integration of UML diagrams for process specifications, the Q-model specifications for modelling timing criteria of existing and planned processes and a multi-agent approach for simulating non-deterministic behaviour of human actors in an organisation. The corresponding original methodology is introduced and some of its applications as case studies are reviewed.

  6. Combining engineering and data-driven approaches: Development of a generic fire risk model facilitating calibration

    DEFF Research Database (Denmark)

    De Sanctis, G.; Fischer, K.; Kohler, J.

    2014-01-01

    Fire risk models support decision making for engineering problems under the consistent consideration of the associated uncertainties. Empirical approaches can be used for cost-benefit studies when enough data about the decision problem are available. But often the empirical approaches...... a generic risk model that is calibrated to observed fire loss data. Generic risk models assess the risk of buildings based on specific risk indicators and support risk assessment at a portfolio level. After an introduction to the principles of generic risk assessment, the focus of the present paper...... are not detailed enough. Engineering risk models, on the other hand, may be detailed but typically involve assumptions that may result in a biased risk assessment and make a cost-benefit study problematic. In two related papers it is shown how engineering and data-driven modeling can be combined by developing...

  7. A model-guided symbolic execution approach for network protocol implementations and vulnerability detection.

    Science.gov (United States)

    Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing

    2017-01-01

    Formal techniques have been devoted to analyzing whether network protocol specifications violate security policies; however, these methods cannot detect vulnerabilities in the implementations of the network protocols themselves. Symbolic execution can be used to analyze the paths of the network protocol implementations, but for stateful network protocols, it is difficult to reach the deep states of the protocol. This paper proposes a novel model-guided approach to detect vulnerabilities in network protocol implementations. Our method first abstracts a finite state machine (FSM) model, then utilizes the model to guide the symbolic execution. This approach achieves high coverage of both the code and the protocol states. The proposed method is implemented and applied to test numerous real-world network protocol implementations. The experimental results indicate that the proposed method is more effective than traditional fuzzing methods such as SPIKE at detecting vulnerabilities in the deep states of network protocol implementations.

  8. Advanced Pre-clinical Research Approaches and Models to Studying Pediatric Anesthetic Neurotoxicity

    Directory of Open Access Journals (Sweden)

    Cheng eWang

    2012-10-01

    Full Text Available Advances in pediatric and obstetric surgery have resulted in an increase in the duration and complexity of anesthetic procedures. A great deal of concern has recently arisen regarding the safety of anesthesia in infants and children. Because of obvious limitations, it is not possible to thoroughly explore the effects of anesthetic agents on neurons in vivo in human infants or children. However, the availability of some advanced pre-clinical research approaches and models, such as imaging technology both in vitro and in vivo, and stem cell and nonhuman primate experimental models, has provided potentially invaluable tools for examining the developmental effects of anesthetic agents. This review discusses the potential application of some sophisticated research approaches, e.g., calcium imaging, in stem cell-derived in vitro models, especially human embryonic neural stem cells, along with their capacity for proliferation and their potential for differentiation, to dissect relevant mechanisms underlying the etiology of the neurotoxicity associated with developmental exposures to anesthetic agents. This review also attempts to discuss several advantages of using developing rhesus monkey models (in vivo), when combined with dynamic molecular imaging approaches, in addressing critical issues related to the topic of pediatric sedation/anesthesia. These include the relationships between anesthetic-induced neurotoxicity, dose response, time-course and developmental stage at the time of exposure (in vivo studies), serving to provide the most expeditious platform toward decreasing the uncertainty in extrapolating pre-clinical data to the human condition.

  9. Comparison of modeling approaches to prioritize chemicals based on estimates of exposure and exposure potential.

    Science.gov (United States)

    Mitchell, Jade; Arnot, Jon A; Jolliet, Olivier; Georgopoulos, Panos G; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A; Vallero, Daniel A

    2013-08-01

    While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use-scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances such as high throughput data gathering, high performance computational capabilities, and predictive chemical inherency methodology make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA's need to develop novel approaches and tools for rapidly prioritizing chemicals, a "Challenge" was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA's effort to develop an approach comparable to other international efforts. A common set of chemicals was prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e. through air, water or sediment), appears to be an important determinant of the level of agreement between modeling approaches. Copyright © 2013 Elsevier B.V. All rights reserved.
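
    Since the comparative analysis rests on comparing chemical rankings across models, a small sketch of that step follows; the model names and scores are invented placeholders rather than Challenge results.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    chemicals = ["chem_A", "chem_B", "chem_C", "chem_D", "chem_E"]
    # Exposure-potential scores from two hypothetical models (higher = more exposure).
    model_far_field = np.array([0.9, 0.2, 0.5, 0.7, 0.1])
    model_near_field = np.array([0.3, 0.8, 0.6, 0.9, 0.2])

    rho, p_value = spearmanr(model_far_field, model_near_field)
    print(f"rank agreement (Spearman rho): {rho:.2f}, p = {p_value:.2f}")
    ```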

  11. Nested 1D-2D approach for urban surface flood modeling

    Science.gov (United States)

    Murla, Damian; Willems, Patrick

    2015-04-01

    interactions within the 1D sewer network. Other areas that recorded flooding outside the main streets have also been included with the second mesh resolution for an accurate determination of flood maps (12.5m2 - 50m2). Permeable areas have been identified and used as infiltration zones using the Horton infiltration model. A mesh sensitivity analysis has been performed for the low flood risk areas for a proper model optimization. As an outcome of that analysis, the third mesh resolution has been chosen (75m2 - 300m2). Performance tests have been applied for several synthetic design storms as well as historical storm events, displaying satisfactory results upon comparing the flood mapping outcomes produced by the different approaches. Accounting for the infiltration in the green city spaces reduces the flood extents in the range 39% - 68%, while the average reduction in flood volume equals 86%. Acknowledgement: Funding for this research was provided by the Interreg IVB NWE programme (project RainGain) and the Belgian Science Policy Office (project PLURISK). The high resolution topographical information data were obtained from the geographical information service AGIV; the original full hydrodynamic sewer network model from the service company Farys, and the InfoWorks licence from Innovyze.

  12. A Multi-State Physics Modeling approach for the reliability assessment of Nuclear Power Plants piping systems

    International Nuclear Information System (INIS)

    Di Maio, Francesco; Colli, Davide; Zio, Enrico; Tao, Liu; Tong, Jiejuan

    2015-01-01

    Highlights: • We model piping systems degradation of Nuclear Power Plants under uncertainty. • We use Multi-State Physics Modeling (MSPM) to describe a continuous degradation process. • We propose a Monte Carlo (MC) method for calculating time-dependent transition rates. • We apply MSPM to a piping system undergoing thermal fatigue. - Abstract: A Multi-State Physics Modeling (MSPM) approach is here proposed for degradation modeling and failure probability quantification of Nuclear Power Plants (NPPs) piping systems. This approach integrates multi-state modeling, which describes the degradation process by transitions among discrete states (e.g., no damage, micro-crack, flaw, rupture, etc.), with physics modeling, which describes the continuous degradation process within the states by physical equations. We propose a Monte Carlo (MC) simulation method for the evaluation of the time-dependent transition rates between the states of the MSPM. Uncertainty in the parameters and in the external factors influencing the degradation process is accounted for. The proposed modeling approach is applied to a benchmark problem of a piping system of a Pressurized Water Reactor (PWR) undergoing thermal fatigue. The results are compared with those obtained by a continuous-time homogeneous Markov Chain Model.
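
    To make the MC idea concrete, here is a rough sketch that samples degradation histories through the discrete states under a time-dependent hazard and estimates a rupture probability. The state list matches the abstract, but the rate law and all numbers are assumptions, not the paper's thermal-fatigue physics.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    STATES = ["no_damage", "micro_crack", "flaw", "rupture"]

    def hazard(state_idx, t):
        # Assumed aging law: transition rate grows linearly with time in service.
        base = [1e-3, 2e-3, 5e-3][state_idx]
        return base * (1.0 + 0.1 * t)

    def sample_history(t_max=100.0, dt=0.1):
        """One piping history; returns (state index, transition time) pairs."""
        t, s, events = 0.0, 0, []
        while t < t_max and s < len(STATES) - 1:
            if rng.random() < hazard(s, t) * dt:   # thinning-style discrete check
                s += 1
                events.append((s, t))
            t += dt
        return events

    # Estimate P(rupture by t_max) over many histories.
    n = 2_000
    ruptures = sum(any(s == 3 for s, _ in sample_history()) for _ in range(n))
    print("estimated rupture probability:", ruptures / n)
    ```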

  13. A rule-based approach to model checking of UML state machines

    Science.gov (United States)

    Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz

    2016-12-01

    In the paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases the assurance that the implemented system meets the user-defined requirements.

  14. A comprehensive dynamic modeling approach for giant magnetostrictive material actuators

    International Nuclear Information System (INIS)

    Gu, Guo-Ying; Zhu, Li-Min; Li, Zhi; Su, Chun-Yi

    2013-01-01

    In this paper, a comprehensive modeling approach for a giant magnetostrictive material actuator (GMMA) is proposed based on the description of nonlinear electromagnetic behavior, the magnetostrictive effect and the frequency response of the mechanical dynamics. It maps the relationships between current and magnetic flux at the electromagnetic part to force and displacement at the mechanical part in a lumped parameter form. With this modeling approach, the nonlinear hysteresis effect of the GMMA, appearing only in the electrical part, is separated from the linear dynamic plant in the mechanical part. Thus, a two-module dynamic model is developed to completely characterize the hysteresis nonlinearity and the dynamic behaviors of the GMMA. The first module is a static hysteresis model to describe the hysteresis nonlinearity, and the cascaded second module is a linear dynamic plant to represent the dynamic behavior. To validate the proposed dynamic model, an experimental platform is established. Then, the linear dynamic part and the nonlinear hysteresis part of the proposed model are identified in sequence. For the linear part, an approach based on axiomatic design theory is adopted. For the nonlinear part, a Prandtl–Ishlinskii model is introduced to describe the hysteresis nonlinearity and a constrained quadratic optimization method is utilized to identify its coefficients. Finally, experimental tests are conducted to demonstrate the effectiveness of the proposed dynamic model and the corresponding identification method. (paper)
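
    The Prandtl–Ishlinskii model mentioned above is a weighted sum of play (backlash) operators; a minimal sketch follows, with thresholds and weights chosen for illustration rather than identified from actuator data.

    ```python
    import numpy as np

    def play_operator(u, r, y0=0.0):
        """Backlash (play) operator with half-width r applied to input sequence u."""
        y = np.empty_like(u)
        prev = y0
        for k, uk in enumerate(u):
            prev = max(uk - r, min(uk + r, prev))
            y[k] = prev
        return y

    def prandtl_ishlinskii(u, thresholds, weights):
        # Weighted superposition of play operators with different thresholds.
        return sum(w * play_operator(u, r) for w, r in zip(weights, thresholds))

    t = np.linspace(0.0, 2.0 * np.pi, 500)
    u = np.sin(t)                               # driving current, assumed waveform
    thresholds = np.array([0.0, 0.1, 0.2, 0.4])
    weights = np.array([0.5, 0.3, 0.15, 0.05])  # identified from data in practice

    flux = prandtl_ishlinskii(u, thresholds, weights)
    print(flux[:5])
    ```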

  15. Financial analysis and forecasting of the results of small businesses performance based on regression model

    Directory of Open Access Journals (Sweden)

    Svetlana O. Musienko

    2017-03-01

    Full Text Available Objective: to develop an economic-mathematical model of the dependence of revenue on other balance sheet items, taking into account the sectoral affiliation of companies. Methods: using comparative analysis, the article studies the existing approaches to the construction of company management models. Applying regression analysis and the least squares method, which is widely used for the financial management of enterprises in Russia and abroad, the author builds a model of the dependence of revenue on other balance sheet items, taking into account the sectoral affiliation of the companies, which can be used in the financial analysis and prediction of small enterprises' performance. Results: the article states the need to identify factors affecting financial management efficiency. The author analyzed scientific research and revealed the lack of comprehensive studies on the methodology for assessing small enterprises' management, while the methods used for large companies are not always suitable for the task. The systematized approaches of various authors to the formation of regression models describe the influence of certain factors on company activity. It is revealed that the resulting indicators in the studies were revenue, profit, or the company's relative profitability. The main drawback of most models is the mathematical, not economic, approach to the definition of the dependent and independent variables. Based on the analysis, it was determined that the most correct is the model of dependence between revenue and total assets of the company using the decimal logarithm. The model was built using data on the activities of the 507 small businesses operating in three spheres of economic activity. Using the presented model, it was proved that there is a direct dependence between the sales proceeds and the main items of the asset balance, as well as differences in the degree of this effect depending on the economic activity of small
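
    The selected model form, revenue against total assets under a decimal logarithm, amounts to a log-log regression. A small sketch on simulated firm data follows; the coefficients and sample are invented, and only the sample size of 507 echoes the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    assets = rng.lognormal(mean=12.0, sigma=1.0, size=507)       # total assets
    revenue = 10 ** (0.9 * np.log10(assets) + 0.5 + rng.normal(0.0, 0.1, 507))

    # Fit log10(revenue) = slope * log10(assets) + intercept by least squares.
    slope, intercept = np.polyfit(np.log10(assets), np.log10(revenue), deg=1)
    print(f"log10(revenue) ~ {slope:.2f} * log10(assets) + {intercept:.2f}")
    ```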

  16. The influence of mathematics learning using SAVI approach on junior high school students’ mathematical modelling ability

    Science.gov (United States)

    Khusna, H.; Heryaningsih, N. Y.

    2018-01-01

    The aim of this research was to examine the mathematical modeling ability of students who learn mathematics using the SAVI approach. This research was a quasi-experimental study with a non-equivalent control group design, using a purposive sampling technique. The population of this research was state junior high school students in Lembang, while the sample consisted of two classes at 8th grade. The instrument used in this research was a mathematical modeling ability test. Data analysis was conducted using SPSS 20 for Windows. The result showed that the mathematical modeling ability of students who learned mathematics using the SAVI approach was better than that of students who learned mathematics using conventional learning.

  17. A probabilistic multi objective CLSC model with Genetic algorithm-ε_Constraint approach

    Directory of Open Access Journals (Sweden)

    Alireza TaheriMoghadam

    2014-05-01

    Full Text Available In this paper an uncertain multi-objective closed-loop supply chain is developed. The first objective function is maximizing the total profit. The second objective function is minimizing the use of raw materials; in other words, the second objective function is maximizing the amount of remanufacturing and recycling. A genetic algorithm is used for optimization, and the epsilon-constraint method is used to find the Pareto optimal front. Finally, a numerical example is solved with the proposed approach and the performance of the model is evaluated for different problem sizes. The results show that this approach is effective and useful for managerial decisions.
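
    A toy sketch of the named combination, epsilon-constraint plus a genetic algorithm: maximize a stand-in profit objective while sweeping a cap on a stand-in raw-material objective to trace a Pareto front. Both objective functions and all GA settings are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def f1(x):   # total profit (to maximize)
        return 3.0 * x[0] + 2.0 * x[1]

    def f2(x):   # raw material use (to constrain)
        return x[0] ** 2 + 0.5 * x[1] ** 2

    def ga_maximize(fitness, bounds, pop_size=40, gens=60):
        lo, hi = bounds
        pop = rng.uniform(lo, hi, size=(pop_size, 2))
        for _ in range(gens):
            scores = np.array([fitness(x) for x in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]    # truncation selection
            kids = parents + rng.normal(0.0, 0.1, parents.shape)  # Gaussian mutation
            pop = np.clip(np.vstack([parents, kids]), lo, hi)
        return max(pop, key=fitness)

    for eps in [1.0, 2.0, 4.0]:
        # Penalize violations of f2(x) <= eps so the GA stays feasible.
        best = ga_maximize(lambda x: f1(x) - 1e3 * max(0.0, f2(x) - eps), (0.0, 3.0))
        print(f"eps={eps}: x={np.round(best, 2)}, profit={f1(best):.2f}, material={f2(best):.2f}")
    ```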

  18. Bystander Approaches: Empowering Students to Model Ethical Sexual Behavior

    Science.gov (United States)

    Lynch, Annette; Fleming, Wm. Michael

    2005-01-01

    Sexual violence on college campuses is well documented. Prevention education has emerged as an alternative to victim-- and perpetrator--oriented approaches used in the past. One sexual violence prevention education approach focuses on educating and empowering the bystander to become a point of ethical intervention. In this model, bystanders to…

  19. Supplementary Material for: A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja

    2015-01-01

    Abstract Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operant mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
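
    A sketch of the kind of variance-based global sensitivity analysis described, using the SALib package on a cheap stand-in function; the real study analyzed a cellular Potts model, and the parameter names and toy model here are assumptions.

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["adhesion", "chemotaxis", "stiffness"],  # invented parameter names
        "bounds": [[0.0, 1.0]] * 3,
    }

    X = saltelli.sample(problem, 1024)            # Saltelli sampling design
    # Stand-in "black box": nonlinear, with one interaction term.
    Y = np.sin(X[:, 0]) + 2.0 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2]

    Si = sobol.analyze(problem, Y)
    print("first-order indices:", np.round(Si["S1"], 3))
    print("total-order indices:", np.round(Si["ST"], 3))
    ```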

  20. Approach and development strategy for an agent-based model of economic confidence.

    Energy Technology Data Exchange (ETDEWEB)

    Sprigg, James A.; Pryor, Richard J.; Jorgensen, Craig Reed

    2004-08-01

    We are extending the existing features of Aspen, a powerful economic modeling tool, and introducing new features to simulate the role of confidence in economic activity. The new model is built from a collection of autonomous agents that represent households, firms, and other relevant entities like financial exchanges and governmental authorities. We simultaneously model several interrelated markets, including those for labor, products, stocks, and bonds. We also model economic tradeoffs, such as decisions of households and firms regarding spending, savings, and investment. In this paper, we review some of the basic principles and model components and describe our approach and development strategy for emulating consumer, investor, and business confidence. The model of confidence is explored within the context of economic disruptions, such as those resulting from disasters or terrorist events.

  1. Developing a model for effective leadership in healthcare: a concept mapping approach

    Science.gov (United States)

    Hargett, Charles William; Doty, Joseph P; Hauck, Jennifer N; Webb, Allison MB; Cook, Steven H; Tsipis, Nicholas E; Neumann, Julie A; Andolsek, Kathryn M; Taylor, Dean C

    2017-01-01

    Purpose Despite increasing awareness of the importance of leadership in healthcare, our understanding of the competencies of effective leadership remains limited. We used a concept mapping approach (a blend of qualitative and quantitative analysis of group processes to produce a visual composite of the group’s ideas) to identify stakeholders’ mental model of effective healthcare leadership, clarifying the underlying structure and importance of leadership competencies. Methods Literature review, focus groups, and consensus meetings were used to derive a representative set of healthcare leadership competency statements. Study participants subsequently sorted and rank-ordered these statements based on their perceived importance in contributing to effective healthcare leadership in real-world settings. Hierarchical cluster analysis of individual sortings was used to develop a coherent model of effective leadership in healthcare. Results A diverse group of 92 faculty and trainees individually rank-sorted 33 leadership competency statements. The highest rated statements were “Acting with Personal Integrity”, “Communicating Effectively”, “Acting with Professional Ethical Values”, “Pursuing Excellence”, “Building and Maintaining Relationships”, and “Thinking Critically”. Combining the results from hierarchical cluster analysis with our qualitative data led to a healthcare leadership model based on the core principle of Patient Centeredness and the core competencies of Integrity, Teamwork, Critical Thinking, Emotional Intelligence, and Selfless Service. Conclusion Using a mixed qualitative-quantitative approach, we developed a graphical representation of a shared leadership model derived in the healthcare setting. This model may enhance learning, teaching, and patient care in this important area, as well as guide future research. PMID:29355249

  2. Modelling of ductile and cleavage fracture by local approach

    International Nuclear Information System (INIS)

    Samal, M.K.; Dutta, B.K.; Kushwaha, H.S.

    2000-08-01

    This report describes the modelling of ductile and cleavage fracture processes by the local approach. It is now well known that the conventional fracture mechanics method based on single parameter criteria is not adequate to model fracture processes, because the size and geometry of the flaw and the loading type and rate all affect the fracture resistance behaviour of a structure. Hence, it is questionable to use the same fracture resistance curves as determined from standard tests in the analysis of real life components. So, there is a need for a method in which the parameters used for the analysis are true material properties, i.e. independent of geometry and size. One of the solutions to the above problem is the use of local approaches. These approaches have been extensively studied and applied to different materials (including SA333 Gr.6) in this report. Each method has been studied and reported in a separate section. This report has been divided into five sections. Section-I gives a brief review of the fundamentals of the fracture process. Section-II deals with modelling of ductile fracture by locally uncoupled models. In this section, the critical cavity growth parameters of the different models have been determined for the primary heat transport (PHT) piping material of the Indian pressurised heavy water reactor (PHWR). A comparative study has been done among the different models. The dependency of the critical parameters on the stress triaxiality factor has also been studied. It is observed that Rice and Tracey's model is the most suitable one, but its parameters are not fully independent of the triaxiality factor. For this purpose, a modification to Rice and Tracey's model is suggested in Section-III. Section-IV deals with modelling of the ductile fracture process by locally coupled models. Section-V deals with modelling of the cleavage fracture process by Beremin's model, which is based on Weibull's

  3. Repetitive Identification of Structural Systems Using a Nonlinear Model Parameter Refinement Approach

    Directory of Open Access Journals (Sweden)

    Jeng-Wen Lin

    2009-01-01

    Full Text Available This paper proposes a statistical confidence interval based nonlinear model parameter refinement approach for the health monitoring of structural systems subjected to seismic excitations. The developed model refinement approach uses the 95% confidence intervals of the estimated structural parameters to determine their statistical significance in a least-squares regression setting. When a parameter's confidence interval covers the zero value, it is statistically sustainable to truncate that parameter. The remaining parameters repetitively undergo this parameter sifting process for model refinement until the parameters' statistical significance cannot be further improved. This newly developed model refinement approach is implemented for the series models of multivariable polynomial expansions: the linear, the Taylor series, and the power series model, leading to more accurate identification as well as a more controllable design for system vibration control. Because the statistical regression based model refinement approach is intrinsically used to process a "batch" of data and obtain an ensemble average estimation, such as the structural stiffness, the Kalman filter and one of its extended versions are introduced to the refined power series model for structural health monitoring.
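
    The sifting loop described is easy to sketch: fit by least squares, drop every term whose 95% confidence interval covers zero, and refit until the retained set stabilizes. The regressors below are synthetic, not a structural model; statsmodels' OLS is assumed for the fit.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 200
    X = rng.normal(size=(n, 4))
    # True model uses only columns 0 and 2; columns 1 and 3 are noise terms.
    y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0.0, 0.5, n)

    active = list(range(X.shape[1]))
    while True:
        res = sm.OLS(y, X[:, active]).fit()
        ci = res.conf_int(alpha=0.05)          # rows: [lower, upper] per parameter
        keep = [a for a, (lo, hi) in zip(active, ci) if not (lo <= 0.0 <= hi)]
        if keep == active:                     # no CI covers zero: model is refined
            break
        active = keep

    print("retained regressors:", active, "estimates:", np.round(res.params, 2))
    ```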

  4. An agent-based approach to model land-use change at a regional scale

    NARCIS (Netherlands)

    Valbuena, D.F.; Verburg, P.H.; Bregt, A.K.; Ligtenberg, A.

    2010-01-01

    Land-use/cover change (LUCC) is a complex process that includes actors and factors at different social and spatial levels. A common approach to analyse and simulate LUCC as the result of individual decisions is agent-based modelling (ABM). However, ABM is often applied to simulate processes at local

  5. A comparative study of generalized linear mixed modelling and artificial neural network approach for the joint modelling of survival and incidence of Dengue patients in Sri Lanka

    Science.gov (United States)

    Hapugoda, J. C.; Sooriyarachchi, M. R.

    2017-09-01

    Survival time of patients with a disease and the incidence of that particular disease (count) are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated in such a way that diseases that occur rarely have shorter survival times, or vice versa. Due to this fact, joint modelling of these two variables will provide interesting, and certainly improved, results compared with modelling them separately. The authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the Discrete Time Hazard model with the Poisson Regression model, to jointly model survival and count. As the Artificial Neural Network (ANN) has become one of the most powerful computational tools for modelling complex non-linear systems, it was proposed to develop a new joint model of survival and count of Dengue patients of Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare the model fit, measures such as root mean square error (RMSE), absolute mean error (AME) and correlation coefficient (R) were used. The measures indicate that the GRNN model fits the data better than the GLMM model.
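
    A GRNN is, at its core, Gaussian-kernel regression: each prediction is a similarity-weighted average of the training targets. A minimal sketch on synthetic data follows; the single smoothing parameter sigma is the only tuning knob, and nothing here reflects the Dengue data.

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=0.5):
        preds = []
        for x in X_query:
            d2 = np.sum((X_train - x) ** 2, axis=1)     # squared distances
            w = np.exp(-d2 / (2.0 * sigma ** 2))        # pattern-layer activations
            preds.append(np.dot(w, y_train) / np.sum(w))
        return np.array(preds)

    rng = np.random.default_rng(5)
    X = rng.uniform(0.0, 4.0, size=(80, 1))
    y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, 80)

    X_new = np.array([[1.0], [2.0], [3.0]])
    print(grnn_predict(X, y, X_new))
    # Fit would be judged as in the study: RMSE, mean absolute error, and R.
    ```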

  6. An Application of Bayesian Approach in Modeling Risk of Death in an Intensive Care Unit.

    Directory of Open Access Journals (Sweden)

    Rowena Syn Yin Wong

    Full Text Available There are not many studies that attempt to model intensive care unit (ICU) risk of death in developing countries, especially in South East Asia. The aim of this study was to propose and describe the application of a Bayesian approach to modeling in-ICU deaths in a Malaysian ICU. This was a prospective study in a mixed medical-surgical ICU in a multidisciplinary tertiary referral hospital in Malaysia. Data collection included variables that were defined in the Acute Physiology and Chronic Health Evaluation IV (APACHE IV) model. A Bayesian Markov Chain Monte Carlo (MCMC) simulation approach was applied in the development of four multivariate logistic regression predictive models for the ICU, where the main outcome measure was in-ICU mortality risk. The performance of the models was assessed through overall model fit, discrimination and calibration measures. Results from the Bayesian models were also compared against results obtained using the frequentist maximum likelihood method. The study involved 1,286 consecutive ICU admissions between January 1, 2009 and June 30, 2010, of which 1,111 met the inclusion criteria. Patients who were admitted to the ICU were generally younger, predominantly male, with a low co-morbidity load and mostly under mechanical ventilation. The overall in-ICU mortality rate was 18.5% and the overall mean Acute Physiology Score (APS) was 68.5. All four models exhibited good discrimination, with area under the receiver operating characteristic curve (AUC) values of approximately 0.8. Calibration was acceptable (Hosmer-Lemeshow p-values > 0.05) for all models except model M3. Model M1 was identified as the model with the best overall performance in this study. Four prediction models were proposed, where the best model was chosen based on its overall performance in this study. This study has also demonstrated the promising potential of the Bayesian MCMC approach as an alternative in the analysis and modeling of in-ICU mortality outcomes.
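
    To show the MCMC machinery in miniature, here is a bare-bones random-walk Metropolis sampler for a Bayesian logistic regression of death on one standardized severity score. The data are simulated and a flat prior is implied, so this sketches the method, not the study's models.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n = 500
    severity = rng.normal(60.0, 15.0, n)                  # an APS-like score, assumed
    x = (severity - severity.mean()) / severity.std()     # standardized covariate
    true_beta = np.array([-1.5, 1.0])                     # intercept, slope
    p = 1.0 / (1.0 + np.exp(-(true_beta[0] + true_beta[1] * x)))
    died = rng.random(n) < p

    def log_post(beta):
        eta = beta[0] + beta[1] * x
        return np.sum(died * eta - np.log1p(np.exp(eta)))  # Bernoulli log-likelihood

    beta = np.zeros(2)
    samples = []
    for _ in range(20_000):
        prop = beta + rng.normal(0.0, 0.05, 2)             # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(beta):
            beta = prop                                    # Metropolis accept
        samples.append(beta)

    post = np.array(samples[5_000:])                       # drop burn-in
    print("posterior means:", post.mean(axis=0))           # close to (-1.5, 1.0)
    ```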

  7. Towards a Semantic E-Learning Theory by Using a Modelling Approach

    Science.gov (United States)

    Yli-Luoma, Pertti V. J.; Naeve, Ambjorn

    2006-01-01

    In the present study, a semantic perspective on e-learning theory is advanced and a modelling approach is used. This modelling approach towards the new learning theory is based on the four SECI phases of knowledge conversion: Socialisation, Externalisation, Combination and Internalisation, introduced by Nonaka in 1994, and involving two levels of…

  8. Truncated conformal space approach to scaling Lee-Yang model

    International Nuclear Information System (INIS)

    Yurov, V.P.; Zamolodchikov, Al.B.

    1989-01-01

    A numerical approach to 2D relativistic field theories is suggested. Considering a field theory model as an ultraviolet conformal field theory perturbed by a suitable relevant scalar operator, one studies it in finite volume (on a circle). The perturbed Hamiltonian acts in the conformal field theory space of states and its matrix elements can be extracted from the conformal field theory. Truncation of the space at a reasonable level results in a finite-dimensional problem for numerical analysis. The nonunitary field theory with the ultraviolet region controlled by the minimal conformal theory M(2/5) is studied in detail. 9 refs.; 17 figs

  9. Mathematical models for therapeutic approaches to control HIV disease transmission

    CERN Document Server

    Roy, Priti Kumar

    2015-01-01

    The book discusses different therapeutic approaches based on different mathematical models to control the HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate the deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategy with perfect drug adherence and also tries to seek viewpoints of the same issue from different angles with various mathematical models to computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models on HIV/AIDS. It also guides research scientists, working in the periphery of mathematical modeling, and helps them to explore a hypothetical method by examining its consequences in the form of a mathematical modelling and making some scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...

  10. Genetic programming based models in plant tissue culture: An addendum to traditional statistical approach.

    Science.gov (United States)

    Mridula, Meenu R; Nair, Ashalatha S; Kumar, K Satheesh

    2018-02-01

    In this paper, we compared the efficacy of an observation-based modeling approach using a genetic algorithm with regular statistical analysis as an alternative methodology in plant research. Preliminary experimental data on in vitro rooting were taken for this study, with the aim of understanding the effect of charcoal and naphthalene acetic acid (NAA) on successful rooting and of optimizing the two variables for maximum result. Observation-based modelling, as well as the traditional approach, could identify NAA as a critical factor in rooting of the plantlets under the experimental conditions employed. Symbolic regression analysis using the software deployed here optimised the treatments studied and was successful in identifying the complex non-linear interaction among the variables, with minimal preliminary data. The presence of charcoal in the culture medium has a significant impact on root generation by reducing basal callus mass formation. Such an approach is advantageous for establishing in vitro culture protocols, as these models have significant potential for saving time and expenditure in plant tissue culture laboratories, and further reduce the need for a specialised background.

  11. A security modeling approach for web-service-based business processes

    DEFF Research Database (Denmark)

    Jensen, Meiko; Feja, Sven

    2009-01-01

    a transformation that automatically derives WS-SecurityPolicy-conformant security policies from the process model, which in conjunction with the generated WS-BPEL processes and WSDL documents provides the ability to deploy and run the complete security-enhanced process based on Web Service technology.......The rising need for security in SOA applications requires better support for management of non-functional properties in web-based business processes. Here, the model-driven approach may provide valuable benefits in terms of maintainability and deployment. Apart from modeling the pure functionality...... of a process, the consideration of security properties at the level of a process model is a promising approach. In this work-in-progress paper we present an extension to the ARIS SOA Architect that is capable of modeling security requirements as a separate security model view. Further we provide...

  12. Query Language for Location-Based Services: A Model Checking Approach

    Science.gov (United States)

    Hoareau, Christian; Satoh, Ichiro

    We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach is distinct from existing research because it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and will be discussed.

  13. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    Science.gov (United States)

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented as an Excel Visual Basic for Applications (VBA) algorithm that utilizes trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of the time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by the measurement points of the preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R^2, while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented in a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
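
    The horizontal-translation idea can be sketched in a few lines (in Python rather than VBA, and on synthetic exponential recessions): each succeeding segment is shifted in time so that its vertex lands on the curve assembled so far.

    ```python
    import numpy as np

    def build_mrc(segments):
        """segments: list of (t, h) arrays, each monotonically declining in h."""
        t0, h0 = segments[0]
        master_t, master_h = list(t0), list(h0)
        for t, h in segments[1:]:
            # Find the time on the master curve where its value equals the
            # segment's vertex; h decreases, so reverse arrays for np.interp.
            t_on_master = np.interp(h[0], master_h[::-1], master_t[::-1])
            shift = t_on_master - t[0]          # horizontal translation
            master_t.extend(t + shift)
            master_h.extend(h)
        order = np.argsort(master_t)
        return np.array(master_t)[order], np.array(master_h)[order]

    t = np.linspace(0.0, 30.0, 31)
    segments = [(t, 10.0 * np.exp(-0.05 * t)),   # starts higher
                (t, 6.0 * np.exp(-0.05 * t))]    # vertex 6.0 lands on first curve
    mrc_t, mrc_h = build_mrc(segments)
    print(mrc_t[:5], mrc_h[:5])
    ```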

  14. Global GPS Ionospheric Modelling Using Spherical Harmonic Expansion Approach

    Directory of Open Access Journals (Sweden)

    Byung-Kyu Choi

    2010-12-01

    Full Text Available In this study, we developed a global ionosphere model based on measurements from a worldwide network of the global positioning system (GPS). About 100 international GPS reference stations were used in developing the ionospheric model, and the spherical harmonic expansion approach was used as the mathematical method. In order to produce the ionospheric total electron content (TEC) in grid form, we defined spatial resolutions of 2.0 degrees in latitude and 5.0 degrees in longitude. Two-dimensional TEC maps were constructed at one-hour intervals, giving a high temporal resolution compared to the global ionosphere maps produced by several analysis centers. As a result, we could detect the sudden increase of TEC by processing GPS observables on 29 October 2003, when a massive solar flare took place.
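
    A sketch of the mathematical core, a least-squares fit of a low-degree real spherical harmonic expansion to TEC samples; the station locations and TEC values are random stand-ins for processed GPS observables, and scipy's sph_harm argument convention (azimuth first) is assumed.

    ```python
    import numpy as np
    from scipy.special import sph_harm

    rng = np.random.default_rng(9)
    n_obs = 300
    lon = rng.uniform(0.0, 2.0 * np.pi, n_obs)      # azimuth (longitude)
    colat = rng.uniform(0.1, np.pi - 0.1, n_obs)    # polar angle (colatitude)
    tec = 20.0 + 5.0 * np.cos(colat) + rng.normal(0.0, 1.0, n_obs)  # fake TEC (TECU)

    # Build a real-valued design matrix up to degree 2 (real/imag parts of Y_nm).
    cols = []
    for n in range(3):
        for m in range(-n, n + 1):
            Y = sph_harm(abs(m), n, lon, colat)
            cols.append(Y.real if m >= 0 else Y.imag)
    coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), tec, rcond=None)
    print("estimated coefficients:", np.round(coeffs, 2))
    ```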

  15. The Use of Modeling Approach for Teaching Exponential Functions

    Science.gov (United States)

    Nunes, L. F.; Prates, D. B.; da Silva, J. M.

    2017-12-01

    This work presents a discussion related to the teaching and learning of mathematical contents related to the study of exponential functions in a freshman student group enrolled in the first semester of the Science and Technology Bachelor's program (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly mentioned in the literature, the modelling approach was used as an educational teaching tool to produce contextualization in the teaching-learning process of exponential functions for these students. In this sense, some simple models elaborated with the GeoGebra software were used and, to obtain a qualitative evaluation of the investigation and the results, Didactic Engineering was used as the research methodology. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed and described.

  16. Predicting the emission from an incineration plant - a modelling approach

    International Nuclear Information System (INIS)

    Rohyiza Baan

    2004-01-01

    The emissions from the combustion of Municipal Solid Waste (MSW) have become an important issue in incineration technology. Resulting from unstable combustion conditions, the formation of undesirable compounds such as CO, SO2, NOx, PM10 and dioxin becomes a source of pollutant concentrations in the atmosphere. The impact of emissions on criteria air pollutant concentrations can be obtained directly using ambient air monitoring equipment or predicted using dispersion modelling. The literature shows that the complicated atmospheric processes that occur in nature can be described using mathematical models. This paper highlights the air dispersion model as a tool to relate and simulate the release and dispersion of air pollutants in the atmosphere. The technique is based on a programming approach to develop an air dispersion ground-level concentration model using the Gaussian and Pasquill equations. This model is useful for studying the consequences of various sources of air pollutants and estimating the amount of pollutants released into the air from existing emission sources. From this model, it was found that the difference between actual conditions and the model's predictions is about 5%. (Author)
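
    As a concrete reference point, the classical Gaussian plume expression for ground-level concentration from a continuous point source is sketched below; all numbers are illustrative, and in practice sigma_y and sigma_z would come from Pasquill stability classes.

    ```python
    import numpy as np

    def ground_level_concentration(Q, u, y, sigma_y, sigma_z, H):
        """Ground-level concentration (g/m^3) downwind of a continuous point source.

        Q: emission rate (g/s), u: wind speed (m/s), y: crosswind distance (m),
        sigma_y/sigma_z: dispersion coefficients (m), H: effective stack height (m).
        """
        return (Q / (np.pi * u * sigma_y * sigma_z)
                * np.exp(-y**2 / (2.0 * sigma_y**2))
                * np.exp(-H**2 / (2.0 * sigma_z**2)))

    # Example: 100 g/s source, 4 m/s wind, receptor on the plume centerline.
    c = ground_level_concentration(Q=100.0, u=4.0, y=0.0,
                                   sigma_y=80.0, sigma_z=40.0, H=50.0)
    print(f"{c * 1e6:.1f} microgram/m^3")
    ```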

  17. ECOMOD - An ecological approach to radioecological modelling

    International Nuclear Information System (INIS)

    Sazykina, Tatiana G.

    2000-01-01

    A unified methodology is proposed to simulate the dynamic processes of radionuclide migration in aquatic food chains in parallel with their stable analogue elements. The distinguishing feature of the unified radioecological/ecological approach is the description of radionuclide migration along with dynamic equations for the ecosystem. The ability of the methodology to predict the results of radioecological experiments is demonstrated by an example of radionuclide (iron group) accumulation by a laboratory culture of the algae Platymonas viridis. Based on the unified methodology, the 'ECOMOD' radioecological model was developed to simulate dynamic radioecological processes in aquatic ecosystems. It comprises three basic modules, which are operated as a set of inter-related programs. The 'ECOSYSTEM' module solves non-linear ecological equations, describing the biomass dynamics of essential ecosystem components. The 'RADIONUCLIDE DISTRIBUTION' module calculates the radionuclide distribution in abiotic and biotic components of the aquatic ecosystem. The 'DOSE ASSESSMENT' module calculates doses to aquatic biota and doses to man from aquatic food chains. The application of the ECOMOD model to reconstruct the radionuclide distribution in the Chernobyl Cooling Pond ecosystem in the early period after the accident shows good agreement with observations

  18. An approach for activity-based DEVS model specification

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2016-01-01

    Creation of DEVS models has been advanced through Model Driven Architecture and its frameworks. The overarching role of the frameworks has been to help develop model specifications in a disciplined fashion. Frameworks can provide intermediary layers between the higher level mathematical models...... and their corresponding software specifications from both structural and behavioral aspects. Unlike structural modeling, developing models to specify behavior of systems is known to be harder and more complex, particularly when operations with non-trivial control schemes are required. In this paper, we propose specifying...... activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...

  19. Assessment of Energy Removal Impacts on Physical Systems: Hydrodynamic Model Domain Expansion and Refinement, and Online Dissemination of Model Results

    International Nuclear Information System (INIS)

    Yang, Zhaoqing; Khangaonkar, Tarang; Wang, Taiping

    2010-01-01

    In this report we describe (1) the expansion of the PNNL hydrodynamic model domain to include the continental shelf along the coasts of Washington, Oregon, and Vancouver Island; and (2) the approach and progress in developing online/Internet dissemination of model results and outreach efforts in support of the Puget Sound Operational Forecast System (PS-OPF). Submittal of this report completes the work on Task 2.1.2, Effects of Physical Systems, Subtask 2.1.2.1, Hydrodynamics, for fiscal year 2010 of the Environmental Effects of Marine and Hydrokinetic Energy project.

  20. Modeling of isothermal bubbly flow with interfacial area transport equation and bubble number density approach

    Energy Technology Data Exchange (ETDEWEB)

    Sari, Salih [Hacettepe University, Department of Nuclear Engineering, Beytepe, 06800 Ankara (Turkey); Erguen, Sule [Hacettepe University, Department of Nuclear Engineering, Beytepe, 06800 Ankara (Turkey); Barik, Muhammet; Kocar, Cemil; Soekmen, Cemal Niyazi [Hacettepe University, Department of Nuclear Engineering, Beytepe, 06800 Ankara (Turkey)

    2009-03-15

    In this study, isothermal turbulent bubbly flow is mechanistically modeled. Fluent version 6.3.26 is used as the computational fluid dynamics solver. First, the mechanistic models that simulate the interphase momentum transfer between the gas (bubble) and liquid (continuous) phases are investigated, and proper models for the known flow conditions are selected. Second, an interfacial area transport equation (IATE) solution is added to Fluent's solution scheme in order to model the interphase momentum transfer mechanisms. In addition to solving the IATE, a bubble number density (BND) approach is added to Fluent and used in the simulations. Different source/sink models derived for the IATE and BND models are also investigated. Simulations of experiments based on the available data in the literature are performed using the IATE and BND models in two and three dimensions. The results show that the simulations performed with the IATE and BND models agree with each other and with the experimental data. The simulations performed in three dimensions give better agreement with the experimental data.
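
    For spherical bubbles, the IATE and BND descriptions are linked by elementary geometry, which is why the two transported quantities can be cross-checked against each other as done here. With bubble number density $n$, bubble diameter $d$ and void fraction $\alpha$ (a textbook relation, not quoted from the paper):

$$a_i = \pi d^2 n, \qquad \alpha = \frac{\pi d^3 n}{6} \quad\Longrightarrow\quad a_i = (36\pi)^{1/3}\, n^{1/3}\, \alpha^{2/3},$$

    so either the interfacial area concentration $a_i$ or the number density $n$ can serve as the transported variable.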

  1. Modeling of isothermal bubbly flow with interfacial area transport equation and bubble number density approach

    International Nuclear Information System (INIS)

    Sari, Salih; Erguen, Sule; Barik, Muhammet; Kocar, Cemil; Soekmen, Cemal Niyazi

    2009-01-01

    In this study, isothermal turbulent bubbly flow is mechanistically modeled. Fluent version 6.3.26 is used as the computational fluid dynamics solver. First, the mechanistic models that simulate the interphase momentum transfer between the gas (bubble) and liquid (continuous) phases are investigated, and proper models for the known flow conditions are selected. Second, an interfacial area transport equation (IATE) solution is added to Fluent's solution scheme in order to model the interphase momentum transfer mechanisms. In addition to solving the IATE, a bubble number density (BND) approach is added to Fluent and used in the simulations. Different source/sink models derived for the IATE and BND models are also investigated. Simulations of experiments based on the available data in the literature are performed using the IATE and BND models in two and three dimensions. The results show that the simulations performed with the IATE and BND models agree with each other and with the experimental data. The simulations performed in three dimensions give better agreement with the experimental data.

  2. Analytical results for a stochastic model of gene expression with arbitrary partitioning of proteins

    Science.gov (United States)

    Tschirhart, Hugo; Platini, Thierry

    2018-05-01

    In biophysics, the search for analytical solutions of stochastic models of cellular processes is often a challenging task. In recent work on models of gene expression, it was shown that a mapping based on partitioning of Poisson arrivals (PPA-mapping) can lead to exact solutions for previously unsolved problems. While the approach can be used in general when the model involves Poisson processes corresponding to creation or degradation, current applications of the method and new results derived using it have been limited to date. In this paper, we present the exact solution of a variation of the two-stage model of gene expression (with time dependent transition rates) describing the arbitrary partitioning of proteins. The methodology proposed makes full use of the PPA-mapping by transforming the original problem into a new process describing the evolution of three biological switches. Based on a succession of transformations, the method leads to a hierarchy of reduced models. We give an integral expression of the time dependent generating function as well as explicit results for the mean, variance, and correlation function. Finally, we discuss how results for time dependent parameters can be extended to the three-stage model and used to make inferences about models with parameter fluctuations induced by hidden stochastic variables.
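
    For reference, the two-stage model that the record builds on has mRNA produced at rate k_m and degraded at rate g_m, with proteins translated from each mRNA at rate k_p and degraded at rate g_p. A minimal stochastic (Gillespie) simulation of that baseline model, with invented rates and without the paper's PPA-mapping and arbitrary-partitioning extension, could look like:

```python
import numpy as np

rng = np.random.default_rng(0)
km, gm, kp, gp = 2.0, 1.0, 5.0, 0.2   # illustrative rates
m, p, t, t_end = 0, 0, 0.0, 100.0

while t < t_end:
    # propensities: transcription, mRNA decay, translation, protein decay
    a = np.array([km, gm * m, kp * m, gp * p])
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)        # time to next reaction
    r = rng.choice(4, p=a / a0)           # which reaction fires
    if r == 0:   m += 1
    elif r == 1: m -= 1
    elif r == 2: p += 1
    else:        p -= 1

print(m, p)  # one stochastic realization near t = t_end
```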

  3. Exploring a model-driven architecture (MDA) approach to health care information systems development.

    Science.gov (United States)

    Raghupathi, Wullianallur; Umar, Amjad

    2008-05-01

    To explore the potential of the model-driven architecture (MDA) in health care information systems development, an MDA is conceptualized and developed for a health clinic system to track patient information. A prototype of the MDA is implemented using an advanced MDA tool. The UML provides the underlying modeling support in the form of the class diagram. The PIM-to-PSM transformation rules are applied to generate the prototype application from the model. The result of the research is a complete MDA methodology for developing health care information systems. Additional insights gained include the development of transformation rules and documentation of the challenges in the application of MDA to health care. Design guidelines for future MDA applications are described. The model has the potential for generalizability. The overall approach supports limited interoperability and portability. The research demonstrates the applicability of the MDA approach to health care information systems development. When properly implemented, it has the potential to overcome the challenges of platform (vendor) dependency, lack of open standards, interoperability, portability, scalability, and the high cost of implementation.

  4. Modeling and analysis of a decentralized electricity market: An integrated simulation/optimization approach

    International Nuclear Information System (INIS)

    Sarıca, Kemal; Kumbaroğlu, Gürkan; Or, Ilhan

    2012-01-01

    In this study, a model is developed to investigate the implications of an hourly day-ahead competitive power market on generator profits, electricity prices, availability and supply security. An integrated simulation/optimization approach is employed, integrating a multi-agent simulation model with two alternative optimization models. The simulation model represents interactions between power generator, system operator, power user and power transmitter agents, while the network flow optimization model oversees and optimizes the electricity flows and dispatches generators based on two alternative approaches to modeling the underlying transmission network: a linear minimum cost network flow model and a non-linear alternating current (AC) optimal power flow model. Supply, demand, transmission, capacity and other technological constraints are thereby enforced. The transmission network on which the scenario analyses are carried out includes 30 buses, 41 lines, 9 generators, and 21 power users. The scenarios examined in the analysis cover various settings of transmission line capacities/fees and hourly learning algorithms. Results provide insight into key behavioral and structural aspects of a decentralized electricity market under network constraints and reveal the importance of using an AC network instead of a simplified linear network flow approach. -- Highlights: ► An agent-based simulation model with an AC transmission environment and a day-ahead market. ► Physical network parameters have dramatic effects on price levels and stability. ► Owing to the AC nature of the transmission network, adaptive agents have more local market power than under the minimum cost network flow model. ► The behavior of the generators has a significant effect on market price formation, as reflected in their bidding strategies. ► Transmission line capacity and fee policies are found to be very effective in price formation in the market.

  5. Multirule Based Diagnostic Approach for the Fog Predictions Using WRF Modelling Tool

    Directory of Open Access Journals (Sweden)

    Swagata Payra

    2014-01-01

    Full Text Available The prediction of fog onset remains difficult despite the progress in numerical weather prediction. It is a complex process and requires adequate representation of the local perturbations in weather prediction models. It mainly depends upon microphysical and mesoscale processes that act within the boundary layer. This study utilizes a multirule based diagnostic (MRD) approach using postprocessing of the model simulations for fog predictions. The empiricism involved in this approach mainly bridges the gap between mesoscale and microscale variables, which are related to the mechanism of fog formation. Fog occurrence is a common phenomenon during the winter season over Delhi, India, with the passage of western disturbances across the northwestern part of the country accompanied by a significant amount of moisture. This study implements the above approach for the prediction of fog occurrence and onset time over Delhi. For this purpose, a high resolution Weather Research and Forecasting (WRF) model is used for fog simulations. The study involves model validation and postprocessing of the model simulations for the MRD approach and its subsequent application to fog predictions. Through this approach, the model identified foggy and nonfoggy days correctly 94% of the time. Further, the onset of fog events is well captured within an accuracy of 30–90 minutes. This study demonstrates that the multirule based postprocessing approach is a useful and highly promising tool for improving fog predictions.
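
    The record does not list the rules themselves, so the sketch below only illustrates the shape of a multirule diagnostic applied to model output: each rule thresholds a boundary-layer variable, and fog is flagged when all rules agree. The variables and threshold values are hypothetical stand-ins, not those used in the study:

```python
def fog_diagnostic(rh2m, wind10m, t2m_minus_td2m, cloud_base_m):
    """Return True when all (hypothetical) fog rules are satisfied.

    rh2m           : 2 m relative humidity (%)
    wind10m        : 10 m wind speed (m/s)
    t2m_minus_td2m : 2 m temperature-dewpoint spread (K)
    cloud_base_m   : diagnosed cloud base height (m)
    """
    rules = (
        rh2m >= 95.0,             # near-saturated surface layer
        wind10m <= 3.0,           # weak mixing
        t2m_minus_td2m <= 1.0,    # small dewpoint depression
        cloud_base_m <= 50.0,     # saturation close to the ground
    )
    return all(rules)

# Postprocess hourly model output to find the first fog onset hour
hourly = [(96.0, 1.2, 0.5, 30.0), (90.0, 4.0, 2.5, 400.0)]
onset = next((h for h, v in enumerate(hourly) if fog_diagnostic(*v)), None)
print(onset)
```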

  6. Validation of a LES turbulence modeling approach on a steady engine head flow

    NARCIS (Netherlands)

    Huijnen, V.; Somers, L.M.T.; Baert, R.S.G.; Goey, de L.P.H.; Dias, V.

    2005-01-01

    The application of the LES turbulence modeling approach in the Kiva environment is validated on a complex geometry. Results for the steady flow in a realistic geometry of a production-type heavy-duty diesel engine head with a 120 mm cylinder bore are presented. The bulk Reynolds number is Reb = 1…

  7. An acoustic-convective splitting-based approach for the Kapila two-phase flow model

    Energy Technology Data Exchange (ETDEWEB)

    Eikelder, M.F.P. ten, E-mail: m.f.p.teneikelder@tudelft.nl [EDF R& D, AMA, 7 boulevard Gaspard Monge, 91120 Palaiseau (France); Eindhoven University of Technology, Department of Mathematics and Computer Science, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Daude, F. [EDF R& D, AMA, 7 boulevard Gaspard Monge, 91120 Palaiseau (France); IMSIA, UMR EDF-CNRS-CEA-ENSTA 9219, Université Paris Saclay, 828 Boulevard des Maréchaux, 91762 Palaiseau (France); Koren, B.; Tijsseling, A.S. [Eindhoven University of Technology, Department of Mathematics and Computer Science, P.O. Box 513, 5600 MB Eindhoven (Netherlands)

    2017-02-15

    In this paper we propose a new acoustic-convective splitting-based numerical scheme for the Kapila five-equation two-phase flow model. The splitting operator decouples the acoustic waves and convective waves. The resulting two submodels are alternately numerically solved to approximate the solution of the entire model. The Lagrangian form of the acoustic submodel is numerically solved using an HLLC-type Riemann solver whereas the convective part is approximated with an upwind scheme. The result is a simple method which allows for a general equation of state. Numerical computations are performed for standard two-phase shock tube problems. A comparison is made with a non-splitting approach. The results are in good agreement with reference results and exact solutions.

  8. A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen

    2007-01-01

    This paper presents a decision support system for the assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertainties…

  9. A survey on computational intelligence approaches for predictive modeling in prostate cancer

    OpenAIRE

    Cosma, G; Brown, D; Archer, M; Khan, M; Pockley, AG

    2017-01-01

    Predictive modeling in medicine involves the development of computational models which are capable of analysing large amounts of data in order to predict healthcare outcomes for individual patients. Computational intelligence approaches are suitable when the data to be modelled are too complex for conventional statistical techniques to process quickly and efficiently. These advanced approaches are based on mathematical models that have been especially developed for dealing with the uncertainty an…

  10. A case for multi-model and multi-approach based event attribution: The 2015 European drought

    Science.gov (United States)

    Hauser, Mathias; Gudmundsson, Lukas; Orth, René; Jézéquel, Aglaé; Haustein, Karsten; Seneviratne, Sonia Isabelle

    2017-04-01

    Science on the role of anthropogenic influence on extreme weather events such as heat waves or droughts has evolved rapidly over the past years. The approach of "event attribution" compares the occurrence probability of an event in the present, factual world with the probability of the same event in a hypothetical, counterfactual world without human-induced climate change. Every such analysis necessarily faces multiple methodological choices including, but not limited to: the event definition, climate model configuration, and the design of the counterfactual world. Here, we explore the role of such choices for an attribution analysis of the 2015 European summer drought (Hauser et al., in preparation). While some GCMs suggest that anthropogenic forcing made the 2015 drought more likely, others suggest no impact, or even a decrease in the event probability. These results additionally differ for single GCMs, depending on the reference used for the counterfactual world. Observational results do not suggest a historical tendency towards more drying, but the record may be too short to provide robust assessments because of the large interannual variability of drought occurrence. These results highlight the need for a multi-model and multi-approach framework in event attribution research. This is especially important for events with low signal to noise ratio and high model dependency such as regional droughts. Hauser, M., L. Gudmundsson, R. Orth, A. Jézéquel, K. Haustein, S.I. Seneviratne, in preparation. A case for multi-model and multi-approach based event attribution: The 2015 European drought.

  11. Analytic Model Predictive Control of Uncertain Nonlinear Systems: A Fuzzy Adaptive Approach

    Directory of Open Access Journals (Sweden)

    Xiuyan Peng

    2015-01-01

    Full Text Available A fuzzy adaptive analytic model predictive control method is proposed in this paper for a class of uncertain nonlinear systems. Specifically, invoking standard results on the Moore-Penrose matrix inverse, the dimension-mismatch problem that commonly exists between the inputs and outputs of systems is first solved. Then, resorting to an analytic model predictive control law combined with a fuzzy adaptive approach, a fuzzy adaptive predictive controller is developed for the underlying systems. To further reduce the impact of the fuzzy approximation error on the system and improve its robustness, a robust compensation term is introduced. It is shown that by applying the fuzzy adaptive analytic model predictive controller, the rudder roll stabilization system is ultimately uniformly bounded stabilized in the H-infinity sense. Finally, simulation results demonstrate the effectiveness of the proposed method.

  12. A modelling approach for improved implementation of information technology in manufacturing systems

    DEFF Research Database (Denmark)

    Larsen, Michael Holm; Langer, Gilad; Kirkby, Lars Phillip

    2000-01-01

    The paper presents a modelling approach, which is based on the multiple view perspective of Soft Systems Methodology and an encapsulation of these perspectives into an object orientated model. The approach provides a structured procedure for putting theoretical abstractions of a new production concept into practice. The paper demonstrates the use of the approach in a practical case, which involves modelling of the shop floor activities and control system at the aluminium parts production of a Danish manufacturer of state-of-the-art audio-video equipment and telephones.

  13. A model-guided symbolic execution approach for network protocol implementations and vulnerability detection.

    Directory of Open Access Journals (Sweden)

    Shameng Wen

    Full Text Available Formal techniques have been devoted to analyzing whether network protocol specifications violate security policies; however, these methods cannot detect vulnerabilities in the implementations of the network protocols themselves. Symbolic execution can be used to analyze the paths of the network protocol implementations, but for stateful network protocols, it is difficult to reach the deep states of the protocol. This paper proposes a novel model-guided approach to detect vulnerabilities in network protocol implementations. Our method first abstracts a finite state machine (FSM model, then utilizes the model to guide the symbolic execution. This approach achieves high coverage of both the code and the protocol states. The proposed method is implemented and applied to test numerous real-world network protocol implementations. The experimental results indicate that the proposed method is more effective than traditional fuzzing methods such as SPIKE at detecting vulnerabilities in the deep states of network protocol implementations.
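
    As a rough illustration of the guidance idea only (not the authors' tool), the sketch below abstracts a protocol as a finite state machine and enumerates message sequences that drive it into deep states; a symbolic executor would then be seeded with these sequences instead of exploring all paths blindly. The FSM and all names are invented:

```python
from collections import deque

# Hypothetical FSM of a stateful protocol: state -> {message: next_state}
FSM = {
    "INIT":     {"HELLO": "GREETED"},
    "GREETED":  {"AUTH": "AUTHED", "QUIT": "CLOSED"},
    "AUTHED":   {"DATA": "TRANSFER", "QUIT": "CLOSED"},
    "TRANSFER": {"DONE": "CLOSED"},
    "CLOSED":   {},
}

def sequences_to(target, start="INIT"):
    """Breadth-first search for message sequences reaching `target`."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == target:
            yield path
            continue
        for msg, nxt in FSM[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [msg]))

# Seed the symbolic executor with inputs that reach a deep state
for seq in sequences_to("TRANSFER"):
    print(seq)   # e.g. ['HELLO', 'AUTH', 'DATA']
```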

  14. Prediction of the flooding process at the Ronneburg site - results of an integrated approach

    International Nuclear Information System (INIS)

    Paul, M.; Saenger, H.-J.; Snagowski, S.; Maerten, H.; Eckart, M.

    1998-01-01

    The flooding process of the Ronneburg uranium mine (WISMUT) was initiated at the turn of the year 1997 to 1998. In order to prepare the flooding process and to derive and optimize technological measures, an integrated modelling approach was chosen which includes several coupled modules. The most important issues to be answered are: (1) prediction of the flooding time; (2) prediction of the groundwater level at the post-flooding stage, and assessment of the amount, location and quality of flooding waters entering the receiving streams at the final stage; (3) prediction of water quality within the mine during the flooding process; and (4) definition of technological measures and assessment of their efficiency. A box model which includes the three-dimensional distribution of the cavity volume in the mine represents the model core. The model considers the various types of dewatered cavity volumes for each mine level/mining field and the degree of vertical and horizontal connection between the mining fields. Different types of open mine space as well as the dewatered geological pore and joint volume are considered, taking into account the contour of the depression cone prior to flooding and the characteristics of the different rock types. Based on the mine water balance and the flooding technology, the model predicts the rise of the water table over time during the flooding process for each mine field separately. In order to predict the mine water quality and the efficiency of in-situ water treatment, the box model was linked to a geochemical model (PHREEQC). A three-dimensional flow model is used to evaluate the post-flooding situation at the Ronneburg site. This model is coupled to the box model. The modelling results of various flooding scenarios show that a prediction of the post-flooding geohydraulic situation is possible despite the uncertainties concerning the input parameters which still exist. The post-flooding water table in the central part of the Ronneburg mine will be 270 m…

  15. Multimethod, multistate Bayesian hierarchical modeling approach for use in regional monitoring of wolves.

    Science.gov (United States)

    Jiménez, José; García, Emilio J; Llaneza, Luis; Palacios, Vicente; González, Luis Mariano; García-Domínguez, Francisco; Múñoz-Igualada, Jaime; López-Bao, José Vicente

    2016-08-01

    In many cases, the first step in large-carnivore management is to obtain objective, reliable, and cost-effective estimates of population parameters through procedures that are reproducible over time. However, monitoring predators over large areas is difficult, and the data have a high level of uncertainty. We devised a practical multimethod and multistate modeling approach based on Bayesian hierarchical site-occupancy models that combined multiple survey methods to estimate different population states for use in monitoring large predators at a regional scale. We used wolves (Canis lupus) as our model species and generated reliable estimates of the number of sites with wolf reproduction (presence of pups). We used 2 wolf data sets from Spain (Western Galicia in 2013 and Asturias in 2004) to test the approach. Based on howling surveys, the naïve estimation (i.e., estimate based only on observations) of the number of sites with reproduction was 9 and 25 sites in Western Galicia and Asturias, respectively. Our model showed 33.4 (SD 9.6) and 34.4 (3.9) sites with wolf reproduction, respectively. The estimated proportion of occupied sites with wolf reproduction was 0.67 (SD 0.19) and 0.76 (0.11), respectively. This approach can be used to design more cost-effective monitoring programs (i.e., to define the sampling effort needed per site). Our approach should inspire well-coordinated surveys across multiple administrative borders and populations and lead to improved decision making for management of large carnivores on a landscape level. The use of this Bayesian framework provides a simple way to visualize the degree of uncertainty around population-parameter estimates and thus provides managers and stakeholders an intuitive approach to interpreting monitoring results. Our approach can be widely applied to large spatial scales in wildlife monitoring where detection probabilities differ between population states and where several methods are being used to estimate different population…

  16. Setting conservation management thresholds using a novel participatory modeling approach.

    Science.gov (United States)

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. © 2015 The
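
    The aggregation step the record mentions is a plain weighted additive value model: each participant's consequence estimates are normalized per objective and combined as a weighted sum, giving one decision score per management alternative. A minimal sketch with invented weights and estimates:

```python
import numpy as np

# Rows: management alternatives; columns: objectives
# (environmental condition, cost, visitor access) -- values are invented
consequences = np.array([
    [0.9, 0.2, 0.3],   # close the reef area
    [0.7, 0.5, 0.7],   # restrict group sizes
    [0.4, 0.8, 0.9],   # signage only
    [0.1, 1.0, 1.0],   # do nothing
])
weights = np.array([0.5, 0.3, 0.2])   # elicited trade-offs (sum to 1)

scores = consequences @ weights       # weighted additive decision scores
print(scores, "best:", scores.argmax())
```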

  17. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    Science.gov (United States)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region is comprised of two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential…

  18. A generalized approach for historical mock-up acquisition and data modelling: Towards historically enriched 3D city models

    Science.gov (United States)

    Hervy, B.; Billen, R.; Laroche, F.; Carré, C.; Servières, M.; Van Ruymbeke, M.; Tourre, V.; Delfosse, V.; Kerouanton, J.-L.

    2012-10-01

    Museums are filled with hidden secrets. One of those secrets lies behind historical mock-ups, whose significance goes far beyond a simple representation of a city. We face the challenge of designing, storing and showing knowledge related to these mock-ups in order to explain their historical value. Over the last few years, several mock-up digitalisation projects have been realised. Two of them, Nantes 1900 and Virtual Leodium, propose innovative approaches that present a lot of similarities. This paper presents a framework to go one step further by analysing their data modelling processes and extracting what could be a generalized approach to build a numerical mock-up and its associated knowledge database. Geometry modelling and knowledge modelling influence each other and are conducted in a parallel process. Our generalized approach describes a global overview of what a data modelling process can be. Our next goal is obviously to apply this global approach to other historical mock-ups, but we are also thinking about applying it to other 3D objects that need to embed semantic data, thereby approaching historically enriched 3D city models.

  19. A review of function modeling: Approaches and applications

    OpenAIRE

    Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.

    2008-01-01

    This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research fields of artificial intelligence, design theory, and maintenance are discussed. In this discussion the goals are to highlight the features of various classical approaches in relation to FM, to delin...

  20. Polaron mobility obtained by a variational approach for lattice Fröhlich models

    Science.gov (United States)

    Kornjača, Milan; Vukmirović, Nenad

    2018-04-01

    Charge carrier mobility for a class of lattice models with long-range electron-phonon interaction was investigated. The approach for mobility calculation is based on a suitably chosen unitary transformation of the model Hamiltonian which transforms it into the form where the remaining interaction part can be treated as a perturbation. Relevant spectral functions were then obtained using Matsubara Green's functions technique and charge carrier mobility was evaluated using Kubo's linear response formula. Numerical results were presented for a wide range of electron-phonon interaction strengths and temperatures in the case of one-dimensional version of the model. The results indicate that the mobility decreases with increasing temperature for all electron-phonon interaction strengths in the investigated range, while longer interaction range leads to more mobile carriers.

  1. Extracting business vocabularies from business process models: SBVR and BPMN standards-based approach

    Science.gov (United States)

    Skersys, Tomas; Butleris, Rimantas; Kapocius, Kestutis

    2013-10-01

    Approaches for the analysis and specification of business vocabularies and rules are very relevant topics in both Business Process Management and Information Systems Development disciplines. However, in common practice of Information Systems Development, the Business modeling activities still are of mostly empiric nature. In this paper, basic aspects of the approach for business vocabularies' semi-automated extraction from business process models are presented. The approach is based on novel business modeling-level OMG standards "Business Process Model and Notation" (BPMN) and "Semantics for Business Vocabularies and Business Rules" (SBVR), thus contributing to OMG's vision about Model-Driven Architecture (MDA) and to model-driven development in general.

  2. A distributed delay approach for modeling delayed outcomes in pharmacokinetics and pharmacodynamics studies.

    Science.gov (United States)

    Hu, Shuhua; Dunlavey, Michael; Guzy, Serge; Teuscher, Nathan

    2018-04-01

    A distributed delay approach was proposed in this paper to model delayed outcomes in pharmacokinetics and pharmacodynamics studies. This approach was shown to be general enough to incorporate a wide array of pharmacokinetic and pharmacodynamic models as special cases, including transit compartment models, effect compartment models, typical absorption models (either zero-order or first-order absorption), and a number of atypical (or irregular) absorption models (e.g., parallel first-order, mixed first-order and zero-order, inverse Gaussian, and Weibull absorption models). Real-life examples were given to demonstrate how to implement distributed delays in Phoenix® NLME™ 8.0, and to numerically show the advantages of the distributed delay approach over traditional methods.
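
    One way to see the generality claimed here: a transit-compartment chain is exactly a distributed delay with an Erlang (gamma) kernel, so sweeping the number of compartments moves smoothly between sharp and diffuse delays. A small sketch of that equivalence with invented rates (not the Phoenix NLME implementation):

```python
import numpy as np
from scipy.integrate import solve_ivp

n, ktr = 5, 2.0          # number of transit compartments, transit rate (1/h)
kin, kout = 1.0, 0.1     # input into the chain, elimination from response

def rhs(t, y):
    a, resp = y[:n], y[n]
    da = np.empty(n)
    da[0] = kin - ktr * a[0]                 # signal enters the chain
    da[1:] = ktr * a[:-1] - ktr * a[1:]      # Erlang(n, ktr) delay kernel
    dresp = ktr * a[-1] - kout * resp        # delayed signal drives response
    return np.append(da, dresp)

sol = solve_ivp(rhs, (0, 48), np.zeros(n + 1), max_step=0.1)
print(sol.y[n, -1])   # delayed response; mean delay is n/ktr = 2.5 h
```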

  3. Identifying the optimal supply temperature in district heating networks - A modelling approach

    DEFF Research Database (Denmark)

    Mohammadi, Soma; Bojesen, Carsten

    2014-01-01

    The aim of this study is to develop a model for thermo-hydraulic calculation of low temperature DH systems. The modelling is performed with emphasis on transient heat transfer in pipe networks. A pseudo-dynamic approach is adopted to model the District Heating Network [DHN] behaviour, which estimates the temperature dynamically while the flow and pressure are calculated on the basis of steady state conditions. The implicit finite element method is applied to simulate the transient temperature behaviour in the network. Pipe network heat losses, pressure drops in the network and the return temperature to the plant are calculated in the developed model. The model will eventually serve as a basis to find the optimal supply temperature in an existing DHN in later work. The modelling results are used as decision support for the existing DHN, proposing possible modifications to operate at the optimal supply temperature.

  4. Optimal inverse magnetorheological damper modeling using shuffled frog-leaping algorithm–based adaptive neuro-fuzzy inference system approach

    Directory of Open Access Journals (Sweden)

    Xiufang Lin

    2016-08-01

    Full Text Available Magnetorheological (MR) dampers have become prominent semi-active control devices for vibration mitigation of structures subjected to severe loads. However, the damping force cannot be controlled directly because of the inherent nonlinear characteristics of MR dampers. Therefore, to fully exploit their capabilities, one challenging aspect is developing an accurate inverse model that can appropriately predict the input voltage required to produce a desired damping force. In this article, a hybrid modeling strategy combining the shuffled frog-leaping algorithm (SFLA) and an adaptive-network-based fuzzy inference system (ANFIS) is proposed to model the inverse dynamic characteristics of MR dampers with improved accuracy. The SFLA is employed to optimize the premise parameters of the ANFIS, while the consequent parameters are tuned by a least-squares estimation method; the combination is referred to here as the SFLA-ANFIS approach. To evaluate its effectiveness, the inverse modeling results of the SFLA-ANFIS approach are compared with those of the plain ANFIS and a genetic-algorithm-based ANFIS. An analysis-of-variance test is carried out to statistically compare the performance of the methods, and the results demonstrate that the SFLA-ANFIS strategy outperforms the other two in terms of training accuracy and checking accuracy.

  5. A novel approach for modelling complex maintenance systems using discrete event simulation

    International Nuclear Information System (INIS)

    Alrabghi, Abdullah; Tiwari, Ashutosh

    2016-01-01

    Existing approaches for modelling maintenance rely on oversimplified assumptions which prevent them from reflecting the complexity found in industrial systems. In this paper, we propose a novel approach that enables the modelling of non-identical multi-unit systems without restrictive assumptions on the number of units or their maintenance characteristics. Modelling complex interactions between maintenance strategies and their effects on assets in the system is achieved by accessing event queues in Discrete Event Simulation (DES). The approach utilises the wide success DES has achieved in manufacturing by allowing integration with models that are closely related to maintenance such as production and spare parts systems. Additional advantages of using DES include rapid modelling and visual interactive simulation. The proposed approach is demonstrated in a simulation based optimisation study of a published case. The current research is one of the first to optimise maintenance strategies simultaneously with their parameters while considering production dynamics and spare parts management. The findings of this research provide insights for non-conflicting objectives in maintenance systems. In addition, the proposed approach can be used to facilitate the simulation and optimisation of industrial maintenance systems. - Highlights: • This research is one of the first to optimise maintenance strategies simultaneously. • New insights for non-conflicting objectives in maintenance systems. • The approach can be used to optimise industrial maintenance systems.
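
    Discrete event simulation of the failure-and-repair dynamics described here is straightforward to prototype in, for example, the SimPy library (a generic Python DES package, not the authors' implementation); all times and capacities below are invented:

```python
import random
import simpy

random.seed(1)
repairs = 0

def machine(env, repair_crew):
    """One unit: run until failure, then queue for corrective maintenance."""
    global repairs
    while True:
        yield env.timeout(random.expovariate(1 / 100.0))  # time to failure
        with repair_crew.request() as req:                 # wait for the crew
            yield req
            yield env.timeout(random.uniform(4, 12))       # repair duration
            repairs += 1

env = simpy.Environment()
crew = simpy.Resource(env, capacity=1)   # shared maintenance resource
for _ in range(3):                       # three units competing for the crew
    env.process(machine(env, crew))
env.run(until=1000)
print("completed repairs:", repairs)
```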

  6. A variational approach to chiral quark models

    International Nuclear Information System (INIS)

    Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira.

    1987-01-01

    A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. It is indispensably related to the chiral symmetry breaking radius if the pion-quark interaction can be regarded as a perturbation. (author)

  7. Time series modeling by a regression approach based on a latent process.

    Science.gov (United States)

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows different polynomial regression models to be activated smoothly or abruptly. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation-Maximization (EM) algorithm. The M-step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
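
    The core mechanism is that the probability of each polynomial regime at time t comes from a logistic function of time, so the steepness of the logistic controls whether regime transitions are smooth or abrupt. A generative sketch of that mechanism with two regimes and invented coefficients (the EM/IRLS estimation itself is not reproduced):

```python
import numpy as np

t = np.linspace(0, 1, 200)

# Regime probabilities: logistic function of time; a large slope `w1`
# gives an abrupt switch, a small one a smooth transition.
w0, w1 = -10.0, 20.0
pi2 = 1.0 / (1.0 + np.exp(-(w0 + w1 * t)))   # P(regime 2 | t)

# Two polynomial regression regimes (illustrative coefficients)
mu1 = 1.0 + 2.0 * t                # regime 1: linear trend
mu2 = 3.0 - 4.0 * (t - 0.5) ** 2   # regime 2: quadratic trend

mean = (1 - pi2) * mu1 + pi2 * mu2
y = mean + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(y[:5])   # a series switching smoothly between the two regimes
```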

  8. A qualitative evaluation approach for energy system modelling frameworks

    DEFF Research Database (Denmark)

    Wiese, Frauke; Hilpert, Simon; Kaldemeyer, Cord

    2018-01-01

    properties define how useful it is in regard to the existing challenges. For energy system models, evaluation methods exist, but we argue that many decisions upon properties are rather made on the model generator or framework level. Thus, this paper presents a qualitative approach to evaluate frameworks...

  9. Development of a subway operation incident delay model using accelerated failure time approaches.

    Science.gov (United States)

    Weng, Jinxian; Zheng, Yang; Yan, Xuedong; Meng, Qiang

    2014-12-01

    This study aims to develop a subway operational incident delay model using the parametric accelerated failure time (AFT) approach. Six parametric AFT models, including log-logistic, lognormal and Weibull models with fixed and random parameters, are built based on Hong Kong subway operation incident data from 2005 to 2012. In addition, a Weibull model with gamma heterogeneity is also considered for comparison of model performance. The goodness-of-fit test results show that the log-logistic AFT model with random parameters is most suitable for estimating subway incident delay. The results show that a longer subway operation incident delay is highly correlated with the following factors: power cable failure, signal cable failure, turnout communication disruption and crashes involving a casualty. Vehicle failure has the least impact on the increase of subway operation incident delay. Based on these results, several possible measures, such as the use of short-distance wireless communication technology (e.g., Wifi and Zigbee), are suggested to shorten the delay caused by subway operation incidents. Finally, the temporal transferability test results show that the developed log-logistic AFT model with random parameters is stable over time. Copyright © 2014 Elsevier Ltd. All rights reserved.
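
    Fitting a fixed-effects log-logistic AFT model of this kind takes only a few lines in, for example, the Python lifelines package (used here purely as an illustration; the study's random-parameter formulation goes beyond this sketch, and the data and column names are invented):

```python
import pandas as pd
from lifelines import LogLogisticAFTFitter

# Hypothetical incident records: delay in minutes, covariates as 0/1 flags
df = pd.DataFrame({
    "delay":        [12, 45, 130, 8, 60, 240, 15, 95],
    "observed":     [1, 1, 1, 1, 1, 1, 1, 1],   # all delays fully observed
    "power_cable":  [0, 0, 1, 0, 0, 1, 0, 1],
    "signal_cable": [0, 1, 0, 0, 1, 0, 0, 0],
})

aft = LogLogisticAFTFitter()
aft.fit(df, duration_col="delay", event_col="observed")
aft.print_summary()   # acceleration factors per covariate
```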

  10. Systems and context modeling approach to requirements analysis

    Science.gov (United States)

    Ahuja, Amrit; Muralikrishna, G.; Patwari, Puneet; Subhrojyoti, C.; Swaminathan, N.; Vin, Harrick

    2014-08-01

    Ensuring completeness and correctness of the requirements for a complex system such as the SKA is challenging. Current system engineering practice includes developing a stakeholder needs definition, a concept of operations, and defining system requirements in terms of use cases and requirements statements. We present a method that enhances this current practice into a collection of system models with mutual consistency relationships. These include stakeholder goals, needs definition and system-of-interest models, together with a context model that participates in the consistency relationships among these models. We illustrate this approach by using it to analyze the SKA system requirements.

  11. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
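
    The workflow described maps naturally onto variance-based (Sobol) sensitivity analysis as implemented in, for example, the SALib package; the sketch below runs it on a stand-in scalar output, whereas the study's outputs are image-derived measures of sprouting (the parameter names and toy model are invented):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["adhesion", "chemotaxis", "elongation"],  # stand-in CPM params
    "bounds": [[0.0, 1.0]] * 3,
}

X = saltelli.sample(problem, 512)   # Saltelli sampling design

def model(x):                       # placeholder for the morphogenesis model
    return x[0] + 2.0 * x[1] * x[2]

Y = np.apply_along_axis(model, 1, X)
Si = sobol.analyze(problem, Y)
print(Si["S1"])   # first-order indices: single-parameter impact
print(Si["ST"])   # total indices: include parameter interactions
```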

  12. A realistic approach to modeling an in-duct desulfurization process based on an experimental pilot plant study

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz, F.J.G.; Ollero, P. [University of Seville, Seville (Spain)

    2008-07-15

    This paper provides a realistic approach to modeling an in-duct desulfurization process, motivated by the disagreement between the results predicted by published kinetic models of the reaction between hydrated lime and SO2 at low temperature and the experimental results obtained in pilot plants where this process takes place. Results were obtained from an experimental program carried out in a 3-MWe pilot plant. Additionally, five kinetic models from the literature for the sulfation of Ca(OH)2 at low temperatures were assessed by simulation; the desulfurization efficiencies they predict are clearly lower than those obtained experimentally in our own pilot plant as well as in others. Next, a general model was fitted by minimizing the difference between the calculated results and the experimental results from the pilot plant, using Matlab™. The number of parameters was reduced as much as possible, to only two. Finally, after implementing this model in a simulation tool for the in-duct sorbent injection process, it was validated and shown to yield a realistic approach useful both for analyzing results and for aiding in the design of an in-duct desulfurization process.
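
    The fitting step described, minimizing the mismatch between predicted and measured desulfurization efficiency over just two free parameters, can be sketched generically with scipy (the model form and data below are placeholders, not the paper's kinetic expression):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical pilot-plant data: Ca/S molar ratio vs. SO2 removal efficiency
ca_s = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
eff  = np.array([0.18, 0.33, 0.44, 0.52, 0.57])

def model(x, k, n):
    """Two-parameter stand-in: saturating efficiency curve."""
    return 1.0 - np.exp(-k * x**n)

(k_fit, n_fit), _ = curve_fit(model, ca_s, eff, p0=[0.3, 1.0])
residual = eff - model(ca_s, k_fit, n_fit)
print(k_fit, n_fit, np.sqrt(np.mean(residual**2)))  # fit and RMS error
```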

  13. Vector-model-supported approach in prostate plan optimization

    International Nuclear Information System (INIS)

    Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi

    2017-01-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration
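
    The retrieval step described amounts to nearest-neighbour search over case feature vectors extracted from the DICOM structures. A minimal cosine-similarity sketch (the features and values are invented; the study's actual vector model and weighting are not given in this record):

```python
import numpy as np

# Rows: previously treated cases; columns: structural/physiologic features
# (e.g. PTV volume, rectum overlap, bladder distance) -- invented values
reference = np.array([
    [88.0, 0.12, 6.5],
    [120.0, 0.25, 4.0],
    [95.0, 0.18, 5.2],
])
test_case = np.array([100.0, 0.20, 5.0])

# Standardize each feature, then rank references by cosine similarity
mu, sd = reference.mean(axis=0), reference.std(axis=0)
R = (reference - mu) / sd
q = (test_case - mu) / sd
sim = R @ q / (np.linalg.norm(R, axis=1) * np.linalg.norm(q))
best = int(np.argmax(sim))
print("reuse planning parameters from reference case", best)
```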

  14. Vector-model-supported approach in prostate plan optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Eva Sau Fan [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Wu, Vincent Wing Cheung [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Harris, Benjamin [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Lehman, Margot; Pryor, David [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); School of Medicine, University of Queensland (Australia); Chan, Lawrence Wing Chi, E-mail: wing.chi.chan@polyu.edu.hk [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong)

    2017-07-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration

  15. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
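
    For reference, the criterion at the heart of the book: with maximized log-likelihood $\log \mathcal{L}(\hat\theta)$, $K$ estimated parameters and sample size $n$,

$$\mathrm{AIC} = -2\log \mathcal{L}(\hat\theta) + 2K, \qquad \mathrm{AIC}_c = \mathrm{AIC} + \frac{2K(K+1)}{n-K-1},$$

    where the small-sample correction $\mathrm{AIC}_c$ is recommended when $n$ is small relative to $K$.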

  16. The construction of geological model using an iterative approach (Step 1 and Step 2)

    International Nuclear Information System (INIS)

    Matsuoka, Toshiyuki; Kumazaki, Naoki; Saegusa, Hiromitsu; Sasaki, Keiichi; Endo, Yoshinobu; Amano, Kenji

    2005-03-01

    One of the main goals of the Mizunami Underground Research Laboratory (MIU) Project is to establish appropriate methodologies for reliably investigating and assessing the deep subsurface. This report documents the results of geological modeling of Step 1 and Step 2 using the iterative investigation approach at the site-scale (several 100m to several km in area). For the Step 1 model, existing information (e.g. literature), and results from geological mapping and reflection seismic survey were used. For the Step 2 model, additional information obtained from the geological investigation using existing borehole and the shallow borehole investigation were incorporated. As a result of this study, geological elements that should be represented in the model were defined, and several major faults with trends of NNW, EW and NE trend were identified (or inferred) in the vicinity of the MIU-site. (author)

  17. Numerical results for near surface time domain electromagnetic exploration: a full waveform approach

    Science.gov (United States)

    Sun, H.; Li, K.; Li, X., Sr.; Liu, Y., Sr.; Wen, J., Sr.

    2015-12-01

    Time domain or transient electromagnetic (TEM) surveys, including airborne, semi-airborne and ground types, play important roles in applications such as geological surveys, groundwater/aquifer assessment [Meju et al., 2000; Cox et al., 2010], metal ore exploration [Yang and Oldenburg, 2012], prediction of water bearing structures in tunnels [Xue et al., 2007; Sun et al., 2012], UXO exploration [Pasion et al., 2007; Gasperikova et al., 2009], etc. The common practice is to introduce a current into a transmitting (Tx) loop and acquire the induced electromagnetic field after the current is cut off [Zhdanov and Keller, 1994]. The current waveforms differ depending on the instrument. The rectangle is the most widely used excitation current source, especially in ground TEM. Triangle and half-sine waves are commonly used in airborne and semi-airborne TEM investigations. In most instruments, only the off-time responses are acquired and used in later analysis and data inversion. Very few airborne instruments acquire the on-time and off-time responses together; although these systems acquire the on-time data, they usually do not use them in the interpretation. This abstract presents a novel full waveform time domain electromagnetic method and our recent modeling results. The benefit comes from our new algorithm for modeling full waveform time domain electromagnetic problems. We introduced the current density into Maxwell's equations as the transmitting source. This approach allows arbitrary waveforms, such as triangle, half-sine or trapezoidal waves, or scatter records from equipment, to be used in modeling. Here, we simulate the establishment and induced diffusion of the electromagnetic field in the earth. The traditional time domain electromagnetic response with pure secondary fields can also be extracted from our modeling results. The real-time responses excited by a loop source can be calculated using the algorithm. We analyze the full time-gate responses of a homogeneous half space and two…

  18. Bridging process-based and empirical approaches to modeling tree growth

    Science.gov (United States)

    Harry T. Valentine; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  19. Thin inclusion approach for modelling of heterogeneous conducting materials

    Science.gov (United States)

    Lavrov, Nikolay; Smirnova, Alevtina; Gorgun, Haluk; Sammes, Nigel

    Experimental data show that the heterogeneous nanostructure of solid oxide and polymer electrolyte fuel cells can be approximated as an infinite set of fiber-like or penny-shaped inclusions in a continuous medium. Inclusions can be arranged in clusters and in regular or random order. In the newly proposed theoretical model of nanostructured material, most attention is paid to the small aspect ratio of the structural elements as well as to some model problems of electrostatics. The proposed integral equation for the electric potential caused by the charge distributed over a single circular or elliptic cylindrical conductor of finite length, as a single unit of a nanostructured material, has been asymptotically simplified for small aspect ratio and solved numerically. The result demonstrates that the surface density changes slightly in the middle part of the thin domain and has boundary layers localized near the edges. It is anticipated that the contribution of the boundary-layer solution to the surface density is significant and cannot be governed by the classic equation for a smooth linear charge. The role of the cross-section shape is also investigated. The proposed approach is sufficiently simple and robust, and allows extension to either regular or irregular systems of various inclusions. This approach can be used for the development of the system of conducting inclusions, which are commonly present in nanostructured materials used for solid oxide and polymer electrolyte fuel cell (PEMFC) materials.

  20. A Graph-Based Approach for 3D Building Model Reconstruction from Airborne LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Bin Wu

    2017-01-01

    3D building model reconstruction is of great importance for environmental and urban applications. Airborne light detection and ranging (LiDAR) is a very useful data source for acquiring detailed geometric and topological information of building objects. In this study, we employed a graph-based method based on hierarchical structure analysis of building contours derived from LiDAR data to reconstruct urban building models. The proposed approach first uses a graph theory-based localized contour tree method to represent the topological structure of buildings, then separates the buildings into different parts by analyzing their topological relationships, and finally reconstructs the building model by integrating all the individual models established through the bipartite graph matching process. Our approach provides a more complete topological and geometrical description of building contours than existing approaches. We evaluated the proposed method by applying it to the Lujiazui region in Shanghai, China, a complex and large urban scene with various types of buildings. The results revealed that complex buildings could be reconstructed successfully with a mean modeling error of 0.32 m. Our proposed method offers a promising solution for 3D building model reconstruction from airborne LiDAR point clouds.

  1. Initial assessment of a multi-model approach to spring flood forecasting in Sweden

    Science.gov (United States)

    Olsson, J.; Uvo, C. B.; Foster, K.; Yang, W.

    2015-06-01

    Hydropower is a major energy source in Sweden, and proper reservoir management prior to the spring flood onset is crucial for optimal production. This requires useful forecasts of the accumulated discharge in the spring flood period (i.e. the spring-flood volume, SFV). Today's SFV forecasts are generated using a model-based climatological ensemble approach, where time series of precipitation and temperature from historical years are used to force a calibrated and initialised set-up of the HBV model. In this study, a number of new approaches to spring flood forecasting, reflecting the latest developments in analysis and modelling on seasonal time scales, are presented and evaluated. Three main approaches, represented by specific methods, are evaluated in SFV hindcasts for three main Swedish rivers over a 10-year period with lead times between 0 and 4 months. In the first approach, years that are historically analogous with respect to the climate in the period preceding the spring flood are identified and used to compose a reduced ensemble. In the second, seasonal meteorological ensemble forecasts are used to drive the HBV model over the spring flood period. In the third approach, statistical relationships between the SFV and the large-scale atmospheric circulation are used to build forecast models. None of the new approaches consistently outperforms the climatological ensemble approach, but for specific locations and lead times improvements of 20-30 % are found. When combining all forecasts in a weighted multi-model approach, a mean improvement over all locations and lead times of nearly 10 % was indicated. This demonstrates the potential of the approach, and further development and optimisation towards an operational system is ongoing.
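
    As a rough illustration of such a weighted multi-model combination (a sketch under assumed inputs and an assumed inverse-error weighting, not the study's actual scheme):

```python
import numpy as np

def combine_forecasts(forecasts, hindcast_errors):
    """Weighted multi-model mean of spring-flood-volume (SFV) forecasts.

    forecasts       : (n_models,) current SFV forecast from each approach
    hindcast_errors : (n_models,) historical mean absolute error of each approach
    """
    weights = 1.0 / np.asarray(hindcast_errors, dtype=float)  # more skill -> larger weight
    weights /= weights.sum()
    return float(np.dot(weights, forecasts))

# Hypothetical SFV forecasts (Mm3) from the climatological ensemble, analogue
# years, seasonal meteorological forecasts and the statistical model, respectively.
sfv = combine_forecasts([420.0, 450.0, 390.0, 430.0], [55.0, 60.0, 70.0, 65.0])
```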

  2. Model-independent approach for dark matter phenomenology

    Indian Academy of Sciences (India)

    We have studied the phenomenology of dark matter at the ILC and cosmic positron experiments based on model-independent approach. We have found a strong correlation between dark matter signatures at the ILC and those in the indirect detection experiments of dark matter. Once the dark matter is discovered in the ...

  4. Computation of External Quality Factors for RF Structures by Means of Model Order Reduction and a Perturbation Approach

    CERN Document Server

    Flisgen, Thomas; van Rienen, Ursula

    2016-01-01

    External quality factors are significant quantities to describe losses via waveguide ports in radio frequency resonators. The current contribution presents a novel approach to determine external quality factors by means of a two-step procedure: First, a state-space model for the lossless radio frequency structure is generated and its model order is reduced. Subsequently, a perturbation method is applied on the reduced model so that external losses are accounted for. The advantage of this approach results from the fact that the challenges in dealing with lossy systems are shifted to the reduced order model. This significantly saves computational costs. The present paper provides a short overview on existing methods to compute external quality factors. Then, the novel approach is introduced and validated in terms of accuracy and computational time by means of commercial software.

  5. State-Space Modeling and Performance Analysis of Variable-Speed Wind Turbine Based on a Model Predictive Control Approach

    Directory of Open Access Journals (Sweden)

    H. Bassi

    2017-04-01

    Advancements in wind energy technologies have led wind turbines from fixed-speed to variable-speed operation. This paper introduces an innovative version of a variable-speed wind turbine based on a model predictive control (MPC) approach. The proposed approach provides maximum power point tracking (MPPT), whose main objective is to capture the maximum wind energy in spite of the variable nature of the wind speed. The proposed MPC approach also reduces the constraints of the two main functional parts of the wind turbine: the full-load and partial-load segments. The pitch angle for full load and the rotating force for partial load have been fixed concurrently in order to balance power generation as well as to reduce the operations of the pitch angle. A mathematical analysis of the proposed system using the state-space approach is introduced. The simulation results using MATLAB/SIMULINK show that the performance of the wind turbine with the MPC approach is improved compared to the traditional PID controller at both low and high wind speeds.

  6. A new modelling approach for zooplankton behaviour

    Science.gov (United States)

    Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.

    We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to that of Beer [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models (random walk and correlated walk models) as well as with behaviour observed in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to those of live copepods.
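
    The correlated-walk baseline mentioned above is straightforward to reproduce; the following minimal sketch (illustrative parameters, not the authors' implementation) simulates a 2-D correlated random walk, which approaches an uncorrelated random walk as the turning-angle spread grows:

```python
import numpy as np

def correlated_random_walk(n_steps, step_len=1.0, turn_sd=0.3, seed=0):
    """2D correlated random walk: each heading equals the previous heading plus
    a small Gaussian turning angle (turn_sd -> 0 gives a nearly straight track;
    large turn_sd approaches an uncorrelated random walk)."""
    rng = np.random.default_rng(seed)
    headings = np.cumsum(rng.normal(0.0, turn_sd, n_steps))
    steps = step_len * np.column_stack((np.cos(headings), np.sin(headings)))
    return np.vstack(([0.0, 0.0], np.cumsum(steps, axis=0)))  # (n_steps+1, 2) track

track = correlated_random_walk(1000)
```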

  7. NLP model and stochastic multi-start optimization approach for heat exchanger networks

    International Nuclear Information System (INIS)

    Núñez-Serna, Rosa I.; Zamora, Juan M.

    2016-01-01

    Highlights: • An NLP model for the optimal design of heat exchanger networks is proposed. • The NLP model is developed from a stage-wise grid diagram representation. • A two-phase stochastic multi-start optimization methodology is utilized. • Improved network designs are obtained with different heat load distributions. • Structural changes and reductions in the number of heat exchangers are produced. - Abstract: Heat exchanger network synthesis methodologies frequently identify good network structures, which nevertheless, might be accompanied by suboptimal values of design variables. The objective of this work is to develop a nonlinear programming (NLP) model and an optimization approach that aim at identifying the best values for intermediate temperatures, sub-stream flow rate fractions, heat loads and areas for a given heat exchanger network topology. The NLP model that minimizes the total annual cost of the network is constructed based on a stage-wise grid diagram representation. To improve the possibilities of obtaining global optimal designs, a two-phase stochastic multi-start optimization algorithm is utilized for the solution of the developed model. The effectiveness of the proposed optimization approach is illustrated with the optimization of two network designs proposed in the literature for two well-known benchmark problems. Results show that from the addressed base network topologies it is possible to achieve improved network designs, with redistributions in exchanger heat loads that lead to reductions in total annual costs. The results also show that the optimization of a given network design sometimes leads to structural simplifications and reductions in the total number of heat exchangers of the network, thereby exposing alternative viable network topologies initially not anticipated.
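
    The two-phase multi-start idea generalises readily; the sketch below uses a toy stand-in for the total-annual-cost function, since the paper's NLP has far more variables and constraints:

```python
import numpy as np
from scipy.optimize import minimize

def multi_start_minimize(cost, bounds, n_starts=50, seed=0):
    """Phase 1: sample random starting points inside the bounds.
    Phase 2: run a local NLP solve from each start; keep the best local optimum."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(cost, x0, bounds=bounds)  # local gradient-based solve
        if res.success and (best is None or res.fun < best.fun):
            best = res
    return best

# Toy, multimodal stand-in for a total-annual-cost function of two design variables.
tac = lambda x: (x[0] - 3.0) ** 2 + 10.0 * np.sin(x[0] * x[1]) + (x[1] - 2.0) ** 2
print(multi_start_minimize(tac, bounds=[(0.0, 10.0), (0.0, 10.0)]).x)
```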

  8. HEDR modeling approach: Revision 1

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1994-05-01

    This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies

  9. A new model in teaching undergraduate research: A collaborative approach and learning cooperatives.

    Science.gov (United States)

    O'Neal, Pamela V; McClellan, Lynx Carlton; Jarosinski, Judith M

    2016-05-01

    Forming new, innovative collaborative approaches and cooperative learning methods between universities and hospitals maximize learning for undergraduate nursing students in a research course and provide professional development for nurses on the unit. The purpose of this Collaborative Approach and Learning Cooperatives (CALC) Model is to foster working relations between faculty and hospital administrators, maximize small group learning of undergraduate nursing students, and promote onsite knowledge of evidence based care for unit nurses. A quality improvement study using the CALC Model was implemented in an undergraduate nursing research course at a southern university. Hospital administrators provided a list of clinical concerns based on national performance outcome measures. Undergraduate junior nursing student teams chose a clinical question, gathered evidence from the literature, synthesized results, demonstrated practice application, and developed practice recommendations. The student teams developed posters, which were evaluated by hospital administrators. The administrators selected several posters to display on hospital units for continuing education opportunity. This CALC Model is a systematic, calculated approach and an economically feasible plan to maximize personnel and financial resources to optimize collaboration and cooperative learning. Universities and hospital administrators, nurses, and students benefit from working together and learning from each other. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Crime Modeling using Spatial Regression Approach

    Science.gov (United States)

    Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.

    2018-01-01

    Acts of criminality in Indonesia increase in both variety and quantity every year: murder, rape, assault, vandalism, theft, fraud, fencing and other cases make people feel unsafe. The risk of society being exposed to crime is measured here by the number of cases reported to the police; the higher the number of reports, the higher the crime in a region. In this research, criminality in South Sulawesi, Indonesia is modelled with society's exposure to the risk of crime as the dependent variable. Modelling is done with an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables used are population density, the number of poor inhabitants, GDP per capita, unemployment and the human development index (HDI). The spatial regression analysis shows that there are no spatial dependencies, in either the lag or the error terms, in South Sulawesi.

  11. Making the most of what we have: application of extrapolation approaches in wildlife transfer models

    Energy Technology Data Exchange (ETDEWEB)

    Beresford, Nicholas A.; Barnett, Catherine L.; Wells, Claire [NERC Centre for Ecology and Hydrology, Lancaster Environment Center, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Wood, Michael D. [School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Vives i Batlle, Jordi [Belgian Nuclear Research Centre, Boeretang 200, 2400 Mol (Belgium); Brown, Justin E.; Hosseini, Ali [Norwegian Radiation Protection Authority, P.O. Box 55, N-1332 Oesteraas (Norway); Yankovich, Tamara L. [International Atomic Energy Agency, Vienna International Centre, 1400, Vienna (Austria); Bradshaw, Clare [Department of Ecology, Environment and Plant Sciences, Stockholm University, SE-10691 (Sweden); Willey, Neil [Centre for Research in Biosciences, University of the West of England, Coldharbour Lane, Frenchay, Bristol BS16 1QY (United Kingdom)

    2014-07-01

    Radiological environmental protection models need to predict the transfer of many radionuclides to a large number of organisms. There has been considerable development of transfer (predominantly concentration ratio) databases over the last decade. However, in reality it is unlikely we will ever have empirical data for all the species-radionuclide combinations which may need to be included in assessments. To provide default values for a number of existing models/frameworks, various extrapolation approaches have been suggested (e.g. using data for a similar organism or element). This paper presents recent developments in two such extrapolation approaches, namely phylogeny and allometry. An evaluation of how extrapolation approaches have performed and the potential application of Bayesian statistics to make best use of available data will also be given. Using a Residual Maximum Likelihood (REML) mixed-model regression, we initially analysed a dataset comprising 597 entries for 53 freshwater fish species from 67 sites to investigate whether phylogenetic variation in transfer could be identified. The REML analysis generated an estimated mean value for each species on a common scale after taking account of the effect of the inter-site variation. Using an independent dataset, we tested the hypothesis that the REML model outputs could be used to predict radionuclide activity concentrations in other species from the results of a species which had been sampled at a specific site. The outputs of the REML analysis accurately predicted ¹³⁷Cs activity concentrations in different species of fish from 27 lakes. Although initially investigated as an extrapolation approach, the output of this work is a potential alternative to the highly site-dependent concentration ratio model. We are currently applying this approach to a wider range of organism types and different ecosystems. An initial analysis of these results will be presented. The application of allometric, or mass
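
    For readers unfamiliar with REML mixed-model regression, the species/site analysis described above can be sketched with statsmodels; the data file and column names are hypothetical placeholders:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per measurement, with (hypothetical) columns:
#   log_cr  : log concentration ratio of a radionuclide in a fish sample
#   species : fish species identifier (fixed effect)
#   site    : sampling site identifier (random intercept)
df = pd.read_csv("fish_cs137_cr.csv")  # placeholder file name

# REML mixed model: species means estimated on a common scale after
# removing the inter-site variance component.
model = smf.mixedlm("log_cr ~ 0 + species", df, groups=df["site"])
fit = model.fit(reml=True)
species_means = fit.fe_params  # REML-adjusted mean per species
```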

  12. Development of flexible process-centric web applications: An integrated model driven approach

    NARCIS (Netherlands)

    Bernardi, M.L.; Cimitile, M.; Di Lucca, G.A.; Maggi, F.M.

    2012-01-01

    In recent years, Model Driven Engineering (MDE) approaches have been proposed and used to develop and evolve web applications (WAs). However, the definition of appropriate MDE approaches for the development of flexible process-centric WAs is still limited. In particular, (flexible) workflow models have never been

  13. A Modelling Approach for Improved Implementation of Information Technology in Manufacturing Systems

    DEFF Research Database (Denmark)

    Langer, Gilad; Larsen, Michael Holm; Kirkby, Lars Phillip

    1997-01-01

    The paper presents a modelling approach which is based on the multiple-view perspective of Soft Systems Methodology and an encapsulation of these perspectives into an object-oriented model. The approach provides a structured procedure for putting theoretical abstractions of a new production conc...

  14. Modelling an industrial anaerobic granular reactor using a multi-scale approach

    DEFF Research Database (Denmark)

    Feldman, Hannah; Flores Alsina, Xavier; Ramin, Pedram

    2017-01-01

    The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1) extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark ... simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at: 1) different times; and, 2) reactor heights. Finally...

  15. Mechatronics by bond graphs an object-oriented approach to modelling and simulation

    CERN Document Server

    Damić, Vjekoslav

    2015-01-01

    This book presents a computer-aided approach to the design of mechatronic systems. Its subject is integrated modeling and simulation in a visual computer environment. Since the first edition, the simulation software has changed enormously, becoming more user-friendly and easier to use; a second edition therefore became necessary to take these improvements into account. The modeling is based on top-down and bottom-up system approaches. The mathematical models are generated in the form of differential-algebraic equations and solved using numerical and symbolic algebra methods. The integrated approach developed is applied to mechanical, electrical and control systems, multibody dynamics, and continuous systems.

  16. Modeling flow in fractured medium. Uncertainty analysis with stochastic continuum approach

    International Nuclear Information System (INIS)

    Niemi, A.

    1994-01-01

    For modeling groundwater flow in formation-scale fractured media, no general method exists for scaling the highly heterogeneous hydraulic conductivity data to model parameters. The deterministic approach is limited in representing the heterogeneity of a medium and the application of fracture network models has both conceptual and practical limitations as far as site-scale studies are concerned. The study investigates the applicability of stochastic continuum modeling at the scale of data support. No scaling of the field data is involved, and the original variability is preserved throughout the modeling. Contributions of various aspects to the total uncertainty in the modeling prediction can also be determined with this approach. Data from five crystalline rock sites in Finland are analyzed. (107 refs., 63 figs., 7 tabs.)

  17. A Simple Approach to Account for Climate Model Interdependence in Multi-Model Ensembles

    Science.gov (United States)

    Herger, N.; Abramowitz, G.; Angelil, O. M.; Knutti, R.; Sanderson, B.

    2016-12-01

    Multi-model ensembles are an indispensable tool for future climate projection and its uncertainty quantification. Ensembles containing multiple climate models generally have increased skill, consistency and reliability. Due to the lack of agreed-on alternatives, most scientists use the equally-weighted multi-model mean, as they subscribe to model democracy ("one model, one vote"). Different research groups are known to share sections of code, parameterizations in their model, literature, or even whole model components. Therefore, individual model runs do not represent truly independent estimates. Ignoring this dependence structure might lead to a false model consensus and to incorrect estimates of uncertainty and of the effective number of independent models. Here, we present a way to partially address this problem by selecting a subset of CMIP5 model runs so that its climatological mean minimizes the RMSE compared to a given observation product. Due to the cancelling out of errors, regional biases in the ensemble mean are reduced significantly. Using a model-as-truth experiment we demonstrate that those regional biases persist into the future and that we are not fitting noise, thus providing improved observationally-constrained projections of the 21st century. The optimally selected ensemble shows significantly higher global mean surface temperature projections than the original ensemble, where all the model runs are considered. Moreover, the spread is decreased well beyond that expected from the decreased ensemble size. Several previous studies have recommended an ensemble selection approach based on performance ranking of the model runs. Here, we show that this approach can perform even worse than randomly selecting ensemble members and can thus be harmful. We suggest that accounting for interdependence in the ensemble selection process is a necessary step for robust projections for use in impact assessments, adaptation and mitigation of climate change.
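
    A minimal sketch of this kind of subset selection (a greedy forward search minimising the RMSE of the subset mean; the abstract does not specify this exact algorithm):

```python
import numpy as np

def select_subset(model_fields, obs_field, max_size=10):
    """Greedily grow a subset of ensemble members so that the RMSE of the
    subset's climatological mean against observations is minimised.

    model_fields : (n_models, n_gridpoints) model climatologies
    obs_field    : (n_gridpoints,) observational product"""
    rmse = lambda mean_field: np.sqrt(np.mean((mean_field - obs_field) ** 2))
    chosen, best_score = [], np.inf
    for _ in range(max_size):
        cand = [k for k in range(len(model_fields)) if k not in chosen]
        if not cand:
            break
        scores = [rmse(model_fields[chosen + [k]].mean(axis=0)) for k in cand]
        if min(scores) >= best_score:
            break  # adding another member no longer cancels errors
        chosen.append(cand[int(np.argmin(scores))])
        best_score = min(scores)
    return chosen
```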

  18. A fuzzy-logic-based approach to qualitative safety modelling for marine systems

    International Nuclear Information System (INIS)

    Sii, H.S.; Ruxton, Tom; Wang Jin

    2001-01-01

    Safety assessment based on conventional tools (e.g. probabilistic risk assessment (PRA)) may not be well suited for dealing with systems having a high level of uncertainty, particularly in the feasibility and concept design stages of a maritime or offshore system. By contrast, a safety model using a fuzzy logic approach employing fuzzy IF-THEN rules can model the qualitative aspects of human knowledge and reasoning processes without employing precise quantitative analyses. A fuzzy-logic-based approach may be more appropriately used to carry out risk analysis in the initial design stages. This provides a tool for working directly with the linguistic terms commonly used in carrying out safety assessment. This research focuses on the development and representation of linguistic variables to model risk levels subjectively. These variables are then quantified using fuzzy sets. In this paper, the development of a safety model using the fuzzy logic approach for modelling various design variables for maritime and offshore safety-based decision making in the concept design stage is presented. An example is used to illustrate the proposed approach.
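
    To give a flavour of how fuzzy IF-THEN rules turn linguistic risk terms into numbers, here is a deliberately tiny sketch with made-up membership functions and a two-rule base (not the paper's rule set):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def risk_level(failure_likelihood, consequence_severity):
    """Two illustrative rules, using min for AND and a weighted-average defuzzifier:
       R1: IF likelihood is high AND severity is high THEN risk is high (output 1)
       R2: IF likelihood is low  AND severity is low  THEN risk is low  (output 0)"""
    lik_hi = tri(failure_likelihood, 0.5, 1.0, 1.5)
    lik_lo = tri(failure_likelihood, -0.5, 0.0, 0.5)
    sev_hi = tri(consequence_severity, 0.5, 1.0, 1.5)
    sev_lo = tri(consequence_severity, -0.5, 0.0, 0.5)
    fire_hi = min(lik_hi, sev_hi)   # firing strength of R1
    fire_lo = min(lik_lo, sev_lo)   # firing strength of R2
    return (1.0 * fire_hi + 0.0 * fire_lo) / max(fire_hi + fire_lo, 1e-9)

print(risk_level(0.8, 0.7))  # linguistic inputs scaled onto [0, 1]
```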

  19. Tornadoes and related damage costs: statistical modeling with a semi-Markov approach

    OpenAIRE

    Corini, Chiara; D'Amico, Guglielmo; Petroni, Filippo; Prattico, Flavio; Manca, Raimondo

    2015-01-01

    We propose a statistical approach to tornadoes modeling for predicting and simulating occurrences of tornadoes and accumulated cost distributions over a time interval. This is achieved by modeling the tornadoes intensity, measured with the Fujita scale, as a stochastic process. Since the Fujita scale divides tornadoes intensity into six states, it is possible to model the tornadoes intensity by using Markov and semi-Markov models. We demonstrate that the semi-Markov approach is able to reprod...

  20. Assessing the polycyclic aromatic hydrocarbon (PAH) pollution of urban stormwater runoff: a dynamic modeling approach.

    Science.gov (United States)

    Zheng, Yi; Lin, Zhongrong; Li, Hao; Ge, Yan; Zhang, Wei; Ye, Youbin; Wang, Xuejun

    2014-05-15

    Urban stormwater runoff delivers a significant amount of polycyclic aromatic hydrocarbons (PAHs), mostly of atmospheric origin, to receiving water bodies. The PAH pollution of urban stormwater runoff poses serious risk to aquatic life and human health, but has been overlooked by environmental modeling and management. This study proposed a dynamic modeling approach for assessing the PAH pollution and its associated environmental risk. A variable time-step model was developed to simulate the continuous cycles of pollutant buildup and washoff. To reflect the complex interaction among different environmental media (i.e. atmosphere, dust and stormwater), the dependence of the pollution level on antecedent weather conditions was investigated and embodied in the model. Long-term simulations of the model can be efficiently performed, and probabilistic features of the pollution level and its risk can be easily determined. The applicability of this approach and its value to environmental management was demonstrated by a case study in Beijing, China. The results showed that Beijing's PAH pollution of road runoff is relatively severe, and its associated risk exhibits notable seasonal variation. The current sweeping practice is effective in mitigating the pollution, but the effectiveness is both weather-dependent and compound-dependent. The proposed modeling approach can help identify critical timing and major pollutants for monitoring, assessing and controlling efforts to be focused on. The approach is extendable to other urban areas, as well as to other contaminants with similar fate and transport as PAHs. Copyright © 2014 Elsevier B.V. All rights reserved.
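
    The buildup/washoff cycle at the core of such a model is commonly formulated as asymptotic exponential buildup during dry weather and first-order washoff during rainfall. The sketch below uses that common textbook formulation with illustrative coefficients, not the paper's calibrated equations:

```python
import numpy as np

def simulate_pah(rain_mm_h, dt_h, b0=0.0, b_max=100.0, k_b=0.05, k_w=0.2):
    """Variable time-step buildup/washoff cycle.
       dry step : B -> B + (b_max - B) * (1 - exp(-k_b * dt))    (asymptotic buildup)
       wet step : load = B * (1 - exp(-k_w * rain * dt))         (first-order washoff)
    Returns the washoff load per step (e.g. ug per m2 of road surface)."""
    B, loads = b0, []
    for rain, dt in zip(rain_mm_h, dt_h):
        if rain <= 0.0:
            B += (b_max - B) * (1.0 - np.exp(-k_b * dt))
            loads.append(0.0)
        else:
            w = B * (1.0 - np.exp(-k_w * rain * dt))
            B -= w
            loads.append(w)
    return np.array(loads)
```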

  1. Numerical modelling of diesel spray using the Eulerian multiphase approach

    International Nuclear Information System (INIS)

    Vujanović, Milan; Petranović, Zvonimir; Edelbauer, Wilfried; Baleta, Jakov; Duić, Neven

    2015-01-01

    Highlights: • A numerical model for fuel disintegration was presented. • Fuel liquid and vapour phases were calculated. • Good agreement with experimental data was shown for various combinations of injection and chamber pressure. - Abstract: This research investigates high pressure diesel fuel injection into the combustion chamber by performing computational simulations using the Euler–Eulerian multiphase approach. Six diesel-like conditions were simulated, for which the liquid fuel jet was injected into a pressurised inert environment (100% N₂) through a 205 μm nozzle hole. The analysis was focused on the liquid jet and vapour penetration, describing spatial and temporal spray evolution. For this purpose, an Eulerian multiphase model was implemented, variations of the sub-model coefficients were performed, and their impact on the spray formation was investigated. The final set of sub-model coefficients was applied to all operating points. Several simulations of high pressure diesel injections (50, 80, and 120 MPa) combined with different chamber pressures (5.4 and 7.2 MPa) were carried out and the results were compared to the experimental data. The predicted results share a similar spray cloud shape for all conditions, with different vapour and liquid penetration lengths. The liquid penetration is shortened with the increase in chamber pressure, whilst the vapour penetration is more pronounced at elevated injection pressure. Finally, the results showed good agreement when compared to the measured data, and yielded the correct trends for both the liquid and vapour penetrations under different operating conditions.

  2. Profile of Students’ Mental Model Change on Law Concepts Archimedes as Impact of Multi-Representation Approach

    Science.gov (United States)

    Taher, M.; Hamidah, I.; Suwarma, I. R.

    2017-09-01

    This paper outlines the results of an experimental study on the effects of a multi-representation approach to learning Archimedes' Law on the improvement of students' mental models. The multi-representation techniques implemented in the study were verbal, pictorial, mathematical, and graphical representations. Students' mental models were classified into three levels, i.e. scientific, synthetic, and initial, based on the students' level of understanding. The study employed a pre-experimental methodology, using a one-group pretest-posttest design. The subjects of the study were 32 eleventh-grade students in a public senior high school in Riau Province. The research instrument included a mental model test on the hydrostatic pressure concept, in the form of an essay test judged by experts. The findings showed a positive change in the students' mental models, indicating that the multi-representation approach was effective in improving them.

  3. Modeling of phase equilibria with CPA using the homomorph approach

    DEFF Research Database (Denmark)

    Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios

    2011-01-01

    For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are ...

  4. An information-theoretic approach to the modeling and analysis of whole-genome bisulfite sequencing data.

    Science.gov (United States)

    Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John

    2018-03-07

    DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis is employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied on single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrate a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of
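
    The information-theoretic quantities named above are readily computed with scipy; the methylation-level distributions below are made-up illustrations rather than data from the study:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import entropy

# Probability distributions of the methylation level within a genomic region,
# e.g. P(k methylated CpGs out of 4) estimated from WGBS reads (hypothetical values).
p_test = np.array([0.05, 0.10, 0.20, 0.30, 0.35])
p_ref  = np.array([0.40, 0.30, 0.15, 0.10, 0.05])

h_test = entropy(p_test, base=2)            # Shannon entropy: methylation stochasticity
jsd = jensenshannon(p_test, p_ref, base=2)  # Jensen-Shannon distance between samples
print(h_test, jsd)
```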

  5. An information theory-based approach to modeling the information processing of NPP operators

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task. The focus is on i) developing a model for the information processing of NPP operators and ii) quantifying the model. To resolve the problems of previous approaches based on information theory, i.e. the problems of single-channel approaches, we first develop an information processing model having multiple stages, which contains information flows. Then the uncertainty of the information is quantified using Conant's model, a kind of information theory

  6. Approaches to modeling landscape-scale drought-induced forest mortality

    Science.gov (United States)

    Gustafson, Eric J.; Shinneman, Douglas

    2015-01-01

    Drought stress is an important cause of tree mortality in forests, and drought-induced disturbance events are projected to become more common in the future due to climate change. Landscape Disturbance and Succession Models (LDSM) are becoming widely used to project climate change impacts on forests, including potential interactions with natural and anthropogenic disturbances, and to explore the efficacy of alternative management actions to mitigate negative consequences of global changes on forests and ecosystem services. Recent studies incorporating drought-mortality effects into LDSMs have projected significant potential changes in forest composition and carbon storage, largely due to differential impacts of drought on tree species and interactions with other disturbance agents. In this chapter, we review how drought affects forest ecosystems and the different ways drought effects have been modeled (both spatially and aspatially) in the past. Building on those efforts, we describe several approaches to modeling drought effects in LDSMs, discuss advantages and shortcomings of each, and include two case studies for illustration. The first approach features the use of empirically derived relationships between measures of drought and the loss of tree biomass to drought-induced mortality. The second uses deterministic rules of species mortality for given drought events to project changes in species composition and forest distribution. A third approach is more mechanistic, simulating growth reductions and death caused by water stress. Because modeling of drought effects in LDSMs is still in its infancy, and because drought is expected to play an increasingly important role in forest health, further development of modeling drought-forest dynamics is urgently needed.

  7. Exploring a Multiresolution Modeling Approach within the Shallow-Water Equations

    Energy Technology Data Exchange (ETDEWEB)

    Ringler, Todd D.; Jacobsen, Doug; Gunzburger, Max; Ju, Lili; Duda, Michael; Skamarock, William

    2011-11-01

    The ability to solve the global shallow-water equations with a conforming, variable-resolution mesh is evaluated using standard shallow-water test cases. While the long-term motivation for this study is the creation of a global climate modeling framework capable of resolving different spatial and temporal scales in different regions, the process begins with an analysis of the shallow-water system in order to better understand the strengths and weaknesses of the approach developed herein. The multiresolution meshes are spherical centroidal Voronoi tessellations where a single, user-supplied density function determines the region(s) of fine- and coarse-mesh resolution. The shallow-water system is explored with a suite of meshes ranging from quasi-uniform resolution meshes, where the grid spacing is globally uniform, to highly variable resolution meshes, where the grid spacing varies by a factor of 16 between the fine and coarse regions. The potential vorticity is found to be conserved to within machine precision and the total available energy is conserved to within a time-truncation error. This result holds for the full suite of meshes, ranging from quasi-uniform resolution to highly variable resolution meshes. Based on shallow-water test cases 2 and 5, the primary conclusion of this study is that solution error is controlled primarily by the grid resolution in the coarsest part of the model domain. This conclusion is consistent with results obtained by others. When these variable-resolution meshes are used for the simulation of an unstable zonal jet, the core features of the growing instability are found to be largely unchanged as the variation in the mesh resolution increases. The main differences between the simulations occur outside the region of mesh refinement, and these differences are attributed to the additional truncation error that accompanies increases in grid spacing. Overall, the results demonstrate support for this approach as a path toward

  8. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    Science.gov (United States)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
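
    A minimal sketch of the residual-monitoring step (the window length and sigma threshold are illustrative assumptions; in the actual architecture the predictions come from the piecewise linear engine model):

```python
import numpy as np

def detect_anomalies(sensed, predicted, window=20, n_sigma=3.0):
    """Flag samples whose model residual leaves a +/- n_sigma band derived
    from a trailing window of recent residuals."""
    residuals = np.asarray(sensed, dtype=float) - np.asarray(predicted, dtype=float)
    flags = np.zeros(len(residuals), dtype=bool)
    for i in range(window, len(residuals)):
        ref = residuals[i - window:i]                   # recent nominal behaviour
        flags[i] = abs(residuals[i] - ref.mean()) > n_sigma * ref.std()
    return flags
```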

  9. The Matrix model, a driven state variables approach to non-equilibrium thermodynamics

    NARCIS (Netherlands)

    Jongschaap, R.J.J.

    2001-01-01

    One of the new approaches in non-equilibrium thermodynamics is the so-called matrix model of Jongschaap. In this paper some features of this model are discussed. We indicate the differences with the more common approach based upon internal variables and the more sophisticated Hamiltonian and GENERIC

  10. Numerical modelling in building thermo-aeraulics: from CFD modelling to an hybrid finite volume / zonal approach; Modelisation numerique de la thermoaeraulique du batiment: des modeles CFD a une approche hybride volumes finis / zonale

    Energy Technology Data Exchange (ETDEWEB)

    Bellivier, A.

    2004-05-15

    For 3D modelling of building thermo-aeraulics using field codes, it is necessary to reduce the computing time in order to model increasingly large volumes. The solution suggested in this study is to couple two modelling approaches: a zonal approach and a CFD approach. The first part of the work carried out is the setting up of a simplified CFD model. We propose rules for the use of coarse grids, a constant effective viscosity law and adapted heat exchange coefficients in the framework of building thermo-aeraulics. The second part of this work concerns the creation of fluid Macro-Elements and their coupling with a finite-volume CFD calculation. Depending on the boundary conditions of the problem, a local description of the driving flow is proposed via the installation and use of semi-empirical evolution laws. The Macro-Element is then inserted into the CFD computation: the velocity values calculated by the evolution laws are imposed on the CFD cells corresponding to the Macro-Element. We apply these two approaches to five cases representative of thermo-aeraulics in buildings. The results are compared with experimental data and with traditional RANS simulations. We highlight the significant gain in computing time that our approach allows while preserving good quality of the numerical results. (author)

  11. Behavior-based network management: a unique model-based approach to implementing cyber superiority

    Science.gov (United States)

    Seng, Jocelyn M.

    2016-05-01

    Behavior-Based Network Management (BBNM) is a technological and strategic approach to mastering the identification and assessment of network behavior, whether human-driven or machine-generated. Recognizing that all five U.S. Air Force (USAF) mission areas rely on the cyber domain to support, enhance and execute their tasks, BBNM is designed to elevate awareness and improve the ability to better understand the degree of reliance placed upon a digital capability and the operational risk. Thus, the objective of BBNM is to provide a holistic view of the digital battle space to better assess the effects of security, monitoring, provisioning, utilization management, and allocation to support mission sustainment and change control. Leveraging advances in conceptual modeling made possible by a novel advancement in software design and implementation known as Vector Relational Data Modeling (VRDM™), the BBNM approach entails creating a network simulation in which meaning can be inferred and used to manage network behavior according to policy, such as quickly detecting and countering malicious behavior. Initial research configurations have yielded executable BBNM models as combinations of conceptualized behavior within a network management simulation that includes only concepts of threats and definitions of "good" behavior. A proof-of-concept assessment called "Lab Rat" was designed to demonstrate the simplicity of network modeling and the ability to perform adaptation. The model was tested on real-world threat data and demonstrated adaptive and inferential learning behavior. Preliminary results indicate this is a viable approach towards achieving cyber superiority in today's volatile, uncertain, complex and ambiguous (VUCA) environment.

  12. Towards a model-based development approach for wireless sensor-actuator network protocols

    DEFF Research Database (Denmark)

    Kumar S., A. Ajith; Simonsen, Kent Inge

    2014-01-01

    Model-Driven Software Engineering (MDSE) is a promising approach for the development of applications, and has been well adopted in the embedded applications domain in recent years. Wireless Sensor Actuator Networks consist of resource-constrained hardware and platform-specific operating systems ... induced due to manual translations. With the use of formal semantics in the modeling approach, we can further ensure the correctness of the source model by means of verification. Also, with the use of network simulators and formal modeling tools, we obtain a verified and validated model to be used

  13. Modeling of Coastal Effluent Transport: an Approach to Linking of Far-field and Near-field Models

    International Nuclear Information System (INIS)

    Yang, Zhaoqing; Khangaonkar, Tarang P.

    2008-01-01

    One of the challenges in effluent transport modeling in coastal tidal environments is the proper calculation of initial dilution in connection with the far-field transport model. In this study, an approach for the external linkage of far-field and near-field effluent transport models is presented and applied to simulate effluent transport in the Port Angeles Harbor, Washington, in the Strait of Juan de Fuca. A near-field plume model was used to calculate the effluent initial dilution, and a three-dimensional (3-D) hydrodynamic model was developed to simulate the tidal circulation and far-field effluent transport in the Port Angeles Harbor. In the present study, the hydrodynamic model was driven by tides and surface winds. Observed water surface elevation and velocity data were used to calibrate the model over a period covering the neap-spring tidal cycle. The model was also validated with observed surface drogue trajectory data. The model successfully reproduced the tidal dynamics in the study area and good agreement between model results and observed data was obtained. This study demonstrated that the linkage between the near-field and far-field models in effluent transport modeling can be achieved by iteratively adjusting the model grid sizes such that the far-field modeled dilution ratio and effluent concentration in the effluent discharge model grid cell match the concentration calculated by the near-field plume model

  14. Transport modeling: An artificial immune system approach

    Directory of Open Access Journals (Sweden)

    Teodorović Dušan

    2006-01-01

    This paper describes an artificial immune system (AIS) approach to modeling time-dependent (dynamic, real-time) transportation phenomena characterized by uncertainty. The basic idea behind this research is to develop an artificial immune system which generates a set of antibodies (decisions, control actions) that altogether can successfully cover a wide range of potential situations. The proposed artificial immune system develops antibodies (the best control strategies) for different antigens (different traffic "scenarios"). This task is performed using optimization or heuristic techniques. A set of antibodies is then combined to create the artificial immune system. The developed artificial immune transportation systems are able to generalize, adapt, and learn based on new knowledge and new information. Applications of the systems are considered for airline yield management, stochastic vehicle routing, and real-time traffic control at an isolated intersection. The preliminary research results are very promising.

  15. Learning Actions Models: Qualitative Approach

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2015-01-01

    In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite ident...

  16. Modeling of a production system using the multi-agent approach

    Science.gov (United States)

    Gwiazda, A.; Sękala, A.; Banaś, W.

    2017-08-01

    ... other types of agents that are utilized in the described simulation. The article presents the idea of an integrated programming approach and shows the resulting production layout as a virtual model. This model was developed in the NetLogo multi-agent programming environment.

  17. Stochastic Boolean networks: An efficient approach to modeling gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Liang Jinghang

    2012-08-01

    Background: Various computational models have been of interest due to their use in the modelling of gene regulatory networks (GRNs). As a logical model, probabilistic Boolean networks (PBNs) consider molecular and genetic noise, so the study of PBNs provides significant insights into the understanding of the dynamics of GRNs. This will ultimately lead to advances in developing therapeutic methods that intervene in the process of disease development and progression. The applications of PBNs, however, are hindered by the complexities involved in the computation of the state transition matrix and the steady-state distribution of a PBN. For a PBN with n genes and N Boolean networks, the complexity to compute the state transition matrix is O(nN·2^(2n)), or O(nN·2^n) for a sparse matrix. Results: This paper presents a novel implementation of PBNs based on the notions of stochastic logic and stochastic computation. This stochastic implementation of a PBN is referred to as a stochastic Boolean network (SBN). An SBN provides an accurate and efficient simulation of a PBN without and with random gene perturbation. The state transition matrix is computed in an SBN with a complexity of O(nL·2^n), where L is a factor related to the stochastic sequence length. Since the minimum sequence length required for a given evaluation accuracy approximately increases in a polynomial order with the number of genes, n, and the number of Boolean networks, N, usually increases exponentially with n, L is typically smaller than N, especially in a network with a large number of genes. Hence, the computational efficiency of an SBN is primarily limited by the number of genes, but not directly by the total possible number of Boolean networks. Furthermore, a time-frame expanded SBN enables an efficient analysis of the steady-state distribution of a PBN. These findings are supported by the simulation results of a simplified p53 network, several randomly generated networks and a
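
    The underlying stochastic-computation idea is that a probability is encoded as a random Bernoulli bitstream, so Boolean update rules become bitwise operations and probabilities propagate without enumerating the full state space. A minimal illustration (not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 100_000           # stochastic sequence length (the factor L above)

def to_stream(p):
    """Encode probability p as a random Bernoulli bitstream of length L."""
    return rng.random(L) < p

# Stochastic logic: AND of independent streams multiplies probabilities, so
# gene-level Boolean update rules become cheap bitwise operations on streams.
a, b = to_stream(0.8), to_stream(0.6)
and_p = np.mean(a & b)   # ~0.48 = 0.8 * 0.6
or_p  = np.mean(a | b)   # ~0.92 = 0.8 + 0.6 - 0.48
```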

  18. Do pseudo-absence selection strategies influence species distribution models and their predictions? An information-theoretic approach based on simulated data

    Directory of Open Access Journals (Sweden)

    Guisan Antoine

    2009-04-01

    Background: Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches: pseudo-absences selected from low-suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results: Models built with true absences had the best predictive power, best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion: If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences, and perform model selection based on an information-theoretic approach. However, the resulting models can be expected to have
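
    The recommended strategy (a large number of random background pseudo-absences followed by information-theoretic model selection) might be sketched as follows; the predictor arrays, sample sizes and helper name are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

def fit_with_pseudo_absences(presence_X, background_X, n_pseudo=10_000, seed=0):
    """Logistic regression with randomly selected background pseudo-absences;
    competing predictor sets can then be compared via AIC."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(background_X), n_pseudo, replace=False)
    X = np.vstack([presence_X, background_X[idx]])
    y = np.r_[np.ones(len(presence_X)), np.zeros(n_pseudo)]
    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    return fit, fit.aic  # lower AIC -> better-supported candidate model
```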

  19. A machine learning approach for real-time modelling of tissue deformation in image-guided neurosurgery.

    Science.gov (United States)

    Tonutti, Michele; Gras, Gauthier; Yang, Guang-Zhong

    2017-07-01

    Accurate reconstruction and visualisation of soft tissue deformation in real time is crucial in image-guided surgery, particularly in augmented reality (AR) applications. Current deformation models are characterised by a trade-off between accuracy and computational speed. We propose an approach to derive a patient-specific deformation model for brain pathologies by combining the results of pre-computed finite element method (FEM) simulations with machine learning algorithms. The models can be computed instantaneously and offer an accuracy comparable to FEM models. A brain tumour is used as the subject of the deformation model. Load-driven FEM simulations are performed on a tetrahedral brain mesh afflicted by a tumour. Forces of varying magnitudes, positions, and inclination angles are applied onto the brain's surface. Two machine learning algorithms, artificial neural networks (ANNs) and support vector regression (SVR), are employed to derive a model that can predict the resulting deformation for each node in the tumour's mesh. The tumour deformation can be predicted in real time given relevant information about the geometry of the anatomy and the load, all of which can be measured instantly during a surgical operation. The models can predict the position of the nodes with errors below 0.3 mm, beyond the general threshold of surgical accuracy and suitable for high-fidelity AR systems. The SVR models perform better than the ANNs, with positional errors for SVR models reaching under 0.2 mm. The results represent an improvement over existing deformation models for real-time applications, providing smaller errors and high patient-specificity. The proposed approach addresses the current needs of image-guided surgical systems and has the potential to be employed to model the deformation of any type of soft tissue. Copyright © 2017 Elsevier B.V. All rights reserved.
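
    A sketch of the machine-learning step, assuming sklearn's SVR fitted to pre-computed FEM samples; the file names and the five-parameter load description are placeholders, not the authors' exact setup:

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# X : (n_sims, 5) load descriptors from pre-computed FEM runs, e.g. force
#     magnitude, position (x, y) and inclination (theta, phi) -- hypothetical.
# Y : (n_sims, 3 * n_nodes) resulting displacement of every tumour mesh node.
X, Y = np.load("fem_loads.npy"), np.load("fem_displacements.npy")  # placeholders

surrogate = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
surrogate.fit(X, Y)  # offline training on the FEM simulation results

# At run time: predict the whole deformation field for a measured load instantly.
disp = surrogate.predict(np.array([[1.5, 0.2, -0.1, 30.0, 10.0]]))
```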

  20. A diagnosis method for physical systems using a multi-modeling approach

    International Nuclear Information System (INIS)

    Thetiot, R.

    2000-01-01

    In this thesis we propose a method for diagnosis problem solving. This method is based on a multi-modeling approach describing both the normal and abnormal behavior of a system. This modeling approach allows a system to be represented at different abstraction levels (behavioral, functional and teleological). Fundamental knowledge is described according to a bond-graph representation. We show that the bond-graph representation can be exploited in order to generate (completely or partially) the functional models. The different models of the multi-modeling approach allow the functional state of a system to be defined at different abstraction levels. We exploit this property to exonerate sub-systems for which the expected behavior is observed. The behavioral and functional descriptions of the remaining sub-systems are exploited hierarchically in a two-step process. In the first step, the abnormal behaviors explaining some observations are identified. In the second step, the remaining unexplained observations are used to generate conflict sets and thus the consistency-based diagnoses. The modeling method and the diagnosis process have been applied to Reactor Coolant Pump Sets. This application illustrates the concepts described in this thesis and shows its potentialities. (authors)

  1. Resolving Nuclear Reactor Lifetime Extension Questions: A Combined Multiscale Modeling and Positron Characterization approach

    International Nuclear Information System (INIS)

    Wirth, B; Asoka-Kumar, P; Denison, A; Glade, S; Howell, R; Marian, J; Odette, G; Sterne, P

    2004-01-01

    The objective of this work is to determine the chemical composition of nanometer precipitates responsible for irradiation hardening and embrittlement of reactor pressure vessel steels, which threaten to limit the operating lifetime of nuclear power plants worldwide. The scientific approach incorporates computational multiscale modeling of radiation damage and microstructural evolution in Fe-Cu-Ni-Mn alloys, and experimental characterization by positron annihilation spectroscopy and small angle neutron scattering. The modeling and experimental results are…

  2. The Intersystem Model of Psychotherapy: An Integrated Systems Treatment Approach

    Science.gov (United States)

    Weeks, Gerald R.; Cross, Chad L.

    2004-01-01

    This article introduces the intersystem model of psychotherapy and discusses its utility as a truly integrative and comprehensive approach. The foundation of this conceptually complex approach comes from dialectic metatheory; hence, its derivation requires an understanding of both foundational and integrational constructs. The article provides a…

  3. Comparison of different approaches of modelling in a masonry building

    Science.gov (United States)

    Saba, M.; Meloni, D.

    2017-12-01

    The present work aims to model a simple masonry building using two different modelling methods, in order to assess their validity in terms of the evaluation of static stresses. Two of the most widely used commercial software packages for this kind of problem were chosen: 3Muri by S.T.A. Data S.r.l. and Sismicad12 by Concrete S.r.l. While the 3Muri software adopts the Frame by Macro Elements Method (FME), which should be more schematic and more efficient, the Sismicad12 software uses the Finite Element Method (FEM), which guarantees accurate results at a greater computational cost. Remarkable differences in the static stresses between the two approaches were found for such a simple structure, and an interesting comparison and analysis of the reasons is proposed.

  4. Process-based distributed modeling approach for analysis of sediment dynamics in a river basin

    Directory of Open Access Journals (Sweden)

    M. A. Kabir

    2011-04-01

    Full Text Available Modeling of sediment dynamics for developing best management practices of reducing soil erosion and of sediment control has become essential for sustainable management of watersheds. Precise estimation of sediment dynamics is very important, since soil is a major component of many environmental processes and sediment transport strongly influences lake and river pollution. Different hydrological processes govern sediment dynamics in a river basin and are highly variable in space and time. This paper presents a process-based distributed modeling approach for the analysis of sediment dynamics at the river basin scale, integrating sediment processes (soil erosion, sediment transport and deposition) with an existing process-based distributed hydrological model. In this modeling approach, the watershed is divided into an array of homogeneous grids to capture the catchment's spatial heterogeneity. Hillslope and river sediment dynamic processes are modeled separately and linked to each other consistently. Water flow and sediment transport at the land grids and river nodes are modeled using a one-dimensional kinematic wave approximation of the Saint-Venant equations. The mechanics of sediment dynamics are integrated into the model using representative physical equations selected after a comprehensive review; sediment transport and deposition are modeled using the Govers transport capacity equation. The model has been tested on river basins in two different hydro-climatic areas: the Abukuma River Basin, Japan, and the Latrobe River Basin, Australia. All spatial datasets, such as the Digital Elevation Model (DEM), land use and soil classification data, were prepared using raster Geographic Information System (GIS) tools. The results of the relevant statistical checks (Nash-Sutcliffe efficiency and R-squared value) indicate that the model simulates basin hydrology and its associated sediment dynamics reasonably well. This paper presents the…
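
    As a concrete illustration of the routing scheme mentioned above, the sketch below solves the 1-D kinematic wave approximation of the Saint-Venant equations with a simple explicit upwind scheme, using a power-law (Manning-type) rating between flow area and discharge. Grid size, rating parameters and inflow values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# 1-D kinematic wave: dA/dt + dQ/dx = q_lat, with Q = alpha * A**m
# (Manning-type rating). Explicit upwind scheme; dt must satisfy CFL.
nx, dx, dt, nt = 100, 50.0, 5.0, 720          # grid cells, m, s, time steps
alpha, m = 1.5, 5.0 / 3.0                     # illustrative rating parameters
q_lat = 1e-4                                  # lateral inflow per unit length (m^2/s)

A = np.full(nx, 0.01)                         # initial flow area (m^2)
for _ in range(nt):
    Q = alpha * A**m
    Q_in = np.concatenate(([0.0], Q[:-1]))    # upstream boundary: no inflow
    A = A + dt * (q_lat - (Q - Q_in) / dx)
    A = np.maximum(A, 1e-8)                   # keep the flow area physical

print(f"outlet discharge after {nt*dt/3600:.1f} h: {alpha*A[-1]**m:.3f} m^3/s")
```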

  5. Conceptual Model and Numerical Approaches for Unsaturated Zone Flow and Transport

    International Nuclear Information System (INIS)

    H.H. Liu

    2004-01-01

    The purpose of this model report is to document the conceptual and numerical models used for modeling unsaturated zone (UZ) fluid (water and air) flow and solute transport processes. This work was planned in "Technical Work Plan for: Unsaturated Zone Flow Model and Analysis Report Integration" (BSC 2004 [DIRS 169654], Sections 1.2.5, 2.1.1, 2.1.2 and 2.2.1). The conceptual and numerical modeling approaches described in this report are mainly used for models of UZ flow and transport in fractured, unsaturated rock under ambient conditions. Developments of these models are documented in the following model reports: (1) UZ Flow Model and Submodels; (2) Radionuclide Transport Models under Ambient Conditions. Conceptual models for flow and transport in unsaturated, fractured media are discussed in terms of their applicability to the UZ at Yucca Mountain. The rationale for selecting the conceptual models used for modeling of UZ flow and transport is documented. Numerical approaches for incorporating these conceptual models are evaluated in terms of their representation of the selected conceptual models and their computational efficiency, and the rationales for selecting the numerical approaches used for modeling of UZ flow and transport are discussed. This report also documents activities to validate the active fracture model (AFM) based on experimental observations and theoretical developments. The AFM is a conceptual model that describes the fracture-matrix interaction in the UZ of Yucca Mountain. These validation activities are documented in Section 7 of this report regarding use of an independent line of evidence to provide additional confidence in the use of the AFM in the UZ models. The AFM has been used in UZ flow and transport models under both ambient and thermally disturbed conditions. Developments of these models are documented…

  6. Modeling Mixed Bicycle Traffic Flow: A Comparative Study on the Cellular Automata Approach

    Directory of Open Access Journals (Sweden)

    Dan Zhou

    2015-01-01

    Full Text Available Simulation, as a powerful tool for evaluating transportation systems, has been widely used in transportation planning, management, and operations. Most simulation models focus on motorized vehicles, while the modeling of nonmotorized vehicles is largely ignored. The cellular automata (CA) model is an important simulation approach and is widely used for motorized vehicle traffic. The Nagel-Schreckenberg (NS) CA model and the multi-value CA (M-CA) model are two categories of CA model that have been used in previous studies on bicycle traffic flow. This paper improves on these two CA models and also compares their characteristics. It introduces a two-lane NS CA model and M-CA model for both regular bicycles (RBs) and electric bicycles (EBs). Many cases, featuring different values of the slowing-down probability, lane-changing probability, and proportion of EBs, were simulated, while the fundamental diagrams and capacities of the proposed models were analyzed and compared between the two models. Field data were collected for the evaluation of the two models. The results show that the M-CA model exhibits more stable performance than the two-lane NS model and provides results that are closer to real bicycle traffic.
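
    For reference, a minimal sketch of the classic Nagel-Schreckenberg update rules that the two-lane bicycle model above extends: acceleration, braking to the gap ahead, random slowdown with probability p, and movement on a circular single lane. The lane-changing logic and the RB/EB heterogeneity of the paper's model are omitted; parameters are illustrative.

```python
import numpy as np

def ns_step(pos, vel, v_max, p_slow, road_len, rng):
    """One Nagel-Schreckenberg update on a circular single-lane road."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    # Number of empty cells to the vehicle ahead (periodic boundary).
    gaps = (np.roll(pos, -1) - pos) % road_len - 1
    vel = np.minimum(vel + 1, v_max)                    # 1. accelerate
    vel = np.minimum(vel, gaps)                         # 2. brake to avoid collision
    slow = rng.random(len(vel)) < p_slow
    vel = np.where(slow, np.maximum(vel - 1, 0), vel)   # 3. random slowdown
    return (pos + vel) % road_len, vel                  # 4. move

rng = np.random.default_rng(1)
road_len, n = 200, 40                                   # cells, bicycles (density 0.2)
pos = rng.choice(road_len, size=n, replace=False)
vel = np.zeros(n, dtype=int)
for _ in range(1000):
    pos, vel = ns_step(pos, vel, v_max=3, p_slow=0.3, road_len=road_len, rng=rng)
print(f"mean speed: {vel.mean():.2f} cells/step, flow: {vel.mean()*n/road_len:.3f} bikes/step")
```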

  7. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both interpretation and communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
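
    A minimal sketch of the kind of analysis the article recommends, assuming statsmodels: fit a multinomial logit, then report predicted probabilities and average marginal effects rather than raw coefficients. The entry-mode outcome and the two covariates below are made up for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in data: 3 entry modes (0=export, 1=joint venture,
# 2=wholly owned subsidiary) and two firm-level covariates.
rng = np.random.default_rng(0)
n = 500
X = sm.add_constant(np.column_stack([rng.normal(size=n),      # firm size (std.)
                                     rng.normal(size=n)]))    # host-market risk
logits = X @ np.array([[0.0, 0.5, -0.3], [0.0, 0.8, 1.2], [0.0, -0.6, 0.4]])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

res = sm.MNLogit(y, X).fit(disp=False)
# Raw MLM coefficients are hard to read; predicted probabilities are not.
print(res.predict(X[:5]))                 # per-observation outcome probabilities
print(res.get_margeff().summary())        # average marginal effects per outcome
```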

  8. Interfacial Fluid Mechanics A Mathematical Modeling Approach

    CERN Document Server

    Ajaev, Vladimir S

    2012-01-01

    Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in the rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations, and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also discusses mathematical models in the context of actual applications such as electrowetting; includes unique material on fluid flow near structured surfaces and phase change phenomena; and shows readers how to solve modeling problems related to microscale multiphase flows. Interfacial Fluid Me…

  9. QML-AiNet: An immune network approach to learning qualitative differential equation models.

    Science.gov (United States)

    Pang, Wei; Coghill, George M

    2015-02-01

    In this paper, we explore the application of Opt-AiNet, an immune network approach for search and optimisation problems, to learning qualitative models in the form of qualitative differential equations. The Opt-AiNet algorithm is adapted to qualitative model learning problems, resulting in the proposed system QML-AiNet. The potential of QML-AiNet to address the scalability and multimodal search space issues of qualitative model learning has been investigated. Moreover, to further improve the efficiency of QML-AiNet, we modify the mutation operator according to the features of the discrete qualitative model space. Experimental results show that the performance of QML-AiNet is comparable to QML-CLONALG, a QML system using the clonal selection algorithm (CLONALG). Furthermore, QML-AiNet with the modified mutation operator significantly improves the scalability of QML and is much more efficient than QML-CLONALG.
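
    A sketch of the kind of change described above, under simplifying assumptions: in continuous Opt-AiNet, clones are perturbed with Gaussian noise scaled by parent fitness; for a discrete qualitative model space, a natural analogue mutates each gene to a random alternative symbol with a fitness-dependent probability. The details of QML-AiNet's actual operator differ; this is illustrative only.

```python
import numpy as np

def mutate_clone(parent, fitness, symbols, beta=1.0, rng=None):
    """Discrete analogue of Opt-AiNet's affinity-proportional mutation.

    High-fitness parents receive a low per-gene mutation rate, so good
    candidate models are explored locally while poor ones are perturbed
    more strongly. `fitness` is assumed normalised to [0, 1].
    """
    rng = rng or np.random.default_rng()
    rate = np.exp(-beta * fitness)            # fitness near 1 -> small rate
    clone = parent.copy()
    for i in range(len(clone)):
        if rng.random() < rate:
            alternatives = [s for s in symbols if s != clone[i]]
            clone[i] = rng.choice(alternatives)
    return clone

# Each gene picks one qualitative constraint label for a model slot.
parent = np.array(["M+", "M-", "d/dt", "add"], dtype=object)
print(mutate_clone(parent, fitness=0.9, symbols=["M+", "M-", "d/dt", "add", "mult"]))
```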

  10. Artificial neural network (ANN) approach for modeling Zn(II) adsorption in batch process

    Energy Technology Data Exchange (ETDEWEB)

    Yildiz, Sayiter [Engineering Faculty, Cumhuriyet University, Sivas (Turkey)]

    2017-09-15

    Artificial neural networks (ANN) were applied to predict the adsorption efficiency of peanut shells for the removal of Zn(II) ions from aqueous solutions. The effects of initial pH, Zn(II) concentration, temperature, contact duration and adsorbent dosage were determined in batch experiments. The sorption capacities of the sorbents were predicted with the aid of equilibrium and kinetic models. The adsorption of Zn(II) ions onto peanut shell was better described by the pseudo-second-order kinetic model, for both initial pH and temperature. The highest R² value in the isotherm studies was obtained from the Freundlich isotherm for the inlet concentration and from the Temkin isotherm for the sorbent amount. The high R² values indicate that modeling the adsorption process with an ANN is a satisfactory approach. The experimental results and the results predicted by the ANN model were found to be highly compatible with each other.

  11. Artificial neural network (ANN) approach for modeling Zn(II) adsorption in batch process

    International Nuclear Information System (INIS)

    Yildiz, Sayiter

    2017-01-01

    Artificial neural networks (ANN) were applied to predict the adsorption efficiency of peanut shells for the removal of Zn(II) ions from aqueous solutions. The effects of initial pH, Zn(II) concentration, temperature, contact duration and adsorbent dosage were determined in batch experiments. The sorption capacities of the sorbents were predicted with the aid of equilibrium and kinetic models. The adsorption of Zn(II) ions onto peanut shell was better described by the pseudo-second-order kinetic model, for both initial pH and temperature. The highest R² value in the isotherm studies was obtained from the Freundlich isotherm for the inlet concentration and from the Temkin isotherm for the sorbent amount. The high R² values indicate that modeling the adsorption process with an ANN is a satisfactory approach. The experimental results and the results predicted by the ANN model were found to be highly compatible with each other.
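
    A minimal sketch of the modelling step described in the two records above, assuming scikit-learn: a small feed-forward network mapping the five batch variables (initial pH, Zn(II) concentration, temperature, contact time, adsorbent dose) to removal efficiency. The synthetic data stands in for the batch experiments; the paper's actual network architecture is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Columns: initial pH, Zn(II) conc. (mg/L), temperature (°C),
# contact time (min), adsorbent dose (g/L). Target: removal efficiency (%).
rng = np.random.default_rng(42)
X = rng.uniform([2, 10, 20, 5, 0.5], [8, 100, 50, 120, 5.0], size=(300, 5))
y = (60 + 4 * X[:, 0] - 0.2 * X[:, 1] + 0.3 * X[:, 2]
     + 0.1 * X[:, 3] + 3 * X[:, 4] + rng.normal(0, 2, 300)).clip(0, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                 random_state=0))
ann.fit(X_tr, y_tr)
print(f"test R^2: {ann.score(X_te, y_te):.3f}")   # agreement with 'experiments'
```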

  12. Comparison of two model approaches in the Zambezi river basin with regard to model reliability and identifiability

    Directory of Open Access Journals (Sweden)

    H. C. Winsemius

    2006-01-01

    Full Text Available Variations of water stocks in the upper Zambezi river basin have been determined by two different hydrological modelling approaches. The purpose was to provide preliminary terrestrial storage estimates in the upper Zambezi, which will be compared with estimates derived from the Gravity Recovery And Climate Experiment (GRACE) in a future study. The first modelling approach is GIS-based, distributed and conceptual (STREAM). The second approach uses Lumped Elementary Watersheds identified and modelled conceptually (LEW). The STREAM model structure has been assessed using GLUE (Generalized Likelihood Uncertainty Estimation) a posteriori to determine parameter identifiability. The LEW approach could, in addition, be tested for model structure, because the computational efforts of LEW are low. Both models are threshold models, where the non-linear behaviour of the Zambezi river basin is explained by a combination of thresholds and linear reservoirs. The models were forced by time series of gauged and interpolated rainfall. Where available, runoff station data were used to calibrate the models. Ungauged watersheds were generally given the same parameter sets as their neighbouring calibrated watersheds. It appeared that the LEW model structure could be improved by applying GLUE iteratively. Eventually, this led to better identifiability of parameters and consequently a better model structure than the STREAM model. Hence, the final model structure obtained better represents the true hydrology. After calibration, both models show a comparable efficiency in representing discharge. However, the LEW model shows a far greater storage amplitude than the STREAM model. This emphasizes the storage uncertainty related to hydrological modelling in data-scarce environments such as the Zambezi river basin. It underlines the need and potential for independent observations of terrestrial storage to enhance our understanding and modelling capacity of the hydrological processes. GRACE…
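
    A minimal sketch of the GLUE procedure used above to assess parameter identifiability, under simplifying assumptions: sample parameter sets at random, score each against observed discharge with a likelihood measure (here Nash-Sutcliffe efficiency), keep the "behavioral" sets above a threshold, and read identifiability from the spread of their parameters. The threshold/linear-reservoir model and the acceptance threshold are placeholders, not the paper's setup.

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def linear_reservoir(params, rain):
    """Placeholder threshold/linear-reservoir model with two parameters."""
    k, thresh = params
    storage, q = 0.0, []
    for r in rain:
        storage += max(r - thresh, 0.0)   # only rain above the threshold enters
        out = storage / k                 # linear reservoir outflow
        storage -= out
        q.append(out)
    return np.asarray(q)

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 2.0, size=365)
obs = linear_reservoir((5.0, 1.0), rain) + rng.normal(0, 0.1, 365)  # synthetic 'truth'

samples = np.column_stack([rng.uniform(1, 20, 5000),   # reservoir constant k
                           rng.uniform(0, 5, 5000)])   # rainfall threshold
scores = np.array([nash_sutcliffe(linear_reservoir(p, rain), obs) for p in samples])
behavioral = samples[scores > 0.7]                     # GLUE acceptance threshold
print(f"{len(behavioral)} of {len(samples)} sets are behavioral")
if len(behavioral):
    print(f"identifiability of k: {behavioral[:, 0].min():.1f}-{behavioral[:, 0].max():.1f}")
```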

  13. Modelling individual differences in the form of Pavlovian conditioned approach responses: a dual learning systems approach with factored representations.

    Directory of Open Access Journals (Sweden)

    Florian Lesaint

    2014-02-01

    Full Text Available Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself - a lever - more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in…

  14. Modelling Individual Differences in the Form of Pavlovian Conditioned Approach Responses: A Dual Learning Systems Approach with Factored Representations

    Science.gov (United States)

    Lesaint, Florian; Sigaud, Olivier; Flagel, Shelly B.; Robinson, Terry E.; Khamassi, Mehdi

    2014-01-01

    Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself – a lever – more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in…

  15. Modelling individual differences in the form of Pavlovian conditioned approach responses: a dual learning systems approach with factored representations.

    Science.gov (United States)

    Lesaint, Florian; Sigaud, Olivier; Flagel, Shelly B; Robinson, Terry E; Khamassi, Mehdi

    2014-02-01

    Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself - a lever - more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in computational…
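
    A loose sketch of the dual-system architecture described in the three records above, not the authors' published model: a model-free learner attaches value to the predictive cue itself (the lever) through reward prediction errors, a model-based system values the food-delivery location (the magazine), and a per-animal weight omega mixes the two, yielding sign-tracking or goal-tracking. The sign convention and parameter values for omega are arbitrary assumptions.

```python
# Model-free (MF) system: learns the value of the lever via reward
# prediction errors (RPEs), which burst early in training and fade.
alpha, reward, n_trials = 0.1, 1.0, 200

v_mf_lever, rpe_trace = 0.0, []
for _ in range(n_trials):
    delta = reward - v_mf_lever        # RPE at CS onset
    v_mf_lever += alpha * delta
    rpe_trace.append(delta)

v_mb_magazine = reward                 # MB system knows where food arrives
for omega in (0.9, 0.1):               # individual-difference parameter
    v_lever = omega * v_mf_lever
    v_magazine = (1 - omega) * v_mb_magazine
    cr = "sign-tracking (lever)" if v_lever > v_magazine else "goal-tracking (magazine)"
    print(f"omega={omega}: {cr}; early RPE={rpe_trace[0]:.2f}, late RPE={rpe_trace[-1]:.3f}")
```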

  16. A GOCE-only global gravity field model by the space-wise approach

    DEFF Research Database (Denmark)

    Migliaccio, Frederica; Reguzzoni, Mirko; Gatti, Andrea

    2011-01-01

    The global gravity field model computed by the space-wise approach is one of three official solutions delivered by ESA from the analysis of the GOCE data. The model consists of a set of spherical harmonic coefficients and the corresponding error covariance matrix. The main idea behind this approach … the orbit to reduce the noise variance and correlation before gridding the data. In the first release of the space-wise approach, based on a period of about two months, some prior information coming from existing gravity field models entered into the solution, especially at low degrees and low orders … degrees; the second is an internally computed GOCE-only prior model to be used in place of the official quick-look model, thus removing the dependency on EIGEN5C, especially in the polar gaps. Once the procedure to obtain a GOCE-only solution has been outlined, a new global gravity field model has been…

  17. An approach for modelling interdependent infrastructures in the context of vulnerability analysis

    International Nuclear Information System (INIS)

    Johansson, Jonas; Hassel, Henrik

    2010-01-01

    The technical infrastructures of society are becoming more and more interconnected and interdependent, i.e. the function of one infrastructure influences the function of others. Disturbances in one infrastructure therefore often propagate to other dependent infrastructures and possibly even back to the infrastructure where the failure originated. It is becoming increasingly important to take these interdependencies into account when assessing the vulnerability of technical infrastructures. In the present paper, an approach for modelling interdependent technical infrastructures is proposed. The modelling approach considers structural properties, as employed in graph theory, as well as functional properties, to increase its fidelity and usefulness. By modelling a fictional electrified railway network that consists of five systems and the interdependencies between them, it is shown how the model can be employed in a vulnerability analysis; the model aims to capture both functional and geographic interdependencies. It is concluded that the proposed modelling approach is promising and suitable in the context of vulnerability analyses of interdependent systems.
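
    A minimal sketch of the kind of analysis described, assuming networkx: represent two interdependent systems in one graph, let railway nodes depend functionally on power nodes, fail each power node in turn, propagate the dependency, and measure the loss of railway connectivity. The paper's five-system railway model is reduced to two systems here, and the topology is invented.

```python
import networkx as nx

# Two interdependent systems in one graph: power nodes feed railway nodes.
G = nx.Graph()
G.add_edges_from([("p1", "p2"), ("p2", "p3")])                    # power grid
G.add_edges_from([("r1", "r2"), ("r2", "r3"), ("r3", "r4")])      # railway
depends_on = {"r1": "p1", "r2": "p1", "r3": "p2", "r4": "p3"}     # functional deps

def railway_service(g, failed_power):
    """Railway nodes lose function when their power node has failed."""
    alive = [r for r, p in depends_on.items() if p != failed_power]
    sub = g.subgraph(alive)
    # Service level: size of the largest still-connected railway component.
    return max((len(c) for c in nx.connected_components(sub)), default=0)

baseline = railway_service(G, failed_power=None)
for p in ("p1", "p2", "p3"):
    lost = baseline - railway_service(G, p)
    print(f"failure of {p}: {lost}/{baseline} railway nodes lose service")
```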

  18. An evaluation of gas release modelling approaches as to their applicability in fuel behaviour models

    International Nuclear Information System (INIS)

    Mattila, L.J.; Sairanen, R.T.

    1980-01-01

    The release of fission gas from uranium oxide fuel to the voids in the fuel rod affects in many ways the behaviour of LWR fuel rods, both during normal operating conditions, including anticipated transients, and during off-normal and accident conditions. The current trend towards significantly increased discharge burnup of LWR fuel will increase the importance of fission gas release considerations from both the design and safety viewpoints. In the paper, fission gas release models are classified into five categories on the basis of complexity and physical sophistication. For each category, the basic approach common to its models is described; a few representative models are singled out and, in some cases, briefly commented upon; the advantages and drawbacks of the approach are listed and discussed; and conclusions on the practical feasibility of the approach are drawn. The evaluation is based on both a literature survey and our experience in working with integral fuel behaviour models. The work has included verification efforts, attempts to improve certain features of the codes, and engineering applications. The classification of fission gas release models regarding their applicability in fuel behaviour codes can of course be done only in a coarse manner. The boundaries between the different categories are vague, and a model may well be refined in a way that transfers it to a higher category. Some current trends in fuel behaviour research are discussed which seem to motivate further extensive efforts in fission product release modelling and are certain to affect the prioritization of those efforts. (author)

  19. A fuzzy approach for modelling radionuclide in lake system.

    Science.gov (United States)

    Desai, H K; Christian, R A; Banerjee, J; Patra, A K

    2013-10-01

    Radioactive liquid waste is generated during the operation and maintenance of Pressurised Heavy Water Reactors (PHWRs). Generally, low-level liquid waste is diluted and then discharged into the nearby water body through the blowdown water discharge line, as per standard waste management practice. The effluents from nuclear installations are treated adequately and then released in a controlled manner under strict compliance with discharge criteria. An attempt was made to predict the concentration of ³H released from Kakrapar Atomic Power Station at Ratania Regulator, about 2.5 km away from the discharge point, where human exposure is expected. Scarcity of data and the complex geometry of the lake prompted the use of a heuristic approach. Under these conditions, a fuzzy rule-based approach was adopted to develop a model that could predict the ³H concentration at Ratania Regulator. Three hundred data points were generated for developing the fuzzy rules, with water flow from the lake and the ³H concentration at the discharge point as input parameters and the ³H concentration at Ratania Regulator as output. These data points were generated by multiple regression analysis of the original data. Using the same methodology, a further hundred data points were generated for validation of the model and compared against the predictions of the fuzzy rule-based approach. The root mean square error of the model was 1.95, showing good agreement of the fuzzy model with the natural ecosystem. Copyright © 2013 Elsevier Ltd. All rights reserved.
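
    A minimal sketch of Mamdani-style fuzzy inference of the kind described, in plain NumPy: triangular membership functions for the two inputs (lake flow and discharge-point concentration), a small rule base, and centroid defuzzification for the downstream concentration. The membership ranges and rules are invented for illustration; the paper's actual rule base is not reproduced here.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b (a < b < c)."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Illustrative fuzzy sets (units arbitrary); feet extend past the universe
# so that boundary values keep full membership.
flow_sets = {"low": (-40, 0, 60), "high": (40, 100, 160)}
conc_sets = {"low": (-4, 0, 6), "high": (4, 10, 16)}
out_sets = {"low": (-3, 0, 3), "medium": (1, 4, 7), "high": (5, 8, 11)}
out_universe = np.linspace(0, 8, 401)

# Rule base: (flow set, conc set) -> output set. Higher flow dilutes more.
rules = {("low", "low"): "medium", ("low", "high"): "high",
         ("high", "low"): "low", ("high", "high"): "medium"}

def predict(flow, conc):
    agg = np.zeros_like(out_universe)
    for (f, c), o in rules.items():
        w = min(tri(flow, *flow_sets[f]), tri(conc, *conc_sets[c]))  # Mamdani AND
        agg = np.maximum(agg, np.minimum(w, tri(out_universe, *out_sets[o])))
    if agg.sum() == 0.0:
        return 0.0
    return float((out_universe * agg).sum() / agg.sum())  # centroid defuzzification

print(f"predicted downstream 3H concentration: {predict(flow=20, conc=7):.2f}")
```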

  20. Prospective and participatory integrated assessment of agricultural systems from farm to regional scales: Comparison of three modeling approaches.

    Science.gov (United States)

    Delmotte, Sylvestre; Lopez-Ridaura, Santiago; Barbier, Jean-Marc; Wery, Jacques

    2013-11-15

    Evaluating the impacts of the development of alternative agricultural systems, such as organic or low-input cropping systems, in the context of an agricultural region requires the use of specific tools and methodologies. They should allow a prospective (using scenarios), multi-scale (taking into account the field, farm and regional levels), integrated (notably multicriteria) and participatory assessment, abbreviated PIAAS (Participatory Integrated Assessment of Agricultural Systems). In this paper, we compare the possible contributions to PIAAS of three modeling approaches, i.e. Bio-Economic Modeling (BEM), Agent-Based Modeling (ABM) and statistical Land-Use/Land Cover Change (LUCC) models. After a presentation of each approach, we analyze their advantages and drawbacks and identify their possible complementarities for PIAAS. Statistical LUCC modeling is a suitable approach for multi-scale analysis of past changes and can be used to start discussions about possible futures with stakeholders. The BEM and ABM approaches have complementary features for scenario assessment at different scales. While ABM has been widely used for participatory assessment, BEM has rarely been used satisfactorily in a participatory manner. On the basis of these results, we propose to combine these three approaches in a framework targeted at PIAAS. Copyright © 2013 Elsevier Ltd. All rights reserved.