WorldWideScience

Sample records for modelling approach applied

  1. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a...

  2. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to changes in biological parameters, and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models, and the caveats in the interpretation of sensitivity analysis results.
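    The local sensitivity analysis described above can be sketched in a few lines: normalized sensitivity coefficients are estimated by applying small finite-difference perturbations to each parameter of a model. The Michaelis-Menten-style toy model and its parameter values below are hypothetical, chosen only to illustrate the technique.

```python
def model(p):
    """Hypothetical steady-state 'biological' response: a toy
    Michaelis-Menten production term divided by a degradation rate."""
    s = 1.0  # fixed substrate level
    return p["vmax"] * s / (p["km"] + s) / p["deg"]

def local_sensitivity(model, params, rel_step=1e-6):
    """Normalized local sensitivities S_i = (p_i / y) * dy/dp_i,
    estimated with central finite differences."""
    y0 = model(params)
    sens = {}
    for name, value in params.items():
        h = rel_step * value
        up = dict(params, **{name: value + h})
        down = dict(params, **{name: value - h})
        sens[name] = value / y0 * (model(up) - model(down)) / (2.0 * h)
    return sens

p = {"vmax": 2.0, "km": 0.5, "deg": 0.1}
S = local_sensitivity(model, p)
print({k: round(v, 3) for k, v in S.items()})
```

    Here S for vmax is 1 and S for deg is -1, because the output is linear in vmax and inversely proportional to deg; this gives a quick consistency check on the finite-difference estimate.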

  3. Modular Modelling and Simulation Approach - Applied to Refrigeration Systems

    DEFF Research Database (Denmark)

    Sørensen, Kresten Kjær; Stoustrup, Jakob

    2008-01-01

    This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...

  4. Modeling in applied sciences a kinetic theory approach

    CERN Document Server

    Pulvirenti, Mario

    2000-01-01

    Modeling complex biological, chemical, and physical systems, in the context of spatially heterogeneous media, is a challenging task for scientists and engineers using traditional methods of analysis. Modeling in Applied Sciences is a comprehensive survey of modeling large systems using kinetic equations, in particular the Boltzmann equation and its generalizations. An interdisciplinary group of leading authorities carefully develop the foundations of kinetic models and discuss the connections and interactions between model theories, qualitative and computational analysis, and real-world applications. This book provides a thoroughly accessible and lucid overview of the different aspects, models, computations, and methodology for the kinetic-theory modeling process. Topics and Features: * Integrated modeling perspective utilized in all chapters * Fluid dynamics of reacting gases * Self-contained introduction to kinetic models * Becker–Döring equations * Nonlinear kinetic models with chemical reactions * Kinet...

  5. A comparison of various modelling approaches applied to Cholera ...

    African Journals Online (AJOL)

    linear models, ARIMA time series modelling, and dynamic regression are ... to certain environmental parameters, and to investigate the feasibility of .... in the SSA literature, the term noise is used to refer to both stochastic noise, as well as.

  6. Comparison of various modelling approaches applied to cholera case data

    CSIR Research Space (South Africa)

    Van Den Bergh, F

    2008-06-01

    ...cross-wavelet technique, which is used to compute lead times for co-varying variables, and suggests transformations that enhance co-varying behaviour. Several statistical modelling techniques, including generalised linear models, ARIMA time series...

  7. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  8. A Model-based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  9. A comparison of various modelling approaches applied to Cholera ...

    African Journals Online (AJOL)

    The analyses are demonstrated on data collected from Beira, Mozambique. Dynamic regression was found to be the preferred forecasting method for this data set. Keywords: Cholera, modelling, signal processing, dynamic regression, negative binomial regression, wavelet analysis, cross-wavelet analysis. ORiON Vol.

  10. A generalised chemical precipitation modelling approach in wastewater treatment applied to calcite

    DEFF Research Database (Denmark)

    Mbamba, Christian Kazadi; Batstone, Damien J.; Flores Alsina, Xavier

    2015-01-01

    , the present study aims to identify a broadly applicable precipitation modelling approach. The study uses two experimental platforms applied to calcite precipitating from synthetic aqueous solutions to identify and validate the model approach. Firstly, dynamic pH titration tests are performed to define...... an Arrhenius-style correction of kcryst. The influence of magnesium (a common and representative added impurity) on kcryst was found to be significant but was considered an optional correction because of a lesser influence as compared to that of temperature. Other variables such as ionic strength and pH were...

  11. Blended Risk Approach in Applying PSA Models to Risk-Based Regulations

    International Nuclear Information System (INIS)

    Dimitrijevic, V. B.; Chapman, J. R.

    1996-01-01

    In this paper, the authors discuss a modern approach to applying PSA models in risk-based regulation. The Blended Risk Approach is a combination of traditional and probabilistic processes, and it is receiving increased attention in different industries in the U.S. and abroad. The use of deterministic regulations and standards provides a proven and well-understood basis on which to assess and communicate the impact of changes to plant design and operation. The incorporation of traditional values into risk evaluation works very well in the blended approach. The approach is very application specific: it includes multiple risk attributes, qualitative risk analysis, and basic deterministic principles. In blending deterministic and probabilistic principles, this approach ensures that the objectives of the traditional defense-in-depth concept are not compromised and that the design basis of the plant is explicitly considered. (author)

  12. Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach

    Science.gov (United States)

    Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.

    Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
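    The projection step described in this record, deriving each enterprise's interface process from the global collaborative process, can be illustrated with a minimal sketch. The interaction sequence and enterprise names below are hypothetical, and the transformation is reduced to filtering the global interaction sequence into send/receive activities; a real MDA transformation would map between full process metamodels.

```python
# A collaborative process as an ordered list of interactions
# (sender, receiver, message); all names are hypothetical.
collaborative = [
    ("Buyer", "Seller", "PurchaseOrder"),
    ("Seller", "Buyer", "OrderConfirmation"),
    ("Seller", "Carrier", "ShippingRequest"),
    ("Carrier", "Buyer", "DeliveryNotice"),
]

def interface_process(collab, enterprise):
    """Project the global interaction sequence onto one enterprise's
    role view: send/receive activities kept in the original order."""
    view = []
    for sender, receiver, msg in collab:
        if sender == enterprise:
            view.append(("send", msg, receiver))
        elif receiver == enterprise:
            view.append(("receive", msg, sender))
    return view

print(interface_process(collaborative, "Seller"))
```

    Because every enterprise's interface process is derived from the same global model, the generated send/receive pairs match up by construction, which is the interoperability guarantee the method aims for.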

  13. Addressing dependability by applying an approach for model-based risk assessment

    International Nuclear Information System (INIS)

    Gran, Bjorn Axel; Fredriksen, Rune; Thunem, Atoosa P.-J.

    2007-01-01

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied to address different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects, such as system functions, component behaviours and intercommunications, must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and provided, through a series of seven trials, a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of a trust case development

  14. Addressing dependability by applying an approach for model-based risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjorn Axel; Fredriksen, Rune; Thunem, Atoosa P.-J. (Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden, Norway)

    2007-11-15

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied to address different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects, such as system functions, component behaviours and intercommunications, must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and provided, through a series of seven trials, a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of a trust case development.

  15. Positive Mathematical Programming Approaches – Recent Developments in Literature and Applied Modelling

    Directory of Open Access Journals (Sweden)

    Thomas Heckelei

    2012-05-01

    This paper reviews and discusses the more recent literature on, and application of, Positive Mathematical Programming (PMP) in the context of agricultural supply models. Specifically, advances in the empirical foundation of parameter specifications as well as the economic rationalisation of PMP models – both criticized in earlier reviews – are investigated. Moreover, the paper provides an overview of a larger set of models with regular/repeated policy application that apply variants of PMP. Results show that most applications today avoid arbitrary parameter specifications and rely on exogenous information on supply responses to calibrate model parameters. However, only a few approaches use multiple observations to estimate parameters, which is likely due to the still considerable technical challenges associated with it. Equally, we found only limited reflection on the behavioral or technological assumptions that could rationalise the PMP model structure while still keeping the model’s advantages.

  16. Does the interpersonal model apply across eating disorder diagnostic groups? A structural equation modeling approach.

    Science.gov (United States)

    Ivanova, Iryna V; Tasca, Giorgio A; Proulx, Geneviève; Bissada, Hany

    2015-11-01

    The interpersonal model has been validated with binge-eating disorder (BED), but it is not yet known whether the model applies across a range of eating disorders (EDs). The goal of this study was to investigate the validity of the interpersonal model in anorexia nervosa (restricting type, ANR, and binge-eating/purging type, ANBP), bulimia nervosa (BN), BED, and eating disorder not otherwise specified (EDNOS). Data from a cross-sectional sample of 1459 treatment-seeking women diagnosed with ANR, ANBP, BN, BED and EDNOS were examined for indirect effects of interpersonal problems on ED psychopathology mediated through negative affect. Findings from structural equation modeling demonstrated the mediating role of negative affect in four of the five diagnostic groups. There were significant, medium to large (.239 to .558), indirect effects in the ANR, BN, BED and EDNOS groups, but not in the ANBP group. The results of the first reverse model, with interpersonal problems as a mediator between negative affect and ED psychopathology, were nonsignificant, suggesting the specificity of the hypothesized paths. However, in the second reverse model, ED psychopathology was related to interpersonal problems indirectly through negative affect. This is the first study to find support for the interpersonal model of ED in a clinical sample of women with diverse ED diagnoses, though there may be a reciprocal relationship between ED psychopathology and relationship problems through negative affect. Negative affect partially explains the relationship between interpersonal problems and ED psychopathology in women diagnosed with ANR, BN, BED and EDNOS. Interpersonal psychotherapies for ED may be addressing the underlying interpersonal-affective difficulties, thereby reducing ED psychopathology. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Theoretical modeling of electroosmotic flow in soft microchannels: A variational approach applied to the rectangular geometry

    Science.gov (United States)

    Sadeghi, Arman

    2018-03-01

    Modeling of fluid flow in polyelectrolyte layer (PEL)-grafted microchannels is challenging due to their two-layer nature. Hence, the pertinent studies are limited only to circular and slit geometries for which matching the solutions for inside and outside the PEL is simple. In this paper, a simple variational-based approach is presented for the modeling of fully developed electroosmotic flow in PEL-grafted microchannels by which the whole fluidic area is considered as a single porous medium of variable properties. The model is capable of being applied to microchannels of a complex cross-sectional area. As an application of the method, it is applied to a rectangular microchannel of uniform PEL properties. It is shown that modeling a rectangular channel as a slit may lead to considerable overestimation of the mean velocity especially when both the PEL and electric double layer (EDL) are thick. It is also demonstrated that the mean velocity is an increasing function of the fixed charge density and PEL thickness and a decreasing function of the EDL thickness and PEL friction coefficient. The influence of the PEL thickness on the mean velocity, however, vanishes when both the PEL thickness and friction coefficient are sufficiently high.

  18. A single grain approach applied to modelling recrystallization kinetics in a single-phase metal

    NARCIS (Netherlands)

    Chen, S.P.; Zwaag, van der S.

    2004-01-01

    A comprehensive model for the recrystallization kinetics is proposed which incorporates both the microstructure and the textural components of the deformed state. The model is based on the single-grain approach proposed previously. The influence of the as-deformed grain orientation, which affects the...

  19. A Cointegrated Regime-Switching Model Approach with Jumps Applied to Natural Gas Futures Prices

    Directory of Open Access Journals (Sweden)

    Daniel Leonhardt

    2017-09-01

    Energy commodities and their futures naturally show cointegrated price movements. However, there is empirical evidence that the prices of futures with different maturities might have, e.g., different jump behaviours in different market situations. Observing commodity futures over time, there is also evidence for different states of the underlying volatility of the futures. In this paper, we therefore allow for cointegration of the term structure within a multi-factor model, which includes seasonality as well as joint and individual jumps in the price processes of futures with different maturities. The seasonality in this model is realized via a deterministic function, and the jumps are represented by thinned-out compound Poisson processes. The model also includes a regime-switching approach that is modelled through a Markov chain and extends the class of geometric models. We show how the model can be calibrated to empirical data and give some practical applications.
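    The main ingredients of such a model, a Markov-chain regime switch plus compound Poisson jumps, can be sketched as a simple simulation. All parameter values below are hypothetical and purely illustrative; the paper's actual calibrated model is more elaborate (cointegration across maturities, seasonality, thinning of joint and individual jumps).

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Poisson sampler (Knuth's method, fine for small lambda)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < limit:
            return k
        k += 1

# Hypothetical two-regime parameters: (diffusion volatility,
# jump intensity per year, mean jump size).
REGIMES = {0: (0.02, 0.5, 0.00), 1: (0.06, 5.0, -0.03)}
P_STAY = [0.98, 0.90]  # probability of remaining in regime 0 / 1

def simulate(n_steps, dt=1 / 252, x0=math.log(3.0)):
    """One log-price path with Markov regime switching and
    compound Poisson jumps (normally distributed jump sizes)."""
    x, regime, path = x0, 0, [x0]
    for _ in range(n_steps):
        sigma, lam, mu_j = REGIMES[regime]
        dx = sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        for _ in range(poisson(lam * dt)):  # jump component
            dx += random.gauss(mu_j, 0.02)
        x += dx
        path.append(x)
        if random.random() > P_STAY[regime]:  # two-state Markov chain
            regime = 1 - regime
    return path

path = simulate(252)  # one year of daily steps
print(round(path[-1], 4))
```

    In the turbulent regime both the diffusion volatility and the jump intensity are higher, which is the qualitative behaviour the regime-switching component is meant to capture.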

  20. Developing scenarios to assess future landslide risks: a model-based approach applied to mountainous regions

    Science.gov (United States)

    Vacquie, Laure; Houet, Thomas

    2016-04-01

    In the last century, European mountain landscapes have experienced significant transformations. Natural and anthropogenic changes, climate change, touristic and industrial development, socio-economic interactions, and their implications in terms of LUCC (land use and land cover changes) have directly influenced the spatial organization and vulnerability of mountain landscapes. This study is conducted as part of the SAMCO project funded by the French National Research Agency (ANR). It aims at developing a methodological approach, combining various tools, modelling platforms and methods, to identify regions vulnerable to landslide hazards while accounting for future LUCC. It presents an integrated approach combining participative scenarios and a LUCC simulation model to assess the combined effects of LUCC and climate change on landslide risks in the Cauterets valley (French Pyrenees) up to 2100. Through vulnerability and risk mapping, the objective is to gather information to support landscape planning and to implement land use strategies with local stakeholders for risk management. Four scenarios are developed, exhibiting contrasting trajectories of socio-economic development. The prospective scenarios are based on national and international socio-economic contexts, relying on existing assessment reports. The methodological approach integrates knowledge from local stakeholders to refine each scenario during its construction and to reinforce its plausibility and relevance by accounting for local specificities, e.g. logging and pastoral activities, touristic development, urban planning, etc. A process-based model, the Forecasting Scenarios for Mountains (ForeSceM) model, developed on the Dinamica Ego modelling platform, is used to spatially allocate future LUCC for each prospective scenario. Concurrently, a spatial decision support tool, i.e. the SYLVACCESS model, is used to identify accessible areas for forestry in scenarios projecting logging...

  21. Equivalent electrical network model approach applied to a double acting low temperature differential Stirling engine

    International Nuclear Information System (INIS)

    Formosa, Fabien; Badel, Adrien; Lottin, Jacques

    2014-01-01

    Highlights: • An equivalent electrical network modeling of a Stirling engine is proposed. • The model is applied to a membrane low temperature differential double acting Stirling engine. • The operating conditions (self-startup and steady state behavior) are defined. • An experimental engine is presented and tested. • The model is validated against experimental results. - Abstract: This work presents a network model to simulate the periodic behavior of a double acting free piston type Stirling engine. Each component of the engine is considered independently and its equivalent electrical circuit derived. When assembled into a global electrical network, a global model of the engine is established. Its steady-state behavior can be obtained by the analysis of the transfer function for one phase, from the piston to the expansion chamber. It is then possible to simulate the dynamics (steady-state stroke and operating frequency) as well as the thermodynamic performance (output power and efficiency) for a given mean pressure and given heat source and heat sink temperatures. The motion amplitude, in particular, can be determined from the spring-mass properties of the moving parts and the main nonlinear effects, which are taken into account in the model. The thermodynamic features of the model have been validated against the classical isothermal Schmidt analysis for a given stroke. A three-phase low temperature differential double acting free membrane architecture has been built and tested. The experimental results are compared with the model and a satisfactory agreement is obtained. The stroke and operating frequency are predicted with less than 2% error, whereas the output power discrepancy is about 30%. Finally, some optimization routes are suggested to improve the design and maximize the performance, aiming at waste heat recovery applications
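    The core idea of the equivalent-network approach, mapping the mass-spring-damper dynamics of a moving part onto an electrical impedance and reading the operating frequency off its resonance, can be sketched as follows. The numerical parameter values are hypothetical, not those of the engine in the paper.

```python
import math

# Hypothetical lumped parameters of one moving membrane, mapped to
# electrical analogues (force-voltage analogy): mass -> inductance,
# compliance -> capacitance, friction -> resistance.
m = 0.05   # moving mass [kg]
k = 2.0e3  # spring stiffness [N/m]
c = 0.8    # friction coefficient [N s/m]

def impedance(f):
    """Series 'RLC' impedance of the mass-spring-damper analogue
    at frequency f [Hz]: R + j(wL - 1/(wC)) with L=m, C=1/k, R=c."""
    w = 2 * math.pi * f
    return complex(c, w * m - k / w)

# A free-piston engine tends to operate near the mechanical resonance,
# where the reactive part of the impedance vanishes.
f_res = min((0.1 * i for i in range(10, 2000)),
            key=lambda f: abs(impedance(f).imag))
print(round(f_res, 1))  # close to sqrt(k/m) / (2*pi)
```

    Solving the full network, as the paper does, additionally yields the steady-state stroke and the thermodynamic performance; this fragment only shows how the operating frequency emerges from the spring-mass properties.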

  22. Capturing ecology in modeling approaches applied to environmental risk assessment of endocrine active chemicals in fish.

    Science.gov (United States)

    Mintram, Kate S; Brown, A Ross; Maynard, Samuel K; Thorbek, Pernille; Tyler, Charles R

    2018-02-01

    Endocrine active chemicals (EACs) are widespread in freshwater environments and both laboratory and field based studies have shown reproductive effects in fish at environmentally relevant exposures. Environmental risk assessment (ERA) seeks to protect wildlife populations and prospective assessments rely on extrapolation from individual-level effects established for laboratory fish species to populations of wild fish using arbitrary safety factors. Population susceptibility to chemical effects, however, depends on exposure risk, physiological susceptibility, and population resilience, each of which can differ widely between fish species. Population models have significant potential to address these shortfalls and to include individual variability relating to life-history traits, demographic and density-dependent vital rates, and behaviors which arise from inter-organism and organism-environment interactions. Confidence in population models has recently resulted in the EU Commission stating that results derived from reliable models may be considered when assessing the relevance of adverse effects of EACs at the population level. This review critically assesses the potential risks posed by EACs for fish populations, considers the ecological factors influencing these risks and explores the benefits and challenges of applying population modeling (including individual-based modeling) in ERA for EACs in fish. We conclude that population modeling offers a way forward for incorporating greater environmental relevance in assessing the risks of EACs for fishes and for identifying key risk factors through sensitivity analysis. Individual-based models (IBMs) allow for the incorporation of physiological and behavioral endpoints relevant to EAC exposure effects, thus capturing both direct and indirect population-level effects.
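    As a toy illustration of the individual-based modelling (IBM) idea discussed above, the sketch below tracks individual fish whose survival and fecundity are affected by an exposure parameter, with density-dependent recruitment. All vital rates are invented for illustration and are not taken from the review.

```python
import random

random.seed(1)

def simulate_population(exposure, years=20, n0=200, capacity=500):
    """Minimal individual-based sketch: each fish is represented by its
    age; survival, maturity and fecundity rules are hypothetical.
    EAC exposure (0..1) scales down per-capita fecundity."""
    fish = [random.randint(0, 4) for _ in range(n0)]  # initial ages
    history = [len(fish)]
    for _ in range(years):
        # age the fish and apply annual survival; fish older than 6 die
        survivors = [a + 1 for a in fish if a < 6 and random.random() < 0.6]
        # mature fish (age >= 2) each produce ~2 recruits, reduced by exposure
        expected = sum(2.0 * (1.0 - exposure) for a in survivors if a >= 2)
        # Beverton-Holt-style density dependence on recruitment
        recruits = int(expected / (1.0 + len(survivors) / capacity))
        fish = survivors + [0] * recruits
        history.append(len(fish))
    return history

control = simulate_population(exposure=0.0)
exposed = simulate_population(exposure=0.8)
print(control[-1], exposed[-1])
```

    Even this caricature shows the point made in the abstract: an individual-level reduction in fecundity propagates through density dependence and age structure into a population-level decline, which a per-individual safety factor alone would not reveal.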

  23. Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach

    Science.gov (United States)

    Denolle, M.; Van Houtte, C.

    2017-12-01

    Stress drops calculated from source spectral studies currently show larger variability than is implied by empirical ground motion models. One potential origin of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw 7.1 Kumamoto, Japan, earthquake, and to other global, moderate-magnitude, strike-slip earthquakes between Mw 5 and Mw 7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best-fit value of 1.6, the obtained fc would be as large as twice its true value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global strike-slip earthquakes, and of its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
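    The bias described in the abstract (overestimating fc when the falloff rate is fixed at 2) can be reproduced with a small synthetic experiment: generate a noise-free spectrum with n = 1.6, then fit the corner frequency by grid search with n fixed versus n free. The spectral model is a simplified Brune-style form and the grids are chosen only for illustration, not taken from the study.

```python
import math

TRUE_FC, TRUE_N = 1.0, 1.6  # hypothetical "true" source parameters

def spectrum(f, fc, n, omega0=1.0):
    """Simplified Brune-style source spectrum with corner frequency fc
    and high-frequency falloff rate n."""
    return omega0 / (1.0 + (f / fc) ** n)

# log-spaced frequency band (~0.05 to ~190 Hz), noise-free synthetic data
freqs = [0.05 * 1.15 ** i for i in range(60)]
data = [spectrum(f, TRUE_FC, TRUE_N) for f in freqs]

def misfit(fc, n):
    """Sum of squared log-residuals between model and data."""
    return sum((math.log(spectrum(f, fc, n)) - math.log(d)) ** 2
               for f, d in zip(freqs, data))

fc_grid = [0.01 * i for i in range(50, 301)]  # candidate fc in [0.5, 3.0]

# fit with the falloff rate wrongly fixed at 2, then with n free
fc_fixed_n2 = min(fc_grid, key=lambda fc: misfit(fc, 2.0))
fc_free = min(((fc, n) for fc in fc_grid for n in (1.4, 1.6, 1.8, 2.0)),
              key=lambda p: misfit(*p))[0]

print(fc_fixed_n2, fc_free)  # the fixed-n fit overestimates fc
```

    With n free the fit recovers the true corner frequency, while forcing n = 2 pushes fc upward to compensate for the steeper high-frequency decay, the same direction of bias reported for the Kumamoto earthquake.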

  24. Economic and ecological impacts of bioenergy crop production—a modeling approach applied in Southwestern Germany

    Directory of Open Access Journals (Sweden)

    Hans-Georg Schwarz-v. Raumer

    2017-03-01

    This paper considers scenarios of cultivating energy crops in the German federal state of Baden-Württemberg to identify potentials and limitations of a sustainable bioenergy production. Trade-offs are analyzed among income and production structure in agriculture, bioenergy crop production, greenhouse gas emissions, and the interests of soil, water and species habitat protection. An integrated modelling approach (IMA) was implemented, coupling ecological and economic models in a model chain. IMA combines the Economic Farm Emission Model (EFEM; key input: parameter sets on farm production activities), the Environmental Policy Integrated Climate model (EPIC; key input: parameter sets on environmental cropping effects) and GIS geo-processing models. EFEM is a supply model that maximizes total gross margins at farm level with simultaneous calculation of greenhouse gas emissions from agricultural production. Calculations by EPIC result in estimates for soil erosion by water, nitrate leaching, soil organic carbon and greenhouse gas emissions from soil. GIS routines provide land suitability analyses and scenario settings concerning nature conservation, and habitat models for target species help to enable spatially explicit results. The model chain is used to calculate scenarios representing different intensities of energy crop cultivation. To design scenarios which are detailed and in step with practice, comprehensive data research as well as fact and effect analyses were carried out. The scenarios indicate that, not in general but for specific farm types, the energy crop share increases sharply if not restricted, leading to an increase in income. This, however, leads to a significant increase in soil erosion by water, nitrate leaching and greenhouse gas emissions. It has to be expected that an extension of nature conservation leads to an intensification of the remaining grassland and of the arable land that were not part of the nature conservation measures...

  25. Applying an Ontology-based Modeling Approach to Cultural Heritage Systems

    Directory of Open Access Journals (Sweden)

    POPOVICI, D.-M.

    2011-08-01

    Any virtual environment (VE) built in a classical way is dedicated to a very specific domain. Its modification, or even its adaptation to another domain, requires expensive human intervention measured in time and money. In effect, the product, that is the VE, returns to the first phases of the development process. In a previous work we proposed an approach that combines domain ontologies and conceptual modeling to construct more accurate VEs. Our method is based on the description of the domain knowledge in a standard format and the assisted creation of the VE using these pieces of knowledge. This permits the explanation, within the virtual reality (VR) simulation, of the semantics of the whole context and of each object. This knowledge may then be transferred to the public users. In this paper we prove the effectiveness of our method on the construction process of a VE that simulates the organization of a Greco-Roman colony situated on the Black Sea coast and the economic and social activities of its people.

  26. A NURBS-based finite element model applied to geometrically nonlinear elastodynamics using a corotational approach

    KAUST Repository

    Espath, L. F R; Braun, Alexandre Luis; Awruch, Armando Miguel; Dalcin, Lisandro

    2015-01-01

    A numerical model to deal with nonlinear elastodynamics involving large rotations within the framework of a finite element method based on NURBS (Non-Uniform Rational B-Spline) basis functions is presented. A comprehensive kinematical description using a corotational approach and an orthogonal tensor given by the exact polar decomposition is adopted. The state equation is written in terms of corotational variables according to the hypoelastic theory, relating the Jaumann derivative of the Cauchy stress to the Eulerian strain rate. The generalized-α (Gα) method and the Generalized Energy-Momentum Method with an additional parameter (GEMM+ξ) are employed in order to obtain a stable and controllable dissipative time-stepping scheme with algorithmic conservative properties for nonlinear dynamic analyses. The main contribution is to show that the energy-momentum conservation properties and numerical stability may be improved once a NURBS-based FEM is used for the spatial discretization. It is also shown that high continuity can postpone the numerical instability when GEMM+ξ with consistent mass is employed; likewise, increasing the continuity class yields a decrease in the numerical dissipation. A parametric study is carried out in order to show the stability and energy budget in terms of several properties such as continuity class, spectral radius, and lumped as well as consistent mass matrices.
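    The NURBS basis at the heart of such a discretization can be evaluated with the Cox–de Boor recursion; the sketch below evaluates a quadratic NURBS curve (the standard quarter-circle construction) at a single parameter value. It illustrates only the basis evaluation, not the corotational formulation of the paper.

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: i-th B-spline basis function of degree p
    at parameter u (valid for u strictly inside the knot range)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(ctrl, weights, knots, p, u):
    """Evaluate a NURBS curve point as a rational combination of
    weighted control points."""
    num = [0.0] * len(ctrl[0])
    den = 0.0
    for i, (pt, w) in enumerate(zip(ctrl, weights)):
        b = bspline_basis(i, p, u, knots) * w
        den += b
        num = [a + b * c for a, c in zip(num, pt)]
    return [a / den for a in num]

# standard quadratic NURBS quarter circle
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
weights = [1.0, 2.0 ** 0.5 / 2.0, 1.0]
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]

pt = nurbs_point(ctrl, weights, knots, 2, 0.5)
print(pt)  # the point lies exactly on the unit circle
```

    The ability to represent conic sections exactly, as in this quarter circle, is one reason NURBS-based FEM can improve geometric fidelity over standard Lagrange elements.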

  8. Modern software approaches applied to a Hydrological model: the GEOtop Open-Source Software Project

    Science.gov (United States)

    Cozzini, Stefano; Endrizzi, Stefano; Cordano, Emanuele; Bertoldi, Giacomo; Dall'Amico, Matteo

    2017-04-01

    The GEOtop hydrological scientific package is an integrated hydrological model that simulates the heat and water budgets at and below the soil surface. It describes the three-dimensional water flow in the soil and the energy exchange with the atmosphere, considering the radiative and turbulent fluxes. Furthermore, it reproduces the highly non-linear interactions between the water and energy balance during soil freezing and thawing, and simulates the temporal evolution of snow cover, soil temperature and moisture. The core components of the package were presented in the 2.0 version (Endrizzi et al., 2014), which was released as a Free Software open-source project. However, despite the high scientific quality of the project, a modern software engineering approach was still missing. This weakness hindered its scientific potential and its use both as a standalone package and, more importantly, in an integrated way with other hydrological software tools. In this contribution we present our recent software re-engineering efforts to create a robust and stable scientific software package open to the hydrological community, easily usable by researchers and experts, and interoperable with other packages. The activity takes as its starting point the scientifically tested and published 2.0 version. This version, together with several test cases based on recently published or available GEOtop applications (Cordano and Rigon, 2013, WRR; Kollet et al., 2016, WRR), provides the baseline code and a number of reference results as benchmarks. Comparison and scientific validation can then be carried out for each re-engineering step applied to the package. To keep track of every single change, the package is published in its own GitHub repository, geotopmodel.github.io/geotop/, under the GPL v3.0 license. A Continuous Integration mechanism, by means of Travis-CI, has been enabled on the repository's master and main development branches. The usage of CMake configuration tool

  9. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    International Nuclear Information System (INIS)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-01-01

    necessary to have accurate spectrum information about the source-detector system. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. Conclusions: The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity and applied directly to the projections without taking the negative logarithm. Compared to approaches based on linear forward models and to BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
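
The beam-hardening nonlinearity that such full-spectral models account for can be seen in a toy polychromatic Beer-Lambert computation (the two-bin spectrum and attenuation values below are invented for illustration):

```python
import math

def poly_projection(thickness_cm, spectrum, mu):
    """Detected intensity for a polychromatic beam through one material:
    I = sum_k S_k * exp(-mu_k * L), summed over energy bins."""
    return sum(s * math.exp(-m * thickness_cm) for s, m in zip(spectrum, mu))

# hypothetical two-bin spectrum (normalised) and attenuation coefficients
spectrum = [0.6, 0.4]   # low-energy and high-energy photon fractions
mu = [0.5, 0.2]         # cm^-1; attenuation is lower at higher energy

i0 = poly_projection(0.0, spectrum, mu)
# negative-log projections for increasing thickness
p = [-math.log(poly_projection(L, spectrum, mu) / i0) for L in (1.0, 2.0, 4.0)]
# For a monochromatic beam, p(L) would be exactly linear in L; here p(4) < 4*p(1)
# because the beam hardens, which is the artifact the full-spectral model avoids.
```

This is why simply taking the negative logarithm and applying a linear reconstruction introduces the beam-hardening artifact (BHA) that the approach above sidesteps.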

  10. A practical approach to parameter estimation applied to model predicting heart rate regulation

    DEFF Research Database (Denmark)

    Olufsen, Mette; Ottesen, Johnny T.

    2013-01-01

    Mathematical models have long been used for prediction of dynamics in biological systems. Recently, several efforts have been made to render these models patient specific. One way to do so is to employ techniques to estimate parameters that enable model-based prediction of observed quantities....... Knowledge of the variation in parameters within and between groups of subjects has the potential to provide insight into biological function. Often it is not possible to estimate all parameters in a given model, in particular if the model is complex and the data are sparse. However, it may be possible to estimate...... a subset of model parameters, reducing the complexity of the problem. In this study, we compare three methods that allow identification of parameter subsets that can be estimated given a model and a set of data. These methods will be used to estimate patient-specific parameters in a model predicting...
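
The idea of selecting an estimable parameter subset can be sketched with finite-difference sensitivities and a greedy orthogonalisation. This is one common subset-selection heuristic, not necessarily one of the three methods compared in the paper, and the model and numbers are illustrative:

```python
import math

def sensitivities(model, params, times, eps=1e-6):
    """Finite-difference relative sensitivities: column j holds
    (d y(t_i) / d p_j) * p_j. Assumes all nominal parameters are nonzero."""
    base = [model(params, t) for t in times]
    cols = []
    for j, pj in enumerate(params):
        pert = list(params)
        pert[j] = pj + eps * abs(pj)
        yp = [model(pert, t) for t in times]
        cols.append([(a - b) / (eps * abs(pj)) * pj for a, b in zip(yp, base)])
    return cols  # one column per parameter

def rank_identifiable(cols):
    """Greedy orthogonalisation: repeatedly pick the column with the largest
    residual norm after projecting out the columns already selected. A tiny
    final score flags a parameter that is not separately identifiable."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    residual = [list(c) for c in cols]
    order, scores = [], []
    for _ in range(len(cols)):
        norms = [math.sqrt(dot(r, r)) if i not in order else -1.0
                 for i, r in enumerate(residual)]
        best = norms.index(max(norms))
        order.append(best)
        scores.append(norms[best])
        b = residual[best]
        nb = dot(b, b)
        if nb > 0:
            for i, r in enumerate(residual):
                if i not in order:
                    coef = dot(r, b) / nb
                    residual[i] = [x - coef * y for x, y in zip(r, b)]
    return order, scores

# Toy model with a perfectly correlated pair: p1 and p3 multiply the same term,
# so only their sum is identifiable from the data.
model = lambda p, t: (p[0] + p[2]) * math.exp(-p[1] * t)
times = [0.1 * i for i in range(20)]
order, scores = rank_identifiable(sensitivities(model, [2.0, 1.0, 1.0], times))
```

The correlated parameter ends up last in the ranking with a near-zero score, which is exactly the signal used to drop it from the estimated subset.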

  11. Bio-economic modeling of water quality improvements using a dynamic applied general equilibrium approach

    NARCIS (Netherlands)

    Dellink, R.; Brouwer, R.; Linderhof, V.G.M.; Stone, K.

    2011-01-01

    An integrated bio-economic model is developed to assess the impacts of pollution reduction policies on water quality and the economy. Emission levels of economic activities to water are determined based on existing environmental accounts. These emission levels are built into a dynamic economic model

  12. A Single-column Model Ensemble Approach Applied to the TWP-ICE Experiment

    Science.gov (United States)

    Davies, L.; Jakob, C.; Cheung, K.; DelGenio, A.; Hill, A.; Hume, T.; Keane, R. J.; Komori, T.; Larson, V. E.; Lin, Y.; et al.

    2013-01-01

    Single-column models (SCMs) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the prescribed best-estimate large-scale observations. Errors in estimating the observations result in uncertainty in the modeled simulations. One method to address this uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCMs and two cloud-resolving models (CRMs). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCMs and CRMs. Differences are also apparent between the models in the ensemble-mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to forcing. The ensemble is further used to investigate cloud variables and precipitation, and identifies differences between CRMs and SCMs, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations and hence enables a more complete model investigation compared to using only the more traditional single best-estimate simulation.

  13. Regression models for categorical, count, and related variables an applied approach

    CERN Document Server

    Hoffmann, John P

    2016-01-01

    Social science and behavioral science students and researchers are often confronted with data that are categorical, count a phenomenon, or have been collected over time. Sociologists examining the likelihood of interracial marriage, political scientists studying voting behavior, criminologists counting the number of offenses people commit, health scientists studying the number of suicides across neighborhoods, and psychologists modeling mental health treatment success are all interested in outcomes that are not continuous. Instead, they must measure and analyze these events and phenomena in a discrete manner. This book provides an introduction and overview of several statistical models designed for these types of outcomes--all presented with the assumption that the reader has only a good working knowledge of elementary algebra and has taken introductory statistics and linear regression analysis. Numerous examples from the social sciences demonstrate the practical applications of these models. The chapte...

  14. Chemical, spectroscopic, and ab initio modelling approach to interfacial reactivity applied to anion retention by siderite

    International Nuclear Information System (INIS)

    Badaut, V.

    2010-07-01

    Among the many radionuclides contained in high-level nuclear waste, ⁷⁹Se was identified as a potential threat to the safety of long-term underground storage. However, siderite (FeCO₃) is known to form upon corrosion of the waste container, and the impact of this mineral on the fate of selenium had not been accounted for. In this work, the interactions between selenium oxyanions - selenate and selenite - and siderite were investigated. To this end, both experimental characterizations (solution chemistry, X-ray Absorption Spectroscopy - XAS) and theoretical studies (ab initio modelling using Density Functional Theory - DFT) were performed. Selenite and selenate (≤ 10⁻³ M) retention experiments with siderite suspensions (75 g/L) at neutral pH in a reducing glovebox (5% H₂) showed that selenite is quantitatively immobilized by siderite after 48 h of reaction time, whereas selenate is only partly immobilized after 10 days. In the selenite case, XAS showed that the immobilized selenium is initially present as Se(IV), probably sorbed on the siderite surface. After 10 days of reaction, selenite ions are quantitatively reduced and form poorly crystalline elemental selenium. Selenite retention and reduction kinetics are therefore distinct. On the other hand, the fraction of immobilized selenate retained in the solid fraction does not appear to be significantly reduced over the probed timescale (10 days). For a better understanding of the reduction mechanism of selenite ions by siderite, the properties of the bulk and of perfect surfaces of siderite were modelled using DFT. We suggest that the properties of the valence electrons can be correctly described only if the symmetry of the ground-state electronic density is lower than the experimental crystallographic symmetry. We then show that the retention of simple molecules such as O₂ or H₂O on siderite and magnesite (101̄4) perfect surfaces (the perfect cleavage plane, whose surface energy is the lowest according to DFT) can be modelled with

  15. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches

    International Nuclear Information System (INIS)

    Berge-Thierry, C.

    2007-05-01

    The defence of the 'Habilitation à Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. This synthesis covers the two years spent as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques of the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of the seismic risk topic, and particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of the plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for specific structures (conventional structures or high-risk facilities), seismic hazard assessment needs: to identify and locate the seismic sources (zones or faults), to characterize their activity, and to evaluate the seismic motion that the structure has to withstand (including site effects). I specialized in the field of numerical strong-motion prediction using high-frequency seismic source modelling, and joining IRSN allowed me to work rapidly on the different tasks of seismic hazard assessment. Thanks to expertise practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of the input ground motion for designing or verifying the stability of structures. (author)

  16. Applied stochastic modelling

    CERN Document Server

    Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P

    2008-01-01

    Introduction and Examples Introduction Examples of data sets Basic Model Fitting Introduction Maximum-likelihood estimation for a geometric model Maximum-likelihood for the beta-geometric model Modelling polyspermy Which model? What is a model for? Mechanistic models Function Optimisation Introduction MATLAB: graphs and finite differences Deterministic search methods Stochastic search methods Accuracy and a hybrid approach Basic Likelihood Tools Introduction Estimating standard errors and correlations Looking at surfaces: profile log-likelihoods Confidence regions from profiles Hypothesis testing in model selection Score and Wald tests Classical goodness of fit Model selection bias General Principles Introduction Parameterisation Parameter redundancy Boundary estimates Regression and influence The EM algorithm Alternative methods of model fitting Non-regular problems Simulation Techniques Introduction Simulating random variables Integral estimation Verification Monte Carlo inference Estimating sampling distributi...

  17. An approach to computing discrete adjoints for MPI-parallelized models applied to Ice Sheet System Model 4.11

    Directory of Open Access Journals (Sweden)

    E. Larour

    2016-11-01

    Full Text Available Within the framework of sea-level rise projections, there is a strong need for hindcast validation of the evolution of polar ice sheets in a way that tightly matches observational records (from radar, gravity, and altimetry observations mainly). However, the computational requirements for making hindcast reconstructions possible are severe and rely mainly on the evaluation of the adjoint state of transient ice-flow models. Here, we look at the computation of adjoints in the context of the NASA/JPL/UCI Ice Sheet System Model (ISSM), written in C++ and designed for parallel execution with MPI. We present the adaptations required in the way the software is designed and written, but also generic adaptations in the tools facilitating the adjoint computations. We concentrate on the use of operator overloading coupled with the AdjoinableMPI library to achieve the adjoint computation of the ISSM. We present a comprehensive approach to (1) carry out type changing through the ISSM, hence facilitating operator overloading, (2) bind to external solvers such as MUMPS and GSL-LU, and (3) handle MPI-based parallelism to scale the capability. We demonstrate the success of the approach by computing sensitivities of hindcast metrics such as the misfit to observed records of surface altimetry on the northeastern Greenland Ice Stream, or the misfit to observed records of surface velocities on Upernavik Glacier, central West Greenland. We also provide metrics for the scalability of the approach, and the expected performance. This approach has the potential to enable a new generation of hindcast-validated projections that make full use of the wealth of datasets currently being collected, or already collected, in Greenland and Antarctica.
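
The operator-overloading strategy described above can be illustrated with a minimal reverse-mode differentiation type in Python. This is a toy analogue only; ISSM does the type change in C++, and the MPI layer is handled by the AdjoinableMPI library mentioned in the abstract:

```python
class Var:
    """Minimal reverse-mode automatic differentiation via operator
    overloading: each arithmetic operation records its parents and the
    local partial derivatives, and backward() accumulates adjoints."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # (parent_var, local_derivative) pairs
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    __radd__ = __add__
    __rmul__ = __mul__

    def backward(self, seed=1.0):
        # depth-first accumulation of adjoints along every path of the tape
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# f(x, y) = x*y + x  ->  df/dx = y + 1, df/dy = x
x, y = Var(3.0), Var(4.0)
f = x * y + x
f.backward()
```

Replacing the model's floating-point type with such an overloaded type is what "type changing through the ISSM" refers to; production C++ AD tools implement the same idea far more efficiently.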

  18. Applied impulsive mathematical models

    CERN Document Server

    Stamova, Ivanka

    2016-01-01

    Using the theory of impulsive differential equations, this book focuses on mathematical models which reflect current research in biology, population dynamics, neural networks and economics. The authors provide the basic background from the fundamental theory and give a systematic exposition of recent results related to the qualitative analysis of impulsive mathematical models. Consisting of six chapters, the book presents many applicable techniques, making them available in a single source easily accessible to researchers interested in mathematical models and their applications. Serving as a valuable reference, this text is addressed to a wide audience of professionals, including mathematicians, applied researchers and practitioners.

  19. A steady state thermal duct model derived by fin-theory approach and applied on an unglazed solar collector

    Energy Technology Data Exchange (ETDEWEB)

    Stojanovic, B.; Hallberg, D.; Akander, J. [Building Materials Technology, KTH Research School, Centre for Built Environment, University of Gaevle, SE-801 76 Gaevle (Sweden)

    2010-10-15

    This paper presents the thermal modelling of an unglazed solar collector (USC) flat panel, with the aim of producing a detailed yet swift steady-state thermal model. The model is analytical, one-dimensional (1D) and derived by a fin-theory approach. It represents the thermal performance of an arbitrary duct with applied boundary conditions equal to those of a flat panel collector. The derived model is meant to be used for efficient optimisation and design of USC flat panels (or similar applications), as well as for detailed thermal analysis of temperature fields and heat transfer distributions/variations at steady-state conditions, without requiring a large amount of computational power and time. Detailed surface temperatures are necessary for durability studies of the surface coating, and hence for assessing the effect of coating degradation on USC and system performance. The model's accuracy and proficiency have been benchmarked against a detailed three-dimensional Finite Difference Model (3D FDM) and two simpler 1D analytical models. Results from the benchmarking test show that the fin-theory model has excellent capabilities for calculating energy performance and fluid temperature profiles, as well as detailed material temperature fields and heat transfer distributions/variations (at steady-state conditions), while still being suitable for component analysis in conjunction with system simulations, as the model is analytical. The accuracy of the model is high in comparison to the 3D FDM (the prime benchmark), as long as the fin-theory assumption prevails (no, or negligible, temperature gradient in the fin perpendicular to the fin length). Comparison with the other models also shows that when the USC duct material has a high thermal conductivity, the cross-sectional material temperature adopts an isothermal state (for the assessed USC duct geometry), which makes the 1D isothermal model valid. When the USC duct material has a low thermal conductivity, the heat transfer
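
The fin-theory approach can be illustrated with the classic straight fin with an adiabatic tip; the sketch below compares the analytic profile with a finite-difference solution and computes the standard fin efficiency. This is a textbook illustration of the method, not the paper's USC duct model:

```python
import math

def fin_profile_analytic(m, L, n=51):
    """Dimensionless temperature theta/theta_b = cosh(m(L-x))/cosh(mL)
    for a straight fin of length L with an adiabatic tip."""
    xs = [L * i / (n - 1) for i in range(n)]
    return xs, [math.cosh(m * (L - x)) / math.cosh(m * L) for x in xs]

def fin_profile_fd(m, L, n=51):
    """Finite-difference solution of theta'' = m^2 * theta with
    theta(0) = 1 and theta'(L) = 0, solved by the Thomas algorithm."""
    h = L / (n - 1)
    B = 2.0 + (m * h) ** 2
    # unknowns theta_1 .. theta_{n-1}; theta_0 = 1 is the base condition
    a = [0.0] * (n - 1)   # sub-diagonal
    b = [0.0] * (n - 1)   # diagonal
    c = [0.0] * (n - 1)   # super-diagonal
    d = [0.0] * (n - 1)   # right-hand side
    for i in range(n - 2):
        a[i], b[i], c[i] = 1.0, -B, 1.0
    d[0] = -1.0           # moves the known theta_0 = 1 to the RHS
    # adiabatic tip via a mirrored ghost node: theta_{n} = theta_{n-2}
    a[n - 2], b[n - 2] = 2.0, -B
    # Thomas algorithm: forward sweep, then back substitution
    for i in range(1, n - 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    theta = [0.0] * (n - 1)
    theta[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        theta[i] = (d[i] - c[i] * theta[i + 1]) / b[i]
    return [1.0] + theta

m, L = 2.0, 1.0
efficiency = math.tanh(m * L) / (m * L)   # classic fin efficiency
```

The closed-form profile is what makes the 1D fin-theory model so cheap compared with a 3D FDM, while the fin parameter m encodes the convection-to-conduction balance that decides when the assumption holds.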

  20. Applied Bayesian modelling

    CERN Document Server

    Congdon, Peter

    2014-01-01

    This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU

  1. Patient perception of nursing service quality; an applied model of Donabedian's structure-process-outcome approach theory.

    Science.gov (United States)

    Kobayashi, Hideyuki; Takemura, Yukie; Kanda, Katsuya

    2011-09-01

    Nursing is a labour-intensive field, and an extensive amount of latent information exists to aid in evaluating the quality of nursing service, with patients' experiences being the primary focus of such evaluations. To effect further improvement in nursing as well as in medical care, Donabedian's structure-process-outcome approach has been applied. The aim was to classify and confirm patients' specific experiences with regard to nursing service, based on Donabedian's structure-process-outcome model, for improving the quality of nursing care. Items were compiled from existing scales and assigned to structure, process or outcome in Donabedian's model through discussion among expert nurses and pilot data collection. With regard to comfort, surroundings were classified as structure (e.g. accessibility to nurses, disturbance); with regard to patient-practitioner interaction, patient participation was classified as process (e.g. expertise and skill, patient decision-making); and with regard to changes in patients, satisfaction was classified as outcome (e.g. information support, overall satisfaction). Patient inquiry was carried out using the finalized questionnaire in general wards of Japanese hospitals in 2005-2006. Reliability and validity were tested using psychometric methods. Data from 1,810 patients (mean age: 59.7 years; mean length of stay: 23.7 days) were analysed. Internal consistency reliability was supported (α = 0.69-0.96); in factor analysis, the structure items aggregated to one factor and overall satisfaction under outcome aggregated to another, while the remaining outcome and process items were distributed across two factors. Inter-scale correlations (r = 0.442-0.807) supported the construct validity of the structure-process-outcome approach. All structure items were worded negatively, as they dealt with basic conditions under the Japanese universal health care system, and were regarded as related to concepts of dissatisfaction and no
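
The internal-consistency figures quoted above (α = 0.69-0.96) are Cronbach's alpha values; a minimal computation on invented questionnaire responses shows how the statistic is formed:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal-consistency reliability.
    `items` is a list of columns, one per questionnaire item;
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(items)
    n = len(items[0])
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(items[j][i] for j in range(k)) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# hypothetical 5-point responses from 8 patients to three related items
items = [
    [4, 5, 3, 2, 4, 5, 1, 3],
    [4, 4, 3, 2, 5, 5, 2, 3],
    [5, 5, 2, 1, 4, 4, 2, 2],
]
alpha = cronbach_alpha(items)
```

When items move together, the variance of the totals dominates the sum of the item variances and alpha approaches 1, which is why values above roughly 0.7 are read as acceptable internal consistency.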

  2. Multi-scale data assimilation approaches and error characterisation applied to the inverse modelling of atmospheric constituent emission fields

    International Nuclear Information System (INIS)

    Koohkan, Mohammad Reza

    2012-01-01

    Data assimilation in geophysical sciences aims at optimally estimating the state of the system or some parameters of the system's physical model. To do so, data assimilation needs three types of information: observations and background information, a physical/numerical model, and some statistical description that prescribes uncertainties to each component of the system. In my dissertation, new methodologies of data assimilation are used in atmospheric chemistry and physics: the joint use of 4D-Var with a sub-grid statistical model to consistently account for representativeness errors, accounting for multiple scales in the BLUE estimation principle, and a better estimation of prior errors using objective estimation of hyper-parameters. These three approaches are specifically applied to inverse modelling problems focusing on the emission fields of tracers or pollutants. First, in order to estimate the emission inventories of carbon monoxide over France, in-situ stations, which are impacted by representativeness errors, are used. A sub-grid model is introduced and coupled with 4D-Var to reduce the representativeness error. Indeed, the results of inverse modelling showed that the 4D-Var routine alone was not fit to handle the representativeness issues. The coupled data assimilation system led to a much better representation of the CO concentration variability, with a significant improvement of statistical indicators and a more consistent estimation of the CO emission inventory. Second, the potential of the IMS (International Monitoring System) radionuclide network is evaluated for the inversion of an accidental source. In order to assess the performance of the global network, a multi-scale adaptive grid is optimised using a criterion based on the degrees of freedom for the signal (DFS). The results show that several specific regions remain poorly observed by the IMS network. Finally, the inversion of the surface fluxes of Volatile Organic Compounds
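
The BLUE estimation principle mentioned above reduces, in the scalar case with a single direct observation (H = 1), to a precision-weighted average of background and observation; the numbers below are illustrative:

```python
def blue_scalar(xb, var_b, y, var_o):
    """One-dimensional BLUE analysis: optimally combine a background
    estimate xb (error variance var_b) with an unbiased observation y
    (error variance var_o) of the same quantity."""
    gain = var_b / (var_b + var_o)       # scalar Kalman gain
    xa = xb + gain * (y - xb)            # analysis state
    var_a = (1.0 - gain) * var_b         # analysis error variance
    return xa, var_a

# hypothetical prior emission estimate and a more precise observation of it
xa, var_a = blue_scalar(xb=10.0, var_b=4.0, y=14.0, var_o=1.0)
```

The analysis variance is always smaller than both input variances, which is the formal sense in which assimilating an observation reduces uncertainty; the multi-scale work above generalises the same formula to vector states with scale-dependent error statistics.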

  3. Quantitative assessment of key parameters in qualitative vulnerability methods applied in karst systems based on an integrated numerical modelling approach

    Science.gov (United States)

    Doummar, Joanna; Kassem, Assaad

    2017-04-01

    In the framework of a three-year PEER (USAID/NSF) funded project, flow in a karst system in Lebanon (Assal) dominated by snow and semi-arid conditions was simulated and successfully calibrated using an integrated numerical model (MIKE-She 2016) based on high-resolution input data and a detailed catchment characterization. Point-source infiltration and fast flow pathways were simulated by a bypass function and a highly conductive lens, respectively. The approach consisted of identifying all the factors used in qualitative vulnerability methods (COP, EPIK, PI, DRASTIC, GOD) applied in karst systems and assessing their influence on recharge signals in the different hydrological karst compartments (atmosphere, unsaturated zone and saturated zone) based on the integrated numerical model. These parameters are usually attributed different weights according to their estimated impact on groundwater vulnerability. The aim of this work is to quantify the importance of each of these parameters and to outline parameters that are not accounted for in standard methods but that might play a role in the vulnerability of a system. The spatial distribution of the detailed evapotranspiration, infiltration and recharge signals from the atmosphere to the unsaturated zone to the saturated zone was compared and contrasted among different surface settings and under varying flow conditions (e.g., varying slopes, land cover, precipitation intensity and soil properties, as well as point-source infiltration). Furthermore, a sensitivity analysis of individual or coupled major parameters allows quantifying their impact on recharge and, indirectly, on vulnerability. The preliminary analysis yields a new methodology that accounts for most of the factors influencing vulnerability while refining the weights attributed to each of them, based on a quantitative approach.
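
The weighting question can be made concrete with normalised one-at-a-time (OAT) sensitivity indices on a hypothetical lumped recharge function; the model form and coefficients below are invented for illustration and are not the MIKE-She setup:

```python
def oat_sensitivity(f, nominal, rel_step=0.1):
    """Normalised one-at-a-time sensitivity indices: relative change of the
    output per relative change of each input, evaluated at the nominal point."""
    y0 = f(nominal)
    indices = {}
    for name, value in nominal.items():
        pert = dict(nominal)
        pert[name] = value * (1.0 + rel_step)
        indices[name] = ((f(pert) - y0) / y0) / rel_step
    return indices

# hypothetical lumped recharge model: recharge grows with precipitation and
# soil permeability and falls off with slope (coefficients are illustrative)
def recharge(p):
    return p["precip"] * p["permeability"] / (1.0 + 0.5 * p["slope"])

nominal = {"precip": 800.0, "permeability": 0.3, "slope": 0.2}
s = oat_sensitivity(recharge, nominal)
```

Ranking the magnitudes of such indices is one quantitative way to refine the weights that qualitative methods like COP or EPIK assign by expert judgement; a full analysis would also vary parameters jointly to capture interactions.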

  4. An Overview of Modeling Approaches Applied to Aggregation-Based Fleet Management and Integration of Plug-in Electric Vehicles †

    DEFF Research Database (Denmark)

    You, Shi; Hu, Junjie; Ziras, Charalampos

    2016-01-01

    The design and implementation of management policies for plug-in electric vehicles (PEVs) need to be supported by a holistic understanding of the functional processes, their complex interactions, and their response to various changes. Models developed to represent different functional processes...... and systems are seen as useful tools to support the related studies for different stakeholders in a tangible way. This paper presents an overview of modeling approaches applied to support aggregation-based management and integration of PEVs from the perspective of fleet operators and grid operators......, respectively. We start by explaining a structured modeling approach, i.e., a flexible combination of process models and system models, applied to different management and integration studies. A state-of-the-art overview of modeling approaches applied to represent several key processes, such as charging...

  5. An Overview of Modeling Approaches Applied to Aggregation-Based Fleet Management and Integration of Plug-in Electric Vehicles †

    Directory of Open Access Journals (Sweden)

    Shi You

    2016-11-01

    Full Text Available The design and implementation of management policies for plug-in electric vehicles (PEVs) need to be supported by a holistic understanding of the functional processes, their complex interactions, and their response to various changes. Models developed to represent different functional processes and systems are seen as useful tools to support the related studies for different stakeholders in a tangible way. This paper presents an overview of modeling approaches applied to support aggregation-based management and integration of PEVs from the perspective of fleet operators and grid operators, respectively. We start by explaining a structured modeling approach, i.e., a flexible combination of process models and system models, applied to different management and integration studies. A state-of-the-art overview of modeling approaches applied to represent several key processes, such as charging management, and key systems, such as the PEV fleet, is then presented, along with a detailed description of different approaches. Finally, we discuss several considerations that need to be well understood during the modeling process in order to assist modelers and model users in the appropriate decisions of using existing, or developing their own, solutions for further applications.

  6. Geological modeling by an indicator kriging approach applied to a limestone deposit in Indiara city - Goiás

    Directory of Open Access Journals (Sweden)

    Paulo Elias Carneiro Pereira

    Full Text Available The mineral exploration activity consists of a set of successive, interdependent stages whose main goal is to discover and subsequently evaluate a mineral deposit for the feasibility of its extraction. This process involves establishing the shape, dimensions and grades for eventual production. Geological modeling determines the orebody's probable shape in the subsoil, which can be done by two approaches: vertical sections (deterministic methods) or geostatistical methods. The latter approach is currently preferred, as it is a more accurate and therefore more reliable alternative for establishing the physical shape of orebodies, especially where geologic boundaries are soft and/or sample information is widely spaced. This study uses the concept of indicator kriging (IK) to model the geologic boundaries of a limestone deposit located at Indiara city, Goiás State, Brazil. In general, the results indicated good agreement with the samples. However, there are appreciable differences, particularly in lithological domains with a small number of samples relative to the total amount sampled. The results therefore showed a need for additional sampling to better delineate the geological contacts, especially between carbonate and non-carbonate rocks. Uncertainty maps confirmed this necessity and also indicated potential sites for future sampling; information that would not be obtained using deterministic methods.
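
A minimal sketch of the indicator approach: code the lithology as 0/1 indicators and apply ordinary kriging, whose estimate can be read as the probability of limestone at an unsampled location. The 1-D data and the exponential covariance below are assumptions for illustration, not the deposit's fitted variogram:

```python
import math

def exp_cov(h, sill=1.0, rng=1.0):
    """Exponential covariance model C(h) = sill * exp(-3h / range)."""
    return sill * math.exp(-3.0 * h / rng)

def solve(A, b):
    """Gaussian elimination with partial pivoting for the small kriging system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ordinary_kriging(xs, vals, x0, rng=1.0):
    """Ordinary kriging estimate at x0 (weights sum to 1 via a Lagrange
    multiplier); applied to 0/1 indicators it estimates P(limestone at x0)."""
    n = len(xs)
    A = [[exp_cov(abs(xs[i] - xs[j]), rng=rng) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [exp_cov(abs(xi - x0), rng=rng) for xi in xs] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * vi for wi, vi in zip(w, vals))

# indicator data along a section: 1 = limestone, 0 = other lithology
xs = [0.0, 0.6, 0.8, 1.4]
ind = [1, 1, 0, 0]
p_mid = ordinary_kriging(xs, ind, 0.7)   # between the last 1 and the first 0
```

Thresholding the kriged probability (e.g. at 0.5) draws the soft geologic boundary, and the kriging variance is what drives the uncertainty maps mentioned above.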

  7. Applying a System Dynamics Approach for Modeling Groundwater Dynamics to Depletion under Different Economical and Climate Change Scenarios

    Directory of Open Access Journals (Sweden)

    Hamid Balali

    2015-09-01

    Full Text Available In recent decades, groundwater levels have declined in most plains of Iran owing to many factors, including climate change effects (warming and lower precipitation) and structural policies such as intensive groundwater pumping and low irrigation water prices. The objective of this study is to model groundwater dynamics to depletion under different economic policies and climate change by using a system dynamics approach. For this purpose, a dynamic hydro-economic model that simultaneously simulates farmers' economic behavior, groundwater aquifer dynamics, the study area's climatological factors and the government's economic policies related to groundwater was developed in STELLA 10.0.6. The vulnerability of the groundwater balance is forecasted under three climate scenarios (Dry, Normal and Wet) and different scenarios of irrigation water and energy pricing policies. Results show that economic policies on irrigation water and energy pricing can significantly affect groundwater exploitation and its volume balance. By increasing the irrigation water price along with the energy price, groundwater exploitation improves, to the extent that in scenarios S15 and S16 the study area's aquifer groundwater balance is positive at the end of the planning horizon, even under Dry precipitation conditions. Results also indicate that climate change can affect groundwater recharge: increases in precipitation can generally be expected to produce greater aquifer recharge rates.
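
    The stock-and-flow core of such a system dynamics model reduces to a water balance integrated over time. The sketch below uses purely illustrative numbers (not the study's calibrated values) to show how a pricing-driven drop in pumping can turn the aquifer balance positive.

    ```python
    # Minimal stock-and-flow sketch: storage integrates (recharge - pumping),
    # with pumping shrinking as the irrigation-water price grows (elasticity < 0).
    def simulate(storage, recharge, demand, price_elasticity, price_growth, years):
        history = []
        for t in range(years):
            # pumping responds to the rising water/energy price
            pumping = demand * (1.0 + price_growth * t) ** price_elasticity
            storage += recharge - pumping
            history.append(storage)
        return history

    # hypothetical "Dry" run: low recharge, rising prices curb extraction over time
    dry = simulate(storage=1000.0, recharge=40.0, demand=60.0,
                   price_elasticity=-0.5, price_growth=0.05, years=30)
    ```

    In the early years pumping exceeds recharge and storage falls; once the price effect pushes pumping below recharge, the balance turns positive, mirroring the S15/S16 behavior described above.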

  8. A Markovian Approach Applied to Reliability Modeling of Bidirectional DC-DC Converters Used in PHEVs and Smart Grids

    Directory of Open Access Journals (Sweden)

    M. Khalilzadeh

    2016-12-01

    Full Text Available In this paper, a stochastic approach is proposed for reliability assessment of bidirectional DC-DC converters, including fault-tolerant ones. This type of converter can be used in a smart DC grid feeding DC loads such as home appliances and plug-in hybrid electric vehicles (PHEVs). The reliability of bidirectional DC-DC converters is particularly important given the expected growing utilization of DC grids in the modern smart grid. Markov processes are suggested for reliability modeling and, consequently, for calculating the expected effective lifetime of bidirectional converters. A three-leg bidirectional interleaved converter using data from the Toyota Prius 2012 hybrid electric vehicle is used as a case study. In addition, the influence of the environment and ambient temperature on converter lifetime is studied. The impact of modeling the reliability of the converter and adding reliability constraints to its technical design procedure is also investigated. In order to investigate the effect of increasing the number of legs on converter lifetime, one-leg to five-leg interleaved DC-DC converters are studied, taking economic aspects into account, and the results are extrapolated to six- and seven-leg converters. The proposed method can be generalized to an arbitrary number of legs and input and output capacitors.
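
    As a hedged sketch of the Markov idea (states, rates and numbers below are illustrative, not the paper's converter data), the mean time to failure of a two-leg converter that tolerates one failed leg follows from the chain's first-passage equations.

    ```python
    # CTMC states: 2 legs up --(2*lam)--> 1 leg up --(lam)--> failed,
    # with repair: 1 leg up --(mu)--> 2 legs up. First-passage equations:
    #   T0 = 1/(2*lam) + T1
    #   T1 = 1/(lam + mu) + (mu/(lam + mu)) * T0
    def mttf_two_leg(lam, mu):
        a = 1.0 / (2.0 * lam)        # mean sojourn in the all-up state
        b = 1.0 / (lam + mu)         # mean sojourn in the degraded state
        p = mu / (lam + mu)          # probability of repair before failure
        return (a + b) / (1.0 - p)   # closed-form solution for T0

    mttf = mttf_two_leg(lam=1e-5, mu=1e-2)  # illustrative per-hour rates
    ```

    Larger chains (e.g., the five-leg case) are handled the same way by solving the corresponding linear system, and temperature effects enter by making `lam` a function of ambient temperature.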

  9. Applying the theory of planned behavior to self-report dental attendance in Norwegian adults through structural equation modelling approach.

    Science.gov (United States)

    Åstrøm, Anne N; Lie, Stein Atle; Gülcan, Ferda

    2018-05-31

    Understanding factors that affect dental attendance behavior helps in constructing effective oral health campaigns. A socio-cognitive model that adequately explains variance in regular dental attendance has yet to be validated among younger adults in Norway. Focusing on a representative sample of younger Norwegian adults, this cross-sectional study provided an empirical test of the Theory of Planned Behavior (TPB), augmented with descriptive norms and action planning, and estimated direct and indirect effects of attitudes, subjective norms, descriptive norms, perceived behavioral control and action planning on intended and self-reported regular dental attendance. Self-administered questionnaires completed by 2551 25-35-year-olds, randomly selected from the Norwegian national population registry, were used to assess socio-demographic factors and dental attendance as well as the constructs of the augmented TPB model (attitudes, subjective norms, descriptive norms, intention, action planning). A two-stage process of structural equation modelling (SEM) was used to test the augmented TPB model. Confirmatory factor analysis (CFA) confirmed the proposed correlated 6-factor measurement model after re-specification. SEM revealed that attitudes, perceived behavioral control, subjective norms and descriptive norms explained intention, with standardized regression coefficients of β = 0.70, β = 0.18, β = -0.17 and β = 0.11, respectively. Action planning (β = 0.19) predicted dental attendance behavior, with indirect effects of perceived behavioral control on behavior through action planning and through intention and action planning, respectively. The final model explained 64 and 41% of the total variance in intention and dental attendance behavior. The findings support the utility of the TPB, the expanded normative component and action planning in predicting younger adults' intended and self-reported dental attendance. Interventions targeting young adults' dental

  10. From basic physics to mechanisms of toxicity: the ``liquid drop'' approach applied to develop predictive classification models for toxicity of metal oxide nanoparticles

    Science.gov (United States)

    Sizochenko, Natalia; Rasulev, Bakhtiyor; Gajewicz, Agnieszka; Kuz'min, Victor; Puzyn, Tomasz; Leszczynski, Jerzy

    2014-10-01

    Many metal oxide nanoparticles are able to cause persistent stress to living organisms, including humans, when discharged to the environment. To understand the mechanism of metal oxide nanoparticles' toxicity and reduce the number of experiments, the development of predictive toxicity models is important. In this study, performed on a series of nanoparticles, comparative quantitative structure-activity relationship (nano-QSAR) analyses of their toxicity towards E. coli and HaCaT cells were established. A new approach for representation of nanoparticles' structure is presented. For description of the supramolecular structure of nanoparticles the ``liquid drop'' model was applied. It is expected that the novel proposed approach could be of general use for predictions related to nanomaterials. In addition, fragmental simplex descriptors and several ligand-metal binding characteristics were calculated. The developed nano-QSAR models were validated and reliably predict the toxicity of all studied metal oxide nanoparticles. Based on the comparative analysis of contributing properties in both models, the LDM-based descriptors were revealed to have an almost similar level of contribution to toxicity in both cases, while other parameters (van der Waals interactions, electronegativity and metal-ligand binding characteristics) have unequal contribution levels. In addition, the models developed here suggest different mechanisms of nanotoxicity for these two types of cells.

  11. Applied Integer Programming Modeling and Solution

    CERN Document Server

    Chen, Der-San; Dang, Yu

    2011-01-01

    An accessible treatment of the modeling and solution of integer programming problems, featuring modern applications and software In order to fully comprehend the algorithms associated with integer programming, it is important to understand not only how algorithms work, but also why they work. Applied Integer Programming features a unique emphasis on this point, focusing on problem modeling and solution using commercial software. Taking an application-oriented approach, this book addresses the art and science of mathematical modeling related to the mixed integer programming (MIP) framework and

  12. Evaluating tidal marsh sustainability in the face of sea-level rise: a hybrid modeling approach applied to San Francisco Bay.

    Directory of Open Access Journals (Sweden)

    Diana Stralberg

    Full Text Available Tidal marshes will be threatened by increasing rates of sea-level rise (SLR) over the next century. Managers seek guidance on whether existing and restored marshes will be resilient under a range of potential future conditions, and on prioritizing marsh restoration and conservation activities. Building upon established models, we developed a hybrid approach that involves a mechanistic treatment of marsh accretion dynamics and incorporates spatial variation at a scale relevant for conservation and restoration decision-making. We applied this model to San Francisco Bay, using best-available elevation data and estimates of sediment supply and organic matter accumulation developed for 15 Bay subregions. Accretion models were run over 100 years for 70 combinations of starting elevation, mineral sediment, organic matter, and SLR assumptions. Results were applied spatially to evaluate eight Bay-wide climate change scenarios. Model results indicated that under a high rate of SLR (1.65 m/century), short-term restoration of diked subtidal baylands to mid marsh elevations (-0.2 m MHHW) could be achieved over the next century with sediment concentrations greater than 200 mg/L. However, suspended sediment concentrations greater than 300 mg/L would be required for 100-year mid marsh sustainability (i.e., no elevation loss). Organic matter accumulation had minimal impacts on this threshold. Bay-wide projections of marsh habitat area varied substantially, depending primarily on SLR and sediment assumptions. Across all scenarios, however, the model projected a shift in the mix of intertidal habitats, with a loss of high marsh and gains in low marsh and mudflats. Results suggest a bleak prognosis for long-term natural tidal marsh sustainability under a high-SLR scenario. To minimize marsh loss, we recommend conserving adjacent uplands for marsh migration, redistributing dredged sediment to raise elevations, and concentrating restoration efforts in sediment-rich areas.
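
    The mechanistic accretion balance at the heart of such hybrid models can be caricatured in a few lines. The coefficients below are invented for illustration (the actual model uses calibrated, subregion-specific parameters), but they reproduce the qualitative sediment-threshold behavior described above.

    ```python
    # Toy accretion balance: elevation gain = mineral accretion (scales with
    # suspended sediment and inundation depth) + organic accretion, racing SLR.
    def marsh_elevation(elev, ssc_mg_l, slr_m_per_yr, years,
                        mineral_coeff=2e-4, organic_m_per_yr=0.002):
        """Returns final elevation relative to mean higher high water (m MHHW)."""
        sea = 0.0
        for _ in range(years):
            sea += slr_m_per_yr
            depth = max(0.0, sea - elev)  # deeper inundation traps more sediment
            elev += mineral_coeff * ssc_mg_l * depth + organic_m_per_yr
        return elev - sea

    rich = marsh_elevation(elev=-0.2, ssc_mg_l=300, slr_m_per_yr=0.0165, years=100)
    poor = marsh_elevation(elev=-0.2, ssc_mg_l=100, slr_m_per_yr=0.0165, years=100)
    ```

    Under the high-SLR rate the sediment-rich marsh settles near a shallow equilibrium depth while the sediment-poor marsh keeps losing elevation, which is the mechanism behind the concentration thresholds reported in the abstract.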

  13. Applying discursive approaches to health psychology.

    Science.gov (United States)

    Seymour-Smith, Sarah

    2015-04-01

    The aim of this paper is to outline the contribution of two strands of discursive research, glossed as 'macro' and 'micro,' to the field of health psychology. A further goal is to highlight some contemporary debates in methodology associated with the use of interview data versus more naturalistic data in qualitative health research. Discursive approaches provide a way of analyzing talk as a social practice that considers how descriptions are put together and what actions they achieve. A selection of recent examples of discursive research from one applied area of health psychology, studies of diet and obesity, are drawn upon in order to illustrate the specifics of both strands. 'Macro' discourse work in psychology incorporates a Foucauldian focus on the way that discourses regulate subjectivities, whereas the concept of interpretative repertoires affords more agency to the individual: both are useful for identifying the cultural context of talk. Both 'macro' and 'micro' strands focus on accountability to varying degrees. 'Micro' Discursive Psychology, however, pays closer attention to the sequential organization of constructions and focuses on naturalistic settings that allow for the inclusion of an analysis of the health professional. Diets are typically depicted as an individual responsibility in mainstream health psychology, but discursive research highlights how discourses are collectively produced and bound up with social practices. (c) 2015 APA, all rights reserved).

  14. IRECCSEM: Evaluating Clare Basin potential for onshore carbon sequestration using magnetotelluric data (Preliminary results). New approaches applied for processing, modeling and interpretation

    Science.gov (United States)

    Campanya i Llovet, J.; Ogaya, X.; Jones, A. G.; Rath, V.

    2014-12-01

    The IRECCSEM project (www.ireccsem.ie) is a Science Foundation Ireland Investigator Project funded to evaluate Ireland's potential for onshore carbon sequestration in saline aquifers by integrating new electromagnetic data with existing geophysical and geological data. The main goals of the project are to determine porosity-permeability values of the potential reservoir formation and to evaluate the integrity of the seal formation. During the summer of 2014 a magnetotelluric (MT) survey was carried out in the Clare basin (Ireland). A total of 140 sites were acquired, including audiomagnetotelluric (AMT), broadband magnetotelluric (BBMT) and long-period magnetotelluric (LMT) data. The nominal site spacing is 0.6 km for AMT sites, 1.2 km for BBMT sites and 8 km for LMT sites. To evaluate the carbon sequestration potential of the Clare basin, three advances in electromagnetic geophysical methodology were applied. First, processing of the MT data was improved following the recently published ELICIT methodology. Second, during the inversion process the electrical resistivity distribution of the subsurface was constrained by combining three different tensor relationships: impedances (Z), induction arrows (TIP) and multi-site horizontal magnetic transfer functions (HMT). Results from synthetic models were used to evaluate the sensitivity and properties of each tensor relationship. Finally, a computer code was developed that employs a stabilized least-squares approach to estimate the cementation exponent in the generalized Archie law formulated by Glover (2010), which allows relating MT-derived electrical resistivity models to porosity distributions. The final aim of this procedure is to generalize the porosity-permeability values measured in the boreholes to regional scales. This methodology will contribute to the evaluation of possible sequestration targets in the study area.
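
    The last step, inverting Archie's law for the cementation exponent, can be sketched with synthetic numbers. This uses the classic form ρ = ρ_w φ^(−m); the project employs Glover's generalized version and a stabilized least-squares scheme, which this toy fit does not reproduce.

    ```python
    import math

    # Fit log(rho/rho_w) = -m * log(phi) through the origin (least squares),
    # i.e. the classic Archie relation rho = rho_w * phi**(-m).
    def cementation_exponent(phi, rho, rho_w):
        xs = [math.log(p) for p in phi]
        ys = [math.log(r / rho_w) for r in rho]
        return -sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

    phi = [0.05, 0.10, 0.20]                  # synthetic porosities
    rho = [100.0 * p ** -2.0 for p in phi]    # synthetic data built with m = 2
    m = cementation_exponent(phi, rho, rho_w=100.0)
    ```

    With m in hand, resistivity models can be mapped to porosity, which is what lets the MT inversion results be tied back to borehole measurements.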

  15. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    Science.gov (United States)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in ternary mixture, namely, Partial Least Squares (PLS) as traditional chemometric model and Artificial Neural Networks (ANN) as advanced model. PLS and ANN were applied with and without variable selection procedure (Genetic Algorithm GA) and data compression procedure (Principal Component Analysis PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and pharmaceutical dosage form via handling the UV spectral data. A 3-factor 5-level experimental design was established resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.

  16. Comparative review of three cost-effectiveness models for rotavirus vaccines in national immunization programs; a generic approach applied to various regions in the world

    Directory of Open Access Journals (Sweden)

    Tu Hong-Anh

    2011-07-01

    Full Text Available Abstract Background This study aims to critically review available cost-effectiveness models for rotavirus vaccination, compare their designs using a standardized approach and compare similarities and differences in cost-effectiveness outcomes using a uniform set of input parameters. Methods We identified various models used to estimate the cost-effectiveness of rotavirus vaccination. From these, results using a standardized dataset for four regions in the world could be obtained for three specific applications. Results Despite differences in the approaches and individual constituting elements, including costs, QALYs (Quality Adjusted Life Years) and deaths, the cost-effectiveness results of the models were quite similar. Differences between the models on the individual components of cost-effectiveness could be related to specific features of the respective models. Sensitivity analysis revealed that the cost-effectiveness of rotavirus vaccination is highly sensitive to vaccine prices, rotavirus-associated mortality and discount rates, in particular that for QALYs. Conclusions The comparative approach followed here is helpful in understanding the various models selected and will thus benefit (low-income) countries in designing their own cost-effectiveness analyses using new or adapted existing models. Potential users of the models in low- and middle-income countries need to consider results from existing studies and reviews. There will be a need for contextualization, including the use of country-specific data inputs. However, given that the underlying biological and epidemiological mechanisms do not change between countries, users are likely to be able to adapt existing model designs rather than developing completely new approaches. Also, the communication established between the individual researchers involved in the three models is helpful in the further development of these individual models. Therefore, we recommend that this kind of comparative study

  17. MOBILE CLOUD COMPUTING APPLIED TO HEALTHCARE APPROACH

    OpenAIRE

    Omar AlSheikSalem

    2016-01-01

    In the past few years it became clear that mobile cloud computing, established by integrating mobile computing and cloud computing, offers gains in both storage space and processing speed. Integrating healthcare applications and services is one of the vast data approaches that can be adapted to mobile cloud computing. This work proposes a framework for global healthcare computing that combines mobile computing and cloud computing. This approach leads to integrate all of ...

  18. Commercial Consolidation Model Applied to Transport Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Guilherme de Aragão, J.J.; Santos Fontes Pereira, L. dos; Yamashita, Y.

    2016-07-01

    Since the 1990s, transport concessions, including public-private partnerships (PPPs), have been increasingly adopted by governments as an alternative for financing and operating public investments, especially in transport infrastructure. The advantage pointed out by proponents of these models lies in merging the expertise and capital of the private sector with the public interest. Several arrangements are possible and have been employed in different cases. After the expiry of the first PPP contracts in transportation, many authors have analyzed the success and failure factors of partnerships. Failures in some stages of the process can greatly burden the public administration, incurring losses to the fiscal position of the competent bodies. This article aims to propose a new commercial consolidation model applied to transport infrastructure to ensure fiscal sustainability and overcome the weaknesses of current models. Initially, a systematic review of the literature covering studies on transport concessions between 1990 and 2015 is offered, where the different approaches of various countries are compared and the critical success factors indicated in the studies are identified. In the subsequent part of the paper, an approach for the commercial consolidation of infrastructure concessions is presented, in which the concessionaire is paid following a performance-based model that includes the overall fiscal balance of regional growth. Finally, the paper analyses the usefulness of the model in coping with the critical success factors explained before. (Author)

  19. State space model extraction of thermohydraulic systems – Part II: A linear graph approach applied to a Brayton cycle-based power conversion unit

    International Nuclear Information System (INIS)

    Uren, Kenneth Richard; Schoor, George van

    2013-01-01

    This second paper in a two-part series presents the application of a developed state space model extraction methodology to a Brayton cycle-based PCU (power conversion unit) of a PBMR (pebble bed modular reactor). The goal is to investigate whether the state space extraction methodology can cope with larger and more complex thermohydraulic systems. In Part I the state space model extraction methodology for the purpose of control was described in detail and a state space representation was extracted for a U-tube system to illustrate the concept. In this paper a 25th-order nonlinear state space representation in terms of the different energy domains is extracted. This state space representation is solved and the responses of a number of important states are compared with results obtained from a PBMR PCU Flownex® model. Flownex® is a validated thermo-fluid simulation software package. The results show that the state space model closely resembles the dynamics of the PBMR PCU. This kind of model may be used for nonlinear MIMO (multi-input, multi-output) control strategies. However, there is still a need for linear state space models, since many control system design and analysis techniques require one. This issue is also addressed in this paper by showing how a linear state space model can be derived from the extracted nonlinear state space model. The linearised state space model is validated by comparing it to an existing linear Simulink® model of the PBMR PCU system. - Highlights: • State space model extraction of a pebble bed modular reactor PCU (power conversion unit). • A 25th-order nonlinear time-varying state space model is obtained. • Linearisation of a nonlinear state space model for use in power output control. • Non-minimum phase characteristic that is challenging in terms of control. • Models derived are useful for MIMO control strategies
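
    The linearisation step mentioned above amounts to evaluating the Jacobians of the nonlinear state equation dx/dt = f(x, u) at an operating point. A generic finite-difference sketch (toy two-state system, not the 25th-order PCU model) looks like this:

    ```python
    # Numerically linearise dx/dt = f(x, u) around (x0, u0):
    # A = df/dx and B = df/du by forward finite differences.
    def jacobian(f, x0, u0, eps=1e-6):
        n, m = len(x0), len(u0)
        fx0 = f(x0, u0)
        A = [[(f([x0[k] + (eps if k == j else 0.0) for k in range(n)], u0)[i]
               - fx0[i]) / eps for j in range(n)] for i in range(n)]
        B = [[(f(x0, [u0[k] + (eps if k == j else 0.0) for k in range(m)])[i]
               - fx0[i]) / eps for j in range(m)] for i in range(n)]
        return A, B

    # toy nonlinear system: a mildly nonlinear double integrator
    f = lambda x, u: [x[1], -0.5 * x[0] ** 2 + u[0]]
    A, B = jacobian(f, x0=[1.0, 0.0], u0=[0.5])
    ```

    For the PCU one would apply the same idea, analytically or numerically, at a steady-state operating point to obtain the A and B matrices required by linear design and analysis techniques.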

  20. Comparative review of three cost-effectiveness models for rotavirus vaccines in national immunization programs; a generic approach applied to various regions in the world

    NARCIS (Netherlands)

    Postma, Maarten J.; Jit, Mark; Rozenbaum, Mark H.; Standaert, Baudouin; Tu, Hong-Anh; Hutubessy, Raymond C. W.

    2011-01-01

    Background: This study aims to critically review available cost-effectiveness models for rotavirus vaccination, compare their designs using a standardized approach and compare similarities and differences in cost-effectiveness outcomes using a uniform set of input parameters. Methods: We identified

  1. Applying a gaming approach to IP strategy.

    Science.gov (United States)

    Gasnier, Arnaud; Vandamme, Luc

    2010-02-01

    Adopting an appropriate IP strategy is an important but complex area, particularly in the pharmaceutical and biotechnology sectors, in which aspects such as regulatory submissions, high competitive activity, and public health and safety information requirements limit the amount of information that can be protected effectively through secrecy. As a result, and considering the existing time limits for patent protection, decisions on how to approach IP in these sectors must be made with knowledge of the options and consequences of IP positioning. Because of the specialized nature of IP, it is necessary to impart knowledge regarding the options and impact of IP to decision-makers, whether at the level of inventors, marketers or strategic business managers. This feature review provides some insight on IP strategy, with a focus on the use of a new 'gaming' approach for transferring the skills and understanding needed to make informed IP-related decisions; the game Patentopolis is discussed as an example of such an approach. Patentopolis involves interactive activities with IP-related business decisions, including the exploitation and enforcement of IP rights, and can be used to gain knowledge on the impact of adopting different IP strategies.

  2. Critical Applied Linguistics: An Evaluative Interdisciplinary Approach in Criticism and Evaluation of Applied Linguistics’ Disciplines

    OpenAIRE

    H. Davari

    2015-01-01

    The emergence of some significant critical approaches and directions in the field of applied linguistics from the mid-1980s onwards has met with various positive and negative reactions. On the basis of their strength and significance, such approaches and directions have challenged some of the mainstream approaches' claims, principles and assumptions. Among them, critical applied linguistics can be highlighted as a new approach, developed by the Australian applied linguist Alastair Pennycook....

  3. Applying the Sport Education Model to Tennis

    Science.gov (United States)

    Ayvazo, Shiri

    2009-01-01

    The physical education field abounds with theoretically sound curricular approaches such as fitness education, the skill theme approach, the tactical approach, and sport education. In an era that emphasizes authentic sport experiences, the Sport Education Model includes unique features that set it apart from other curricular models and can be a valuable…

  4. Potential-splitting approach applied to the Temkin-Poet model for electron scattering off the hydrogen atom and the helium ion

    Science.gov (United States)

    Yarevsky, E.; Yakovlev, S. L.; Larson, Å; Elander, N.

    2015-06-01

    The study of scattering processes in few-body systems is a difficult problem, especially when long-range interactions are involved. In order to solve such problems, we develop here a potential-splitting approach for three-body systems. This approach is based on splitting the reaction potential into a finite-range core part and a long-range tail part. The solution to the Schrödinger equation for the long-range tail Hamiltonian is found analytically and used as an incoming wave in the three-body scattering problem. This reformulation of the scattering problem makes it suitable for treatment by the exterior complex scaling technique, in the sense that after the complex dilation the problem is reduced to a boundary value problem with zero boundary conditions. We illustrate the method with calculations of electron scattering off the hydrogen atom and the positive helium ion within the framework of the Temkin-Poet model.
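
    Schematically, the splitting described above can be written as follows (a generic sketch of the idea, not the authors' exact equations):

    ```latex
    V(r) = \underbrace{V(r)\,\theta(R - r)}_{V_{\mathrm{core}}(r)}
         + \underbrace{V(r)\,\theta(r - R)}_{V_{\mathrm{tail}}(r)},
    ```

    where \theta is the Heaviside step function and R the splitting radius. The tail problem (E - H_0 - V_tail)\Psi_tail = 0 is solved analytically, and \Psi_tail then drives the remaining equation (E - H)\Psi_sc = V_core \Psi_tail, which after exterior complex scaling becomes a boundary value problem with zero boundary conditions.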

  5. A nonlinear interface model applied to masonry structures

    Science.gov (United States)

    Lebon, Frédéric; Raffa, Maria Letizia; Rizzoni, Raffaella

    2015-12-01

    In this paper, a new imperfect interface model is presented. The model includes finite strains, micro-cracks and smooth roughness. The model is consistently derived by coupling a homogenization approach for micro-cracked media and arguments of asymptotic analysis. The model is applied to brick/mortar interfaces. Numerical results are presented.

  6. A novel approach for modeling malaria incidence using complex categorical household data: The minimum message length (MML method applied to Indonesian data

    Directory of Open Access Journals (Sweden)

    Gerhard Visser

    2012-09-01

    Full Text Available We investigated the application of a Minimum Message Length (MML) modeling approach to identify the simplest model that would explain two target malaria incidence variables: incidence in the short term and on the longer-term average, in two areas in Indonesia, based on a range of ecological variables including environmental and socio-economic ones. The approach is suitable for dealing with a variety of problems, such as complexity and missing values in the data. It can detect weak relations, is resistant to overfitting and can show the way in which many variables, working together, contribute to explaining malaria incidence. This last point is a major strength of the method, as it allows many variables to be analysed. Data were obtained at household level by questionnaire for villages in West Timor and Central Java. Data were collected on 26 variables in nine categories: stratum (a village-level variable based on the API/AMI categories), ecology, occupation, preventative measures taken, health care facilities, the immediate environment, household characteristics, socio-economic status and perception of malaria cause. Several models were used and the simplest (best) model, that is, the one with the minimum message length, was selected for each area. The results showed that consistent predictors of malaria included combinations of ecology (coastal), preventative (clean backyard) and environment (mosquito breeding place, garden and rice cultivation) variables. The models also showed that most of the other variables were not good predictors, and this is discussed in the paper. We conclude that the method has potential for identifying simple predictors of malaria and that it could be used to focus malaria management on combinations of variables rather than relying on single ones that may not be consistently reliable.

  7. Critical Applied Linguistics: An Evaluative Interdisciplinary Approach in Criticism and Evaluation of Applied Linguistics’ Disciplines

    Directory of Open Access Journals (Sweden)

    H. Davari

    2015-11-01

    Full Text Available The emergence of some significant critical approaches and directions in the field of applied linguistics from the mid-1980s onwards has met with various positive and negative reactions. On the basis of their strength and significance, such approaches and directions have challenged some of the mainstream approaches' claims, principles and assumptions. Among them, critical applied linguistics can be highlighted as a new approach, developed by the Australian applied linguist Alastair Pennycook. The aspects, domains and concerns of this new approach were introduced in his book in 2001. Due to the undeniable importance of this approach, as well as the partial neglect it has received in the Iranian academic setting, this paper first introduces the approach as one that evaluates various disciplines of applied linguistics through its own specific principles and interests. Then, in order to show its step-by-step application in the evaluation of different disciplines of applied linguistics, and with a glance at its significance and appropriateness in Iranian society, two domains, namely English language education and language policy and planning, are introduced and evaluated in order to provide readers with a visible and practical picture of its interdisciplinary nature and evaluative functions. The findings indicate the efficacy of applying this interdisciplinary framework in any language-in-education policy and planning in accordance with the political, social and cultural context of the target society.

  8. Applying lessons from the ecohealth approach to make food ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Applying lessons from the ecohealth approach to make food systems healthier ... the biennial Ecohealth Congress of the International Association for Ecology and ... intersectoral policies that address the notable increase in obesity, diabetes, ...

  9. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...
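As a sketch of the kind of geostatistical residual analysis this record describes (not the Oersted field-model code; the coordinates and residuals below are synthetic), a classical Matheron semivariogram estimator for spatially scattered residuals might look like:

```python
import numpy as np

def empirical_semivariogram(coords, residuals, bins):
    """Classical (Matheron) estimator: for each distance bin h,
    gamma(h) = mean of 0.5*(z_i - z_j)^2 over point pairs whose
    separation falls in that bin. A flat variogram indicates
    uncorrelated residuals; structure indicates unmodelled signal."""
    n = len(residuals)
    # pairwise distances between all points
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)          # each pair counted once
    dists = d[iu]
    sqdiff = 0.5 * (residuals[iu[0]] - residuals[iu[1]]) ** 2
    gamma = np.full(len(bins) - 1, np.nan)
    for k in range(len(bins) - 1):
        mask = (dists >= bins[k]) & (dists < bins[k + 1])
        if mask.any():
            gamma[k] = sqdiff[mask].mean()
    return gamma

# Synthetic example: white-noise residuals give a flat variogram
# close to the residual variance at all lags.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))
residuals = rng.normal(0, 1, size=200)
gamma = empirical_semivariogram(coords, residuals,
                                bins=np.linspace(0, 50, 6))
```

For real field-model residuals, a rising variogram at short lags would signal spatially correlated unmodelled signal rather than pure measurement error.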

  10. Effects of stand composition and thinning in mixed-species forests : a modeling approach applied to Douglas-fir and beech

    NARCIS (Netherlands)

    Bartelink, H.H.

    2000-01-01

    Models estimating growth and yield of forest stands provide important tools for forest management. Pure stands have been modeled extensively and successfully for decades; however, relatively few models for mixed-species stands have been developed. A spatially explicit, mechanistic model (COMMIX) is

  11. Applying the WEAP Model to Water Resource

    DEFF Research Database (Denmark)

    Gao, Jingjing; Christensen, Per; Li, Wei

    efficiency, treatment and reuse of water. The WEAP model was applied to the Ordos catchment where it was used for the first time in China. The changes in water resource utilization in the Ordos basin were assessed with the model. It was found that the WEAP model is a useful tool for water resource assessment......Water resources assessment is a tool to provide decision makers with an appropriate basis to make informed judgments regarding the objectives and targets to be addressed during the Strategic Environmental Assessment (SEA) process. The study shows how water resources assessment can be applied in SEA...... in assessing the effects on water resources using a case study on a Coal Industry Development Plan in an arid region in North Western China. In the case the WEAP model (Water Evaluation And Planning System) was used to simulate various scenarios using a diversity of technological instruments like irrigation...

  12. Applied probability models with optimization applications

    CERN Document Server

    Ross, Sheldon M

    1992-01-01

    Concise advanced-level introduction to stochastic processes that frequently arise in applied probability. Largely self-contained text covers Poisson process, renewal theory, Markov chains, inventory theory, Brownian motion and continuous time optimization models, much more. Problems and references at chapter ends. ""Excellent introduction."" - Journal of the American Statistical Association. Bibliography. 1970 edition.

  13. Applied research in uncertainty modeling and analysis

    CERN Document Server

    Ayyub, Bilal

    2005-01-01

    Uncertainty has been a concern to engineers, managers, and scientists for many years. For a long time uncertainty has been considered synonymous with random, stochastic, statistic, or probabilistic. Since the early sixties views on uncertainty have become more heterogeneous. In the past forty years numerous tools that model uncertainty, above and beyond statistics, have been proposed by several engineers and scientists. The tool/method to model uncertainty in a specific context should really be chosen by considering the features of the phenomenon under consideration, not independent of what is known about the system and what causes uncertainty. In this fascinating overview of the field, the authors provide broad coverage of uncertainty analysis/modeling and its application. Applied Research in Uncertainty Modeling and Analysis presents the perspectives of various researchers and practitioners on uncertainty analysis and modeling outside their own fields and domain expertise. Rather than focusing explicitly on...

  14. Applied Mathematics, Modelling and Computational Science

    CERN Document Server

    Kotsireas, Ilias; Makarov, Roman; Melnik, Roderick; Shodiev, Hasan

    2015-01-01

    The Applied Mathematics, Modelling, and Computational Science (AMMCS) conference aims to promote interdisciplinary research and collaboration. The contributions in this volume cover the latest research in mathematical and computational sciences, modeling, and simulation as well as their applications in natural and social sciences, engineering and technology, industry, and finance. The 2013 conference, the second in a series of AMMCS meetings, was held August 26–30 and organized in cooperation with AIMS and SIAM, with support from the Fields Institute in Toronto, and Wilfrid Laurier University. There were many young scientists at AMMCS-2013, both as presenters and as organizers. This proceedings contains refereed papers contributed by the participants of the AMMCS-2013 after the conference. This volume is suitable for researchers and graduate students, mathematicians and engineers, industrialists, and anyone who would like to delve into the interdisciplinary research of applied and computational mathematics ...

  15. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical...... symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment....

  16. Undiscovered Resource Modelling: Towards Applying a Systematic Approach to Uranium or How Much Uranium is Left and Where Might It Be Found?

    International Nuclear Information System (INIS)

    Fairclough, Martin; Katona, Laz

    2014-01-01

    Uranium Resource Modelling: Why do we want to plan for it? Purely from a supply-demand perspective: 1) Current supplies (at the mid-range demand scenario) are only enough until 2035 (likely to increase due to reactor shut-downs/stockpiling); 2) Not all uranium will be brought into production; 3) Long lead-in times, particularly for U mines; 4) Projections to 2060 (beyond IR, e.g. the IAEA TECDOC). From a socio-economic perspective: 1) Need for financial analysis; 2) Need for comparison with other land uses; 3) Need for comparison with other tracts of land; 4) Need for consideration of the economic/environmental consequences of possible development; 5) Security of supply!

  17. Terahertz spectroscopy applied to food model systems

    DEFF Research Database (Denmark)

    Møller, Uffe

    Water plays a crucial role in the quality of food. Apart from the natural water content of a food product, the state of that water is very important. Water can be found integrated into the biological material or it can be added during production of the product. Currently it is difficult...... to differentiate between these types of water in subsequent quality controls. This thesis describes terahertz time-domain spectroscopy applied on aqueous food model systems, with particular focus on ethanol-water mixtures and confined water pools in inverse micelles....

  18. Applied data analysis and modeling for energy engineers and scientists

    CERN Document Server

    Reddy, T Agami

    2011-01-01

    ""Applied Data Analysis and Modeling for Energy Engineers and Scientists"" discusses mathematical models, data analysis, and decision analysis in modeling. The approach taken in this volume focuses on the modeling and analysis of thermal systems in an engineering environment, while also covering a number of other critical areas. Other material covered includes the tools that researchers and engineering professionals will need in order to explore different analysis methods, use critical assessment skills and reach sound engineering conclusions. The book also covers process and system design and

  19. HEDR modeling approach

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1992-07-01

    This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and projects an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered.

  20. A Multiobjective Approach Applied to the Protein Structure Prediction Problem

    Science.gov (United States)

    2002-03-07

    local conformations [38]. Moreover, all these models have the same theme in trying to define the properties a real protein has when folding. Today, it...attempted to solve the PSP problem with a real-valued GA and found better results than a competitor (Scheraga et al.) [50]; however, today we know that...ACM Symposium on Applied Computing (SAC01) (March 11-14, 2001). Las Vegas, Nevada. [22] Derrida, B. “Random Energy Model: Limit of a Family of

  1. Applying Digital Sensor Technology: A Problem-Solving Approach

    Science.gov (United States)

    Seedhouse, Paul; Knight, Dawn

    2016-01-01

    There is currently an explosion in the number and range of new devices coming onto the technology market that use digital sensor technology to track aspects of human behaviour. In this article, we present and exemplify a three-stage model for the application of digital sensor technology in applied linguistics that we have developed, namely,…

  2. The hybrid thermography approach applied to architectural structures

    Science.gov (United States)

    Sfarra, S.; Ambrosini, D.; Paoletti, D.; Nardi, I.; Pasqualoni, G.

    2017-07-01

    This work contains an overview of the infrared thermography (IRT) method and its applications to the investigation of architectural structures. In this method, the passive approach is usually used in civil engineering, since it provides a panoramic view of the thermal anomalies to be interpreted, aided by photographs focused on the region of interest (ROI). The active approach is more suitable for laboratory or indoor inspections, as well as for objects of small size. The external stress to be applied is thermal, coming from non-natural apparatus such as lamps or hot/cold air jets. In addition, the active approach permits quantitative information to be obtained about defects not detectable to the naked eye. Very recently, the hybrid thermography (HIRT) approach has been introduced to the attention of the scientific panorama. It can be applied when radiation coming from the sun arrives directly (i.e., possibly without the shadow-cast effect) on a surface exposed to the air. A large number of thermograms must be collected, and a post-processing analysis is subsequently applied via advanced algorithms. An appraisal of the defect depth can therefore be obtained by calculating the combined thermal diffusivity of the materials above the defect. The approach is validated herein by working, in a first stage, on a mosaic sample having known defects and, in a second stage, on a church built in L'Aquila (Italy) and covered with a particular masonry structure called apparecchio aquilano. The results obtained appear promising.

  3. Material Modelling - Composite Approach

    DEFF Research Database (Denmark)

    Nielsen, Lauge Fuglsang

    1997-01-01

    is successfully justified comparing predicted results with experimental data obtained in the HETEK-project on creep, relaxation, and shrinkage of very young concretes cured at a temperature of T = 20°C and a relative humidity of RH = 100%. The model is also justified comparing predicted creep, shrinkage......, and internal stresses caused by drying shrinkage with experimental results reported in the literature on the mechanical behavior of mature concretes. It is then concluded that the model presented applies in general with respect to age at loading. From a stress analysis point of view the most important finding...... in this report is that cement paste and concrete behave practically as linear-viscoelastic materials from an age of approximately 10 hours. This is a significant age extension relative to earlier studies in the literature where linear-viscoelastic behavior is only demonstrated from ages of a few days. Thus...

  4. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  5. Tennis: Applied Examples of a Game-Based Teaching Approach

    Science.gov (United States)

    Crespo, Miguel; Reid, Machar M.; Miley, Dave

    2004-01-01

    In this article, the authors reveal that tennis has been increasingly taught with a tactical model or game-based approach, which emphasizes learning through practice in match-like drills and actual play, rather than in practicing strokes for exact technical execution. Its goal is to facilitate the player's understanding of the tactical, physical…

  6. Fuzzy model predictive control algorithm applied in nuclear power plant

    International Nuclear Information System (INIS)

    Zuheir, Ahmad

    2006-01-01

    The aim of this paper is to design a predictive controller based on a fuzzy model. The Takagi-Sugeno fuzzy model with an adaptive B-splines neuro-fuzzy implementation is used and incorporated as a predictor in a predictive controller. An optimization approach with a simplified gradient technique is used to calculate predictions of the future control actions. In this approach, adaptation of the fuzzy model using dynamic process information is carried out to build the predictive controller. The easy description of the fuzzy model and the easy computation of the gradient vector during the optimization procedure are the main advantages of the computation algorithm. The algorithm is applied to the control of a U-tube steam generation unit (UTSG) used for electricity generation. (author)
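A minimal sketch of the idea, assuming a two-rule Takagi-Sugeno model and a one-step horizon (the paper uses an adaptive B-splines implementation and a longer prediction horizon; all rules, parameters and setpoints here are hypothetical):

```python
import numpy as np

def ts_predict(y, u, memberships, local_models):
    """Takagi-Sugeno one-step prediction: blend local linear models
    y+ = a*y + b*u using normalized rule memberships mu(y)."""
    mu = np.array([m(y) for m in memberships])
    mu = mu / mu.sum()
    return sum(w * (a * y + b * u) for w, (a, b) in zip(mu, local_models))

# Hypothetical two-rule fuzzy model of a slow process
memberships = [lambda y: np.exp(-(y - 0.0) ** 2),   # "low level" rule
               lambda y: np.exp(-(y - 1.0) ** 2)]   # "high level" rule
local_models = [(0.9, 0.10), (0.8, 0.15)]           # (a, b) per rule

def mpc_step(y, setpoint, u0=0.0, lr=10.0, iters=150):
    """Simplified gradient search for the control action minimizing
    the one-step tracking cost (a stand-in for the paper's
    multi-step optimization with a simplified gradient technique)."""
    u = u0
    eps = 1e-5
    cost = lambda uu: (ts_predict(y, uu, memberships, local_models)
                       - setpoint) ** 2
    for _ in range(iters):
        grad = (cost(u + eps) - cost(u - eps)) / (2 * eps)  # numeric gradient
        u -= lr * grad
    return u

u_opt = mpc_step(y=0.5, setpoint=0.8)
y_next = ts_predict(0.5, u_opt, memberships, local_models)  # ~0.8
```

The fuzzy predictor keeps the cost differentiable in the control action, which is what makes the gradient-based optimization inside the controller cheap.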

  7. Applying a Problem Based Learning Approach to Land Management Education

    DEFF Research Database (Denmark)

    Enemark, Stig

    Land management covers a wide range of activities associated with the management of land and natural resources that are required to fulfil political objectives and achieve sustainable development. This paper presents an overall understanding of the land management paradigm and the benefits of good...... land governance to society. A land administration system provides a country with the infrastructure to implement land-related policies and land management strategies. By applying this land management profile to surveying education, this paper suggests that there is a need to move away from an exclusive...... engineering focus toward adopting an interdisciplinary and problem-based approach to ensure that academic programmes can cope with the wide range of land administration functions and challenges. An interdisciplinary approach to surveying education calls for the need to address issues and problems in a real...

  8. Setting research priorities by applying the combined approach matrix.

    Science.gov (United States)

    Ghaffar, Abdul

    2009-04-01

    Priority setting in health research is a dynamic process. Different organizations and institutes have been working in the field of research priority setting for many years. In 1999 the Global Forum for Health Research presented a research priority setting tool called the Combined Approach Matrix or CAM. Since its development, the CAM has been successfully applied to set research priorities for diseases, conditions and programmes at global, regional and national levels. This paper briefly explains the CAM methodology and how it could be applied in different settings, giving examples, describing challenges encountered in the process of setting research priorities and providing recommendations for further work in this field. The construct and design of the CAM are explained along with the different steps needed, including the planning and organization of a priority-setting exercise and how it could be applied in different settings. The application of the CAM is described using three examples. The first concerns setting research priorities for a global programme, the second describes application at the country level and the third concerns setting research priorities for diseases. Effective application of the CAM in different and diverse environments proves its utility as a tool for setting research priorities. Potential challenges encountered in the process of research priority setting are discussed and some recommendations for further work in this field are provided.

  9. Applying a Modified Triad Approach to Investigate Wastewater lines

    International Nuclear Information System (INIS)

    Pawlowicz, R.; Urizar, L.; Blanchard, S.; Jacobsen, K.; Scholfield, J.

    2006-01-01

    Approximately 20 miles of wastewater lines are below grade at an active military Base. This piping network feeds or fed domestic or industrial wastewater treatment plants on the Base. Past wastewater line investigations indicated potential contaminant releases to soil and groundwater. Further environmental assessment was recommended to characterize the lines because of possible releases. A Remedial Investigation (RI) using random sampling or use of sampling points spaced at predetermined distances along the entire length of the wastewater lines, however, would be inefficient and cost prohibitive. To accomplish RI goals efficiently and within budget, a modified Triad approach was used to design a defensible sampling and analysis plan and perform the investigation. The RI task was successfully executed and resulted in a reduced fieldwork schedule, and sampling and analytical costs. Results indicated that no major releases occurred at the biased sampling points. It was reasonably extrapolated that since releases did not occur at the most likely locations, then the entire length of a particular wastewater line segment was unlikely to have contaminated soil or groundwater and was recommended for no further action. A determination of no further action was recommended for the majority of the waste lines after completing the investigation. The modified Triad approach was successful and a similar approach could be applied to investigate wastewater lines on other United States Department of Defense or Department of Energy facilities. (authors)

  10. Applying Model Based Systems Engineering to NASA's Space Communications Networks

    Science.gov (United States)

    Bhasin, Kul; Barnes, Patrick; Reinert, Jessica; Golden, Bert

    2013-01-01

    System engineering practices for complex systems and networks now require that requirements, architecture, and concept-of-operations product development teams simultaneously harmonize their activities to provide timely, useful and cost-effective products. When dealing with complex systems of systems, traditional systems engineering methodology quickly falls short of achieving project objectives. This approach is encumbered by the use of a number of disparate hardware and software tools, spreadsheets and documents to grasp the concept of the network design and operation. In the case of NASA's space communication networks, since the networks are geographically distributed, and so are their subject matter experts, the team is challenged to create a common language and tools to produce its products. Using Model Based Systems Engineering methods and tools allows for a unified representation of the system in a model that enables a highly related level of detail. To date, the Program System Engineering (PSE) team has been able to model each network from its top-level operational activities and system functions down to the atomic level through relational modeling decomposition. These models allow for a better understanding of the relationships between NASA's stakeholders, internal organizations, and impacts to all related entities due to integration and sustainment of existing systems. Understanding the existing systems is essential to accurate and detailed study of the integration options being considered. In this paper, we identify the challenges the PSE team faced in its quest to unify complex legacy space communications networks and their operational processes. We describe the initial approaches undertaken and the evolution toward model based system engineering applied to produce Space Communication and Navigation (SCaN) PSE products. We will demonstrate the practice of Model Based System Engineering applied to integrating space communication networks and the summary of its

  11. A new kinetic biphasic approach applied to biodiesel process intensification

    Energy Technology Data Exchange (ETDEWEB)

    Russo, V.; Tesser, R.; Di Serio, M.; Santacesaria, E. [Naples Univ. (Italy). Dept. of Chemistry

    2012-07-01

    Many different papers have been published on the kinetics of the transesterification of vegetable oil with methanol, in the presence of alkaline catalysts, to produce biodiesel. All the proposed approaches are based on the assumption of a pseudo-monophasic system. The consequence of these approaches is that some experimental aspects cannot be described. For the reaction performed in batch conditions, for example, the monophasic approach is not able to reproduce the different plateaux obtained by using different amounts of catalyst, or the induction time observed at low stirring rates. Moreover, it has been observed by operating in continuous reactors that micromixing has a dramatic effect on the reaction rate. To this end, we have recently observed that it is possible to obtain a complete conversion to biodiesel in less than 10 seconds of reaction time. This observation is also confirmed by other authors using different types of reactors, such as static mixers, micro-reactors, oscillatory flow reactors, cavitational reactors, microwave reactors and centrifugal contactors. In this work we will show that a recently proposed biphasic kinetic approach is able to describe all the aspects mentioned above that cannot be described by the monophasic kinetic model. In particular, we will show that the biphasic kinetic model can describe both the induction time observed in batch reactors at low stirring rates and the very high conversions obtainable in a micro-channel reactor. The adopted biphasic kinetic model is based on a reliable reaction mechanism that will be validated by the experimental evidence reported in this work. (orig.)

  12. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  13. Eliciting expert opinion for economic models: an applied example.

    Science.gov (United States)

    Leal, José; Wordsworth, Sarah; Legood, Rosa; Blair, Edward

    2007-01-01

    Expert opinion is considered as a legitimate source of information for decision-analytic modeling where required data are unavailable. Our objective was to develop a practical computer-based tool for eliciting expert opinion about the shape of the uncertainty distribution around individual model parameters. We first developed a prepilot survey with departmental colleagues to test a number of alternative approaches to eliciting opinions on the shape of the uncertainty distribution around individual parameters. This information was used to develop a survey instrument for an applied clinical example. This involved eliciting opinions from experts to inform a number of parameters involving Bernoulli processes in an economic model evaluating DNA testing for families with a genetic disease, hypertrophic cardiomyopathy. The experts were cardiologists, clinical geneticists, and laboratory scientists working with cardiomyopathy patient populations and DNA testing. Our initial prepilot work suggested that the more complex elicitation techniques advocated in the literature were difficult to use in practice. In contrast, our approach achieved a reasonable response rate (50%), provided logical answers, and was generally rated as easy to use by respondents. The computer software user interface permitted graphical feedback throughout the elicitation process. The distributions obtained were incorporated into the model, enabling the use of probabilistic sensitivity analysis. There is clearly a gap in the literature between theoretical elicitation techniques and tools that can be used in applied decision-analytic models. The results of this methodological study are potentially valuable for other decision analysts deriving expert opinion.
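A minimal sketch of turning an elicited opinion about a Bernoulli parameter into an uncertainty distribution for probabilistic sensitivity analysis (the paper's tool is graphical and more elaborate; the conjugate-Beta shortcut and every number below are assumptions, not the authors' method):

```python
import random

def beta_from_mean_and_certainty(mean, n_equiv):
    """Convert an expert's elicited mean probability and an
    'equivalent sample size' (strength of belief) into Beta(a, b)
    parameters. Beta is the conjugate prior for Bernoulli
    processes such as those in the decision-analytic model."""
    a = mean * n_equiv
    b = (1.0 - mean) * n_equiv
    return a, b

# Hypothetical elicitation: the expert believes the probability is
# about 0.7, with confidence equivalent to having seen 20 cases.
a, b = beta_from_mean_and_certainty(0.7, 20)

# The resulting distribution feeds probabilistic sensitivity
# analysis by sampling the parameter instead of fixing it.
random.seed(1)
draws = [random.betavariate(a, b) for _ in range(10000)]
mean_draw = sum(draws) / len(draws)  # close to the elicited 0.7
```

A larger `n_equiv` narrows the distribution, encoding a more confident expert; disagreement between experts can be represented by pooling their individual Beta distributions.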

  14. Molecular modeling: An open invitation for applied mathematics

    Science.gov (United States)

    Mezey, Paul G.

    2013-10-01

    Molecular modeling methods provide a very wide range of challenges for innovative mathematical and computational techniques, where often high dimensionality, large sets of data, and complicated interrelations imply a multitude of iterative approximations. The physical and chemical basis of these methodologies involves quantum mechanics, with several non-intuitive aspects where classical interpretation and classical analogies are often misleading or outright wrong. Hence, instead of the everyday, common-sense approaches which work so well in engineering, in molecular modeling one often needs to rely on rather abstract mathematical constraints and conditions, again emphasizing the high level of reliance on applied mathematics. Yet the interdisciplinary aspects of the field of molecular modeling also generate some inertia and a perhaps too conservative reliance on tried and tested methodologies, which is at least partially caused by less than up-to-date involvement in the newest developments in applied mathematics. It is expected that as more applied mathematicians take up the challenge of employing the latest advances of their field in molecular modeling, important breakthroughs may follow. In this presentation some of the current challenges of molecular modeling are discussed.

  15. Online traffic flow model applying dynamic flow-density relation

    International Nuclear Information System (INIS)

    Kim, Y.

    2002-01-01

    This dissertation describes a new approach to online traffic flow modelling based on the hydrodynamic traffic flow model and an online process to adapt the flow-density relation dynamically. The new modelling approach was tested on real traffic situations in various homogeneous motorway sections and a motorway section with ramps, and gave encouraging simulation results. This work is composed of two parts: first, the analysis of traffic flow characteristics, and second, the development of a new online traffic flow model applying these characteristics. For homogeneous motorway sections, traffic flow is classified into six different traffic states with different characteristics. Delimitation criteria were developed to separate these states. The hysteresis phenomena were analysed during the transitions between these traffic states. The traffic states and the transitions are represented on a states diagram with the flow axis and the density axis. For motorway sections with ramps, the complicated traffic flow is simplified and classified into three traffic states depending on the propagation of congestion. The traffic states are represented on a phase diagram with the upstream demand axis and the interaction strength axis, which was defined in this research. The states diagram and the phase diagram provide a basis for the development of the dynamic flow-density relation. The first-order hydrodynamic traffic flow model was programmed according to the cell-transmission scheme, extended by the modification of flow-dependent sending/receiving functions, the classification of cells and the determination strategy for the flow-density relation in the cells. The unreasonable results of macroscopic traffic flow models, which may occur in the first and last cells in certain conditions, are alleviated by applying buffer cells between the traffic data and the model. The sending/receiving functions of the cells are determined dynamically based on the classification of the
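The sending/receiving mechanism of the cell-transmission scheme mentioned in this record can be sketched as follows (a generic first-order implementation with hypothetical parameters and a static flow-density relation, not the dissertation's extended dynamic model):

```python
import numpy as np

def ctm_step(rho, q_in, v_free, w_back, rho_max, q_max, dt, dx):
    """One update of the first-order cell-transmission scheme.
    Flow across each cell interface = min(sending/demand of the
    upstream cell, receiving/supply of the downstream cell)."""
    sending = np.minimum(v_free * rho, q_max)                # demand
    receiving = np.minimum(w_back * (rho_max - rho), q_max)  # supply
    interior = np.minimum(sending[:-1], receiving[1:])
    inflow = min(q_in, receiving[0])      # upstream boundary demand
    outflow = sending[-1]                 # free downstream boundary
    flux = np.concatenate(([inflow], interior, [outflow]))
    # conservation update (CFL: v_free * dt / dx <= 1)
    return rho + dt / dx * (flux[:-1] - flux[1:])

# Hypothetical homogeneous section with a local congestion patch;
# the supply constraint makes the queue propagate upstream.
rho = np.full(10, 20.0)   # veh/km per cell
rho[4:6] = 100.0          # congested cells
for _ in range(20):
    rho = ctm_step(rho, q_in=1000.0, v_free=100.0, w_back=20.0,
                   rho_max=150.0, q_max=2000.0, dt=0.002, dx=0.5)
```

The min(sending, receiving) coupling is what keeps densities within [0, rho_max] and reproduces backward-moving congestion waves; the dissertation's contribution is to adapt the underlying flow-density relation online rather than fixing v_free, w_back and q_max.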

  16. A general diagnostic model applied to language testing data.

    Science.gov (United States)

    von Davier, Matthias

    2008-11-01

    Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.
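    For readers unfamiliar with the model class, the item response probability of a compensatory diagnostic model for dichotomous data can be sketched as a logistic function of a weighted sum of skill attributes. The function name, signature, and the particular logit form below are illustrative assumptions, not von Davier's notation.

```python
import math

def gdm_prob(beta_i, q_row, gammas, skills):
    """Illustrative compensatory diagnostic model for a dichotomous item i:
    P(X_i = 1 | skills) = logistic(beta_i + sum_k q_ik * gamma_ik * a_k),
    where q_ik is the Q-matrix entry linking item i to skill k."""
    logit = beta_i + sum(q * g * a for q, g, a in zip(q_row, gammas, skills))
    return 1.0 / (1.0 + math.exp(-logit))
```

    With a single continuous skill and a unit Q-matrix entry this reduces to a two-parameter logistic IRT model, consistent with the special cases listed in the abstract.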

  17. Applying a new ensemble approach to estimating stock status of marine fisheries around the world

    DEFF Research Database (Denmark)

    Rosenberg, Andrew A.; Kleisner, Kristin M.; Afflerbach, Jamie

    2018-01-01

    The exploitation status of marine fisheries stocks worldwide is of critical importance for food security, ecosystem conservation, and fishery sustainability. Applying a suite of data-limited methods to global catch data, combined through an ensemble modeling approach, we provide quantitative esti...

  18. Geothermal potential assessment for a low carbon strategy : A new systematic approach applied in southern Italy

    NARCIS (Netherlands)

    Trumpy, E.; Botteghi, S.; Caiozzi, F.; Donato, A.; Gola, G.; Montanari, D.; Pluymaekers, M. P D; Santilano, A.; van Wees, J. D.; Manzella, A.

    2016-01-01

    In this study a new approach to geothermal potential assessment was set up and applied in four regions in southern Italy. Our procedure, VIGORThermoGIS, relies on the volume method of assessment and uses a 3D model of the subsurface to integrate thermal, geological and petro-physical data. The

  20. The applying stakeholder approach to strategic management of territories development

    Directory of Open Access Journals (Sweden)

    Ilshat Azamatovich Tazhitdinov

    2013-06-01

    Full Text Available In this paper, aspects of the strategic management of socioeconomic development of territories are discussed in terms of the stakeholder approach. The author's interpretation of the concept of a sub-region stakeholder is proposed, and a classification into stakeholders internal and external to the territorial socioeconomic system of the sub-regional level is offered. The types of stakeholder interests and resources in the sub-region are identified; correlating interests with resources makes it possible to determine the groups (alliances) of stakeholders that ensure a balance of interests depending on the objectives of the association. A conceptual stakeholder-agent model of the management of strategic territorial development within the hierarchical system «region — sub-region — municipal formation» is proposed. All stakeholders are considered as influence agents directing their own resources to provide a comprehensive approach to the management of territorial development. The interaction between the influence agents of the «region — sub-region — municipal formation» system is provided vertically and horizontally through the initialization of the development and implementation of strategic documents of the sub-region. Vertical interaction occurs between stakeholders such as government and municipal authorities acting as a guideline, and horizontal interaction between the rest of them acting as a partnership. Within the proposed model, concurrent engineering is implemented as a form of inter-municipal strategic cooperation of local governments for forming and analyzing a set of alternative project activities in the sub-region in order to choose the best options. The proposed approach was tested in the development of the medium-term comprehensive program of socioeconomic development of the Zauralye and North-East sub-regions of the Republic of Bashkortostan (2011–2015).

  1. Agent-Based Modelling applied to 5D model of the HIV infection

    Directory of Open Access Journals (Sweden)

    Toufik Laroum

    2016-12-01

    The simplest model was the 3D mathematical model, but the complexity of this phenomenon and the diversity of cells and actors affecting its evolution require new approaches, such as the multi-agent approach we applied in this paper. The results of our simulator on the 5D model are promising because they are consistent with biological knowledge. Therefore, the proposed approach is well suited to the study of population dynamics in general and could help to understand and predict the dynamics of HIV infection.

  2. Mathematical Modeling Applied to Maritime Security

    OpenAIRE

    Center for Homeland Defense and Security

    2010-01-01

    Center for Homeland Defense and Security, OUT OF THE CLASSROOM Download the paper: “Layered Defense: Modeling Terrorist Transfer Threat Networks and Optimizing Network Risk Reduction” Students in Ted Lewis’ Critical Infrastructure Protection course are taught how mathematical modeling can provide...

  3. Applying Modeling Tools to Ground System Procedures

    Science.gov (United States)

    Di Pasquale, Peter

    2012-01-01

    As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.

  4. Applied Geography Internships: Operational Canadian Models.

    Science.gov (United States)

    Foster, L. T.

    1982-01-01

    Anxious to maintain student enrollments, geography departments have placed greater emphasis on the applied nature of the discipline. Described are (1) the advantages of internships in college geography curricula that enable students to gain firsthand knowledge about the usefulness of geography in real world situations and (2) operational models…

  5. Applied Creativity: The Creative Marketing Breakthrough Model

    Science.gov (United States)

    Titus, Philip A.

    2007-01-01

    Despite the increasing importance of personal creativity in today's business environment, few conceptual creativity frameworks have been presented in the marketing education literature. The purpose of this article is to advance the integration of creativity instruction into marketing classrooms by presenting an applied creative marketing…

  6. A Log Logistic Survival Model Applied to Hypobaric Decompression Sickness

    Science.gov (United States)

    Conkin, Johnny

    2001-01-01

    Decompression sickness (DCS) is a complex, multivariable problem. A mathematical description or model of the likelihood of DCS requires a large amount of quality research data, ideas on how to define a decompression dose using physical and physiological variables, and an appropriate analytical approach. It also requires a high-performance computer with specialized software. I have used published DCS data to develop my decompression doses, which are variants of equilibrium expressions for evolved gas plus other explanatory variables. My analytical approach is survival analysis, where the time of DCS occurrence is modeled. My conclusions can be applied to simple hypobaric decompressions - ascents lasting from 5 to 30 minutes - and, after minutes to hours, to denitrogenation (prebreathing). They are also applicable to long or short exposures, and can be used whether the sufferer of DCS is at rest or exercising at altitude. Ultimately I would like my models to be applied to astronauts to reduce the risk of DCS during spacewalks, as well as to future spaceflight crews on the Moon and Mars.
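    As an illustration of the model class named in the title, a log-logistic survival function can be written as S(t) = 1/(1 + (λt)^κ). The parameterization and names below are one common convention, chosen for illustration; the decompression-dose covariates of the actual model are omitted.

```python
def log_logistic_survival(t, lam, kappa):
    """S(t) = 1 / (1 + (lam*t)**kappa): probability of remaining DCS-free past time t."""
    return 1.0 / (1.0 + (lam * t) ** kappa)

def log_logistic_hazard(t, lam, kappa):
    """Instantaneous risk h(t) = f(t) / S(t); for kappa > 1 it rises and then falls,
    a plausible shape when onset risk peaks some time after decompression."""
    u = (lam * t) ** kappa
    return (kappa / t) * u / (1.0 + u)
```

    At t = 1/λ the survival probability is exactly 0.5, which is one way to read λ as an inverse median onset time.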

  7. Views on Montessori Approach by Teachers Serving at Schools Applying the Montessori Approach

    Science.gov (United States)

    Atli, Sibel; Korkmaz, A. Merve; Tastepe, Taskin; Koksal Akyol, Aysel

    2016-01-01

    Problem Statement: Further studies on Montessori teachers are required on the grounds that the Montessori approach, which, having been applied throughout the world, holds an important place in the alternative education field. Yet it is novel for Turkey, and there are only a limited number of studies on Montessori teachers in Turkey. Purpose of…

  8. Fractional calculus model of electrical impedance applied to human skin.

    Science.gov (United States)

    Vosika, Zoran B; Lazovic, Goran M; Misevic, Gradimir N; Simic-Krstic, Jovana B

    2013-01-01

    Fractional calculus is a mathematical approach dealing with derivatives and integrals of arbitrary and complex orders. Therefore, it adds a new dimension to understanding and describing the basic nature and behavior of complex systems in an improved way. Here we use fractional calculus for modeling the electrical properties of biological systems. We derived a new class of generalized models for electrical impedance and applied them to human skin by experimental data fitting. The primary model introduces new generalizations of: 1) the Weyl fractional derivative operator, 2) the Cole equation, and 3) the Constant Phase Element (CPE). These generalizations were described by a novel equation which presented a parameter [Formula: see text] related to remnant memory and corrected four essential parameters [Formula: see text]. We further generalized the single generalized element by introducing a specific partial sum of a Maclaurin series determined by parameters [Formula: see text]. We defined individual primary model elements and their serial combination models by the appropriate equations and electrical schemes. The Cole equation is a special case of our generalized class of models for [Formula: see text]. Previous bioimpedance data analyses of living systems using the basic Cole and serial Cole models show significant imprecisions. Our new class of models considerably improves the quality of fitting, evaluated by mean square errors, for bioimpedance data obtained from human skin. Our models, with new parameters presented in a specific partial sum of a Maclaurin series, also extend the representation, understanding and description of the electrical properties of complex systems in terms of remnant memory effects.
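    The classic Cole equation, which the abstract identifies as a special case of the generalized class, can be evaluated directly with complex arithmetic. This sketch shows only the standard Cole model; the paper's generalized operators are not reproduced here, and the function name is ours.

```python
def cole_impedance(omega, r_inf, r0, tau, alpha):
    """Classic Cole model of tissue impedance:
    Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)**alpha),
    where 0 < alpha <= 1 captures the constant-phase-element behaviour."""
    return r_inf + (r0 - r_inf) / (1.0 + (1j * omega * tau) ** alpha)
```

    At low frequency Z approaches the DC resistance R0, and at high frequency it approaches R_inf; alpha = 1 recovers an ideal single-dispersion (Debye-type) response.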

  10. Apply Functional Modelling to Consequence Analysis in Supervision Systems

    DEFF Research Database (Denmark)

    Zhang, Xinxin; Lind, Morten; Gola, Giulio

    2013-01-01

    This paper will first present the purpose and goals of applying functional modelling approach to consequence analysis by adopting Multilevel Flow Modelling (MFM). MFM Models describe a complex system in multiple abstraction levels in both means-end dimension and whole-part dimension. It contains...... consequence analysis to practical or online applications in supervision systems. It will also suggest a multiagent solution as the integration architecture for developing tools to facilitate the utilization results of functional consequence analysis. Finally a prototype of the multiagent reasoning system...... causal relations between functions and goals. A rule base system can be developed to trace the causal relations and perform consequence propagations. This paper will illustrate how to use MFM for consequence reasoning by using rule base technology and describe the challenges for integrating functional...

  11. Applying Probabilistic Decision Models to Clinical Trial Design

    Science.gov (United States)

    Smith, Wade P; Phillips, Mark H

    2018-01-01

    Clinical trial design most often focuses on a single or several related outcomes with corresponding calculations of statistical power. We consider a clinical trial to be a decision problem, often with competing outcomes. Using a current controversy in the treatment of HPV-positive head and neck cancer, we apply several different probabilistic methods to help define the range of outcomes given different possible trial designs. Our model incorporates the uncertainties in the disease process and treatment response and the inhomogeneities in the patient population. Instead of expected utility, we have used a Markov model to calculate quality adjusted life expectancy as a maximization objective. Monte Carlo simulations over realistic ranges of parameters are used to explore different trial scenarios given the possible ranges of parameters. This modeling approach can be used to better inform the initial trial design so that it will more likely achieve clinical relevance.
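    The combination of a Markov model and Monte Carlo simulation can be sketched with a toy three-state model. The states, transition probabilities, and per-cycle utility weights below are invented for illustration and are not the trial's parameters.

```python
import random

# Hypothetical three-state model (well -> progressed -> dead); numbers invented.
P = {
    "well":       {"well": 0.96, "progressed": 0.03, "dead": 0.01},
    "progressed": {"well": 0.00, "progressed": 0.90, "dead": 0.10},
    "dead":       {"well": 0.00, "progressed": 0.00, "dead": 1.00},
}
UTILITY = {"well": 0.9, "progressed": 0.6, "dead": 0.0}  # QALY weight per cycle

def simulate_qale(n_patients, n_cycles, cycle_years=1.0 / 12.0, seed=0):
    """Mean quality-adjusted life expectancy (years) by Monte Carlo simulation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_patients):
        state = "well"
        for _ in range(n_cycles):
            total += UTILITY[state] * cycle_years
            r = rng.random()
            cum = 0.0
            for nxt, p in P[state].items():
                cum += p
                if r < cum:
                    state = nxt
                    break
    return total / n_patients
```

    Comparing this maximization objective across candidate trial arms, with parameters drawn from realistic ranges, is the kind of exploration the abstract describes.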

  12. Applying Mathematical Models to Surgical Patient Planning

    NARCIS (Netherlands)

    J.M. van Oostrum (Jeroen)

    2009-01-01

    textabstractOn a daily basis surgeons, nurses, and managers face cancellation of surgery, peak demands on wards, and overtime in operating rooms. Moreover, the lack of an integral planning approach for operating rooms, wards, and intensive care units causes low resource utilization and makes patient

  13. A suggested approach toward measuring sorption and applying sorption data to repository performance assessment

    International Nuclear Information System (INIS)

    Rundberg, R.S.

    1992-01-01

    The prediction of radionuclide migration for the purpose of assessing the safety of a nuclear waste repository will be based on collective knowledge of the hydrologic and geochemical properties of the surrounding rock and groundwater. This knowledge, along with assumptions about the interactions of radionuclides with groundwater and minerals, forms the scientific basis for a model capable of accurately predicting the repository's performance. Because the interaction of radionuclides in geochemical systems is known to be complicated, several fundamental and empirical approaches to measuring the interaction between radionuclides and the geologic barrier have been developed. The approaches applied to the measurement of sorption involve the use of pure minerals and intact or crushed rock in dynamic and static experiments. Each approach has its advantages and disadvantages. There is no single best method for providing sorption data to performance assessment models that can be applied without invoking information derived from multiple experiments. 53 refs., 12 figs

  14. Applying Olap Model On Public Finance Management

    OpenAIRE

    Dorde Pavlovic; Branko Gledovic

    2011-01-01

    Budget control derives from one of the main functions of the budget: serving as an instrument for controlling the acquisition and spending of budget funds. The OLAP model is an instrument that finds its place in the budget planning process, the executive phases of the budget, accountancy, etc. There is a direct correlation between the OLAP model and the public finance management process.

  15. Modeling Organizational Design - Applying A Formalism Model From Theoretical Physics

    Directory of Open Access Journals (Sweden)

    Robert Fabac

    2008-06-01

    Full Text Available Modern organizations are exposed to diverse external environment influences. Currently accepted concepts of organizational design take into account structure, its interaction with strategy, processes, people, etc. Organization design and planning aims to align these key organizational design variables. At the higher conceptual level, however, no completely satisfactory formulation for this alignment exists. We develop an approach originating from the application of concepts of theoretical physics to social systems. Under this approach, the allocation of organizational resources is analyzed in terms of social entropy, social free energy and social temperature. This allows us to formalize the dynamic relationship between organizational design variables. In this paper we relate this model to Galbraith's Star Model and also suggest improvements in the procedure of the complex analytical method in organizational design.

  16. Applied modelling and computing in social science

    CERN Document Server

    Povh, Janez

    2015-01-01

    In social science outstanding results are yielded by advanced simulation methods, based on state of the art software technologies and an appropriate combination of qualitative and quantitative methods. This book presents examples of successful applications of modelling and computing in social science: business and logistic process simulation and optimization, deeper knowledge extractions from big data, better understanding and predicting of social behaviour and modelling health and environment changes.

  17. Mouse genetic approaches applied to the normal tissue radiation response

    International Nuclear Information System (INIS)

    Haston, Christina K.

    2012-01-01

    The varying responses of inbred mouse models to radiation exposure present a unique opportunity to dissect the genetic basis of radiation sensitivity and tissue injury. Such studies are complementary to human association studies, as they permit both the analysis of clinical features of disease, and of specific variants associated with its presentation, in a controlled environment. Herein I review how animal models are studied to identify specific genetic variants influencing predisposition to radiation-induced traits. Among these radiation-induced responses are documented strain differences in repair of DNA damage and in extent of tissue injury (in the lung, skin, and intestine), which form the basis for genetic investigations. For example, radiation-induced DNA damage is consistently greater in tissues from BALB/cJ mice than in those from C57BL/6J mice, suggesting there may be an inherent DNA damage level per strain. Regarding tissue injury, strain-specific inflammatory and fibrotic phenotypes have been documented, principally for C57BL/6, C3H and A/J mice, but a correlation among responses, such that knowledge of the radiation injury in one tissue would inform of the response in another, is not evident. Strategies to identify genetic differences contributing to a trait based on inbred strain differences, which include linkage analysis and the evaluation of recombinant congenic (RC) strains, are presented, with a focus on the lung response to irradiation, which is the only radiation-induced tissue injury mapped to date. Such approaches are needed to reveal genetic differences in susceptibility to radiation injury, and also to provide a context for the effects of specific genetic variation uncovered in anticipated clinical association studies. In summary, mouse models can be studied to uncover heritable variation predisposing to specific radiation responses, and such variations may point to pathways of importance to phenotype development in the clinic.

  18. A Multi-Model Approach for System Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad; Bækgaard, Mikkel Ask Buur

    2007-01-01

    A multi-model approach for system diagnosis is presented in this paper. Its relation to fault diagnosis as well as performance validation is considered. The approach is based on testing a number of pre-described models and finding which one fits best. It is an active approach, i.e. an auxiliary input is applied to the system. The multi-model approach is applied to a wind turbine system.
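    The active multi-model idea (apply an auxiliary input, then test which of the pre-described models fits best) can be sketched as a residual comparison. Selection by sum of squared residuals is an illustrative criterion, not necessarily the paper's test statistic.

```python
def best_model(models, inputs, measured):
    """Score each candidate model by its sum of squared residuals against the
    measured response to the auxiliary input, and return the best fit."""
    scores = {name: sum((f(u) - y) ** 2 for u, y in zip(inputs, measured))
              for name, f in models.items()}
    return min(scores, key=scores.get), scores
```

    For example, with two hypothetical static gain models and a measured response close to a gain of 2, the selector returns the gain-2 candidate.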

  19. Uncertainty models applied to the substation planning

    Energy Technology Data Exchange (ETDEWEB)

    Fontoura Filho, Roberto N [ELETROBRAS, Rio de Janeiro, RJ (Brazil); Aires, Joao Carlos O; Tortelly, Debora L.S. [Light Servicos de Eletricidade S.A., Rio de Janeiro, RJ (Brazil)

    1994-12-31

    The selection of reinforcements for a power system expansion becomes a difficult task in an environment of uncertainties. These uncertainties can be classified according to their sources as endogenous and exogenous. The endogenous uncertainty is associated with the elements of the generation, transmission and distribution systems. The exogenous uncertainty is associated with external aspects, such as the financial resources, the time spent to build the installations, the equipment prices and the load level. The load uncertainty is extremely sensitive to the behaviour of economic conditions. Although uncertainty cannot be removed completely, the endogenous component can be conveniently treated and the exogenous component can be compensated for. This paper describes an uncertainty treatment methodology and a practical application to a group of substations belonging to LIGHT, the Rio de Janeiro electric utility. The equipment performance uncertainty is treated by adopting a probabilistic approach. The uncertainty associated with load growth is considered by using technical analysis of scenarios and choice criteria based on Decision Theory. In this paper the Savage method and the fuzzy set method were used in order to select the best medium-term reinforcement plan. (author) 7 refs., 4 figs., 6 tabs.
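    The Savage method mentioned in the abstract is a minimax-regret criterion. A generic sketch over a plan-by-scenario cost matrix (the numbers in the usage example are invented) looks like this:

```python
def savage_minimax_regret(costs):
    """costs[plan][scenario]: cost of each reinforcement plan under each load
    scenario. Regret = cost minus the best achievable cost in that scenario;
    returns the index of the plan minimizing the maximum regret, plus regrets."""
    n_scen = len(costs[0])
    best_per_scen = [min(plan[s] for plan in costs) for s in range(n_scen)]
    max_regret = [max(plan[s] - best_per_scen[s] for s in range(n_scen))
                  for plan in costs]
    return min(range(len(costs)), key=lambda p: max_regret[p]), max_regret
```

    The criterion favours plans that are never far from the best choice in any scenario, which suits planning under irreducible exogenous load uncertainty.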

  20. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for a lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing...... these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and to empirically demonstrate equifinal paths to maturity. Specifically...... methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches for maturity model research, and provides demonstrations of its application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper...

  1. Variational approach to chiral quark models

    Energy Technology Data Exchange (ETDEWEB)

    Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira

    1987-03-01

    A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. It is indispensably related to the chiral symmetry breaking radius if the pion-quark interaction can be regarded as a perturbation.

  3. Private healthcare quality: applying a SERVQUAL model.

    Science.gov (United States)

    Butt, Mohsin Muhammad; de Run, Ernest Cyril

    2010-01-01

    This paper seeks to develop and test the SERVQUAL model scale for measuring Malaysian private health service quality. The study consists of 340 randomly selected participants visiting a private healthcare facility during a three-month data collection period. Data were analyzed using means, correlations, principal component and confirmatory factor analysis to establish the modified SERVQUAL scale's reliability, underlying dimensionality and convergent, discriminant validity. Results indicate a moderate negative quality gap for overall Malaysian private healthcare service quality. Results also indicate a moderate negative quality gap on each service quality scale dimension. However, scale development analysis yielded excellent results, which can be used in wider healthcare policy and practice. Respondents were skewed towards a younger population, causing concern that the results might not represent all Malaysian age groups. The study's major contribution is that it offers a way to assess private healthcare service quality. Second, it successfully develops a scale that can be used to measure health service quality in Malaysian contexts.

  4. Towards a capability approach to careers: Applying Amartya Sen's thinking

    OpenAIRE

    Robertson, Peter.

    2015-01-01

    Amartya Sen’s capability approach characterizes an individual’s well-being in terms of what they are able to be, and what they are able to do. This framework for thinking has many commonalities with the core ideas in career guidance. Sen’s approach is abstract and not in itself a complete or explanatory theory, but a case can be made that the capability approach has something to offer career theory when combined with a life-career developmental approach. It may also suggest ways of working th...

  5. Experimental designs for autoregressive models applied to industrial maintenance

    International Nuclear Information System (INIS)

    Amo-Salas, M.; López-Fidalgo, J.; Pedregal, D.J.

    2015-01-01

    Some time series applications require data which are either expensive or technically difficult to obtain. In such cases, scheduling the points in time at which the information should be collected is of paramount importance in order to optimize the resources available. In this paper time series models are studied from a new perspective, consisting in the use of the Optimal Experimental Design setup to obtain the best times to take measurements, with the principal aim of saving costs or discarding useless information. The model and the covariance function are expressed in an explicit form to apply the usual techniques of Optimal Experimental Design. Optimal designs for various approaches are computed and their efficiencies are compared. The methods are demonstrated in an application to the industrial maintenance of a critical piece of equipment at a petrochemical plant. This simple model allows explicit calculations in order to show openly the procedure to find the correlation structure needed for computing the optimal experimental design. In this sense the techniques used in this paper to compute optimal designs may be transferred to other situations following the ideas of the paper, but taking into account the increasing difficulty of the procedure for more complex models. - Highlights: • Optimal experimental design theory is applied to AR models to reduce costs. • The first observation has an important impact on any optimal design. • Either a lack of precision or small starting observations calls for large times. • Reasonable optimal times were obtained by relaxing the efficiency slightly. • Optimal designs were computed in a predictive maintenance context
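    The idea of choosing observation times under serial correlation can be illustrated for the simplest possible case: estimating a constant mean from AR(1)-correlated observations, brute-forcing the subset of candidate times that maximizes 1'Σ⁻¹1 (equivalently, minimizes the variance of the GLS mean estimator). This toy setup is an assumption for illustration and is far simpler than the designs computed in the paper.

```python
from itertools import combinations

def ar1_cov(times, rho):
    """Unit-variance AR(1) covariance: Cov(y_ti, y_tj) = rho**|ti - tj|."""
    return [[rho ** abs(ti - tj) for tj in times] for ti in times]

def solve(a, b):
    """Gaussian elimination with partial pivoting (fine for small systems)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def best_times(candidates, k, rho):
    """Pick k observation times maximizing the information 1' Sigma^{-1} 1."""
    def information(times):
        return sum(solve(ar1_cov(times, rho), [1.0] * len(times)))
    return max(combinations(candidates, k), key=information)
```

    With positively correlated errors, spreading the observations apart reduces redundancy, so the brute force tends to pick well-separated times.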

  6. An Applied Project-Driven Approach to Undergraduate Research Experiences

    Science.gov (United States)

    Karls, Michael A.

    2017-01-01

    In this paper I will outline the process I have developed for conducting applied mathematics research with undergraduates and give some examples of the projects we have worked on. Several of these projects have led to refereed publications that could be used to illustrate topics taught in the undergraduate curriculum.

  7. Risk matrix model applied to the outsourcing of logistics' activities

    Directory of Open Access Journals (Sweden)

    Fouad Jawab

    2015-09-01

    Full Text Available Purpose: This paper proposes the application of the risk matrix model to the field of logistics outsourcing. Such an application can serve as the basis for decision making regarding the conduct of risk management in the logistics outsourcing process and allow its prevention. Design/methodology/approach: This study is based on the risk management of logistics outsourcing in the retail sector in Morocco. The authors identify all possible risks and then classify and prioritize them using the Risk Matrix Model. Finally, we arrive at four possible decisions for the identified risks. The analysis was made possible through interviews and discussions with the heads of departments and agents who are directly involved in each outsourced activity. Findings and Originality/value: It is possible to improve the risk matrix model by proposing more personalized prevention measures according to each company that operates in mass-market retailing. Originality/value: This study is the only one conducted on the process of logistics outsourcing in the retail sector in Morocco, with Label’vie as a case study. First, we identified as thoroughly as we could all possible risks; then we applied the Risk Matrix Model to sort them in ascending order of importance and criticality. As a result, we could provide decision-makers with a risk map for effective risk control and better guidance of the risk management process.

  8. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Giebel, G; Landberg, L [Risoe National Lab., Roskilde (Denmark); Madsen, H; Nielsen, H A [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better ability to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data are available, and if the deviation between the predictions and the observations exhibits systematic behaviour, this should be corrected for; when statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: extended Kalman filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
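
The MOS correction this record describes can be sketched with one of the estimators the abstract names, recursive least squares with a forgetting factor. This is a generic textbook formulation, not the authors' code; the forgetting-factor value and the affine correction model obs ≈ a + b·forecast are illustrative assumptions.

```python
# Recursive least squares (RLS) with forgetting factor lam, fitting the MOS
# correction obs ≈ a + b * forecast adaptively, so the coefficients track the
# time-dependent bias between NWP forecasts and local measurements.

def rls_mos(predictions, observations, lam=0.99):
    """Return [a, b] for obs ≈ a + b * pred, fitted recursively."""
    theta = [0.0, 0.0]               # [a, b]
    P = [[1e6, 0.0], [0.0, 1e6]]     # large initial covariance (weak prior)
    for x, y in zip(predictions, observations):
        phi = [1.0, x]
        # P @ phi (P stays symmetric, so phi^T P equals (P phi)^T)
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        k = [Pphi[0] / denom, Pphi[1] / denom]        # gain vector
        err = y - (theta[0] * phi[0] + theta[1] * phi[1])
        theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
        # Covariance update: P <- (P - k phi^T P) / lam
        P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return theta

# Synthetic check: observations that are an exact affine function of forecasts.
preds = [0.5 * i for i in range(1, 80)]
obs = [1.0 + 0.8 * x for x in preds]
a, b = rls_mos(preds, obs)
```

With λ < 1 the estimator discounts old data geometrically, which is how the "time variations" the abstract mentions are tracked.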

  9. Learning to Apply Models of Materials While Explaining Their Properties

    Science.gov (United States)

    Karpin, Tiia; Juuti, Kalle; Lavonen, Jari

    2014-01-01

    Background: Applying structural models is important to chemistry education at the upper secondary level, but it is considered one of the most difficult topics to learn. Purpose: This study analyses to what extent in designed lessons students learned to apply structural models in explaining the properties and behaviours of various materials.…

  10. Applied approach slab settlement research, design/construction : final report.

    Science.gov (United States)

    2013-08-01

    Approach embankment settlement is a pervasive problem in Oklahoma and many other states. The bump and/or abrupt slope change poses a danger to traffic and can cause increased dynamic loads on the bridge. Frequent and costly maintenance may be needed ...

  11. Major accident prevention through applying safety knowledge management approach.

    Science.gov (United States)

    Kalatpour, Omid

    2016-01-01

Many scattered resources of knowledge are available for chemical accident prevention purposes. The common approach to process safety management, including the use of databases and reference to the available knowledge, has some drawbacks. The main goal of this article was to devise a new knowledge base (KB) for the chemical accident prevention domain. The scattered sources of safety knowledge were identified and scanned. Then, the collected knowledge was formalized through a computerized program. The Protégé software was used to formalize and represent the stored safety knowledge. The domain knowledge, as well as its data and information, could then be retrieved. This optimized approach improved the safety and health knowledge management (KM) process and resolved some typical problems in the KM process. Upgrading traditional safety databases into KBs can improve the interaction between the users and the knowledge repository.

  12. A Multi-Criterion Evolutionary Approach Applied to Phylogenetic Reconstruction

    OpenAIRE

    Cancino, W.; Delbem, A.C.B.

    2010-01-01

In this paper, we propose an MOEA approach, called PhyloMOEA, which solves the phylogenetic inference problem using maximum parsimony and maximum likelihood criteria. PhyloMOEA's development was motivated by several studies in the literature (Huelsenbeck, 1995; Jin & Nei, 1990; Kuhner & Felsenstein, 1994; Tateno et al., 1994), which point out that various phylogenetic inference methods lead to inconsistent solutions. Techniques using parsimony and likelihood criteria yield different tr...

  13. Applying a Common-Sense Approach to Fighting Obesity

    Directory of Open Access Journals (Sweden)

    Jessica Y. Breland

    2012-01-01

    Full Text Available The obesity epidemic is a threat to the health of millions and to the economic viability of healthcare systems, governments, businesses, and nations. A range of answers come to mind if and when we ask, “What can we, health professionals (physicians, nurses, nutritionists, behavioral psychologists, do about this epidemic?” In this paper, we describe the Common-Sense Model of Self-Regulation as a framework for organizing existent tools and creating new tools to improve control of the obesity epidemic. Further, we explain how the Common-Sense Model can augment existing behavior-change models, with particular attention to the strength of the Common-Sense Model in addressing assessment and weight maintenance beyond initial weight loss.

  14. Applying Petri nets in modelling the human factor

    International Nuclear Information System (INIS)

    Bedreaga, Luminita; Constntinescu, Cristina; Guzun, Basarab

    2007-01-01

Usually, in the reliability analysis performed for complex systems, we determine the probability of success together with other performance indices, i.e. the likelihood associated with a given state. The possible values assigned to system states can be derived using inductive methods. If one wants to calculate the probability that a particular event occurs in the system, then deductive methods should be applied. In the particular case of human reliability analysis, as part of probabilistic safety analysis, international regulatory commissions have developed specific guides and procedures to perform such assessments. The paper presents a way to obtain human reliability quantification using the Petri net approach, an efficient means to assess system reliability because of its specific features. The examples shown in the paper are taken from human reliability documentation without a detailed (qualitative) human factor analysis. We present human action modelling using both event trees and the Petri net approach. The results obtained by these two kinds of methods are in good agreement. (authors)

  15. Applying open source innovation approaches in developing business innovation

    DEFF Research Database (Denmark)

    Aagaard, Annabeth; Lindgren, Peter

    2015-01-01

More and more companies are pursuing continuous innovation through different types of open source innovation and across different partners. The growing interest in open innovation (OI) originates both from the academic community as well as amongst practitioners, motivating further investigation of how OI can be facilitated and managed effectively in developing business model innovation. The aim of this paper is therefore to close this research gap and to provide new knowledge within the research field of OI and OI applications. Thus, in the present study we explore the facilitation and management of open source innovation in developing business model innovation in the context of an international OI contest across five international case companies. The findings reveal six categories of key antecedents in effective facilitation and management of OI in developing business model innovation.

  16. Cortical complexity in bipolar disorder applying a spherical harmonics approach.

    Science.gov (United States)

    Nenadic, Igor; Yotter, Rachel A; Dietzek, Maren; Langbein, Kerstin; Sauer, Heinrich; Gaser, Christian

    2017-05-30

    Recent studies using surface-based morphometry of structural magnetic resonance imaging data have suggested that some changes in bipolar disorder (BP) might be neurodevelopmental in origin. We applied a novel analysis of cortical complexity based on fractal dimensions in high-resolution structural MRI scans of 18 bipolar disorder patients and 26 healthy controls. Our region-of-interest based analysis revealed increases in fractal dimensions (in patients relative to controls) in left lateral orbitofrontal cortex and right precuneus, and decreases in right caudal middle frontal, entorhinal cortex, and right pars orbitalis, and left fusiform and posterior cingulate cortices. While our analysis is preliminary, it suggests that early neurodevelopmental pathologies might contribute to bipolar disorder, possibly through genetic mechanisms. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  17. An extended risk assessment approach for chemical plants applied to a study related to pipe ruptures

    International Nuclear Information System (INIS)

    Milazzo, Maria Francesca; Aven, Terje

    2012-01-01

    Risk assessments and Quantitative Risk Assessment (QRA) in particular have been used in the chemical industry for many years to support decision-making on the choice of arrangements and measures associated with chemical processes, transportation and storage of dangerous substances. The assessments have been founded on a risk perspective seeing risk as a function of frequency of events (probability) and associated consequences. In this paper we point to the need for extending this approach to place a stronger emphasis on uncertainties. A recently developed risk framework designed to better reflect such uncertainties is presented and applied to a chemical plant and specifically the analysis of accidental events related to the rupture of pipes. Two different ways of implementing the framework are presented, one based on the introduction of probability models and one without. The differences between the standard approach and the extended approaches are discussed from a theoretical point of view as well as from a practical risk analyst perspective.

  18. Undiscovered resource evaluation: Towards applying a systematic approach to uranium

    International Nuclear Information System (INIS)

    Fairclough, M.; Katona, L.

    2014-01-01

Evaluations of potential mineral resource supply range from spatial to aspatial, and everything in between, across a range of scales. They also range from qualitative to quantitative, with similar hybrid examples across the spectrum. These can comprise detailed deposit-specific reserve and resource calculations, target-generation processes and estimates of potential endowments in a broad geographic or geological area. All are estimates until the ore has been discovered and extracted. Contemporary national or provincial scale evaluations of mineral potential are relatively advanced and some include uranium, such as those for South Australia undertaken by the State Geological Survey. These play an important role in land-use planning as well as in attracting exploration investment, and range from data- to knowledge-driven approaches. Studies have been undertaken for the Mt Painter region, as well as for adjacent basins. The process of estimating large-scale potential mineral endowments is critical for national and international planning purposes but is a relatively recent and less common undertaking. In many cases, except at a general level, the data and knowledge for a relatively immature terrain are lacking, requiring assessment by analogy with other areas. Commencing in the 1980s, the United States Geological Survey, and subsequently the Geological Survey of Canada, evaluated a range of commodities from copper to hydrocarbons with a view to security of supply. They developed innovative approaches to, as far as practical, reduce the uncertainty and maximise the reproducibility of the calculations in information-poor regions. Yet the approach to uranium was relatively ad hoc and incomplete (such as the US Department of Energy NURE project). Other historic attempts, such as the IAEA-NEA International Uranium Resource Evaluation Project (IUREP) in the 1970s, were mainly qualitative. 
While there is still no systematic global evaluation of undiscovered uranium resources

  19. Sn approach applied to the solution of transport equation

    International Nuclear Information System (INIS)

    Lopes, J.P.

    1973-09-01

In this work the origin of transport theory is considered and the transport equation for the movement of neutrons in a system is established in its most general form, using the laws of nuclear physics. This equation is used as the starting point for developing, under adequate assumptions, simpler models that render the problem suitable for numerical solution. Representation of this model in different geometries is presented. The different processes of nuclear physics are introduced briefly and discussed. In addition, the boundary conditions for the different cases and a general procedure for the application of the conservation law are stated. The last chapter deals specifically with the Sn method: its development, definitions and generalities. Computational schemes for obtaining the Sn solution in spherical and cylindrical geometry, and convergence acceleration methods, are also developed. (author)
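
The Sn (discrete ordinates) idea this record describes can be illustrated in a standard textbook form (not taken from the report itself) for one-speed slab geometry with isotropic scattering: the angular variable is replaced by N discrete directions $\mu_n$ with quadrature weights $w_n$,

```latex
\mu_n \frac{\partial \psi_n}{\partial x}(x)
  + \Sigma_t(x)\,\psi_n(x)
  = \frac{\Sigma_s(x)}{2} \sum_{m=1}^{N} w_m \psi_m(x)
  + \frac{S(x)}{2},
\qquad n = 1,\dots,N,
```

where $\psi_n(x) = \psi(x,\mu_n)$ is the angular flux along direction $n$, $\Sigma_t$ and $\Sigma_s$ are the total and scattering cross sections, and $S$ is an isotropic source. Sweeping this coupled system direction by direction, with source iteration on the scattering sum, is the kind of computational scheme whose spherical and cylindrical variants the report develops.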

  20. Hierarchic stochastic modelling applied to intracellular Ca(2+) signals.

    Directory of Open Access Journals (Sweden)

    Gregor Moenke

Full Text Available Important biological processes like cell signalling and gene expression have noisy components and are very complex at the same time. Mathematical analysis of such systems has often been limited to the study of isolated subsystems, or approximations are used that are difficult to justify. Here we extend a recently published method (Thurley and Falcke, PNAS 2011) which is formulated in observable system configurations instead of molecular transitions. This reduces the number of system states by several orders of magnitude and avoids fitting of kinetic parameters. The method is applied to Ca(2+) signalling. Ca(2+) is a ubiquitous second messenger transmitting information by stochastic sequences of concentration spikes, which arise by coupling of subcellular Ca(2+) release events (puffs). We derive analytical expressions for a mechanistic Ca(2+) model, based on recent data from live cell imaging, and calculate Ca(2+) spike statistics in dependence on cellular parameters like stimulus strength or number of Ca(2+) channels. The new approach substantiates a generic Ca(2+) model, which is a very convenient way to simulate Ca(2+) spike sequences with correct spiking statistics.
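
As a loose illustration of spike statistics of the kind this record computes (not its mechanistic model), interspike intervals can be sketched as a renewal process: a deterministic refractory period plus an exponentially distributed stochastic part driven by random puff events. All parameter values below are invented.

```python
# Renewal-process sketch of Ca(2+) interspike intervals (ISIs): each ISI is a
# fixed refractory time t_ref plus an exponential waiting time with rate
# `rate`, so the mean ISI is t_ref + 1/rate. Parameters are illustrative.

import random

def simulate_isis(n_spikes, t_ref=10.0, rate=0.05, seed=1):
    """Return n_spikes interspike intervals (seconds, hypothetical units)."""
    rng = random.Random(seed)
    return [t_ref + rng.expovariate(rate) for _ in range(n_spikes)]

isis = simulate_isis(10000)
mean_isi = sum(isis) / len(isis)   # expected near t_ref + 1/rate = 30
```

Statistics such as the mean ISI and its dependence on a "stimulus strength" parameter (here, the rate) are the kind of quantities the paper derives analytically rather than by simulation.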

  1. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models, and provide a demonstration.

  2. Nonlinear Eddy Viscosity Models applied to Wind Turbine Wakes

    DEFF Research Database (Denmark)

    Laan, van der, Paul Maarten; Sørensen, Niels N.; Réthoré, Pierre-Elouan

    2013-01-01

    The linear k−ε eddy viscosity model and modified versions of two existing nonlinear eddy viscosity models are applied to single wind turbine wake simulations using a Reynolds Averaged Navier-Stokes code. Results are compared with field wake measurements. The nonlinear models give better results...

  3. Software engineering techniques applied to agricultural systems an object-oriented and UML approach

    CERN Document Server

    Papajorgji, Petraq J

    2014-01-01

Software Engineering Techniques Applied to Agricultural Systems presents cutting-edge software engineering techniques for designing and implementing better agricultural software systems based on the object-oriented paradigm and the Unified Modeling Language (UML). The focus is on the presentation of rigorous step-by-step approaches for modeling flexible agricultural and environmental systems, starting with a conceptual diagram representing elements of the system and their relationships. Furthermore, diagrams such as sequence and collaboration diagrams are used to explain the dynamic and static aspects of the software system. This second edition includes: a new chapter on the Object Constraint Language (OCL), a new section dedicated to the Model-View-Controller (MVC) design pattern, new chapters presenting details of two MDA-based tools, the Virtual Enterprise and Olivia Nova, and a new chapter with exercises on conceptual modeling. It may be highly useful to undergraduate and graduate students as t...

  4. An Inverse Kinematic Approach Using Groebner Basis Theory Applied to Gait Cycle Analysis

    Science.gov (United States)

    2013-03-01

This Air Force Institute of Technology thesis (AFIT-ENP-13-M-02) by Anum Barki, BS, presents an inverse kinematic approach using Groebner basis theory applied to gait cycle analysis. Approved by Dr. Ronald F. Tuttle (Chairman) and Dr. Kimberly Kendricks; the work is not subject to copyright protection in the United States.

  5. Quantum particle swarm approaches applied to combinatorial problems

    Energy Technology Data Exchange (ETDEWEB)

    Nicolau, Andressa dos S.; Schirru, Roberto; Lima, Alan M.M. de, E-mail: andressa@lmp.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2017-07-01

Quantum Particle Swarm Optimization (QPSO) is a global convergence algorithm that combines the classical PSO philosophy and quantum mechanics to improve the performance of PSO. Different from PSO, it has only the 'measurement' equation for the positions of all particles. The process of 'measurement' in quantum mechanics obeys classical laws, while the particle itself follows quantum rules. QPSO matches PSO in search ability but has fewer control parameters. In order to improve QPSO performance, several strategies have been proposed in the literature. Weighted QPSO (WQPSO) is a version of QPSO in which a weight parameter is inserted in the calculation of the balance between the global and local searching of the algorithm. It has been shown to perform well in finding the optimal solutions for many optimization problems. In this article random confinement was introduced into WQPSO. The WQPSO with random confinement was tested on two combinatorial problems. First, we ran the model on the Travelling Salesman Problem (TSP) to find the parameter values resulting in good solutions in general. Finally, the model was tested on the Nuclear Reactor Reload Problem, and its performance was compared with standard QPSO. (author)
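
A minimal continuous QPSO sketch of the 'measurement'-only position update is shown below. This is a generic textbook variant on a toy sphere function, not the authors' WQPSO-with-random-confinement implementation for the TSP or reload problem; the contraction-expansion schedule and all parameter values are assumptions.

```python
# Continuous QPSO sketch: particles carry no velocity. Each coordinate is
# "measured" around a local attractor (a random mix of personal and global
# best), with step size scaled by the mean best position (mbest) and a
# contraction-expansion coefficient beta.

import math
import random

def qpso(f, dim=2, n=20, iters=300, lo=-5.0, hi=5.0, seed=7):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters          # CE coefficient, 1.0 -> 0.5
        mbest = [sum(p[d] for p in pbest) / n for d in range(dim)]
        for i in range(n):
            for d in range(dim):
                phi = rng.random()
                attract = phi * pbest[i][d] + (1 - phi) * gbest[d]
                u = 1.0 - rng.random()         # u in (0, 1], avoids log(0)
                step = beta * abs(mbest[d] - X[i][d]) * math.log(1.0 / u)
                X[i][d] = attract + step if rng.random() < 0.5 else attract - step
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
        gbest = min(pbest, key=f)[:]
    return gbest

def sphere(x):
    return sum(v * v for v in x)

best = qpso(sphere)
```

The only tunables are the swarm size and the beta schedule, which is the "fewer control parameters" property the abstract highlights.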

  6. Quantum particle swarm approaches applied to combinatorial problems

    International Nuclear Information System (INIS)

    Nicolau, Andressa dos S.; Schirru, Roberto; Lima, Alan M.M. de

    2017-01-01

Quantum Particle Swarm Optimization (QPSO) is a global convergence algorithm that combines the classical PSO philosophy and quantum mechanics to improve the performance of PSO. Different from PSO, it has only the 'measurement' equation for the positions of all particles. The process of 'measurement' in quantum mechanics obeys classical laws, while the particle itself follows quantum rules. QPSO matches PSO in search ability but has fewer control parameters. In order to improve QPSO performance, several strategies have been proposed in the literature. Weighted QPSO (WQPSO) is a version of QPSO in which a weight parameter is inserted in the calculation of the balance between the global and local searching of the algorithm. It has been shown to perform well in finding the optimal solutions for many optimization problems. In this article random confinement was introduced into WQPSO. The WQPSO with random confinement was tested on two combinatorial problems. First, we ran the model on the Travelling Salesman Problem (TSP) to find the parameter values resulting in good solutions in general. Finally, the model was tested on the Nuclear Reactor Reload Problem, and its performance was compared with standard QPSO. (author)

  7. The Limitations of Applying Rational Decision-Making Models

    African Journals Online (AJOL)

decision-making models as applied to child spacing and more specifically to the use ... also assumes that the individual operates as a rational decision-making organism in ... work involves: Motivation; Counselling; Distribution of IEC mate-...

  8. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni

    2016-01-01

In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in an application requiring precision manoeuvres in a complex environment. Therefore, additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One contribution of this work is the design and development of a novel technique for estimating feature depth based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Under this assumption, the overall problem is simplified and focused on the position estimation of the aerial vehicle. Also, the tracking of visual features is made easier by the stabilized video. Another contribution of this work is to demonstrate that integrating very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
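
The deterministic core of the depth-estimation idea, intersecting bearing rays observed from two camera poses, can be sketched in 2-D as follows. This is a plain geometric illustration, not the paper's stochastic triangulation filter, and all names are invented.

```python
# Two-view triangulation sketch: a feature's position is recovered by
# intersecting the bearing rays measured from two known camera positions in
# the plane. Solved as a 2x2 linear system via Cramer's rule.

import math

def triangulate(p1, b1, p2, b2):
    """p1, p2: camera positions (x, y); b1, b2: bearing angles (radians).
    Returns the intersection point of the two rays."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    # Solve p1 + t1*d1 = p2 + t2*d2, i.e. t1*d1 - t2*d2 = p2 - p1.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# A feature at (3, 4) observed from (0, 0) and from (2, 0):
feat = triangulate((0.0, 0.0), math.atan2(4.0, 3.0),
                   (2.0, 0.0), math.atan2(4.0, 1.0))
```

In the paper this base operation is wrapped in a stochastic scheme and delayed until enough parallax (baseline between the two poses) has accumulated; with near-zero parallax the determinant above approaches zero and the depth is unobservable, which is exactly why the initialization is delayed.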

  9. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguia

Full Text Available In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in an application requiring precision manoeuvres in a complex environment. Therefore, additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One contribution of this work is the design and development of a novel technique for estimating feature depth based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Under this assumption, the overall problem is simplified and focused on the position estimation of the aerial vehicle. Also, the tracking of visual features is made easier by the stabilized video. Another contribution of this work is to demonstrate that integrating very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.

  10. The effective action approach applied to nuclear matter (1)

    International Nuclear Information System (INIS)

    Tran Huu Phat; Nguyen Tuan Anh.

    1996-11-01

Within the framework of the Walecka model (QHD-I), the application of the Cornwall-Jackiw-Tomboulis (CJT) effective action to nuclear matter is presented. The main feature is the treatment of the meson condensates for a system at finite nuclear density. The system of coupled Schwinger-Dyson (SD) equations is derived. It is shown that SD equations for sigma-omega mixing are absent in this formalism. Instead, the energy density of the nuclear ground state explicitly contains the contributions from the ring diagrams, amongst others. In the bare-vertex approximation, the expression for the energy density is written down for numerical computation in the next paper. (author). 14 refs, 3 figs

  11. A BRDF statistical model applying to space target materials modeling

    Science.gov (United States)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

To address the poor results obtained when modeling high-density measured BRDF data with the five-parameter semi-empirical model, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, the model contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model is able to achieve parameter inversion quickly with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 different samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The refined model is further verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, in which the strength of the optical scattering of different materials is clearly shown, demonstrating the refined model's ability to characterize these materials.
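
For orientation, a minimal in-plane BRDF of the Torrance-Sparrow family that such refinements build on can be sketched as a Lambertian term plus a Gaussian micro-facet specular lobe. All symbols and values here are generic textbook assumptions, not the paper's six parameters.

```python
# Torrance-Sparrow-style sketch (in-plane geometry only): diffuse term plus a
# specular lobe whose width sigma plays the role of surface roughness. The
# lobe peaks in the mirror direction theta_r == theta_i.

import math

def brdf(theta_i, theta_r, kd=0.2, ks=0.8, sigma=0.15):
    """theta_i, theta_r: incident and reflected zenith angles (radians)."""
    alpha = 0.5 * (theta_r - theta_i)          # facet tilt off mirror direction
    spec = ks * math.exp(-math.tan(alpha) ** 2 / (2.0 * sigma ** 2)) \
           / (4.0 * math.cos(theta_i) * math.cos(theta_r))
    return kd / math.pi + spec

on_peak = brdf(0.5, 0.5)     # viewing along the mirror direction
off_peak = brdf(0.5, 0.9)    # viewing off-peak
```

Fitting such a parametric form to dense measured samples (here by a genetic algorithm in the paper) amounts to minimizing the residual between `brdf(...)` and the measurements over the parameter vector.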

  12. Semantic Approaches Applied to Scientific Ocean Drilling Data

    Science.gov (United States)

    Fils, D.; Jenkins, C. J.; Arko, R. A.

    2012-12-01

The application of Linked Open Data methods to 40 years of data from scientific ocean drilling is providing users with several new methods for rich-content data search and discovery. Data from the Deep Sea Drilling Project (DSDP), Ocean Drilling Program (ODP) and Integrated Ocean Drilling Program (IODP) have been translated and placed in RDF triple stores to provide access via SPARQL, linked open data patterns, and embedded structured data through schema.org / RDFa. Existing search services have been re-encoded in this environment, which allows the new and established architectures to be contrasted. Vocabularies that include computed semantic relations between concepts allow separate but related data sets to be connected on their concepts and resources even when they are expressed somewhat differently. Scientific ocean drilling produces a wide range of data types and data sets: borehole logging file-based data, images, measurements, visual observations and physical sample data. The steps involved in connecting these data to concepts using vocabularies will be presented, including the connection of data sets through the Vocabulary of Interlinked Datasets (VoID) and open entity collections such as Freebase and dbPedia. Demonstrated examples will include: (i) using RDF Schema for inferencing and in federated searches across NGDC and IODP data, (ii) using structured data in the data.oceandrilling.org web site, and (iii) association through semantic methods of age models and depth-recorded data to facilitate age-based searches for data recorded by depth only.

  13. Applying the health action process approach (HAPA) to the choice of health products: An exploratory study

    DEFF Research Database (Denmark)

    Krutulyte, Rasa; Grunert, Klaus G.; Scholderer, Joachim

This paper presents the results of a qualitative pilot study that aimed to uncover Danish consumers' motives for choosing health food. Schwarzer's (1992) health action process approach (HAPA) was applied to understand the process by which people choose health products. The research focused on the role of behavioural intention predictors such as risk perception, outcome expectations and self-efficacy. The model proved to be a useful framework for understanding how consumers choose health food and is valuable for further application to dietary choice issues.

  14. Comparison of two multiaxial fatigue models applied to dental implants

    Directory of Open Access Journals (Sweden)

    JM. Ayllon

    2015-07-01

Full Text Available This paper presents two multiaxial fatigue life prediction models applied to a commercial dental implant. One model, called the Variable Initiation Length Model, takes into account both the crack initiation and propagation phases. The second model combines the Theory of Critical Distances with a critical-plane damage model to characterise the initiation and initial propagation of micro/meso cracks in the material. This paper discusses which material properties are necessary for implementing these models and how to obtain them in the laboratory from simple test specimens. It also describes the FE models developed to characterise the stress/strain fields and stress intensity factors in the implant. The results of applying both life prediction models are compared with experimental results arising from the application of the ISO 14801 standard to a commercial dental implant.
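
The Theory of Critical Distances ingredient of the second model can be illustrated with its simplest "point method" form: compare the stress at a depth of half the critical distance L below the notch with the plain-specimen fatigue limit. The stress field and all numbers below are invented for illustration; the paper's critical-plane damage model is not reproduced here.

```python
# Theory of Critical Distances, point-method sketch: instead of the peak
# stress at the notch root, the stress at depth L/2 is compared with the
# material's plain fatigue limit. Stress-decay law and values are invented.

def stress_at_depth(s_peak, rho, r):
    """Toy notch-stress decay with depth r below a root of radius rho (MPa)."""
    return s_peak * rho / (rho + 2.0 * r)

def tcd_point_method(s_peak, rho, L, fatigue_limit):
    """Predict crack initiation if the stress at depth L/2 exceeds the limit."""
    return stress_at_depth(s_peak, rho, L / 2.0) >= fatigue_limit
```

The averaging over L/2 is what lets a sharp-notch stress peak coexist with an infinite predicted life, the behaviour such models are designed to capture in threaded implant geometries.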

  15. Evaporator modeling - A hybrid approach

    International Nuclear Information System (INIS)

    Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun

    2009-01-01

In this paper, a hybrid modeling approach is proposed to model two-phase flow evaporators. The main procedure for hybrid modeling includes: (1) based on energy and material balances and thermodynamic principles, formulate the fundamental governing equations of the process; (2) select input/output (I/O) variables responsible for the system performance which can be measured and controlled; (3) represent those variables that exist in the original equations but are not measurable as simple functions of selected I/Os or constants; (4) obtain a single equation correlating system inputs and outputs; and (5) identify unknown parameters by linear or nonlinear least-squares methods. The method takes advantage of both physical and empirical modeling approaches, and can accurately predict performance over a wide operating range and in real time, which significantly reduces the computational burden and increases prediction accuracy. The model is verified with experimental data taken from a testing system. The testing results show that the proposed model can accurately predict the performance of the evaporator operating in real time, with a maximum error of ±8%. The developed models will have wide applications in operational optimization, performance assessment, and fault detection and diagnosis.
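
Step (5) of the recipe above can be sketched for a hypothetical correlating equation. The structure Q = a·mdot·ΔT + b·mdot is invented purely for illustration; only the identification step, linear least squares via the normal equations, reflects the procedure the abstract describes.

```python
# Parameter identification sketch: after steps (1)-(4) collapse the physics
# into one correlating equation, here hypothetically Q = a*mdot*dT + b*mdot,
# the unknown lumped parameters (a, b) are identified from measured I/O data
# by solving the 2x2 normal equations of linear least squares.

def identify(records):
    """records: iterable of (mdot, dT, Q) measurements; returns (a, b)."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for mdot, dT, Q in records:
        x1, x2 = mdot * dT, mdot          # the two regressors
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        r1 += x1 * Q;  r2 += x2 * Q
    det = s11 * s22 - s12 * s12
    return ((r1 * s22 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det)

# Synthetic "measurements" generated from known parameters a=4.18, b=0.5.
data = [(m, dT, 4.18 * m * dT + 0.5 * m)
        for m in (0.05, 0.10, 0.15) for dT in (2.0, 4.0, 6.0)]
a, b = identify(data)
```

Because the correlating equation is linear in its unknown parameters, the fit is a closed-form solve rather than an iterative optimization, which is what makes the hybrid model cheap enough for real-time use.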

  16. An extended gravity model with substitution applied to international trade

    NARCIS (Netherlands)

    Bikker, J.A.|info:eu-repo/dai/nl/06912261X

    The traditional gravity model has been applied many times to international trade flows, especially in order to analyze trade creation and trade diversion. However, there are two fundamental objections to the model: it cannot describe substitutions between flows, and it lacks a cogent theoretical foundation.

  17. Exponential models applied to automated processing of radioimmunoassay standard curves

    International Nuclear Information System (INIS)

    Morin, J.F.; Savina, A.; Caroff, J.; Miossec, J.; Legendre, J.M.; Jacolot, G.; Morin, P.P.

    1979-01-01

    An improved computer procedure is described for fitting radioimmunoassay standard curves by means of an exponential model on a desk-top calculator. This method has been applied to a variety of radioassays and the results are in accordance with those obtained with more sophisticated models.
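
    As an illustration of the fitting step: the paper's exact exponential form is not given here, so the hypothetical decaying exponential y = A·exp(−k·x) below is fitted by log-linearization (ln y = ln A − k·x), an ordinary linear least-squares problem well suited to a desk-top calculator.

```python
import math

# Hedged sketch: fit a hypothetical standard curve y = A * exp(-k * dose)
# by log-linearization, i.e. a straight-line fit to (x, ln y).
def fit_exponential(xs, ys):
    n = len(xs)
    lys = [math.log(v) for v in ys]
    sx, sy = sum(xs), sum(lys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * ly for x, ly in zip(xs, lys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept), -slope  # A, k

xs = [0.0, 1.0, 2.0, 4.0, 8.0]             # hypothetical doses
ys = [math.exp(-0.3 * x) for x in xs]      # synthetic responses with A=1, k=0.3
A, k = fit_exponential(xs, ys)
print(round(A, 3), round(k, 3))  # → 1.0 0.3
```

    Log-linearization implicitly reweights the errors; a full nonlinear fit would weight the raw residuals instead.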

  18. A new approach for developing adjoint models

    Science.gov (United States)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, doing so requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates this alternative approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves.
The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and
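
    The "model as a sequence of linear solves" abstraction can be illustrated in miniature: for a forward solve A·u = b and a functional J(u) = g·u, a single adjoint solve Aᵀλ = g yields the gradient dJ/db = λ. The 2×2 system below is a toy example, checked against a finite difference.

```python
# Minimal adjoint sketch: forward solve A u = b, functional J(u) = g.u,
# adjoint solve A^T lam = g, gradient dJ/db = lam.

def solve2(A, b):
    """Solve a 2x2 system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

A = [[4.0, 1.0], [2.0, 3.0]]
b = [1.0, 2.0]
g = [1.0, -1.0]

def J(rhs):
    u = solve2(A, rhs)
    return g[0] * u[0] + g[1] * u[1]

lam = solve2(transpose(A), g)                # adjoint solve: A^T lam = g
eps = 1e-6
fd = (J([b[0] + eps, b[1]]) - J(b)) / eps    # finite-difference dJ/db[0]
print(abs(lam[0] - fd) < 1e-5)  # → True
```

    One adjoint solve delivers the whole gradient, whereas finite differencing would need one extra forward solve per input component.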

  19. Continuum modeling an approach through practical examples

    CERN Document Server

    Muntean, Adrian

    2015-01-01

    This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws and (3) modeling boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced with nonstandard rheological topics like those typically arising in the food industry.

  20. LEARNING SEMANTICS-ENHANCED LANGUAGE MODELS APPLIED TO UNSUPERVISED WSD

    Energy Technology Data Exchange (ETDEWEB)

    VERSPOOR, KARIN [Los Alamos National Laboratory; LIN, SHOU-DE [Los Alamos National Laboratory

    2007-01-29

    An N-gram language model aims at capturing statistical syntactic word-order information from corpora. Although the concept of language models has been applied extensively to handle a variety of NLP problems with reasonable success, the standard model does not incorporate semantic information, which limits its applicability to semantic problems such as word sense disambiguation. We propose a framework that integrates semantic information into the language model schema, allowing a system to exploit both syntactic and semantic information to address NLP problems. Furthermore, acknowledging the limited availability of semantically annotated data, we discuss how the proposed model can be learned without annotated training examples. Finally, we report on a case study showing how the semantics-enhanced language model can be applied to unsupervised word sense disambiguation with promising results.
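
    For reference, the purely syntactic baseline the paper extends, a bigram model with add-one smoothing, can be sketched on a toy corpus (the semantic extension itself is not reproduced here).

```python
from collections import Counter

# Standard bigram language model with add-one (Laplace) smoothing,
# estimated from a toy corpus.
corpus = "the bank raised the rate the river bank flooded".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = len(unigrams)

def p_bigram(w1, w2):
    """P(w2 | w1) with add-one smoothing."""
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)

# Seen continuations outrank unseen ones, but nothing here distinguishes
# the two *senses* of 'bank' -- the gap the semantics-enhanced model targets.
print(p_bigram("the", "bank") > p_bigram("the", "flooded"))  # → True
```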

  1. Defining Requirements and Applying Information Modeling for Protecting Enterprise Assets

    Science.gov (United States)

    Fortier, Stephen C.; Volk, Jennifer H.

    The advent of terrorist threats has heightened local, regional, and national governments' interest in emergency response and disaster preparedness. The threat of natural disasters also challenges emergency responders to act swiftly and in a coordinated fashion. When a disaster occurs, an ad hoc coalition of pre-planned groups usually forms to respond to the incident. History has shown that these “systems of systems” do not interoperate very well. Communications between fire, police and rescue components either do not work or are inefficient. Government agencies, non-governmental organizations (NGOs), and private industry use a wide array of software platforms for managing data about emergency conditions, resources and response activities. Most of these are stand-alone systems with very limited capability for data sharing with other agencies or other levels of government. Information technology advances have facilitated the movement towards an integrated and coordinated approach to emergency management. Other communication mechanisms, such as video teleconferencing, digital television and radio broadcasting, are being utilized to combat the challenges of emergency information exchange. Recent disasters, such as Hurricane Katrina and the tsunami in Indonesia, have illuminated the weaknesses in emergency response. This paper will discuss the need for defining requirements for components of the ad hoc coalitions which are formed to respond to disasters. A goal of our effort was to develop a proof of concept that applying information modeling to the business processes used to protect and mitigate potential loss of an enterprise is feasible. These activities would be modeled both pre- and post-incident.

  2. Analytic model of Applied-B ion diode impedance behavior

    International Nuclear Information System (INIS)

    Miller, P.A.; Mendel, C.W. Jr.

    1987-01-01

    An empirical analysis of impedance data from Applied-B ion diodes used in seven inertial confinement fusion research experiments was published recently. The diodes all operated with impedance values well below the Child's-law value. The analysis uncovered an unusual unifying relationship among data from the different experiments. The analysis suggested that closure of the anode-cathode gap by electrode plasma was not a dominant factor in the experiments, but was not able to elaborate the underlying physics. Here we present a new analytic model of Applied-B ion diodes coupled to accelerators. A critical feature of the diode model is based on magnetic insulation theory. The model successfully describes impedance behavior of these diodes and supports stimulating new viewpoints of the physics of Applied-B ion diode operation
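
    The Child's-law benchmark mentioned above is the space-charge-limited current density J = (4ε₀/9)·√(2q/m)·V^(3/2)/d². A quick evaluation for protons is sketched below; the gap and voltage are illustrative, not values taken from the experiments discussed in the paper.

```python
import math

# Child-Langmuir space-charge-limited ion current density:
# J = (4*eps0/9) * sqrt(2*q/m) * V^(3/2) / d^2
EPS0 = 8.854e-12   # F/m, vacuum permittivity
Q = 1.602e-19      # C, proton charge
M = 1.673e-27      # kg, proton mass

def child_law_j(voltage, gap):
    """Child's-law current density (A/m^2) for protons across a planar gap."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * Q / M) * voltage ** 1.5 / gap ** 2

j = child_law_j(2.0e6, 0.01)     # illustrative: 2 MV across a 1 cm gap
print(round(j / 1e4, 1), "A/cm^2")
```

    The measured diode impedances in the paper correspond to current densities well below this limit, which is what the empirical analysis set out to explain.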

  3. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The input data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and it is difficult or costly to obtain many measurements; the lack of data can then be compensated for by introducing a priori estimates. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, an area containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and to improve some characteristics, such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
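
    A minimal sketch of Bayesian fusion for a single DSM cell, assuming Gaussian errors and inverse-variance weighting: two sensor-derived heights and a prior (e.g. from the smooth-roof assumption) are combined into a posterior estimate. The paper's exact formulation is not reproduced; all heights and variances below are hypothetical.

```python
# Precision-weighted (Bayesian, Gaussian-error) fusion of height estimates
# for one DSM cell: posterior precision is the sum of the input precisions,
# and the posterior mean is the precision-weighted average.
def fuse(estimates):
    """estimates: list of (height, variance). Returns (posterior mean, variance)."""
    precision = sum(1.0 / var for _, var in estimates)
    mean = sum(h / var for h, var in estimates) / precision
    return mean, 1.0 / precision

cell = [(52.0, 4.0),   # hypothetical WorldView-1-derived height, larger uncertainty
        (50.0, 1.0),   # hypothetical Pleiades-derived height, smaller uncertainty
        (50.5, 2.0)]   # hypothetical prior from the smoothness assumption
mean, var = fuse(cell)
print(round(mean, 3), round(var, 3))  # → 50.429 0.571
```

    Note that the posterior variance is smaller than any single input variance, which is why merging can improve on each source DSM individually.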

  5. Cellular Automata Models Applied to the Study of Landslide Dynamics

    Science.gov (United States)

    Liucci, Luisa; Melelli, Laura; Suteanu, Cristian

    2015-04-01

    Landslides are caused by complex processes controlled by the interaction of numerous factors. Increasing efforts are being made to understand the spatial and temporal evolution of this phenomenon, and the use of remote sensing data is making significant contributions to improving forecasts. This paper studies landslides seen as complex dynamic systems, in order to investigate their potential Self-Organized Critical (SOC) behavior, and in particular the scale-invariant aspects of the processes governing the spatial development of landslides and their temporal evolution, as well as the mechanisms involved in driving the system and keeping it in a critical state. For this purpose, we build Cellular Automata models, which have been shown to be capable of reproducing the complexity of real-world features using a small number of variables and simple rules, thus allowing for a reduction of the number of input parameters commonly used in the study of processes governing landslide evolution, such as those linked to the geomechanical properties of soils. This type of model has already been successfully applied in studying the dynamics of other natural hazards, such as earthquakes and forest fires. The basic structure of the model is composed of three modules: (i) an initialization module, which defines the topographic surface at time zero as a grid of square cells, each described by an altitude value; the surface is acquired from real Digital Elevation Models (DEMs); (ii) a transition function, which defines the rules used by the model to update the state of the system at each iteration. The rules use a stability criterion based on the slope angle and introduce a variable describing the weakening of the material over time, caused for example by rainfall. The weakening brings some sites of the system out of equilibrium, thus triggering landslides, which propagate within the system through local interactions between neighboring cells. By using different rates of
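
    The transition function in (ii) can be sketched in a deliberately simplified 1-D form (the model described above operates on a 2-D grid): a cell fails when the height drop to its downslope neighbour exceeds a critical threshold and passes a fraction of the excess material on, letting "landslides" propagate through local interactions.

```python
# Hedged 1-D simplification of a slope-threshold cellular-automaton rule:
# a cell whose drop to the next cell exceeds `threshold` transfers a
# fraction of the excess downslope; sweeps repeat until the profile is stable.
def relax(heights, threshold=1.0, fraction=0.5, steps=200):
    h = list(heights)
    for _ in range(steps):
        moved = False
        for i in range(len(h) - 1):
            excess = h[i] - h[i + 1] - threshold
            if excess > 1e-12:
                transfer = fraction * excess
                h[i] -= transfer
                h[i + 1] += transfer
                moved = True
        if not moved:
            break
    return h

profile = relax([5.0, 1.0, 0.0, 0.0])
# After relaxation, no adjacent drop exceeds the stability threshold.
print(all(profile[i] - profile[i + 1] <= 1.0 + 1e-6 for i in range(3)))  # → True
```

    Material is conserved by construction; a single oversteepened cell can trigger a cascade whose size depends on the state of the whole profile, the basic ingredient of SOC-type behavior.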

  6. MODELS OF TECHNOLOGY ADOPTION: AN INTEGRATIVE APPROACH

    Directory of Open Access Journals (Sweden)

    Andrei OGREZEANU

    2015-06-01

    The interdisciplinary study of information technology adoption has developed rapidly over the last 30 years. Various theoretical models have been developed and applied, such as the Technology Acceptance Model (TAM), Innovation Diffusion Theory (IDT), and the Theory of Planned Behavior (TPB). The result of these many years of research is thousands of contributions to the field, which, however, remain highly fragmented. This paper develops a theoretical model of technology adoption by integrating major theories in the field: primarily IDT, TAM, and TPB. To do so while avoiding confusion, an approach that goes back to basics in the development of independent variable types is proposed, emphasizing (1) the logic of classification, and (2) the psychological mechanisms behind variable types. Once developed, these types are populated with variables originating in empirical research. Conclusions are drawn on which types are underpopulated and present potential for future research. I end with a set of methodological recommendations for future application of the model.

  7. Applying different quality and safety models in healthcare improvement work: Boundary objects and system thinking

    International Nuclear Information System (INIS)

    Wiig, Siri; Robert, Glenn; Anderson, Janet E.; Pietikainen, Elina; Reiman, Teemu; Macchi, Luigi; Aase, Karina

    2014-01-01

    A number of theoretical models can be applied to help guide quality improvement and patient safety interventions in hospitals. However, there are often significant differences between such models and, therefore, in their potential contribution when applied in diverse contexts. The aim of this paper is to explore how two such models have been applied by hospitals to improve quality and safety. We describe and compare the models: (1) the Organizing for Quality (OQ) model, and (2) the Design for Integrated Safety Culture (DISC) model. We analyze the theoretical foundations of the models and show, using a retrospective comparative case study approach from two European hospitals, how these models have been applied to improve quality and safety. The analysis shows that differences appear in the theoretical foundations, practical approaches and applications of the models. Nevertheless, the case studies indicate that the choice between the OQ and DISC models is of less importance for guiding the practice of quality and safety improvement work, as they are both systemic and share some important characteristics. The main contribution of the models lies in their role as boundary objects directing attention towards organizational and systems thinking, culture, and collaboration.

  8. Linear mixing model applied to coarse resolution satellite data

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high-resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System data is applied to coarse-resolution NOAA Advanced Very High Resolution Radiometer satellite data. The reflective portion extracted from the middle-IR channel 3 (3.55-3.93 microns) is used with channels 1 (0.58-0.68 microns) and 2 (0.725-1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west-central region of Brazil. The derived fraction images are compared with an unsupervised classification and with the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of unmixing techniques when applied to coarse-resolution data for global studies.
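
    For the simplest two-endmember case, the constrained least-squares unmixing step can be sketched: the sum-to-one constraint f1 + f2 = 1 reduces the fit to one dimension, and clipping enforces non-negativity. The endmember spectra below are hypothetical three-band reflectances, not AVHRR values from the study.

```python
# Two-endmember constrained least-squares unmixing: model the pixel as
# f1*e1 + (1 - f1)*e2, solve the resulting 1-D least-squares problem for f1,
# and clip to [0, 1] to keep the fractions physical.
def unmix2(pixel, e1, e2):
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, e1, e2))
    den = sum((a - b) ** 2 for a, b in zip(e1, e2))
    f1 = max(0.0, min(1.0, num / den))
    return f1, 1.0 - f1

vegetation = [0.05, 0.08, 0.45]   # hypothetical endmember spectra
soil       = [0.15, 0.20, 0.25]
pixel = [0.5 * v + 0.5 * s for v, s in zip(vegetation, soil)]  # exact 50/50 mix
f_veg, f_soil = unmix2(pixel, vegetation, soil)
print(round(f_veg, 3), round(f_soil, 3))  # → 0.5 0.5
```

    With more endmembers than two, the same idea becomes a constrained quadratic program solved per pixel.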

  9. The sdg interacting-boson model applied to 168Er

    Science.gov (United States)

    Yoshinaga, N.; Akiyama, Y.; Arima, A.

    1986-03-01

    The sdg interacting-boson model is applied to 168Er. Energy levels and E2 transitions are calculated. This model is shown to solve the problem of anharmonicity regarding the excitation energy of the first Kπ=4+ band relative to that of the first Kπ=2+ one. The level scheme including the Kπ=3+ band is well reproduced and the calculated B(E2)'s are consistent with the experimental data.

  10. Remarks on orthotropic elastic models applied to wood

    Directory of Open Access Journals (Sweden)

    Nilson Tadeu Mascia

    2006-09-01

    Wood is generally considered an anisotropic material. In terms of engineering elastic models, wood is usually treated as an orthotropic material. This paper presents an analysis of the two principal anisotropic elastic models that are usually applied to wood. The first, the linear orthotropic model, in which the material axes L (longitudinal), R (radial) and T (tangential) coincide with the Cartesian axes (x, y, z), is the more widely accepted elastic model for wood. The other, the cylindrical orthotropic model, is more faithful to the growth characteristics of wood but mathematically more complex to adopt in practical terms. Because of its importance for the elastic parameters of wood, this paper deals specifically with the influence of fiber orientation in these models through appropriate coordinate transformations. As a final result, some examples of the linear model are presented, showing the variation of the elastic moduli, i.e., Young's modulus and shear modulus, with fiber orientation.
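
    The fiber-orientation dependence discussed above can be illustrated with the classical off-axis transformation of Young's modulus for the linear orthotropic model: 1/E(θ) = cos⁴θ/E_L + sin⁴θ/E_T + (1/G_LT − 2ν_LT/E_L)·sin²θ·cos²θ. The elastic constants below are illustrative softwood-order values, not data from the paper.

```python
import math

# Off-axis Young's modulus for a linear orthotropic material, from the
# standard compliance transformation under in-plane rotation by theta.
def E_offaxis(theta, E_L, E_T, G_LT, nu_LT):
    c, s = math.cos(theta), math.sin(theta)
    inv = (c ** 4 / E_L + s ** 4 / E_T
           + (1.0 / G_LT - 2.0 * nu_LT / E_L) * s ** 2 * c ** 2)
    return 1.0 / inv

E_L, E_T, G_LT, nu_LT = 12000.0, 600.0, 700.0, 0.4   # MPa, MPa, MPa, - (illustrative)
for deg in (0, 15, 45, 90):
    print(deg, round(E_offaxis(math.radians(deg), E_L, E_T, G_LT, nu_LT), 1))
# E(0) equals E_L and E(90) equals E_T; the modulus drops steeply with grain angle.
```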

  11. An applied general equilibrium model for Dutch agribusiness policy analysis

    NARCIS (Netherlands)

    Peerlings, J.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular, the effects on inter-industry transactions, factor demand, income, and trade are of special interest.

  12. The limitations of applying rational decision-making models to ...

    African Journals Online (AJOL)

    The aim of this paper is to show the limitations of rational decision-making models as applied to child spacing and more specifically to the use of modern methods of contraception. In the light of factors known to influence low uptake of child spacing services in other African countries, suggestions are made to explain the ...

  13. Applying the Flipped Classroom Model to English Language Arts Education

    Science.gov (United States)

    Young, Carl A., Ed.; Moran, Clarice M., Ed.

    2017-01-01

    The flipped classroom method, particularly when used with digital video, has recently attracted many supporters within the education field. Now more than ever, language arts educators can benefit tremendously from incorporating flipped classroom techniques into their curriculum. "Applying the Flipped Classroom Model to English Language Arts…

  14. Applying a social network analysis (SNA) approach to understanding radiologists' performance in reading mammograms

    Science.gov (United States)

    Tavakoli Taba, Seyedamir; Hossain, Liaquat; Heard, Robert; Brennan, Patrick; Lee, Warwick; Lewis, Sarah

    2017-03-01

    Rationale and objectives: Observer performance has been widely studied by examining the characteristics of individuals. Applying a systems perspective to understanding the system's output, however, requires a study of the interactions between observers. This research describes a mixed-methods approach that applies social network analysis (SNA), together with the more traditional approach of examining personal/individual characteristics, to understanding observer performance in mammography. Materials and Methods: Using social network theories and measures to understand observer performance, we designed a social networks survey instrument for collecting personal and network data about observers involved in mammography performance studies. We present the results of a study by our group in which 31 Australian breast radiologists reviewed 60 mammographic cases (comprising 20 abnormal and 40 normal cases) and then completed an online questionnaire about their social networks and personal characteristics. A jackknife free-response operating characteristic (JAFROC) method was used to measure the performance of the radiologists. JAFROC was tested against various personal and network measures to verify the theoretical model. Results: The results from this study suggest a strong association between social networks and observer performance for Australian radiologists. Network factors accounted for 48% of the variance in observer performance, in comparison to 15.5% for personal characteristics for this study group. Conclusion: This study suggests a strong new direction for research into improving observer performance. Future studies of observer performance should consider the influence of social networks as part of their research paradigm, with equal or greater vigour than the traditional constructs of personal characteristics.

  15. Nonlinear Modeling of the PEMFC Based On NNARX Approach

    OpenAIRE

    Shan-Jen Cheng; Te-Jen Chang; Kuang-Hsiung Tan; Shou-Ling Kuo

    2015-01-01

    The Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system. Traditional linear modeling approaches struggle to estimate the structure of the PEMFC system correctly. For this reason, this paper presents a nonlinear model of the PEMFC using the Neural Network Auto-regressive model with eXogenous inputs (NNARX) approach. A multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accuracy...
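
    The lagged input/output regressor structure underlying NNARX can be sketched with a linear ARX stand-in fitted by least squares; NNARX replaces this linear map with an MLP. The first-order dynamics below are hypothetical, not a PEMFC model.

```python
# ARX sketch of the NNARX regressor structure: one-step predictor
# y(k) = f(y(k-1), u(k-1)), here with a linear f fitted by 2-parameter
# least squares (NNARX would use an MLP for f instead).
def fit_arx(u, y):
    """Fit y(k) = a*y(k-1) + b*u(k-1) via the 2x2 normal equations."""
    rows = [(y[k - 1], u[k - 1], y[k]) for k in range(1, len(y))]
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    t1 = sum(r[0] * r[2] for r in rows)
    t2 = sum(r[1] * r[2] for r in rows)
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det

# Synthetic data from the hypothetical system y(k) = 0.8*y(k-1) + 0.2*u(k-1)
u = [1.0 if k % 7 < 3 else 0.0 for k in range(50)]
y = [0.0]
for k in range(1, 50):
    y.append(0.8 * y[k - 1] + 0.2 * u[k - 1])
a, b = fit_arx(u, y)
print(round(a, 3), round(b, 3))  # → 0.8 0.2
```

    For a genuinely nonlinear plant such as a fuel cell, the same lagged regressors feed a neural network rather than a linear least-squares fit.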

  16. HEDR modeling approach: Revision 1

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1994-05-01

    This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies

  17. Eutrophication Modeling Using Variable Chlorophyll Approach

    International Nuclear Information System (INIS)

    Abdolabadi, H.; Sarang, A.; Ardestani, M.; Mahjoobi, E.

    2016-01-01

    In this study, eutrophication was investigated in Lake Ontario to identify the interactions among effective drivers. The complexity of the phenomenon was modeled using a system dynamics approach based on a consideration of constant and variable stoichiometric ratios. The system dynamics approach is a powerful tool for developing object-oriented models to simulate complex phenomena that involve feedback effects. Utilizing stoichiometric ratios is a method for converting the concentrations of state variables. During the physical segmentation of the model, Lake Ontario was divided into two layers, i.e., the epilimnion and hypolimnion, and differential equations were developed for each layer. The model structure included 16 state variables related to phytoplankton, herbivorous zooplankton, carnivorous zooplankton, ammonium, nitrate, dissolved phosphorus, and particulate and dissolved carbon in the epilimnion and hypolimnion over a time horizon of one year. The results of several verification tests, a Nash-Sutcliffe coefficient close to 1 (0.98), a data correlation coefficient of 0.98, and low standard errors (0.96), indicated that the model performs well. The results revealed that there were significant differences in the concentrations of the state variables between the constant and variable stoichiometry simulations. Consequently, the consideration of variable stoichiometric ratios in algae and nutrient concentration simulations may be applied in future modeling studies to enhance the accuracy of the results and reduce the likelihood of inefficient control policies.
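
    The Nash-Sutcliffe coefficient cited above compares the model error with the variance of the observations: NSE = 1 − Σ(obs − sim)² / Σ(obs − mean(obs))², so 1 is a perfect fit and 0 means the model is no better than the observed mean. A minimal sketch with hypothetical data:

```python
# Nash-Sutcliffe model efficiency: 1 - SSE / variance-of-observations.
def nash_sutcliffe(obs, sim):
    mean = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean) ** 2 for o in obs)
    return 1.0 - num / den

obs = [2.0, 3.5, 5.0, 4.0, 3.0]   # hypothetical observations
sim = [2.1, 3.4, 4.8, 4.1, 3.1]   # hypothetical simulation
print(round(nash_sutcliffe(obs, sim), 3))  # → 0.984
```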

  18. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. 
The focus in the early design process shifts from the development and

  19. Applying a realistic evaluation model to occupational safety interventions

    DEFF Research Database (Denmark)

    Pedersen, Louise Møller

    2018-01-01

    Background: Recent literature characterizes occupational safety interventions as complex social activities, applied in complex and dynamic social systems. Hence, the actual outcomes of an intervention will vary, depending on the intervention, the implementation process, context, and personal characteristics... and qualitative methods. This revised model has, however, not been applied in a real-life context. Method: The model is applied in a controlled, four-component, integrated behaviour-based and safety culture-based safety intervention study (2008-2010) in a medium-sized wood manufacturing company. The interventions involve the company’s safety committee, safety manager, safety groups and 130 workers. Results: The model provides a framework for more valid evidence of what works within injury prevention. Affective commitment and role behaviour among key actors are identified as crucial for the implementation...

  20. Challenges and Limitations of Applying an Emotion-driven Design Approach on Elderly Users

    DEFF Research Database (Denmark)

    Andersen, Casper L.; Gudmundsson, Hjalte P.; Achiche, Sofiane

    2011-01-01

    a competitive advantage for companies. In this paper, challenges of applying an emotion-driven design approach with elderly people, in order to identify their user needs towards walking frames, are discussed. The discussion is based on the experiences and results obtained from the case study... related to the participants’ age and cognitive abilities. The challenges encountered are discussed, and guidelines on what should be taken into account to facilitate an emotion-driven design approach for elderly people are proposed...

  1. Agrochemical fate models applied in agricultural areas from Colombia

    Science.gov (United States)

    Garcia-Santos, Glenda; Yang, Jing; Andreoli, Romano; Binder, Claudia

    2010-05-01

    The misuse of pesticides in mainly agricultural catchments can lead to severe problems for humans and the environment. Especially in developing countries, where overuse of agrochemicals is common and water quality monitoring at local and regional levels is incipient or lacking, models are needed for decision making and hot-spot identification. However, the complexity of the water cycle contrasts strongly with the scarce data availability, limiting the number of analyses, techniques, and models available to researchers. There is therefore a strong need for model simplification that keeps model complexity appropriate while still representing the processes. We have developed a new model, called Westpa-Pest, to improve water quality management of an agricultural catchment located in the highlands of Colombia. Westpa-Pest is based on the fully distributed hydrologic model Wetspa and a pesticide fate module. We applied a multi-criteria analysis for model selection under the conditions and data availability found in the region and compared the outcome with the newly developed Westpa-Pest model. Furthermore, both models were empirically calibrated and validated. The following questions were addressed: i) what are the strengths and weaknesses of the models?, ii) which are the most sensitive parameters of each model?, iii) what happens with uncertainties in soil parameters?, and iv) how sensitive are the transfer coefficients?

  2. Applying a learning design methodology in the flipped classroom approach – empowering teachers to reflect

    DEFF Research Database (Denmark)

    Triantafyllou, Evangelia; Kofoed, Lise; Purwins, Hendrik

    2016-01-01

    One of the recent developments in teaching that heavily relies on current technology is the “flipped classroom” approach. In a flipped classroom the traditional lecture and homework sessions are inverted. Students are provided with online material in order to gain necessary knowledge before class......, while class time is devoted to clarifications and application of this knowledge. The hypothesis is that there could be deep and creative discussions when teacher and students physically meet. This paper discusses how the learning design methodology can be applied to represent, share and guide educators...... and values of different stakeholders (i.e. institutions, educators, learners, and external agents), which influence the design and success of flipped classrooms. Moreover, it looks at the teaching cycle from a flipped instruction model perspective and adjusts it to cater for the reflection loops educators...

  3. The asymmetric rotator model applied to odd-mass iridium isotopes

    International Nuclear Information System (INIS)

    Piepenbring, R.

    1980-04-01

    The method of inversion of the eigenvalue problem, previously developed for nuclei with axial symmetry, is extended to asymmetric equilibrium shapes. This new approach to the asymmetric rotator model is applied to the odd-mass iridium isotopes. A satisfactory and coherent description of the observed energy spectra is obtained, especially for the lighter isotopes.

  4. Differential Evolution algorithm applied to FSW model calibration

    Science.gov (United States)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
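The calibration loop the record describes can be sketched with a generic DE/rand/1/bin implementation. The code below is a minimal pure-Python sketch: the simple quadratic objective stands in for the FSW model's calibration misfit, and the bounds and control parameters (F, CR, population size) are illustrative, not those used in the study.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, generations=200, seed=1):
    """Minimise f over the box `bounds` with the DE/rand/1/bin scheme."""
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [f(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct members, all different from the target i
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # mutation
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)                      # clip to bounds
                else:
                    v = pop[i][j]                                # keep parent gene
                trial.append(v)
            f_trial = f(trial)
            if f_trial <= fitness[i]:                            # greedy selection
                pop[i], fitness[i] = trial, f_trial
    best = min(range(pop_size), key=fitness.__getitem__)
    return pop[best], fitness[best]
```

In a real calibration the objective would run the CFD model with the trial parameters and return the mismatch against measured weld temperatures, which is what makes the evolution strategy, mutation factor and crossover rate studied in the record worth tuning.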

  5. Comprehensive Approach to Verification and Validation of CFD Simulations Applied to Backward Facing Step-Application of CFD Uncertainty Analysis

    Science.gov (United States)

    Groves, Curtis E.; LLie, Marcel; Shallhorn, Paul A.

    2012-01-01

    There are inherent uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field, and there is no standard method for evaluating uncertainty in the CFD community. This paper describes an approach to validate the uncertainty in using CFD. The method uses state-of-the-art uncertainty analysis, applying different turbulence models, and draws conclusions on which models provide the least uncertainty and which models most accurately predict the flow over a backward facing step.

  6. Applying model predictive control to power system frequency control

    OpenAIRE

    Ersdal, AM; Imsland, L; Cecilio, IM; Fabozzi, D; Thornhill, NF

    2013-01-01

    Model predictive control (MPC) is investigated as a control method which may offer advantages over the control methods applied today in frequency control of power systems, especially in the presence of increased renewable energy penetration. The MPC includes constraints on both generation amount and generation rate of change, and it is tested on a one-area system. The proposed MPC is tested against a conventional proportional-integral (PI) cont...

  7. Applied model for the growth of the daytime mixed layer

    DEFF Research Database (Denmark)

    Batchvarova, E.; Gryning, Sven-Erik

    1991-01-01

    numerically. When the mixed layer is shallow or the atmosphere nearly neutrally stratified, the growth is controlled mainly by mechanical turbulence. When the layer is deep, its growth is controlled mainly by convective turbulence. The model is applied on a data set of the evolution of the height of the mixed...... layer in the morning hours, when both mechanical and convective turbulence contribute to the growth process. Realistic mixed-layer developments are obtained....

  8. Geometrical approach to fluid models

    International Nuclear Information System (INIS)

    Kuvshinov, B.N.; Schep, T.J.

    1997-01-01

    Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical notion of invariance is introduced in terms of Lie derivatives and a general procedure for the construction of local and integral fluid invariants is presented. The solutions of the equations for invariant fields can be written in terms of Lagrange variables. A generalization of the Hamiltonian formalism for finite-dimensional systems to continuous media is proposed. Analogously to finite-dimensional systems, Hamiltonian fluids are introduced as systems that annihilate an exact two-form. It is shown that Euler and ideal, charged fluids satisfy this local definition of a Hamiltonian structure. A new class of scalar invariants of Hamiltonian fluids is constructed that generalizes the invariants that are related with gauge transformations and with symmetries (Noether). copyright 1997 American Institute of Physics
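The invariance condition summarised above can be stated compactly. As a hedged illustration (the notation is the one usual for this formalism, not taken verbatim from the paper), a differential form α built from fluid quantities is a local invariant of a flow with velocity field u when its advective rate of change vanishes:

```latex
\frac{\partial \alpha}{\partial t} + \mathcal{L}_{u}\,\alpha = 0
```

and the Hamiltonian property mentioned, that the system annihilates an exact two-form, would read $i_{X}\,\Omega = 0$ with $\Omega = \mathrm{d}\theta$ and $X$ the vector field generating the dynamics.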

  9. Surface-bounded growth modeling applied to human mandibles

    DEFF Research Database (Denmark)

    Andresen, Per Rønsholt

    1999-01-01

    This thesis presents mathematical and computational techniques for three-dimensional growth modeling applied to human mandibles. The longitudinal shape changes make the mandible a complex bone. The teeth erupt and the condylar processes change direction, from pointing predominantly backward...... of the common features. 3. model the process that moves the matched points (growth modeling). A local shape feature called the crest line has shown itself to be structurally stable on mandibles. Registration of crest lines (from different mandibles) results in a sparse deformation field, which must be interpolated...... old mandible based on the 3-month-old scan. When using successively more recent scans as the basis for the model, the error drops to 2.0 mm for the 11-year-old scan. Thus, it seems reasonable to assume that mandibular growth is linear....

  10. Responses of mink to auditory stimuli: Prerequisites for applying the ‘cognitive bias’ approach

    DEFF Research Database (Denmark)

    Svendsen, Pernille Maj; Malmkvist, Jens; Halekoh, Ulrich

    2012-01-01

    The aim of the study was to determine and validate prerequisites for applying a cognitive (judgement) bias approach to assessing welfare in farmed mink (Neovison vison). We investigated discrimination ability and associative learning ability using auditory cues. The mink (n = 15 females) were...... farmed mink in a judgement bias approach would thus appear to be feasible. However several specific issues are to be considered in order to successfully adapt a cognitive bias approach to mink, and these are discussed....

  11. Global Environmental Change: An integrated modelling approach

    International Nuclear Information System (INIS)

    Den Elzen, M.

    1993-01-01

    Two major global environmental problems are dealt with: climate change and stratospheric ozone depletion (and their mutual interactions), briefly surveyed in Part 1. In Part 2 a brief description of the integrated modelling framework IMAGE 1.6 is given. Some specific parts of the model are described in more detail in other chapters, e.g. the carbon cycle model, the atmospheric chemistry model, the halocarbon model, and the UV-B impact model. In Part 3 an uncertainty analysis of climate change and stratospheric ozone depletion is presented (Chapter 4). Chapter 5 briefly reviews the social and economic uncertainties implied by future greenhouse gas emissions. Chapters 6 and 7 describe a model and sensitivity analysis pertaining to the scientific uncertainties and/or lacunae in the sources and sinks of methane and carbon dioxide, and their biogeochemical feedback processes. Chapter 8 presents an uncertainty and sensitivity analysis of the carbon cycle model, the halocarbon model, and the IMAGE model 1.6 as a whole. Part 4 presents the risk assessment methodology as applied to the problems of climate change and stratospheric ozone depletion more specifically. In Chapter 10, this methodology is used as a means with which to assess current ozone policy and a wide range of halocarbon policies. Chapter 11 presents and evaluates the simulated globally-averaged temperature and sea level rise (indicators) for the IPCC-1990 and 1992 scenarios, concluding with a Low Risk scenario, which would meet the climate targets. Chapter 12 discusses the impact of sea level rise on the frequency of the Dutch coastal defence system (indicator) for the IPCC-1990 scenarios. Chapter 13 presents projections of mortality rates due to stratospheric ozone depletion based on model simulations employing the UV-B chain model for a number of halocarbon policies. Chapter 14 presents an approach for allocating future emissions of CO2 among regions. (Abstract Truncated)

  12. Distribution function approach to redshift space distortions. Part V: perturbation theory applied to dark matter halos

    Energy Technology Data Exchange (ETDEWEB)

    Vlah, Zvonimir; Seljak, Uroš [Institute for Theoretical Physics, University of Zürich, Zürich (Switzerland); Okumura, Teppei [Institute for the Early Universe, Ewha Womans University, Seoul, S. Korea (Korea, Republic of); Desjacques, Vincent, E-mail: zvlah@physik.uzh.ch, E-mail: seljak@physik.uzh.ch, E-mail: teppei@ewha.ac.kr, E-mail: Vincent.Desjacques@unige.ch [Département de Physique Théorique and Center for Astroparticle Physics (CAP), Université de Genève, Genève (Switzerland)

    2013-10-01

    Numerical simulations show that redshift space distortions (RSD) introduce strong scale dependence in the power spectra of halos, with ten percent deviations relative to linear theory predictions even on relatively large scales (k < 0.1h/Mpc) and even in the absence of satellites (which induce Fingers-of-God, FoG, effects). If unmodeled, these effects prevent one from extracting cosmological information from RSD surveys. In this paper we use Eulerian perturbation theory (PT) and an Eulerian halo biasing model and apply them to the distribution function approach to RSD, in which RSD is decomposed into several correlators of density-weighted velocity moments. We model each of these correlators using PT and compare the results to simulations over a wide range of halo masses and redshifts. We find that with the introduction of a physically motivated halo biasing, and using dark matter power spectra from simulations, we can reproduce the simulation results at the percent level on scales up to k ∼ 0.15h/Mpc at z = 0, without the need for free FoG parameters in the model.

  13. The systems approach for applying artificial intelligence to space station automation (Invited Paper)

    Science.gov (United States)

    Grose, Vernon L.

    1985-12-01

    The progress of technology is marked by fragmentation -- dividing research and development into ever narrower fields of specialization. Ultimately, specialists know everything about nothing. And hope for integrating those slender slivers of specialty into a whole fades. Without an integrated, all-encompassing perspective, technology becomes applied in a lopsided and often inefficient manner. A decisionary model, developed and applied for NASA's Chief Engineer toward establishment of commercial space operations, can be adapted to the identification, evaluation, and selection of optimum application of artificial intelligence for space station automation -- restoring wholeness to a situation that is otherwise chaotic due to increasing subdivision of effort. Issues such as functional assignments for space station task, domain, and symptom modules can be resolved in a manner understood by all parties rather than just the person with assigned responsibility -- and ranked by overall significance to mission accomplishment. Ranking is based on the three basic parameters of cost, performance, and schedule. This approach has successfully integrated many diverse specialties in situations like worldwide terrorism control, coal mining safety, medical malpractice risk, grain elevator explosion prevention, offshore drilling hazards, and criminal justice resource allocation -- all of which would have otherwise been subject to "squeaky wheel" emphasis and support of decision-makers.

  14. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Science.gov (United States)

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  15. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  16. A Monte Carlo approach applied to ultrasonic non-destructive testing

    Science.gov (United States)

    Mosca, I.; Bilgili, F.; Meier, T.; Sigloch, K.

    2012-04-01

    Non-destructive testing based on ultrasound allows us to detect, characterize and size discrete flaws in geotechnical and architectural structures and materials. This information is needed to determine whether such flaws can be tolerated in future service. In typical ultrasonic experiments, only the first-arriving P-wave is interpreted, and the remainder of the recorded waveform is neglected. Our work aims at understanding surface waves, which are strong signals in the later wave train, with the ultimate goal of full waveform tomography. At present, even the structural estimation of layered media is still challenging because material properties of the samples can vary widely, and good initial models for inversion do not often exist. The aim of the present study is to combine non-destructive testing with a theoretical data analysis and hence to contribute to conservation strategies of archaeological and architectural structures. We analyze ultrasonic waveforms measured at the surface of a variety of samples, and define the behaviour of surface waves in structures of increasing complexity. The tremendous potential of ultrasonic surface waves becomes an advantage only if numerical forward modelling tools are available to describe the waveforms accurately. We compute synthetic full seismograms as well as group and phase velocities for the data. We invert them for the elastic properties of the sample via a global search of the parameter space, using the Neighbourhood Algorithm. Such a Monte Carlo approach allows us to perform a complete uncertainty and resolution analysis, but the computational cost is high and increases quickly with the number of model parameters. Therefore it is practical only for defining the seismic properties of media with a limited number of degrees of freedom, such as layered structures. We have applied this approach to both synthetic layered structures and real samples. The former contributed to benchmark the propagation of ultrasonic surface
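The global-search idea in this record — forward-model candidate parameter sets, score their misfit against the measured data, and retain the whole ensemble for uncertainty analysis — can be sketched with plain Monte Carlo sampling. The study itself uses the Neighbourhood Algorithm and full synthetic seismograms; the two-parameter dispersion model below is purely illustrative.

```python
import random

def forward(params, freqs):
    """Hypothetical forward model: phase velocity c(f) = c0 / (1 + alpha * f)."""
    c0, alpha = params
    return [c0 / (1.0 + alpha * f) for f in freqs]

def misfit(params, freqs, observed):
    """Sum-of-squares mismatch between predicted and observed velocities."""
    return sum((p - o) ** 2 for p, o in zip(forward(params, freqs), observed))

def monte_carlo_search(freqs, observed, bounds, n_samples=5000, seed=0):
    """Uniform random search over the parameter box.

    Keeps the full (params, misfit) ensemble, which is what makes the
    uncertainty and resolution analysis mentioned in the record possible."""
    random.seed(seed)
    best, best_m = None, float("inf")
    ensemble = []
    for _ in range(n_samples):
        params = tuple(random.uniform(lo, hi) for lo, hi in bounds)
        m = misfit(params, freqs, observed)
        ensemble.append((params, m))
        if m < best_m:
            best, best_m = params, m
    return best, best_m, ensemble
```

The cost caveat in the abstract is visible even here: the number of samples needed to cover the box grows quickly with the number of free parameters, which is why the approach is practical only for layered structures with few degrees of freedom.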

  17. Hydrodynamics and water quality models applied to Sepetiba Bay

    Science.gov (United States)

    Cunha, Cynara de L. da N.; Rosman, Paulo C. C.; Ferreira, Aldo Pacheco; Carlos do Nascimento Monteiro, Teófilo

    2006-10-01

    A coupled hydrodynamic and water quality model is used to simulate the pollution in Sepetiba Bay due to sewage effluent. Sepetiba Bay has a complicated geometry and bottom topography, and is located on the Brazilian coast near Rio de Janeiro. In the simulation, the dissolved oxygen (DO) concentration and biochemical oxygen demand (BOD) are used as indicators for the presence of organic matter in the body of water, and as parameters for evaluating the environmental pollution of the eastern part of Sepetiba Bay. Effluent sources in the model are taken from DO and BOD field measurements. The simulation results are consistent with field observations and demonstrate that the model has been correctly calibrated. The model is suitable for evaluating the environmental impact of sewage effluent on Sepetiba Bay from river inflows, assessing the feasibility of different treatment schemes, and developing specific monitoring activities. This approach has general applicability for environmental assessment of complicated coastal bays.

  18. Climate Change and Market Collapse: A Model Applied to Darfur

    Directory of Open Access Journals (Sweden)

    Ola Olsson

    2016-03-01

    Full Text Available A recurring argument in the global debate is that climate deterioration is likely to make social conflicts over diminishing natural resources more common in the future. The exact mechanism behind such a development has so far not been successfully characterized in the literature. In this paper, we present a general model of a community populated by farmers and herders who can either divide up land in a market economy or in autarky. The key insight from our model is that decreasing resources can make trade between the two groups collapse, which in turn makes each group’s welfare independent of that of the other. Predictions from the model are then applied to the conflict in Darfur. Our analysis suggests that three decades of drought in the area can at least partially explain the observed disintegration of markets and the subsequent rise of social tensions.

  19. Liquid-drop model applied to heavy ions irradiation

    International Nuclear Information System (INIS)

    De Cicco, Hernan; Alurralde, Martin A.; Saint-Martin, Maria L. G.; Bernaola, Omar A.

    1999-01-01

    The liquid-drop model, previously applied in the study of radiation damage in metals, is used here in an energy range not covered by molecular dynamics, in order to understand experimental data on particle tracks in an organic material (Makrofol E) which cannot be accurately described by the existing theoretical methods. The nuclear and electronic energy depositions are considered for each ion, and the evolution of the thermal explosion is evaluated. The experimental observation of particle tracks in a region previously considered 'prohibited' is thereby justified. Although the model has free parameters and some discrepancies with the experimental diametrical values exist, the agreement obtained is markedly better than that of other existing models. (author)

  20. Linear mixing model applied to AVHRR LAC data

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels, 0.58 - 0.68 microns and 0.725 - 1.1 microns, to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the western region of Brazil. Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
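The per-pixel unmixing step of the kind used here can be illustrated as follows: fractions for three endmembers are estimated by least squares under a sum-to-one constraint. This is only a sketch — the endmember spectra are made up, and the non-negativity constraint of a full Constrained Least Squares solution is omitted for brevity (it would require an iterative solver).

```python
def unmix_pixel(pixel, endmembers):
    """Estimate fractions (f1, f2, f3), with f1 + f2 + f3 = 1, such that
    pixel ≈ f1*e1 + f2*e2 + f3*e3 in the least-squares sense.

    Substituting f3 = 1 - f1 - f2 leaves two unknowns, solved in closed
    form from the 2x2 normal equations."""
    e1, e2, e3 = endmembers
    # residual model: (pixel - e3) = f1*(e1 - e3) + f2*(e2 - e3)
    a = [x - z for x, z in zip(e1, e3)]
    b = [y - z for y, z in zip(e2, e3)]
    d = [p - z for p, z in zip(pixel, e3)]
    aa = sum(x * x for x in a)
    ab = sum(x * y for x, y in zip(a, b))
    bb = sum(y * y for y in b)
    ad = sum(x * r for x, r in zip(a, d))
    bd = sum(y * r for y, r in zip(b, d))
    det = aa * bb - ab * ab  # non-zero when the endmember spectra are independent
    f1 = (ad * bb - bd * ab) / det
    f2 = (bd * aa - ad * ab) / det
    return f1, f2, 1.0 - f1 - f2
```

Applied band-by-band to every pixel of a coarse-resolution scene, this is what produces the vegetation, soil, and shade fraction images described in the record.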

  1. Remote sensing applied to numerical modelling. [water resources pollution

    Science.gov (United States)

    Sengupta, S.; Lee, S. S.; Veziroglu, T. N.; Bland, R.

    1975-01-01

    Progress and remaining difficulties in the construction of predictive mathematical models of large bodies of water as ecosystems are reviewed. Surface temperature is at present the only variable that can be measured accurately and reliably by remote sensing techniques, but satellite infrared data are of sufficient resolution for macro-scale modeling of oceans and large lakes, and airborne radiometers are useful in meso-scale analysis (of lakes, bays, and thermal plumes). Finite-element and finite-difference techniques applied to the solution of the relevant coupled time-dependent nonlinear partial differential equations are compared, and the specific problem of the Biscayne Bay and environs ecosystem is tackled in a finite-difference treatment using the rigid-lid model and a rigid-line grid system.

  2. Applying a Dynamic Resource Supply Model in a Smart Grid

    Directory of Open Access Journals (Sweden)

    Kaiyu Wan

    2014-09-01

    Full Text Available Dynamic resource supply is a complex issue to resolve in a cyber-physical system (CPS. In our previous work, a resource model called the dynamic resource supply model (DRSM has been proposed to handle resources specification, management and allocation in CPS. In this paper, we are integrating the DRSM with service-oriented architecture and applying it to a smart grid (SG, one of the most complex CPS examples. We give the detailed design of the SG for electricity charging request and electricity allocation between plug-in hybrid electric vehicles (PHEV and DRSM through the Android system. In the design, we explain a mechanism for electricity consumption with data collection and re-allocation through ZigBee network. In this design, we verify the correctness of this resource model for expected electricity allocation.

  3. Nature preservation acceptance model applied to tanker oil spill simulations

    DEFF Research Database (Denmark)

    Friis-Hansen, Peter; Ditlevsen, Ove Dalager

    2003-01-01

    is exemplified by a study of oil spills due to simulated tanker collisions in the Danish straits. It is found that the distribution of the oil spill volume per spill is well represented by an exponential distribution both in Oeresund and in Great Belt. When applied in the Poisson model, a risk profile reasonably...... acceptance criterion for the pollution of the environment. This NPWI acceptance criterion is applied to the oil spill example....... be defined in a similar way as the so-called Life Quality Index defined by Nathwani et al [Nathwani JS, Lind NC, Padey MD. Affordable safety by choice: the life quality method. Institute for Risk Research, University of Waterloo; Waterloo (Ontario, Canada):1997], and can be used to quantify the risk...

  4. Systematic care management: a comprehensive approach to catastrophic injury management applied to a catastrophic burn injury population--clinical, utilization, economic, and outcome data in support of the model.

    Science.gov (United States)

    Kucan, John; Bryant, Ernest; Dimick, Alan; Sundance, Paula; Cope, Nathan; Richards, Reginald; Anderson, Chris

    2010-01-01

    The new standard for successful burn care encompasses both patient survival and the burn patient's long-term quality of life. To provide optimal long-term recovery from catastrophic injuries, including catastrophic burns, an outcome-based model using a new technology called systematic care management (SCM) has been developed. SCM provides a highly organized system of management throughout the spectrum of care that provides access to outcome data, consistent oversight, broader access to expert providers, appropriate allocation of resources, and greater understanding of total costs. Data from a population of 209 workers' compensation catastrophic burn cases with a mean TBSA of 27.9% who were managed under the SCM model of care were analyzed. The data include treatment type, cost, return to work, and outcomes achieved. Mean duration of management to achieve all guaranteed outcomes was 20 months. Of the 209 injured workers, 152 (72.7%) achieved sufficient recovery to be released to return to work, of which 97 (46.8%) were both released and competitively employed. Assessment of 10 domains of functional independence indicated that 47.2% of injured workers required total assistance at initiation of SCM. However, at termination of SCM, 84% of those injured workers were fully independent in the 10 functional activities. When compared with other burn research outcome data, the results support the value of the SCM model of care.

  5. A variable age of onset segregation model for linkage analysis, with correction for ascertainment, applied to glioma

    DEFF Research Database (Denmark)

    Sun, Xiangqing; Vengoechea, Jaime; Elston, Robert

    2012-01-01

    We propose a 2-step model-based approach, with correction for ascertainment, to linkage analysis of a binary trait with variable age of onset and apply it to a set of multiplex pedigrees segregating for adult glioma....

  6. Inverse geothermal modelling applied to Danish sedimentary basins

    Science.gov (United States)

    Poulsen, Søren E.; Balling, Niels; Bording, Thue S.; Mathiesen, Anders; Nielsen, Søren B.

    2017-10-01

    This paper presents a numerical procedure for predicting subsurface temperatures and heat-flow distribution in 3-D using inverse calibration methodology. The procedure is based on a modified version of the groundwater code MODFLOW by taking advantage of the mathematical similarity between confined groundwater flow (Darcy's law) and heat conduction (Fourier's law). Thermal conductivity, heat production and exponential porosity-depth relations are specified separately for the individual geological units of the model domain. The steady-state temperature model includes a model-based transient correction for the long-term palaeoclimatic thermal disturbance of the subsurface temperature regime. Variable model parameters are estimated by inversion of measured borehole temperatures with uncertainties reflecting their quality. The procedure facilitates uncertainty estimation for temperature predictions. The modelling procedure is applied to Danish onshore areas containing deep sedimentary basins. A 3-D voxel-based model, with 14 lithological units from surface to 5000 m depth, was built from digital geological maps derived from combined analyses of reflection seismic lines and borehole information. Matrix thermal conductivity of model lithologies was estimated by inversion of all available deep borehole temperature data and applied together with prescribed background heat flow to derive the 3-D subsurface temperature distribution. Modelled temperatures are found to agree very well with observations. The numerical model was utilized for predicting and contouring temperatures at 2000 and 3000 m depths and for two main geothermal reservoir units, the Gassum (Lower Jurassic-Upper Triassic) and Bunter/Skagerrak (Triassic) reservoirs, both currently utilized for geothermal energy production. Temperature gradients to depths of 2000-3000 m are generally around 25-30 °C km-1, locally up to about 35 °C km-1. Large regions have geothermal reservoirs with characteristic temperatures
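The conduction relation the procedure exploits — Fourier's law, mathematically analogous to Darcy's law for confined groundwater flow — reduces in 1-D steady state to a layer-by-layer temperature increment. A minimal sketch (the layer thicknesses, conductivities, and heat flow below are illustrative round numbers, not values from the Danish model):

```python
def steady_temperature(surface_T, heat_flow, layers):
    """1-D steady-state conductive temperature profile through horizontal layers.

    surface_T : temperature at the surface (deg C)
    heat_flow : constant vertical heat flow (W/m^2)
    layers    : list of (thickness_m, conductivity_W_per_mK), top down

    Fourier's law q = -k dT/dz with constant q gives, per layer,
    dT = q * thickness / k (the thermal analogue of head loss in Darcy flow).
    Returns (depth_m, temperature) pairs at each layer boundary.
    """
    depth, T = 0.0, surface_T
    profile = [(depth, T)]
    for thickness, k in layers:
        depth += thickness
        T += heat_flow * thickness / k  # temperature step across the layer
        profile.append((depth, T))
    return profile
```

With, say, 8 °C at the surface and 65 mW/m² of heat flow through two 1 km layers of conductivity 2.0 and 2.6 W/(m·K), the sketch yields gradients of roughly 25-33 °C per km, in line with the values quoted in the record; the inverse step in the paper amounts to adjusting the conductivities (and heat flow) until such modelled temperatures match the borehole observations.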

  7. A novel approach to enhance food safety: industry-academia-government partnership for applied research.

    Science.gov (United States)

    Osterholm, Michael T; Ostrowsky, Julie; Farrar, Jeff A; Gravani, Robert B; Tauxe, Robert V; Buchanan, Robert L; Hedberg, Craig W

    2009-07-01

    An independent collaborative approach was developed for stimulating research on high-priority food safety issues. The Fresh Express Produce Safety Research Initiative was launched in 2007 with $2 million in unrestricted funds from industry and independent direction and oversight from a scientific advisory panel consisting of nationally recognized food safety experts from academia and government agencies. The program had two main objectives: (i) to fund rigorous, innovative, and multidisciplinary research addressing the safety of lettuce, spinach, and other leafy greens and (ii) to share research findings as widely and quickly as possible to support the development of advanced safeguards within the fresh-cut produce industry. Sixty-five proposals were submitted in response to a publicly announced request for proposals and were competitively evaluated. Nine research projects were funded to examine underlying factors involved in Escherichia coli O157:H7 contamination of lettuce, spinach, and other leafy greens and potential strategies for preventing the spread of foodborne pathogens. Results of the studies, published in the Journal of Food Protection, help to identify promising directions for future research into potential sources and entry points of contamination and specific factors associated with harvesting, processing, transporting, and storing produce that allow contaminants to persist and proliferate. The program provides a model for leveraging the strengths of industry, academia, and government to address high-priority issues quickly and directly through applied research. This model can be productively extended to other pathogens and other leafy and nonleafy produce.

  8. Applying the reasoned action approach to understanding health protection and health risk behaviors.

    Science.gov (United States)

    Conner, Mark; McEachan, Rosemary; Lawton, Rebecca; Gardner, Peter

    2017-12-01

    The Reasoned Action Approach (RAA) developed out of the Theory of Reasoned Action and Theory of Planned Behavior but has not yet been widely applied to understanding health behaviors. The present research employed the RAA in a prospective design to test predictions of intention and action for groups of protection and risk behaviors separately in the same sample. To test the RAA for health protection and risk behaviors. Measures of RAA components plus past behavior were taken in relation to eight protection and six risk behaviors in 385 adults. Self-reported behavior was assessed one month later. Multi-level modelling showed instrumental attitude, experiential attitude, descriptive norms, capacity and past behavior were significant positive predictors of intentions to engage in protection or risk behaviors. Injunctive norms were only significant predictors of intention in protection behaviors. Autonomy was a significant positive predictor of intentions in protection behaviors and a negative predictor in risk behaviors (the latter relationship became non-significant when controlling for past behavior). Multi-level modelling showed that intention, capacity, and past behavior were significant positive predictors of action for both protection and risk behaviors. Experiential attitude and descriptive norm were additional significant positive predictors of risk behaviors. The RAA has utility in predicting both protection and risk health behaviors although the power of predictors may vary across these types of health behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Active lubrication applied to radial gas journal bearings. Part 2: Modelling improvement and experimental validation

    DEFF Research Database (Denmark)

    Pierart, Fabián G.; Santos, Ilmar F.

    2016-01-01

    Actively-controlled lubrication techniques are applied to radial gas bearings aiming at enhancing one of their most critical drawbacks, their lack of damping. A model-based control design approach is presented using simple feedback control laws, i.e. proportional controllers. The design approach...... by finite element method and the global model is used as control design tool. Active lubrication allows for significant increase in damping factor of the rotor-bearing system. Very good agreement between theory and experiment is obtained, supporting the multi-physic design tool developed....

  10. Applied economic model development algorithm for electronics company

    Directory of Open Access Journals (Sweden)

    Mikhailov I.

    2017-01-01

    Full Text Available The purpose of this paper is to report on experience gained in creating methods and algorithms that simplify the development of applied decision support systems. It describes an algorithm that is the result of two years of research and more than one year of practical verification. In the business of testing electronic components, the moment of contract conclusion is the point at which the greatest managerial mistakes can be made: at this stage it is difficult to obtain a realistic estimate of the time limit and wage fund for the future work. Creating an estimating model is one way to solve this problem. The article presents an algorithm for creating such models, based on the example of developing an analytical model for estimating the amount of work. The paper lists the algorithm's stages and explains their meaning in terms of the participants' goals. Implementing the algorithm has made it possible to halve the development time of these models and to meet management's requirements. The resulting models have produced a significant economic effect. A new set of tasks was identified for further theoretical study.

  11. Enhanced PID vs model predictive control applied to BLDC motor

    Science.gov (United States)

    Gaya, M. S.; Muhammad, Auwal; Aliyu Abdulkadir, Rabiu; Salim, S. N. S.; Madugu, I. S.; Tijjani, Aminu; Aminu Yusuf, Lukman; Dauda Umar, Ibrahim; Khairi, M. T. M.

    2018-01-01

    BrushLess Direct Current (BLDC) motors are multivariable, highly complex nonlinear systems. Variation of internal parameter values with the environment or the reference signal increases the difficulty of controlling the BLDC effectively. Advanced control strategies (like model predictive control) often have to be integrated to satisfy the control requirements, while enhancing or properly tuning a conventional algorithm can also achieve the desired performance. This paper presents a performance comparison of an enhanced (PSO-tuned) PID and Model Predictive Control (MPC) applied to a brushless direct current motor. The simulation results demonstrated that the PSO-PID tracks the reference trajectory slightly better than the conventional PID and MPC. The proposed scheme could be a useful algorithm for the system.
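
    A minimal sketch of the kind of feedback loop being compared: a discrete PID controller driving a generic first-order plant (a deliberate simplification of the actual BLDC dynamics; the plant parameters and controller gains below are hand-picked for illustration, not PSO-tuned):

```python
# Plant: dx/dt = -a*x + b*u, stepped with explicit Euler
dt, a, b = 0.01, 1.0, 1.0
kp, ki, kd = 2.0, 5.0, 0.05        # hypothetical PID gains
setpoint, x, integ, prev_e = 1.0, 0.0, 0.0, 0.0

for _ in range(2000):               # 20 s of simulated time
    e = setpoint - x
    integ += e * dt                 # integral term removes steady-state error
    deriv = (e - prev_e) / dt
    u = kp * e + ki * integ + kd * deriv
    prev_e = e
    x += dt * (-a * x + b * u)      # advance the plant one step

print(round(x, 3))
```

With integral action the output settles on the setpoint; an MPC comparison would replace the three-gain law with a receding-horizon optimisation over the same plant model.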

  12. Elastic models: a comparative study applied to retinal images.

    Science.gov (United States)

    Karali, E; Lambropoulou, S; Koutsouris, D

    2011-01-01

    In this work various parametric elastic model methods are compared, namely the classical snake, the gradient vector field snake (GVF snake) and the topology-adaptive snake (t-snake), as well as the self-affine mapping system as an alternative to elastic models. We also give a brief overview of the methods used. The self-affine mapping system is implemented using an adaptive scheme with minimum distance as the optimization criterion, which is more suitable for weak edge detection. All methods are applied to glaucomatous retinal images with the purpose of segmenting the optic disc. The methods are compared in terms of segmentation accuracy and speed, derived respectively from cross-correlation coefficients between real and algorithm-extracted contours and from segmentation time. The self-affine mapping system presents adequate segmentation time and segmentation accuracy, and significant independence from initialization.
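
    The accuracy metric above — a correlation coefficient between real and extracted contours — can be sketched on radial contour signatures (distance from the disc centre sampled at equal angles). The two signatures and the use of a plain Pearson coefficient are illustrative assumptions; the paper does not spell out the exact correlation variant:

```python
import math

# Hypothetical radial signatures for a ground-truth contour and an
# algorithm-extracted contour (same angular sampling).
true_r  = [10.0, 10.4, 11.0, 11.3, 11.0, 10.5, 10.1, 9.8]
found_r = [10.1, 10.5, 10.9, 11.4, 11.1, 10.4, 10.0, 9.9]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(true_r, found_r), 3))   # near 1.0 for close contours
```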

  13. Fuzzy uncertainty modeling applied to AP1000 nuclear power plant LOCA

    International Nuclear Information System (INIS)

    Ferreira Guimaraes, Antonio Cesar; Franklin Lapa, Celso Marcelo; Lamego Simoes Filho, Francisco Fernando; Cabral, Denise Cunha

    2011-01-01

    Research highlights: → This article presents an uncertainty modelling study using a fuzzy approach. → The AP1000 Westinghouse NPP, which is equipped with passive safety systems, was used. → The use of advanced passive safety systems in NPPs has limited operational experience. → Failure rates and basic event probabilities are used in the fault tree analysis. → The fuzzy uncertainty approach was applied to the reliability of the AP1000 large LOCA. - Abstract: This article presents an uncertainty modeling study using a fuzzy approach applied to the Westinghouse advanced nuclear reactor. The AP1000 Westinghouse Nuclear Power Plant (NPP) is equipped with passive safety systems, based on thermo-physical phenomena, that require no operator action soon after an incident has been detected. The use of advanced passive safety systems in NPPs has limited operational experience. As in any reliability study, reporting of statistically non-significant events introduces a significant level of uncertainty in the failure rates and basic event probabilities used in the fault tree analysis (FTA). In order to model this uncertainty, a fuzzy approach was employed in the reliability analysis of the AP1000 large break Loss of Coolant Accident (LOCA). The final results reveal that the proposed approach may be successfully applied to the modeling of uncertainties in safety studies.
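
    The fuzzy treatment of basic-event probabilities can be sketched with triangular fuzzy numbers and alpha-cut interval arithmetic, here propagated through a single OR gate. All numbers are invented for illustration, not AP1000 data:

```python
# Triangular fuzzy failure probabilities (min, mode, max) for two basic events
p1 = (1e-4, 2e-4, 4e-4)
p2 = (5e-5, 1e-4, 3e-4)

def alpha_cut(tri, a):
    """Interval of values with membership >= a (0 = full support, 1 = mode)."""
    lo, m, hi = tri
    return (lo + a * (m - lo), hi - a * (hi - m))

def or_gate(i1, i2):
    # P(top) = 1 - (1-p1)(1-p2); monotone in both inputs, so interval
    # endpoints map directly to endpoints of the result.
    lo = 1 - (1 - i1[0]) * (1 - i2[0])
    hi = 1 - (1 - i1[1]) * (1 - i2[1])
    return lo, hi

for a in (0.0, 0.5, 1.0):
    lo, hi = or_gate(alpha_cut(p1, a), alpha_cut(p2, a))
    print(a, lo, hi)
```

At alpha = 1 the interval collapses to the crisp point estimate; lower alpha levels widen it, which is exactly how sparse operating experience is expressed as uncertainty bands on the top-event probability.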

  14. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
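
    For a finite set of alternative models, the Bayesian update over models reduces to ordinary discrete parameter updating, as the abstract notes. A sketch with invented candidate failure probabilities and invented evidence:

```python
from math import comb

# Three candidate models, each positing a different failure probability
models = [0.1, 0.3, 0.5]
prior = [1 / 3] * 3                 # uniform prior over the models

# Hypothetical evidence: 6 failures observed in 20 demands
n, k = 20, 6
lik = [comb(n, k) * p**k * (1 - p)**(n - k) for p in models]

# Bayes: posterior model probability ∝ prior × likelihood
evidence = sum(pr * l for pr, l in zip(prior, lik))
posterior = [pr * l / evidence for pr, l in zip(prior, lik)]
print([round(p, 3) for p in posterior])
```

The model whose prediction matches the observed frequency (6/20 = 0.3) dominates the posterior, illustrating how model uncertainty behaves exactly like uncertainty over a discrete parameter.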

  15. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches; Modelisation de la rupture sismique, prediction du mouvement fort, et evaluation de l'alea sismique: approches fondamentale et appliquee

    Energy Technology Data Exchange (ETDEWEB)

    Berge-Thierry, C

    2007-05-15

    The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my Ph.D. thesis in 1997. This synthesis covers my two years as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented within the framework of seismic risk, and particularly of seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures with the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site over a given time period. Whether for the regulatory context or for a specific structure (conventional or high-risk construction), seismic hazard assessment needs: to identify and locate the seismic sources (zones or faults), to characterize their activity, and to evaluate the seismic motion to which the structure has to resist (including site effects). I specialized in numerical strong-motion prediction using high-frequency seismic source modelling, and being part of the IRSN allowed me to work rapidly on the different tasks of seismic hazard assessment. Thanks to expert appraisal practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have also been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of input ground motion when designing or verifying the stability of structures. (author)

  17. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny

    OpenAIRE

    Maddock, Simon T.; Briscoe, Andrew G.; Wilkinson, Mark; Waeschenbach, Andrea; San Mauro, Diego; Day, Julia J.; Littlewood, D. Tim J.; Foster, Peter G.; Nussbaum, Ronald A.; Gower, David J.

    2016-01-01

    Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a ‘traditional’ Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing pla...

  18. Multimodal Approach for Automatic Emotion Recognition Applied to the Tension Levels Study in TV Newscasts

    Directory of Open Access Journals (Sweden)

    Moisés Henrique Ramos Pereira

    2015-12-01

    Full Text Available This article addresses a multimodal approach to automatic emotion recognition in participants of TV newscasts (presenters, reporters, commentators and others) that is able to assist the study of tension levels in narratives of events in this television genre. The methodology applies state-of-the-art computational methods to process and analyze facial expressions as well as speech signals. The proposed approach contributes to the semiodiscoursive study of TV newscasts and their enunciative praxis, assisting, for example, the identification of the communication strategy of these programs. To evaluate its effectiveness, the approach was applied to a video of a report broadcast on a Brazilian TV newscast of great popularity in the state of Minas Gerais. The experimental results are promising: the emotions recognized in the facial expressions of the tele-journalists are in accordance with the distribution of audiovisual indicators extracted over the newscast, demonstrating the potential of the approach to support TV journalistic discourse analysis.

  19. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, A.P., E-mail: andrew.kuprat@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Kabilan, S., E-mail: senthil.kabilan@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Carson, J.P., E-mail: james.carson@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Corley, R.A., E-mail: rick.corley@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Einstein, D.R., E-mail: daniel.einstein@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States)

    2013-07-01

    pressure applied to the multiple sets of ODEs. In both the simplified geometry and in the imaging-based geometry, the performance of the method was comparable to that of monolithic schemes, in most cases requiring only a single CFD evaluation per time step. Thus, this new accelerator allows us to begin combining pulmonary CFD models with lower-dimensional models of pulmonary mechanics with little computational overhead. Moreover, because the CFD and lower-dimensional models are totally separate, this framework affords great flexibility in terms of the type and breadth of the adopted lower-dimensional model, allowing the biomedical researcher to appropriately focus on model design. Research funded by the National Heart and Blood Institute Award 1RO1HL073598.

  20. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Science.gov (United States)

    Kuprat, A. P.; Kabilan, S.; Carson, J. P.; Corley, R. A.; Einstein, D. R.

    2013-07-01

    pressure applied to the multiple sets of ODEs. In both the simplified geometry and in the imaging-based geometry, the performance of the method was comparable to that of monolithic schemes, in most cases requiring only a single CFD evaluation per time step. Thus, this new accelerator allows us to begin combining pulmonary CFD models with lower-dimensional models of pulmonary mechanics with little computational overhead. Moreover, because the CFD and lower-dimensional models are totally separate, this framework affords great flexibility in terms of the type and breadth of the adopted lower-dimensional model, allowing the biomedical researcher to appropriately focus on model design. Research funded by the National Heart and Blood Institute Award 1RO1HL073598.

  1. A bidirectional coupling procedure applied to multiscale respiratory modeling

    International Nuclear Information System (INIS)

    Kuprat, A.P.; Kabilan, S.; Carson, J.P.; Corley, R.A.; Einstein, D.R.

    2013-01-01

    pressure applied to the multiple sets of ODEs. In both the simplified geometry and in the imaging-based geometry, the performance of the method was comparable to that of monolithic schemes, in most cases requiring only a single CFD evaluation per time step. Thus, this new accelerator allows us to begin combining pulmonary CFD models with lower-dimensional models of pulmonary mechanics with little computational overhead. Moreover, because the CFD and lower-dimensional models are totally separate, this framework affords great flexibility in terms of the type and breadth of the adopted lower-dimensional model, allowing the biomedical researcher to appropriately focus on model design. Research funded by the National Heart and Blood Institute Award 1RO1HL073598

  2. A nationwide modelling approach to decommissioning - 16182

    International Nuclear Information System (INIS)

    Kelly, Bernard; Lowe, Andy; Mort, Paul

    2009-01-01

    In this paper we describe a proposed UK national approach to modelling decommissioning. For the first time, we shall have an insight into optimizing the safety and efficiency of a national decommissioning strategy. To do this we use the General Case Integrated Waste Algorithm (GIA), a universal model of decommissioning nuclear plant, power plant, waste arisings and the associated knowledge capture. The model scales from individual items of plant through cells, groups of cells, buildings and whole sites, up to a national scale. We describe the national vision for GIA, which can be broken down into three levels: 1) the capture of the chronological order of activities that an experienced decommissioner would use to decommission any nuclear facility anywhere in the world (Level 1 of GIA); 2) the construction of an Operational Research (OR) model based on Level 1 to allow 'what if' scenarios to be tested quickly (Level 2); 3) the construction of a state-of-the-art knowledge capture capability that allows future generations to learn from our current decommissioning experience (Level 3). We show the progress to date in developing GIA at Levels 1 and 2. As part of Level 1, GIA has assisted in the development of an IMechE professional decommissioning qualification. Furthermore, we describe GIA as the basis of a UK-owned database of decommissioning norms for such things as costs, productivity and durations. From Level 2, we report on a pilot study that has successfully tested the basic principles of the OR numerical simulation of the algorithm. We then highlight the advantages of applying the OR modelling approach nationally. In essence, a series of 'what if...' scenarios can be tested that will improve the safety and efficiency of decommissioning. (authors)

  3. Approaches and models of intercultural education

    Directory of Open Access Journals (Sweden)

    Iván Manuel Sánchez Fontalvo

    2013-10-01

    Full Text Available Building an intercultural society requires awareness in all social spheres, among which education plays a leading role. Its role is transcendental, since it must create educational spaces that form people with the virtues and capacities to live together in multicultural and socially diverse (and sometimes unequal) contexts in an increasingly globalized and interconnected world, and foster feelings of shared civic belonging to neighborhood, city, region and country. This enables concern and critical judgement towards marginalization, poverty, misery and the inequitable distribution of wealth, the causes of structural violence, together with a willingness to work for the welfare and transformation of these scenarios. On these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they are applied.

  4. System approach to modeling of industrial technologies

    Science.gov (United States)

    Toropov, V. S.; Toropov, E. S.

    2018-03-01

    The authors present a system of methods for modeling and improving industrial technologies, consisting of an information part and a software part. The information part is structured information about industrial technologies, organised according to a template with several essential categories used to improve the technological process and eliminate weaknesses in the process chain. The base category is the physical effect that takes place when the technical process proceeds. The software part of the system can apply various methods of creative search to the content stored in the information part. These methods pay particular attention to energy transformations in the technological process. Applying the system will allow a systematic approach to improving technologies and obtaining new technical solutions.

  5. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

    The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration...... and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim(1998). Structural identifiability analysis showed that no local structural model problems were occurring......) identifiability problems. By using the presented approach it is possible to detect potential identifiability problems and avoid pointless calibration (and experimental!) effort....

  6. An effective model for ergonomic optimization applied to a new automotive assembly line

    Energy Technology Data Exchange (ETDEWEB)

    Duraccio, Vincenzo [University Niccolò Cusano, Rome Via Don Gnocchi,00166, Roma Italy (Italy); Elia, Valerio [Dept. of Innovation Engineering - University of Salento Via Monteroni, 73100, Lecce (Italy); Forcina, Antonio [University Parthenope, Dep. of Engineering Centro Direzionale - Isola C4 80143 - Naples - Italy (Italy)

    2016-06-08

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides ergonomic optimization by analyzing the ergonomic relations of manual work under correct conditions. The model includes a schematic and systematic method for analyzing operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  7. An effective model for ergonomic optimization applied to a new automotive assembly line

    International Nuclear Information System (INIS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-01-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides ergonomic optimization by analyzing the ergonomic relations of manual work under correct conditions. The model includes a schematic and systematic method for analyzing operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  8. An effective model for ergonomic optimization applied to a new automotive assembly line

    Science.gov (United States)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides ergonomic optimization by analyzing the ergonomic relations of manual work under correct conditions. The model includes a schematic and systematic method for analyzing operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  9. System Behavior Models: A Survey of Approaches

    Science.gov (United States)

    2016-06-01

    A spiral model (Figure 1) was chosen for researching and structuring this thesis. This approach allowed multiple iterations through the source material, applying and refining the surveyed approaches through iteration. The research is limited to a literature review.

  10. Stakeholder Theory As an Ethical Approach to Effective Management: applying the theory to multiple contexts

    Directory of Open Access Journals (Sweden)

    Jeffrey S. Harrison

    2015-09-01

    Full Text Available Objective – This article provides a brief overview of stakeholder theory, clears up some widely held misconceptions, explains the importance of examining stakeholder theory from a variety of international perspectives and how this type of research will advance management theory, and introduces the other articles in the special issue. Design/methodology/approach – Some of the foundational ideas of stakeholder theory are discussed, leading to arguments about the importance of the theory to management research, especially in an international context. Findings – Stakeholder theory is found to be a particularly useful perspective for addressing some of the important issues in business from an international perspective. It offers an opportunity to reinterpret a variety of concepts, models and phenomena across many different disciplines. Practical implications – The concepts explored in this article may be applied in many contexts, domestically and internationally, and across business disciplines as diverse as economics, public administration, finance, philosophy, marketing, law, and management. Originality/value – Research on stakeholder theory in an international context is both lacking and sorely needed. This article and the others in this special issue aim to help fill that void.

  11. Linear model applied to the evaluation of pharmaceutical stability data

    Directory of Open Access Journals (Sweden)

    Renato Cesar Souza

    2013-09-01

    Full Text Available The expiry date on the packaging of a product gives the consumer confidence that the product will retain its identity, content, quality and purity throughout its period of validity. In the pharmaceutical industry, the definition of this term is based on stability data obtained during product registration. Accordingly, this work applies linear regression, following guideline ICH Q1E (2003), to evaluate aspects of a product undergoing registration in Brazil. The evaluation was carried out with the development center of a multinational company in Brazil, using samples of three different batches containing two active pharmaceutical ingredients in two different packages. Based on the preliminary results, it was possible to observe the different degradation tendencies of the product in the two packages and the relationship between the variables studied, adding knowledge so that new linear models can be applied and developed for other products.
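
    The ICH Q1E-style evaluation — fitting a linear degradation model and reading off the shelf life where a one-sided 95% confidence bound on the mean crosses the specification limit — can be sketched as follows. The assay values, the 95%-of-label-claim spec and the hardcoded Student-t critical value are illustrative assumptions, not the paper's data:

```python
import math

# Hypothetical assay results (% of label claim) at stability time points (months)
t = [0, 3, 6, 9, 12, 18, 24]
y = [100.1, 99.5, 98.9, 98.1, 97.6, 96.4, 95.2]

n = len(t)
tbar, ybar = sum(t) / n, sum(y) / n
sxx = sum((ti - tbar) ** 2 for ti in t)
slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sxx
intercept = ybar - slope * tbar

resid = [yi - (intercept + slope * ti) for ti, yi in zip(t, y)]
s = math.sqrt(sum(r * r for r in resid) / (n - 2))
t_crit = 2.015  # one-sided 95% Student-t critical value for n - 2 = 5 df

def lower_bound(tp):
    """Lower 95% confidence bound on the mean assay at time tp."""
    se = s * math.sqrt(1 / n + (tp - tbar) ** 2 / sxx)
    return intercept + slope * tp - t_crit * se

# Shelf life: last month at which the lower bound still meets the 95% spec
shelf_life = max(m for m in range(0, 241) if lower_bound(m) >= 95.0)
print(shelf_life)
```

Running the same fit per batch and per package is what reveals the differing degradation tendencies the abstract mentions.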

  12. Applying threshold models to donations to a green electricity fund

    International Nuclear Information System (INIS)

    Ito, Nobuyuki; Takeuchi, Kenji; Tsuge, Takahiro; Kishimoto, Atsuo

    2010-01-01

    This study applies a previously proposed threshold model to analyze the diffusion process of donating behavior for renewable energy. We first use a stated preference survey to estimate the determinants of the decision to support the donation scheme under various predicted participation rates. Using the estimated coefficients, we simulate how herd behavior spreads and how the participation rate reaches equilibrium. The equilibrium participation rate is estimated at 37.88% when the suggested donation is 500 yen, and 17.76% when the suggested amount is 1000 yen. The influence of environmentalism and altruism is also examined, and we find that these motivations increase the participation rate by 31.51% on average.
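
    The herd-behavior simulation can be sketched as a classic threshold-model iteration to a fixed point: each agent participates once the anticipated participation rate meets their personal threshold, and the rate is updated until it stops changing. The threshold values below are invented; the paper derives the corresponding quantities from survey estimates:

```python
# Individual participation thresholds: agent i donates if the anticipated
# participation rate is at least thresholds[i] (0.0 = unconditional donor).
thresholds = [0.0, 0.0, 0.05, 0.1, 0.1, 0.2, 0.25, 0.3, 0.3, 0.35,
              0.4, 0.5, 0.6, 0.7, 0.8, 0.85, 0.9, 0.95, 1.0, 1.0]

def participation(rate):
    """Fraction of agents whose threshold is met at the given rate."""
    return sum(th <= rate for th in thresholds) / len(thresholds)

rate = 0.0
for _ in range(100):                # iterate the map until a fixed point
    new_rate = participation(rate)
    if new_rate == rate:
        break
    rate = new_rate
print(rate)                         # prints 0.65 for these thresholds
```

The equilibrium depends sharply on the threshold distribution, which is how a larger suggested donation (shifting thresholds upward) lowers the equilibrium participation rate.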

  13. Transient heat conduction in a pebble fuel applying fractional model

    International Nuclear Information System (INIS)

    Gomez A, R.; Espinosa P, G.

    2009-10-01

    In this paper we present the time-fractional thermal diffusion equation in one-dimensional spherical coordinates, with the objective of analyzing the heat transfer between fuel and coolant in a fuel element of a Pebble Bed Modular Reactor. The pebble fuel is a heterogeneous system of microspheres constituted by UO2, pyrolytic carbon and silicon carbide, mixed with graphite. To describe the heat transfer phenomena in the pebble fuel we apply a fractional (non-Fourier) constitutive law, and a numerical model is developed to analyze the transient behaviour of the temperature distribution in the pebble fuel with anomalous thermal diffusion effects. (Author)
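
    The fractional (non-Fourier) ingredient is the Caputo time derivative of order alpha. A minimal L1 discretisation of that derivative, checked against the analytic result for f(t) = t (whose Caputo derivative is t^(1-alpha)/Γ(2-alpha)), looks like this; the grid parameters are arbitrary and this is only the time-stepping building block, not the full spherical diffusion solver:

```python
from math import gamma

alpha, dt, n = 0.5, 0.01, 100       # evaluate at t = n*dt = 1.0

f = [k * dt for k in range(n + 1)]  # f(t) = t sampled on the time grid

# L1 scheme: weighted sum of backward differences with weights
# b_k = (k+1)^(1-alpha) - k^(1-alpha)
acc = sum(((k + 1) ** (1 - alpha) - k ** (1 - alpha)) * (f[n - k] - f[n - k - 1])
          for k in range(n))
numeric = acc * dt ** (-alpha) / gamma(2 - alpha)

t_n = n * dt
exact = t_n ** (1 - alpha) / gamma(2 - alpha)   # analytic Caputo derivative of t
print(numeric, exact)
```

Because the L1 scheme is exact for piecewise-linear functions, the two values agree to rounding error here; in the heat conduction model this operator replaces the ordinary time derivative in the discretised energy balance.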

  14. A whole-of-curriculum approach to improving nursing students' applied numeracy skills.

    Science.gov (United States)

    van de Mortel, Thea F; Whitehair, Leeann P; Irwin, Pauletta M

    2014-03-01

    Nursing students often perform poorly on numeracy tests. Whilst one-off interventions have been trialled with limited success, a whole-of-curriculum approach may provide a better means of improving applied numeracy skills. The objective of the study is to assess the efficacy of a whole-of-curriculum approach in improving nursing students' applied numeracy skills. Two cycles of assessment, implementation and evaluation of strategies were conducted following a high fail rate in the final applied numeracy examination in a Bachelor of Nursing (BN) programme. Strategies included an early diagnostic assessment followed by referral to remediation, setting the pass mark at 100% for each of six applied numeracy examinations across the programme, and employing a specialist mathematics teacher to provide consistent numeracy teaching. The setting of the study is one Australian university. 1035 second and third year nursing students enrolled in four clinical nursing courses (CNC III, CNC IV, CNC V and CNC VI) were included. Data on the percentage of students who obtained 100% in their applied numeracy examination in up to two attempts were collected from CNCs III, IV, V and VI between 2008 and 2011. A four by two χ(2) contingency table was used to determine if the differences in the proportion of students achieving 100% across two examination attempts in each CNC were significantly different between 2008 and 2011. The percentage of students who obtained 100% correct answers on the applied numeracy examinations was significantly higher in 2011 than in 2008 in CNC III (χ(2)=272, 3; p<0.001), IV (χ(2)=94.7, 3; p<0.001) and VI (χ(2)=76.3, 3; p<0.001). A whole-of-curriculum approach to developing applied numeracy skills in BN students resulted in a substantial improvement in these skills over four years. Copyright © 2013 Elsevier Ltd. All rights reserved.
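
    The study's year-on-year comparison uses a χ² contingency-table test. The computation can be sketched on an invented 2×2 version of the year-by-outcome table (the counts below are made up purely to show the arithmetic, not the study's data):

```python
# Rows: cohort year; columns: [reached 100% within two attempts, did not]
observed = [[30, 70],   # hypothetical 2008 cohort
            [50, 50]]   # hypothetical 2011 cohort

row_tot = [sum(r) for r in observed]
col_tot = [sum(c) for c in zip(*observed)]
total = sum(row_tot)

# chi2 = sum over cells of (observed - expected)^2 / expected,
# with expected = row total * column total / grand total
chi2 = sum((observed[i][j] - row_tot[i] * col_tot[j] / total) ** 2
           / (row_tot[i] * col_tot[j] / total)
           for i in range(len(observed)) for j in range(len(observed[0])))
print(round(chi2, 3))
```

The paper's tables have four rows (two attempts × pass/fail structure across years), but the expected-count formula is identical.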

  15. Blood transfusion determines postoperative morbidity in pediatric cardiac surgery applying a comprehensive blood-sparing approach.

    Science.gov (United States)

    Redlin, Matthias; Kukucka, Marian; Boettcher, Wolfgang; Schoenfeld, Helge; Huebler, Michael; Kuppe, Hermann; Habazettl, Helmut

    2013-09-01

    Recently we suggested a comprehensive blood-sparing approach in pediatric cardiac surgery that resulted in no transfusion in 71 infants (25%), postoperative transfusion only in 68 (24%), and intraoperative transfusion in 149 (52%). We analyzed the effects of transfusion on postoperative morbidity and mortality in the same cohort of patients. The effect of transfusion on the length of mechanical ventilation and intensive care unit stay was assessed using Kaplan-Meier curves. To assess whether transfusion independently determined the length of mechanical ventilation and length of intensive care unit stay, a multivariate model was applied. Additionally, in the subgroup of transfused infants, the effect of the applied volume of packed red blood cells was assessed. The median length of mechanical ventilation was 11 hours (interquartile range, 9-18 hours), 33 hours (interquartile range, 18-80 hours), and 93 hours (interquartile range, 34-161 hours) in the no transfusion, postoperative transfusion only, and intraoperative transfusion groups, respectively (P < .00001). The median length of intensive care unit stay in the three groups was 1 day (interquartile range, 1-2 days), 3.5 days (interquartile range, 2-5 days), and 8 days (interquartile range, 3-9 days; P < .00001). The multivariate hazard ratio for early extubation was 0.24 (95% confidence interval, 0.16-0.35) and 0.37 (95% confidence interval, 0.25-0.55) for the intraoperative transfusion and postoperative transfusion only groups, respectively (P < .00001). In addition, the cardiopulmonary bypass time, body weight, need for reoperation, and hemoglobin during cardiopulmonary bypass affected the length of mechanical ventilation. Similar results were obtained for the length of intensive care unit stay. In the subgroup of transfused infants, the volume of packed red blood cells also independently affected both the length of mechanical ventilation and the length of intensive care unit stay. The incidence and volume of blood transfusion markedly affect postoperative morbidity in pediatric cardiac surgery.

  16. MODELLING AND SIMULATING RISKS IN THE TRAINING OF THE HUMAN RESOURCES BY APPLYING THE CHAOS THEORY

    OpenAIRE

    Eugen ROTARESCU

    2012-01-01

    The article addresses the modelling and simulation of risks in human resources training, as well as forecasting the degree of human resources training attained under risk, by applying the mathematical tools offered by chaos theory and mathematical statistics. We highlight that the levels of knowledge, skills and abilities of the human resources in an organization are autocorrelated in time: they depend on the level reached at a previous moment of the training, as well as on ...

  17. Multidisciplinary Management: Model of Excellence in the Management Applied to Products and Services

    OpenAIRE

    Guerreiro , Evandro ,; Costa Neto , Pedro ,; Moreira Filho , Ulysses ,

    2014-01-01

    Part 1: Knowledge-Based Performance Improvement; International audience; Multidisciplinary management is the guiding vision of modern organizations and of systems thinking, which requires new approaches to organizational excellence and the quality management process. The objective of this article is to present a model for the multidisciplinary management of quality applied to products and services, based on the American, Japanese, and Brazilian National Quality Awards. The methodology used to build th...

  18. Learning Actions Models: Qualitative Approach

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2015-01-01

    In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite ident...

  19. Gordon's model applied to nursing care of people with depression.

    Science.gov (United States)

    Temel, M; Kutlu, F Y

    2015-12-01

    Psychiatric nurses should consider the patient's biological, psychological and social aspects. Marjory Gordon's Functional Health Pattern Model ensures a holistic approach to the patient. To examine the effectiveness of Gordon's Functional Health Pattern Model in reducing depressive symptoms, increasing self-efficacy in coping with depression and increasing hope in people with depression. A quasi-experimental two-group pre-test and post-test design was adopted. Data were collected from April 2013 to May 2014 from people with depression at the psychiatry clinic of a state hospital in Turkey; they were assigned to the intervention (n = 34) or control group (n = 34). The intervention group received nursing care according to Gordon's Functional Health Pattern Model and routine care, while the control group received routine care only. The Beck Depression Inventory, Beck Hopelessness Scale and Depression Coping Self-Efficacy Scale were used. The intervention group had significantly lower scores on the Beck Depression Inventory and Beck Hopelessness Scale at the post-test and 3-month follow-up; they had higher scores on the Depression Coping Self-Efficacy Scale at the 3-month follow-up when compared with the control group. The study was conducted at only one psychiatry clinic. The intervention and control group patients were at the clinic at the same time and influenced each other. Moreover, because clinical routines were in progress during the study, the results cannot be attributed solely to the nursing interventions. Nursing models offer guidance for the care provided. Practices based on such models yield more efficient and systematic care with fewer health problems. Gordon's Functional Health Pattern Model was effective in improving the health of people with depression and could be introduced as routine care, with ongoing evaluation, in psychiatric clinics. More research is needed to evaluate the effect of Gordon's model on people with depression.

  20. A theoretical intellectual capital model applied to cities

    Directory of Open Access Journals (Sweden)

    José Luis Alfaro Navarro

    2013-06-01

    Full Text Available New Management Information Systems (MIS) are necessary at the local level as the main source of wealth creation. Therefore, tools and approaches that provide a full future vision of any organization should be a strategic priority for economic development. In this line, cities are “centers of knowledge and sources of growth and innovation”, and integrated urban development policies are necessary. These policies support communication networks and optimize location structures as strategies that provide opportunities for social and democratic participation for the citizens. This paper proposes a theoretical model to measure and evaluate cities' intellectual capital, which makes it possible to determine what we must take into account to make cities a source of wealth, prosperity, welfare and future growth. Furthermore, local intellectual capital provides a long-run vision. Thus, in this paper we develop and explain how to implement a model to estimate intellectual capital in cities. In this sense, our proposal is to provide a model for measuring and managing intellectual capital using socio-economic indicators for cities. These indicators offer a long-term picture supported by a comprehensive strategy for those who occupy the local space, the infrastructure for implementation, and the management of the environment for its development.

  1. Simulation of Road Traffic Applying Model-Driven Engineering

    Directory of Open Access Journals (Sweden)

    Alberto FERNÁNDEZ-ISABEL

    2016-05-01

    Full Text Available Road traffic is an important phenomenon in modern societies. The study of its different aspects in the multiple scenarios where it happens is relevant for a huge number of problems. At the same time, its scale and complexity make it hard to study. Traffic simulations can alleviate these difficulties, simplifying the scenarios to consider and controlling their variables. However, their development also presents difficulties. The main ones come from the need to integrate the way of working of researchers and developers from multiple fields. Model-Driven Engineering (MDE addresses these problems using Modelling Languages (MLs and semi-automatic transformations to organise and describe the development, from requirements to code. This paper presents a domain-specific MDE framework for simulations of road traffic. It comprises an extensible ML, support tools, and development guidelines. The ML adopts an agent-based approach, which is focused on the roles of individuals in road traffic and their decision-making. A case study shows the process to model a traffic theory with the ML, and how to specialise that specification for an existing target platform and its simulations. The results are the basis for comparison with related work.

  2. Models, validation, and applied geochemistry: Issues in science, communication, and philosophy

    International Nuclear Information System (INIS)

    Kirk Nordstrom, D.

    2012-01-01

    Models have become so fashionable that many scientists and engineers cannot imagine working without them. The predominant use of computer codes to execute model calculations has blurred the distinction between code and model. The recent controversy regarding model validation has brought into question what we mean by a ‘model’ and by ‘validation.’ It has become apparent that the usual meaning of validation may be common in engineering practice and seems useful in legal practice but it is contrary to scientific practice and brings into question our understanding of science and how it can best be applied to such problems as hazardous waste characterization, remediation, and aqueous geochemistry in general. This review summarizes arguments against using the phrase model validation and examines efforts to validate models for high-level radioactive waste management and for permitting and monitoring open-pit mines. Part of the controversy comes from a misunderstanding of ‘prediction’ and the need to distinguish logical from temporal prediction. Another problem stems from the difference in the engineering approach contrasted with the scientific approach. The reductionist influence on the way we approach environmental investigations also limits our ability to model the interconnected nature of reality. Guidelines are proposed to improve our perceptions and proper utilization of models. Use of the word ‘validation’ is strongly discouraged when discussing model reliability.

  3. Applying the health action process approach to bicycle helmet use and evaluating a social marketing campaign.

    Science.gov (United States)

    Karl, Florian M; Smith, Jennifer; Piedt, Shannon; Turcotte, Kate; Pike, Ian

    2017-08-05

    Bicycle injuries are of concern in Canada. Since helmet use was mandated in 1996 in the province of British Columbia, Canada, use has increased and head injuries have decreased. Despite the law, many cyclists do not wear a helmet. The health action process approach (HAPA) model explains intention and behaviour through self-efficacy, risk perception, outcome expectancy and planning constructs. The present study examines the impact of a social marketing campaign on HAPA constructs in the context of bicycle helmet use. A questionnaire was administered to identify factors determining helmet use. Intention to obey the law, and perceived risk of being caught if not obeying the law, were included as additional constructs. Path analysis was used to extract the strongest influences on intention and behaviour. The social marketing campaign was evaluated through t-test comparisons after propensity score matching, and generalised linear modelling (GLM) was applied to adjust for the same covariates. 400 cyclists aged 25-54 years completed the questionnaire. Self-efficacy and intention to obey the law were most predictive of intention to wear a helmet, which, moderated by planning, strongly predicted behaviour. Perceived risk and outcome expectancies had no significant impact on intention. GLM showed that exposure to the campaign was significantly associated with higher values in self-efficacy, intention and bicycle helmet use. Self-efficacy and planning are important points of action for promoting helmet use. Social marketing campaigns that remind people of appropriate preventive action have an impact on behaviour. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  4. Applying the welfare model to at-own-risk discharges.

    Science.gov (United States)

    Krishna, Lalit Kumar Radha; Menon, Sumytra; Kanesvaran, Ravindran

    2017-08-01

    "At-own-risk discharges" or "self-discharges" evidence an irretrievable breakdown in the patient-clinician relationship, occurring when patients leave care facilities before completion of medical treatment and against medical advice. Dissolution of the therapeutic relationship terminates the physician's duty of care and professional liability with respect to care of the patient. Acquiescence to an at-own-risk discharge by the clinician is seen as respecting patient autonomy. The validity of such requests pivots on the assumptions that the patient is fully informed and competent to invoke an at-own-risk discharge and that care up to the point of the at-own-risk discharge meets prevailing clinical standards. Palliative care's use of a multidisciplinary team approach challenges both these assumptions: first, by establishing multiple independent therapeutic relationships between professionals in the multidisciplinary team and the patient, which persist despite an at-own-risk discharge. These enduring therapeutic relationships negate the suggestion that no duty of care is owed to the patient. Second, the continued employment of collusion, familial determinations, and the circumnavigation of direct patient involvement in family-centric societies compromises the patient's decision-making capacity and raises questions as to their ability to assume responsibility for the repercussions of invoking an at-own-risk discharge. With the validity of the at-own-risk discharge request in question and patient welfare and interests at stake, an alternative approach to assessing at-own-risk discharge requests is called for. The welfare model circumnavigates these concerns and preserves the patient's welfare through a multidisciplinary-team-guided holistic appraisal of the patient's specific situation, informed by clinical and institutional standards and evidence-based practice. The welfare model provides a robust decision-making framework for

  5. Applying a Bayesian Approach to Identification of Orthotropic Elastic Constants from Full Field Displacement Measurements

    Directory of Open Access Journals (Sweden)

    Le Riche R.

    2010-06-01

    Full Text Available A major challenge in the identification of material properties is handling different sources of uncertainty in the experiment and in the modelling of the experiment for estimating the resulting uncertainty in the identified properties. Numerous improvements in identification methods have provided increasingly accurate estimates of various material properties. However, characterizing the uncertainty in the identified properties is still relatively crude. Different material properties obtained from a single test are not obtained with the same confidence. Typically the highest uncertainty is associated with the properties to which the experiment is the least sensitive. In addition, the uncertainty in different properties can be strongly correlated, so that obtaining only variance estimates may be misleading. A possible approach for handling the different sources of uncertainty and estimating the uncertainty in the identified properties is the Bayesian method. This method was introduced in the late 1970s in the context of identification [1] and has been applied since to different problems, notably identification of elastic constants from plate vibration experiments [2]-[4]. The applications of the method to these classical pointwise tests involved only a small number of measurements (typically ten natural frequencies in the previously cited vibration test) which facilitated the application of the Bayesian approach. For identifying elastic constants, full field strain or displacement measurements provide a high number of measured quantities (one measurement per image pixel) and hence a promise of smaller uncertainties in the properties. However, the high number of measurements represents also a major computational challenge in applying the Bayesian approach to full field measurements. To address this challenge we propose an approach based on the proper orthogonal decomposition (POD) of the full fields in order to drastically reduce their

  6. Applying a Bayesian Approach to Identification of Orthotropic Elastic Constants from Full Field Displacement Measurements

    Science.gov (United States)

    Gogu, C.; Yin, W.; Haftka, R.; Ifju, P.; Molimard, J.; Le Riche, R.; Vautrin, A.

    2010-06-01

    A major challenge in the identification of material properties is handling different sources of uncertainty in the experiment and in the modelling of the experiment for estimating the resulting uncertainty in the identified properties. Numerous improvements in identification methods have provided increasingly accurate estimates of various material properties. However, characterizing the uncertainty in the identified properties is still relatively crude. Different material properties obtained from a single test are not obtained with the same confidence. Typically the highest uncertainty is associated with the properties to which the experiment is the least sensitive. In addition, the uncertainty in different properties can be strongly correlated, so that obtaining only variance estimates may be misleading. A possible approach for handling the different sources of uncertainty and estimating the uncertainty in the identified properties is the Bayesian method. This method was introduced in the late 1970s in the context of identification [1] and has been applied since to different problems, notably identification of elastic constants from plate vibration experiments [2]-[4]. The applications of the method to these classical pointwise tests involved only a small number of measurements (typically ten natural frequencies in the previously cited vibration test) which facilitated the application of the Bayesian approach. For identifying elastic constants, full field strain or displacement measurements provide a high number of measured quantities (one measurement per image pixel) and hence a promise of smaller uncertainties in the properties. However, the high number of measurements represents also a major computational challenge in applying the Bayesian approach to full field measurements. To address this challenge we propose an approach based on the proper orthogonal decomposition (POD) of the full fields in order to drastically reduce their dimensionality. POD is
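The dimensionality reduction step that this record describes can be sketched with a truncated proper orthogonal decomposition computed via the SVD. The synthetic "displacement fields" below are illustrative stand-ins for real full field measurements, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for full field data: each column is one "displacement
# field" of 200 pixels, built from 3 smooth modes plus small measurement noise.
x = np.linspace(0.0, 1.0, 200)
true_modes = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x), x ** 2], axis=1)
fields = true_modes @ rng.normal(size=(3, 30)) + 1e-3 * rng.normal(size=(200, 30))

# POD via SVD; retain the modes capturing 99.99% of the variance ("energy").
U, s, _ = np.linalg.svd(fields, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(energy, 0.9999)) + 1

# Each field is now described by k POD coefficients instead of 200 pixel values.
coeffs = U[:, :k].T @ fields
print(k, coeffs.shape)
```

Here the 200-pixel fields compress to a handful of coefficients, which is the kind of reduction that makes the Bayesian treatment of full field data tractable.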

  7. Uncharted territory: A complex systems approach as an emerging paradigm in applied linguistics

    Directory of Open Access Journals (Sweden)

    Weideman, Albert J

    2009-12-01

    Full Text Available Developing a theory of applied linguistics is a top priority for the discipline today. The emergence of a new paradigm - a complex systems approach - in applied linguistics presents us with a unique opportunity to give prominence to the development of a foundational framework for this design discipline. Far from being a mere philosophical exercise, such a framework will find application in the training and induction of new entrants into the discipline within the developing context of South Africa, as well as internationally.

  8. Applying the Expectancy-Value Model to understand health values.

    Science.gov (United States)

    Zhang, Xu-Hao; Xie, Feng; Wee, Hwee-Lin; Thumboo, Julian; Li, Shu-Chuen

    2008-03-01

    Expectancy-Value Model (EVM) is the most structured model in psychology for predicting attitudes by measuring attitudinal attributes (AAs) and relevant external variables. Because health value can be categorized as an attitude, we aimed to apply EVM to explore its usefulness in explaining variances in health values and to investigate the underlying factors. A focus group discussion was carried out to identify the most common and significant AAs toward 5 different health states (coded as 11111, 11121, 21221, 32323, and 33333 in the EuroQol Five-Dimension (EQ-5D) descriptive system). AAs were measured as the sum of the products of subjective probability (expectancy) and perceived value of each attribute, both rated on 7-point Likert scales. Health values were measured using visual analog scales (VAS, range 0-1). External variables (age, sex, ethnicity, education, housing, marital status, and concurrent chronic diseases) were also incorporated into a survey questionnaire distributed by convenience sampling among eligible respondents. Univariate analyses were used to identify external variables causing significant differences in VAS. A multiple linear regression model (MLR) and a hierarchical regression model were used to investigate the explanatory power of the AAs and possible significant external variable(s), separately or in combination, for each individual health state and a mixed scenario of the five states, respectively. Four AAs were identified, namely, "worsening your quality of life in terms of health" (WQoL), "adding a burden to your family" (BTF), "making you less independent" (MLI) and "unable to work or study" (UWS). Data were analyzed based on 232 respondents (mean [SD] age: 27.7 [15.07] years, 49.1% female). Health values varied significantly across the 5 health states, ranging from 0.12 (33333) to 0.97 (11111). With no significant external variables identified, EVM explained up to 62% of the variance in health values across the 5 health states.
    The explanatory power of the 4 AAs was found to be between 13
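The sum-of-products construction of the attitudinal attributes and the MLR step can be sketched as follows. All respondent ratings and VAS values below are invented for illustration; the four columns stand for the WQoL, BTF, MLI and UWS attributes.

```python
import numpy as np

# Invented ratings for 8 respondents on the 4 AAs (WQoL, BTF, MLI, UWS):
# expectancy = subjective probability, value = perceived value, both 1-7.
expectancy = np.array([[6, 5, 4, 5], [3, 2, 2, 3], [7, 6, 6, 7], [2, 2, 1, 2],
                       [5, 4, 5, 4], [4, 3, 3, 4], [6, 6, 5, 6], [1, 1, 2, 1]])
value = np.array([[6, 5, 5, 5], [3, 3, 2, 2], [7, 7, 6, 6], [2, 1, 2, 2],
                  [5, 5, 4, 4], [4, 4, 3, 3], [6, 5, 6, 5], [2, 1, 1, 2]])
aa = expectancy * value          # EVM: attribute score = expectancy x value

# VAS health values (0-1) for the same respondents (also invented).
vas = np.array([0.45, 0.88, 0.15, 0.95, 0.60, 0.75, 0.35, 0.97])

# MLR of VAS on the 4 AA scores via ordinary least squares.
X = np.column_stack([np.ones(len(vas)), aa])
beta, *_ = np.linalg.lstsq(X, vas, rcond=None)
pred = X @ beta
r2 = 1.0 - ((vas - pred) ** 2).sum() / ((vas - vas.mean()) ** 2).sum()
print(f"R^2 = {r2:.3f}")
```

The R² from such a regression is the "explanatory power" figure the abstract reports for the AAs.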

  9. Evaluation of deconvolution modelling applied to numerical combustion

    Science.gov (United States)

    Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît

    2018-01-01

    A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first relies on subgrid scale interpolation of the deconvolved profiles and the second uses parametric functions to describe the small scales. The tests assess the ability of each method to capture the chemical structure and propagation speed of the filtered flame. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
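Of the three methods above, the Van Cittert iteration is the simplest to illustrate: repeatedly add the residual between the filtered field and the re-filtered estimate. The 1-D "flame" profile and Gaussian LES filter below are illustrative choices, not the configuration of the paper.

```python
import numpy as np

def gaussian_filter(u, sigma, length):
    """Spectral Gaussian filter on a periodic 1-D field."""
    k = 2.0 * np.pi * np.fft.fftfreq(len(u), d=length / len(u))
    G = np.exp(-(k * sigma) ** 2 / 2.0)   # Gaussian transfer function, 0 < G <= 1
    return np.real(np.fft.ifft(G * np.fft.fft(u)))

# Progress-variable-like profile: a smooth 0 -> 1 -> 0 pulse on a periodic domain.
n, L = 256, 10.0
x = np.linspace(0.0, L, n, endpoint=False)
c = 0.5 * (np.tanh(2.0 * (x - 3.0)) - np.tanh(2.0 * (x - 7.0)))

c_filtered = gaussian_filter(c, sigma=0.3, length=L)

# Van Cittert: u_{k+1} = u_k + (c_filtered - G * u_k); converges where G > 0.
u = c_filtered.copy()
for _ in range(50):
    u = u + (c_filtered - gaussian_filter(u, sigma=0.3, length=L))

err_filtered = np.abs(c - c_filtered).max()
err_deconvolved = np.abs(c - u).max()
print(err_filtered, err_deconvolved)
```

The ill-posedness the abstract mentions shows up here too: modes where the transfer function G is near zero are recovered very slowly, which is why regularisation or small-scale modelling becomes necessary at large filter sizes.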

  10. Applying the archetype approach to the database of a biobank information management system.

    Science.gov (United States)

    Späth, Melanie Bettina; Grimson, Jane

    2011-03-01

    The purpose of this study is to investigate the feasibility of applying the openEHR archetype approach to modelling the data in the database of an existing proprietary biobank information management system. A biobank information management system stores the clinical/phenotypic data of the sample donor and sample related information. The clinical/phenotypic data is potentially sourced from the donor's electronic health record (EHR). The study evaluates the reuse of openEHR archetypes that have been developed for the creation of an interoperable EHR in the context of biobanking, and proposes a new set of archetypes specifically for biobanks. The ultimate goal of the research is the development of an interoperable electronic biomedical research record (eBMRR) to support biomedical knowledge discovery. The database of the prostate cancer biobank of the Irish Prostate Cancer Research Consortium (PCRC), which supports the identification of novel biomarkers for prostate cancer, was taken as the basis for the modelling effort. First the database schema of the biobank was analyzed and reorganized into archetype-friendly concepts. Then, archetype repositories were searched for matching archetypes. Some existing archetypes were reused without change, some were modified or specialized, and new archetypes were developed where needed. The fields of the biobank database schema were then mapped to the elements in the archetypes. Finally, the archetypes were arranged into templates specifically to meet the requirements of the PCRC biobank. A set of 47 archetypes was found to cover all the concepts used in the biobank. Of these, 29 (62%) were reused without change, 6 were modified and/or extended, 1 was specialized, and 11 were newly defined. These archetypes were arranged into 8 templates specifically required for this biobank. A number of issues were encountered in this research. Some arose from the immaturity of the archetype approach, such as immature modelling support tools

  11. An integrated approach to permeability modeling using micro-models

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)

    2008-10-15

    An important factor in predicting the performance of steam assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship difficult include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modelling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modelling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; the extended power law formalism (EPLF); and the application of micro-modelling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results, with close agreement between the model and the simulation. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of the McMurray oil sands, and the modelled relationship was in good agreement with the experimental data. 8 refs., 17 figs.
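A minimal sketch of the power-law averaging idea behind an EPLF-style upscaling, with hypothetical sand and shale permeabilities: the averaging exponent omega moves the effective permeability between the harmonic bound (flow across laminations) and the arithmetic bound (flow along them).

```python
import numpy as np

def power_average(perms, fracs, omega):
    """Power-law average K_eff = (sum_i f_i * K_i**omega)**(1/omega), omega != 0."""
    perms = np.asarray(perms, dtype=float)
    fracs = np.asarray(fracs, dtype=float)
    return float((fracs * perms ** omega).sum() ** (1.0 / omega))

# Hypothetical binary sand/shale mixture (permeabilities in mD).
k_sand, k_shale, v_shale = 2000.0, 0.1, 0.2
fracs = [1.0 - v_shale, v_shale]

k_parallel = power_average([k_sand, k_shale], fracs, omega=1.0)    # arithmetic
k_series = power_average([k_sand, k_shale], fracs, omega=-1.0)     # harmonic
print(k_parallel, k_series)
```

With a high permeability contrast, flow across the laminations is controlled almost entirely by the shale, which is why laminated features matter so much for SAGD performance.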

  12. Global energy modeling - A biophysical approach

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.

  13. A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen

    2007-01-01

    This paper presents a decision support system for assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertaintie...

  14. A Bayesian approach for quantification of model uncertainty

    International Nuclear Information System (INIS)

    Park, Inseok; Amarchinta, Hemanth K.; Grandhi, Ramana V.

    2010-01-01

    In most engineering problems, more than one model can be created to represent an engineering system's behavior. Uncertainty is inevitably involved in selecting the best model from among the models that are possible. Uncertainty in model selection cannot be ignored, especially when the differences between the predictions of competing models are significant. In this research, a methodology is proposed to quantify model uncertainty using measured differences between experimental data and model outcomes under a Bayesian statistical framework. The adjustment factor approach is used to propagate model uncertainty into the prediction of a system response. A nonlinear vibration system is used to demonstrate the process of implementing the adjustment factor approach. Finally, the methodology is applied to the modelling of a laser peening process, and a confidence band for residual stresses is established to indicate the reliability of the model prediction.
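The additive adjustment factor approach can be sketched numerically: the prediction of the most probable model is shifted by the probability-weighted deviations of the competing models, and the weighted spread gives a variance for a confidence band. The model predictions and posterior probabilities below are invented.

```python
import numpy as np

# Invented predictions of one response quantity from three competing models,
# with posterior model probabilities (e.g. from Bayes factors); probs sum to 1.
preds = np.array([102.0, 98.5, 95.0])
probs = np.array([0.5, 0.3, 0.2])

y_best = preds[np.argmax(probs)]                    # most probable model
mean_adj = (probs * (preds - y_best)).sum()         # mean of adjustment factor
var_adj = (probs * (preds - y_best - mean_adj) ** 2).sum()

y_adjusted = y_best + mean_adj
half_width = 2.0 * np.sqrt(var_adj)                 # ~95% band under normality
print(f"{y_adjusted:.2f} +/- {half_width:.2f}")
```

The band widens exactly when the competing models disagree, which is the situation the abstract identifies as the one where model uncertainty cannot be ignored.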

  15. Applied genre analysis: a multi-perspective model

    Directory of Open Access Journals (Sweden)

    Vijay K Bhatia

    2002-04-01

    Full Text Available Genre analysis can be viewed from two different perspectives: it may be seen as a reflection of the complex realities of the world of institutionalised communication, or it may be seen as a pedagogically effective and convenient tool for the design of language teaching programmes, often situated within simulated contexts of classroom activities. This paper makes an attempt to understand and resolve the tension between these two seemingly contentious perspectives to answer the question: "Is generic description a reflection of reality, or a convenient fiction invented by applied linguists?". The paper also discusses issues related to the nature and use of linguistic description in a genre-based educational enterprise, claiming that instead of using generic descriptions as models for linguistic reproduction of conventional forms to respond to recurring social contexts, as is often the case in many communication based curriculum contexts, they can be used as analytical resource to understand and manipulate complex inter-generic and multicultural realisations of professional discourse, which will enable learners to use generic knowledge to respond to novel social contexts and also to create new forms of discourse to achieve pragmatic success as well as other powerful human agendas.

  16. Applying Quality Function Deployment Model in Burn Unit Service Improvement.

    Science.gov (United States)

    Keshtkaran, Ali; Hashemi, Neda; Kharazmi, Erfan; Abbasi, Mehdi

    2016-01-01

    Quality function deployment (QFD) is one of the most effective quality design tools. This study applies the QFD technique to improve the quality of the burn unit services in Ghotbedin Hospital in Shiraz, Iran. First, the patients' expectations of burn unit services and their priorities were determined through the Delphi method. Thereafter, burn unit service specifications were determined through the Delphi method. Further, the relationships between the patients' expectations and service specifications, and also the relationships between service specifications, were determined through an expert group's opinion. Finally, the final importance scores of service specifications were calculated through the simple additive weighting method. The findings show that burn unit patients have 40 expectations in six different areas. These expectations are in 16 priority levels. Burn units also have 45 service specifications in six different areas. There are four-level relationships between the patients' expectations and service specifications and four-level relationships between service specifications. The most important burn unit service specifications have been identified in this study. The QFD model developed in the study can be a general guideline for QFD planners and executives.
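The final scoring step in this record (simple additive weighting over the expectation/specification relationship matrix) can be illustrated with a toy example. All matrix values and weights below are hypothetical; the study's actual 40 expectations and 45 specifications are not reproduced here.

```python
import numpy as np

# Hypothetical QFD example: 3 patient expectations x 4 service specifications.
# Relationship strengths on the common QFD 0/1/3/9 scale (assumed values).
relationship = np.array([
    [9, 3, 0, 1],
    [3, 9, 1, 0],
    [1, 0, 9, 3],
])

# Normalised priority weights of the expectations (e.g. from Delphi rounds)
weights = np.array([0.5, 0.3, 0.2])

# Simple additive weighting: the importance of each specification is the
# weight-averaged relationship strength down its column
importance = weights @ relationship

# Rank specifications from most to least important
ranking = np.argsort(importance)[::-1]
```

The specification with the highest weighted column sum would be addressed first in the service improvement plan.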

  17. Electrodynamic modeling applied to micro-strip gas chambers

    International Nuclear Information System (INIS)

    Fang, R.

    1998-01-01

    Gas gain variations as functions of time, counting rate and substrate resistivity have been observed with Micro-Strip Gas Chambers (MSGC). Such a chamber is here treated as a system of two dielectrics, gas and substrate, with finite resistivities. Electric charging of their interface results in variations of the electric field and the gas gain. The electrodynamic equations (including time dependence) for such a system are proposed. A Rule of Charge Accumulation (RCA) is then derived, which allows one to determine the quantity and sign of the charges accumulated on the surface at equilibrium. In order to apply the equations and the rule to MSGCs, a model of gas conductance induced by ionizing radiation is proposed, and a differential equation and some formulae are derived to calculate the rms dispersion and the spatial distribution of electrons (ions) in inhomogeneous electric fields. The RCA, coupled with a precise simulation of the electric fields, gives the first quantitative explanation of the gas gain variations of MSGCs. Finally, an electrodynamic simulation program is developed to reproduce the dynamic process of gain variation due to surface charging with an uncertainty of at most 15% relative to experimental data. As a consequence, methods for stabilizing the operation of MSGCs are proposed. (author)

  18. A strategy to apply a graded approach to a new research reactor I and C design

    International Nuclear Information System (INIS)

    Suh, Yong Suk; Park, Jae Kwan; Kim, Taek Kyu; Bae, Sang Hoon; Baang, Dane; Kim, Young Ki

    2012-01-01

    A project for the development of a new research reactor (NRR) was launched by KAERI in 2012. It has two purposes: 1) providing a facility for radioisotope production, neutron transmutation doping, and semiconductor wafer doping, and 2) obtaining a standard model for exporting a research reactor (RR). The instrumentation and control (I and C) design should provide an appropriate architecture for the NRR export. A graded approach (GA) was adopted in designing the I and C architecture. Although the GA for RRs is currently under development by the IAEA, it has been recommended and applied in many areas of nuclear facilities. The Canadian Nuclear Safety Commission allows the use of a GA for RRs to meet the safety requirements. Germany applied the GA to a decommissioning project, categorizing the level of complexity of the decommissioning project using the GA. Under 10 C.F.R. Part 830, Section 830.7, a contractor must use a GA to implement the requirements of the part, document the basis of the GA used, and submit that document to the U.S. DOE; one noted challenge is the inconsistent application of the GA across DOE programs. RG 1.176 states that graded quality assurance brings benefits of resource allocation based on the safety significance of the items. The U.S. NRC also applied the GA to decommissioning small facilities. NASA published a handbook for risk-informed decision making that is conducted using a GA. ISA-TR67.04.09-2005 supplements ANSI/ISA-S67.04.01-2000 and ISA-RP67.04.02-2000 in determining setpoints using a GA. The GA is defined as a risk-informed approach that, without compromising safety, allows safety requirements to be implemented in such a way that the level of design, analysis, and documentation is commensurate with the potential risks of the reactor. The IAEA is developing a GA through DS351 and has recommended applying it to a reactor design according to power and hazard level.
Owing to the wide range of RR

  19. A strategy to apply a graded approach to a new research reactor I and C design

    Energy Technology Data Exchange (ETDEWEB)

    Suh, Yong Suk; Park, Jae Kwan; Kim, Taek Kyu; Bae, Sang Hoon; Baang, Dane; Kim, Young Ki [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    A project for the development of a new research reactor (NRR) was launched by KAERI in 2012. It has two purposes: 1) providing a facility for radioisotope production, neutron transmutation doping, and semiconductor wafer doping, and 2) obtaining a standard model for exporting a research reactor (RR). The instrumentation and control (I and C) design should provide an appropriate architecture for the NRR export. A graded approach (GA) was adopted in designing the I and C architecture. Although the GA for RRs is currently under development by the IAEA, it has been recommended and applied in many areas of nuclear facilities. The Canadian Nuclear Safety Commission allows the use of a GA for RRs to meet the safety requirements. Germany applied the GA to a decommissioning project, categorizing the level of complexity of the decommissioning project using the GA. Under 10 C.F.R. Part 830, Section 830.7, a contractor must use a GA to implement the requirements of the part, document the basis of the GA used, and submit that document to the U.S. DOE; one noted challenge is the inconsistent application of the GA across DOE programs. RG 1.176 states that graded quality assurance brings benefits of resource allocation based on the safety significance of the items. The U.S. NRC also applied the GA to decommissioning small facilities. NASA published a handbook for risk-informed decision making that is conducted using a GA. ISA-TR67.04.09-2005 supplements ANSI/ISA-S67.04.01-2000 and ISA-RP67.04.02-2000 in determining setpoints using a GA. The GA is defined as a risk-informed approach that, without compromising safety, allows safety requirements to be implemented in such a way that the level of design, analysis, and documentation is commensurate with the potential risks of the reactor. The IAEA is developing a GA through DS351 and has recommended applying it to a reactor design according to power and hazard level.
Owing to the wide range of RR

  20. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we…

  1. An approach for evaluating the integrity of fuel applied in Innovative Nuclear Energy Systems

    International Nuclear Information System (INIS)

    Nakae, Nobuo; Ozawa, Takayuki; Ohta, Hirokazu; Ogata, Takanari; Sekimoto, Hiroshi

    2014-01-01

    One of the important issues in the study of Innovative Nuclear Energy Systems is evaluating the integrity of the fuel applied in such systems. An approach for evaluating the integrity of the fuel is discussed here based on the procedure currently used in the integrity evaluation of fast reactor fuel. The fuel failure modes determining fuel lifetime were reviewed, and fuel integrity was analyzed and compared with the failure criteria. Metal and nitride fuels with austenitic and ferritic stainless steel (SS) cladding tubes were examined in this study. For the purpose of representative irradiation behavior analyses of the fuel for Innovative Nuclear Energy Systems, the correlations of the cladding characteristics were modeled based on well-known characteristics of austenitic modified 316 SS (PNC316), ferritic–martensitic steel (PNC–FMS) and oxide dispersion strengthened steel (PNC–ODS). The analysis showed that, in the case of austenitic steel cladding, the fuel lifetime is limited by channel fracture, which is a nonductile (brittle) failure associated with a high level of irradiation-induced swelling. In the case of ferritic steel, on the other hand, the fuel lifetime is controlled by cladding creep rupture. The lifetime evaluated here is limited to 200 GW d/t, which is lower than the target burnup value of 500 GW d/t. Possible measures to extend the lifetime include reducing the fuel smeared density and venting fission gas into the plenum for metal fuel, and reducing the maximum cladding temperature from 650 to 600 °C for both metal and nitride fuel.

  2. An approach for evaluating the integrity of fuel applied in Innovative Nuclear Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Nakae, Nobuo, E-mail: nakae-nobuo@jnes.go.jp [Center for Research into Innovative Nuclear Energy System, Tokyo Institute of Technology, 2-12-1-N1-19, Ookayama, Meguro-ku, Tokyo 152-8550 (Japan); Ozawa, Takayuki [Advanced Nuclear System Research and Development Directorate, Japan Atomic Energy Agency, 4-33, Muramatsu, Tokai-mura, Ibaraki-ken 319-1194 (Japan); Ohta, Hirokazu; Ogata, Takanari [Nuclear Technology Research Laboratory, Central Research Institute of Electric Power Industry, 2-11-1, Iwado Kita, Komae-shi, Tokyo 201-8511 (Japan); Sekimoto, Hiroshi [Center for Research into Innovative Nuclear Energy System, Tokyo Institute of Technology, 2-12-1-N1-19, Ookayama, Meguro-ku, Tokyo 152-8550 (Japan)

    2014-03-15

    One of the important issues in the study of Innovative Nuclear Energy Systems is evaluating the integrity of the fuel applied in such systems. An approach for evaluating the integrity of the fuel is discussed here based on the procedure currently used in the integrity evaluation of fast reactor fuel. The fuel failure modes determining fuel lifetime were reviewed, and fuel integrity was analyzed and compared with the failure criteria. Metal and nitride fuels with austenitic and ferritic stainless steel (SS) cladding tubes were examined in this study. For the purpose of representative irradiation behavior analyses of the fuel for Innovative Nuclear Energy Systems, the correlations of the cladding characteristics were modeled based on well-known characteristics of austenitic modified 316 SS (PNC316), ferritic–martensitic steel (PNC–FMS) and oxide dispersion strengthened steel (PNC–ODS). The analysis showed that, in the case of austenitic steel cladding, the fuel lifetime is limited by channel fracture, which is a nonductile (brittle) failure associated with a high level of irradiation-induced swelling. In the case of ferritic steel, on the other hand, the fuel lifetime is controlled by cladding creep rupture. The lifetime evaluated here is limited to 200 GW d/t, which is lower than the target burnup value of 500 GW d/t. Possible measures to extend the lifetime include reducing the fuel smeared density and venting fission gas into the plenum for metal fuel, and reducing the maximum cladding temperature from 650 to 600 °C for both metal and nitride fuel.

  3. Fluid Intelligence as a Predictor of Learning: A Longitudinal Multilevel Approach Applied to Math

    Science.gov (United States)

    Primi, Ricardo; Ferrao, Maria Eugenia; Almeida, Leandro S.

    2010-01-01

    The association between fluid intelligence and inter-individual differences was investigated using multilevel growth curve modeling applied to data measuring intra-individual improvement on math achievement tests. A sample of 166 students (88 boys and 78 girls), ranging in age from 11 to 14 (M = 12.3, SD = 0.64), was tested. These individuals took…

  4. Reynolds stress turbulence model applied to two-phase pressurized thermal shocks in nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Mérigoux, Nicolas, E-mail: nicolas.merigoux@edf.fr; Laviéville, Jérôme; Mimouni, Stéphane; Guingo, Mathieu; Baudry, Cyril

    2016-04-01

    Highlights: • NEPTUNE-CFD is used to model two-phase PTS. • The k-ε model produced some satisfactory results but also highlighted some weaknesses. • A more advanced turbulence model has been developed, validated and applied for PTS. • Coupled with LIM, the first results confirmed the increased accuracy of the approach. Abstract: Nuclear power plants are subjected to a variety of ageing mechanisms and, at the same time, exposed to potential pressurized thermal shocks (PTS), characterized by a rapid cooling of the internal Reactor Pressure Vessel (RPV) surface. In this context, NEPTUNE-CFD is used to model two-phase PTS and assess the structural integrity of the RPV. The first available choice was to use a standard first-order turbulence model (k-ε) to model the high-Reynolds-number flows encountered in Pressurized Water Reactor (PWR) primary circuits. In a first attempt, the k-ε model produced some satisfactory results in terms of condensation rate and temperature field distribution on integral experiments, but also highlighted some weaknesses in the modelling of highly anisotropic turbulence. One way to improve the turbulence prediction, and consequently the temperature field distribution, is to opt for a more advanced Reynolds Stress turbulence Model. After various verification and validation steps on separate-effects cases (co-current air/steam-water stratified flows in rectangular channels, water jet impingements on water pool free surfaces), this Reynolds Stress turbulence Model (Rij-ε SSG) has been applied for the first time to thermal free surface flows under industrial conditions on the COSI and TOPFLOW-PTS experiments. Coupled with the Large Interface Model, the first results confirmed the adequacy and increased accuracy of the approach in an industrial context.

  5. International Conference on Applied Mathematics, Modeling and Computational Science & Annual meeting of the Canadian Applied and Industrial Mathematics

    CERN Document Server

    Bélair, Jacques; Kunze, Herb; Makarov, Roman; Melnik, Roderick; Spiteri, Raymond J

    2016-01-01

    Focusing on five main groups of interdisciplinary problems, this book covers a wide range of topics in mathematical modeling, computational science and applied mathematics. It presents a wealth of new results in the development of modeling theories and methods, advancing diverse areas of applications and promoting interdisciplinary interactions between mathematicians, scientists, engineers and representatives from other disciplines. The book offers a valuable source of methods, ideas, and tools developed for a variety of disciplines, including the natural and social sciences, medicine, engineering, and technology. Original results are presented on both the fundamental and applied level, accompanied by an ample number of real-world problems and examples emphasizing the interdisciplinary nature and universality of mathematical modeling, and providing an excellent outline of today’s challenges. Mathematical modeling, with applied and computational methods and tools, plays a fundamental role in modern science a...

  6. An approach of optimal sensitivity applied in the tertiary loop of the automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Belati, Edmarcio A. [CIMATEC - SENAI, Salvador, BA (Brazil); Alves, Dilson A. [Electrical Engineering Department, FEIS, UNESP - Sao Paulo State University (Brazil); da Costa, Geraldo R.M. [Electrical Engineering Department, EESC, USP - Sao Paulo University (Brazil)

    2008-09-15

    This paper proposes an approach of optimal sensitivity applied in the tertiary loop of the automatic generation control. The approach is based on the theorem of non-linear perturbation. From an optimal operation point obtained by an optimal power flow, a new optimal operation point is directly determined after a perturbation, i.e., without the need for an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the technique of optimal sensitivity, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of the automatic generation control, named the power sensitivity mode. Test results are presented to show the good performance of this approach. (author)

  7. Effects produced by oscillations applied to nonlinear dynamic systems: a general approach and examples

    DEFF Research Database (Denmark)

    Blekhman, I. I.; Sorokin, V. S.

    2016-01-01

    A general approach to study effects produced by oscillations applied to nonlinear dynamic systems is developed. It implies a transition from the initial governing equations of motion to much more simple equations describing only the main slow component of motions (the vibro-transformed dynamics equations). The approach is named oscillatory strobodynamics, since motions are perceived as if under a stroboscopic light. The vibro-transformed dynamics equations comprise terms that capture the averaged effect of oscillations. The method of direct separation of motions appears to be an efficient… …e.g., the requirement for the involved nonlinearities to be weak. The approach is illustrated by several relevant examples from various fields of science, e.g., mechanics, physics, chemistry and biophysics.

  8. Dynamic model reduction using data-driven Loewner-framework applied to thermally morphing structures

    Science.gov (United States)

    Phoenix, Austin A.; Tarazaga, Pablo A.

    2017-05-01

    The work herein proposes the use of the data-driven Loewner framework for reduced-order modeling as applied to dynamic Finite Element Models (FEM) of thermally morphing structures. The Loewner-based modeling approach is computationally efficient and accurately constructs reduced models using analytical output data from a FEM. This paper details the two-step process proposed in the Loewner approach. First, a random vibration FEM simulation is used as the input for the development of a Single Input Single Output (SISO) data-based dynamic Loewner state space model. Second, an SVD-based truncation is applied to the Loewner state space model, such that the minimal, dynamically representative state space model is achieved. For this second step, varying levels of reduction are generated and compared. The work herein can be extended to model generation using experimental measurements by replacing the FEM output data in the first step and following the same procedure. The method is demonstrated on two thermally morphing structures: a rigidly fixed hexapod in multiple geometric configurations and a low-mass anisotropic morphing boom. The aim is to detail the method and identify the benefits of the reduced-order modeling methodology.
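As a rough illustration of the Loewner framework's core construction, the sketch below builds a Loewner matrix from samples of a toy first-order transfer function and uses an SVD to read off the order of the reduced model. This is the generic textbook construction under assumed sample points, not the paper's FEM workflow.

```python
import numpy as np

# Toy transfer function H(s) = 1/(s+1), sampled at interpolation points
def H(s):
    return 1.0 / (s + 1.0)

# Partition the sample points into "left" (mu) and "right" (lam) sets
mu = np.array([0.1j, 0.5j, 1.0j])
lam = np.array([0.2j, 0.7j, 1.5j])
v, w = H(mu), H(lam)

# Loewner matrix: L[i, j] = (v_i - w_j) / (mu_i - lam_j)
L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])

# SVD-based truncation: the numerical rank of L reveals the minimal order
U, S, Vh = np.linalg.svd(L)
r = int(np.sum(S > 1e-8 * S[0]))   # order of the reduced model
```

For this first-order system the Loewner matrix has numerical rank 1, so the SVD truncation correctly identifies a first-order reduced model; with FEM frequency-response data the same rank test selects the reduction level.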

  9. Applying the Business Process and Practice Alignment Meta-model: Daily Practices and Process Modelling

    Directory of Open Access Journals (Sweden)

    Ventura Martins Paula

    2017-03-01

    Full Text Available Background: Business Process Modelling (BPM) is one of the most important phases of information system design. Business Process (BP) meta-models allow capturing informational and behavioural aspects of business processes. Unfortunately, standard BP meta-modelling approaches focus just on process description, providing different BP models. It is not possible to compare and identify related daily practices in order to improve BP models. This lack of information implies that further research in BP meta-models is needed to reflect the evolution/change in BP. Considering this limitation, this paper introduces a new BP meta-model, the Business Process and Practice Alignment Meta-model (BPPAMeta-model). Our intention is to present a meta-model that addresses features related to the alignment between daily work practices and BP descriptions. Objectives: This paper presents a meta-model which integrates daily work information into coherent and sound process definitions. Methods/Approach: The methodology employed in the research follows a design-science approach. Results: The results of the case study are related to the application of the proposed meta-model to align the specification of a BP model with work practice models. Conclusions: This meta-model can be used within the BPPAM methodology to specify or improve business process models based on work practice descriptions.

  10. Challenges and opportunities for integrating lake ecosystem modelling approaches

    Science.gov (United States)

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  11. Software sensors based on the grey-box modelling approach

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.; Strube, Rune

    1996-01-01

    In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box model for the specific dynamics is identified. Similarly, an on-line software sensor for detecting the occurrence of backwater phenomena can be developed by comparing the dynamics of a flow measurement with a nearby level measurement. For treatment plants it is found that grey-box models applied to on…
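The first feature noted in this record, filtering on-line measurements according to a model, can be sketched with a scalar Kalman filter driven by a random-walk state model. This is an illustrative stand-in, not the authors' actual wastewater grey-box model; the noise variances and the synthetic flow signal are assumptions.

```python
import numpy as np

def kalman_filter(z, q=1e-4, r=1e-2):
    """Scalar Kalman filter with a random-walk process model:
    q = process noise variance, r = measurement noise variance."""
    x, p = z[0], 1.0           # state estimate and its variance
    out = []
    for zk in z:
        p += q                 # predict step: random-walk process noise
        k = p / (p + r)        # Kalman gain
        x += k * (zk - x)      # update with the measurement innovation
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

# Synthetic on-line flow measurement: constant true flow plus sensor noise
rng = np.random.default_rng(1)
true_flow = np.full(200, 5.0)
noisy = true_flow + rng.normal(0.0, 0.1, 200)
filtered = kalman_filter(noisy)
```

After a short transient, the filtered series tracks the true flow with markedly less scatter than the raw measurement, which is the software-sensor filtering role the abstract describes.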

  12. Applied Research Consultants (ARC): A Vertical Practicum Model of Training Applied Research

    Science.gov (United States)

    Nadler, Joel T.; Cundiff, Nicole L.

    2009-01-01

    The demand for highly trained evaluation consultants is increasing. Furthermore, the gap between job seekers' evaluation competencies and job recruiters' expectations suggests a need for providing practical training experiences. A model using a vertical practicum (advanced students assisting in the training of newer students) is suggested as an…

  13. Novel Multiscale Modeling Tool Applied to Pseudomonas aeruginosa Biofilm Formation

    OpenAIRE

    Biggs, Matthew B.; Papin, Jason A.

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid mod...

  14. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    …on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating… …and allows direct incorporation of high-level and qualitative plant knowledge into the model. These advantages have proven to be very appealing for industrial applications, and the practical, intuitively appealing nature of the framework is demonstrated in chapters describing applications of local methods to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning…

  15. The possibilities of applying a risk-oriented approach to the NPP reliability and safety enhancement problem

    Science.gov (United States)

    Komarov, Yu. A.

    2014-10-01

    An analysis and some generalizations of approaches to risk assessments are presented. Interconnection between different interpretations of the "risk" notion is shown, and the possibility of applying the fuzzy set theory to risk assessments is demonstrated. A generalized formulation of the risk assessment notion is proposed in applying risk-oriented approaches to the problem of enhancing reliability and safety in nuclear power engineering. The solution of problems using the developed risk-oriented approaches aimed at achieving more reliable and safe operation of NPPs is described. The results of studies aimed at determining the need (advisability) to modernize/replace NPP elements and systems are presented together with the results obtained from elaborating the methodical principles of introducing the repair concept based on the equipment technical state. The possibility of reducing the scope of tests and altering the NPP systems maintenance strategy is substantiated using the risk-oriented approach. A probabilistic model for estimating the validity of boric acid concentration measurements is developed.

  16. The development of a curved beam element model applied to finite elements method

    International Nuclear Information System (INIS)

    Bento Filho, A.

    1980-01-01

    A procedure for the evaluation of the stiffness matrix for a thick curved beam element is developed by means of the minimum potential energy principle, applied to finite elements. The displacement field is prescribed through polynomial expansions, and the interpolation model is determined by comparison of results obtained with a sample of different expansions. As a limiting case of the curved beam, three cases of straight beams with different dimensional ratios are analysed, employing the proposed approach. Finally, an interpolation model is proposed and applied to a curved beam with large curvature. Displacements and internal stresses are determined and the results are compared with those found in the literature. (Author) [pt

  17. A new approach for structural health monitoring by applying anomaly detection on strain sensor data

    Science.gov (United States)

    Trichias, Konstantinos; Pijpers, Richard; Meeuwissen, Erik

    2014-03-01

    Structural Health Monitoring (SHM) systems help to monitor critical infrastructures (bridges, tunnels, etc.) remotely and provide up-to-date information about their physical condition. In addition, they help to predict the structure's life and required maintenance in a cost-efficient way. Typically, inspection data gives insight into the structural health. The global structural behavior, and predominantly the structural loading, is generally measured with vibration and strain sensors. Acoustic emission sensors are increasingly used for measuring global crack activity near critical locations. In this paper, we present a procedure for local structural health monitoring by applying Anomaly Detection (AD) on data from strain sensors applied in the expected crack path. Sensor data is analyzed by automatic anomaly detection in order to find crack activity at an early stage. This approach targets the monitoring of critical structural locations, such as welds, near which strain sensors can be applied during construction, and/or locations with limited inspection possibilities during structural operation. We investigate several anomaly detection techniques to detect changes in statistical properties, indicating structural degradation. The most effective one is a novel polynomial fitting technique, which tracks slow changes in sensor data. Our approach has been tested on a representative test structure (bridge deck) in a lab environment, under constant and variable amplitude fatigue loading. In both cases, the evolving cracks at the monitored locations were successfully detected, autonomously, by our AD monitoring tool.
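A minimal version of the polynomial-fitting anomaly detection idea might look as follows. The window size, polynomial degree, threshold and synthetic "strain" signal are all assumptions for illustration, not the authors' tool: each window is fitted with a low-order polynomial, and a window is flagged when its residual spread jumps relative to the baseline window.

```python
import numpy as np

def detect_anomalies(signal, window=50, degree=2, threshold=4.0):
    """Flag fixed-size windows whose residuals around a low-order polynomial
    fit are much larger than in the first (baseline) window."""
    x = np.arange(window)
    base_fit = np.polyval(np.polyfit(x, signal[:window], degree), x)
    base_std = np.std(signal[:window] - base_fit) + 1e-12
    flags = []
    for start in range(0, len(signal) - window + 1, window):
        chunk = signal[start:start + window]
        fit = np.polyval(np.polyfit(x, chunk, degree), x)
        flags.append(bool(np.std(chunk - fit) > threshold * base_std))
    return flags

# Synthetic strain trace: smooth quadratic trend + noise, with a sudden
# high-frequency disturbance in the last quarter standing in for crack activity
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
strain = 0.01 * t**2 + rng.normal(0.0, 0.01, t.size)
strain[150:] += 0.5 * np.sin(40.0 * t[150:])
flags = detect_anomalies(strain, window=50)
```

The polynomial fit absorbs the slow loading trend, so only the window containing the abrupt disturbance exceeds the residual threshold.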

  18. A Regional Guidebook for Applying the Hydrogeomorphic Approach to Assessing Wetland Functions of Depressional Wetlands in Peninsular Florida

    National Research Council Canada - National Science Library

    Noble, Chris

    2004-01-01

    The Hydrogeomorphic (HGM) Approach is a method for developing functional indices and the protocols used to apply these indices to the assessment of wetland functions at a site-specific scale. The HGM Approach was initially...

  19. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Full Text Available Abstract Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
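The discrete end of this modelling spectrum is easy to make concrete. Below is a minimal synchronous Boolean-network sketch (not the Finite State Linear Model itself); the three-gene circuit is invented for illustration, with each gene's next state a logical function of the current state.

```python
def step(state, rules):
    """Advance the Boolean network one synchronous time step."""
    return {gene: rule(state) for gene, rule in rules.items()}

def trajectory(state, rules, n_steps):
    """Return the list of states visited, starting from `state`."""
    states = [state]
    for _ in range(n_steps):
        state = step(state, rules)
        states.append(state)
    return states

# Hypothetical three-gene circuit: A is repressed by C,
# B is activated by A, and C is activated by B.
rules = {
    "A": lambda s: not s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: s["B"],
}
```

Starting from A on and B, C off, this circuit cycles with period six, a simple example of the oscillatory dynamics such network models can exhibit.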

  20. Applying Functional Modeling for Accident Management of Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    Modeling is given and a detailed presentation of the foundational means-end concepts is presented and the conditions for proper use in modelling accidents are identified. It is shown that Multilevel Flow Modeling can be used for modelling and reasoning about design basis accidents. Its possible role...... for information sharing and decision support in accidents beyond design basis is also indicated. A modelling example demonstrating the application of Multilevel Flow Modelling and reasoning for a PWR LOCA is presented...

  1. Applying Functional Modeling for Accident Management of Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    Modeling is given and a detailed presentation of the foundational means-end concepts is presented and the conditions for proper use in modelling accidents are identified. It is shown that Multilevel Flow Modeling can be used for modelling and reasoning about design basis accidents. Its possible role...... for information sharing and decision support in accidents beyond design basis is also indicated. A modelling example demonstrating the application of Multilevel Flow Modelling and reasoning for a PWR LOCA is presented....

  2. Changes in speed distribution: Applying aggregated safety effect models to individual vehicle speeds.

    Science.gov (United States)

    Vadeby, Anna; Forsman, Åsa

    2017-06-01

    This study investigated the effect of applying two aggregated models (the Power model and the Exponential model) to individual vehicle speeds instead of mean speeds. This is of particular interest when the measure introduced affects different parts of the speed distribution differently. The aim was to examine how the estimated overall risk was affected when the models are assumed to be valid at the individual vehicle level. Speed data from two applications of speed measurements were used in the study: an evaluation of movable speed cameras and a national evaluation of new speed limits in Sweden. The results showed that for the Power model, applying the model at the individual vehicle level rather than the aggregated level made essentially no difference in the case of injury accidents. However, for fatalities the difference was greater, especially for roads with new cameras, where those driving fastest reduced their speed the most. For the case with new speed limits, the individual approach estimated a somewhat smaller effect, reflecting that changes in the 15th percentile (P15) were somewhat larger than changes in P85 in this case. For the Exponential model there was also a clear, although small, difference between applying the model to mean speed changes and to individual vehicle speed changes when speed cameras were used. This applied both to injury accidents and fatalities. There were also larger effects for the Exponential model than for the Power model, especially for injury accidents. In conclusion, applying the Power or Exponential model to individual vehicle speeds is an alternative that provides reasonable results in relation to the original Power and Exponential models, but more research is needed to clarify the shape of the individual risk curve. It is not surprising that the impact on severe traffic crashes was larger in situations where those driving fastest reduced their speed the most. Further investigations on use of the Power and/or the
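The difference between the aggregated and individual applications can be illustrated with Nilsson's Power model, in which accident frequency scales as a power of speed (an exponent around 4 is commonly cited for fatalities, around 2 for injury accidents). The speed samples and helper functions below are invented for illustration; the paper's actual computations may differ.

```python
import numpy as np

def power_model_aggregate(mean_before, mean_after, exponent):
    """Power model applied to mean speeds (the aggregated approach)."""
    return (mean_after / mean_before) ** exponent

def power_model_individual(v_before, v_after, exponent):
    """Power model applied per vehicle: total risk is taken proportional
    to the mean of the individual speeds raised to the exponent."""
    v_before = np.asarray(v_before, dtype=float)
    v_after = np.asarray(v_after, dtype=float)
    return (v_after ** exponent).mean() / (v_before ** exponent).mean()
```

If the fastest drivers reduce their speed the most, the individual-level estimate implies a larger fatality reduction than the mean-speed estimate, mirroring the speed-camera result described in the abstract.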

  3. Applying nonlinear MODM model to supply chain management with quantity discount policy under complex fuzzy environment

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2014-06-01

    Full Text Available Purpose: The aim of this paper is to deal with supply chain management (SCM) with a quantity discount policy under a complex fuzzy environment, characterized by bi-fuzzy variables. By taking into account the strategy and the process of decision making, a bi-fuzzy nonlinear multiple objective decision making (MODM) model is presented to solve the proposed problem. Design/methodology/approach: The bi-fuzzy variables in the MODM model are transformed into trapezoidal fuzzy variables by the DMs' degrees of optimism ?1 and ?2, which are subsequently de-fuzzified by the expected value index. For solving the complex nonlinear model, a multi-objective adaptive particle swarm optimization algorithm (MO-APSO) is designed as the solution method. Findings: The proposed model and algorithm are applied to a typical example of an SCM problem to illustrate their effectiveness. Based on the sensitivity analysis of the results, the bi-fuzzy nonlinear MODM SCM model is shown to be sensitive to the possibility level ?1. Practical implications: The study focuses on SCM under a complex fuzzy environment, which has great practical significance. Therefore, the bi-fuzzy MODM model and MO-APSO can be further applied to SCM problems with quantity discount policies. Originality/value: The bi-fuzzy variable is employed in the nonlinear MODM model of SCM to characterize the hybrid uncertain environment, and this work is original. In addition, a hybrid crisp approach is proposed to transform the model into an equivalent crisp one by the DMs' degrees of optimism and the expected value index. Since the MODM model considers the bi-fuzzy environment and the quantity discount policy, the paper has great practical significance.

  4. Object oriented business process modelling in RFID applied computing environment

    NARCIS (Netherlands)

    Zhao, X.; Liu, Chengfei; Lin, T.; Ranasinghe, D.C.; Sheng, Q.Z.

    2010-01-01

    As a tracking technology, Radio Frequency Identification (RFID) is now widely applied to enhance the context awareness of enterprise information systems. Such awareness provides great opportunities to facilitate business process automation and thereby improve operation efficiency and accuracy. With

  5. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    Energy Technology Data Exchange (ETDEWEB)

    Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica

    2015-07-01

    In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, delayed neutron precursors and temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the next intervals. With the application of the Polynomial Approximation Method it is possible to overcome the stiffness problem of the equations. In this way, one varies the time step size of the Polynomial Approach Method and performs an analysis of the precision and computational time. Moreover, we compare the method with different types of approximations (linear, quadratic and cubic) of the power series. The results for neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
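The core of the method can be sketched for the simplified case of one delayed group, constant reactivity and no temperature feedback (the paper's full model includes feedback and six groups). The time derivatives of n(t) and C(t) are generated recursively from the kinetics equations, summed as a truncated power series over a short step, and the step's end values seed the next interval (analytic continuation). All parameter values below are illustrative, not taken from the paper.

```python
def polynomial_step(n, c, h, rho, beta, lam, Lambda, order=6):
    """Advance one-group point kinetics (constant reactivity) by one step h
    using a truncated power-series (polynomial) expansion."""
    a = (rho - beta) / Lambda     # prompt coefficient
    b = beta / Lambda             # precursor production coefficient
    dn, dc = n, c                 # k-th derivatives (k = 0)
    n_new, c_new, fact = n, c, 1.0
    for k in range(1, order + 1):
        # derivative recursion from dn/dt = a*n + lam*c, dc/dt = b*n - lam*c
        dn, dc = a * dn + lam * dc, b * dn - lam * dc
        fact *= k
        n_new += dn * h ** k / fact
        c_new += dc * h ** k / fact
    return n_new, c_new

def solve(n0, rho, t_end, h=1e-3, beta=0.0065, lam=0.08, Lambda=1e-3):
    """Chain power-series steps (analytic continuation) from equilibrium."""
    n = n0
    c = beta * n0 / (lam * Lambda)    # equilibrium precursor concentration
    for _ in range(round(t_end / h)):
        n, c = polynomial_step(n, c, h, rho, beta, lam, Lambda)
    return n
```

At zero reactivity the series reproduces the steady state; a small positive insertion produces the familiar prompt jump followed by slow growth.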

  6. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    International Nuclear Information System (INIS)

    Tumelero, Fernanda; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana

    2015-01-01

    In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, delayed neutron precursors and temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the next intervals. With the application of the Polynomial Approximation Method it is possible to overcome the stiffness problem of the equations. In this way, one varies the time step size of the Polynomial Approach Method and performs an analysis of the precision and computational time. Moreover, we compare the method with different types of approximations (linear, quadratic and cubic) of the power series. The results for neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)

  7. Creating patient value in glaucoma care : applying quality costing and care delivery value chain approaches

    NARCIS (Netherlands)

    D.F. de Korne (Dirk); J.C.A. Sol (Kees); T. Custers (Thomas); E. van Sprundel (Esther); B.M. van Ineveld (Martin); H.G. Lemij (Hans); N.S. Klazinga (Niek)

    2009-01-01

    Purpose: The purpose of this paper is to explore, in a specific hospital care process, the applicability in practice of the theories of quality costing and value chains. Design/methodology/approach: In a retrospective case study an in-depth evaluation of the use of a quality cost model

  8. A multicriteria decision making approach applied to improving maintenance policies in healthcare organizations.

    Science.gov (United States)

    Carnero, María Carmen; Gómez, Andrés

    2016-04-23

    Healthcare organizations have far greater maintenance needs for their medical equipment than other organizations, as much of this equipment is used directly with patients. However, the literature on asset management in healthcare organizations is very limited. The aim of this research is to provide a more rational application of maintenance policies, leading to an increase in quality of care. This article describes a multicriteria decision-making approach which integrates Markov chains with the multicriteria Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH), to facilitate the best choice of combination of maintenance policies using the judgements of a multi-disciplinary decision group. The proposed approach takes into account the level of acceptance that a given alternative would have among professionals, as well as criteria related to cost, quality of care and impact of care cover. This multicriteria approach is applied to four dialysis subsystems: patients infected with hepatitis C, infected with hepatitis B, acute and chronic; in all cases, the maintenance strategy obtained consists of applying corrective and preventive maintenance plus two reserve machines. The added value in decision-making practices from this research comes from: (i) integrating the use of Markov chains to obtain the alternatives to be assessed by a multicriteria methodology; (ii) proposing the use of MACBETH to make rational decisions on asset management in healthcare organizations; (iii) applying the multicriteria approach to select a set or combination of maintenance policies in four dialysis subsystems of a health care organization. In the multicriteria decision-making approach proposed, economic criteria have been used, related to the quality of care which is desired for patients (availability), and the acceptance that each alternative would have considering the maintenance and healthcare resources which exist in the organization, with the inclusion of a
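The Markov-chain side of such an approach can be illustrated with a toy availability comparison; the states and transition probabilities below are invented for the example, and the paper's actual models plus the MACBETH weighting step are far richer.

```python
import numpy as np

def stationary(P):
    """Stationary distribution pi of a discrete-time Markov chain (pi P = pi)."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0                      # normalisation constraint sum(pi) = 1
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Hypothetical daily states of a dialysis machine: 0 operating, 1 degraded, 2 failed.
P_corrective = np.array([[0.95, 0.04, 0.01],
                         [0.00, 0.85, 0.15],
                         [0.70, 0.00, 0.30]])   # repair only after failure
P_preventive = np.array([[0.95, 0.04, 0.01],
                         [0.60, 0.25, 0.15],    # preventive action restores degraded units
                         [0.70, 0.00, 0.30]])

def availability(P):
    return stationary(P)[:2].sum()   # operating or degraded counts as available
```

Here adding preventive maintenance raises long-run availability, the kind of criterion that would then be traded off against cost and acceptance in the multicriteria step.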

  9. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Directory of Open Access Journals (Sweden)

    Nadia Said

    Full Text Available Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
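One ingredient of such an approach can be sketched: recovering an ACT-R parameter by mathematical optimization. The example fits the decay parameter d of the base-level activation equation A = ln(sum_j t_j^-d) to observed activations by least squares. The histories, the least-squares objective and the use of scipy are illustrative assumptions, not the study's actual setup (which optimizes performance on the Sugar Factory task).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def activation(times_since_use, d):
    """ACT-R base-level activation: A = ln(sum_j t_j ** -d)."""
    t = np.asarray(times_since_use, dtype=float)
    return float(np.log(np.sum(t ** -d)))

def fit_decay(histories, observed):
    """Least-squares recovery of the decay parameter d over (0.01, 2.0)."""
    loss = lambda d: sum((activation(h, d) - a) ** 2
                         for h, a in zip(histories, observed))
    result = minimize_scalar(loss, bounds=(0.01, 2.0), method="bounded")
    return result.x
```

With activations generated from a known decay value, the bounded scalar optimizer recovers that value, the same pattern the authors follow at larger scale for task performance.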

  10. Experiences in applying Bayesian integrative models in interdisciplinary modeling: the computational and human challenges

    DEFF Research Database (Denmark)

    Kuikka, Sakari; Haapasaari, Päivi Elisabet; Helle, Inari

    2011-01-01

    We review the experience obtained in using integrative Bayesian models in interdisciplinary analysis focusing on sustainable use of marine resources and environmental management tasks. We have applied Bayesian models to both fisheries and environmental risk analysis problems. Bayesian belief...... be time consuming and research projects can be difficult to manage due to unpredictable technical problems related to parameter estimation. Biology, sociology and environmental economics have their own scientific traditions. Bayesian models are becoming traditional tools in fisheries biology, where...

  11. On the choice of electromagnetic model for short high-intensity arcs, applied to welding

    International Nuclear Information System (INIS)

    Choquet, Isabelle; Shirvan, Alireza Javidi; Nilsson, Håkan

    2012-01-01

    We have considered four different approaches for modelling the electromagnetic fields of high-intensity electric arcs: (i) three-dimensional, (ii) two-dimensional axi-symmetric, (iii) the electric potential formulation and (iv) the magnetic field formulation. The underlying assumptions and the differences between these models are described in detail. Models (i) to (iii) reduce to the same limit for an axi-symmetric configuration with negligible radial current density, contrary to model (iv). Models (i) to (iii) were retained and implemented in the open source CFD software OpenFOAM. The simulation results were first validated against the analytic solution of an infinite electric rod. Perfect agreement was obtained for all the models tested. The electromagnetic models (i) to (iii) were then coupled with thermal fluid mechanics, and applied to axi-symmetric gas tungsten arc welding test cases with short arc (2, 3 and 5 mm) and truncated conical electrode tip. Models (i) and (ii) lead to the same simulation results, but not model (iii). Model (iii) is suited in the specific limit of long axi-symmetric arc with negligible electrode tip effect, i.e. negligible radial current density. For short axi-symmetric arc with significant electrode tip effect, the more general axi-symmetric formulation of model (ii) should instead be used. (paper)

  12. Evaluation model applied to TRANSPETRO's Marine Terminals Standardization Program

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Maria Fatima Ludovico de; Mueller, Gabriela [Pontificia Universidade Catolica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, RJ (Brazil). Instituto Tecnologico; Garcia, Luciano Maldonado [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper describes an innovative evaluation model applied to TRANSPETRO's 'Marine Terminals Standardization Program', based on up-to-date approaches to program evaluation and organizational learning. Since the program was launched in 2004, the need has become evident for an evaluation model able to track implementation progress, measure the degree of standards compliance, and gauge the program's potential economic, social and environmental impacts. Within a vision of safe and environmentally responsible operation of marine terminals, this evaluation model was jointly designed by TRANSPETRO and PUC-Rio to promote continuous improvement and learning in operational practices and in the standardization process itself. TRANSPETRO believes that standardization supports its services and management innovation capability by creating objective and internationally recognized parameters, targets and metrology for its business activities. The conceptual model and application guidelines for this important tool are presented in this paper, as well as the next steps towards its implementation. (author)

  13. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  14. Service creation: a model-based approach

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis

    1999-01-01

    This paper presents a model-based approach to support service creation. In this approach, services are assumed to be created from (available) software components. The creation process may involve multiple design steps in which the requested service is repeatedly decomposed into more detailed

  15. Models of galaxies - The modal approach

    International Nuclear Information System (INIS)

    Lin, C.C.; Lowe, S.A.

    1990-01-01

    The general viability of the modal approach to the spiral structure in normal spirals and the barlike structure in certain barred spirals is discussed. The usefulness of the modal approach in the construction of models of such galaxies is examined, emphasizing the adoption of a model appropriate to observational data for both the spiral structure of a galaxy and its basic mass distribution. 44 refs

  16. Model predictive control based on reduced order models applied to belt conveyor system.

    Science.gov (United States)

    Chen, Wei; Li, Xin

    2016-11-01

    In this paper, a model predictive controller based on a reduced-order model is proposed to control a belt conveyor system, an electro-mechanical complex system with a long visco-elastic body. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced-order model of the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced-order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that balanced truncation can significantly reduce the model order with high accuracy, and that model predictive control based on the reduced-order model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
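Balanced truncation itself is compact enough to sketch. The square-root algorithm below (standard textbook form, not the paper's code) solves the two Lyapunov equations for the Gramians, balances them, and keeps the r states with the largest Hankel singular values; the test system used afterwards is an arbitrary stable chain, loosely analogous to a discretised visco-elastic belt.

```python
import numpy as np
from scipy import linalg

def balanced_truncation(A, B, C, r):
    """Reduce the stable system (A, B, C) to order r; returns (Ar, Br, Cr)
    plus the Hankel singular values of the full system."""
    Wc = linalg.solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Wo = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Zc = linalg.cholesky(Wc, lower=True)
    Zo = linalg.cholesky(Wo, lower=True)
    U, s, Vt = linalg.svd(Zo.T @ Zc)                      # s = Hankel singular values
    S = np.diag(1.0 / np.sqrt(s[:r]))
    Tl = S @ U[:, :r].T @ Zo.T                            # left projection
    Tr = Zc @ Vt[:r, :].T @ S                             # right projection
    return Tl @ A @ Tr, Tl @ B, C @ Tr, s
```

The reduced model is guaranteed stable, and the H-infinity error (hence also the DC-gain error) is bounded by twice the sum of the discarded Hankel singular values.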

  17. Improving the efficiency of a chemotherapy day unit: applying a business approach to oncology.

    Science.gov (United States)

    van Lent, Wineke A M; Goedbloed, N; van Harten, W H

    2009-03-01

    To improve the efficiency of a hospital-based chemotherapy day unit (CDU). The CDU was benchmarked against two other CDUs to identify their attainable performance levels for efficiency, and the causes of the differences. Furthermore, an in-depth analysis using a business approach, called lean thinking, was performed. An integrated set of interventions was implemented, among them a new planning system. The results were evaluated using pre- and post-measurements. We observed 24% growth in treatments and bed utilisation, a 12% increase in staff member productivity and an 81% reduction in overtime. The method improved process design and led to increased efficiency and more timely delivery of care. Thus, the business approaches, adapted for healthcare, were successfully applied. The method may serve as an example for other oncology settings with problems concerning waiting times, patient flow or lack of beds.

  18. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    Directory of Open Access Journals (Sweden)

    Matthew B Biggs

    Full Text Available Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.

  19. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    Science.gov (United States)

    Biggs, Matthew B; Papin, Jason A

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.

  20. Applying a synthetic approach to the resilience of Finnish reindeer herding as a changing livelihood

    Directory of Open Access Journals (Sweden)

    Simo Sarkki

    2016-12-01

    Full Text Available Reindeer herding is an emblematic livelihood for Northern Finland, culturally important for local people and valuable in tourism marketing. We examine the livelihood resilience of Finnish reindeer herding by narrowing the focus of general resilience on social-ecological systems (SESs) to a specific livelihood, while also acknowledging the wider contexts in which reindeer herding is embedded. The questions for specified resilience can be combined with the applied DPSIR approach (Drivers; Pressures: resilience to what; State: resilience of what; Impacts: resilience for whom; Responses: resilience by whom and how). This paper is based on a synthesis of the authors' extensive anthropological fieldwork on reindeer herding and other land uses in Northern Finland. Our objective is to synthesize the various opportunities and challenges that underpin the resilience of reindeer herding as a viable livelihood. The DPSIR approach, applied here as a three-step procedure, helps focus the analysis on different components of the SES and their dynamic interactions. First, various land use-related DPSIR factors and their relations (synergies and trade-offs) to reindeer herding are mapped. Second, detailed DPSIR factors underpinning the resilience of reindeer herding are identified. Third, examples of interrelations between DPSIR factors are explored, revealing the key dynamics between Pressures, State, Impacts, and Responses related to the livelihood resilience of reindeer herding. In the Discussion section, we recommend that future applications of the DPSIR approach to examining livelihood resilience should (1) address cumulative pressures, (2) consider the state dimension as more tuned toward the social side of the SES, (3) assess both the negative and positive impacts of environmental change on the examined livelihood by a combination of science-led top-down and participatory bottom-up approaches, and (4) examine and propose governance solutions as well as local adaptations by

  1. Empirical modeling and data analysis for engineers and applied scientists

    CERN Document Server

    Pardo, Scott A

    2016-01-01

    This textbook teaches advanced undergraduate and first-year graduate students in Engineering and Applied Sciences to gather and analyze empirical observations (data) in order to aid in making design decisions. While science is about discovery, the primary paradigm of engineering and "applied science" is design. Scientists are in the discovery business and want, in general, to understand the natural world rather than to alter it. In contrast, engineers and applied scientists design products, processes, and solutions to problems. That said, statistics, as a discipline, is mostly oriented toward the discovery paradigm. Young engineers come out of their degree programs having taken courses such as "Statistics for Engineers and Scientists" without any clear idea as to how they can use statistical methods to help them design products or processes. Many seem to think that statistics is only useful for demonstrating that a device or process actually does what it was designed to do. Statistics courses emphasize creati...

  2. An integrated modeling approach to age invariant face recognition

    Science.gov (United States)

    Alvi, Fahad Bashir; Pears, Russel

    2015-03-01

    This research study proposes a novel method for face recognition based on anthropometric features, making use of an integrated approach comprising a global model and personalized models. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers an individual's aging patterns, while a global model captures general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database test and training sets. We used the k nearest neighbor approach for building the personalized and global models, and regression analysis was applied to build the models. During the test phase, we resort to voting on different features. We used the FG-NET database for checking the results of our technique and achieved a 65 percent Rank 1 identification rate.

  3. Adequateness of applying the Zmijewski model on Serbian companies

    Directory of Open Access Journals (Sweden)

    Pavlović Vladan

    2012-12-01

    Full Text Available The aim of the paper is to determine the predictive accuracy of the Zmijewski model for Serbian companies on an eligible sample. At the same time, the paper identifies the model's strengths and weaknesses and the limitations of its possible application. Bearing in mind that the economic environment in Serbia is not similar to that of the United States at the time the model was developed, the Zmijewski model is surprisingly accurate in the case of Serbian companies. The accuracy was slightly weaker than the model's original results in the U.S., but much better than the results the model gave in the U.S. in the periods 1988-1991 and 1992-1999. The model also gave better results in Serbia than in Croatia, even though the model had been adjusted for Croatia.
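For reference, the Zmijewski score in its commonly cited 1984 probit form; the paper may evaluate a variant, and the coefficients and example figures below are the textbook ones, not taken from the Serbian sample.

```python
import math

def zmijewski_score(net_income, total_assets, total_liabilities,
                    current_assets, current_liabilities):
    """Zmijewski (1984) financial distress score with the commonly cited
    coefficients: X = -4.336 - 4.513*ROA + 5.679*leverage + 0.004*liquidity."""
    roa = net_income / total_assets
    leverage = total_liabilities / total_assets
    liquidity = current_assets / current_liabilities
    return -4.336 - 4.513 * roa + 5.679 * leverage + 0.004 * liquidity

def distress_probability(score):
    """Probit link: distress probability is the standard normal CDF of the score."""
    return 0.5 * (1.0 + math.erf(score / math.sqrt(2.0)))
```

A profitable, lightly leveraged firm maps to a near-zero distress probability, while a loss-making, highly leveraged one maps well above the usual 0.5 classification cut-off.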

  4. Comparison of two anisotropic layer models applied to induction motors

    NARCIS (Netherlands)

    Sprangers, R.L.J.; Paulides, J.J.H.; Boynov, K.O.; Waarma, J.; Lomonova, E.

    2013-01-01

    A general description of the Anisotropic Layer Theory, derived in the polar coordinate system and applied to the analysis of squirrel-cage induction motors (IMs), is presented. The theory considers non-conductive layers, layers with predefined current density and layers with induced current density.

  5. Applying the Job Characteristics Model to the College Education Experience

    Science.gov (United States)

    Kass, Steven J.; Vodanovich, Stephen J.; Khosravi, Jasmine Y.

    2011-01-01

    Boredom is one of the most common complaints among university students, with studies suggesting its link to poor grades, drop out, and behavioral problems. Principles borrowed from industrial-organizational psychology may help prevent boredom and enrich the classroom experience. In the current study, we applied the core dimensions of the job…

  6. Process Modeling Applied to Metal Forming and Thermomechanical Processing

    Science.gov (United States)

    1984-09-01

    measured (Lloyd & Kenny, 1982 and Kohara & Katsuta, 1978). The interpretation of these relations is qualitative at this stage (Lloyd et al. (1978...34", Applied Science Publishers, London. Kelly, P.N. (1971) J. Aust. Inst. Metals, 16, 104. Kohara, S. and Katsuta, M. (1978) J. Jap. In

  7. Comparison of two anisotropic layer models applied to induction motors

    NARCIS (Netherlands)

    Sprangers, R.L.J.; Paulides, J.J.H.; Boynov, K.O.; Lomonova, E.A.; Waarma, J.

    2014-01-01

    A general description of the Anisotropic Layer Theory, derived in the polar coordinate system and applied to the analysis of squirrel-cage induction motors (IMs), is presented. The theory considers non-conductive layers, layers with predefined current density and layers with induced current density.

  8. Applying theory-driven approaches to understanding and modifying clinicians' behavior: what do we know?

    Science.gov (United States)

    Perkins, Matthew B; Jensen, Peter S; Jaccard, James; Gollwitzer, Peter; Oettingen, Gabriele; Pappadopulos, Elizabeth; Hoagwood, Kimberly E

    2007-03-01

    Despite major recent research advances, large gaps exist between accepted mental health knowledge and clinicians' real-world practices. Although hundreds of studies have successfully utilized basic behavioral science theories to understand, predict, and change patients' health behaviors, the extent to which these theories (most notably the theory of reasoned action, TRA, and its extension, the theory of planned behavior, TPB) have been applied to understand and change clinician behavior is unclear. This article reviews the application of theory-driven approaches to understanding and changing clinician behaviors. MEDLINE and PsycINFO databases were searched, along with bibliographies, textbooks on health behavior or public health, and references from experts, to find article titles that describe theory-driven approaches (TRA or TPB) to understanding and modifying health professionals' behavior. A total of 19 articles that detailed 20 studies described the use of TRA or TPB and clinicians' behavior. Eight articles describe the use of TRA or TPB with physicians, four relate to nurses, three relate to pharmacists, and two relate to health workers. Only two articles applied TRA or TPB to mental health clinicians. The body of work shows that different constructs of TRA or TPB predict intentions and behavior among different groups of clinicians and for different behaviors and guidelines. The number of studies on this topic is extremely limited, but they offer a rationale and a direction for future research as well as a theoretical basis for increasing the specificity and efficiency of clinician-targeted interventions.

  9. Applying Functional Modeling for Accident Management of Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Lind, Morten; Zhang Xinxin [Harbin Engineering University, Harbin (China)

    2014-08-15

    The paper investigates applications of functional modelling for accident management in complex industrial plants, with special reference to nuclear power production. The main applications for information sharing among decision makers and for decision support are identified. An overview of Multilevel Flow Modeling is given, the foundational means-end concepts are presented in detail, and the conditions for their proper use in modelling accidents are identified. It is shown that Multilevel Flow Modeling can be used for modelling and reasoning about design basis accidents. Its possible role for information sharing and decision support in accidents beyond the design basis is also indicated. A modelling example demonstrating the application of Multilevel Flow Modeling and reasoning to a PWR LOCA is presented.

  10. Multiscale approach to equilibrating model polymer melts

    DEFF Research Database (Denmark)

    Svaneborg, Carsten; Ali Karimi-Varzaneh, Hossein; Hojdis, Nils

    2016-01-01

    We present an effective and simple multiscale method for equilibrating Kremer-Grest model polymer melts of varying stiffness. In our approach, we progressively equilibrate the melt structure above the tube scale, inside the tube and finally at the monomeric scale. We make use of models designed…

  11. Nonlinear models applied to seed germination of Rhipsalis cereuscula Haw (Cactaceae

    Directory of Open Access Journals (Sweden)

    Terezinha Aparecida Guedes

    2014-09-01

    The objective of this analysis was to fit germination data of Rhipsalis cereuscula Haw seeds to the Weibull model with three parameters using Frequentist and Bayesian methods. Five parameterizations were compared using the Bayesian analysis to fit a prior distribution. The parameter estimates from the Frequentist method were similar to the Bayesian responses considering the following non-informative a priori distributions for the parameter vectors: gamma(10³, 10³) in model M1, normal(0, 10⁶) in model M2, uniform(0, Lsup) in model M3, exp(μ) in model M4 and lognormal(μ, 10⁶) in model M5. However, to achieve convergence in models M4 and M5, we applied the μ from the estimates of the Frequentist approach. The best models fitted by the Bayesian method were M1 and M3. The adequacy of these models was based on their advantages over the Frequentist method, such as the reduced computational effort and the possibility of model comparison.
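    The three-parameter Weibull germination curve at the heart of this abstract can be sketched in a few lines. The data, grid ranges and "true" parameter values below are hypothetical, and a coarse grid search stands in for the study's Frequentist fit:

```python
import math

def weibull_cdf(t, shape, scale, loc):
    """Three-parameter Weibull CDF: cumulative germination fraction at time t."""
    if t <= loc:
        return 0.0
    return 1.0 - math.exp(-((t - loc) / scale) ** shape)

# Hypothetical cumulative germination fractions observed at days 1..10.
times = list(range(1, 11))
observed = [weibull_cdf(t, 2.0, 4.0, 1.0) for t in times]

def sse(params):
    """Sum of squared errors between the candidate curve and the data."""
    shape, scale, loc = params
    return sum((weibull_cdf(t, shape, scale, loc) - y) ** 2
               for t, y in zip(times, observed))

# Crude frequentist fit: coarse grid search over the three parameters.
best = min(
    ((sh / 10, sc / 10, lo / 10)
     for sh in range(5, 41, 5)       # shape 0.5 .. 4.0
     for sc in range(10, 81, 5)      # scale 1.0 .. 8.0
     for lo in range(0, 21, 5)),     # location 0.0 .. 2.0
    key=sse,
)
print(best)
```

    In practice one would replace the grid search with a proper least-squares or maximum-likelihood optimizer, and the Bayesian variants would put the priors listed above on (shape, scale, location).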

  12. Applying Model Checking to Industrial-Sized PLC Programs

    CERN Document Server

    AUTHOR|(CDS)2079190; Darvas, Daniel; Blanco Vinuela, Enrique; Tournier, Jean-Charles; Bliudze, Simon; Blech, Jan Olaf; Gonzalez Suarez, Victor M

    2015-01-01

    Programmable logic controllers (PLCs) are embedded computers widely used in industrial control systems. Ensuring that PLC software complies with its specification is a challenging task. Formal verification has become a recommended practice to ensure the correctness of safety-critical software but is still underused in industry due to the complexity of building and managing formal models of real applications. In this paper, we propose a general methodology to perform automated model checking of complex properties expressed in temporal logics (e.g., CTL, LTL) on PLC programs. This methodology is based on an intermediate model (IM), meant to transform PLC programs written in various standard languages (ST, SFC, etc.) to different modeling languages of verification tools. We present the syntax and semantics of the IM and the transformation rules of the ST and SFC languages to the nuXmv model checker passing through the intermediate model. Finally, two real case studies of CERN PLC programs, written mainly in th…

  13. Polarimetric SAR interferometry applied to land ice: modeling

    DEFF Research Database (Denmark)

    Dall, Jørgen; Papathanassiou, Konstantinos; Skriver, Henning

    2004-01-01

    This paper introduces a few simple scattering models intended for the application of polarimetric SAR interferometry to land ice. The principal aim is to eliminate the penetration bias hampering ice sheet elevation maps generated with single-channel SAR interferometry. The polarimetric coherent scattering models are similar to the oriented-volume model and the random-volume-over-ground model used in vegetation studies, but the ice models are adapted to the different geometry of land ice. Also, due to compaction, land ice is not uniform; a fact that must be taken into account for large penetration depths. The validity of the scattering models is examined using L-band polarimetric interferometric SAR data acquired with the EMISAR system over an ice cap located in the percolation zone of the Greenland ice sheet. Radar reflectors were deployed on the ice surface prior to the data acquisition in order…

  14. Application of various FLD modelling approaches

    Science.gov (United States)

    Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.

    2005-07-01

    This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.
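    For orientation, under power-law hardening $\sigma = K\varepsilon^n$ Hill's localized necking criterion reduces to a commonly quoted closed form (a textbook result, not taken from the paper's BBC2003-based computations):

```latex
\varepsilon_1^{*} = \frac{n}{1+\rho}, \qquad
\rho = \frac{\mathrm{d}\varepsilon_2}{\mathrm{d}\varepsilon_1} \le 0,
```

    so that at plane strain ($\rho = 0$) the predicted forming limit equals the hardening exponent, $\varepsilon_1^{*} = n$. The Marciniak-Kuczynski and modified maximum force approaches compared in the paper require numerical solution rather than a closed form.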

  15. Verification of short lead time forecast models: applied to Kp and Dst forecasting

    Science.gov (United States)

    Wintoft, Peter; Wik, Magnus

    2016-04-01

    In the ongoing EU/H2020 project PROGRESS, models that predict Kp, Dst, and AE from L1 solar wind data will be used as inputs to radiation belt models. The possible lead times from L1 measurements are shorter (tens of minutes to hours) than the typical duration of the physical phenomena that are to be forecast. Under these circumstances several metrics fail to single out trivial cases, such as persistence. In this work we explore metrics and approaches for short lead time forecasts and apply them to current Kp and Dst forecast models. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 637302.
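    One standard way to "single out" persistence, as the abstract discusses, is a skill score that measures a forecast against the persistence baseline. The series below is hypothetical, and the metric choice (an MSE skill score) is one common option rather than necessarily the one adopted in PROGRESS:

```python
def mse(pred, obs):
    """Mean squared error between two equal-length sequences."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_vs_persistence(obs, forecasts, lead=1):
    """MSE skill score relative to persistence: 1 - MSE_model / MSE_persistence.
    Positive means the model beats carrying the last observation forward."""
    persistence = obs[:-lead]          # forecast(t + lead) = obs(t)
    target = obs[lead:]
    model = forecasts[lead:]
    return 1.0 - mse(model, target) / mse(persistence, target)

# Hypothetical hourly Dst-like series (nT) and a model forecast of it.
obs = [-5, -12, -30, -55, -60, -48, -35, -20]
fcst = [-6, -10, -33, -50, -58, -50, -33, -22]
score = skill_vs_persistence(obs, fcst)
print(round(score, 3))
```

    During rapid storm onsets persistence does poorly, so a model with modest absolute errors can still score close to 1; during quiet intervals the same model may score near or below 0, which is why lead-time-aware metrics matter.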

  16. Applied exposure modeling for residual radioactivity and release criteria

    International Nuclear Information System (INIS)

    Lee, D.W.

    1989-01-01

    The protection of public health and the environment from the release of materials with residual radioactivity for recycle or disposal as wastes without radioactive contents of concern presents a formidable challenge. Existing regulatory criteria are based on technical judgment concerning detectability and simple modeling. Recently, exposure modeling methodologies have been developed to provide a more consistent level of health protection. Release criteria derived from the application of exposure modeling methodologies share the same basic elements of analysis but are developed to serve a variety of purposes. Models for the support of regulations for all applications rely on conservative interpretations of generalized conditions while models developed to show compliance incorporate specific conditions not likely to be duplicated at other sites. Research models represent yet another type of modeling which strives to simulate the actual behavior of released material. In spite of these differing purposes, exposure modeling permits the application of sound and reasoned principles of radiation protection to the release of materials with residual levels of radioactivity. Examples of the similarities and differences of these models are presented and an application to the disposal of materials with residual levels of uranium contamination is discussed. 5 refs., 2 tabs

  17. A Hybrid Approach to the Valuation of RFID/MEMS technology applied to ordnance inventory

    OpenAIRE

    Doerr, Kenneth H.; Gates, William R.; Mutty, John E.

    2006-01-01

    We report on an analysis of the costs and benefits of fielding Radio Frequency Identification / MicroElectroMechanical System (RFID/MEMS) technology for the management of ordnance inventory. A factorial model of these benefits is proposed. Our valuation approach combines a multi-criteria tool for the valuation of qualitative factors with a Monte Carlo simulation of anticipated financial factors. In a sample survey, qualitative factors are shown to account for over half of the anticipated bene…

  18. Applying ecological models to communities of genetic elements: the case of neutral theory.

    Science.gov (United States)

    Linquist, Stefan; Cottenie, Karl; Elliott, Tyler A; Saylor, Brent; Kremer, Stefan C; Gregory, T Ryan

    2015-07-01

    A promising recent development in molecular biology involves viewing the genome as a mini-ecosystem, where genetic elements are compared to organisms and the surrounding cellular and genomic structures are regarded as the local environment. Here, we critically evaluate the prospects of ecological neutral theory (ENT), a popular model in ecology, as it applies at the genomic level. This assessment requires an overview of the controversy surrounding neutral models in community ecology. In particular, we discuss the limitations of using ENT both as an explanation of community dynamics and as a null hypothesis. We then analyse a case study in which ENT has been applied to genomic data. Our central finding is that genetic elements do not conform to the requirements of ENT once its assumptions and limitations are made explicit. We further compare this genome-level application of ENT to two other, more familiar approaches in genomics that rely on neutral mechanisms: Kimura's molecular neutral theory and Lynch's mutational-hazard model. Interestingly, this comparison reveals that there are two distinct concepts of neutrality associated with these models, which we dub 'fitness neutrality' and 'competitive neutrality'. This distinction helps to clarify the various roles for neutral models in genomics, for example in explaining the evolution of genome size. © 2015 John Wiley & Sons Ltd.

  19. Risk Modelling for Passages in Approach Channel

    Directory of Open Access Journals (Sweden)

    Leszek Smolarek

    2013-01-01

    Methods of multivariate statistics, stochastic processes, and simulation are used to identify and assess risk measures. This paper presents the use of generalized linear models and Markov models to study risks to ships along an approach channel. These models, combined with simulation testing, are used to determine the time required for continuous monitoring of endangered objects or the period at which the level of risk should be verified.
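    A minimal sketch of the Markov ingredient (the states, transition probabilities and horizon below are hypothetical, not taken from the paper): propagate a ship's state distribution along successive channel sections and read off the accident probability.

```python
def step(dist, P):
    """One Markov transition: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Hypothetical three-state passage model: 0 = safe transit,
# 1 = near-miss, 2 = accident (absorbing). Rows sum to 1.
P = [
    [0.97, 0.02, 0.01],
    [0.60, 0.30, 0.10],
    [0.00, 0.00, 1.00],
]

dist = [1.0, 0.0, 0.0]          # ship starts in the safe state
for _ in range(20):             # propagate over 20 channel sections
    dist = step(dist, P)

accident_prob = dist[2]
print(round(accident_prob, 4))
```

    Monitoring-interval questions of the kind the abstract raises then become questions about how fast `accident_prob` grows with the number of unmonitored sections.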

  20. The Intensive Dysphagia Rehabilitation Approach Applied to Patients With Neurogenic Dysphagia: A Case Series Design Study.

    Science.gov (United States)

    Malandraki, Georgia A; Rajappa, Akila; Kantarcigil, Cagla; Wagner, Elise; Ivey, Chandra; Youse, Kathleen

    2016-04-01

    To examine the effects of the Intensive Dysphagia Rehabilitation approach on physiological and functional swallowing outcomes in adults with neurogenic dysphagia. Intervention study; before-after trial with 4-week follow-up through an online survey. Outpatient university clinics. A consecutive sample of subjects (N=10) recruited from outpatient university clinics. All subjects were diagnosed with adult-onset neurologic injury or disease. Dysphagia diagnosis was confirmed through clinical and endoscopic swallowing evaluations. No subjects withdrew from the study. Participants completed the 4-week Intensive Dysphagia Rehabilitation protocol, including 2 oropharyngeal exercise regimens, a targeted swallowing routine using salient stimuli, and caregiver participation. Treatment included hourly sessions twice per week and home practice for approximately 45 min/d. Outcome measures assessed pre- and posttreatment included airway safety using an 8-point Penetration Aspiration Scale, lingual isometric pressures, self-reported swallowing-related quality of life (QOL), and level of oral intake. Also, patients were monitored for adverse dysphagia-related effects. QOL and adverse effects were also assessed at the 4-week follow-up (online survey). The Intensive Dysphagia Rehabilitation approach was effective in improving maximum and mean Penetration Aspiration Scale scores. The approach was safe and improved physiological and some functional swallowing outcomes in our sample; however, further investigation is needed before it can be widely applied. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  1. Applying Hierarchical Model Calibration to Automatically Generated Items.

    Science.gov (United States)

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  2. Surface-bounded growth modeling applied to human mandibles

    DEFF Research Database (Denmark)

    Andresen, Per Rønsholt; Brookstein, F. L.; Conradsen, Knut

    2000-01-01

    From a set of longitudinal three-dimensional scans of the same anatomical structure, the authors have accurately modeled the temporal shape and size changes using a linear shape model. On a total of 31 computed tomography scans of the mandible from six patients, 14,851 semilandmarks are found...

  3. An electricity billing model | Adetona | Journal of Applied Science ...

    African Journals Online (AJOL)

    Linear regression analysis has been employed to develop a model for accurately predicting the electricity billing for commercial consumers in Ogun State (Nigeria) at a faster rate. The electricity billing model was implemented, executed and tested using embedded MATLAB function blocks. The correlations between the …

  4. Geographically Weighted Logistic Regression Applied to Credit Scoring Models

    Directory of Open Access Journals (Sweden)

    Pedro Henrique Melo Albuquerque

    This study used real data from a Brazilian financial institution on transactions involving Consumer Direct Credit (CDC), granted to clients residing in the Distrito Federal (DF), to construct credit scoring models via Logistic Regression and Geographically Weighted Logistic Regression (GWLR) techniques. The aims were: to verify whether the factors that influence credit risk differ according to the borrower's geographic location; to compare the set of models estimated via GWLR with the global model estimated via Logistic Regression, in terms of predictive power and financial losses for the institution; and to verify the viability of using the GWLR technique to develop credit scoring models. The metrics used to compare the models developed via the two techniques were the AICc informational criterion, the accuracy of the models, the percentage of false positives, the sum of the value of false positive debt, and the expected monetary value of portfolio default compared with the monetary value of defaults observed. The models estimated for each region in the DF were distinct in their variables and coefficients (parameters), leading to the conclusion that credit risk was influenced differently in each region in the study. The Logistic Regression and GWLR methodologies presented very close results in terms of predictive power and financial losses for the institution, and the study demonstrated the viability of using the GWLR technique to develop credit scoring models for the target population of the study.
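    The GWLR idea (fit a separate logistic model at each location, down-weighting distant observations through a kernel) can be sketched for a single focal point. The Gaussian kernel, bandwidth, synthetic borrower data and plain gradient ascent below are illustrative choices, not the estimation details of the study:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gwlr_fit(xs, ys, dists, bandwidth, steps=5000, lr=0.1):
    """Local logistic fit at one focal point: each observation is weighted by a
    Gaussian kernel of its distance to the focal point, then (b0, b1) are found
    by weighted gradient ascent on the log-likelihood."""
    w = [math.exp(-(d / bandwidth) ** 2) for d in dists]
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y, wi in zip(xs, ys, w):
            r = wi * (y - sigmoid(b0 + b1 * x))   # weighted residual
            g0 += r
            g1 += r * x
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

# Hypothetical borrowers: default (1) grows with debt ratio x;
# dists are each borrower's distance to the focal location.
xs    = [0.1, 0.3, 0.5, 0.7, 0.9, 0.2, 0.4, 0.6, 0.8, 1.0]
ys    = [0,   0,   0,   1,   1,   0,   0,   1,   1,   1]
dists = [0.5, 1.0, 0.2, 0.8, 1.5, 0.3, 0.6, 0.4, 1.2, 0.9]
b0, b1 = gwlr_fit(xs, ys, dists, bandwidth=1.0)
print(b1 > 0)
```

    Repeating the fit at every location of interest yields the spatially varying coefficient surfaces that the study compares against the single global logistic model.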

  5. An approach using quantum ant colony optimization applied to the problem of nuclear reactors reload

    International Nuclear Information System (INIS)

    Silva, Marcio H.; Lima, Alan M.M. de; Schirru, Roberto; Medeiros, J.A.C.C.

    2009-01-01

    The basic concept behind the nuclear reactor fuel reloading problem is to find a configuration of new and used fuel elements that keeps the plant working at full power for the longest possible duration, within the safety restrictions. The main restriction is the power peaking factor, which is the limit value for the preservation of the fuel assembly. The QACO-Alfa algorithm is a modified version of Quantum Ant Colony Optimization (QACO) proposed by Wang et al., which uses a new actualization method and a pseudo-evaporation step. We examined the behavior of QACO-Alfa coupled to the reactor physics code RECNOD when applied to this problem. Although QACO was developed for continuous functions, the binary model used in this work allows applying it to discrete problems such as the one mentioned above. (author)

  6. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Presently, the popularity of cloud computing is gradually increasing day by day. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. From the findings of the research, it was observed that security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.

  7. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
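    For a flavor of why NSFD schemes can be dynamically consistent, consider the simplest linear PK building block, one-compartment elimination dC/dt = -kC (a textbook special case, not the authors' full PK-PD system). Choosing the nonstandard denominator function φ(h) = (1 - e^(-kh))/k makes the discrete scheme exact for every step size:

```python
import math

def nsfd_decay(c0, k, h, n):
    """NSFD scheme for dC/dt = -k*C:
    (C_{m+1} - C_m) / phi(h) = -k * C_m, with phi(h) = (1 - exp(-k*h)) / k.
    This choice reproduces the exact solution C(t) = C0*exp(-k*t) at the
    grid points for any step size h (unlike forward Euler, which uses phi = h)."""
    phi = (1.0 - math.exp(-k * h)) / k
    c = c0
    for _ in range(n):
        c = c - k * phi * c      # algebraically equal to c * exp(-k*h)
    return c

c0, k, h, n = 100.0, 0.3, 2.0, 5          # deliberately large step size
approx = nsfd_decay(c0, k, h, n)
exact = c0 * math.exp(-k * h * n)
print(abs(approx - exact))
```

    With the standard denominator φ(h) = h, forward Euler at this step size already shows visible error and can even go negative for kh > 1; the nonstandard denominator preserves positivity and the decay dynamics regardless of h, which is the sense of "dynamic consistency" used above.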

  8. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Their developments, however, are largely due to experiment-based trial and error approaches and, while they do not require validation, they can be time consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply a model-based synthesis method to systematically generate and evaluate alternatives in the first stage and an experiment-model based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on "known" unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based…

  9. The Cheshire Cat principle applied to hybrid bag models

    International Nuclear Information System (INIS)

    Nielsen, H.B.; Wirzba, A.

    1987-05-01

    We argue for the Cheshire Cat point of view, according to which the bag (itself) has only notational, but no physical, significance. It is explained in a 1+1 dimensional exact Cheshire Cat model how a fermion can escape from the bag by means of an anomaly. We also suggest that suitably constructed hybrid bag models may be used to fix such parameters of effective Lagrangians as can otherwise be obtained only from experiments. This idea is illustrated in a calculation of the mass of the pseudoscalar η' meson in 1+1 dimensions. Thus there is hope of finding a construction principle for a phenomenologically sensible model. (orig.)

  10. Trailing edge noise model applied to wind turbine airfoils

    Energy Technology Data Exchange (ETDEWEB)

    Bertagnolio, F.

    2008-01-15

    The aim of this work is firstly to provide a quick introduction to the theory of noise generation relevant to wind turbine technology, with focus on trailing edge noise. Secondly, the so-called TNO trailing edge noise model developed by Parchen [1] is described in more detail. The model is tested and validated by comparison with other results from the literature. Finally, this model is used in the optimization process of two reference airfoils in order to reduce their noise signature: the RISOE-B1-18 and the S809 airfoils. (au)

  11. NEURO-FUZZY MODELING APPLIED IN PROGRAM MANAGEMENT TO INCREASE LOCAL PUBLIC ADMINISTRATION PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Adrian-Mihai Zaharia-Radulescu

    2016-07-01

    One of the challenges in local public administration is dealing with an increasing number of competing requests coming from the communities they serve. The traditional approach would be to handle each request as a standalone project, prioritized according to benefits and the budget available. Nowadays, program management is increasingly becoming a standard approach to managing the initiatives of local public administration. The program management approach is itself an enabler for performance in public sector organizations, allowing an organization to better coordinate its efforts and resources in managing a portfolio of projects. This paper aims to present how neuro-fuzzy modeling applied in program management can help an organization to increase its performance. Neuro-fuzzy modeling would take organizations one step further by allowing them to simulate different scenarios and better manage the risks accompanying their initiatives. The research done by the authors is theoretical, combines knowledge from different areas, and proposes and discusses a neuro-fuzzy model.

  12. Skill-Based Approach Applied to Gifted Students, its Potential in Latin America

    Directory of Open Access Journals (Sweden)

    Andrew Alexi Almazán-Anaya

    2015-09-01

    This paper presents, as a reflective essay, the current educational situation of gifted students (those with above-average intelligence) in Latin America and the possibility of using skill-based education within differentiated programs (intended for gifted individuals), a sector where scarce scientific study has been done and a consensus on an ideal educative model has not yet been reached. Currently these students generally lack specialized educational assistance intended to identify and develop their cognitive abilities, and it is estimated that a high percentage (95%) of this population goes undetected in the traditional education system. Although there are differentiated education models, they are rarely applied. A student-centered education program is a solution proposed to apply this pedagogical model and cover this population. The characteristics of this program that support differentiated instruction for gifted individuals, compatible with experiences in the US, Europe and Latin America, are analyzed. Finally, this paper concludes with an analysis of possible research areas that, if explored in the future, would help us find answers about the feasibility of, and relation between, skill-based programs and differentiated education for gifted students.

  13. Lecturing and Loving It: Applying the Information-Processing Model.

    Science.gov (United States)

    Parker, Jonathan K.

    1993-01-01

    Discusses the benefits of lecturing, when done properly, in high schools. Describes the positive attributes of effective lecturers. Provides a human information-processing model applicable to the task of lecturing to students. (HB)

  14. Applying a Systems Approach to Monitoring and Assessing Climate Change Mitigation Potential in Mexico's Forest Sector

    Science.gov (United States)

    Olguin-Alvarez, M. I.; Wayson, C.; Fellows, M.; Birdsey, R.; Smyth, C.; Magnan, M.; Dugan, A.; Mascorro, V.; Alanís, A.; Serrano, E.; Kurz, W. A.

    2017-12-01

    Since 2012, the Mexican government through its National Forestry Commission, with support from the Commission for Environmental Cooperation, the Forest Services of Canada and the USA, the SilvaCarbon Program and research institutes in Mexico, has made important progress towards the use of carbon dynamics models ("gain-loss" approach) for greenhouse gas (GHG) emissions monitoring and projections into the future. Here we assess the biophysical mitigation potential of policy alternatives identified by the Mexican government (e.g. net zero deforestation rate, sustainable forest management) based on a systems approach that models carbon dynamics in forest ecosystems, harvested wood products and substitution benefits in two contrasting states of Mexico. We provide key messages and results derived from the use of the Carbon Budget Model of the Canadian Forest Sector and a harvested wood products model, parameterized with input data from Mexico's National Forest Monitoring System (e.g. forest inventories, remote sensing, disturbance data). The ultimate goal of this tri-national effort is to develop data and tools for carbon assessment in strategic landscapes in North America, emphasizing the need to include multiple sectors and types of collaborators (scientific and policy-maker communities) to design more comprehensive portfolios for climate change mitigation in accordance with the Paris Agreement of the United Nations Framework Convention on Climate Change (e.g. Mid-Century Strategy, NDC goals).

  15. Applying Time Series Analysis Model to Temperature Data in Greenhouses

    Directory of Open Access Journals (Sweden)

    Abdelhafid Hasni

    2011-03-01

    The objective of the research is to find an appropriate Seasonal Auto-Regressive Integrated Moving Average (SARIMA) model for fitting the inside air temperature (Tin) of a naturally ventilated greenhouse under Mediterranean conditions by considering the minimum of the Akaike Information Criterion (AIC). The results of the fitting were as follows: the best SARIMA model for fitting the air temperature of the greenhouse is SARIMA(1,0,0)(1,0,2)24.
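    The model selection above hinges on minimizing AIC. As a toy illustration of that criterion (not a SARIMA fit: the synthetic series, the two candidate models and the least-squares AIC formula AIC = n·ln(RSS/n) + 2k are my own simplification), compare a constant-mean model with one that adds a fitted 24-hour harmonic:

```python
import math

def aic_ls(rss, n, k):
    """AIC for a least-squares fit with k parameters (Gaussian errors)."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical hourly greenhouse temperatures with a 24 h cycle plus
# small alternating "noise".
n = 96
temps = [20 + 5 * math.sin(2 * math.pi * t / 24) + 0.1 * ((-1) ** t)
         for t in range(n)]

mean = sum(temps) / n
rss_mean = sum((x - mean) ** 2 for x in temps)

# Seasonal model: mean plus a 24 h sinusoid fitted by Fourier projection.
a = 2 / n * sum(x * math.sin(2 * math.pi * t / 24) for t, x in enumerate(temps))
b = 2 / n * sum(x * math.cos(2 * math.pi * t / 24) for t, x in enumerate(temps))
fit = [mean + a * math.sin(2 * math.pi * t / 24) + b * math.cos(2 * math.pi * t / 24)
       for t in range(n)]
rss_seasonal = sum((x - f) ** 2 for x, f in zip(temps, fit))

aic_flat = aic_ls(rss_mean, n, k=1)       # 1 parameter: the mean
aic_seasonal = aic_ls(rss_seasonal, n, k=3)  # mean + two harmonic coefficients
print(aic_seasonal < aic_flat)
```

    A SARIMA search does the same bookkeeping over candidate (p,d,q)(P,D,Q)s orders: extra parameters must buy enough likelihood to offset the 2k penalty.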

  16. The J3 SCR model applied to resonant converter simulation

    Science.gov (United States)

    Avant, R. L.; Lee, F. C. Y.

    1985-01-01

    The J3 SCR model is a continuous topology computer model for the SCR. Its circuit analog and parameter estimation procedure are uniformly applicable to popular computer-aided design and analysis programs such as SPICE2 and SCEPTRE. The circuit analog is based on the intrinsic three pn junction structure of the SCR. The parameter estimation procedure requires only manufacturer's specification sheet quantities as a data base.

  17. The mapping approach in the path integral formalism applied to curve-crossing systems

    International Nuclear Information System (INIS)

    Novikov, Alexey; Kleinekathoefer, Ulrich; Schreiber, Michael

    2004-01-01

    The path integral formalism in a combined phase-space and coherent-state representation is applied to the problem of curve-crossing dynamics. The system of interest is described by two coupled one-dimensional harmonic potential energy surfaces interacting with a heat bath consisting of harmonic oscillators. The mapping approach is used to rewrite the Lagrangian function of the electronic part of the system. Using the Feynman-Vernon influence-functional method the bath is eliminated whereas the non-Gaussian part of the path integral is treated using the generating functional for the electronic trajectories. The dynamics of a Gaussian wave packet is analyzed along a one-dimensional reaction coordinate within a perturbative treatment for a small coordinate shift between the potential energy surfaces

  18. Pointing and the Evolution of Language: An Applied Evolutionary Epistemological Approach

    Directory of Open Access Journals (Sweden)

    Nathalie Gontier

    2013-07-01

    Numerous evolutionary linguists have indicated that human pointing behaviour might be associated with the evolution of language. At an ontogenetic level, and in normal individuals, pointing develops spontaneously, and the onset of human pointing precedes as well as facilitates phases in speech and language development. Phylogenetically, pointing behaviour might have preceded and facilitated the evolutionary origin of both gestural and vocal language. Contrary to wild non-human primates, captive and human-reared non-human primates also demonstrate pointing behaviour. In this article, we analyse the debates on pointing, and the role it might have played in language evolution, from a meta-level. From within an Applied Evolutionary Epistemological approach, we examine how exactly we can determine whether pointing has been a unit, a level or a mechanism in language evolution.

  19. GIS-Based Population Model Applied to Nevada Transportation Routes

    International Nuclear Information System (INIS)

    Mills, G.S.; Neuhauser, K.S.

    1999-01-01

    Recently, a model based on geographic information system (GIS) processing of US Census Block data has made high-resolution population analysis for transportation risk analysis technically and economically feasible. Population density bordering each kilometer of a route may be tabulated, with specific route sections falling into each of three categories (Rural, Suburban or Urban) identified for separate risk analysis. In addition to the improvement in resolution of Urban areas along a route, the model provides a statistically based correction to population densities in Rural and Suburban areas, where Census Block dimensions may greatly exceed the 800-meter scale of interest. A semi-automated application of the GIS model to a subset of routes in Nevada (related to the Yucca Mountain project) is presented, and the results are compared to previous models, including a model based on published Census and other data. These comparisons demonstrate that meaningful improvement in the accuracy and specificity of transportation risk analyses depends on correspondingly accurate and geographically specific population density data.
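The per-kilometer categorization described reduces to a density-threshold classification. A minimal sketch; the thresholds and densities below are illustrative assumptions, not the values used in the study:

```python
# Hypothetical density thresholds (persons per km^2); the actual category
# boundaries used in the study are not reproduced here.
RURAL_MAX = 150
SUBURBAN_MAX = 3300

def categorize(density):
    """Assign a per-kilometer population density to a risk-analysis category."""
    if density <= RURAL_MAX:
        return "Rural"
    if density <= SUBURBAN_MAX:
        return "Suburban"
    return "Urban"

# densities tabulated for successive one-kilometer route sections
route_densities = [12, 85, 700, 4200, 2900, 40]
segments = [categorize(d) for d in route_densities]
```

Each kilometer is tagged independently; a risk code would then analyze the contiguous Rural, Suburban and Urban stretches separately.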

  20. Applying Four Different Risk Models in Local Ore Selection

    International Nuclear Information System (INIS)

    Richmond, Andrew

    2002-01-01

    Given the uncertainty in grade at a mine location, a financially risk-averse decision-maker may prefer to incorporate this uncertainty into the ore selection process. A FORTRAN program, risksel, is presented to calculate local risk-adjusted optimal ore selections using a negative exponential utility function and three dominance models: mean-variance, mean-downside risk, and stochastic dominance. All four methods are demonstrated in a grade control environment. In the case study, optimal selections vary with the magnitude of financial risk that a decision-maker is prepared to accept. Except for the stochastic dominance method, the risk models reassign material from higher-cost to lower-cost processing options as the aversion to financial risk increases. The stochastic dominance model was usually unable to determine the optimal local selection.
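For the negative exponential utility mentioned above, a risk-adjusted selection can be ranked by its certainty equivalent. The sketch below uses invented prices, costs and grade statistics (it is not the risksel program) to show how increasing risk aversion can move material from a risky to a certain processing option:

```python
import numpy as np

def certainty_equivalent(profits, a):
    # Negative exponential utility U(x) = -exp(-a*x);
    # certainty equivalent CE = -(1/a) * ln E[exp(-a*x)]
    return -np.log(np.mean(np.exp(-a * profits))) / a

rng = np.random.default_rng(0)
grades = rng.normal(1.2, 0.4, size=5000)          # simulated grade realizations (g/t)
price, cost_mill, cost_waste = 40.0, 30.0, 5.0    # illustrative economics
profit_mill = price * grades - cost_mill          # uncertain payoff if milled
profit_waste = np.full_like(grades, -cost_waste)  # certain cost if dumped

choices = {}
for a in (0.01, 0.1, 1.0):  # increasing aversion to financial risk
    ce_mill = certainty_equivalent(profit_mill, a)
    ce_waste = certainty_equivalent(profit_waste, a)
    choices[a] = "mill" if ce_mill > ce_waste else "waste"
```

As the risk-aversion parameter grows, the uncertain milling payoff is penalized for its variance and the certain low-cost option eventually wins, mirroring the reassignment of material described in the abstract.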

  1. Hidden multidimensional social structure modeling applied to biased social perception

    Science.gov (United States)

    Maletić, Slobodan; Zhao, Yi

    2018-02-01

    Intricacies of the structure of social relations are captured by representing a collection of overlapping opinions as a simplicial complex, thus building latent multidimensional structures through which agents virtually move as they exchange opinions. The influence of opinion space structure on the distribution of opinions is demonstrated by modeling consensus phenomena when the opinion exchange between individuals may be affected by the false consensus effect. The results indicate that in the cases both with and without bias, the road toward consensus is influenced by the structure of the multidimensional space of opinions, and that in the biased case complete consensus is achieved. The applications of the proposed modeling framework can easily be generalized, as they transcend opinion formation modeling.

  2. Modeling the microstructure of surface by applying BRDF function

    Science.gov (United States)

    Plachta, Kamil

    2017-06-01

    The paper presents the modeling of surface microstructure using a bidirectional reflectance distribution function (BRDF). This function carries full information about the reflectance properties of flat surfaces: it makes it possible to determine the shares of the specular, directional and diffuse components in the reflected luminous stream. The software is based on the author's algorithm, which uses selected elements of BRDF models to determine the share of each component. Based on the data obtained, the surface microstructure of each material can be modeled, which allows the properties of these materials to be determined. The concentrator directs the reflected solar radiation onto the photovoltaic surface, increasing the value of the incident luminous stream. The paper presents an analysis of selected materials that can be used to construct a solar concentrator system. The use of the concentrator increases the power output of the photovoltaic system by up to 17% compared to the standard solution.

  3. Exploiting neurovascular coupling: a Bayesian sequential Monte Carlo approach applied to simulated EEG fNIRS data

    Science.gov (United States)

    Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria

    2017-08-01

    Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process, and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS provides measurements of two hemodynamic variables, namely oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular-mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies and, by taking advantage of a physical model of the coupling between electrical and hemodynamic responses, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.
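The sequential Monte Carlo machinery itself can be illustrated on a toy one-dimensional linear-Gaussian state-space model; this is a generic bootstrap particle filter, not the coupled EEG-fNIRS state space of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear-Gaussian state-space model: x_t = 0.9 x_{t-1} + w_t, y_t = x_t + v_t
T, N = 200, 1000
true_x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    true_x[t] = 0.9 * true_x[t - 1] + rng.normal(0.0, 0.5)
    y[t] = true_x[t] + rng.normal(0.0, 0.5)

particles = rng.normal(0.0, 1.0, N)
weights = np.full(N, 1.0 / N)
estimates = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + rng.normal(0.0, 0.5, N)      # propagate state
    weights *= np.exp(-0.5 * ((y[t] - particles) / 0.5) ** 2)  # observation likelihood
    weights /= weights.sum()
    estimates[t] = np.sum(weights * particles)                 # posterior-mean estimate
    if 1.0 / np.sum(weights ** 2) < N / 2:                     # effective sample size low
        idx = rng.choice(N, size=N, p=weights)                 # multinomial resampling
        particles, weights = particles[idx], np.full(N, 1.0 / N)
```

On this toy model the weighted particle mean tracks the hidden state more closely than the raw observations do; the paper's contribution is to build the state vector from the coupled electrical and hemodynamic dynamics.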

  4. Mathematical Modeling Approaches in Plant Metabolomics.

    Science.gov (United States)

    Fürtauer, Lisa; Weiszmann, Jakob; Weckwerth, Wolfram; Nägele, Thomas

    2018-01-01

    The experimental analysis of a plant metabolome typically results in a comprehensive and multidimensional data set. To interpret metabolomics data in the context of biochemical regulation and environmental fluctuation, various approaches of mathematical modeling have been developed and have proven useful. In this chapter, a general introduction to mathematical modeling is presented and discussed in the context of plant metabolism. Particular focus is placed on the suitability of mathematical approaches to functionally integrate plant metabolomics data into a metabolic network and combine them with other biochemical or physiological parameters.

  5. Computational modeling applied to stress gradient analysis for metallic alloys

    International Nuclear Information System (INIS)

    Iglesias, Susana M.; Assis, Joaquim T. de; Monine, Vladimir I.

    2009-01-01

    Nowadays, composite materials, including materials reinforced by particles, are at the center of researchers' attention. Stress measurements in these materials are complicated by the superficial stress gradient caused by the difference between the stress state of particles on the surface and in the matrix of the composite material. Computer simulation of the diffraction profile formed by superficial layers of material makes it possible to simulate the diffraction experiment and to resolve the problem of stress measurement when the stress state is characterized by a strong gradient. The aim of this paper is the application of a computer simulation technique, initially developed for homogeneous materials, to diffraction line simulation for composite materials and alloys. Specifically, we applied this technique to silumin fabricated by powder metallurgy. (author)

  6. Modeling of hydrothermal circulation applied to active volcanic areas. The case of Vulcano (Italy)

    Energy Technology Data Exchange (ETDEWEB)

    Todesco, M. [Dip. Scienze della Terra, Posa (Italy)

    1995-03-01

    Modeling of fluid and heat flows through porous media has to date been widely applied to the study of geothermal reservoirs. Much less has been done to apply the same methodology to the study of active volcanoes and of the associated volcanic hazard. Hydrothermal systems provide direct information on dormant eruptive centers and significant insights into their state of activity and current evolution. For this reason, the evaluation of volcanic hazard is also based on monitoring of hydrothermal activity. Such monitoring, however, provides measurements of surface parameters, such as fluid temperature or composition, that often are only representative of the shallower portion of the system. The interpretation of these data in terms of the global functioning of the hydrothermal circulation can therefore be highly misleading. Numerical modeling of hydrothermal activity provides a physical approach to the description of fluid circulation and can contribute to its understanding and to the interpretation of monitoring data. In this work, the TOUGH2 simulator has been applied to study the hydrothermal activity at Vulcano (Italy). Simulations involved an axisymmetric domain heated from below, and focused on the effects of permeability distribution and carbon dioxide. Results are consistent with the present knowledge of the volcanic system and suggest that permeability distribution plays a major role in the evolution of fluid circulation. This parameter should be considered in the interpretation of monitoring data and in the evaluation of volcanic hazard at Vulcano.

  7. Does having the right visitor mix do the job? Applying an econometric shift-share model to regional tourism developments

    OpenAIRE

    Firgo, Matthias; Fritz, Oliver

    2016-01-01

    This paper is the first to apply an econometric shift-share model to tourism. The approach allows us to isolate the growth contributions of changes in regional touristic attractiveness from those induced by the structure of visitors, but does not share the caveats of the conventional shift-share approach. Our application to regional tourism in Austria reveals important results: First, differences in long-run performance between regions are mostly related to idiosyncratic changes in the touris...
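For orientation, the classic descriptive shift-share identity (which the paper replaces with an econometric variant) splits regional growth in overnight stays into a national effect, a visitor-mix effect, and a regional-share effect. The figures below are invented:

```python
# Classic shift-share decomposition over two periods, by visitor origin.
# All overnight-stay figures are made up for illustration.
nights_region_t0 = {"domestic": 800, "germany": 600, "other": 100}
nights_region_t1 = {"domestic": 820, "germany": 700, "other": 130}
nights_nation_t0 = {"domestic": 50000, "germany": 20000, "other": 10000}
nights_nation_t1 = {"domestic": 51000, "germany": 23000, "other": 12000}

g_nat = sum(nights_nation_t1.values()) / sum(nights_nation_t0.values()) - 1
national = mix = share = 0.0
for k in nights_region_t0:
    g_k = nights_nation_t1[k] / nights_nation_t0[k] - 1    # national growth of origin k
    g_rk = nights_region_t1[k] / nights_region_t0[k] - 1   # regional growth of origin k
    base = nights_region_t0[k]
    national += base * g_nat          # growth expected from the national trend
    mix += base * (g_k - g_nat)       # effect of the region's visitor mix
    share += base * (g_rk - g_k)      # idiosyncratic regional attractiveness

total_change = sum(nights_region_t1.values()) - sum(nights_region_t0.values())
```

The three components sum exactly to the observed regional change; the paper's econometric version estimates the share component while avoiding the known caveats of this accounting identity.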

  8. Applying an Employee-Motivation Model to Prevent Student Plagiarism.

    Science.gov (United States)

    Malouff, John M.; Sims, Randi L.

    1996-01-01

    A model based on Vroom's expectancy theory of employee motivation posits that instructors can prevent plagiarism by ensuring that students understand the rules of ethical writing, expect assignments to be manageable and have personal benefits, and expect plagiarism to be difficult and have important personal costs. (SK)

  9. Applying the knowledge creation model to the management of ...

    African Journals Online (AJOL)

    In present-day society, the need to manage indigenous knowledge is widely recognised. However, there is a debate in progress on whether or not indigenous knowledge can be easily managed. The purpose of this paper is to examine the possibility of using knowledge management models like knowledge creation theory ...

  10. Robust model identification applied to type 1 diabetes

    DEFF Research Database (Denmark)

    Finan, Daniel Aaron; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2010-01-01

    In many realistic applications, process noise is known to be neither white nor normally distributed. When identifying models in these cases, it may be more effective to minimize a different penalty function than the standard sum of squared errors (as in a least-squares identification method). Thi...

  11. Dynamics Model Applied to Pricing Options with Uncertain Volatility

    Directory of Open Access Journals (Sweden)

    Lorella Fatone

    2012-01-01

    model is proposed. The data used to test the calibration problem included observations of asset prices over a finite set of (known) equispaced discrete time values. Statistical tests were used to estimate the statistical significance of the two parameters of the Black-Scholes model: the volatility and the drift. The effects of these estimates on the option pricing problem were investigated. In particular, the pricing of an option with uncertain volatility in the Black-Scholes framework was revisited, and a statistical significance was associated with the price intervals determined using the Black-Scholes-Barenblatt equations. Numerical experiments involving synthetic and real data were presented. The real data considered were the daily closing values of the S&P500 index and the associated European call and put option prices in the year 2005. The method proposed here for calibrating the Black-Scholes dynamics model could be extended to other science and engineering models that may be expressed in terms of stochastic dynamical systems.
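The estimation step can be sketched as follows: under the Black-Scholes dynamics, log-returns are i.i.d. normal, so volatility and drift follow from the sample moments and a t statistic gauges the drift's statistical significance. Synthetic data are used here, not the S&P500 series of the paper:

```python
import numpy as np

def estimate_bs_params(prices, dt):
    """Moment estimates of Black-Scholes drift and volatility from log-returns.

    For geometric Brownian motion, log-returns are i.i.d.
    N((mu - sigma^2/2) * dt, sigma^2 * dt).
    """
    r = np.diff(np.log(prices))
    sigma = r.std(ddof=1) / np.sqrt(dt)
    mu = r.mean() / dt + 0.5 * sigma ** 2
    # t statistic for H0: mean log-return = 0 (drift significance)
    t_drift = r.mean() / (r.std(ddof=1) / np.sqrt(len(r)))
    return mu, sigma, t_drift

# simulate GBM with known parameters to check recovery
rng = np.random.default_rng(2)
mu_true, sigma_true, dt = 0.05, 0.2, 1.0 / 252.0
n = 4 * 252
r = rng.normal((mu_true - sigma_true ** 2 / 2) * dt, sigma_true * np.sqrt(dt), n)
prices = 100.0 * np.exp(np.concatenate([[0.0], np.cumsum(r)]))
mu_hat, sigma_hat, t_drift = estimate_bs_params(prices, dt)
```

Volatility is recovered tightly even from a few years of daily data, whereas the drift is far noisier; this asymmetry is why attaching a statistical significance to each parameter matters for the resulting price intervals.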

  12. Leadership Identity Development: Challenges in Applying a Developmental Model

    Science.gov (United States)

    Komives, Susan R.; Longerbeam, Susan D.; Mainella, Felicia; Osteen, Laura; Owen, Julie E.; Wagner, Wendy

    2009-01-01

    The leadership identity development (LID) grounded theory (Komives, Owen, Longerbeam, Mainella, & Osteen, 2005) and related LID model (Komives, Longerbeam, Owen, Mainella, & Osteen, 2006) present a framework for understanding how individual college students develop the social identity of being collaborative, relational leaders…

  13. Applying the elastic model for various nucleus-nucleus fusion

    International Nuclear Information System (INIS)

    HASSAN, G.S.; RAGAB, H.S.; SEDDEEK, M.K.

    2000-01-01

    The Elastic Model with two free parameters m, d given by Scalia has been used over wider energy regions to fit the available experimental data for potential barriers and cross sections. In order to generalize Scalia's formula in both the sub- and above-barrier regions, we calculated m and d for pairs other than those given by Scalia and compared the calculated cross sections with the experimental data. This generalizes the Elastic Model's description of the fusion process. On the other hand, Scalia's range of interacting systems was 24 ≤ A ≤ 194, where A is the compound nucleus mass number. Our extension of the model includes an example of pairs with A larger than this upper limit, aiming at a general formula for any type of reactants: light, intermediate or heavy systems. A significant point is the comparison of Elastic Model calculations with well-known methods for studying complete fusion and compound nucleus formation, namely with the results of using the Proximity potential with either sharp or smooth cut-off approximations.

  14. A Structural Modeling Approach to a Multilevel Random Coefficients Model.

    Science.gov (United States)

    Rovine, Michael J.; Molenaar, Peter C. M.

    2000-01-01

    Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)

  15. Agent-based modelling in applied ethology: an exploratory case study of behavioural dynamics in tail biting in pigs

    NARCIS (Netherlands)

    Boumans, I.J.M.M.; Hofstede, G.J.; Bolhuis, J.E.; Boer, de I.J.M.; Bokkers, E.A.M.

    2016-01-01

    Understanding behavioural dynamics in pigs is important to assess pig welfare in current intensive pig production systems. Agent-based modelling (ABM) is an approach to gain insight into behavioural dynamics in pigs, but its use in applied ethology and animal welfare science has been limited so far.

  16. Essays on environmental policy analysis: Computable general equilibrium approaches applied to Sweden

    International Nuclear Information System (INIS)

    Hill, M.

    2001-01-01

    This thesis consists of three essays within the field of applied environmental economics, with the common basic aim of analyzing effects of Swedish environmental policy. Starting out from Swedish environmental goals, the thesis assesses a range of policy-related questions. The objective is to quantify policy outcomes by constructing and applying numerical models especially designed for environmental policy analysis. Static and dynamic multi-sectoral computable general equilibrium models are developed in order to analyze the following issues: the costs and benefits of a domestic carbon dioxide (CO2) tax reform, with special attention given to how these costs and benefits depend on the structure of the tax system and, furthermore, how they depend on policy-induced changes in 'secondary' pollutants; the effects of allowing for emission permit trading through time when the long-term domestic environmental goal is specified in CO2 stock terms; and the effects on long-term projected economic growth and welfare that are due to damages from emission flow and accumulation of 'local' pollutants (nitrogen oxides and sulfur dioxide), as well as the outcome of environmental policy when costs and benefits are considered in an integrated environmental-economic framework.

  17. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
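Concretely, AIC = 2k − 2 ln L̂, and AIC differences convert into Akaike weights that rank candidate models. A minimal sketch with hypothetical maximized log-likelihoods and parameter counts:

```python
import numpy as np

def aic(log_likelihood, k):
    # Akaike's Information Criterion: 2k - 2 ln(maximized likelihood)
    return 2 * k - 2 * log_likelihood

def akaike_weights(aic_values):
    # weights from AIC differences: w_i proportional to exp(-delta_i / 2)
    delta = np.array(aic_values) - np.min(aic_values)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# hypothetical candidates: (maximized log-likelihood, number of parameters)
candidates = {"M1": (-120.3, 2), "M2": (-118.9, 4), "M3": (-118.7, 6)}
aics = {name: aic(ll, k) for name, (ll, k) in candidates.items()}
weights = dict(zip(aics, akaike_weights(list(aics.values()))))
best = min(aics, key=aics.get)
```

Note how M3 fits best in raw likelihood yet loses on AIC: the 2k term is exactly the bias correction for the extra parameters that the book motivates from Kullback-Leibler information.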

  18. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny.

    Directory of Open Access Journals (Sweden)

    Simon T Maddock

    Full Text Available Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a 'traditional' Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing platforms (Illumina's HiSeq and MiSeq, Roche's 454 GS FLX, and Life Technologies' Ion Torrent) to produce seven (near-)complete mitogenomes from six species that form a small radiation of caecilian amphibians from the Seychelles. The fastest, most accurate method of obtaining mitogenome sequences that we tested was direct sequencing of genomic DNA (shotgun sequencing) using the MiSeq platform. Bayesian inference and maximum likelihood analyses using seven different partitioning strategies were unable to resolve compellingly all phylogenetic relationships among the Seychelles caecilian species, indicating the need for additional data in this case.

  19. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny.

    Science.gov (United States)

    Maddock, Simon T; Briscoe, Andrew G; Wilkinson, Mark; Waeschenbach, Andrea; San Mauro, Diego; Day, Julia J; Littlewood, D Tim J; Foster, Peter G; Nussbaum, Ronald A; Gower, David J

    2016-01-01

    Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a 'traditional' Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing platforms (Illumina's HiSeq and MiSeq, Roche's 454 GS FLX, and Life Technologies' Ion Torrent) to produce seven (near-) complete mitogenomes from six species that form a small radiation of caecilian amphibians from the Seychelles. The fastest, most accurate method of obtaining mitogenome sequences that we tested was direct sequencing of genomic DNA (shotgun sequencing) using the MiSeq platform. Bayesian inference and maximum likelihood analyses using seven different partitioning strategies were unable to resolve compellingly all phylogenetic relationships among the Seychelles caecilian species, indicating the need for additional data in this case.

  20. Fatigue damage approach applied to Li-ion batteries ageing characterization

    Energy Technology Data Exchange (ETDEWEB)

    Dudézert, C. [Renault, Technocentre, Guyancourt (France); Université Paris Sud/Université Paris-Saclay, ICMMO (UMR CNRS 8182), Orsay (France); CEA/LITEN, Grenoble (France); Reynier, Y. [CEA/LITEN, Grenoble (France); Duffault, J.-M. [Université Paris Sud/Université Paris-Saclay, ICMMO (UMR CNRS 8182), Orsay (France); Franger, S., E-mail: sylvain.franger@u-psud.fr [Université Paris Sud/Université Paris-Saclay, ICMMO (UMR CNRS 8182), Orsay (France)

    2016-11-15

    Reliability of energy storage devices is one of the foremost concerns in electric vehicle (EV) development. Battery ageing, i.e. the degradation of battery energy and power, depends mainly on time, on the environmental conditions and on the in-use solicitations endured by the storage system. In the case of EVs, the strong dependence of battery use on car performance, driving cycles, and weather conditions makes battery life prediction an intricate issue. Mechanical physicists have developed a quick and exhaustive methodology to diagnose the reliability of complex structures enduring complex loads. This “fatigue” approach expresses the performance fading due to a complex load through the evolution corresponding to basic loads. Thus, a state-of-health variable named “damage” binds the load history and ageing. The battery ageing study described here consists in applying this mechanical approach to electrochemical systems by connecting the ageing factors with the evolution of the battery characteristics. In that way, a specific “fatigue” test protocol has been established. This experimental confrontation has led to distinguishing calendar from cycling ageing mechanisms.
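The borrowed bookkeeping can be illustrated with a linear (Miner-type) damage accumulation rule transposed to battery cycling; the power-law cycle-life curve below is an assumed placeholder, not a result from the paper:

```python
# Assumed cycle-life curve: cycles to end-of-life as a function of
# depth-of-discharge (DOD). Purely illustrative numbers.
def cycles_to_failure(dod):
    return 3000.0 * dod ** -1.5

def accumulate_damage(cycle_history):
    # Miner-type linear accumulation: each cycle at DOD d consumes 1/N(d) of life
    return sum(1.0 / cycles_to_failure(d) for d in cycle_history)

# a mixed load: 500 deep cycles followed by 2000 shallow cycles
history = [0.8] * 500 + [0.3] * 2000
damage = accumulate_damage(history)
end_of_life = damage >= 1.0
```

"Damage" is the state-of-health variable described in the abstract: it binds an arbitrary load history to ageing through the fraction of life consumed under each basic load.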

  1. Fatigue damage approach applied to Li-ion batteries ageing characterization

    International Nuclear Information System (INIS)

    Dudézert, C.; Reynier, Y.; Duffault, J.-M.; Franger, S.

    2016-01-01

    Reliability of energy storage devices is one of the foremost concerns in electric vehicle (EV) development. Battery ageing, i.e. the degradation of battery energy and power, depends mainly on time, on the environmental conditions and on the in-use solicitations endured by the storage system. In the case of EVs, the strong dependence of battery use on car performance, driving cycles, and weather conditions makes battery life prediction an intricate issue. Mechanical physicists have developed a quick and exhaustive methodology to diagnose the reliability of complex structures enduring complex loads. This “fatigue” approach expresses the performance fading due to a complex load through the evolution corresponding to basic loads. Thus, a state-of-health variable named “damage” binds the load history and ageing. The battery ageing study described here consists in applying this mechanical approach to electrochemical systems by connecting the ageing factors with the evolution of the battery characteristics. In that way, a specific “fatigue” test protocol has been established. This experimental confrontation has led to distinguishing calendar from cycling ageing mechanisms.

  2. Stochastic approaches to inflation model building

    International Nuclear Information System (INIS)

    Ramirez, Erandy; Liddle, Andrew R.

    2005-01-01

    While inflation gives an appealing explanation of observed cosmological data, there are a wide range of different inflation models, providing differing predictions for the initial perturbations. Typically, models are motivated either by fundamental physics considerations or by simplicity. An alternative is to generate large numbers of models via a random generation process, such as the flow equations approach. The flow equations approach is known to predict a definite structure to the observational predictions. In this paper, we first demonstrate a more efficient implementation of the flow equations exploiting an analytic solution found by Liddle (2003). We then consider alternative stochastic methods of generating large numbers of inflation models, with the aim of testing whether the structures generated by the flow equations are robust. We find that while typically there remains some concentration of points in the observable plane under the different methods, there is significant variation in the predictions amongst the methods considered.

  3. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  4. A statistical method for model extraction and model selection applied to the temperature scaling of the L–H transition

    International Nuclear Information System (INIS)

    Peluso, E; Gelfusa, M; Gaudio, P; Murari, A

    2014-01-01

    Access to the H mode of confinement in tokamaks is characterized by an abrupt transition, which has been the subject of continuous investigation for decades. Various theoretical models have been developed and multi-machine databases of experimental data have been collected. In this paper, a new methodology is reviewed for the investigation of the scaling laws for the temperature threshold to access the H mode. The approach is based on symbolic regression via genetic programming and first allows the extraction of the most statistically reliable models from the available experimental data. Nonlinear fitting is then applied to the mathematical expressions found by symbolic regression; this second step makes it easy to compare the quality of the data-driven scalings with the most widely accepted theoretical models. The application of a complete set of statistical indicators shows that the data-driven scaling laws are qualitatively better than the theoretical models. The main limitation of the theoretical models is that they are all expressed as power laws, which are too rigid to fit the available experimental data and to extrapolate to ITER. The proposed method is absolutely general and can be applied to the extraction of scaling laws from any experimental database of sufficient statistical relevance. (paper)
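The second step, nonlinear fitting of a candidate scaling expression, can be sketched as a direct least-squares fit of a power law; the variables and synthetic values below are illustrative, not the multi-machine database:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, c, a):
    return c * x ** a

# synthetic "threshold" data generated from a known power law plus 5% noise
rng = np.random.default_rng(3)
x = rng.uniform(1.0, 10.0, 200)
y = 2.5 * x ** 0.7 * (1.0 + rng.normal(0.0, 0.05, 200))

# nonlinear fit in the original (not log-transformed) space
popt, pcov = curve_fit(power_law, x, y, p0=(1.0, 1.0))
c_hat, a_hat = popt
perr = np.sqrt(np.diag(pcov))  # standard errors of the fitted parameters
```

In the paper's methodology, symbolic regression supplies the candidate expressions; fitting in the original space rather than log-log space is what lets non-power-law candidates compete with the rigid power-law forms on equal terms.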

  5. Motor fuel demand analysis - applied modelling in the European union

    International Nuclear Information System (INIS)

    Chorazewiez, S.

    1998-01-01

    Motor fuel demand in Europe amounts to almost half of petroleum products consumption and to thirty percent of total final energy consumption. This study first considers the energy policies of different European countries and the ways in which the consumption of motor gasoline and automotive gas oil has developed. Secondly, it provides an overview of demand models in the energy sector, illustrating their specific characteristics. It then proposes an economic model of automotive fuel consumption, treating motor gasoline and automotive gas oil separately over a period of thirty years (1960-1993) for five main countries in the European Union. Finally, forecasts of consumption of gasoline and diesel up to the year 2020 are given for different scenarios. (author)
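Fuel-demand models of this type are commonly specified in log-log form, so coefficients read directly as price and income elasticities. A sketch on synthetic data (the specification and numbers are illustrative, not the study's model or estimates):

```python
import numpy as np

# ln Q = a + e_p * ln P + e_y * ln Y  (price and income elasticities)
rng = np.random.default_rng(4)
n = 200
ln_p = rng.normal(0.0, 0.3, n)   # log fuel price
ln_y = rng.normal(0.0, 0.3, n)   # log income
ln_q = 2.0 - 0.25 * ln_p + 0.9 * ln_y + rng.normal(0.0, 0.05, n)

# ordinary least squares fit of the elasticities
X = np.column_stack([np.ones(n), ln_p, ln_y])
beta, *_ = np.linalg.lstsq(X, ln_q, rcond=None)
a_hat, e_price, e_income = beta
```

A negative price elasticity and positive income elasticity of roughly these magnitudes are the typical qualitative pattern such demand studies estimate separately for gasoline and gas oil.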

  6. APPLYING PETRI NETS EXTENSIONS TO MODELING COMMERCIAL BANK ACTIVITY

    Directory of Open Access Journals (Sweden)

    Igor ENICOV

    2017-02-01

    Full Text Available The relevance of the study is determined by the need to improve the methods of modeling and simulating commercial bank activity, including for the purpose of calculating, controlling and managing the risk of the bank, in the context of the transition to the application of Basel III standards. This improvement becomes necessary due to the direct transition to new regulatory standards, when the internal assessments of the main risks become the initial data for calculating the capital adequacy of a bank. The purpose of this article is to argue for the possibility of formulating a theory of the commercial bank model on the extensions of Petri net theory. The main methods of research were the method of scientific abstraction and the method of logical analysis. The main result obtained in the study and presented in the article is the argumentation of the possibility of analyzing the quantitative and qualitative characteristics of a commercial bank with the help of Petri net extensions.
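A Petri net consists of places holding tokens and transitions that consume and produce them; the firing rule gives the qualitative dynamics. A minimal token-game sketch (a toy service net, not a bank model):

```python
# marking: tokens currently in each place
places = {"p_request": 2, "p_idle": 1, "p_busy": 0, "p_done": 0}

# transition: (input arcs, output arcs), each as place -> token count
transitions = {
    "start": ({"p_request": 1, "p_idle": 1}, {"p_busy": 1}),
    "finish": ({"p_busy": 1}, {"p_idle": 1, "p_done": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(places[p] >= n for p, n in pre.items())

def fire(name):
    pre, post = transitions[name]
    assert enabled(name), f"transition {name} is not enabled"
    for p, n in pre.items():
        places[p] -= n
    for p, n in post.items():
        places[p] += n

fire("start")
fire("finish")
fire("start")
```

Petri net extensions (coloured, timed, stochastic) attach data, delays or probabilities to this same firing mechanism, which is what makes them candidates for expressing bank processes and risk flows.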

  7. Alexandrium minutum growth controlled by phosphorus: an applied model

    OpenAIRE

    Chapelle, Annie; Labry, Claire; Sourisseau, Marc; Lebreton, Carole; Youenou, Agnes; Crassous, Marie-pierre

    2010-01-01

    Toxic algae are a worldwide problem, threatening aquaculture, public health and tourism. Alexandrium, a toxic dinoflagellate, proliferates in Northwest France estuaries (i.e. the Penze estuary), causing Paralytic Shellfish Poisoning events. Vegetative growth, and in particular the role of nutrient uptake and growth rate, are crucial parameters for understanding toxic blooms. With the goal of modelling in situ Alexandrium blooms in relation to environmental parameters, we first try to calibrate a zero-dimensional...
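Phosphorus-controlled growth in such models is commonly formulated with Michaelis-Menten uptake and a Droop internal-quota growth law; the parameter values below are illustrative assumptions, not the paper's calibration:

```python
# Droop (cell-quota) growth with Michaelis-Menten phosphorus uptake,
# a common formulation for P-limited phytoplankton.
# All parameter values are illustrative, not the paper's calibration.
def uptake(p_ext, v_max=0.01, k_s=0.5):
    # uptake of external phosphorus, saturating at v_max
    return v_max * p_ext / (k_s + p_ext)

def growth(quota, mu_max=0.6, q_min=0.002):
    # growth rate rises from 0 at the minimum quota q_min toward mu_max
    return mu_max * (1.0 - q_min / quota) if quota > q_min else 0.0

# one explicit Euler step of the internal quota balance:
# dQ/dt = uptake(P) - growth(Q) * Q
quota, p_ext, dt = 0.004, 1.0, 0.1
quota_next = quota + dt * (uptake(p_ext) - growth(quota) * quota)
```

Separating uptake from growth is what lets the cell quota buffer phosphorus pulses, a feature often invoked to explain bloom timing in estuaries.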

  8. Applying Mechanistic Dam Breach Models to Historic Levee Breaches

    OpenAIRE

    Risher Paul; Gibson Stanford

    2016-01-01

    Hurricane Katrina elevated levee risk in the US national consciousness, motivating agencies to assess and improve their levee risk assessment methodology. Accurate computation of the flood flow magnitude and timing associated with a levee breach remains one of the most difficult and uncertain components of levee risk analysis. Contemporary methods are largely empirical and approximate, introducing substantial uncertainty to the damage and life loss models. Levee breach progressions are often ...

  9. Applying CIPP Model for Learning-Object Management

    Science.gov (United States)

    Morgado, Erla M. Morales; Peñalvo, Francisco J. García; Martín, Carlos Muñoz; Gonzalez, Miguel Ángel Conde

    Although the knowledge management process needs to receive some evaluation in order to determine its suitable functionality, there is no clear definition of the stages at which LOs need to be evaluated, nor of the specific metrics to continuously promote their quality. This paper presents a proposal for LO evaluation during their management for e-learning systems. To achieve this, we suggest specific steps for LO design, implementation and evaluation within the four stages proposed by the CIPP model (Context, Input, Process, Product).

  10. Geometry Based Design Automation : Applied to Aircraft Modelling and Optimization

    OpenAIRE

    Amadori, Kristian

    2012-01-01

    Product development processes are continuously challenged by demands for increased efficiency. As engineering products become more and more complex, efficient tools and methods for integrated and automated design are needed throughout the development process. Multidisciplinary Design Optimization (MDO) is one promising technique that has the potential to drastically improve concurrent design. MDO frameworks combine several disciplinary models with the aim of gaining a holistic perspective of ...

  11. Modeling a Thermoelectric Generator Applied to Diesel Automotive Heat Recovery

    Science.gov (United States)

    Espinosa, N.; Lazard, M.; Aixala, L.; Scherrer, H.

    2010-09-01

    Thermoelectric generators (TEGs) are outstanding devices for automotive waste heat recovery. Their packaging, lack of moving parts, and direct heat to electrical conversion are the main benefits. Usually, TEGs are modeled with a constant hot-source temperature. However, energy in exhaust gases is limited, thus leading to a temperature decrease as heat is recovered. Therefore thermoelectric properties change along the TEG, affecting performance. A thermoelectric generator composed of Mg2Si/Zn4Sb3 for high temperatures followed by Bi2Te3 for low temperatures has been modeled using engineering equation solver (EES) software. The model uses the finite-difference method with a strip-fins convective heat transfer coefficient. It has been validated on a commercial module with well-known properties. The thermoelectric connection and the number of thermoelements have been addressed as well as the optimum proportion of high-temperature material for a given thermoelectric heat exchanger. TEG output power has been estimated for a typical commercial vehicle at 90°C coolant temperature.
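As context for the abstract above, the constant hot-source simplification it mentions can be written down directly; the sketch below is a lumped single-couple estimate with assumed property values (not the paper's Mg2Si/Zn4Sb3 or Bi2Te3 data, and without the finite-difference discretization along the exchanger):

```python
# Lumped single-couple TEG sketch (assumed property values, and the
# constant hot-source simplification that the paper's finite-difference
# model goes beyond): the open-circuit voltage is alpha*dT, and the
# delivered power peaks at a matched load, P = (alpha*dT)^2 / (4*R).

def teg_power(alpha, r_internal, t_hot, t_cold, r_load):
    """Electrical power delivered to the load by one thermocouple [W]."""
    v_oc = alpha * (t_hot - t_cold)      # Seebeck open-circuit voltage [V]
    i = v_oc / (r_internal + r_load)     # series-circuit current [A]
    return i ** 2 * r_load

ALPHA = 400e-6   # V/K, assumed couple Seebeck coefficient
R_INT = 0.01     # ohm, assumed internal resistance

p = teg_power(ALPHA, R_INT, t_hot=500.0, t_cold=363.0, r_load=R_INT)
print(round(p * 1e3, 1))  # 75.1 mW at the matched load
```

The paper's point is precisely that chaining many such segments, with the gas temperature falling as heat is extracted, changes the optimum material split along the exchanger.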

  12. Applying a Virtual Economy Model in Mexico's Oil Sector

    International Nuclear Information System (INIS)

    Baker, G.

    1994-01-01

    The state of Mexico's oil industry, including the accomplishments of Pemex, Mexico's national oil company, was discussed, with particular reference to the progress made in the period of 1988-1994, and the outlook for innovations in the post-Salinas era. The concept of an evolutionary trend from a command economy (State as sole producer), towards market (State as regulator) or mixed economies (State as business partner) in developing countries, was introduced, placing Pemex within this evolutionary model as moving away from centralized control of oil production and distribution, while achieving international competitiveness. The concept of "virtual market economy" was also discussed. This model contains the legal basis of a command economy, while instituting modernization programs in order to stimulate market-economic conditions. This type of economy was considered particularly useful in this instance, since it would allow Pemex units to operate within international performance and price benchmarks while maintaining state monopoly. Specific details of how Pemex could transform itself to a virtual market economy were outlined. It was recommended that Pemex experiment with the virtual mixed economy model; in essence, making the state a co-producer, co-transporter, and co-distributor of hydrocarbons. The effects of such a move would be to bring non-debt funding to oil and gas production, transmission, and associated industrial activities

  13. A GOMS model applied to a simplified control panel design

    International Nuclear Information System (INIS)

    Chavez, C.; Edwards, R.M.

    1992-01-01

    The design of the user interface for a new system requires many decisions to be considered. Developing sensitivity to user needs requires understanding user behavior. The how-to-do-it knowledge is a mixture of task-related and interface-related components. A conscientious analysis of these components allows the designer to construct a model in terms of goals, operators, methods, and selection rules (a GOMS model) that can be advantageously used in the design process and evaluation of a user interface. The emphasis of the present work is on describing the importance and use of a GOMS model as a formal user interface analysis tool in the development of a simplified panel for the control of a nuclear power plant. At Pennsylvania State University, a highly automated control system with a greatly simplified human interface has been proposed to improve power plant safety. Supervisory control is to be conducted with a simplified control panel with the following functions: startup, shutdown, increase power, decrease power, reset, and scram. Initial programming of the operator interface has been initiated within the framework of a U.S. Department of Energy funded university project for intelligent distributed control. A hypothesis to be tested is that this scheme can also be used to estimate mental workload content and predict human performance
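The simplest member of the GOMS family, the keystroke-level model, makes the abstract's idea concrete: a method's predicted time is the sum of its primitive operator times. The operator times below are the commonly published keystroke-level estimates; the mapping to a "scram" action on the simplified panel is a hypothetical illustration, not the paper's model:

```python
# Keystroke-level GOMS sketch: predicted task time is the sum of the
# primitive operator times for the chosen method. Times are typical
# published KLM estimates; the operator sequence is invented.

KLM_SECONDS = {
    "K": 0.28,   # keystroke / button press (average user)
    "P": 1.10,   # point to a target
    "H": 0.40,   # home hands on the device
    "M": 1.35,   # mental preparation
}

def predict_time(ops):
    """Predicted execution time (s) for a sequence of KLM operators."""
    return sum(KLM_SECONDS[op] for op in ops)

# Hypothetical 'scram' action on the simplified panel: think, reach, press.
print(round(predict_time(["M", "H", "P", "K"]), 2))  # 3.13
```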

  14. Applying fuzzy analytic network process in quality function deployment model

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Afsharkazemi

    2012-08-01

    Full Text Available In this paper, we propose an empirical study of QFD implementation when fuzzy numbers are used to handle the uncertainty associated with different components of the proposed model. We implement a fuzzy analytic network to find the relative importance of various criteria, and using fuzzy numbers we calculate the relative importance of these factors. The proposed model of this paper uses a fuzzy matrix and house of quality to study product development in QFD, as well as the second phase, i.e. part deployment. In most studies, the primary objective is only on CRs to implement the quality function deployment, and other criteria such as production costs, manufacturing costs, etc. were disregarded. The results of using the fuzzy analytic network process based on the QFD model in the Daroupat packaging company to develop PVDC show that the most important indexes are being waterproof, resistant pill packages, and production cost. In addition, the PVDC coating is the most important index from the company experts’ point of view.
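The fuzzy-number machinery the abstract relies on reduces, in its simplest form, to triangular fuzzy weights and a defuzzification step. The sketch below uses invented weights (not the Daroupat survey data) and the centroid rule, one of several defuzzification choices:

```python
# Triangular-fuzzy-number sketch (toy weights): each criterion's
# importance is a (low, mode, high) triple, defuzzified by the centroid
# (l + m + u) / 3 to obtain a crisp score for ranking.

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

criteria = {
    "waterproof": (0.6, 0.8, 1.0),        # assumed fuzzy weights
    "production_cost": (0.5, 0.7, 0.9),
    "appearance": (0.1, 0.3, 0.5),
}

ranked = sorted(criteria, key=lambda c: defuzzify(criteria[c]), reverse=True)
print(ranked)  # ['waterproof', 'production_cost', 'appearance']
```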

  15. Applying Emax model and bivariate thin plate splines to assess drug interactions.

    Science.gov (United States)

    Kong, Maiying; Lee, J Jack

    2010-01-01

    We review the semiparametric approach previously proposed by Kong and Lee and extend it to a case in which the dose-effect curves follow the Emax model instead of the median effect equation. When the maximum effects for the investigated drugs are different, we provide a procedure to obtain the additive effect based on the Loewe additivity model. Then, we apply a bivariate thin plate spline approach to estimate the effect beyond additivity along with its 95 per cent point-wise confidence interval as well as its 95 per cent simultaneous confidence interval for any combination dose. Thus, synergy, additivity, and antagonism can be identified. The advantages of the method are that it provides an overall assessment of the combination effect on the entire two-dimensional dose space spanned by the experimental doses, and it enables us to identify complex patterns of drug interaction in combination studies. In addition, this approach is robust to outliers. To illustrate this procedure, we analyzed data from two case studies.
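The Loewe-additivity reference surface that the spline deviations are measured against can be computed directly from fitted Emax curves. The sketch below uses illustrative parameters (not the case-study estimates) and solves the Loewe equation by bisection:

```python
# Loewe-additivity sketch for Emax dose-response curves (all parameter
# values are illustrative). For a combination (d1, d2), the additive
# effect E solves d1/D1(E) + d2/D2(E) = 1, where Di(E) is the dose of
# drug i alone producing effect E; solved here by bisection.

def emax_effect(d, emax, ed50, m):
    """Emax model: effect of dose d."""
    return emax * d**m / (ed50**m + d**m)

def inverse_dose(e, emax, ed50, m):
    """Dose producing effect e on an Emax curve (requires 0 < e < emax)."""
    return ed50 * (e / (emax - e)) ** (1.0 / m)

def loewe_additive_effect(d1, d2, p1, p2, tol=1e-9):
    """Additive effect of doses (d1, d2) for drugs with Emax params p1, p2."""
    lo, hi = 0.0, min(p1[0], p2[0]) - tol   # effect below the smaller Emax
    while hi - lo > tol:
        e = 0.5 * (lo + hi)
        s = d1 / inverse_dose(e, *p1) + d2 / inverse_dose(e, *p2)
        lo, hi = (e, hi) if s > 1 else (lo, e)   # s decreases as e grows
    return 0.5 * (lo + hi)

p1 = (1.0, 2.0, 1.0)   # (Emax, ED50, slope) for drug 1, assumed
p2 = (1.0, 4.0, 1.0)   # (Emax, ED50, slope) for drug 2, assumed
e_add = loewe_additive_effect(2.0, 4.0, p1, p2)
print(round(e_add, 3))  # 0.667
```

An observed combination effect above this additive value indicates synergy; below it, antagonism.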

  16. A Conceptual Modeling Approach for OLAP Personalization

    Science.gov (United States)

    Garrigós, Irene; Pardillo, Jesús; Mazón, Jose-Norberto; Trujillo, Juan

    Data warehouses rely on multidimensional models in order to provide decision makers with appropriate structures to intuitively analyze data with OLAP technologies. However, data warehouses may be potentially large and multidimensional structures become increasingly complex to be understood at a glance. Even if a departmental data warehouse (also known as a data mart) is used, these structures would still be too complex. As a consequence, acquiring the required information is more costly than expected and decision makers using OLAP tools may get frustrated. In this context, current approaches for data warehouse design are focused on deriving a unique OLAP schema for all analysts from their previously stated information requirements, which is not enough to lighten the complexity of the decision making process. To overcome this drawback, we argue for personalizing multidimensional models for OLAP technologies according to the continuously changing user characteristics, context, requirements and behaviour. In this paper, we present a novel approach to personalizing OLAP systems at the conceptual level based on the underlying multidimensional model of the data warehouse, a user model and a set of personalization rules. The great advantage of our approach is that a personalized OLAP schema is provided for each decision maker, contributing to better satisfy their specific analysis needs. Finally, we show the applicability of our approach through a sample scenario based on our CASE tool for data warehouse development.

  17. Inverse geothermal modelling applied to Danish sedimentary basins

    DEFF Research Database (Denmark)

    Poulsen, Soren E.; Balling, Niels; Bording, Thue S.

    2017-01-01

    . The numerical model was utilized for predicting and contouring temperatures at 2000 and 3000 m depths and for two main geothermal reservoir units, the Gassum (Lower Jurassic-Upper Triassic) and Bunter/Skagerrak (Triassic) reservoirs, both currently utilized for geothermal energy production. Temperature...... gradients to depths of 2000-3000 m are generally around 25-30 degrees C km(-1), locally up to about 35 degrees C km(-1). Large regions have geothermal reservoirs with characteristic temperatures ranging from ca. 40-50 degrees C, at 1000-1500 m depth, to ca. 80-110 degrees C, at 2500-3500 m, however...

  18. Modeling external constraints: Applying expert systems to nuclear plants

    International Nuclear Information System (INIS)

    Beck, C.E.; Behera, A.K.

    1993-01-01

    Artificial Intelligence (AI) applications in nuclear plants have received much attention over the past decade. Specific applications that have been addressed include development of models and knowledge-bases, plant maintenance, operations, procedural guidance, risk assessment, and design tools. This paper examines the issue of external constraints, with a focus on the use of AI and expert systems as design tools. It also provides several suggested methods for addressing these constraints within the AI framework. These methods include a State Matrix scheme, a layered structure for the knowledge base, and application of the dynamic parameter concept

  19. Automated parameter tuning applied to sea ice in a global climate model

    Science.gov (United States)

    Roach, Lettie A.; Tett, Simon F. B.; Mineter, Michael J.; Yamazaki, Kuniko; Rae, Cameron D.

    2018-01-01

    This study investigates the hypothesis that a significant portion of spread in climate model projections of sea ice is due to poorly-constrained model parameters. New automated methods for optimization are applied to historical sea ice in a global coupled climate model (HadCM3) in order to calculate the combination of parameters required to reduce the difference between simulation and observations to within the range of model noise. The optimized parameters result in a simulated sea-ice time series which is more consistent with Arctic observations throughout the satellite record (1980-present), particularly in the September minimum, than the standard configuration of HadCM3. Divergence from observed Antarctic trends and mean regional sea ice distribution reflects broader structural uncertainty in the climate model. We also find that the optimized parameters do not cause adverse effects on the model climatology. This simple approach provides evidence for the contribution of parameter uncertainty to spread in sea ice extent trends and could be customized to investigate uncertainties in other climate variables.
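The core loop of such automated tuning (simulate, score against observations, keep the best parameters) can be sketched in a few lines. Everything below is a toy: a one-parameter linear "model" with synthetic observations, nothing like HadCM3 or the paper's optimization method:

```python
# Automated-tuning sketch: search parameter space for the value that
# minimises the misfit between simulated and "observed" sea-ice extent.
# The model, observations and search strategy are all invented toys.

import random

def simulate(decline_rate, years=10):
    """Toy model: September extent (1e6 km^2) falls linearly over time."""
    return [12.0 - decline_rate * t for t in range(years)]

def misfit(sim, obs):
    """Mean squared difference between simulation and observations."""
    return sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)

obs = simulate(0.35)                      # synthetic "observations"

random.seed(0)
best_p, best_cost = None, float("inf")
for _ in range(200):                      # crude random search over the prior
    p = random.uniform(0.0, 1.0)
    cost = misfit(simulate(p), obs)
    if cost < best_cost:
        best_p, best_cost = p, cost

print(best_cost < 0.1)  # True: the tuned parameter reproduces the data
```

Real climate-model tuning replaces the random search with far more sample-efficient methods and accepts any parameter set whose misfit falls within the model's internal noise, as the abstract describes.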

  20. FDTD-based Transcranial Magnetic Stimulation model applied to specific neurodegenerative disorders.

    Science.gov (United States)

    Fanjul-Vélez, Félix; Salas-García, Irene; Ortega-Quijano, Noé; Arce-Diego, José Luis

    2015-01-01

    Non-invasive treatment of neurodegenerative diseases is particularly challenging in Western countries, where the population age is increasing. In this work, magnetic propagation in human head is modelled by Finite-Difference Time-Domain (FDTD) method, taking into account specific characteristics of Transcranial Magnetic Stimulation (TMS) in neurodegenerative diseases. It uses a realistic high-resolution three-dimensional human head mesh. The numerical method is applied to the analysis of magnetic radiation distribution in the brain using two realistic magnetic source models: a circular coil and a figure-8 coil commonly employed in TMS. The complete model was applied to the study of magnetic stimulation in Alzheimer and Parkinson Diseases (AD, PD). The results show the electrical field distribution when magnetic stimulation is supplied to those brain areas of specific interest for each particular disease. Thereby the current approach entails a high potential for the establishment of the current underdeveloped TMS dosimetry in its emerging application to AD and PD. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
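The FDTD method named in the abstract advances electric and magnetic fields in alternating half-steps on a staggered grid. A one-dimensional free-space toy (nothing like the paper's 3-D head mesh or coil source models) shows the leapfrog update:

```python
# 1-D FDTD sketch (free-space toy grid). In normalised units the Yee
# update is ez[k] += C*(hy[k-1] - hy[k]) and hy[k] += C*(ez[k] - ez[k+1]),
# with Courant number C = 0.5; a soft Gaussian source launches a pulse.

import math

N, STEPS, C = 200, 150, 0.5
ez = [0.0] * N          # electric field
hy = [0.0] * N          # magnetic field, staggered half a cell

for t in range(STEPS):
    for k in range(1, N):                  # E update from the curl of H
        ez[k] += C * (hy[k - 1] - hy[k])
    ez[100] += math.exp(-((t - 30) ** 2) / 100.0)   # soft Gaussian source
    for k in range(N - 1):                 # H update from the curl of E
        hy[k] += C * (ez[k] - ez[k + 1])

# After 150 steps the pulse has propagated roughly 60 cells each way.
print(max(abs(v) for v in ez[140:180]) > 0.5)  # True
```

The 3-D version used for TMS dosimetry applies the same update to all six field components over a voxelized tissue model with position-dependent material properties.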

  1. Neighborhood Design, Physical Activity, and Wellbeing: Applying the Walkability Model

    Directory of Open Access Journals (Sweden)

    Adriana A. Zuniga-Teran

    2017-01-01

    Full Text Available Neighborhood design affects lifestyle physical activity, and ultimately human wellbeing. There are, however, a limited number of studies that examine neighborhood design types. In this research, we examine four types of neighborhood designs: traditional development, suburban development, enclosed community, and cluster housing development, and assess their level of walkability and their effects on physical activity and wellbeing. We examine significant associations through a questionnaire (n = 486) distributed in Tucson, Arizona using the Walkability Model. Among the tested neighborhood design types, traditional development showed significant associations and the highest value for walkability, as well as for each of the two types of walking (recreation and transportation) representing physical activity. Suburban development showed significant associations and the highest mean values for mental health and wellbeing. Cluster housing showed significant associations and the highest mean value for social interactions with neighbors and for perceived safety from crime. Enclosed community did not obtain the highest means for any wellbeing benefit. The Walkability Model proved useful in identifying the walkability categories associated with physical activity and perceived crime. For example, the experience category was strongly and inversely associated with perceived crime. This study provides empirical evidence of the importance of including vegetation, particularly trees, throughout neighborhoods in order to increase physical activity and wellbeing. Likewise, the results suggest that regular maintenance is an important strategy to improve mental health and overall wellbeing in cities.

  2. Neighborhood Design, Physical Activity, and Wellbeing: Applying the Walkability Model.

    Science.gov (United States)

    Zuniga-Teran, Adriana A; Orr, Barron J; Gimblett, Randy H; Chalfoun, Nader V; Guertin, David P; Marsh, Stuart E

    2017-01-13

    Neighborhood design affects lifestyle physical activity, and ultimately human wellbeing. There are, however, a limited number of studies that examine neighborhood design types. In this research, we examine four types of neighborhood designs: traditional development, suburban development, enclosed community, and cluster housing development, and assess their level of walkability and their effects on physical activity and wellbeing. We examine significant associations through a questionnaire ( n = 486) distributed in Tucson, Arizona using the Walkability Model. Among the tested neighborhood design types, traditional development showed significant associations and the highest value for walkability, as well as for each of the two types of walking (recreation and transportation) representing physical activity. Suburban development showed significant associations and the highest mean values for mental health and wellbeing. Cluster housing showed significant associations and the highest mean value for social interactions with neighbors and for perceived safety from crime. Enclosed community did not obtain the highest means for any wellbeing benefit. The Walkability Model proved useful in identifying the walkability categories associated with physical activity and perceived crime. For example, the experience category was strongly and inversely associated with perceived crime. This study provides empirical evidence of the importance of including vegetation, particularly trees, throughout neighborhoods in order to increase physical activity and wellbeing. Likewise, the results suggest that regular maintenance is an important strategy to improve mental health and overall wellbeing in cities.

  3. Applying the Health Belief Model to college students' health behavior

    Science.gov (United States)

    Kim, Hak-Seon; Ahn, Joo

    2012-01-01

    The purpose of this research was to investigate how university students' nutrition beliefs influence their health behavioral intention. This study used an online survey engine (Qulatrics.com) to collect data from college students. Out of 253 questionnaires collected, 251 questionnaires (99.2%) were used for the statistical analysis. Confirmatory Factor Analysis (CFA) revealed that six dimensions, "Nutrition Confidence," "Susceptibility," "Severity," "Barrier," "Benefit," "Behavioral Intention to Eat Healthy Food," and "Behavioral Intention to do Physical Activity," had construct validity; Cronbach's alpha coefficient and composite reliabilities were tested for item reliability. The results validate that objective nutrition knowledge was a good predictor of college students' nutrition confidence. The results also clearly showed that two direct measures were significant predictors of behavioral intentions as hypothesized. Perceived benefit of and perceived barrier to eating healthy food had significant effects on behavioral intentions and were valid measures for determining behavioral intentions. These findings can enhance the extant literature on the universal applicability of the model and serve as useful references for further investigations of the validity of the model within other health care or foodservice settings and for other health behavioral categories. PMID:23346306

  4. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...... for hierarchical data structures, reflecting increasingly common types of assay data. We illustrate the usefulness of the methodology by means of a cytotoxicology example where the sensitivity of two types of assays are evaluated and compared. By means of a simulation study, we show that the proposed framework......

  5. Applying attachment theory to effective practice with hard-to-reach youth: the AMBIT approach.

    Science.gov (United States)

    Bevington, Dickon; Fuggle, Peter; Fonagy, Peter

    2015-01-01

    Adolescent Mentalization-Based Integrative Treatment (AMBIT) is a developing approach to working with "hard-to-reach" youth burdened with multiple co-occurring morbidities. This article reviews the core features of AMBIT, exploring applications of attachment theory to understand what makes young people "hard to reach," and provide routes toward increased security in their attachment to a worker. Using the theory of the pedagogical stance and epistemic ("pertaining to knowledge") trust, we show how it is the therapeutic worker's accurate mentalizing of the adolescent that creates conditions for new learning, including the establishment of alternative (more secure) internal working models of helping relationships. This justifies an individual keyworker model focused on maintaining a mentalizing stance toward the adolescent, but simultaneously emphasizing the critical need for such keyworkers to remain well connected to their wider team, avoiding activation of their own attachment behaviors. We consider the role of AMBIT in developing a shared team culture (shared experiences, shared language, shared meanings), toward creating systemic contexts supportive of such relationships. We describe how team training may enhance the team's ability to serve as a secure base for keyworkers, and describe an innovative approach to treatment manualization, using a wiki format as one way of supporting this process.

  6. Applying quantitative structure–activity relationship approaches to nanotoxicology: Current status and future potential

    International Nuclear Information System (INIS)

    Winkler, David A.; Mombelli, Enrico; Pietroiusti, Antonio; Tran, Lang; Worth, Andrew; Fadeel, Bengt; McCall, Maxine J.

    2013-01-01

    The potential (eco)toxicological hazard posed by engineered nanoparticles is a major scientific and societal concern since several industrial sectors (e.g. electronics, biomedicine, and cosmetics) are exploiting the innovative properties of nanostructures resulting in their large-scale production. Many consumer products contain nanomaterials and, given their complex life-cycle, it is essential to anticipate their (eco)toxicological properties in a fast and inexpensive way in order to mitigate adverse effects on human health and the environment. In this context, the application of the structure–toxicity paradigm to nanomaterials represents a promising approach. Indeed, according to this paradigm, it is possible to predict toxicological effects induced by chemicals on the basis of their structural similarity with chemicals for which toxicological endpoints have been previously measured. These structure–toxicity relationships can be quantitative or qualitative in nature and they can predict toxicological effects directly from the physicochemical properties of the entities (e.g. nanoparticles) of interest. Therefore, this approach can aid in prioritizing resources in toxicological investigations while reducing the ethical and monetary costs that are related to animal testing. The purpose of this review is to provide a summary of recent key advances in the field of QSAR modelling of nanomaterial toxicity, to identify the major gaps in research required to accelerate the use of quantitative structure–activity relationship (QSAR) methods, and to provide a roadmap for future research needed to achieve QSAR models useful for regulatory purposes

  7. Mathematical Modeling Applied to Prediction of Landslides in Southern Brazil

    Science.gov (United States)

    Silva, Lúcia; Araújo, João; Braga, Beatriz; Fernandes, Nelson

    2013-04-01

    Mass movements are natural phenomena that occur on the slopes and are important agents working in landscape development. These movements have caused serious damage to infrastructure and properties. In addition to the mass movements occurring in natural slopes, there is also a large number of accidents induced by human action in the landscape. The change of use and land cover for the introduction of agriculture is a good example that have affected the stability of slopes. Land use and/or land cover changes have direct and indirect effects on slope stability and frequently represent a major factor controlling the occurrence of man-induced mass movements. In Brazil, especially in the southern and southeastern regions, areas of original natural rain forest have been continuously replaced by agriculture during the last decades, leading to important modifications in soil mechanical properties and to major changes in hillslope hydrology. In these regions, such effects are amplified due to the steep hilly topography, intense summer rainfall events and dense urbanization. In November 2008, a major landslide event took place in a rural area with intensive agriculture in the state of Santa Catarina (Morro do Baú) where many catastrophic landslides were triggered after a long rainy period. In this area, the natural forest has been replaced by huge banana and pine plantations. The state of Santa Catarina in recent decades has been the scene of several incidents of mass movements such as this catastrophic event. In this study, based on field mapping and modeling, we characterize the role played by geomorphological and geological factors in controlling the spatial distribution of landslides in the Morro do Baú area. In order to attain such objective, a digital elevation model of the basin was generated with a 10m grid in which the topographic parameters were obtained. The spatial distribution of the scars from this major event was mapped from another image, obtained immediately

  8. Generalised additive modelling approach to the fermentation process of glutamate.

    Science.gov (United States)

    Liu, Chun-Bo; Li, Yun; Pan, Feng; Shi, Zhong-Ping

    2011-03-01

    In this work, generalised additive models (GAMs) were used for the first time to model the fermentation of glutamate (Glu). It was found that three fermentation parameters, namely fermentation time (T), dissolved oxygen (DO) and oxygen uptake rate (OUR), could capture 97% of the variance in the production of Glu during the fermentation process through a GAM model calibrated using online data from 15 fermentation experiments. This model was applied to investigate the individual and combined effects of T, DO and OUR on the production of Glu. The conditions to optimize the fermentation process were proposed based on the simulation study from this model. Results suggested that the production of Glu can reach a high level by controlling concentration levels of DO and OUR to the proposed optimization conditions during the fermentation process. The GAM approach therefore provides an alternative way to model and optimize the fermentation process of Glu. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
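A GAM represents the response as a sum of smooth one-dimensional functions of each input, typically fitted by backfitting. The sketch below uses synthetic data and a crude binned-mean smoother (not the paper's fermentation data or spline smoothers) to show the backfitting cycle:

```python
# Backfitting sketch of a generalised additive model:
# y = intercept + f1(x1) + f2(x2) + noise, with each f_j re-estimated
# in turn against the partial residuals of the others.

import math

def smooth(x, r, bins=10):
    """Crude binned-mean smoother: fitted values of residuals r vs x."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / bins
    idx = [min(int((v - lo) / width), bins - 1) for v in x]
    means = []
    for b in range(bins):
        vals = [r[i] for i in range(len(x)) if idx[i] == b]
        means.append(sum(vals) / len(vals) if vals else 0.0)
    return [means[i] for i in idx]

def backfit(x1, x2, y, sweeps=20):
    n = len(y)
    alpha = sum(y) / n
    f1, f2 = [0.0] * n, [0.0] * n
    for _ in range(sweeps):
        f1 = smooth(x1, [y[i] - alpha - f2[i] for i in range(n)])
        f1 = [v - sum(f1) / n for v in f1]      # centre each component
        f2 = smooth(x2, [y[i] - alpha - f1[i] for i in range(n)])
        f2 = [v - sum(f2) / n for v in f2]
    return alpha, f1, f2

x1 = [i / 99 for i in range(100)]
x2 = [((i * 37) % 100) / 99 for i in range(100)]    # decorrelated grid
y = [math.sin(6.28 * a) + 2.0 * b for a, b in zip(x1, x2)]

alpha, f1, f2 = backfit(x1, x2, y)
fit = [alpha + f1[i] + f2[i] for i in range(100)]
sse_fit = sum((a - b) ** 2 for a, b in zip(y, fit))
sse_mean = sum((a - alpha) ** 2 for a in y)
print(sse_fit < 0.2 * sse_mean)  # True: the additive fit explains most variance
```

The fitted components f1 and f2 can then be inspected individually, which is how the paper separates the effects of T, DO and OUR on Glu production.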

  9. "Let's Move" campaign: applying the extended parallel process model.

    Science.gov (United States)

    Batchelder, Alicia; Matusitz, Jonathan

    2014-01-01

    This article examines Michelle Obama's health campaign, "Let's Move," through the lens of the extended parallel process model (EPPM). "Let's Move" aims to reduce the childhood obesity epidemic in the United States. Developed by Kim Witte, EPPM rests on the premise that people's attitudes can be changed when fear is exploited as a factor of persuasion. Fear appeals work best (a) when a person feels a concern about the issue or situation, and (b) when he or she believes to have the capability of dealing with that issue or situation. Overall, the analysis found that "Let's Move" is based on past health campaigns that have been successful. An important element of the campaign is the use of fear appeals (as it is postulated by EPPM). For example, part of the campaign's strategies is to explain the severity of the diseases associated with obesity. By looking at the steps of EPPM, readers can also understand the strengths and weaknesses of "Let's Move."

  10. On combined gravity gradient components modelling for applied geophysics

    International Nuclear Information System (INIS)

    Veryaskin, Alexey; McRae, Wayne

    2008-01-01

    Gravity gradiometry research and development has intensified in recent years to the extent that technologies providing a resolution of about 1 eotvos per 1 second average shall likely soon be available for multiple critical applications such as natural resources exploration, oil reservoir monitoring and defence establishment. Much of the content of this paper was composed a decade ago, and only minor modifications were required for the conclusions to be just as applicable today. In this paper we demonstrate how gravity gradient data can be modelled, and show some examples of how gravity gradient data can be combined in order to extract valuable information. In particular, this study demonstrates the importance of two gravity gradient components, Txz and Tyz, which, when processed together, can provide more information on subsurface density contrasts than that derived solely from the vertical gravity gradient (Tzz)
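The simplest density contrast for modelling such data is a buried excess point mass, for which the gradient tensor has the closed form T_ij = GM(3 x_i x_j - r^2 δ_ij)/r^5. The sketch below (invented mass and geometry) computes the Txz, Tyz and Tzz components discussed above:

```python
# Point-source gravity gradient sketch: T_ij = G*M*(3*x_i*x_j -
# r^2*delta_ij) / r^5 for a buried excess point mass (invented values).
# 1 eotvos = 1e-9 s^-2.

G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]

def gradient_tensor(mass, dx, dy, dz):
    """(Txz, Tyz, Tzz) in eotvos at offset (dx, dy, dz) from the mass."""
    r2 = dx * dx + dy * dy + dz * dz
    r5 = r2 ** 2.5
    txz = G * mass * 3.0 * dx * dz / r5
    tyz = G * mass * 3.0 * dy * dz / r5
    tzz = G * mass * (3.0 * dz * dz - r2) / r5
    return txz * 1e9, tyz * 1e9, tzz * 1e9

# 1e9 kg excess mass 100 m below ground, station offset 50 m in x:
txz, tyz, tzz = gradient_tensor(1e9, dx=50.0, dy=0.0, dz=100.0)
print(round(tzz, 1))  # 66.9 eotvos
```

Because Txz and Tyz respond to horizontal offsets from the body while Tzz peaks directly above it, combining the three components constrains the source position better than Tzz alone, which is the paper's point.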

  11. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS is a type of multiple criteria decision making (MCDM problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL technique, analytic network process (ANP, and the VIKOR method to evaluate and improve Six Sigma projects for reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our model can not only be used to select the best project, but also to analyze the gaps between existing performance values and aspiration levels in each dimension and criterion, based on the influential network relation map.

  12. Electrostatic Model Applied to ISS Charged Water Droplet Experiment

    Science.gov (United States)

    Stevenson, Daan; Schaub, Hanspeter; Pettit, Donald R.

    2015-01-01

    The electrostatic force can be used to create novel relative motion between charged bodies if it can be isolated from the stronger gravitational and dissipative forces. Recently, Coulomb orbital motion was demonstrated on the International Space Station by releasing charged water droplets in the vicinity of a charged knitting needle. In this investigation, the Multi-Sphere Method, an electrostatic model developed to study active spacecraft position control by Coulomb charging, is used to simulate the complex orbital motion of the droplets. When atmospheric drag is introduced, the simulated motion closely mimics that seen in the video footage of the experiment. The electrostatic force's inverse dependency on separation distance near the center of the needle lends itself to analytic predictions of the radial motion.
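In the Multi-Sphere Method named above, an elongated conductor is represented by a row of point spheres, and the force on a nearby body is the vector sum of inverse-square Coulomb terms. The sketch below uses invented charges and geometry (not the ISS experiment's values or the method's charge-redistribution step):

```python
# Multi-Sphere Method sketch: the charged needle is modelled as a row of
# point spheres, and the net Coulomb force on a droplet is the vector
# sum of F = kc*q1*q2 / r^2 contributions along each separation vector.

KC = 8.9875e9   # Coulomb constant [N m^2 C^-2]

def coulomb_force(q_droplet, droplet_pos, sphere_charges, sphere_pos):
    """Net (Fx, Fy) force on the droplet from all needle spheres [N]."""
    fx = fy = 0.0
    for q, (sx, sy) in zip(sphere_charges, sphere_pos):
        dx, dy = droplet_pos[0] - sx, droplet_pos[1] - sy
        r2 = dx * dx + dy * dy
        r = r2 ** 0.5
        f = KC * q_droplet * q / r2       # signed magnitude
        fx += f * dx / r
        fy += f * dy / r
    return fx, fy

# Needle as five spheres along x; droplet 2 cm above the needle centre.
spheres = [(-0.04 + 0.02 * i, 0.0) for i in range(5)]
charges = [2e-9] * 5                      # assumed +2 nC per sphere
fx, fy = coulomb_force(-1e-10, (0.0, 0.02), charges, spheres)
print(fy < 0.0)  # True: the oppositely charged droplet is attracted
```

Integrating this force over time, together with a drag term, reproduces the kind of orbital droplet motion described in the abstract.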

  13. Virtual building environments (VBE) - Applying information modeling to buildings

    Energy Technology Data Exchange (ETDEWEB)

    Bazjanac, Vladimir

    2004-06-21

    A Virtual Building Environment (VBE) is a "place" where building industry project staffs can get help in creating Building Information Models (BIM) and in the use of virtual buildings. It consists of a suite of industry software operated by industry experts who are also experts in the use of that software. The purpose of a VBE is to facilitate expert use of appropriate software applications in conjunction with each other to efficiently support multidisciplinary work. This paper defines BIM and virtual buildings, and describes VBE objectives, set-up, and characteristics of operation. It also reports on the VBE Initiative and the benefits observed in early VBE projects.

  14. Nonspherical Radiation Driven Wind Models Applied to Be Stars

    Science.gov (United States)

    Arauxo, F. X.

    1990-11-01

    In this work we present a model for the structure of a radiatively driven wind in the meridional plane of a hot star. Rotation effects and a simulation of viscous forces were included in the equations of motion. The line radiation force is treated with the finite-disk correction in self-consistent computations that also include gravity darkening as well as rotational distortion of the star. An application to a typical B1V star leads to mass-flux ratios between equator and pole of the order of 10, and mass loss rates in the range 5.10 to 10 Mo/yr. Our envelope models are flattened towards the equator, and the wind terminal velocities in that region are rather high (about 1000 km/s). However, in the region near the star the equatorial velocity field is dominated by rotation. Key words: STARS-BE -- STARS-WINDS

  15. A hybrid modeling approach for option pricing

    Science.gov (United States)

    Hajizadeh, Ehsan; Seifi, Abbas

    2011-11-01

    The complexity of option pricing has led many researchers to develop sophisticated models for this purpose. The commonly used Black-Scholes model suffers from a number of limitations, one of which is the controversial assumption that the underlying probability distribution is lognormal. We propose a pair of hybrid models to reduce these limitations and enhance option pricing ability. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. We then develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options on the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform it. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
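
    The two textbook ingredients this pipeline builds on, a GARCH(1,1) variance recursion and the Black-Scholes call price, can be sketched as follows. Parameter values are illustrative; the paper's specific GARCH variants and network models are not reproduced here.

```python
import math

def garch_variance(returns, omega, alpha, beta):
    """GARCH(1,1) one-step variance: s2_t = omega + alpha*r^2 + beta*s2,
    started from the long-run variance omega / (1 - alpha - beta)."""
    s2 = omega / (1.0 - alpha - beta)
    for r in returns:
        s2 = omega + alpha * r * r + beta * s2
    return s2

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

price = bs_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2)  # about 10.45
```

    In the hybrid setup, the annualized GARCH volatility forecast would replace the constant `sigma`, and the network models learn the residual mispricing.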

  16. Heat transfer modeling an inductive approach

    CERN Document Server

    Sidebotham, George

    2015-01-01

    This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but in a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, gaining a deeper, intrinsic grasp of models of heat transfer. Developed from over twenty-five years of lecture notes used to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering disciplines.

  17. An assessment of econometric models applied to fossil fuel power generation

    International Nuclear Information System (INIS)

    Gracceva, F.; Quercioli, R.

    2001-01-01

    The main purpose of this report is to provide a general view of studies in which the econometric approach is applied to fuel selection in fossil-fired power generation, focusing on the key role played by fuel prices. The report consists of a methodological analysis and a survey of the studies available in the literature. The methodological analysis makes it possible to assess the adequacy of the econometric approach to electrical power utility policy. To this end, the fundamentals of microeconomics underlying the econometric models are presented and discussed, and the hypotheses that must be assumed to comply with economic theory are checked against their actual implementation in the power generation sector. The survey of the available studies provides a detailed description of the Translog and Logit models and the results achieved with their application. These results show that the estimated models fit the data with good approximation, exhibit a certain degree of interfuel substitution, and show a meaningful demand-side reaction to prices.

  18. Applying the Network Simulation Method for testing chaos in a resistively and capacitively shunted Josephson junction model

    Directory of Open Access Journals (Sweden)

    Fernando Gimeno Bellver

    In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. This numerical approach establishes a formal equivalence between physical transport processes and electrical networks, and hence it can be applied to efficiently deal with a wide range of differential systems. The generality underlying that electrical equivalence makes it possible to apply circuit theory to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes, and the calculations have been carried out in PSpice, an electrical circuit software package. Overall, this numerical approach solves such Josephson differential models quickly. An empirical application regarding the study of the Josephson model completes the paper. Keywords: Electrical analogy, Network Simulation Method, Josephson junction, Chaos indicator, Fast Fourier Transform
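
    The resistively and capacitively shunted junction (RCSJ) dynamics solved by such methods take the dimensionless form beta_c * phi'' + phi' + sin(phi) = i. A direct RK4 integration sketch follows; the bias current and capacitance parameter are illustrative, and this is not the Network Simulation Method itself, which maps the same equation onto an equivalent circuit.

```python
import math

def rcsj_step(phi, v, i_dc, beta_c, dt):
    """One RK4 step of the dimensionless RCSJ equation
    beta_c * phi'' + phi' + sin(phi) = i_dc, with v = dphi/dtau."""
    def deriv(p, w):
        return w, (i_dc - math.sin(p) - w) / beta_c
    k1p, k1v = deriv(phi, v)
    k2p, k2v = deriv(phi + 0.5 * dt * k1p, v + 0.5 * dt * k1v)
    k3p, k3v = deriv(phi + 0.5 * dt * k2p, v + 0.5 * dt * k2v)
    k4p, k4v = deriv(phi + dt * k3p, v + dt * k3v)
    return (phi + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

phi, v = 0.0, 0.0                      # junction starting at rest
for _ in range(50000):                 # integrate to tau = 500
    phi, v = rcsj_step(phi, v, i_dc=0.5, beta_c=0.25, dt=0.01)
# Below the critical current (i_dc < 1) the damped junction settles into the
# zero-voltage state: sin(phi) -> i_dc and the voltage dphi/dtau -> 0.
```

    Chaos detection as in the paper would drive the junction with an AC bias and inspect the FFT of the resulting voltage time series for broadband spectra.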

  19. A nonlinear adaptive backstepping approach applied to a three phase PWM AC-DC converter feeding induction heating

    Science.gov (United States)

    Hadri-Hamida, A.; Allag, A.; Hammoudi, M. Y.; Mimoune, S. M.; Zerouali, S.; Ayad, M. Y.; Becherif, M.; Miliani, E.; Miraoui, A.

    2009-04-01

    This paper presents a new control strategy for a three-phase PWM converter, which consists of applying an adaptive nonlinear control. The input-output feedback linearization approach is based on the exact cancellation of the nonlinearity; for this reason, the technique is not robust, because system parameters can vary. First, a nonlinear system model is derived with the input current and the output voltage as state variables, using the power balance of the input and output. The nonlinear adaptive backstepping control can then compensate for the nonlinearities in the nominal system and for the uncertainties. Simulation results are obtained using Matlab/Simulink. These results show how the adaptive backstepping law updates the system parameters and provides an efficient control design, both for tracking and for regulation, in order to improve the power factor.

  20. Methodology to characterize a residential building stock using a bottom-up approach: a case study applied to Belgium

    Directory of Open Access Journals (Sweden)

    Samuel Gendebien

    2014-06-01

    In the last ten years, the development and implementation of measures to mitigate climate change have become of major importance. In Europe, the residential sector accounts for 27% of final energy consumption [1] and therefore contributes significantly to CO2 emissions. Roadmaps towards energy-efficient buildings have been proposed [2]. In this context, the detailed characterization of residential building stocks in terms of age, type of construction, insulation level, energy vector, and evolution prospects appears to be a useful contribution to assessing the impact of energy policy implementation. In this work, a methodology to develop a tree structure characterizing a residential building stock is presented within the framework of a bottom-up approach that aims to model and simulate domestic energy use. The methodology is applied to the Belgian case for the current situation and up to the 2030 horizon. The potential applications of the developed tool are outlined.

  1. Nonperturbative approach to the attractive Hubbard model

    International Nuclear Information System (INIS)

    Allen, S.; Tremblay, A.-M. S.

    2001-01-01

    A nonperturbative approach to the single-band attractive Hubbard model is presented in the general context of functional-derivative approaches to many-body theories. As in previous work on the repulsive model, the first step is based on a local-field-type ansatz, on enforcement of the Pauli principle, and on a number of crucial sum rules. The Mermin-Wagner theorem in two dimensions is automatically satisfied. At this level, two-particle self-consistency has been achieved. In the second step of the approximation, an improved expression for the self-energy is obtained by using the results of the first step in an exact expression for the self-energy, where the high- and low-frequency behaviors appear separately. The result is a cooperon-like formula. The required vertex corrections are included in this self-energy expression, as required by the absence of a Migdal theorem for this problem. Other approaches to the attractive Hubbard model are critically compared. Physical consequences of the present approach and agreement with Monte Carlo simulations are demonstrated in the accompanying paper (following this one).

  2. Evaluating treatment process redesign by applying the EFQM Excellence Model.

    Science.gov (United States)

    Nabitz, Udo; Schramade, Mark; Schippers, Gerard

    2006-10-01

    To evaluate a treatment process redesign programme implementing evidence-based treatment as part of a total quality management in a Dutch addiction treatment centre. Quality management was monitored over a period of more than 10 years in an addiction treatment centre with 550 professionals. Changes are evaluated, comparing the scores on the nine criteria of the European Foundation for Quality Management (EFQM) Excellence Model before and after a major redesign of treatment processes and ISO certification. In the course of 10 years, most intake, care, and cure processes were reorganized, the support processes were restructured and ISO certified, 29 evidence-based treatment protocols were developed and implemented, and patient follow-up measuring was established to make clinical outcomes transparent. Comparing the situation before and after the changes shows that the client satisfaction scores are stable, that the evaluation by personnel and society is inconsistent, and that clinical, production, and financial outcomes are positive. The overall EFQM assessment by external assessors in 2004 shows much higher scores on the nine criteria than the assessment in 1994. Evidence-based treatment can successfully be implemented in addiction treatment centres through treatment process redesign as part of a total quality management strategy, but not all results are positive.

  3. Non local theory of excitations applied to the Hubbard model

    International Nuclear Information System (INIS)

    Kakehashi, Y; Nakamura, T; Fulde, P

    2010-01-01

    We propose a nonlocal theory of single-particle excitations. It is based on an off-diagonal effective medium and the projection operator method for treating the retarded Green function. The theory determines the nonlocal effective medium matrix elements by requiring that they be consistent with those of the self-energy of the Green function. This allows for a description of long-range intersite correlations with high resolution in momentum space. A numerical study of the half-filled Hubbard model on the simple cubic lattice demonstrates that the theory is applicable to the strong-correlation regime as well as the intermediate regime of Coulomb interaction strength. Furthermore, the results show that nonlocal excitations cause sub-bands in the strong Coulomb interaction regime due to strong antiferromagnetic correlations, decrease the quasi-particle peak at the Fermi level with increasing Coulomb interaction, and shift the critical Coulomb interaction U_C2 for the divergence of the effective mass upward by at least a factor of two compared with the single-site approximation.

  4. Applying revised gap analysis model in measuring hotel service quality.

    Science.gov (United States)

    Lee, Yu-Cheng; Wang, Yu-Che; Chien, Chih-Hung; Wu, Chia-Huei; Lu, Shu-Chiung; Tsai, Sang-Bing; Dong, Weiwei

    2016-01-01

    The number of tourists coming to Taiwan has grown by 10-20% annually since 2010, driven largely by an increasing number of foreign visitors, particularly after deregulation admitted tourist groups, and later individual tourists, from mainland China. The purpose of this study is to propose a revised gap model to evaluate and improve service quality in the Taiwanese hotel industry. Service quality can thus be clearly measured through gap analysis, which is more effective for offering direction in developing and improving service quality. The HOLSERV instrument was used to identify and analyze service gaps from the perceptions of internal and external customers. The sample for this study included three main categories of respondents: tourists, employees, and managers. The results show that five gaps influenced tourists' evaluations of service quality. In particular, the study revealed that Gap 1 (management perceptions vs. customer expectations) and Gap 9 (service provider perceptions of management perceptions vs. service delivery) were more critical than the others in affecting perceived service quality, making service delivery the main area of improvement. This study contributes an evaluation of the service quality of the Taiwanese hotel industry from the perspectives of customers, service providers, and managers, which is considerably valuable for hotel managers. The aim of this study was to explore all of these together in order to better understand the possible gaps in the hotel industry in Taiwan.

  5. Applying Dispersive Changes to Lagrangian Particles in Groundwater Transport Models

    Science.gov (United States)

    Konikow, Leonard F.

    2010-01-01

    Method-of-characteristics groundwater transport models require that changes in concentrations computed within an Eulerian framework to account for dispersion be transferred to the moving particles used to simulate advective transport. A new algorithm was developed to accomplish this transfer between nodal values and advecting particles more precisely and realistically than currently used methods. The new method scales the changes and adjustments of particle concentrations relative to limiting bounds of concentration values determined from the population of adjacent nodal values. The method precludes unrealistic undershoot or overshoot for concentrations of individual particles. In the new method, if dispersion causes cell concentrations to decrease during a time step, the particles in the cell having the highest concentration will decrease the most, and those with the lowest concentration will decrease the least. The converse is true if dispersion causes concentrations to increase. Furthermore, if the initial concentration on a particle is outside the range of the adjacent nodal values, it will automatically be adjusted in the direction of the acceptable range of values. The new method is inherently mass conservative. © US Government 2010.
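
    One plausible reading of the scaling rule described above can be sketched as follows: each particle's adjustment is weighted by its distance from the limiting bound, so the mean particle change equals the nodal change (mass conservation) and no particle crosses the bound. The function name and the linear weighting are assumptions for illustration, not Konikow's published algorithm.

```python
def adjust_particles(concs, dc, c_min, c_max):
    """Distribute a nodal concentration change dc over the particles in a cell.

    A decrease is weighted toward high-concentration particles and an
    increase toward low-concentration ones, so the mean particle change
    equals dc and particles stay within the nodal bounds [c_min, c_max].
    """
    weights = ([c - c_min for c in concs] if dc < 0
               else [c_max - c for c in concs])
    total = sum(weights)
    n = len(concs)
    return [c + dc * n * w / total for c, w in zip(concs, weights)]

# A cell whose nodal concentration drops by 1; nodal bounds are [1, 7].
parts = adjust_particles([2.0, 4.0, 6.0], dc=-1.0, c_min=1.0, c_max=7.0)
# The highest-concentration particle decreases most; the mean drops by 1.
```

    With this weighting, a particle sitting exactly at the bound receives zero adjustment, which matches the no-undershoot/no-overshoot property described in the abstract.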

  6. An approach to applying quality assurance to nuclear fuel waste disposal

    International Nuclear Information System (INIS)

    Cooper, R.B.; Abel, R.

    1996-12-01

    An approach to developing and applying a quality assurance program for a nuclear fuel waste disposal facility is described. The proposed program would be based on N286-series standards used for quality assurance programs in nuclear power plants, and would cover all aspects of work across all stages of the project, from initial feasibility studies to final closure of the vault. A quality assurance manual describing the overall quality assurance program and its elements would be prepared at the outset. Planning requirements of the quality assurance program would be addressed in a comprehensive plan for the project. Like the QA manual, this plan would be prepared at the outset of the project and updated at each stage. Particular attention would be given to incorporating the observational approach in procedures for underground engineering, where the ability to adapt designs and mining techniques to changing ground conditions would be essential. Quality verification requirements would be addressed through design reviews, peer reviews, inspections and surveillance, equipment calibration and laboratory analysis checks, and testing programs. Regular audits and program reviews would help to assess the state of implementation, degree of conformance to standards, and effectiveness of the quality assurance program. Audits would be particularly useful in assessing the quality systems of contractors and suppliers, and in verifying the completion of work at the end of stages. 
Since a nuclear fuel waste disposal project would span a period of about 90 years, a key function of the quality assurance program would be to ensure the continuity of knowledge and the transfer of experience from one stage to another. This would be achieved by maintaining a records management system throughout the life of the project, by ensuring that work procedures were documented and kept current with new technologies and practices, and by instituting training programs that made use of the experience gained.

  7. Novel approach of fragment-based lead discovery applied to renin inhibitors.

    Science.gov (United States)

    Tawada, Michiko; Suzuki, Shinkichi; Imaeda, Yasuhiro; Oki, Hideyuki; Snell, Gyorgy; Behnke, Craig A; Kondo, Mitsuyo; Tarui, Naoki; Tanaka, Toshimasa; Kuroita, Takanobu; Tomimoto, Masaki

    2016-11-15

    A novel approach to fragment-based lead discovery was conducted and applied to renin inhibitors. Biochemical screening of a fragment library against renin provided a hit fragment that showed a characteristic interaction pattern with the target protein. The hit fragment bound only to the S1, S3, and S3SP (S3 subpocket) sites, without any interactions with the catalytic aspartate residues (Asp32 and Asp215, pepsin numbering). Prior to making chemical modifications to the hit fragment, we first identified its essential binding sites by utilizing the hit fragment's substructures. Second, we created a new, smaller scaffold that better occupied the identified essential S3 and S3SP sites, by utilizing library synthesis with high-throughput chemistry. We then revisited the S1 site and efficiently explored a good building block to attach to the scaffold with library synthesis. In the library syntheses, the binding modes of each pivotal compound were determined and confirmed by X-ray crystallography, and the library was strategically designed by a structure-based computational approach, not only to obtain a more active compound but also to obtain an informative structure-activity relationship (SAR). As a result, we obtained a lead compound offering synthetic accessibility as well as improved in vitro ADMET profiles. The fragments and compounds possessing a characteristic interaction pattern provided new structural insights into renin's active site and the potential to create a new generation of renin inhibitors. In addition, we demonstrated that our FBDD strategy, integrating a highly sensitive biochemical assay, X-ray crystallography, and high-throughput synthesis with in silico library design aimed at fragment morphing at the initial stage, was effective in elucidating a pocket profile and delivering a promising lead compound. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. An Effective Risk Minimization Strategy Applied to an Outdoor Music Festival: A Multi-Agency Approach.

    Science.gov (United States)

    Luther, Matt; Gardiner, Fergus; Lenson, Shane; Caldicott, David; Harris, Ryan; Sabet, Ryan; Malloy, Mark; Perkins, Jo

    2018-04-01

    Specific Event Identifiers a. Event type: Outdoor music festival. b. Event onset date: December 3, 2016. c. Location of event: Regatta Point, Commonwealth Park. d. Geographical coordinates: Canberra, Australian Capital Territory (ACT), Australia (-35.289002, 149.131957, 600m). e. Dates and times of observation in latitude, longitude, and elevation: December 3, 2016, 11:00-23:00. f. Response type: Event medical support. Introduction: Young adult patrons are vulnerable to risk-taking behavior, including drug taking, at outdoor music festivals. Therefore, the aim of this field report is to discuss the on-site medical response during a music festival and subsequently highlight observed strategies aimed at minimizing substance abuse harm. The observed outdoor music festival was held in Canberra (Australian Capital Territory [ACT], Australia) during the early summer of 2016, with an attendance of 23,008 patrons. First aid and on-site medical treatment data were obtained from the relevant treatment area and service. The integrated first aid service provided support to 292 patients. The final analysis consisted of 286 patient records, with 119 (41.6%) males and 167 (58.4%) females. Results from this report indicated that drug intoxication was an observed event issue, with 15 (5.1%) patients treated on site and 13 emergency department (ED) presentations, primarily related to trauma or medical conditions requiring further diagnostics. This report details an important public health need, which could be met by providing a coordinated approach, including a robust on-site medical service that accepts intrinsic risk-taking behavior. This may include on-site drug-checking, providing reliable information on drug content with associated education. Luther M, Gardiner F, Lenson S, Caldicott D, Harris R, Sabet R, Malloy M, Perkins J. An effective risk minimization strategy applied to an outdoor music festival: a multi-agency approach. Prehosp Disaster Med. 2018;33(2):220-224.

  9. VIPAR, a quantitative approach to 3D histopathology applied to lymphatic malformations.

    Science.gov (United States)

    Hägerling, René; Drees, Dominik; Scherzinger, Aaron; Dierkes, Cathrin; Martin-Almedina, Silvia; Butz, Stefan; Gordon, Kristiana; Schäfers, Michael; Hinrichs, Klaus; Ostergaard, Pia; Vestweber, Dietmar; Goerge, Tobias; Mansour, Sahar; Jiang, Xiaoyi; Mortimer, Peter S; Kiefer, Friedemann

    2017-08-17

    Lack of investigatory and diagnostic tools has been a major contributing factor in the failure to mechanistically understand lymphedema and other lymphatic disorders and to develop effective drug and surgical therapies. One difficulty has been understanding the true changes in lymph vessel pathology from standard 2D tissue sections. VIPAR (volume information-based histopathological analysis by 3D reconstruction and data extraction), a light-sheet microscopy-based approach for the analysis of tissue biopsies, is based on digital reconstruction and visualization of microscopic image stacks. VIPAR allows semiautomated segmentation of the vasculature and subsequent nonbiased extraction of characteristic vessel shape and connectivity parameters. We applied VIPAR to analyze biopsies from healthy, lymphedematous, and lymphangiomatous skin. Digital 3D reconstruction provided a directly visually interpretable, comprehensive representation of the lymphatic and blood vessels in the analyzed tissue volumes. The most conspicuous features were disrupted lymphatic vessels in lymphedematous skin and a hyperplasia (4.36-fold lymphatic vessel volume increase) in the lymphangiomatous skin. Both abnormalities were detected by the connectivity analysis based on extracted vessel shape and structure data. The quantitative evaluation of extracted data revealed a significant reduction of lymphatic segment length (51.3% and 54.2%) and straightness (89.2% and 83.7%) for lymphedematous and lymphangiomatous skin, respectively. Blood vessel length was significantly increased in the lymphangiomatous sample (239.3%). VIPAR is a volume-based tissue reconstruction, data extraction, and analysis approach that successfully distinguished healthy from lymphedematous and lymphangiomatous skin. Its application is not limited to the vascular systems or skin. Funding: Max Planck Society, DFG (SFB 656), and Cells-in-Motion Cluster of Excellence EXC 1003.

  10. A novel bi-level meta-analysis approach: applied to biological pathway analysis.

    Science.gov (United States)

    Nguyen, Tin; Tagett, Rebecca; Donato, Michele; Mitrea, Cristina; Draghici, Sorin

    2016-02-01

    The accumulation of high-throughput data in public repositories creates a pressing need for integrative analysis of multiple datasets from independent experiments. However, study heterogeneity, study bias, outliers, and the lack of power of available methods present real challenges for integrating genomic data. One practical drawback of many P-value-based meta-analysis methods, including Fisher's, Stouffer's, minP, and maxP, is that they are sensitive to outliers. Another drawback is that, because they perform just one statistical test for each individual experiment, they may not fully exploit the potentially large number of samples within each study. We propose a novel bi-level meta-analysis approach that employs the additive method and the Central Limit Theorem within each individual experiment and also across multiple experiments. We prove that the bi-level framework is robust against bias, less sensitive to outliers than other methods, and more sensitive to small changes in signal. For comparative analysis, we demonstrate that the intra-experiment analysis has more power than the equivalent statistical test performed on a single large experiment. For pathway analysis, we compare the proposed framework with classical meta-analysis approaches (Fisher's, Stouffer's, and the additive method) as well as with a dedicated pathway meta-analysis package (MetaPath), using 1252 samples from 21 datasets related to three human diseases: acute myeloid leukemia (9 datasets), type II diabetes (5 datasets), and Alzheimer's disease (7 datasets). Our framework outperforms its competitors in correctly identifying pathways relevant to the phenotypes. The framework is sufficiently general to be applied to any type of statistical meta-analysis. The R scripts are available on demand from the authors (sorin@wayne.edu). Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.
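
    Two of the classical combination rules named above are easy to state: Stouffer's method sums probit-transformed p-values, and the additive method applies the Central Limit Theorem to the mean of the p-values, which is Uniform(0,1) under the null with mean 1/2 and variance 1/12. A sketch of both (the bi-level framework itself is not reproduced here):

```python
import math
from statistics import NormalDist

_norm = NormalDist()

def stouffer(pvals):
    """Stouffer's method: Z = sum(z_i) / sqrt(k), with z_i = probit(1 - p_i)."""
    z = sum(_norm.inv_cdf(1.0 - p) for p in pvals) / math.sqrt(len(pvals))
    return 1.0 - _norm.cdf(z)

def additive_clt(pvals):
    """Additive method: under the null, mean(p) ~ N(1/2, 1/(12k)) by the CLT."""
    k = len(pvals)
    z = (sum(pvals) / k - 0.5) / math.sqrt(1.0 / (12.0 * k))
    return _norm.cdf(z)

evidence = [0.05, 0.05, 0.05, 0.05]
p_stouffer = stouffer(evidence)
p_additive = additive_clt(evidence)
# Both combine four marginal p-values of 0.05 into a combined p-value on
# the order of 1e-3 or smaller.
```

    Because the additive method works on the bounded sum of p-values rather than on unbounded z-scores, a single extreme p-value shifts the combined statistic by at most 1/k, which is the robustness property the paper builds on.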

  11. An approach to ductile fracture resistance modelling in pipeline steels

    Energy Technology Data Exchange (ETDEWEB)

    Pussegoda, L.N.; Fredj, A. [BMT Fleet Technology Ltd., Kanata (Canada)

    2009-07-01

    Ductile fracture resistance studies of high-grade steels in the pipeline industry often include analyses of the crack tip opening angle (CTOA) parameter using 3-point bend steel specimens. The CTOA is a function of specimen ligament size in high-grade materials. Other resistance measurements may include steady-state fracture propagation energy, critical fracture strain, and the adoption of damage mechanics. Modelling approaches for crack propagation are discussed in this paper. Tension tests were used to calibrate the damage model parameters. Results from the tests were then applied to crack propagation in a 3-point bend specimen made of 1980-vintage steel. Limitations of, and approaches to overcome, the difficulties associated with crack propagation modelling are discussed.

  12. Quasirelativistic quark model in quasipotential approach

    CERN Document Server

    Matveev, V A; Savrin, V I; Sissakian, A N

    2002-01-01

    The interaction of relativistic particles is described within the framework of the quasipotential approach. The presentation is based on the so-called covariant simultaneous formulation of quantum field theory, whereby the theory is considered on a space-like three-dimensional hypersurface in Minkowski space. Special attention is paid to methods of constructing various quasipotentials, as well as to applications of the quasipotential approach to describing the characteristics of relativistic particle interactions in quark models, namely: elastic hadron scattering amplitudes, the mass spectra and widths of meson decays, and the cross sections of deep inelastic lepton scattering on hadrons.

  13. A multiscale modeling approach for biomolecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)

    2015-04-15

    This paper presents a new multiscale molecular dynamics model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton's second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein's movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.

  14. A robust quantitative near infrared modeling approach for blend monitoring.

    Science.gov (United States)

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing Near-Infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development approach (blender-scale method) were compared by simultaneously monitoring a 1 kg batch size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop Near-Infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials.
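
    The abstract does not name the chemometric algorithm, so the sketch below uses principal component regression (PCR) on synthetic NIR-like spectra as a hedged stand-in for the calibration step; the band positions, noise level, and concentration range are invented for illustration.

```python
import numpy as np

# Synthetic two-component "NIR" spectra: an API band and an excipient band.
rng = np.random.default_rng(1)
wl = np.linspace(1100, 2500, 200)                    # wavelength grid (nm)
api_band = np.exp(-((wl - 1650) / 40) ** 2)          # fictitious API feature
exc_band = np.exp(-((wl - 2100) / 60) ** 2)          # fictitious excipient feature

conc = rng.uniform(0.05, 0.30, 80)                   # API mass fraction
X = (np.outer(conc, api_band) + np.outer(1 - conc, exc_band)
     + 0.01 * rng.standard_normal((80, 200)))        # instrument noise

# Principal component regression: project on leading PCs, then least squares.
Xtr, Xte, ytr, yte = X[:60], X[60:], conc[:60], conc[60:]
Xm, ym = Xtr.mean(axis=0), ytr.mean()
_, _, Vt = np.linalg.svd(Xtr - Xm, full_matrices=False)
k = 3                                                # retained components
T = (Xtr - Xm) @ Vt[:k].T
b = np.linalg.lstsq(T, ytr - ym, rcond=None)[0]
pred = ((Xte - Xm) @ Vt[:k].T) @ b + ym
rmsep = np.sqrt(np.mean((pred - yte) ** 2))
print(f"RMSEP on held-out spectra: {rmsep:.4f}")
```

    In practice, a PLS model with cross-validated component selection would play this role; the point here is only the calibrate-then-predict structure of an on-line blend monitor.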

  15. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
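
    The workflow above (non-intrusive surrogate construction followed by a Bayesian update) can be sketched with a scalar toy problem: fit a polynomial surrogate to a few "expensive" model runs, then run random-walk Metropolis against the surrogate posterior. The model, prior, and noise level are synthetic assumptions, not the ocean-model configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(theta):
    # monotonic stand-in for an expensive model response to a drag coefficient
    return theta + 0.3 * np.sin(2 * theta)

# 1) Non-intrusive surrogate: polynomial regression on 8 "expensive" runs.
train_t = np.linspace(0.0, 1.0, 8)
coeffs = np.polyfit(train_t, expensive_model(train_t), deg=5)
surrogate = lambda t: np.polyval(coeffs, t)

# 2) Synthetic observation at the "true" parameter value.
theta_true, sigma = 0.6, 0.05
y_obs = expensive_model(theta_true)

def log_post(t):
    if not 0.0 <= t <= 1.0:
        return -np.inf                     # uniform prior on [0, 1]
    return -0.5 * ((y_obs - surrogate(t)) / sigma) ** 2

# 3) Random-walk Metropolis, evaluating only the cheap surrogate.
chain, t = [], 0.5
lp = log_post(t)
for _ in range(20_000):
    prop = t + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        t, lp = prop, lp_prop
    chain.append(t)
post_mean = np.mean(chain[5_000:])         # discard burn-in
print(f"posterior mean: {post_mean:.3f}  (true value: {theta_true})")
```

    Every MCMC step costs a polynomial evaluation instead of a forward model run, which is exactly why the surrogate is built first.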

  17. METHODOLOGICAL APPROACHES FOR MODELING THE RURAL SETTLEMENT DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Gorbenkova Elena Vladimirovna

    2017-10-01

    Full Text Available Subject: the paper describes the research results on validation of a rural settlement development model. The basic methods and approaches for solving the problem of assessing the efficiency of urban and rural settlement development are considered. Research objectives: determination of methodological approaches to modeling and creating a model for the development of rural settlements. Materials and methods: domestic and foreign experience in modeling the territorial development of urban and rural settlements and settlement structures was generalized. The motivation for using the Pentagon-model for solving similar problems was demonstrated. Based on a systematic analysis of existing development models of urban and rural settlements, as well as the authors' method for assessing the level of agro-town development, the systems/factors that are necessary for the sustainable development of a rural settlement are identified. Results: we created a rural development model which consists of five major systems that include critical factors essential for achieving sustainable development of a settlement system: the ecological, economic, administrative, anthropogenic (physical) and social (supra-structure) systems. The methodological approaches for creating an evaluation model of rural settlement development were revealed; the basic motivating factors that provide interrelations of systems were determined; and the critical factors for each subsystem were identified and substantiated. Such an approach is justified by the composition of tasks for territorial planning at the local and state administration levels. The feasibility of applying the basic Pentagon-model, which was successfully used for solving analogous problems of sustainable development, was shown. Conclusions: the resulting model can be used for identifying and substantiating the critical factors for rural sustainable development and also become the basis of

  18. Lactic Acid Bacteria Selection for Biopreservation as a Part of Hurdle Technology Approach Applied on Seafood

    Directory of Open Access Journals (Sweden)

    Norman Wiernasz

    2017-05-01

    Full Text Available As fragile food commodities, fishery and seafood products can quickly deteriorate in microbial and organoleptic quality. In this context, improving microbial quality and safety throughout the whole food processing chain (from catch to plate) using hurdle technology, a combination of mild preserving technologies such as biopreservation, modified atmosphere packaging, and superchilling, is of great interest. As natural flora and producers of antimicrobial metabolites, lactic acid bacteria (LAB) are commonly studied for food biopreservation. Thirty-five LAB known to possess interesting antimicrobial activity were selected for their potential application as bioprotective agents as part of hurdle technology applied to fishery products. The selection approach was based on seven criteria, including antimicrobial activity, spoilage potential, tolerance to chitosan coating and the superchilling process, cross inhibition, biogenic amine production (histamine, tyramine), and antibiotic resistance. Antimicrobial activity was assessed against six common spoilage bacteria in fishery products (Shewanella baltica, Photobacterium phosphoreum, Brochothrix thermosphacta, Lactobacillus sakei, Hafnia alvei, Serratia proteamaculans) and one pathogenic bacterium (Listeria monocytogenes) in co-culture inhibitory assays miniaturized in 96-well microtiter plates. Antimicrobial activity and spoilage evaluation, both performed in cod and salmon juice, highlighted the existence of sensory signatures and inhibition profiles, which seem to be species related. Finally, six LAB with no unusual antibiotic resistance profile nor histamine production ability were selected as bioprotective agents for further in situ inhibitory assays in cod and salmon based products, alone or in combination with other hurdles (chitosan, modified atmosphere packaging, and superchilling).

  19. Extraction of thermal Green's function using diffuse fields: a passive approach applied to thermography

    Science.gov (United States)

    Capriotti, Margherita; Sternini, Simone; Lanza di Scalea, Francesco; Mariani, Stefano

    2016-04-01

    In the field of non-destructive evaluation, defect detection and visualization can be performed exploiting different techniques relying either on an active or a passive approach. In this paper, the passive approach is investigated due to its numerous advantages, and its application to thermography is explored. In previous works, it has been shown that it is possible to reconstruct the Green's function between any pair of points of a sensing grid by using noise originated from diffuse fields in acoustic environments. The extraction of the Green's function can be achieved by cross-correlating these random recorded waves. Averaging, filtering and the length of the measured signals play an important role in this process. This concept is here applied in an NDE perspective utilizing thermal fluctuations present on structural materials. Temperature variations interacting with the thermal properties of the specimen allow for the characterization of the material and its health condition. The exploitation of the thermographic image resolution as a dense grid of sensors constitutes the basic idea underlying passive thermography. Particular attention will be placed on the creation of a proper diffuse thermal field, studying the number, placement and excitation signal of heat sources. Results from numerical simulations will be presented to assess the capabilities and performance of the passive thermal technique devoted to defect detection and imaging of structural components.
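
    The core idea, recovering propagation between two points by cross-correlating a diffuse random field, can be shown in one dimension: two receivers record the same noise with a relative delay, and the peak of their cross-correlation sits at that delay. The record length and delay below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
delay = 25                         # true propagation delay (samples)

noise = rng.standard_normal(n)     # diffuse random field at receiver A
rec_a = noise
rec_b = np.roll(noise, delay)      # receiver B sees the field `delay` samples later

# Cross-correlate the two records; the lag of the peak estimates the delay,
# i.e. the arrival time of the empirical Green's function between A and B.
xcorr = np.correlate(rec_b, rec_a, mode="full")
lag = int(np.argmax(xcorr)) - (n - 1)
print(f"recovered delay: {lag} samples (true: {delay})")
```

    As the abstract notes, averaging over long records and filtering sharpen this peak in real, imperfectly diffuse fields.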

  20. Applied tagmemics: A heuristic approach to the use of graphic aids in technical writing

    Science.gov (United States)

    Brownlee, P. P.; Kirtz, M. K.

    1981-01-01

    In technical report writing, two needs which must be met if reports are to be usable by an audience are the language needs and the technical needs of that particular audience. A heuristic analysis helps to decide the most suitable format for information; that is, whether the information should be presented verbally or visually. The report writing process should be seen as an organic whole which can be divided and subdivided according to the writer's purpose, but which always functions as a totality. The tagmemic heuristic, because it itself follows a process of deconstructing and reconstructing information, lends itself to being a useful approach to the teaching of technical writing. By applying the abstract questions this heuristic asks to specific parts of the report, the language and technical needs of the audience are analyzed: the viability of the solution is examined within the givens of the corporate structure, and the graphic or verbal format that will best suit the writer's purpose is chosen. By following such a method, answers which are both specific and thorough in their range of application are found.

  1. An explorative chemometric approach applied to hyperspectral images for the study of illuminated manuscripts

    Science.gov (United States)

    Catelli, Emilio; Randeberg, Lise Lyngsnes; Alsberg, Bjørn Kåre; Gebremariam, Kidane Fanta; Bracci, Silvano

    2017-04-01

    Hyperspectral imaging (HSI) is a fast, non-invasive imaging technology recently applied in the field of art conservation. With the help of chemometrics, important information about the spectral properties and spatial distribution of pigments can be extracted from HSI data. With the intent of expanding the applications of chemometrics to the interpretation of hyperspectral images of historical documents and, at the same time, of studying the colorants and their spatial distribution on ancient illuminated manuscripts, an explorative chemometric approach is presented here. The method makes use of chemometric tools for spectral de-noising (minimum noise fraction (MNF)) and image analysis (multivariate image analysis (MIA) and iterative key set factor analysis (IKSFA)/spectral angle mapper (SAM)), which gave an efficient separation, classification and mapping of colorants from visible-near-infrared (VNIR) hyperspectral images of an ancient illuminated fragment. The identification of colorants was achieved by extracting and interpreting the VNIR spectra as well as by using a portable X-ray fluorescence (XRF) spectrometer.
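
    Of the tools listed, the spectral angle mapper (SAM) is the simplest to sketch: each pixel spectrum is classified by its angle to reference (endmember) spectra, which makes the mapping insensitive to illumination scaling. The endmember shapes and noise below are synthetic, not spectra from the manuscript study.

```python
import numpy as np

def spectral_angle(pixels, endmembers):
    """pixels: (N, B); endmembers: (K, B) -> angle matrix (N, K) in radians."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    e = endmembers / np.linalg.norm(endmembers, axis=1, keepdims=True)
    return np.arccos(np.clip(p @ e.T, -1.0, 1.0))

bands = np.linspace(400, 1000, 50)               # VNIR range (nm)
red_ref = np.exp(-((bands - 650) / 60) ** 2)     # fictitious red colorant
blue_ref = np.exp(-((bands - 460) / 50) ** 2)    # fictitious blue colorant
endmembers = np.vstack([red_ref, blue_ref])

rng = np.random.default_rng(4)
pixel = 0.9 * red_ref + 0.05 * rng.standard_normal(50)   # dimmed, noisy "red" pixel
label = int(np.argmin(spectral_angle(pixel[None, :], endmembers), axis=1)[0])
print("pixel classified as endmember", label)    # 0 = red reference
```

    In the workflow above, the endmembers would come from the IKSFA step rather than being specified by hand.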

  2. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected for, causes misleading statistical inference and analysis. Our goal, therefore, is to examine the relationship between the outcome variable and the unobserved exposure variable, given the observed mismeasured surrogate, by applying a Bayesian formulation to the EIV model. We extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.
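
    A short simulation shows why uncorrected measurement error misleads inference: classical error in a covariate attenuates the estimated slope toward zero, which is precisely the bias an EIV formulation corrects. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, beta = 20_000, 2.0
x_true = rng.standard_normal(n)                    # unobserved exposure
y = beta * x_true + 0.5 * rng.standard_normal(n)   # outcome
x_obs = x_true + rng.standard_normal(n)            # surrogate with unit error variance

naive = np.polyfit(x_obs, y, 1)[0]                 # regression ignoring the error
# attenuation: E[naive] = beta * var(x) / (var(x) + var(u)) = 2 * 1/(1+1) = 1
print(f"naive slope: {naive:.2f}   true beta: {beta}")
```

    A Bayesian EIV model treats `x_true` as a latent variable with its own prior and recovers the unattenuated slope from the same data.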

  3. Modelling of ductile and cleavage fracture by local approach

    International Nuclear Information System (INIS)

    Samal, M.K.; Dutta, B.K.; Kushwaha, H.S.

    2000-08-01

    This report describes the modelling of ductile and cleavage fracture processes by the local approach. It is now well known that conventional fracture mechanics methods based on single-parameter criteria are not adequate to model fracture processes, because the size and geometry of the flaw and the loading type and rate all affect the fracture resistance behaviour of a structure. Hence, it is questionable to use fracture resistance curves determined from standard tests in the analysis of real-life components, where all of the above effects are present. There is therefore a need for a method in which the parameters used for the analysis are true material properties, i.e. independent of geometry and size. One solution to this problem is the use of local approaches. These approaches have been extensively studied and applied to different materials (including SA33 Gr.6) in this report. Each method has been studied and reported in a separate section. This report is divided into five sections. Section-I gives a brief review of the fundamentals of the fracture process. Section-II deals with modelling of ductile fracture by locally uncoupled models. In this section, the critical cavity growth parameters of the different models are determined for the primary heat transport (PHT) piping material of the Indian pressurised heavy water reactor (PHWR). A comparative study has been made among the different models, and the dependency of the critical parameters on the stress triaxiality factor has also been studied. It is observed that Rice and Tracey's model is the most suitable one, but its parameters are not fully independent of the triaxiality factor. For this purpose, a modification to Rice and Tracey's model is suggested in Section-III. Section-IV deals with modelling of the ductile fracture process by locally coupled models. Section-V deals with modelling of the cleavage fracture process by Beremin's model, which is based on Weibull's
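
    As a concrete instance of a locally uncoupled criterion, the Rice-Tracey relation grows a void with plastic strain at a rate controlled by stress triaxiality; failure is declared when ln(R/R0) reaches a critical, material-specific value. The sketch below only illustrates the triaxiality dependence with invented numbers, not the report's PHT piping calibration.

```python
import numpy as np

def rice_tracey_log_growth(triaxiality, eps_p, alpha=0.283):
    """Rice-Tracey void growth: ln(R/R0) = alpha * exp(1.5 * T) * eps_p
    for a constant triaxiality T and equivalent plastic strain eps_p."""
    return alpha * np.exp(1.5 * triaxiality) * eps_p

eps_p = 0.4                          # equivalent plastic strain (illustrative)
for T in (0.33, 1.0, 2.0):           # smooth tension ... notched/cracked geometry
    ratio = np.exp(rice_tracey_log_growth(T, eps_p))
    print(f"triaxiality T = {T:.2f}:  R/R0 = {ratio:.2f}")
```

    The same strain drives far faster void growth at high triaxiality, which is why single-parameter fracture resistance curves transfer poorly between specimen geometries.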

  4. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  5. Integrated modelling of nitrate loads to coastal waters and land rent applied to catchment scale water management

    DEFF Research Database (Denmark)

    Jacosen, T.; Refsgaard, A.; Jacobsen, Brian H.

    Abstract The EU WFD requires an integrated approach to river basin management in order to meet environmental and ecological objectives. This paper presents concepts and a full-scale application of an integrated modelling framework. The Ringkoebing Fjord basin is characterized by intensive agricultural production, and leakage of nitrate constitutes a major pollution problem with respect to groundwater aquifers (drinking water), fresh surface water systems (water quality of lakes) and coastal receiving waters (eutrophication). The case study presented illustrates an advanced modelling approach applied…

  6. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in the sciences. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can realize the combination of statistics and dynamics to a certain extent.
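
    A minimal version of the twin experiment described above: integrate the classic Lorenz (1963) equations as the prediction model and a version with an extra forcing term as "reality"; the growing gap between the trajectories is the model error to be estimated. The forcing form and magnitude here are illustrative assumptions, not the paper's periodic evolutionary function.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0, err=0.0):
    x, y, z = state
    # `err` is a stand-in structural error added to the second equation
    return np.array([sigma * (y - x), x * (rho - z) - y + err, x * y - beta * z])

def rk4_step(f, state, dt, **kw):
    # classical 4th-order Runge-Kutta step
    k1 = f(state, **kw)
    k2 = f(state + 0.5 * dt * k1, **kw)
    k3 = f(state + 0.5 * dt * k2, **kw)
    k4 = f(state + dt * k3, **kw)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n = 0.01, 500
truth = model = np.array([1.0, 1.0, 1.0])
for _ in range(n):
    truth = rk4_step(lorenz, truth, dt, err=2.0)   # "reality"
    model = rk4_step(lorenz, model, dt, err=0.0)   # imperfect model
gap = np.linalg.norm(truth - model)
print(f"trajectory gap after {n * dt:.0f} time units: {gap:.1f}")
```

    The evolutionary-modeling step would then search for a correction term that, added to the imperfect model, shrinks exactly this gap on the historical record.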

  7. Interfacial Fluid Mechanics A Mathematical Modeling Approach

    CERN Document Server

    Ajaev, Vladimir S

    2012-01-01

    Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in the rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations, and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: Discusses mathematical models in the context of actual applications such as electrowetting; Includes unique material on fluid flow near structured surfaces and phase change phenomena; Shows readers how to solve modeling problems related to microscale multiphase flows Interfacial Fluid Me...

  8. A new modelling approach for zooplankton behaviour

    Science.gov (United States)

    Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.

    We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to that of BEER [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models—random walk and correlated walk models—as well as with observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to those of live copepods.
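
    The two baseline ("non-intelligent") movement models the network is compared against are easy to sketch: an uncorrelated random walk, and a correlated random walk whose turning angles are concentrated around zero so that heading persists. Step length and turning-angle spreads below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def walk(n_steps, turn_sd, step=1.0):
    """2-D walk; large turn_sd -> random walk, small turn_sd -> correlated walk."""
    heading = 0.0
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        heading += rng.normal(0.0, turn_sd)          # turning angle each step
        pos[i + 1] = pos[i] + step * np.array([np.cos(heading), np.sin(heading)])
    return pos

rw = walk(1000, turn_sd=np.pi)    # heading decorrelates almost every step
crw = walk(1000, turn_sd=0.2)     # persistent heading

print(f"net displacement, random walk:     {np.linalg.norm(rw[-1]):.1f}")
print(f"net displacement, correlated walk: {np.linalg.norm(crw[-1]):.1f}")
```

    The correlated walk disperses much farther for the same number of steps, which is one of the statistics such baselines are scored on against tank observations.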

  9. Applying a learning design methodology in the flipped classroom approach – empowering teachers to reflect and design for learning

    Directory of Open Access Journals (Sweden)

    Evangelia Triantafyllou

    2016-05-01

    Full Text Available One of the recent developments in teaching that relies heavily on current technology is the "flipped classroom" approach. In a flipped classroom, the traditional lecture and homework sessions are inverted. Students are provided with online material in order to gain the necessary knowledge before class, while class time is devoted to clarifications and application of this knowledge. The hypothesis is that there can be deep and creative discussions when teacher and students physically meet. This paper discusses how the learning design methodology can be applied to represent, share and guide educators through flipped classroom designs. In order to discuss the opportunities arising from this approach, the different components of the Learning Design – Conceptual Map (LD-CM) are presented and examined in the context of the flipped classroom. It is shown that viewing the flipped classroom through the lens of learning design can promote the use of theories and methods to evaluate its effect on the achievement of learning objectives, and that it may draw attention to the employment of methods to gather learner responses. Moreover, a learning design approach can enforce the detailed description of activities, tools and resources used in specific flipped classroom models, and it can make educators more aware of the decisions that have to be taken and the people who have to be involved when designing a flipped classroom. By using the LD-CM, this paper also draws attention to the importance of the characteristics and values of different stakeholders (i.e. institutions, educators, learners, and external agents), which influence the design and success of flipped classrooms. Moreover, it looks at the teaching cycle from a flipped instruction model perspective and adjusts it to cater for the reflection loops educators are involved in when designing, implementing and re-designing a flipped classroom. Finally, it highlights the effect of learning design on the guidance

  10. [Methodological novelties applied to the anthropology of food: agent-based models and social networks analysis].

    Science.gov (United States)

    Díaz Córdova, Diego

    2016-01-01

    The aim of this article is to introduce two methodological strategies that have not often been utilized in the anthropology of food: agent-based models and social network analysis. In order to illustrate these methods in action, two cases based on materials typical of the anthropology of food are presented. For the first strategy, fieldwork carried out in Quebrada de Humahuaca (province of Jujuy, Argentina) regarding meal recall was used, and for the second, elements of the concept of "domestic consumption strategies" applied by Aguirre were employed. The underlying idea is that, given that eating is recognized as a "total social fact" and, therefore, as a complex phenomenon, the methodological approach must also be characterized by complexity. The greater the number of methods utilized (with the appropriate rigor), the better able we will be to understand the dynamics of feeding in the social environment.

  11. Study of fusion mechanism of halo nuclear 11Be+208Pb by applying QMD model

    International Nuclear Information System (INIS)

    Wang Ning; Li Zhuxia

    2001-01-01

    The authors have studied the fusion reaction for 11 Be + 208 Pb near the barrier by applying the QMD model, and find that in fusion reactions induced by halo nuclei two competing mechanisms exist simultaneously. On one hand, 11 Be is a weakly bound nuclear system that is easily broken up by its interaction with the target as it approaches, so the fusion cross section is suppressed. On the other hand, several neutrons of 11 Be transfer into 208 Pb and interact with it, causing the local radius of 208 Pb to increase and resulting in an enhancement of the fusion cross section. The calculated fusion cross sections show an enhancement near the barrier, and the calculated results agree reasonably well with the experimental data

  12. Knowledge Creation and Conversion in Military Organizations: How the SECI Model is Applied Within Armed Forces

    Directory of Open Access Journals (Sweden)

    Andrzej Lis

    2014-01-01

    Full Text Available The aim of the paper is to analyze the knowledge creation and conversion processes in military organizations using the SECI model as a framework. First of all, knowledge creation activities in military organizations are identified and categorized. Then, knowledge socialization, externalization, combination and internalization processes are analyzed. The paper studies methods, techniques and tools applied by NATO and the U.S. Army to support the aforementioned processes. As regards the issue of knowledge socialization, counseling, coaching, mentoring and communities of practice are discussed. Lessons Learned systems and After Action Reviews illustrate the military approaches to knowledge externalization. Producing doctrines in the process of operational standardization is presented as a solution used by the military to combine knowledge in order to codify it. Finally, knowledge internalization through training and education is explored.

  13. Functional analysis and applied optimization in Banach spaces applications to non-convex variational models

    CERN Document Server

    Botelho, Fabio

    2014-01-01

    This book introduces the basic concepts of real and functional analysis. It presents the fundamentals of the calculus of variations, convex analysis, duality, and optimization that are necessary to develop applications to physics and engineering problems. The book includes introductory and advanced concepts in measure and integration, as well as an introduction to Sobolev spaces. The problems presented are nonlinear, with non-convex variational formulation. Notably, the primal global minima may not be attained in some situations, in which cases the solution of the dual problem corresponds to an appropriate weak cluster point of minimizing sequences for the primal one. Indeed, the dual approach more readily facilitates numerical computations for some of the selected models. While intended primarily for applied mathematicians, the text will also be of interest to engineers, physicists, and other researchers in related fields.

  14. Applying an orographic precipitation model to improve mass balance modeling of the Juneau Icefield, AK

    Science.gov (United States)

    Roth, A. C.; Hock, R.; Schuler, T.; Bieniek, P.; Aschwanden, A.

    2017-12-01

    Mass loss from glaciers in Southeast Alaska is expected to alter downstream ecological systems as runoff patterns change. To investigate these potential changes under future climate scenarios, distributed glacier mass balance modeling is required. However, the spatial resolution gap between global or regional climate models and the requirements of glacier mass balance modeling studies must be addressed first. We have used a linear theory (LT) of orographic precipitation model to downscale precipitation from both the Weather Research and Forecasting (WRF) model and ERA-Interim to the Juneau Icefield region over the period 1979-2013. This implementation of the LT model is a unique parameterization that relies on the specification of snow fall speed and rain fall speed as tuning parameters to calculate the cloud time delay, τ. We assessed the LT model results by considering winter precipitation, so that the effect of melt was minimized. The downscaled precipitation pattern produced by the LT model captures the orographic precipitation pattern absent from the coarse-resolution WRF and ERA-Interim precipitation fields. Observational data constraints limited our ability to determine a unique parameter combination and calibrate the LT model to glaciological observations. We therefore established a reference run of parameter values based on the literature and performed a sensitivity analysis of the LT model parameters, horizontal resolution, and climate input data on the average winter precipitation. The results of the reference run showed reasonable agreement with the available glaciological measurements. The precipitation pattern produced by the LT model was consistent regardless of parameter combination, horizontal resolution, and climate input data, but the precipitation amount varied strongly with these factors. Due to the consistency of the winter precipitation pattern and the uncertainty in precipitation amount, we suggest a precipitation index map approach to be used in combination with

  15. Didactical suggestion for a Dynamic Hybrid Intelligent e-Learning Environment (DHILE) applying the PENTHA ID Model

    Science.gov (United States)

    dall'Acqua, Luisa

    2011-08-01

    The teleology of our research is to propose a solution to the request of "innovative, creative teaching", proposing a methodology to educate creative Students in a society characterized by multiple reference points and hyper dynamic knowledge, continuously subject to reviews and discussions. We apply a multi-prospective Instructional Design Model (PENTHA ID Model), defined and developed by our research group, which adopts a hybrid pedagogical approach, consisting of elements of didactical connectivism intertwined with aspects of social constructivism and enactivism. The contribution proposes an e-course structure and approach, applying the theoretical design principles of the above mentioned ID Model, describing methods, techniques, technologies and assessment criteria for the definition of lesson modes in an e-course.

  16. Validation of radiological efficiency model applied for the crops/soils contaminated by radiocaesium

    International Nuclear Information System (INIS)

    Montero, M.; Vazquez, C.; Moraleda, M.; Claver, F.

    2000-01-01

    The differences in radiological efficiency observed when the same agrochemical interventions are applied to a range of agricultural scenarios contaminated by long-lived radionuclides have led radioecological studies to quantify the influence of local characteristics on soil-to-plant transfer. In the framework of decision support systems for post-accident environmental restoration, a semi-mechanistic approach has been developed to estimate the soil-to-plant transfer factor from the major properties underlying the bioavailability of radiocaesium in soils and the absorption capacity of the crop. The model describes, for each soil texture class, the effects of time and potassium (K) status on the transfer of radiocaesium to plants. The approach makes it possible to estimate the actual and the minimum available transfer and to calculate the optimum amendment ensuring the maximum radiological efficiency for a specific soil-crop combination. The parameterization and validation of the model, based on a database providing information about experimental transfer studies for a collection of soil-crop combinations, are presented. (Author) 4 refs

  17. A new approach for modeling composite materials

    Science.gov (United States)

    Alcaraz de la Osa, R.; Moreno, F.; Saiz, J. M.

    2013-03-01

    The increasing use of composite materials is due to the ability to tailor materials for special purposes, with applications evolving day by day. This is why predicting the properties of these systems from those of their constituents, or phases, has become so important. However, assigning macroscopic optical properties to these materials from the bulk properties of their constituents is not a straightforward task. In this research, we present a spectral analysis of typical three-dimensional random composite nanostructures using an extension of the discrete dipole approximation (the E-DDA code), comparing different approaches and emphasizing the influence of the optical properties of the constituents and their concentration. In particular, we propose a new approach that preserves the individual nature of the constituents while introducing a variation in the optical properties of each discrete element driven by the surrounding medium. The results obtained with this new approach compare more favorably with experiment than previous ones. We have also applied it to a non-conventional material composed of a metamaterial embedded in a dielectric matrix. Our version of the discrete dipole approximation code, the E-DDA code, has been formulated specifically to tackle this kind of problem, including materials with magnetic or tensor properties.

  18. Power to the People! Meta-algorithmic modelling in applied data science

    NARCIS (Netherlands)

    Spruit, M.; Jagesar, R.

    2016-01-01

    This position paper first defines the research field of applied data science at the intersection of domain expertise, data mining, and engineering capabilities, with particular attention to analytical applications. We then propose a meta-algorithmic approach for applied data science with societal

  19. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    Science.gov (United States)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique by an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained using the high-fidelity, evaluation-based single-fidelity optimization. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
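
    The efficient global optimization loop described in this record can be illustrated with a minimal sketch: a radial-basis-function surrogate stands in for the hybrid kriging/RBF model, and a simple distance-to-sample proxy replaces the kriging variance. The toy objective, sample counts, and the uncertainty proxy are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Expected improvement for minimization; sigma > 0 assumed."""
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy 1-D objective, standing in for an expensive high-fidelity solver
f = lambda x: (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

X = np.linspace(0.0, 1.0, 5)[:, None]       # initial sample sites
y = f(X).ravel()
surrogate = RBFInterpolator(X, y)            # global RBF surrogate model

# Crude uncertainty proxy: distance to the nearest existing sample
cand = np.linspace(0.0, 1.0, 201)[:, None]
mu = surrogate(cand)
sigma = np.min(np.abs(cand - X.T), axis=1) + 1e-9
ei = expected_improvement(mu, sigma, y.min())
x_next = cand[np.argmax(ei)]                 # next infill (additional sample)
```

In the full method the infill point would be evaluated with the high-fidelity model, appended to the sample set, and the loop repeated until convergence.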

  20. APPLYING PROFESSIONALLY ORIENTED PROBLEMS OF MATHEMATICAL MODELING IN TEACHING STUDENTS OF ENGINEERING DEPARTMENTS

    Directory of Open Access Journals (Sweden)

    Natal’ya Yur’evna Gorbunova

    2017-06-01

    Full Text Available We described several aspects of organizing student research work, as well as solving a number of mathematical modeling problems: professionally oriented, multi-stage, etc. We underlined the importance of their economic content. Samples of using such problems in teaching Mathematics at an agricultural university were given. Several questions connected with the selection of information material and the peculiarities of applying research problems were described. Purpose. The author aims to show the possibility and necessity of using professionally oriented problems of mathematical modeling in teaching Mathematics at an agricultural university. The subject of analysis is the inclusion of such problems in the educational process. Methodology. The main research method is the dialectical method of obtaining knowledge about approaches to the selection, writing and use of mathematical modeling and professionally oriented problems in the educational process; the methodology is the study of these methods of obtaining knowledge. Results. As a result of an analysis of the literature, students' opinions, and observation of students' work, and taking into account personal teaching experience, it is possible to conclude that using mathematical modeling problems is important, as it helps to systematize theoretical knowledge, apply it to practice, and raise students' motivation to study in the engineering sphere. Practical implications. The results of the research can be of interest to teachers of Mathematics preparing Bachelor and Master students of engineering departments of agricultural universities, both for theoretical research and for the modernization of study courses.

  1. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    Science.gov (United States)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced-and-damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely on the basis of simulated and/or experimentally measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the nonlinear element in the problem using a priori knowledge of its position.

  2. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging, to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of the phenotype clusters identified using each individual method.
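
    The core averaging step can be sketched as follows: two clustering models each yield posterior membership probabilities, and the two are combined with weights derived from an approximation to their marginal likelihoods. The membership matrices, BIC values, and BIC-based weighting below are hypothetical illustrations, not the study's actual data or weighting scheme.

```python
import numpy as np

# Hypothetical posterior membership probabilities (subjects x clusters)
# from two clustering models, e.g. latent class analysis (LCA) and
# grade of membership (GoM).
p_lca = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
p_gom = np.array([[0.7, 0.3], [0.1, 0.9], [0.4, 0.6]])

# Approximate model weights from BICs: w_m proportional to exp(-BIC_m / 2)
bic = np.array([1502.3, 1498.1])           # illustrative values
w = np.exp(-(bic - bic.min()) / 2)
w /= w.sum()

# Model-averaged membership probabilities and hard phenotype assignment
p_avg = w[0] * p_lca + w[1] * p_gom
labels = p_avg.argmax(axis=1)
```

The averaged probabilities (or the resulting hard assignments) would then feed into the downstream linkage analysis in place of either single clustering.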

  3. An Alternative Approach to the Extended Drude Model

    Science.gov (United States)

    Gantzler, N. J.; Dordevic, S. V.

    2018-05-01

    The original Drude model, proposed over a hundred years ago, is still used today for the analysis of optical properties of solids. Within this model, both the plasma frequency and quasiparticle scattering rate are constant, which makes the model rather inflexible. In order to circumvent this problem, the so-called extended Drude model was proposed, which allowed for the frequency dependence of both the quasiparticle scattering rate and the effective mass. In this work we will explore an alternative approach to the extended Drude model. Here, one also assumes that the quasiparticle scattering rate is frequency dependent; however, instead of the effective mass, the plasma frequency becomes frequency-dependent. This alternative model is applied to the high Tc superconductor Bi2Sr2CaCu2O8+δ (Bi2212) with Tc = 92 K, and the results are compared and contrasted with the ones obtained from the conventional extended Drude model. The results point to several advantages of this alternative approach to the extended Drude model.
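
    For reference, the conventional extended Drude form and the alternative sketched in this record can be written as follows (standard textbook notation in Gaussian units; the paper's exact conventions may differ):

```latex
% Conventional extended Drude model: frequency-dependent scattering rate
% 1/\tau(\omega) and mass enhancement \lambda(\omega)
\sigma(\omega) = \frac{\omega_p^2}{4\pi}\,
  \frac{1}{1/\tau(\omega) - i\,\omega\,[1 + \lambda(\omega)]},
\qquad
\frac{m^*(\omega)}{m} = 1 + \lambda(\omega)

% Alternative approach: fixed mass, frequency-dependent plasma frequency
\sigma(\omega) = \frac{\omega_p^2(\omega)}{4\pi}\,
  \frac{1}{1/\tau(\omega) - i\,\omega}
```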

  4. Remote sensing approach to structural modelling

    International Nuclear Information System (INIS)

    El Ghawaby, M.A.

    1989-01-01

    Remote sensing techniques are quite dependable tools in investigating geologic problems, specially those related to structural aspects. The Landsat imagery provides discrimination between rock units, detection of large scale structures as folds and faults, as well as small scale fabric elements such as foliation and banding. In order to fulfill the aim of geologic application of remote sensing, some essential surveying maps might be done from images prior to the structural interpretation: land-use, land-form drainage pattern, lithological unit and structural lineament maps. Afterwards, the field verification should lead to interpretation of a comprehensive structural model of the study area to apply for the target problem. To deduce such a model, there are two ways of analysis the interpreter may go through: the direct and the indirect methods. The direct one is needed in cases where the resources or the targets are controlled by an obvious or exposed structural element or pattern. The indirect way is necessary for areas where the target is governed by a complicated structural pattern. Some case histories of structural modelling methods applied successfully for exploration of radioactive minerals, iron deposits and groundwater aquifers in Egypt are presented. The progress in imagery, enhancement and integration of remote sensing data with the other geophysical and geochemical data allow a geologic interpretation to be carried out which become better than that achieved with either of the individual data sets. 9 refs

  5. Model Proposition for the Fiscal Policies Analysis Applied in Economic Field

    Directory of Open Access Journals (Sweden)

    Larisa Preda

    2007-05-01

    Full Text Available This paper presents a study of fiscal policy applied to economic development. Correlations between macroeconomic and fiscal indicators are the first step in our analysis. The next step is a proposal of a new model for fiscal and budgetary choices. This model is applied to data from the Romanian case.

  6. Hybrid sequencing approach applied to human fecal metagenomic clone libraries revealed clones with potential biotechnological applications.

    Science.gov (United States)

    Džunková, Mária; D'Auria, Giuseppe; Pérez-Villarroya, David; Moya, Andrés

    2012-01-01

    Natural environments represent an incredible source of microbial genetic diversity. Discovery of novel biomolecules involves biotechnological methods that often require the design and implementation of biochemical assays to screen clone libraries. However, when an assay is applied to thousands of clones, one may eventually end up with very few positive clones which, in most of the cases, have to be "domesticated" for downstream characterization and application, and this makes screening both laborious and expensive. The negative clones, which are not considered by the selected assay, may also have biotechnological potential; however, unfortunately they would remain unexplored. Knowledge of the clone sequences provides important clues about potential biotechnological application of the clones in the library; however, the sequencing of clones one-by-one would be very time-consuming and expensive. In this study, we characterized the first metagenomic clone library from the feces of a healthy human volunteer, using a method based on 454 pyrosequencing coupled with a clone-by-clone Sanger end-sequencing. Instead of whole individual clone sequencing, we sequenced 358 clones in a pool. The medium-large insert (7-15 kb) cloning strategy allowed us to assemble these clones correctly, and to assign the clone ends to maintain the link between the position of a living clone in the library and the annotated contig from the 454 assembly. Finally, we found several open reading frames (ORFs) with previously described potential medical application. The proposed approach allows planning ad-hoc biochemical assays for the clones of interest, and the appropriate sub-cloning strategy for gene expression in suitable vectors/hosts.

  7. Hybrid sequencing approach applied to human fecal metagenomic clone libraries revealed clones with potential biotechnological applications.

    Directory of Open Access Journals (Sweden)

    Mária Džunková

    Full Text Available Natural environments represent an incredible source of microbial genetic diversity. Discovery of novel biomolecules involves biotechnological methods that often require the design and implementation of biochemical assays to screen clone libraries. However, when an assay is applied to thousands of clones, one may eventually end up with very few positive clones which, in most of the cases, have to be "domesticated" for downstream characterization and application, and this makes screening both laborious and expensive. The negative clones, which are not considered by the selected assay, may also have biotechnological potential; however, unfortunately they would remain unexplored. Knowledge of the clone sequences provides important clues about potential biotechnological application of the clones in the library; however, the sequencing of clones one-by-one would be very time-consuming and expensive. In this study, we characterized the first metagenomic clone library from the feces of a healthy human volunteer, using a method based on 454 pyrosequencing coupled with a clone-by-clone Sanger end-sequencing. Instead of whole individual clone sequencing, we sequenced 358 clones in a pool. The medium-large insert (7-15 kb) cloning strategy allowed us to assemble these clones correctly, and to assign the clone ends to maintain the link between the position of a living clone in the library and the annotated contig from the 454 assembly. Finally, we found several open reading frames (ORFs) with previously described potential medical application. The proposed approach allows planning ad-hoc biochemical assays for the clones of interest, and the appropriate sub-cloning strategy for gene expression in suitable vectors/hosts.

  8. Features of applying systems approach for evaluating the reliability of cryogenic systems for special purposes

    Directory of Open Access Journals (Sweden)

    E. D. Chertov

    2016-01-01

    Full Text Available The analysis of cryogenic installations confirms an objective trend: the number of tasks solved by special-purpose systems keeps growing. One of the most important directions in the development of cryogenics is the creation of installations for obtaining air separation products, namely oxygen and nitrogen. Modern aviation complexes require these gases in large quantities, both in the gaseous and in the liquid state. The onboard gas systems used in aircraft of the Russian Federation are subdivided into: the oxygen system; the air (nitrogen) system; the neutral gas system; and the fire-protection system. The technological schemes of air separation installations (ADI) are largely determined by the compressed air pressure or, more generally, by the refrigerating cycle. In most ADI the working body of the refrigerating cycle is the air being separated, i.e. the technological and refrigerating cycles are integrated. By this principle, installations are classified as: low pressure; medium and high pressure; with expander; and with preliminary cooling. There is also a small number of ADI types in which the refrigerating and technological cycles are separated; these are installations with external cooling. To monitor the technical condition of the BRV hardware in real time and to estimate reliability indicators, it is proposed to use multi-agent technologies. The multi-agent approach is the most suitable basis for a decision support system (SPPR) for reliability assessment because it allows: the processing of information to be redistributed across the elements of the system, which increases overall performance; the problem of accumulating, storing and reusing knowledge to be solved, which significantly increases the efficiency of reliability assessment tasks; and human intervention in the functioning of the system to be considerably reduced, which saves the decision maker's time and does not require special skills in working with the system.

  9. Comparison of the Modeling Approach between Membrane Bioreactor and Conventional Activated Sludge Processes

    DEFF Research Database (Denmark)

    Jiang, Tao; Sin, Gürkan; Spanjers, Henri

    2009-01-01

    Activated sludge models (ASM) have been developed and largely applied in conventional activated sludge (CAS) systems. The applicability of ASM to model membrane bioreactors (MBR) and the differences in modeling approaches have not been studied in detail. A laboratory-scale MBR was modeled using ASM...

  10. On quantum approach to modeling of plasmon photovoltaic effect

    DEFF Research Database (Denmark)

    Kluczyk, Katarzyna; David, Christin; Jacak, Witold Aleksander

    2017-01-01

    Surface plasmons in metallic nanostructures including metallically nanomodified solar cells are conventionally studied and modeled by application of the Mie approach to plasmons or by the finite element solution of differential Maxwell equations with imposed boundary and material constraints (e...... to the semiconductor solar cell mediated by surface plasmons in metallic nanoparticles deposited on the top of the battery. In addition, short-ranged electron-electron interaction in metals is discussed in the framework of the semiclassical hydrodynamic model. The significance of the related quantum corrections......-aided photovoltaic phenomena. Quantum corrections considerably improve both the Mie and COMSOL approaches in this case. We present the semiclassical random phase approximation description of plasmons in metallic nanoparticles and apply the quantum Fermi golden rule scheme to assess the sunlight energy transfer

  11. Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Flow Problem

    Science.gov (United States)

    Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.

    2016-12-01

    Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low-dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed from "snapshots" drawn from the parameter-reduced posterior, and the discrete
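
    The POD step described above amounts to a truncated SVD of a snapshot matrix. A minimal sketch with synthetic snapshots follows; the dimensions, noise level, and energy threshold are illustrative assumptions, not values from the talk.

```python
import numpy as np

# Snapshot matrix: each column is one state vector ("snapshot") of the
# forward model, e.g. sampled from the parameter-reduced posterior.
rng = np.random.default_rng(0)
n_state, n_snap = 500, 40
modes_true = rng.standard_normal((n_state, 3))   # hidden 3-dim structure
coeffs = rng.standard_normal((3, n_snap))
S = modes_true @ coeffs + 1e-6 * rng.standard_normal((n_state, n_snap))

# POD basis = left singular vectors; truncate by captured "energy"
U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1      # smallest r capturing 99.9%
basis = U[:, :r]

# Project a full state onto the low-dimensional manifold and back
x = S[:, 0]
x_r = basis.T @ x          # reduced coordinates (r numbers)
x_rec = basis @ x_r        # reconstruction in the full state space
```

Forward solves in the reduced coordinates are then far cheaper than in the full discretized state space, at the cost of the (small) reconstruction error.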

  12. Crime Modeling using Spatial Regression Approach

    Science.gov (United States)

    Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.

    2018-01-01

    Criminal acts in Indonesia increase in both variety and quantity every year. Murder, rape, assault, vandalism, theft, fraud, fencing, and other cases make people feel unsafe. The risk of society's exposure to crime is measured by the number of cases reported to the police; the higher the number of reports, the higher the crime in the region. In this research, criminality in South Sulawesi, Indonesia, is modeled with society's exposure to the risk of crime as the dependent variable. Modeling follows an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables used are population density, the number of poor inhabitants, GDP per capita, unemployment, and the human development index (HDI). Based on the analysis using spatial regression, it can be shown that there are no spatial dependencies, in either lag or error form, in South Sulawesi.
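
    For reference, the two spatial specifications named above are conventionally written as follows, where W is the spatial weights matrix (standard textbook forms, not reproduced from the article):

```latex
% Spatial Autoregressive (SAR, spatial lag) model
y = \rho W y + X\beta + \varepsilon

% Spatial Error Model (SEM): spatial dependence in the errors
y = X\beta + u, \qquad u = \lambda W u + \varepsilon,
\qquad \varepsilon \sim N(0, \sigma^2 I)
```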

  13. Modeling and numerical simulation of the dynamics of nanoparticles applied to free and confined atmospheres

    International Nuclear Information System (INIS)

    Devilliers, Marion

    2012-01-01

    It is necessary to adapt existing models in order to simulate the number concentration, and correctly account for nanoparticles, in both free and confined atmospheres. A model of particle dynamics capable of following accurately the number as well as the mass concentration of particles, with an optimal calculation time, has been developed. The dynamics of particles depends on various processes, the most important ones being condensation/evaporation, followed by nucleation, coagulation, and deposition phenomena. These processes are well known for fine and coarse particles, but some additional phenomena must be taken into account for nanoparticles, such as the Kelvin effect for condensation/evaporation and the van der Waals forces for coagulation. This work focused first on condensation/evaporation, which is the most numerically challenging process. Particles were assumed to be spherical. The Kelvin effect has been taken into account, as it becomes significant for particles with diameters below 50 nm. The numerical schemes are based on a sectional approach: the particle size range is discretized into sections characterized by a representative diameter. A redistribution algorithm is used after condensation/evaporation has occurred, in order to keep the representative diameter between the boundaries of the section. The redistribution can be conducted in terms of mass or number. The key point in such algorithms is choosing which quantity has to be redistributed over the fixed sections. We have developed a hybrid algorithm that redistributes the relevant quantity for each section. This new approach has been tested and shows significant improvements with respect to most existing models over a wide range of conditions. The process of coagulation for nanoparticles has also been solved with a sectional approach. Coagulation is driven by the Brownian motion of nanoparticles. This approach is shown to be more efficient if the coagulation rate is evaluated
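
    The redistribution idea can be sketched as a two-moment split: a section whose representative diameter has grown by condensation is divided between the two bracketing fixed sections so that both particle number and particle mass (proportional to diameter cubed) are conserved. The section grid, diameters, and numbers below are illustrative, not the thesis's actual scheme.

```python
import numpy as np

# Fixed sections: representative diameters (m), log-spaced
d_rep = np.array([1e-8, 2e-8, 4e-8, 8e-8])

def redistribute(d_grown, number):
    """Split a grown section's number between the two fixed sections
    bracketing d_grown so that number and mass (~d^3) are both conserved."""
    i = np.searchsorted(d_rep, d_grown) - 1
    i = np.clip(i, 0, len(d_rep) - 2)
    m_lo, m_hi, m = d_rep[i]**3, d_rep[i + 1]**3, d_grown**3
    frac_hi = (m - m_lo) / (m_hi - m_lo)   # mass-weighted split fraction
    out = np.zeros_like(d_rep)
    out[i] = number * (1.0 - frac_hi)
    out[i + 1] = number * frac_hi
    return out

# A section grew by condensation from 2e-8 m to 3e-8 m
n_new = redistribute(3e-8, 1000.0)
```

Because the split fraction is mass-weighted, the two receiving sections carry exactly the grown section's number and mass, which is the property a hybrid number/mass scheme needs at each step.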

  14. Applying the competence-based approach to management in the aerospace industry

    OpenAIRE

    Arpentieva Mariam; Duvalina Olga; Braitseva Svetlana; Gorelova Irina; Rozhnova Anna

    2018-01-01

    Problems of management in aerospace manufacturing are similar to those observed in other sectors, the main one being the flattening of strategic management. The main reason lies in the attitude towards the human resources of the organization. The aerospace industry employs 250 thousand people, who need an individual approach. Such an individual approach can be offered by the competence-based approach to management. The purpose of the study is to prove the benefits of the competence-based approach to human resource ...

  15. New approach for validating the segmentation of 3D data applied to individual fibre extraction

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2017-01-01

    We present two approaches for validating the segmentation of 3D data. The first approach consists of comparing the amount of estimated material to a value provided by the manufacturer. The second approach consists of comparing the segmented results to those obtained from imaging modalities...

  16. Nuclear security assessment with Markov model approach

    International Nuclear Information System (INIS)

    Suzuki, Mitsutoshi; Terao, Norichika

    2013-01-01

    Nuclear security risk assessment with a Markov model based on random events is performed to explore an evaluation methodology for physical protection in nuclear facilities. Because security incidents are initiated by malicious and intentional acts, expert judgment and Bayes updating are used to estimate scenario and initiation likelihood, and it is assumed that the Markov model derived from a stochastic process can be applied to the incident sequence. Both an unauthorized intrusion as a Design Basis Threat (DBT) and a stand-off attack as beyond-DBT are assumed for hypothetical facilities, and the performance of physical protection and the mitigation and minimization of consequences are investigated to develop the assessment methodology in a semi-quantitative manner. It is shown that cooperation between the facility operator and the security authority is important in responding to beyond-DBT incidents. (author)
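
    A minimal sketch of an incident-sequence Markov chain follows; the states and transition probabilities below are entirely hypothetical, chosen only to show how the probability of reaching an absorbing "consequence" state can be propagated through a transition matrix.

```python
import numpy as np

# Hypothetical states of an intrusion sequence:
# 0 = normal operation, 1 = intrusion in progress,
# 2 = intrusion neutralized, 3 = sabotage completed (absorbing)
P = np.array([
    [0.990, 0.008, 0.000, 0.002],
    [0.000, 0.200, 0.750, 0.050],
    [1.000, 0.000, 0.000, 0.000],
    [0.000, 0.000, 0.000, 1.000],
])

# Probability of reaching the sabotage state within n steps,
# starting from normal operation
p0 = np.array([1.0, 0.0, 0.0, 0.0])
p = p0 @ np.linalg.matrix_power(P, 100)
p_sabotage = p[3]
```

In a semi-quantitative assessment, the transition probabilities would come from expert judgment and Bayes updating rather than from data, and the sensitivity of `p_sabotage` to each entry could be examined.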

  17. Setting conservation management thresholds using a novel participatory modeling approach.

    Science.gov (United States)

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. © 2015 The
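
    The aggregation step can be sketched with a weighted additive model: each management alternative receives a decision score that is the weighted sum of its estimated consequences across objectives. The alternatives, scores, and weights below are illustrative stand-ins for the workshop's elicited values.

```python
import numpy as np

# Hypothetical consequence table: rows = management alternatives,
# columns = objectives (ecological condition, visitor access, cost),
# scores normalized to [0, 1] where 1 = best
consequences = np.array([
    [0.9, 0.2, 0.1],   # close the reef to visitors
    [0.7, 0.6, 0.5],   # restrict access at low tide
    [0.5, 0.8, 0.8],   # signage and education only
    [0.2, 1.0, 1.0],   # no intervention
])
weights = np.array([0.5, 0.3, 0.2])   # elicited objective weights, sum to 1

decision_scores = consequences @ weights
best = decision_scores.argmax()       # preferred alternative under this scenario
```

Repeating the calculation for each ecological scenario shows where the preferred alternative switches, which is where a management threshold can be set.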

  18. Applying an Archetype-Based Approach to Electroencephalography/Event-Related Potential Experiments in the EEGBase Resource.

    Science.gov (United States)

    Papež, Václav; Mouček, Roman

    2017-01-01

    The purpose of this study is to investigate the feasibility of applying openEHR (an archetype-based approach for electronic health records representation) to modeling data stored in EEGBase, a portal for experimental electroencephalography/event-related potential (EEG/ERP) data management. The study evaluates re-usage of existing openEHR archetypes and proposes a set of new archetypes together with the openEHR templates covering the domain. The main goals of the study are to (i) link existing EEGBase data/metadata and openEHR archetype structures and (ii) propose a new openEHR archetype set describing the EEG/ERP domain since this set of archetypes currently does not exist in public repositories. The main methodology is based on the determination of the concepts obtained from EEGBase experimental data and metadata that are expressible structurally by the openEHR reference model and semantically by openEHR archetypes. In addition, templates as the third openEHR resource allow us to define constraints over archetypes. Clinical Knowledge Manager (CKM), a public openEHR archetype repository, was searched for the archetypes matching the determined concepts. According to the search results, the archetypes already existing in CKM were applied and the archetypes not existing in the CKM were newly developed. openEHR archetypes support linkage to external terminologies. To increase semantic interoperability of the new archetypes, binding with the existing odML electrophysiological terminology was assured. Further, to increase structural interoperability, also other current solutions besides EEGBase were considered during the development phase. Finally, a set of templates using the selected archetypes was created to meet EEGBase requirements. A set of eleven archetypes that encompassed the domain of experimental EEG/ERP measurements were identified. Of these, six were reused without changes, one was extended, and four were newly created. All archetypes were arranged in the

  19. Applying Catastrophe Theory to an Information-Processing Model of Problem Solving in Science Education

    Science.gov (United States)

    Stamovlasis, Dimitrios; Tsaparlis, Georgios

    2012-01-01

    In this study, we test an information-processing model (IPM) of problem solving in science education, namely the working memory overload model, by applying catastrophe theory. Changes in students' achievement were modeled as discontinuities within a cusp catastrophe model, where working memory capacity was implemented as asymmetry and the degree…

  20. A comparison of scope for growth (SFG) and dynamic energy budget (DEB) models applied to the blue mussel ( Mytilus edulis)

    Science.gov (United States)

    Filgueira, Ramón; Rosland, Rune; Grant, Jon

    2011-11-01

    Growth of Mytilus edulis was simulated using individual based models following both Scope For Growth (SFG) and Dynamic Energy Budget (DEB) approaches. These models were parameterized using independent studies and calibrated for each dataset by adjusting the half-saturation coefficient of the food ingestion function term, XK, a common parameter in both approaches related to feeding behavior. Auto-calibration was carried out using an optimization tool, which provides an objective way of tuning the model. Both approaches yielded similar performance, suggesting that although the basis for constructing the models is different, both can successfully reproduce M. edulis growth. The good performance of both models in different environments achieved by adjusting a single parameter, XK, highlights the potential of these models for (1) producing prospective analysis of mussel growth and (2) investigating mussel feeding response in different ecosystems. Finally, we emphasize that the convergence of two different modeling approaches via calibration of XK, indicates the importance of the feeding behavior and local trophic conditions for bivalve growth performance. Consequently, further investigations should be conducted to explore the relationship of XK to environmental variables and/or to the sophistication of the functional response to food availability with the final objective of creating a general model that can be applied to different ecosystems without the need for calibration.
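
    The calibration of XK can be sketched as a one-parameter least-squares fit of a saturating ingestion term f = X/(X + XK) to observed growth. The food and growth series below are synthetic, and the toy model omits the full SFG/DEB energy budget.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical observed food concentration (e.g. a chlorophyll-a proxy)
# and corresponding mussel growth response
food = np.array([2.0, 3.5, 1.0, 4.0, 2.5])
growth_obs = np.array([0.40, 0.54, 0.25, 0.57, 0.45])

def growth_model(xk):
    """Toy growth driven only by the ingestion term f = X / (X + XK)."""
    return food / (food + xk)

def sse(xk):
    """Sum of squared errors between modeled and observed growth."""
    return np.sum((growth_model(xk) - growth_obs) ** 2)

res = minimize_scalar(sse, bounds=(0.01, 20.0), method="bounded")
xk_hat = res.x   # calibrated half-saturation coefficient
```

An optimization routine like this plays the role of the auto-calibration tool: the same objective-function machinery works whether the forward model is this toy ingestion term or a full SFG or DEB simulation.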

  1. Regional LLRW processing alternatives applying the DOE REGINALT systems analysis model

    International Nuclear Information System (INIS)

    Beers, G.H.

    1987-01-01

    The DOE Low-Level Waste Management Program has developed a computer-based decision support system of models that may be used by nonprogrammers to evaluate a comprehensive approach to commercial low-level radioactive waste (LLRW) management. REGINALT (Regional Waste Management Alternatives Analysis Model) implementation is described as the model is applied to a hypothetical regional compact for the purpose of examining the technical and economic potential of two waste processing alternatives. Using waste from a typical regional compact, two specific regional waste processing centers are compared for feasibility. Example 1 assumes that a regional supercompaction facility is being developed for the region. Example 2 assumes that a regional facility with both supercompaction and incineration is specified. Both examples include identical disposal facilities, except that capacity may differ due to variation in volume reduction achieved. The two examples are compared with regard to volume reduction achieved, estimated occupational exposure for the processing facilities, and life cycle costs per generated unit waste. A base case also illustrates current disposal practices. The results of the comparisons are evaluated, and other steps, if necessary, for additional decision support are identified.

  2. Safety constraints applied to an adaptive Bayesian condition-based maintenance optimization model

    International Nuclear Information System (INIS)

    Flage, Roger; Coit, David W.; Luxhøj, James T.; Aven, Terje

    2012-01-01

    A model is described that determines an optimal inspection and maintenance scheme for a deteriorating unit subject to a stochastic degradation process with independent and stationary increments and with uncertain parameters. This model and the resulting maintenance plans offer some distinct benefits compared to prior research, because the uncertainty of the degradation process is accommodated by a Bayesian approach and two new safety constraints have been applied to the problem: (1) with a given subjective probability (degree of belief), the limiting relative frequency of one or more failures during a fixed time interval is bounded; or (2) the subjective probability of one or more failures during a fixed time interval is bounded. In the model, the parameter(s) of a condition-based inspection scheduling function and a preventive replacement threshold are jointly optimized upon each replacement and inspection so as to minimize the expected long-run cost per unit of time while also satisfying one of the specified safety constraints. A numerical example is included to illustrate the effect of imposing each of the two safety constraints.
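
The second safety constraint above can be illustrated with a small Monte Carlo sketch: estimate the subjective probability of one or more failures in a fixed interval, with parameter uncertainty folded in by sampling, and compare it against a bound. The degradation process, prior, and all numbers below are invented stand-ins, not the authors' model.

```python
import random

# Sketch of safety constraint (2): bound the subjective probability of one or
# more failures in a fixed interval. Degradation increments are exponential
# with an uncertain mean, drawn from a discrete "prior" - a crude stand-in
# for the Bayesian layer of the paper.

random.seed(42)

def prob_failure(horizon, fail_level, rate_prior, n_mc=20000):
    """Monte Carlo estimate of P(one or more failures within `horizon`)."""
    failures = 0
    for _ in range(n_mc):
        rate = random.choice(rate_prior)              # uncertain parameter
        level = 0.0
        for _ in range(horizon):
            level += random.expovariate(1.0 / rate)   # mean-`rate` increment
            if level >= fail_level:                   # degradation threshold
                failures += 1
                break
    return failures / n_mc

p = prob_failure(horizon=5, fail_level=10.0, rate_prior=[0.5, 1.0, 1.5])
SAFETY_BOUND = 0.2
print(p, p <= SAFETY_BOUND)
```

In the paper this check sits inside the cost optimization; here it is shown in isolation.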

  3. A satellite simulator for TRMM PR applied to climate model simulations

    Science.gov (United States)

    Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.

    2017-12-01

    Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR to evaluate simulations with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) performed within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for applying satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties, a comprehensive comparison with the sub-grid scale convective precipitation variability deduced from TRMM PR observations is carried out.
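
As a flavor of the forward-operator idea, the sketch below converts a drop size distribution into a radar reflectivity factor. The study's simulator applies full Mie theory; in the small-drop Rayleigh regime the reflectivity factor reduces to the sixth-moment sum used here, with a Marshall-Palmer distribution standing in for model microphysics.

```python
import math

# Illustrative radar forward operator (Rayleigh regime, not the Mie
# computation of the actual simulator): reflectivity factor
# Z = sum over drop sizes of N(D) * D^6 * dD, reported in dBZ.

def marshall_palmer(D, R):
    """Drop size distribution N(D) [m^-3 mm^-1] for rain rate R [mm/h]."""
    N0 = 8000.0              # intercept, m^-3 mm^-1
    lam = 4.1 * R ** -0.21   # slope, mm^-1
    return N0 * math.exp(-lam * D)

def reflectivity_dbz(R, dD=0.01, D_max=8.0):
    """Discretised sixth moment of the DSD; Z in mm^6 m^-3, returned in dBZ."""
    Z = sum(marshall_palmer(i * dD, R) * (i * dD) ** 6 * dD
            for i in range(1, int(D_max / dD)))
    return 10.0 * math.log10(Z)

print(round(reflectivity_dbz(5.0), 1))  # ~35 dBZ for 5 mm/h rain
```

Heavier rain shifts the distribution toward larger drops and thus higher reflectivity, which is why the sixth moment is so sensitive to the assumed sub-grid precipitation variability.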

  4. Applying Data Envelopment Analysis and Grey Model for the Productivity Evaluation of Vietnamese Agroforestry Industry

    Directory of Open Access Journals (Sweden)

    Chia-Nan Wang

    2016-11-01

    Full Text Available Agriculture and forestry play important roles in Vietnam, particularly as they contribute to the creation of food, conservation of forest resources, and improvement of soil fertility. Therefore, understanding the performances of relevant enterprises in this field contributes to the sustainable development of this country’s agroforestry industry. This research proposes a hybrid model, which includes a grey model (GM) and a Malmquist productivity index (MPI), to assess the performances of Vietnamese agroforestry enterprises over several time periods. After collecting the data of selected input and output variables for 10 Vietnamese agroforestry enterprises in the period 2011–2014, GM is used to forecast the future values of these input and output variables for the 10 agroforestry enterprises in 2015 and 2016. Following the results of GM, the MPI is used to measure the performance of these enterprises. The MPI scores showed that some enterprises will become more efficient, while others will become less efficient. The proposed model gives past–present–future insights for decision-makers to sustain agroforestry development in Vietnam. This hybrid approach can be applied to performance analysis of other industries as well.
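
The GM step of the hybrid model can be sketched as a standard GM(1,1) forecast: fit a first-order grey differential equation to the accumulated series and extrapolate it. The input series below is invented; the study forecast the selected input and output variables of each enterprise.

```python
import math

# GM(1,1) grey forecasting sketch. x0 is the raw series; x1 its accumulation.
# The model x0[k] = -a*z1[k-1] + b is fitted by least squares, then the
# accumulated series is extrapolated and differenced back.

def gm11_forecast(x0, steps=2):
    """Forecast `steps` future values of series x0 with a GM(1,1) model."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                # accumulation (AGO)
    z1 = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]  # background values
    # 2x2 normal equations for the least-squares parameters a, b.
    m = n - 1
    sz, sy = sum(z1), sum(x0[1:])
    szz = sum(z * z for z in z1)
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    def x1_hat(k):  # fitted accumulated value at 0-based index k
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

series = [112.0, 119.0, 127.0, 136.0]   # e.g. yearly output value, invented
print(gm11_forecast(series, steps=2))   # forecasts for the next two periods
```

GM(1,1) is designed for exactly this situation: short, roughly exponential series with too few points for conventional time-series models.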

  5. Regional LLRW [low-level radioactive waste] processing alternatives applying the DOE REGINALT systems analysis model

    International Nuclear Information System (INIS)

    Beers, G.H.

    1987-01-01

    The DOE Low-Level Waste Management Program has developed a computer-based decision support system of models that may be used by nonprogrammers to evaluate a comprehensive approach to commercial low-level radioactive waste (LLRW) management. REGINALT (Regional Waste Management Alternatives Analysis Model) implementation will be described as the model is applied to a hypothetical regional compact for the purpose of examining the technical and economic potential of two waste processing alternatives. Using waste from a typical regional compact, two specific regional waste processing centers will be compared for feasibility. Example 1 will assume that a regional supercompaction facility is being developed for the region. Example 2 will assume that a regional facility with both supercompaction and incineration is specified. Both examples will include identical disposal facilities, except that capacity may differ due to variation in volume reduction achieved. The two examples will be compared with regard to volume reduction achieved, estimated occupational exposure for the processing facilities, and life cycle costs per generated unit waste. A base case will also illustrate current disposal practices. The results of the comparisons will be evaluated, and other steps, if necessary, for additional decision support will be identified.

  6. Methodology for Applying Cyber Security Risk Evaluation from BN Model to PSA Model

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Jin Soo; Heo, Gyun Young [Kyung Hee University, Youngin (Korea, Republic of); Kang, Hyun Gook [KAIST, Dajeon (Korea, Republic of); Son, Han Seong [Joongbu University, Chubu (Korea, Republic of)

    2014-08-15

    There are several advantages to using digital equipment, such as cost, convenience, and availability, and it is inevitable that digital I and C equipment will replace analog. Nuclear facilities have already started applying digital systems to I and C. However, nuclear facilities must also weigh the difficulties of using digital equipment, which stem from the high level of safety required, irradiation embrittlement, and cyber security. Cyber security, one of the important concerns in using digital equipment, can affect the whole integrity of nuclear facilities. For instance, cyber-attacks such as the SQL Slammer worm, Stuxnet, Duqu, and Flame have struck nuclear facilities. The regulatory authorities have published many regulatory requirement documents, such as U.S. NRC Regulatory Guides 5.71 and 1.152, IAEA guide NSS-17, IEEE standards, and the KINS Regulatory Guide. One of the important problems of cyber security research for nuclear facilities is the difficulty of obtaining data through penetration experiments. Therefore, we build a cyber security risk evaluation model with a Bayesian network (BN) for the reactor protection system (RPS), one of the safety-critical systems, which trips the reactor when an accident occurs; a BN can be used to overcome these data problems. We propose a method to apply the BN cyber security model to the probabilistic safety assessment (PSA) model, which has long been used for safety assessment of the systems, structures, and components of a facility. The proposed method will be able to provide insight into safety as well as cyber risk to the facility.
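
How a BN-derived cyber risk might be folded into a PSA-style calculation can be sketched in a few lines. The structure and probability values below are invented for illustration; they are not taken from the paper or from any regulatory document.

```python
# Toy illustration: a Bayesian-network fragment yields a marginal probability
# of software compromise, which then enters a PSA fault tree as an extra
# basic event alongside a classical hardware failure. All numbers are invented.

def bn_compromise_prob(p_attack, p_success_given_attack):
    """BN fragment: marginal P(RPS software compromised)."""
    return p_attack * p_success_given_attack

def or_gate(*probs):
    """PSA OR gate assuming independent basic events."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

p_hw = 1e-4                                   # classical hardware basic event
p_cyber = bn_compromise_prob(p_attack=1e-2,   # hypothetical attack frequency
                             p_success_given_attack=0.05)
p_rps_fail = or_gate(p_hw, p_cyber)           # combined RPS failure probability
print(p_rps_fail)
```

The point of the paper's method is precisely that the cyber contribution (here `p_cyber`) comes from a BN rather than from a guessed constant.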

  7. Methodology for Applying Cyber Security Risk Evaluation from BN Model to PSA Model

    International Nuclear Information System (INIS)

    Shin, Jin Soo; Heo, Gyun Young; Kang, Hyun Gook; Son, Han Seong

    2014-01-01

    There are several advantages to using digital equipment, such as cost, convenience, and availability, and it is inevitable that digital I and C equipment will replace analog. Nuclear facilities have already started applying digital systems to I and C. However, nuclear facilities must also weigh the difficulties of using digital equipment, which stem from the high level of safety required, irradiation embrittlement, and cyber security. Cyber security, one of the important concerns in using digital equipment, can affect the whole integrity of nuclear facilities. For instance, cyber-attacks such as the SQL Slammer worm, Stuxnet, Duqu, and Flame have struck nuclear facilities. The regulatory authorities have published many regulatory requirement documents, such as U.S. NRC Regulatory Guides 5.71 and 1.152, IAEA guide NSS-17, IEEE standards, and the KINS Regulatory Guide. One of the important problems of cyber security research for nuclear facilities is the difficulty of obtaining data through penetration experiments. Therefore, we build a cyber security risk evaluation model with a Bayesian network (BN) for the reactor protection system (RPS), one of the safety-critical systems, which trips the reactor when an accident occurs; a BN can be used to overcome these data problems. We propose a method to apply the BN cyber security model to the probabilistic safety assessment (PSA) model, which has long been used for safety assessment of the systems, structures, and components of a facility. The proposed method will be able to provide insight into safety as well as cyber risk to the facility.

  8. RISCOM Applied to the Belgian Partnership Model: More and Deeper Levels

    International Nuclear Information System (INIS)

    Bombaerts, Gunter; Bovy, Michel; Laes, Erik

    2006-01-01

    Technology participation is not a new concept; it has been applied in different settings in different countries. In this article, we report a comparative analysis of the RISCOM model in Sweden and the Belgian partnership model for low- and intermediate-level short-lived nuclear waste. After a brief description of the partnerships and the RISCOM model, we apply the latter to the former and arrive at recommendations for the partnership model. The strength of the partnership approach is at the community level. In one of the villages, up to one percent of the population was motivated to discuss the nuts and bolts of the repository concept at least once a month for four years. The stress on the community level and the lack of a guardian, however, also constitute a weakness. First of all, if communities come into competition, the inter-community discussions can start resembling local politics and can become less transparent. Local actors are concerned actors, but actors at the national level are concerned as well: the local decisions influence how the waste will be transported, and they also determine an extra cost of electricity. We therefore recommend a broad (in terms of territory) public debate on the participation experiments, preceding and concluding the local participation process, in which this local process maintains an important position. The conclusions of our comparative analysis are: (1) The guardian of the process at the national level is missing. Since the Belgian nuclear regulator plays a controlling role after the process, we recommend a technology assessment institute at the federal level. (2) We state that stretching in the partnership model can happen more profoundly and recommend a 'counter institute' at the European level. The role of non-participative actors should be valued. (3) Recursion levels can be taken as a point of departure for discussion about the problem framing. If people accept them, there is no problem. If people clearly mention issues that are

  9. RISCOM Applied to the Belgian Partnership Model: More and Deeper Levels

    Energy Technology Data Exchange (ETDEWEB)

    Bombaerts, Gunter; Bovy, Michel; Laes, Erik [SCKCEN, Mol (Belgium). PISA

    2006-09-15

    Technology participation is not a new concept; it has been applied in different settings in different countries. In this article, we report a comparative analysis of the RISCOM model in Sweden and the Belgian partnership model for low- and intermediate-level short-lived nuclear waste. After a brief description of the partnerships and the RISCOM model, we apply the latter to the former and arrive at recommendations for the partnership model. The strength of the partnership approach is at the community level. In one of the villages, up to one percent of the population was motivated to discuss the nuts and bolts of the repository concept at least once a month for four years. The stress on the community level and the lack of a guardian, however, also constitute a weakness. First of all, if communities come into competition, the inter-community discussions can start resembling local politics and can become less transparent. Local actors are concerned actors, but actors at the national level are concerned as well: the local decisions influence how the waste will be transported, and they also determine an extra cost of electricity. We therefore recommend a broad (in terms of territory) public debate on the participation experiments, preceding and concluding the local participation process, in which this local process maintains an important position. The conclusions of our comparative analysis are: (1) The guardian of the process at the national level is missing. Since the Belgian nuclear regulator plays a controlling role after the process, we recommend a technology assessment institute at the federal level. (2) We state that stretching in the partnership model can happen more profoundly and recommend a 'counter institute' at the European level. The role of non-participative actors should be valued. (3) Recursion levels can be taken as a point of departure for discussion about the problem framing. If people accept them, there is no problem. If people clearly mention issues

  10. Quantum correlated cluster mean-field theory applied to the transverse Ising model.

    Science.gov (United States)

    Zimmer, F M; Schmidt, M; Maziero, Jonas

    2016-06-01

    Mean-field theory (MFT) is one of the main tools available for the analytical calculations entailed in investigations of many-body systems. Recently, there has been a surge of interest in ameliorating this kind of method, mainly with the aim of incorporating geometric and correlation properties of these systems. The correlated cluster MFT (CCMFT) is an improvement that succeeded quite well in doing that for classical spin systems. Nevertheless, even the CCMFT presents some deficiencies when applied to quantum systems. In this article, we address this issue by proposing the quantum CCMFT (QCCMFT), which, in contrast to its classical predecessor, uses general quantum states in its self-consistent mean-field equations. We apply the introduced QCCMFT to the transverse Ising model on honeycomb, square, and simple cubic lattices and obtain fairly good results both for the Curie temperature of the thermal phase transition and for the critical field of the quantum phase transition. Indeed, our results match those obtained via exact solutions, series expansions, or Monte Carlo simulations.
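
For orientation, the simpler single-site mean-field treatment of the transverse Ising model (not the cluster method proposed in the paper) already exhibits the field-driven loss of order via the self-consistency condition m = (zJm/L) tanh(L/T) with L = sqrt((zJm)^2 + G^2):

```python
import math

# Single-site mean-field sketch for the transverse Ising model with z
# neighbours, coupling J, transverse field G (units with k_B = 1). Solved by
# fixed-point iteration of the self-consistency equation.

def magnetization(T, G, J=1.0, z=4, tol=1e-10):
    """Longitudinal magnetization from the MFT self-consistency equation."""
    m = 0.5                                    # initial guess
    for _ in range(10000):
        L = math.hypot(z * J * m, G)           # effective field magnitude
        m_new = (z * J * m / L) * math.tanh(L / T) if L > 0 else 0.0
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# At weak transverse field the ordered solution survives; a strong field
# destroys the order even below the zero-field Curie temperature.
print(magnetization(T=1.0, G=0.5))   # nonzero m
print(magnetization(T=1.0, G=5.0))   # m ~ 0
```

In this crude approximation the zero-temperature critical field is G = zJ; the paper's cluster approach corrects such mean-field estimates toward the exact values.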

  11. Probabilistic approaches applied to damage and embrittlement of structural materials in nuclear power plants

    International Nuclear Information System (INIS)

    Vincent, L.

    2012-01-01

    The present study deals with the long-term mechanical behaviour and damage of structural materials in nuclear power plants. An experimental way is first followed to study the thermal fatigue of austenitic stainless steels with a focus on the effects of mean stress and bi-axiality. Furthermore, the measurement of displacement fields by Digital Image Correlation techniques has been successfully used to detect early crack initiation during high cycle fatigue tests. A probabilistic model based on the shielding zones surrounding existing cracks is proposed to describe the development of crack networks. A more numeric way is then followed to study the embrittlement consequences of the irradiation hardening of the bainitic steel constitutive of nuclear pressure vessels. A crystalline plasticity law, developed in agreement with lower scale results (Dislocation Dynamics), is introduced in a Finite Element code in order to run simulations on aggregates and obtain the distributions of the maximum principal stress inside a Representative Volume Element. These distributions are then used to improve the classical Local Approach to Fracture which estimates the probability for a microstructural defect to be loaded up to a critical level. (author) [fr

  12. An applied artificial intelligence approach towards assessing building performance simulation tools

    Energy Technology Data Exchange (ETDEWEB)

    Yezioro, Abraham [Faculty of Architecture and Town Planning, Technion IIT (Israel); Dong, Bing [Center for Building Performance and Diagnostics, School of Architecture, Carnegie Mellon University (United States); Leite, Fernanda [Department of Civil and Environmental Engineering, Carnegie Mellon University (United States)

    2008-07-01

    With the development of modern computer technology, a large number of building energy simulation tools are available on the market. When choosing which simulation tool to use in a project, users must consider the tool's accuracy and reliability, in light of the building information they have at hand, which will serve as input for the tool. This paper presents an approach for assessing building performance simulation results against actual measurements, using artificial neural networks (ANN) for predicting building energy performance. Training and testing of the ANN were carried out with energy consumption data acquired over 1 week in the case building, called the Solar House. The predicted results show a good fit with the mathematical model, with a mean absolute error of 0.9%. Moreover, four building simulation tools were selected in this study in order to compare their results with the ANN-predicted energy consumption: Energy-10, the Green Building Studio web tool, eQuest and EnergyPlus. The results showed that the more detailed simulation tools have the best simulation performance in terms of heating and cooling electricity consumption, within 3% mean absolute error. (author)

  13. Applying the methodology of Design of Experiments to stability studies: a Partial Least Squares approach for evaluation of drug stability.

    Science.gov (United States)

    Jordan, Nika; Zakrajšek, Jure; Bohanec, Simona; Roškar, Robert; Grabnar, Iztok

    2018-05-01

    The aim of the present research is to show that the methodology of Design of Experiments can be applied to stability data evaluation, as stability studies can be seen as multi-factor and multi-level experimental designs. Linear regression analysis is the usual approach for analyzing stability data, but multivariate statistical methods could also be used to assess drug stability during the development phase. Data from a stability study of a pharmaceutical product with hydrochlorothiazide (HCTZ) as an unstable drug substance was used as a case example in this paper. The design space of the stability study was modeled using Umetrics MODDE 10.1 software. We showed that a Partial Least Squares model could be used for a multi-dimensional presentation of all data generated in a stability study and for determination of the relationships among factors that influence drug stability. It might also be used for stability predictions and potentially for optimizing the extent of stability testing needed to determine shelf life and storage conditions, which would be time- and cost-effective for the pharmaceutical industry.
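
The classical regression baseline that the multivariate approach is contrasted with can be sketched directly: ordinary least squares on log-transformed assay data, giving a first-order degradation rate and a t90 shelf-life estimate. The data points below are invented, not the HCTZ dataset of the study.

```python
import math

# Baseline stability evaluation sketch: fit first-order degradation
# ln(C) = ln(C0) - k*t by least squares, then estimate the time to reach
# 90% of label claim (t90). Assay values are hypothetical.

def fit_first_order(times, assays):
    """OLS on ln(assay) vs time; returns (C0, k) for C(t) = C0 * exp(-k*t)."""
    ys = [math.log(c) for c in assays]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    return math.exp(ybar - slope * tbar), -slope

months = [0, 3, 6, 9, 12]
assay = [100.0, 98.4, 96.9, 95.4, 93.9]   # % label claim, invented
C0, k = fit_first_order(months, assay)
t90 = math.log(C0 / 90.0) / k             # shelf life under first-order decay
print(round(k, 4), round(t90, 1))
```

A PLS treatment would instead regress stability responses on the full factor matrix (time, temperature, humidity, packaging, ...) at once.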

  14. Method developments approaches in supercritical fluid chromatography applied to the analysis of cosmetics.

    Science.gov (United States)

    Lesellier, E; Mith, D; Dubrulle, I

    2015-12-04

    Analyses of complex samples of cosmetics, such as creams or lotions, are generally achieved by HPLC. These analyses often require multistep gradients, due to the presence of compounds with a large range of polarity. For instance, the bioactive compounds may be polar, while the matrix contains lipid components that are rather non-polar, since cosmetic formulations are usually oil-water emulsions. Supercritical fluid chromatography (SFC) uses mobile phases composed of carbon dioxide and organic co-solvents, allowing for good solubility of both the active compounds and the matrix excipients. Moreover, the classical and well-known properties of these mobile phases yield fast analyses and ensure rapid method development. However, given the large number of stationary phases available for SFC and the various additional parameters acting on both retention and separation factors (co-solvent nature and percentage, temperature, backpressure, flow rate, column dimensions and particle size), a simplified approach can be followed to ensure fast method development. First, suitable stationary phases should be carefully selected for an initial screening, and then the other operating parameters can be limited to the co-solvent nature and percentage, keeping the oven temperature and backpressure constant. To describe simple method development guidelines in SFC, three sample applications are discussed in this paper: UV filters (sunscreens) in sunscreen cream, glyceryl caprylate in eye liner and caffeine in eye serum. Firstly, five stationary phases (ACQUITY UPC(2)) are screened with isocratic elution conditions (10% methanol in carbon dioxide). Complementarity of the stationary phases is assessed based on our spider diagram classification, which compares a large number of stationary phases based on five molecular interactions. Secondly, the one or two best stationary phases are retained for further optimization of mobile phase composition, with isocratic elution conditions or, when

  15. Modeling in transport phenomena a conceptual approach

    CERN Document Server

    Tosun, Ismail

    2007-01-01

    Modeling in Transport Phenomena, Second Edition presents and clearly explains, with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach between analysis and synthesis is presented, so that students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, for students to easily understand and follow the material. There is a strong incentive in science and engineering to

  16. Nuclear physics for applications. A model approach

    International Nuclear Information System (INIS)

    Prussin, S.G.

    2007-01-01

    Written by a researcher and teacher with experience at top institutes in the US and Europe, this textbook provides advanced undergraduates minoring in physics with working knowledge of the principles of nuclear physics. Simplifying models and approaches reveal the essence of the principles involved, with the mathematical and quantum mechanical background integrated in the text where it is needed and not relegated to the appendices. The practicality of the book is enhanced by numerous end-of-chapter problems and solutions available on the Wiley homepage. (orig.)

  17. Practical Findings from Applying the PSD Model for Evaluating Software Design Specifications

    Science.gov (United States)

    Räisänen, Teppo; Lehto, Tuomas; Oinas-Kukkonen, Harri

    This paper presents practical findings from applying the PSD model to evaluating the support for persuasive features in software design specifications for a mobile Internet device. On the one hand, our experiences suggest that the PSD model fits relatively well for evaluating design specifications. On the other hand, the model would benefit from more specific heuristics for evaluating each technique to avoid unnecessary subjectivity. Better distinction between the design principles in the social support category would also make the model easier to use. Practitioners who have no theoretical background can apply the PSD model to increase the persuasiveness of the systems they design. The greatest benefit of the PSD model for researchers designing new systems may be achieved when it is applied together with a sound theory, such as the Elaboration Likelihood Model. Using the ELM together with the PSD model, one may increase the chances for attitude change.

  18. Convex models and probabilistic approach of nonlinear fatigue failure

    International Nuclear Information System (INIS)

    Qiu Zhiping; Lin Qiang; Wang Xiaojun

    2008-01-01

    This paper is concerned with the nonlinear fatigue failure problem with uncertainties in structural systems. In the present study, in order to solve the nonlinear problem with convex models, the theory of ellipsoidal algebra, together with ideas from interval analysis, is applied. By the inclusion-monotonic property of ellipsoidal functions, the nonlinear fatigue failure problem with uncertainties can be solved. A numerical example of a 25-bar truss structure is given to illustrate the efficiency of the presented method in comparison with the probabilistic approach.
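
The inclusion-monotonic set arithmetic underlying the convex-model approach can be illustrated in one dimension, where an ellipsoid reduces to an interval. The fatigue rule and numbers below are invented; the point is that interval operations yield a guaranteed enclosure of the response under bounded uncertainty.

```python
# Minimal interval-arithmetic sketch: propagate an uncertain stress amplitude
# through a Basquin-style life rule N = C / S^b. Material constants and the
# load interval are invented for illustration.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __pow__(self, n):
        # inclusion-monotonic for positive intervals, as used here
        return Interval(self.lo ** n, self.hi ** n)
    def __rtruediv__(self, c):
        # c / [lo, hi] for a positive constant and positive interval
        return Interval(c / self.hi, c / self.lo)
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

S = Interval(80.0, 100.0)   # MPa, uncertain stress amplitude (bounded, not random)
C, b = 1e12, 3              # invented Basquin-style constants
N = C / S ** b              # guaranteed enclosure of cycles to failure
print(N)                    # [1000000.0, 1953125.0]
```

Unlike the probabilistic approach, no distribution is assumed for S; only its bounds are needed, which is the appeal of convex models when data are scarce.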

  19. Algebraic approach to small-world network models

    Science.gov (United States)

    Rudolph-Lilith, Michelle; Muller, Lyle E.

    2014-01-01

    We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid nonasymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
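
The quantities the paper treats analytically can be checked numerically on the p = 0 substrate of a Watts-Strogatz graph, i.e. the regular ring lattice before rewiring (the undirected case is used here for brevity):

```python
# Build the adjacency matrix of a ring lattice (each node linked to its k
# nearest neighbours on each side) and compute its clustering coefficient,
# to compare against the known closed form 3(k-1) / (2(2k-1)).

def ring_lattice_adjacency(n, k):
    """Adjacency matrix of the undirected Watts-Strogatz substrate."""
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for d in range(1, k + 1):
            A[i][(i + d) % n] = A[i][(i - d) % n] = 1
    return A

def clustering_coefficient(A):
    """Average over nodes of (links among neighbours) / (possible links)."""
    n = len(A)
    total = 0.0
    for i in range(n):
        nbrs = [j for j in range(n) if A[i][j]]
        deg = len(nbrs)
        links = sum(A[u][v] for u in nbrs for v in nbrs) / 2
        total += links / (deg * (deg - 1) / 2)
    return total / n

A = ring_lattice_adjacency(n=20, k=3)
C = clustering_coefficient(A)
print(round(C, 4))  # matches the closed form 3(k-1)/(2(2k-1)) = 0.6
```

The paper's contribution is to carry such computations out exactly from an algebraic expression of the adjacency matrix, including the directed, rewired case, rather than numerically as above.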

  20. Modelling and simulating retail management practices: a first approach

    OpenAIRE

    Siebers, Peer-Olaf; Aickelin, Uwe; Celia, Helen; Clegg, Chris

    2010-01-01

    Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop-floor and retail performance. Despite the fact we are working within a relatively novel and complex domain, it is clear that using an agent-based approach offers great potential for improving organizati...

  1. A coordination chemistry approach for modeling trace element adsorption

    International Nuclear Information System (INIS)

    Bourg, A.C.M.

    1986-01-01

    The traditional distribution coefficient, Kd, is highly dependent on the water chemistry and the surface properties of the geological system being studied and is therefore quite inappropriate for use in predictive models. Adsorption, one of the many processes included in Kd values, is described here using a coordination chemistry approach. The concept of adsorption of cationic trace elements by solid hydrous oxides can be applied to natural solids. The adsorption process is thus understood in terms of a classical complexation leading to the formation of surface (heterogeneous) ligands. Applications of this concept to some freshwater, estuarine and marine environments are discussed. (author)
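
The pH dependence that makes a constant Kd inappropriate can be made concrete with a one-site surface complexation sketch. The equilibrium constant and site concentration below are invented for illustration:

```python
# One-site surface complexation:  >SOH + M2+ <-> >SOM+ + H+
# Mass action gives  [>SOM+]/[M2+] = K * [>SOH] / [H+],
# so the apparent distribution of the trace metal shifts with pH, which a
# single Kd value cannot capture. logK and site concentration are invented.

def fraction_adsorbed(pH, logK=-2.0, site_conc=1e-3):
    """Fraction of trace metal bound to surface sites (sites in mol/L)."""
    H = 10.0 ** -pH
    ratio = (10.0 ** logK) * site_conc / H   # adsorbed / dissolved
    return ratio / (1.0 + ratio)

for pH in (4, 6, 8):
    print(pH, round(fraction_adsorbed(pH), 3))
```

The output traces the familiar adsorption edge: almost no uptake in acidic water, nearly complete uptake two pH units higher, exactly the behaviour a fixed Kd would miss.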

  2. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely more applied in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions, on model outcomes and thus ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  3. A comparison of economic evaluation models as applied to geothermal energy technology

    Science.gov (United States)

    Ziman, G. M.; Rosenberg, L. S.

    1983-01-01

    Several cost estimation and financial cash flow models have been applied to a series of geothermal case studies. In order to draw conclusions about the relative performance and applicability of these models to geothermal projects, the consistency of results was assessed. The model outputs of principal interest in this study were net present value, internal rate of return, and levelized breakeven price. The models used were VENVAL, a venture analysis model; the Geothermal Probabilistic Cost Model (GPC Model); the Alternative Power Systems Economic Analysis Model (APSEAM); the Geothermal Loan Guarantee Cash Flow Model (GCFM); and the GEOCOST and GEOCITY geothermal models. The case studies to which the models were applied include a geothermal reservoir at Heber, CA; a geothermal electric power plant to be located at the Heber site; an alcohol fuels production facility to be built at Raft River, ID; and a direct-use, district heating system in Susanville, CA.
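
The model outputs compared in the study are net present value, internal rate of return, and levelized breakeven price. Textbook versions of the first two, applied to an invented geothermal-style cash flow, might look like:

```python
# Minimal NPV and IRR calculations for a simple project cash flow: one
# up-front capital cost followed by equal annual net revenues. The numbers
# are invented and unrelated to the case studies above.

def npv(rate, cash_flows):
    """Net present value; cash_flows[t] occurs at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisection on NPV(rate) = 0; assumes one sign change in the flows."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

flows = [-1000.0] + [180.0] * 10   # capex, then 10 years of net revenue
print(round(npv(0.08, flows), 1))
print(round(irr(flows), 4))
```

A levelized breakeven price is the third output type: the constant unit price at which the NPV of revenues exactly offsets the NPV of costs.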

  4. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    Full Text Available The paper deals with some current problems of modeling the dynamics of the development of the individual's subject features. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates the qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals the inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage, and an algorithm for developing an integrative model for it, are offered. The suggested conclusions might be of use for further theoretical research, analyses of educational practices, and realistic prediction of pedagogical phenomena.

  5. Applying the competence-based approach to management in the aerospace industry

    Directory of Open Access Journals (Sweden)

    Arpentieva Mariam

    2018-01-01

    Full Text Available Problems of management in aerospace manufacturing are similar to those observed in other sectors, chief among them the flattening of strategic management. The main reason lies in the attitude towards the human resources of the organization. The aerospace industry employs 250 thousand people, who need an individual approach, and such an individual approach can be offered by the competence-based approach to management. The purpose of the study is to demonstrate the benefits of the competence-based approach to human resource management in the context of strategic management of an aerospace organization. To achieve this goal, the method of comparative analysis is applied. The article compares two approaches to personnel management. The transition to competence-based human resource management means (a) a different understanding of the object of management; (b) involvement of the employee's «knowledge – skills – abilities» in all functions of human resource management; and (c) a change in the approach to strategic management of the aerospace industry.

  6. A model to Estimate the Implicit Values of Housing Attributes by Applying the Hedonic Pricing Method

    Directory of Open Access Journals (Sweden)

    TD Randeniya

    2017-05-01

    Full Text Available Many scholars have focused on location-based attributes rather than non-location factors in decision making on land prices, though newer studies have identified the importance of non-location attributes alongside location factors. Many studies suggest that numerous attributes affect housing price; since the attributes that are involved and dominant in a particular case differ from one situation to another, there cannot be an exact list of attributes. Yet the factors that determine housing price, their relationships, and their level of influence are poorly understood in planning and property development in the context of Sri Lanka. This study addresses what makes householders decide on housing price and applies the hedonic pricing approach to estimate the implicit price of housing attributes in the Sri Lankan context. A sample of fifty (50) single-house transactions in the Maharagama urban neighborhood area is used to illustrate the applicability of the hedonic pricing model. As a methodology, correlation analysis was carried out to study the degree of relationship between housing price and the independent variables, and from the attributes that correlate with housing price the study identified the most significant ones. A hedonic house price model derived from multiple linear regression analysis, incorporating these attributes, was then developed to estimate future house prices. The findings reveal that six attributes (design type of the house, distance to the local road, quality of infrastructure, garden size, number of bedrooms, and property age) contribute to estimating the implicit value of housing property. The model developed can be used to identify implicit values of houses located in urban neighborhood areas of Sri Lanka.
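    The estimation step described above, a hedonic price model fitted by multiple linear regression, can be sketched as follows. The data are synthetic; the attribute list mirrors the six attributes named in the abstract, and the fitted coefficients play the role of the implicit (marginal) prices.

```python
import numpy as np

# Hedonic pricing sketch: regress house price on the six attributes named in
# the study. All data are synthetic; the fitted coefficients stand in for the
# implicit (marginal) prices of the attributes.

rng = np.random.default_rng(0)
n = 50                                    # the study used 50 transactions
X = np.column_stack([
    rng.integers(0, 2, n),                # design type (0/1 dummy)
    rng.uniform(10, 500, n),              # distance to local road (m)
    rng.uniform(1, 5, n),                 # infrastructure quality score
    rng.uniform(50, 800, n),              # garden size (m^2)
    rng.integers(1, 6, n),                # number of bedrooms
    rng.uniform(0, 40, n),                # property age (years)
])
true_beta = np.array([15.0, -0.02, 8.0, 0.05, 12.0, -0.8])  # invented prices
price = 100 + X @ true_beta + rng.normal(0, 2.0, n)

A = np.column_stack([np.ones(n), X])      # add intercept column
beta_hat, *_ = np.linalg.lstsq(A, price, rcond=None)
implicit_prices = dict(zip(
    ["intercept", "design", "road_dist", "infra", "garden", "bedrooms", "age"],
    np.round(beta_hat, 3)))
print(implicit_prices)
```

    Each coefficient is read as the change in price for a one-unit change in the attribute, holding the others fixed, which is exactly how the study interprets the implicit values.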

  7. Surface complexation modelling applied to the sorption of nickel on silica

    International Nuclear Information System (INIS)

    Olin, M.

    1995-10-01

    The report presents the modelling, based on a mechanistic approach, of a sorption experiment. The system chosen for the experiments (nickel + silica) is modelled using literature values for some parameters, with the remainder fitted to existing experimental results. All calculations are performed with HYDRAQL, a code designed especially for surface complexation modelling. Almost all the calculations use the Triple-Layer Model (TLM) approach, which proved sufficiently flexible for the silica system. The report includes a short description of mechanistic sorption models, input data, experimental results and modelling results (mostly graphical presentations). (13 refs., 40 figs., 4 tabs.)
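    The mass-action core of such a sorption model can be illustrated without HYDRAQL. The sketch below solves a single-site nickel-silanol complexation equilibrium by bisection; it deliberately omits the electrostatic correction terms of the full Triple-Layer Model, and the equilibrium constant, site density and concentrations are made-up values.

```python
# Minimal mass-action sketch of nickel sorption on a single silanol site:
#     >SOH + Ni2+  <=>  >SONi+ + H+      (equilibrium constant K)
# This omits the electrostatic (triple-layer) corrections used in HYDRAQL;
# K, site density and total nickel below are invented for illustration.

def sorbed_fraction(pH, K=10**-4.0, sites=1e-4, ni_total=1e-5):
    """Fraction of total Ni bound, solving the Ni mass balance by bisection."""
    h = 10.0 ** (-pH)

    def residual(ni_free):
        # Site balance: sites = [>SOH] + [>SONi], with
        # [>SONi] = K * [>SOH] * [Ni2+] / [H+]
        soh = sites / (1.0 + K * ni_free / h)
        soni = K * soh * ni_free / h
        return ni_free + soni - ni_total   # Ni mass balance (monotone in ni_free)

    lo, hi = 0.0, ni_total
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    ni_free = 0.5 * (lo + hi)
    return 1.0 - ni_free / ni_total

for pH in (4.0, 6.0, 8.0):
    print(pH, round(sorbed_fraction(pH), 3))
```

    Even this stripped-down model reproduces the qualitative pH edge typical of metal sorption on oxides: the sorbed fraction rises steeply with pH because protons compete with nickel for the surface sites.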

  8. A Bayesian Approach for Structural Learning with Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cen Li

    2002-01-01

    Full Text Available Hidden Markov Models (HMMs) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on learning the parameter values of the model to fit the given data sequences. However, in other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism be available for automatically deriving the model structure from the data. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.
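    Two of the ingredients described in the abstract, K-means initialization of the emission parameters and Bayesian model selection over candidate state counts, can be sketched as follows. For brevity the transition probabilities are left uniform and BIC stands in for the full Bayesian score, so this illustrates the selection machinery rather than the authors' exact procedure.

```python
import numpy as np

# Sketch: pick the number of hidden states of a Gaussian-emission HMM by a
# Bayesian-style score (BIC). Emission means are initialized with a simple
# 1-D K-means; the likelihood uses the forward algorithm with uniform
# initial/transition probabilities (an assumption made for brevity, in place
# of full Baum-Welch training).

def kmeans_1d(x, k, iters=50):
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial centers
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

def log_likelihood(x, centers, sigma=1.0):
    """Forward algorithm with uniform initial and transition probabilities."""
    k = len(centers)
    log_emis = (-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2
                - np.log(sigma * np.sqrt(2 * np.pi)))
    alpha = np.full(k, np.log(1.0 / k)) + log_emis[0]
    for t in range(1, len(x)):
        m = alpha.max()                         # logsumexp over previous states
        alpha = m + np.log(np.exp(alpha - m).sum()) - np.log(k) + log_emis[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

def bic(x, k):
    centers = kmeans_1d(x, k)
    n_params = k + k * k      # emission means + (dense) transition matrix
    return -2 * log_likelihood(x, centers) + n_params * np.log(len(x))

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(6, 1, 150)])  # 2 regimes
best_k = min((bic(x, k), k) for k in (1, 2, 3, 4))[1]
print(best_k)
```

    On this artificial two-regime sequence the penalized score favors two states: extra states buy too little likelihood to offset the BIC penalty, which is the same trade-off the paper's Bayesian selection formalizes.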

  9. The redshift distribution of cosmological samples: a forward modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina, E-mail: joerg.herbel@phys.ethz.ch, E-mail: tomasz.kacprzak@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch, E-mail: claudio.bruderer@phys.ethz.ch, E-mail: andrina.nicola@phys.ethz.ch [Institute for Astronomy, Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, 8093 Zürich (Switzerland)

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
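    Stripped of the image-simulation layer, the ABC loop described above follows a simple pattern: draw population parameters from a prior, forward-simulate, accept parameter sets whose summary statistics match the data, and read n(z) off the accepted simulations. The one-parameter toy "galaxy population" below is invented purely to illustrate that pattern, not to mimic UFig or MCCL.

```python
import numpy as np

# Toy ABC rejection loop: draw a population parameter from a prior, forward-
# simulate a redshift catalogue, accept draws whose summary statistics match
# the "observed" catalogue, and build n(z) from the accepted simulations.
# The Gamma-distributed redshift model is invented for illustration.

rng = np.random.default_rng(2)

def simulate_redshifts(alpha, n_gal=2000):
    """Hypothetical population model: z ~ Gamma(shape=alpha, scale=0.5)."""
    return rng.gamma(alpha, 0.5, n_gal)

z_obs = simulate_redshifts(alpha=2.0)          # pretend this is the data
obs_summary = np.array([z_obs.mean(), z_obs.std()])

accepted = []
for _ in range(2000):
    alpha = rng.uniform(0.5, 5.0)              # prior draw
    z_sim = simulate_redshifts(alpha)
    sim_summary = np.array([z_sim.mean(), z_sim.std()])
    if np.linalg.norm(sim_summary - obs_summary) < 0.05:   # ABC tolerance
        accepted.append(alpha)

accepted = np.array(accepted)
print(len(accepted), round(accepted.mean(), 2))

# Each accepted draw implies an n(z); a posterior-averaged histogram:
n_z, edges = np.histogram(
    np.concatenate([simulate_redshifts(a) for a in accepted]),
    bins=20, density=True)
```

    The spread of the accepted parameter draws is what propagates into the uncertainty on n(z), mirroring how the acceptable models in the paper yield a set of redshift distributions rather than a single estimate.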

  10. The redshift distribution of cosmological samples: a forward modeling approach

    Science.gov (United States)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  11. The redshift distribution of cosmological samples: a forward modeling approach

    International Nuclear Information System (INIS)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-01-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  12. School Food Environment Promotion Program: Applying the Socio-ecological Approach

    Directory of Open Access Journals (Sweden)

    Fatemeh Bakhtari Aghdam

    2018-01-01

    Full Text Available Background: Although healthy nutrition recommendations have been offered in recent decades, research shows an increasing rate of unhealthy junk food consumption among primary school children. The aim of this study was to investigate the effects of a health promotion intervention on school food buffets and on changes in the nutritional behaviors of the students. Materials and Methods: In this quasi-experimental interventional study, eight schools in Tabriz city, Iran, agreed to participate. The schools were randomly selected and divided into an intervention and a control group, and a pretest was given to both groups. A four-week interventional program based on the socio-ecological model was conducted in the eight randomly selected schools. A checklist was designed for the assessment of food items available at the schools' buffets, and a 60-item semi-quantitative food frequency questionnaire (FFQ) was used to assess the rate of food consumption and energy intake. Data were analyzed using the Wilcoxon, Mann-Whitney U and Chi-square tests. Results: The findings revealed a reduction in junk food consumption in the intervention group between before and after the intervention, except for sweets. The number of junk foods provided in the school buffets was reduced in the intervention group. After the intervention, significant decreases were found in the intervention group in the intake of energy, fat and saturated fatty acids compared to the control group (p = 0.00). Conclusion: In order to design effective school food environment promotion programs, school healthcare providers should consider multifaceted approaches.

  13. A novel approach to pipeline tensioner modeling

    Energy Technology Data Exchange (ETDEWEB)

    O'Grady, Robert; Ilie, Daniel; Lane, Michael [MCS Software Division, Galway (Ireland)

    2009-07-01

    As subsea pipeline developments continue to move into deep and ultra-deep water locations, there is an increasing need for accurate prediction of expected pipeline fatigue life. A significant factor that must be considered as part of this process is the fatigue damage sustained by the pipeline during installation. The magnitude of this installation-related damage is governed by a number of different agents, one of which is the dynamic behavior of the tensioner systems during pipe-laying operations. There are a variety of traditional finite element methods for representing dynamic tensioner behavior. These existing methods, while basic in nature, have been proven to provide adequate forecasts of the dynamic variation in typical installation parameters such as top tension and sagbend/overbend strain. However, due to the simplicity of these current approaches, some of them tend to over-estimate the frequency of tensioner pay out/in under dynamic loading. This excessive level of pay out/in motion results in the prediction of additional stress cycles at certain roller beds, which in turn leads to the prediction of unrealistic fatigue damage to the pipeline. This unwarranted fatigue damage then equates to an over-conservative value for the accumulated damage experienced by a pipeline weld during installation, and so leads to a reduction in the estimated fatigue life for the pipeline. This paper describes a novel approach to tensioner modeling which allows for greater control over the velocity of dynamic tensioner pay out/in and so provides a more accurate estimation of the fatigue damage experienced by the pipeline during installation. The paper reports on a case study, as outlined in the following sections, in which a comparison is made between results from this new tensioner model and from a more conventional approach. The comparison considers typical installation parameters as well as an in-depth look at the predicted fatigue damage for the two methods.

  14. Uncertainty modelling and structured singular value computation applied to an electro-mechanical system

    NARCIS (Netherlands)

    Steinbuch, M.; Terlouw, J.C.; Bosgra, O.H.; Smit, S.G.

    1992-01-01

    The investigation of closed-loop systems subject to model perturbations is an important issue to assure stability robustness of a control design. A large variety of model perturbations can be described by norm-bounded uncertainty models. A general approach for modelling structured complex and

  15. Frontolateral Approach Applied to Sellar Region Lesions: A Retrospective Study in 79 Patients

    Directory of Open Access Journals (Sweden)

    Hao-Cheng Liu

    2016-01-01

    Conclusions: FLA was an effective approach in the treatment of sellar region lesions with good preservation of visual function. FLA classification enabled tailored craniotomies for each patient according to the anatomic site of tumor invasion. This study found that FLA had similar outcomes to other surgical approaches of sellar region lesions.

  16. An Optimisation Approach Applied to Design the Hydraulic Power Supply for a Forklift Truck

    DEFF Research Database (Denmark)

    Pedersen, Henrik Clemmensen; Andersen, Torben Ole; Hansen, Michael Rygaard

    2004-01-01

    -level optimisation approach, and is in the current paper exemplified through the design of the hydraulic power supply for a forklift truck. The paper first describes the prerequisites for the method and then explains the different steps in the approach to design the hydraulic system. Finally the results...

  17. Carbonate rock depositional models: A microfacies approach

    Energy Technology Data Exchange (ETDEWEB)

    Carozzi, A.V.

    1988-01-01

    Carbonate rocks contain more than 50% by weight carbonate minerals such as calcite, dolomite, and siderite. Understanding how these rocks form can lead to more efficient methods of petroleum exploration. Microfacies analysis techniques can be used as a method of predicting models of sedimentation for carbonate rocks. Microfacies in carbonate rocks can be seen clearly only in thin sections under a microscope. Thin-section analysis of carbonate rocks is a tool that can be used to understand depositional environments, the diagenetic evolution of carbonate rocks, and the formation of porosity and permeability in carbonate rocks. Microfacies analysis techniques are applied to understanding the origin and formation of carbonate ramps, carbonate platforms, and carbonate slopes and basins. This book will be of interest to students and professionals concerned with the disciplines of sedimentary petrology, sedimentology, petroleum geology, and paleontology.

  18. Applying Interpretive Structural Modeling to Cost Overruns in Construction Projects in the Sultanate of Oman

    Directory of Open Access Journals (Sweden)

    K. Alzebdeh

    2015-06-01

    Full Text Available Cost overruns in construction projects are a problem faced by project managers, engineers, and clients throughout the Middle East. Globally, several studies in the literature have focused on identifying the causes of these overruns and have used statistical methods to rank them according to their impacts. None of these studies has considered the interactions among these factors. This paper examines interpretive structural modelling (ISM) as a viable technique for modelling complex interactions among the factors responsible for cost overruns in construction projects in the Sultanate of Oman. In particular, thirteen interrelated factors associated with cost overruns were identified, along with their contextual interrelationships. Application of ISM organizes these factors into a hierarchical structure which effectively demonstrates their interactions in a simple way. Four factors were found to be at the root of cost overruns: instability of the US dollar, changes in governmental regulations, faulty cost estimation, and poor coordination among projects' parties. Taking appropriate actions to minimize the influence of these factors can ultimately lead to better control of future project costs. This study is of value to managers and decision makers because it provides a powerful yet easy-to-apply approach for investigating the problem of cost overruns and other similar issues.
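    The core ISM computation, building a reachability matrix from the direct-influence relations and partitioning the factors into hierarchy levels, can be sketched as follows. The 5-factor influence matrix is invented for illustration; in the study it would encode the thirteen cost-overrun factors elicited from experts.

```python
import numpy as np

# Interpretive structural modelling (ISM) sketch: from a binary direct-
# influence matrix, build the reachability matrix (transitive closure) and
# partition factors into hierarchy levels. A factor belongs to the current
# level when its reachability set is contained in its antecedent set, the
# standard ISM level-partition rule. The 5-factor matrix is invented.

def transitive_closure(adj):
    n = len(adj)
    reach = adj | np.eye(n, dtype=bool)       # every factor reaches itself
    for k in range(n):                        # Warshall's algorithm
        reach = reach | (reach[:, [k]] & reach[[k], :])
    return reach

def ism_levels(adj):
    reach = transitive_closure(np.asarray(adj, dtype=bool))
    n = len(reach)
    remaining, levels = set(range(n)), []
    while remaining:
        level = [i for i in remaining
                 if {j for j in remaining if reach[i, j]} <=
                    {j for j in remaining if reach[j, i]}]
        levels.append(sorted(level))
        remaining -= set(level)
    return levels                              # levels[0] = top of hierarchy

# adj[i][j] = 1 means factor i directly influences factor j
adj = [[0, 1, 1, 0, 0],
       [0, 0, 0, 1, 0],
       [0, 0, 0, 1, 0],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0]]
print(ism_levels(adj))   # factor 0 ends up alone at the bottom (root) level
```

    Factors that land in the bottom level drive everything above them, which is how the study identifies its four root causes of cost overruns.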

  19. THE 3C COOPERATION MODEL APPLIED TO THE CLASSICAL REQUIREMENT ANALYSIS

    Directory of Open Access Journals (Sweden)

    Vagner Luiz Gava

    2012-08-01

    Full Text Available Aspects related to users' cooperative work are not considered in the traditional approach of software engineering, since the user is viewed independently of his/her workplace environment or group, with the individual model generalized to the study of the collective behavior of all users. This work proposes a software requirements process to address issues involving cooperative work in information systems, in which coordination of the users' actions is distributed and communication among users occurs indirectly through the data entered while using the software. To achieve this goal, this research uses ergonomics, the 3C cooperation model, awareness, and software engineering concepts. Action-research is used as the research methodology, applied in three cycles during the development of a corporate workflow system in a technological research company. This article discusses the third cycle, which corresponds to the process dealing with the refinement of the cooperative work requirements once the software is in actual use in the workplace, where the introduction of a computer system changes the users' workplace from face-to-face interaction to interaction mediated by the software. The results showed that a higher degree of users' awareness of their own activities and of other system users contributes to a decrease in their errors and in inappropriate use of the system.

  20. "Teamwork in hospitals": a quasi-experimental study protocol applying a human factors approach.

    Science.gov (United States)

    Ballangrud, Randi; Husebø, Sissel Eikeland; Aase, Karina; Aaberg, Oddveig Reiersdal; Vifladt, Anne; Berg, Geir Vegard; Hall-Lord, Marie Louise

    2017-01-01

    Effective teamwork and sufficient communication are critical components of patient safety in today's specialized and complex healthcare services. Team training is important for improved efficiency in inter-professional teamwork within hospitals; however, the scientific rigor of studies must be strengthened, and more research is required to compare studies across samples, settings and countries. The aims of the study are to translate and validate teamwork questionnaires and investigate healthcare personnel's perception of teamwork in hospitals (Part 1), and further to explore the impact of an inter-professional teamwork intervention in a surgical ward on structure, process and outcome (Part 2). To address these aims, a descriptive and explorative design (Part 1) and a quasi-experimental interventional design (Part 2) will be applied. The study will be carried out in five different hospitals (A-E) in three hospital trusts in Norway. Frontline healthcare personnel in Hospitals A and B, from both acute and non-acute departments, will be invited to respond to three Norwegian translations of teamwork questionnaires (Part 1). An inter-professional teamwork intervention in line with the TeamSTEPPS-recommended Model of Change will be implemented in a surgical ward at Hospital C. All physicians, registered nurses and assistant nurses in the intervention ward and two control wards (Hospitals D and E) will be invited to report their perception of teamwork, team decision making, safety culture and attitude towards teamwork before the intervention and after six and 12 months. Adult patients admitted to the intervention surgical unit will be invited to report their perception of quality of care during their hospital stay before the intervention and after six and 12 months. Moreover, anonymous patient registry data from local registers and data from patients' medical records will be collected (Part 2). This study will help to understand the impact of an inter-professional teamwork