WorldWideScience

Sample records for modelling approach applied

  1. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a...

  2. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
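
    The local sensitivity analysis the review describes can be illustrated with a finite-difference sketch. The toy model, parameter names and values below are invented for illustration, not taken from the review:

```python
# Local sensitivity analysis via central finite differences on a toy model
# output (an illustrative synthesis/degradation readout, not a published model).
import math

def model_output(k_syn, k_deg, t=10.0):
    # Protein level at time t for synthesis rate k_syn and degradation rate k_deg.
    return (k_syn / k_deg) * (1.0 - math.exp(-k_deg * t))

def local_sensitivity(f, params, name, rel_step=1e-4):
    # Normalized sensitivity d(ln y)/d(ln p) from a small central perturbation.
    p0 = params[name]
    h = p0 * rel_step
    up = f(**dict(params, **{name: p0 + h}))
    dn = f(**dict(params, **{name: p0 - h}))
    return (up - dn) / (2.0 * h) * p0 / f(**params)

params = {"k_syn": 2.0, "k_deg": 0.5}
sens = {name: local_sensitivity(model_output, params, name) for name in params}
print(sens)  # the output scales linearly with k_syn, so its sensitivity is ~1
```

    Normalized sensitivities of this kind make it easy to rank parameters by their influence on a model output, which is the ranking a global analysis then probes over large parameter variations.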

  3. Modeling in applied sciences a kinetic theory approach

    CERN Document Server

    Pulvirenti, Mario

    2000-01-01

    Modeling complex biological, chemical, and physical systems, in the context of spatially heterogeneous mediums, is a challenging task for scientists and engineers using traditional methods of analysis. Modeling in Applied Sciences is a comprehensive survey of modeling large systems using kinetic equations, and in particular the Boltzmann equation and its generalizations. An interdisciplinary group of leading authorities carefully develop the foundations of kinetic models and discuss the connections and interactions between model theories, qualitative and computational analysis and real-world applications. This book provides a thoroughly accessible and lucid overview of the different aspects, models, computations, and methodology for the kinetic-theory modeling process. Topics and Features: * Integrated modeling perspective utilized in all chapters * Fluid dynamics of reacting gases * Self-contained introduction to kinetic models * Becker–Doring equations * Nonlinear kinetic models with chemical reactions * Kinet...

  4. Comparison of various modelling approaches applied to cholera case data

    CSIR Research Space (South Africa)

    Van Den Bergh, F

    2008-06-01

    Full Text Available cross-wavelet technique, which is used to compute lead times for co-varying variables, and suggests transformations that enhance co-varying behaviour. Several statistical modelling techniques, including generalised linear models, ARIMA time series...
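
    As a much simpler stand-in for the cross-wavelet technique mentioned above, plain lagged cross-correlation can also expose lead times between co-varying series. The series below are synthetic and illustrative, not the study's cholera data:

```python
# Lead-time estimation with lagged cross-correlation: "cases" is constructed
# to lag "rain" by exactly 3 time steps, and the scan recovers that lead time.
import math

def xcorr_at_lag(x, y, lag):
    # Pearson correlation between x[t] and y[t + lag].
    pairs = [(x[t], y[t + lag]) for t in range(len(x) - lag)]
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    cov = sum((a - mx) * (b - my) for a, b in pairs)
    sx = math.sqrt(sum((a - mx) ** 2 for a, _ in pairs))
    sy = math.sqrt(sum((b - my) ** 2 for _, b in pairs))
    return cov / (sx * sy)

rain = [math.sin(0.3 * t) for t in range(200)]
cases = [rain[t - 3] if t >= 3 else 0.0 for t in range(200)]

# The lag with the highest correlation is the estimated lead time.
best_lag = max(range(10), key=lambda k: xcorr_at_lag(rain, cases, k))
print(best_lag)  # 3
```

    Unlike the cross-wavelet approach, this gives a single global lead time; the wavelet version additionally localizes the lead-lag relationship in time and frequency.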

  5. A Model-based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  6. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  7. A comparison of various modelling approaches applied to Cholera ...

    African Journals Online (AJOL)

    The analyses are demonstrated on data collected from Beira, Mozambique. Dynamic regression was found to be the preferred forecasting method for this data set. Keywords: Cholera, modelling, signal processing, dynamic regression, negative binomial regression, wavelet analysis, cross-wavelet analysis. ORiON Vol.

  8. An Analytical Model for Learning: An Applied Approach.

    Science.gov (United States)

    Kassebaum, Peter Arthur

    A mediated-learning package, geared toward non-traditional students, was developed for use in the College of Marin's cultural anthropology courses. An analytical model for learning was used in the development of the package, utilizing concepts related to learning objectives, programmed instruction, Gestalt psychology, cognitive psychology, and…

  9. A comparison of various modelling approaches applied to Cholera ...

    African Journals Online (AJOL)

    Abstract. The application of a methodology that proposes the use of spectral methods to inform the development of statistical forecasting models for cholera case data is explored in this paper. The seasonal behaviour of the target variable (cholera cases) is analysed using singular spectrum analysis followed by spectrum ...

  10. A generalised chemical precipitation modelling approach in wastewater treatment applied to calcite

    DEFF Research Database (Denmark)

    Mbamba, Christian Kazadi; Batstone, Damien J.; Flores Alsina, Xavier

    2015-01-01

    , the present study aims to identify a broadly applicable precipitation modelling approach. The study uses two experimental platforms applied to calcite precipitating from synthetic aqueous solutions to identify and validate the model approach. Firstly, dynamic pH titration tests are performed to define...

  11. Blended Risk Approach in Applying PSA Models to Risk-Based Regulations

    International Nuclear Information System (INIS)

    Dimitrijevic, V. B.; Chapman, J. R.

    1996-01-01

    In this paper, the authors discuss a modern approach to applying PSA models in risk-based regulation. The blended risk approach is a combination of traditional and probabilistic processes that is receiving increased attention in different industries in the U.S. and abroad. The use of deterministic regulations and standards provides a proven and well-understood basis on which to assess and communicate the impact of changes to plant design and operation. The incorporation of traditional values into risk evaluation works very well in the blended approach. The approach is very application-specific: it includes multiple risk attributes, qualitative risk analysis, and basic deterministic principles. In blending deterministic and probabilistic principles, the approach ensures that the objectives of the traditional defense-in-depth concept are not compromised and that the design basis of the plant is explicitly considered. (author)

  12. Addressing dependability by applying an approach for model-based risk assessment

    International Nuclear Information System (INIS)

    Gran, Bjorn Axel; Fredriksen, Rune; Thunem, Atoosa P.-J.

    2007-01-01

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied to address different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects, such as system functions, component behaviours and intercommunications, must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and through a series of seven trials provided a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of a trust case development.

  13. Addressing dependability by applying an approach for model-based risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjorn Axel [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: bjorn.axel.gran@hrp.no; Fredriksen, Rune [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: rune.fredriksen@hrp.no; Thunem, Atoosa P.-J. [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: atoosa.p-j.thunem@hrp.no

    2007-11-15

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied to address different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects, such as system functions, component behaviours and intercommunications, must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and through a series of seven trials provided a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of a trust case development.

  14. Positive Mathematical Programming Approaches – Recent Developments in Literature and Applied Modelling

    Directory of Open Access Journals (Sweden)

    Thomas Heckelei

    2012-05-01

    Full Text Available This paper reviews and discusses the more recent literature and application of Positive Mathematical Programming (PMP) in the context of agricultural supply models. Specifically, advances in the empirical foundation of parameter specifications as well as the economic rationalisation of PMP models – both criticized in earlier reviews – are investigated. Moreover, the paper provides an overview of a larger set of models with regular/repeated policy application that apply variants of PMP. Results show that most applications today avoid arbitrary parameter specifications and rely on exogenous information on supply responses to calibrate model parameters. However, only a few approaches use multiple observations to estimate parameters, which is likely due to the still considerable technical challenges associated with it. Equally, we found only limited reflection on the behavioral or technological assumptions that could rationalise the PMP model structure while still keeping the model’s advantages.
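
    The basic PMP calibration step discussed above can be sketched in its simplest textbook form: a quadratic cost term is chosen so that the model's first-order condition reproduces the observed activity level exactly. All prices, costs and levels below are illustrative numbers:

```python
# Simplest single-activity PMP calibration: pick the quadratic cost
# coefficient q so the first-order condition holds at the observed level.

def calibrate_q(price, lin_cost, observed_level):
    # FOC of max_x: price*x - lin_cost*x - 0.5*q*x**2  =>  price = lin_cost + q*x.
    return (price - lin_cost) / observed_level

def supply(price, lin_cost, q):
    # Activity level solving the calibrated first-order condition.
    return (price - lin_cost) / q

price, lin_cost, x_obs = 120.0, 80.0, 50.0   # illustrative values
q = calibrate_q(price, lin_cost, x_obs)
print(supply(price, lin_cost, q))        # reproduces the observed level, 50.0
print(supply(price * 1.1, lin_cost, q))  # smooth response to a 10% price rise
```

    This arbitrary single-observation choice of q is exactly what the review says modern applications avoid, by bringing in exogenous supply-response information or multiple observations to pin down the curvature.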

  15. Does the interpersonal model apply across eating disorder diagnostic groups? A structural equation modeling approach.

    Science.gov (United States)

    Ivanova, Iryna V; Tasca, Giorgio A; Proulx, Geneviève; Bissada, Hany

    2015-11-01

    The interpersonal model has been validated with binge-eating disorder (BED), but it is not yet known whether the model applies across a range of eating disorders (ED). The goal of this study was to investigate the validity of the interpersonal model in anorexia nervosa (restricting type, ANR, and binge-eating/purge type, ANBP), bulimia nervosa (BN), BED, and eating disorder not otherwise specified (EDNOS). Data from a cross-sectional sample of 1459 treatment-seeking women diagnosed with ANR, ANBP, BN, BED and EDNOS were examined for indirect effects of interpersonal problems on ED psychopathology mediated through negative affect. Findings from structural equation modeling demonstrated the mediating role of negative affect in four of the five diagnostic groups. There were significant, medium to large (.239 to .558) indirect effects in the ANR, BN, BED and EDNOS groups but not in the ANBP group. The results of the first reverse model, with interpersonal problems as a mediator between negative affect and ED psychopathology, were nonsignificant, suggesting the specificity of the hypothesized paths. In the second reverse model, however, ED psychopathology was related to interpersonal problems indirectly through negative affect. This is the first study to find support for the interpersonal model of ED in a clinical sample of women with diverse ED diagnoses, though there may be a reciprocal relationship between ED psychopathology and relationship problems through negative affect. Negative affect partially explains the relationship between interpersonal problems and ED psychopathology in women diagnosed with ANR, BN, BED and EDNOS. Interpersonal psychotherapies for ED may be addressing the underlying interpersonal-affective difficulties, thereby reducing ED psychopathology. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Theoretical modeling of electroosmotic flow in soft microchannels: A variational approach applied to the rectangular geometry

    Science.gov (United States)

    Sadeghi, Arman

    2018-03-01

    Modeling of fluid flow in polyelectrolyte layer (PEL)-grafted microchannels is challenging due to their two-layer nature. Hence, the pertinent studies are limited only to circular and slit geometries for which matching the solutions for inside and outside the PEL is simple. In this paper, a simple variational-based approach is presented for the modeling of fully developed electroosmotic flow in PEL-grafted microchannels by which the whole fluidic area is considered as a single porous medium of variable properties. The model is capable of being applied to microchannels of a complex cross-sectional area. As an application of the method, it is applied to a rectangular microchannel of uniform PEL properties. It is shown that modeling a rectangular channel as a slit may lead to considerable overestimation of the mean velocity especially when both the PEL and electric double layer (EDL) are thick. It is also demonstrated that the mean velocity is an increasing function of the fixed charge density and PEL thickness and a decreasing function of the EDL thickness and PEL friction coefficient. The influence of the PEL thickness on the mean velocity, however, vanishes when both the PEL thickness and friction coefficient are sufficiently high.

  17. Fuel moisture content estimation: a land-surface modelling approach applied to African savannas

    Science.gov (United States)

    Ghent, D.; Spessa, A.; Kaduk, J.; Balzter, H.

    2009-04-01

    Despite the importance of fire to the global climate system, in terms of emissions from biomass burning, ecosystem structure and function, and changes to surface albedo, current land-surface models do not adequately estimate key variables affecting fire ignition and propagation. Fuel moisture content (FMC) is considered one of the most important of these variables (Chuvieco et al., 2004). Biophysical models, with appropriate plant functional type parameterisations, are the most viable option to adequately predict FMC over continental scales at high temporal resolution. However, the complexity of plant-water interactions, and the variability associated with short-term climate changes, means it is one of the most difficult fire variables to quantify and predict. Our work attempts to resolve this issue using a combination of satellite data and biophysical modelling applied to Africa. The approach we take is to represent live FMC as a surface dryness index; expressed as the ratio between the Normalised Difference Vegetation Index (NDVI) and land-surface temperature (LST). It has been argued in previous studies (Sandholt et al., 2002; Snyder et al., 2006), that this ratio displays a statistically stronger correlation to FMC than either of the variables, considered separately. In this study, simulated FMC is constrained through the assimilation of remotely sensed LST and NDVI data into the land-surface model JULES (Joint-UK Land Environment Simulator). Previous modelling studies of fire activity in Africa savannas, such as Lehsten et al. (2008), have reported significant levels of uncertainty associated with the simulations. This uncertainty is important because African savannas are among some of the most frequently burnt ecosystems and are a major source of greenhouse trace gases and aerosol emissions (Scholes et al., 1996). 
Furthermore, regional climate model studies indicate that many parts of the African savannas will experience drier and warmer conditions in future
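
    The surface dryness index used in this study is straightforward to compute once NDVI and LST retrievals are available. The grid-cell values below are invented for illustration, not JULES output or real satellite data:

```python
# Surface dryness index sketch: live fuel moisture content represented as the
# ratio NDVI / LST, as in the abstract above.

def dryness_index(ndvi, lst_kelvin):
    # Greener (higher NDVI) and cooler (lower LST) cells score as moister fuel.
    if not 0.0 < ndvi <= 1.0 or lst_kelvin <= 0.0:
        raise ValueError("expect NDVI in (0, 1] and LST in kelvin")
    return ndvi / lst_kelvin

wet_cell = dryness_index(0.65, 295.0)   # green, cool savanna cell
dry_cell = dryness_index(0.25, 315.0)   # sparse, hot savanna cell
print(wet_cell > dry_cell)  # True: the wetter cell scores higher
```

    In the study itself the index is not computed directly from retrievals alone; the remotely sensed LST and NDVI are assimilated into JULES so that the simulated FMC is constrained by both model physics and observations.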

  18. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-10-01

    Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows, due to the preservation of sample diversity from the kernel, even if particle impoverishment takes place.
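
    The SIR baseline the paper compares against can be sketched on a toy one-dimensional state-space model. This is not the WEP hydrologic model; the random-walk state, noise levels and particle count are all illustrative:

```python
# Minimal sequential importance resampling (SIR) particle filter on a toy
# 1-D random-walk state with Gaussian observation noise.
import math
import random

random.seed(42)

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def sir_filter(observations, n_particles=500, proc_sd=0.3, obs_sd=0.5):
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Propagate with process noise, then weight by the likelihood of y.
        particles = [p + random.gauss(0.0, proc_sd) for p in particles]
        weights = [gauss_pdf(y, p, obs_sd) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Multinomial resampling to fight weight degeneracy.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

true_state = [0.1 * t for t in range(30)]            # slowly rising "flow"
obs = [s + random.gauss(0.0, 0.5) for s in true_state]
est = sir_filter(obs)
```

    Resampling at every step, as here, is what causes the particle impoverishment the abstract mentions; the lagged, regularized filter adds an MCMC move step precisely to restore sample diversity after resampling.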

  19. Developing scenarios to assess future landslide risks: a model-based approach applied to mountainous regions

    Science.gov (United States)

    Vacquie, Laure; Houet, Thomas

    2016-04-01

    In the last century, European mountain landscapes have experienced significant transformations. Natural and anthropogenic changes, climate change, touristic and industrial development, socio-economic interactions, and their implications in terms of LUCC (land use and land cover changes) have directly influenced the spatial organization and vulnerability of mountain landscapes. This study is conducted as part of the SAMCO project funded by the French National Science Agency (ANR). It aims at developing a methodological approach, combining various tools, modelling platforms and methods, to identify regions vulnerable to landslide hazards while accounting for future LUCC. It presents an integrated approach combining participative scenarios and a LULC change simulation model to assess the combined effects of LUCC and climate change on landslide risks in the Cauterets valley (French Pyrenees Mountains) up to 2100. Through vulnerability and risk mapping, the objective is to gather information to support landscape planning and to implement land use strategies with local stakeholders for risk management. Four contrasting scenarios are developed, exhibiting contrasting trajectories of socio-economic development. Prospective scenarios are based on national and international socio-economic contexts relying on existing assessment reports. The methodological approach integrates knowledge from local stakeholders to refine each scenario during its construction and to reinforce its plausibility and relevance by accounting for local specificities, e.g. logging and pastoral activities, touristic development, urban planning, etc. A process-based model, the Forecasting Scenarios for Mountains (ForeSceM) model, developed on the Dinamica Ego modelling platform, is used to spatially allocate future LUCC for each prospective scenario. Concurrently, a spatial decision support tool, the SYLVACCESS model, is used to identify areas accessible for forestry in scenarios projecting logging

  20. Equivalent electrical network model approach applied to a double acting low temperature differential Stirling engine

    International Nuclear Information System (INIS)

    Formosa, Fabien; Badel, Adrien; Lottin, Jacques

    2014-01-01

    Highlights: • An equivalent electrical network model of a Stirling engine is proposed. • The model is applied to a membrane low temperature differential double acting Stirling engine. • The operating conditions (self-startup and steady state behavior) are defined. • An experimental engine is presented and tested. • The model is validated against experimental results. - Abstract: This work presents a network model to simulate the periodic behavior of a double acting free piston type Stirling engine. Each component of the engine is considered independently and its equivalent electrical circuit derived. When assembled into a global electrical network, a global model of the engine is established. Its steady behavior can be obtained by the analysis of the transfer function for one phase from the piston to the expansion chamber. It is then possible to simulate the dynamics (steady state stroke and operating frequency) as well as the thermodynamic performance (output power and efficiency) for a given mean pressure and given heat source and heat sink temperatures. The motion amplitude in particular can be determined by the spring-mass properties of the moving parts and the main nonlinear effects, which are taken into account in the model. The thermodynamic features of the model have been validated using the classical isothermal Schmidt analysis for a given stroke. A three-phase low temperature differential double acting free membrane architecture has been built and tested. The experimental results are compared with the model and a satisfactory agreement is obtained. The stroke and operating frequency are predicted with less than 2% error, whereas the output power discrepancy is about 30%. Finally, some optimization routes are suggested to improve the design and maximize the performance, aiming at waste heat recovery applications

  21. Capturing ecology in modeling approaches applied to environmental risk assessment of endocrine active chemicals in fish.

    Science.gov (United States)

    Mintram, Kate S; Brown, A Ross; Maynard, Samuel K; Thorbek, Pernille; Tyler, Charles R

    2018-02-01

    Endocrine active chemicals (EACs) are widespread in freshwater environments and both laboratory and field based studies have shown reproductive effects in fish at environmentally relevant exposures. Environmental risk assessment (ERA) seeks to protect wildlife populations and prospective assessments rely on extrapolation from individual-level effects established for laboratory fish species to populations of wild fish using arbitrary safety factors. Population susceptibility to chemical effects, however, depends on exposure risk, physiological susceptibility, and population resilience, each of which can differ widely between fish species. Population models have significant potential to address these shortfalls and to include individual variability relating to life-history traits, demographic and density-dependent vital rates, and behaviors which arise from inter-organism and organism-environment interactions. Confidence in population models has recently resulted in the EU Commission stating that results derived from reliable models may be considered when assessing the relevance of adverse effects of EACs at the population level. This review critically assesses the potential risks posed by EACs for fish populations, considers the ecological factors influencing these risks and explores the benefits and challenges of applying population modeling (including individual-based modeling) in ERA for EACs in fish. We conclude that population modeling offers a way forward for incorporating greater environmental relevance in assessing the risks of EACs for fishes and for identifying key risk factors through sensitivity analysis. Individual-based models (IBMs) allow for the incorporation of physiological and behavioral endpoints relevant to EAC exposure effects, thus capturing both direct and indirect population-level effects.
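
    The kind of individual-based model (IBM) the review advocates can be sketched with a toy fish population in which EAC exposure scales individual fecundity. All vital rates, the carrying capacity and the exposure-effect factor below are invented for illustration:

```python
# Toy individual-based population model: annual survival and density-dependent
# recruitment per individual, with exposure_effect < 1 modelling reduced
# reproduction under endocrine-active-chemical exposure.
import random

random.seed(7)

def simulate(years, n0=200, survival=0.6, fecundity=1.2, exposure_effect=1.0,
             capacity=1000):
    n = n0
    for _ in range(years):
        survivors = sum(1 for _ in range(n) if random.random() < survival)
        # Density dependence: recruitment falls as the population nears capacity.
        dd = max(0.0, 1.0 - survivors / capacity)
        recruits = sum(
            1 for _ in range(survivors)
            if random.random() < min(1.0, fecundity * exposure_effect * dd)
        )
        n = survivors + recruits
    return n

control = simulate(20)
exposed = simulate(20, exposure_effect=0.4)
print(control, exposed)  # the exposed population declines relative to control
```

    Even this minimal sketch shows the point the review makes: whether an individual-level fecundity effect translates into a population-level decline depends on demographic rates and density dependence, not on the individual effect size alone.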

  22. Economic and ecological impacts of bioenergy crop production—a modeling approach applied in Southwestern Germany

    Directory of Open Access Journals (Sweden)

    Hans-Georg Schwarz-v. Raumer

    2017-03-01

    Full Text Available This paper considers scenarios of cultivating energy crops in the German Federal State of Baden-Württemberg to identify potentials and limitations of sustainable bioenergy production. Trade-offs are analyzed among income and production structure in agriculture, bioenergy crop production, greenhouse gas emissions, and the interests of soil, water and species habitat protection. An integrated modelling approach (IMA) was implemented, coupling ecological and economic models in a model chain. IMA combines the Economic Farm Emission Model (EFEM; key input: parameter sets on farm production activities), the Environmental Policy Integrated Climate model (EPIC; key input: parameter sets on environmental cropping effects) and GIS geo-processing models. EFEM is a supply model that maximizes total gross margins at farm level with simultaneous calculation of greenhouse gas emissions from agricultural production. Calculations by EPIC result in estimates for soil erosion by water, nitrate leaching, soil organic carbon and greenhouse gas emissions from soil. GIS routines provide land suitability analyses, scenario settings concerning nature conservation and habitat models for target species, and help to enable spatially explicit results. The model chain is used to calculate scenarios representing different intensities of energy crop cultivation. To design scenarios that are detailed and in step with practice, comprehensive data research as well as fact and effect analyses were carried out. The scenarios indicate that, not in general but for specific farm types, the energy crop share increases sharply if not restricted and leads to an increase in income. This, however, leads to significant increases in soil erosion by water, nitrate leaching and greenhouse gas emissions. It has to be expected that an extension of nature conservation leads to an intensification of the remaining grassland and arable land not covered by the conservation measures

  23. Interdisciplinary approaches of transcranial magnetic stimulation applied to a respiratory neuronal circuitry model.

    Directory of Open Access Journals (Sweden)

    Stéphane Vinit

    Full Text Available Respiratory-related diseases associated with the neuronal control of breathing represent life-threatening issues and, to date, no effective therapeutics are available to enhance the impaired function. The aim of this study was to determine whether a preclinical respiratory model could be used for further studies to develop a non-invasive therapeutic tool applied to rat diaphragmatic neuronal circuitry. Transcranial magnetic stimulation (TMS) was performed on adult male Sprague-Dawley rats using a human figure-of-eight coil. The largest diaphragmatic motor evoked potentials (MEPdia) were recorded when the center of the coil was positioned 6 mm caudal from Bregma, involving a stimulation of respiratory supraspinal pathways. Magnetic shielding of the coil with mu metal reduced magnetic field intensities and improved focality, with an increased motor threshold and a lower-amplitude recruitment curve. Moreover, transsynaptic neuroanatomical tracing with pseudorabies virus (applied to the diaphragm) suggests that connections exist between the motor cortex, the periaqueductal grey cell regions, several brainstem neurons and spinal phrenic motoneurons (distributed in the C3-4 spinal cord). These results reveal the anatomical substrate through which supraspinal stimulation can convey descending action potential volleys to the spinal motoneurons (directly or indirectly). We conclude that MEPdia following a single pulse of TMS can be successfully recorded in the rat and may be used in the assessment of respiratory supraspinal plasticity. Supraspinal non-invasive stimulations aimed at neuromodulating respiratory circuitry will enable new avenues of research into neuroplasticity and the development of therapies for respiratory dysfunction associated with neural injury and disease (e.g. spinal cord injury, amyotrophic lateral sclerosis).

  24. Effective site-energy model: A thermodynamic approach applied to size-mismatched alloys

    Science.gov (United States)

    Berthier, F.; Creuze, J.; Legrand, B.

    2017-06-01

    We present a novel energetic model that takes into account atomistic relaxations to describe the thermodynamic properties of A_cB_{1-c} binary alloys. It requires the calculation of the energies on each site of a random solid solution after relaxation as a function of both the local composition and the nominal concentration. These site energies are obtained by molecular statics simulations using N-body interatomic potentials derived from the second-moment approximation (SMA) of the tight-binding scheme. This new model allows us to determine the effective pair interactions (EPIs) that drive the short-range order (SRO) and to analyze the relative role of the EPIs' contribution to the mixing enthalpy, with respect to the contribution due to the lattice mismatch between the constituents. We apply this formalism to Au-Ni and Ag-Cu alloys, both of which tend to phase separate in the bulk and exhibit a large size mismatch. Rigid-lattice Monte Carlo (MC) simulations lead to phase diagrams that are in good agreement with both those obtained by off-lattice SMA-MC simulations and the experimental ones. While the phase diagrams of Au-Ni and Ag-Cu alloys are very similar, we show that phase separation is mainly driven by the elastic contribution for Au-Ni and by the EPIs' contribution for Ag-Cu. Furthermore, for Au-Ni, the analysis of the SRO shows an inversion between the tendency to order and the tendency to phase separate as a function of the concentration.
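
    The rigid-lattice Monte Carlo simulations mentioned above can be illustrated with a minimal Metropolis sketch driven by a single effective pair interaction on a one-dimensional ring. The Hamiltonian and all parameter values are illustrative, not the paper's SMA-derived EPIs:

```python
# Rigid-lattice Metropolis Monte Carlo for a binary alloy on a 1-D ring.
# spins[i] is +1 (A) or -1 (B); energy = -V * sum_i s_i * s_{i+1}, so a
# positive V favours like neighbours, i.e. a tendency to phase separate.
import math
import random

random.seed(1)

def metropolis(spins, V, kT, steps):
    n = len(spins)
    for _ in range(steps):
        i = random.randrange(n)
        # Energy change from flipping site i (only its two bonds are touched).
        dE = 2.0 * V * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if dE <= 0 or random.random() < math.exp(-dE / kT):
            spins[i] = -spins[i]
    return spins

def like_bond_fraction(spins):
    n = len(spins)
    return sum(spins[i] == spins[(i + 1) % n] for i in range(n)) / n

spins = [random.choice([1, -1]) for _ in range(400)]
start = like_bond_fraction(spins)
metropolis(spins, V=1.0, kT=0.5, steps=40000)
print(start, like_bond_fraction(spins))  # like-bond fraction rises: clustering
```

    The same Metropolis machinery, with EPIs fitted from relaxed site energies instead of a single hand-picked V, and on a 3-D lattice, is what produces the phase diagrams discussed in the abstract.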

  5. A NURBS-based finite element model applied to geometrically nonlinear elastodynamics using a corotational approach

    KAUST Repository

    Espath, L. F R

    2015-02-03

    A numerical model to deal with nonlinear elastodynamics involving large rotations within the framework of the finite element based on NURBS (Non-Uniform Rational B-Spline) basis is presented. A comprehensive kinematical description using a corotational approach and an orthogonal tensor given by the exact polar decomposition is adopted. The state equation is written in terms of corotational variables according to the hypoelastic theory, relating the Jaumann derivative of the Cauchy stress to the Eulerian strain rate. The generalized-α (Gα) method and the Generalized Energy-Momentum Method with an additional parameter (GEMM+ξ) are employed in order to obtain a stable and controllable dissipative time-stepping scheme with algorithmic conservative properties for nonlinear dynamic analyses. The main contribution is to show that the energy-momentum conservation properties and numerical stability may be improved once a NURBS-based FEM is used in the spatial discretization. It is also shown that high continuity can postpone the numerical instability when GEMM+ξ with a consistent mass matrix is employed; likewise, increasing the continuity class yields a decrease in the numerical dissipation. A parametric study is carried out in order to show the stability and energy budget in terms of several properties such as continuity class, spectral radius and lumped as well as consistent mass matrices.
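
    The generalized-α time stepping mentioned in this abstract can be sketched on the simplest possible case, an undamped single-degree-of-freedom oscillator. This is an illustrative sketch of the standard Chung-Hulbert parameterisation by the high-frequency spectral radius, not the paper's NURBS/GEMM+ξ implementation; the function name and arguments are assumptions.

    ```python
    import math

    def generalized_alpha(m, k, u0, v0, h, n_steps, rho_inf=0.9):
        """Generalized-alpha integration of m*u'' + k*u = 0.
        rho_inf in [0, 1] is the target spectral radius in the
        high-frequency limit: rho_inf = 1 gives no numerical dissipation,
        smaller values progressively damp the high modes."""
        am = (2.0 * rho_inf - 1.0) / (rho_inf + 1.0)
        af = rho_inf / (rho_inf + 1.0)
        beta = 0.25 * (1.0 - am + af) ** 2
        gamma = 0.5 - am + af
        u, v = u0, v0
        a = -k * u / m  # consistent initial acceleration
        for _ in range(n_steps):
            # Newmark-style predictor for the displacement
            u_pred = u + h * v + h * h * (0.5 - beta) * a
            # balance at the generalized midpoints:
            # m[(1-am)a_{n+1} + am a_n] + k[(1-af)u_{n+1} + af u_n] = 0
            lhs = (1.0 - am) * m + (1.0 - af) * k * beta * h * h
            rhs = -am * m * a - k * ((1.0 - af) * u_pred + af * u)
            a_new = rhs / lhs
            u = u_pred + h * h * beta * a_new
            v = v + h * ((1.0 - gamma) * a + gamma * a_new)
            a = a_new
        return u, v
    ```

    Sweeping `rho_inf` and monitoring the discrete energy is the scalar analogue of the parametric stability/energy-budget study described above.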

  6. Applying of an Ontology based Modeling Approach to Cultural Heritage Systems

    Directory of Open Access Journals (Sweden)

    POPOVICI, D.-M.

    2011-08-01

    Full Text Available Any virtual environment (VE) built in a classical way is dedicated to a very specific domain. Its modification, or even its adaptation to another domain, requires expensive human intervention measured in time and money. In this way the product, that is the VE, returns to the first phases of the development process. In a previous work we proposed an approach that combines domain ontologies and conceptual modeling to construct more accurate VEs. Our method is based on the description of the domain knowledge in a standard format and the assisted creation (using these pieces of knowledge) of the VE. This permits the explanation, within the virtual reality (VR) simulation, of the semantics of the whole context and of each object. This knowledge may then be transferred to the public users. In this paper we prove the effectiveness of our method on the construction process of a VE that simulates the organization of a Greek-Roman colony situated on the Black Sea coast and the economic and social activities of its people.

  7. Modern software approaches applied to a Hydrological model: the GEOtop Open-Source Software Project

    Science.gov (United States)

    Cozzini, Stefano; Endrizzi, Stefano; Cordano, Emanuele; Bertoldi, Giacomo; Dall'Amico, Matteo

    2017-04-01

    The GEOtop hydrological scientific package is an integrated hydrological model that simulates the heat and water budgets at and below the soil surface. It describes the three-dimensional water flow in the soil and the energy exchange with the atmosphere, considering the radiative and turbulent fluxes. Furthermore, it reproduces the highly non-linear interactions between the water and energy balance during soil freezing and thawing, and simulates the temporal evolution of snow cover, soil temperature and moisture. The core components of the package were presented in the 2.0 version (Endrizzi et al., 2014), which was released as a Free Software open-source project. However, despite the high scientific quality of the project, a modern software engineering approach was still missing. This weakness hindered its scientific potential and its use, both as a standalone package and, more importantly, in an integrated way with other hydrological software tools. In this contribution we present our recent software re-engineering efforts to create a robust and stable scientific software package open to the hydrological community, easily usable by researchers and experts, and interoperable with other packages. The activity takes as a starting point the 2.0 version, scientifically tested and published. This version, together with several test cases based on recently published or available GEOtop applications (Cordano and Rigon, 2013, WRR; Kollet et al., 2016, WRR), provides the baseline code and a number of referenced results as benchmarks. Comparison and scientific validation can then be performed for each software re-engineering activity carried out on the package. To keep track of every single change, the package is published on its own GitHub repository geotopmodel.github.io/geotop/ under the GPL v3.0 license. A Continuous Integration mechanism by means of Travis-CI has been enabled on the GitHub repository on the master and main development branches. The usage of CMake configuration tool

  8. Applying the age-shift approach to model responses to midrotation fertilization

    Science.gov (United States)

    Colleen A. Carlson; Thomas R. Fox; H. Lee Allen; Timothy J. Albaugh

    2010-01-01

    Growth and yield models used to evaluate midrotation fertilization economics require adjustments to account for the typically observed responses. This study investigated the use of age-shift models to predict midrotation fertilizer responses. Age-shift prediction models were constructed from a regional study consisting of 43 installations of a nitrogen (N) by...

  9. Bio-economic modeling of water quality improvements using a dynamic applied general equilibrium approach

    NARCIS (Netherlands)

    Dellink, R.; Brouwer, R.; Linderhof, V.G.M.; Stone, K.

    2011-01-01

    An integrated bio-economic model is developed to assess the impacts of pollution reduction policies on water quality and the economy. Emission levels of economic activities to water are determined based on existing environmental accounts. These emission levels are built into a dynamic economic model

  10. Extension of Petri Nets by Aspects to Apply the Model Driven Architecture Approach

    NARCIS (Netherlands)

    Roubtsova, E.E.; Aksit, Mehmet

    2005-01-01

    Within MDA, models are usually created in the UML. However, one may prefer to use different notations such as Petri-nets, for example, for modelling concurrency and synchronization properties of systems. This paper claims that techniques that are adopted within the context of MDA can also be

  11. A Single-column Model Ensemble Approach Applied to the TWP-ICE Experiment

    Science.gov (United States)

    Davies, L.; Jakob, C.; Cheung, K.; DelGenio, A.; Hill, A.; Hume, T.; Keane, R. J.; Komori, T.; Larson, V. E.; Lin, Y.

    2013-01-01

    Single-column models (SCM) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the prescribed best-estimate large-scale observations. Errors in estimating the observations will result in uncertainty in the modeled simulations. One method to address this modeled uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCM and two cloud-resolving models (CRM). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCM and CRM. Differences are also apparent between the models in the ensemble mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to the forcing. The ensemble is further used to investigate cloud variables and precipitation, and identifies differences between CRM and SCM, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations and hence enables a more complete model investigation compared to using only the more traditional single best-estimate simulation.

  12. A Single Column Model Ensemble Approach Applied to the TWP-ICE Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Davies, Laura; Jakob, Christian; Cheung, K.; Del Genio, Anthony D.; Hill, Adrian; Hume, Timothy; Keane, R. J.; Komori, T.; Larson, Vincent E.; Lin, Yanluan; Liu, Xiaohong; Nielsen, Brandon J.; Petch, Jon C.; Plant, R. S.; Singh, M. S.; Shi, Xiangjun; Song, X.; Wang, Weiguo; Whitall, M. A.; Wolf, A.; Xie, Shaocheng; Zhang, Guang J.

    2013-06-27

    Single column models (SCM) are useful testbeds for investigating the parameterisation schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale data prescribed. One method to address this uncertainty is to perform ensemble simulations of the SCM. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCM and 2 cloud-resolving models (CRM). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the moisture budget between the SCM and CRM. Systematic differences are also apparent in the ensemble mean vertical structure of cloud variables. The ensemble is further used to investigate relations between cloud variables and precipitation, identifying large differences between CRM and SCM. This study highlights that additional information can be gained by performing ensemble simulations, enhancing the information derived from models using the more traditional single best-estimate simulation.

  13. Chemical, spectroscopic, and ab initio modelling approach to interfacial reactivity applied to anion retention by siderite

    International Nuclear Information System (INIS)

    Badaut, V.

    2010-07-01

    Among the many radionuclides contained in high-level nuclear waste, 79Se was identified as a potential threat to the safety of long-term underground storage. However, siderite (FeCO3) is known to form upon corrosion of the waste container, and the impact of this mineral on the fate of selenium had not been accounted for. In this work, the interactions between selenium oxyanions - selenate and selenite - and siderite were investigated. To this end, both experimental characterizations (solution chemistry, X-ray Absorption Spectroscopy - XAS) and theoretical studies (ab initio modelling using Density Functional Theory - DFT) were performed. Selenite and selenate (≤ 10-3 M) retention experiments with siderite suspensions (75 g/L) at neutral pH in a reducing glovebox (5% H2) showed that selenite is quantitatively immobilized by siderite after 48 h of reaction time, whereas selenate is only partly immobilized after 10 days. In the selenite case, XAS showed that the immobilized selenium is initially present as Se(IV), probably sorbed on the siderite surface. After 10 days of reaction, selenite ions are quantitatively reduced and form poorly crystalline elemental selenium. Selenite retention and reduction kinetics are therefore distinct. On the other hand, the fraction of immobilized selenate retained in the solid fraction does not appear to be significantly reduced over the probed timescale (10 days). For a better understanding of the reduction mechanism of selenite ions by siderite, the properties of the bulk and of perfect surfaces of siderite were modelled using DFT. We suggest that the properties of the valence electrons can be correctly described only if the symmetry of the ground-state electronic density is lower than the experimental crystallographic symmetry. We then show that the retention of simple molecules such as O2 or H2O on siderite and magnesite (10-14) perfect surfaces (the perfect cleavage plane, whose surface energy is the lowest according to DFT) can be modelled with

  14. A practical approach to parameter estimation applied to model predicting heart rate regulation

    DEFF Research Database (Denmark)

    Olufsen, Mette; Ottesen, Johnny T.

    2013-01-01

    baroreceptor feedback regulation of heart rate during head-up tilt. The three methods include: structured analysis of the correlation matrix, analysis via singular value decomposition followed by QR factorization, and identification of the subspace closest to the one spanned by eigenvectors of the model...... Hessian. Results showed that all three methods facilitate identification of a parameter subset. The “best” subset was obtained using the structured correlation method, though this method was also the most computationally intensive. Subsets obtained using the other two methods were easier to compute...
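
    The second of the three subset-selection methods the abstract lists (singular value decomposition followed by QR factorization) can be sketched as follows. This is a generic illustration with an invented sensitivity matrix, using greedy pivoted Gram-Schmidt in place of a library pivoted QR; it is not the authors' code, and the tolerance is an assumption.

    ```python
    import numpy as np

    def identifiable_subset(S, tol=1e-8):
        """Pick an identifiable parameter subset from a sensitivity matrix S
        (rows = observations, columns = parameters).  The numerical rank r
        comes from the singular values; pivoted Gram-Schmidt on the leading
        right singular vectors then orders the parameters by how much
        independent information each one carries."""
        U, s, Vt = np.linalg.svd(S, full_matrices=False)
        r = int(np.sum(s > tol * s[0]))  # numerical rank
        B = Vt[:r, :].copy()             # r x p; one column per parameter
        remaining = list(range(B.shape[1]))
        chosen = []
        for _ in range(r):
            norms = np.linalg.norm(B[:, remaining], axis=0)
            k = remaining[int(np.argmax(norms))]   # pivot: largest residual column
            chosen.append(k)
            remaining.remove(k)
            q = B[:, k] / np.linalg.norm(B[:, k])
            for j in remaining:                    # deflate the chosen direction
                B[:, j] -= q * (q @ B[:, j])
        return chosen
    ```

    Parameters left out of the returned subset are those whose sensitivities are (nearly) linear combinations of the chosen ones, and would be fixed at nominal values during estimation.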

  15. A multi-layered approach to product architecture modeling: Applied to technology prototypes

    DEFF Research Database (Denmark)

    Ravn, Poul Martin; Guðlaugsson, Tómas Vignir; Mortensen, Niels Henrik

    2016-01-01

    , added functions, or material savings, the prototype development can be hard to manage. In this article, two contributions are made. The first adds to the vocabulary of prototyping, defining technology prototype, a prototype used for testing a novel technology in the context of an existing product......–private partnership project to support the development of technology prototypes using electro-active polymer transducer technology. The findings showed that the TePPAT supported the development teams in the four cases. It is concluded that the TePPAT can support multidisciplinary development teams in modeling...

  16. Applied stochastic modelling

    CERN Document Server

    Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P

    2008-01-01

    Introduction and Examples Introduction Examples of data sets Basic Model Fitting Introduction Maximum-likelihood estimation for a geometric model Maximum-likelihood for the beta-geometric model Modelling polyspermy Which model? What is a model for? Mechanistic models Function Optimisation Introduction MATLAB: graphs and finite differences Deterministic search methods Stochastic search methods Accuracy and a hybrid approach Basic Likelihood Tools Introduction Estimating standard errors and correlations Looking at surfaces: profile log-likelihoods Confidence regions from profiles Hypothesis testing in model selection Score and Wald tests Classical goodness of fit Model selection bias General Principles Introduction Parameterisation Parameter redundancy Boundary estimates Regression and influence The EM algorithm Alternative methods of model fitting Non-regular problems Simulation Techniques Introduction Simulating random variables Integral estimation Verification Monte Carlo inference Estimating sampling distributi...
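
    The table of contents above opens with maximum-likelihood estimation for a geometric model, which conveniently has a closed form. A minimal sketch, assuming the support k = 1, 2, ... (function names are illustrative):

    ```python
    import math

    def geometric_mle(data):
        """Closed-form MLE of p for the geometric model
        P(X = k) = (1 - p)**(k - 1) * p,  k = 1, 2, ...
        Setting d/dp of the log-likelihood to zero gives
        p_hat = n / sum(data), i.e. one over the sample mean."""
        return len(data) / sum(data)

    def log_likelihood(p, data):
        """Geometric log-likelihood; useful for the profile plots,
        confidence regions and Wald tests the book's contents mention."""
        return sum((k - 1) * math.log(1.0 - p) + math.log(p) for k in data)
    ```

    Plotting `log_likelihood` over a grid of `p` values around `geometric_mle(data)` reproduces the "looking at surfaces / profile log-likelihoods" idea in one dimension.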

  17. Coupled experimental-modeling approach for estimation of root zone leaching of applied irrigation water and fertilizers

    Science.gov (United States)

    Kandelous, M.; Moradi, A. B.; Hopmans, J. W.; Burger, M.

    2012-12-01

    Micro-irrigation methods have proven to be highly effective in achieving the desired crop yields, but there is increasing evidence suggesting the need for the optimization of irrigation scheduling and management, thereby achieving sustainable agricultural practices, while minimizing losses of applied water and fertilizers at the field scale. Moreover, sustainable irrigation systems must maintain a long-term salt balance that minimizes both salinity impacts on crop production and salt leaching to the groundwater. To optimize cropping system efficiency and irrigation/fertigation practices, irrigation and fertilizers must be applied at the right concentration, place, and time to ensure maximum root uptake. However, the applied irrigation water and dissolved fertilizer, as well as root growth and associated nutrient and water uptake, interact with soil properties and nutrient sources in a complex manner that cannot easily be resolved with 'experience' and field experimentation alone. Therefore, a coupling of experimentation and modeling is required to unravel the complexities resulting from spatial variations of soil texture and layering often found in agricultural fields. We present experimental approaches that provide the necessary data on soil moisture, water potential, and nitrate concentration and multi-dimensional modeling of unsaturated water flow and solute transport to evaluate and optimize irrigation and fertility management practices for multiple locations, crop types, and irrigation systems.

  18. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches

    International Nuclear Information System (INIS)

    Berge-Thierry, C.

    2007-05-01

    The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my Ph.D. thesis in 1997. This synthesis covers the two years spent as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of the seismic risk topic, and particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for a specific type of structure (conventional structure or high-risk construction), seismic hazard assessment requires: identifying and locating the seismic sources (zones or faults), characterizing their activity, and evaluating the seismic motion the structure has to resist (including the site effects). I specialized in the field of numerical strong-motion prediction using high-frequency seismic source modelling, and joining the IRSN allowed me to work rapidly on the different tasks of seismic hazard assessment. Thanks to this expert practice and to participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of the input ground motion in designing or verifying the stability of structures. (author)

  19. An approach to computing discrete adjoints for MPI-parallelized models applied to Ice Sheet System Model 4.11

    Directory of Open Access Journals (Sweden)

    E. Larour

    2016-11-01

    Full Text Available Within the framework of sea-level rise projections, there is a strong need for hindcast validation of the evolution of polar ice sheets in a way that tightly matches observational records (mainly from radar, gravity, and altimetry observations). However, the computational requirements for making hindcast reconstructions possible are severe and rely mainly on the evaluation of the adjoint state of transient ice-flow models. Here, we look at the computation of adjoints in the context of the NASA/JPL/UCI Ice Sheet System Model (ISSM), written in C++ and designed for parallel execution with MPI. We present the adaptations required in the way the software is designed and written, but also generic adaptations in the tools facilitating the adjoint computations. We concentrate on the use of operator overloading coupled with the AdjoinableMPI library to achieve the adjoint computation of the ISSM. We present a comprehensive approach to (1) carry out type changing through the ISSM, hence facilitating operator overloading, (2) bind to external solvers such as MUMPS and GSL-LU, and (3) handle MPI-based parallelism to scale the capability. We demonstrate the success of the approach by computing sensitivities of hindcast metrics such as the misfit to observed records of surface altimetry on the northeastern Greenland Ice Stream, or the misfit to observed records of surface velocities on Upernavik Glacier, central West Greenland. We also provide metrics for the scalability of the approach, and the expected performance. This approach has the potential to enable a new generation of hindcast-validated projections that make full use of the wealth of datasets currently being collected, or already collected, in Greenland and Antarctica.
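
    The operator-overloading idea at the heart of this adjoint approach, which ISSM applies at scale in C++ together with AdjoinableMPI, can be illustrated by a toy reverse-mode scalar type. Everything here (class name, methods) is an illustrative assumption, not ISSM's API: each arithmetic operation records its local derivatives, and a backward sweep in reverse topological order accumulates the adjoints.

    ```python
    class Adj:
        """Minimal reverse-mode AD scalar built on operator overloading."""

        def __init__(self, value, parents=()):
            self.value = value
            self.parents = parents  # (parent_node, local_derivative) pairs
            self.grad = 0.0

        def __add__(self, other):
            other = other if isinstance(other, Adj) else Adj(other)
            return Adj(self.value + other.value, ((self, 1.0), (other, 1.0)))

        def __mul__(self, other):
            other = other if isinstance(other, Adj) else Adj(other)
            return Adj(self.value * other.value,
                       ((self, other.value), (other, self.value)))

        def backward(self):
            """Propagate adjoints from this output back to every input,
            visiting nodes in reverse topological order so each node's
            adjoint is complete before it is pushed to its parents."""
            topo, visited = [], set()

            def build(node):
                if id(node) not in visited:
                    visited.add(id(node))
                    for parent, _ in node.parents:
                        build(parent)
                    topo.append(node)

            build(self)
            self.grad = 1.0
            for node in reversed(topo):
                for parent, local in node.parents:
                    parent.grad += local * node.grad
    ```

    The "type changing" step in the ISSM work corresponds to swapping plain floating-point scalars for such an overloaded type throughout the model code.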

  20. Applied impulsive mathematical models

    CERN Document Server

    Stamova, Ivanka

    2016-01-01

    Using the theory of impulsive differential equations, this book focuses on mathematical models which reflect current research in biology, population dynamics, neural networks and economics. The authors provide the basic background from the fundamental theory and give a systematic exposition of recent results related to the qualitative analysis of impulsive mathematical models. Consisting of six chapters, the book presents many applicable techniques, making them available in a single source easily accessible to researchers interested in mathematical models and their applications. Serving as a valuable reference, this text is addressed to a wide audience of professionals, including mathematicians, applied researchers and practitioners.

  1. Filling gaps in notification data: a model-based approach applied to travel related campylobacteriosis cases in New Zealand.

    Science.gov (United States)

    Amene, E; Horn, B; Pirie, R; Lake, R; Döpfer, D

    2016-09-06

    Data containing notified cases of disease are often compromised by incomplete or partial information related to individual cases. In an effort to enhance the value of information from enteric disease notifications in New Zealand, this study explored the use of Bayesian and Multiple Imputation (MI) models to fill risk factor data gaps. As a test case, overseas travel as a risk factor for infection with campylobacteriosis has been examined. Two methods, namely Bayesian Specification (BAS) and Multiple Imputation (MI), were compared regarding predictive performance for various levels of artificially induced missingness of overseas travel status in campylobacteriosis notification data. Predictive performance of the models was assessed through the Brier Score, the Area Under the ROC Curve and the Percent Bias of regression coefficients. Finally, the best model was selected and applied to predict missing overseas travel status of campylobacteriosis notifications. While no difference was observed in the predictive performance of the BAS and MI methods at a lower rate of missingness (campylobacteriosis cases was estimated at 0.16 (0.02, 0.48). The use of BAS offers a flexible approach to data augmentation, particularly when the missing rate is very high and when the Missing At Random (MAR) assumption holds. The high rates of travel-associated cases in urban regions of New Zealand predicted by this approach are plausible given the high rate of travel in these regions, including to destinations with a higher risk of infection. The added advantage of using a Bayesian approach is that the model's prediction can be improved whenever new information becomes available.
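
    The imputation idea can be sketched with a deliberately simple Beta-Bernoulli stand-in: draw the travel probability from its posterior given the complete cases, fill in the missing statuses, and pool across imputations. This is neither the paper's BAS model nor its MI model (both of which condition on covariates); all names and the uniform prior are illustrative assumptions.

    ```python
    import random

    def impute_travel_status(observed, n_missing, n_imputations=20, rng=None):
        """Fill a missing binary risk factor (e.g. overseas travel yes/no).
        For each of M imputations: draw the travel probability p from its
        Beta posterior given the complete cases (uniform Beta(1,1) prior),
        then draw the missing statuses as Bernoulli(p).  Pooling the M
        completed-data proportions gives the MI estimate of the true rate,
        with the draw-to-draw spread reflecting imputation uncertainty."""
        rng = rng or random.Random()
        s, n = sum(observed), len(observed)
        pooled = []
        for _ in range(n_imputations):
            p = rng.betavariate(s + 1, n - s + 1)  # posterior draw for p
            imputed = [1 if rng.random() < p else 0 for _ in range(n_missing)]
            pooled.append((s + sum(imputed)) / (n + n_missing))
        return sum(pooled) / len(pooled)
    ```

    In a full analysis each completed data set would be analysed separately and the results combined with Rubin's rules, rather than pooling a single summary as done here for brevity.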

  2. A holistic approach combining factor analysis, positive matrix factorization, and chemical mass balance applied to receptor modeling.

    Science.gov (United States)

    Selvaraju, N; Pushpavanam, S; Anu, N

    2013-12-01

    Rapid urbanization and population growth have resulted in a severe deterioration of air quality in most of the major cities in India. It is therefore essential to ascertain the contributions of the various sources of air pollution so that effective control policies can be determined. The present work focuses on a holistic approach combining factor analysis (FA), positive matrix factorization (PMF), and chemical mass balance (CMB) for receptor modeling, in order to identify the sources and their contributions in air quality studies. Insight from the emission inventory was used to remove subjectivity in source identification. Each approach has its own limitations. Factor analysis can qualitatively identify a minimal set of important factors which can account for the variations in the measured data. This step uses information from the emission inventory to qualitatively match source profiles with factor loadings, thereby identifying the dominant sources through the factors. PMF gives source profiles and source contributions from the entire receptor data matrix. The output from FA is applied for rank reduction in PMF. Whenever multiple solutions exist, the emission inventory identifies source profiles uniquely, so that they have a physical relevance. CMB quantifies the contributions of the sources obtained from FA and PMF. The approach proposed here overcomes the limitations of the individual methods in a synergistic way. The adopted methodology is found to be valid both for synthetic data and for field-study data.
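
    The PMF step can be illustrated with an unweighted non-negative factorization of the receptor data matrix X into contributions G and profiles F. Real PMF additionally weights each entry by its measurement uncertainty; this sketch uses plain Lee-Seung multiplicative updates instead, and all names and sizes are illustrative.

    ```python
    import numpy as np

    def pmf_factorize(X, n_sources, n_iter=2000, seed=0):
        """Non-negative factorization X ~ G @ F as a stand-in for PMF:
        G (samples x sources) holds source contributions and
        F (sources x species) holds source profiles.  Multiplicative
        updates keep every entry non-negative, which is the physical
        constraint that distinguishes PMF from ordinary factor analysis."""
        rng = np.random.default_rng(seed)
        n, m = X.shape
        G = rng.random((n, n_sources)) + 0.1
        F = rng.random((n_sources, m)) + 0.1
        eps = 1e-12  # guards against division by zero
        for _ in range(n_iter):
            F *= (G.T @ X) / (G.T @ G @ F + eps)
            G *= (X @ F.T) / (G @ (F @ F.T) + eps)
        return G, F
    ```

    The rank-reduction role of FA described above corresponds to choosing `n_sources` here, and the CMB step then apportions mass using the recovered profiles.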

  3. Applied Bayesian modelling

    CERN Document Server

    Congdon, Peter

    2014-01-01

    This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU

  4. Multi-scale data assimilation approaches and error characterisation applied to the inverse modelling of atmospheric constituent emission fields

    International Nuclear Information System (INIS)

    Koohkan, Mohammad Reza

    2012-01-01

    Data assimilation in geophysical sciences aims at optimally estimating the state of the system or some parameters of the system's physical model. To do so, data assimilation needs three types of information: observations and background information, a physical/numerical model, and some statistical description that prescribes uncertainties to each component of the system. In my dissertation, new methodologies of data assimilation are used in atmospheric chemistry and physics: the joint use of a 4D-Var with a sub-grid statistical model to consistently account for representativeness errors, accounting for multiple scales in the BLUE estimation principle, and a better estimation of prior errors using objective estimation of hyper-parameters. These three approaches are applied specifically to inverse modelling problems focusing on the emission fields of tracers or pollutants. First, in order to estimate the emission inventories of carbon monoxide over France, in-situ stations that are impacted by representativeness errors are used. A sub-grid model is introduced and coupled with a 4D-Var to reduce the representativeness error. Indeed, the results of inverse modelling showed that the 4D-Var routine alone was not fit to handle the representativeness issues. The coupled data assimilation system led to a much better representation of the CO concentration variability, with a significant improvement of statistical indicators, and a more consistent estimation of the CO emission inventory. Second, the potential of the IMS (International Monitoring System) radionuclide network is evaluated for the inversion of an accidental source. In order to assess the performance of the global network, a multi-scale adaptive grid is optimised using a criterion based on the degrees of freedom for the signal (DFS). The results show that several specific regions remain poorly observed by the IMS network. Finally, the inversion of the surface fluxes of Volatile Organic Compounds

  5. Applying Regression Models with Mixed Frequency Data in Modeling and Prediction of Iran's Wheat Import Value (Generalized OLS-based ARDL Approach

    Directory of Open Access Journals (Sweden)

    mitra jalerajabi

    2014-10-01

    Full Text Available Due to the importance of import management, this study applies a generalized ARDL approach to estimate a MIDAS regression for wheat import value and to compare the accuracy of its forecasts with those computed by the regression with adjusted (same-frequency) data. Mixed frequency sampling models aim to extract information from high-frequency indicators so that dependent variables with lower frequencies can be modeled and forecasted. Because the relationships among the variables are identified more precisely, more accurate prediction is expected. Based on the results of both the estimated regression with adjusted-frequency data and the MIDAS model for the years 1978-2003 as a training period, wheat import value was positively related to internal production and the exchange rate, while the relative price variable had an inverse relation with Iran's wheat import value. Based on conventional statistics such as RMSE, MAD and MAPE and on statistical significance, MIDAS models using data sets of annual wheat import value, internal production, relative price and seasonal exchange rate significantly improve the prediction of annual wheat import value for the years 2004-2008 as a testing period. Hence, it is recommended that prediction approaches with mixed-frequency data be applied to improve the modeling and prediction of agricultural import values, especially for strategic import products.
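
    The core MIDAS device — collapsing several high-frequency lags (here, quarters) into one annual regressor through a parsimonious weight function — can be sketched as follows. This is a generic illustration with simulated data and an exponential Almon weighting; it is not the paper's generalized OLS-based ARDL specification, and all names and the grid of candidate parameters are assumptions.

    ```python
    import numpy as np

    def exp_almon_weights(theta1, theta2, n_lags):
        """Normalised exponential Almon lag polynomial, the standard MIDAS
        device for collapsing many high-frequency lags into one regressor
        governed by just two parameters."""
        k = np.arange(1, n_lags + 1)
        w = np.exp(theta1 * k + theta2 * k ** 2)
        return w / w.sum()

    def midas_fit(y, x_hf, n_lags, thetas):
        """Grid-search the Almon parameters: for each candidate, collapse
        the high-frequency lags (x_hf rows = low-frequency periods,
        columns = n_lags high-frequency observations) and fit OLS.
        Returns (best_theta, intercept, slope) by smallest SSE."""
        best = None
        for t1, t2 in thetas:
            z = x_hf @ exp_almon_weights(t1, t2, n_lags)
            A = np.column_stack([np.ones_like(z), z])
            coef = np.linalg.lstsq(A, y, rcond=None)[0]
            sse = float(((y - A @ coef) ** 2).sum())
            if best is None or sse < best[0]:
                best = (sse, (t1, t2), coef)
        _, theta, coef = best
        return theta, coef[0], coef[1]
    ```

    In practice the Almon parameters are estimated by nonlinear least squares rather than a grid, but the grid keeps the sketch transparent.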

  6. Facet Approach to Applied Research.

    Science.gov (United States)

    Canter, David

    1982-01-01

    The contribution of facet theory to applied psychological research is shown to be its ability to define problems and the solutions to them in terms relevant to those wishing to make practical use of research findings. Three examples illustrate the use of facet theory in applied research. (Author/CM)

  7. A model invalidation-based approach for elucidating biological signalling pathways, applied to the chemotaxis pathway in R. sphaeroides.

    Science.gov (United States)

    Roberts, Mark A J; August, Elias; Hamadeh, Abdullah; Maini, Philip K; McSharry, Patrick E; Armitage, Judith P; Papachristodoulou, Antonis

    2009-10-31

    Developing methods for understanding the connectivity of signalling pathways is a major challenge in biological research. For this purpose, mathematical models are routinely developed based on experimental observations, which also allow the prediction of the system behaviour under different experimental conditions. Often, however, the same experimental data can be represented by several competing network models. In this paper, we developed a novel mathematical model/experiment design cycle to help determine the probable network connectivity by iteratively invalidating models corresponding to competing signalling pathways. To do this, we systematically design experiments in silico that discriminate best between models of the competing signalling pathways. The method determines the inputs and parameter perturbations that will differentiate best between model outputs, corresponding to what can be measured/observed experimentally. We applied our method to the unknown connectivities in the chemotaxis pathway of the bacterium Rhodobacter sphaeroides. We first developed several models of R. sphaeroides chemotaxis corresponding to different signalling networks, all of which are biologically plausible. Parameters in these models were fitted so that they all represented wild type data equally well. The models were then compared to current mutant data and some were invalidated. To discriminate between the remaining models we used ideas from control systems theory to determine efficiently in silico an input profile that would result in the biggest difference in model outputs. However, when we applied this input to the models, we found it to be insufficient for discrimination in silico. Thus, to achieve better discrimination, we determined the best change in initial conditions (total protein concentrations) as well as the best change in the input profile. The designed experiments were then performed on live cells and the resulting data used to invalidate all but one of the

  8. A model invalidation-based approach for elucidating biological signalling pathways, applied to the chemotaxis pathway in R. sphaeroides

    Directory of Open Access Journals (Sweden)

    Hamadeh Abdullah

    2009-10-01

    Full Text Available Abstract Background Developing methods for understanding the connectivity of signalling pathways is a major challenge in biological research. For this purpose, mathematical models are routinely developed based on experimental observations, which also allow the prediction of the system behaviour under different experimental conditions. Often, however, the same experimental data can be represented by several competing network models. Results In this paper, we developed a novel mathematical model/experiment design cycle to help determine the probable network connectivity by iteratively invalidating models corresponding to competing signalling pathways. To do this, we systematically design experiments in silico that discriminate best between models of the competing signalling pathways. The method determines the inputs and parameter perturbations that will differentiate best between model outputs, corresponding to what can be measured/observed experimentally. We applied our method to the unknown connectivities in the chemotaxis pathway of the bacterium Rhodobacter sphaeroides. We first developed several models of R. sphaeroides chemotaxis corresponding to different signalling networks, all of which are biologically plausible. Parameters in these models were fitted so that they all represented wild type data equally well. The models were then compared to current mutant data and some were invalidated. To discriminate between the remaining models we used ideas from control systems theory to determine efficiently in silico an input profile that would result in the biggest difference in model outputs. However, when we applied this input to the models, we found it to be insufficient for discrimination in silico. Thus, to achieve better discrimination, we determined the best change in initial conditions (total protein concentrations) as well as the best change in the input profile.
The designed experiments were then performed on live cells and the resulting data used to invalidate all but one of the models.

  9. An Overview of Modeling Approaches Applied to Aggregation-Based Fleet Management and Integration of Plug-in Electric Vehicles

    DEFF Research Database (Denmark)

    You, Shi; Hu, Junjie; Ziras, Charalampos

    2016-01-01

    and systems are seen as useful tools to support the related studies for different stakeholders in a tangible way. This paper presents an overview of modeling approaches applied to support aggregation-based management and integration of PEVs from the perspective of fleet operators and grid operators...... management, and key systems, such as the PEV fleet, is then presented, along with a detailed description of different approaches. Finally, we discuss several considerations that need to be well understood during the modeling process in order to assist modelers and model users in the appropriate decisions...

  10. Integrated Case-Based Applied Pathology (ICAP): a diagnostic-approach model for the learning and teaching of veterinary pathology.

    Science.gov (United States)

    Krockenberger, Mark B; Bosward, Katrina L; Canfield, Paul J

    2007-01-01

    Integrated Case-Based Applied Pathology (ICAP) cases form one component of learning and understanding the role of pathology in the veterinary diagnostic process at the Faculty of Veterinary Science, University of Sydney. It is a strategy that focuses on student-centered learning in a problem-solving context in the year 3 curriculum. Learning exercises use real case material and are primarily delivered online, providing flexibility for students with differing learning needs, with online, peer, and tutor support available. The strategy relies heavily on the integration of pre-clinical and para-clinical information with the introduction of clinical material for the purposes of a logical three-level, problem-oriented approach to the diagnosis of disease. The focus is on logical diagnostic problem solving, primarily using gross pathology and histopathological material, with the inclusion of microbiological, parasitological, and clinical pathological data. The ICAP approach is linked to and congruent with the problem-oriented approach adopted in veterinary medicine and the case-based format used by one of the authors (PJC) for the teaching and learning of veterinary clinical pathology in year 4. Additionally, final-year students have the opportunity, during a diagnostic pathology rotation, to assist in the development and refinement of further ICAPs, which reinforces the importance of pathology in the veterinary diagnostic process. Evidence of the impact of the ICAP approach, based primarily on student surveys and staff peer feedback collected over five years, shows that discipline-specific learning, vertical and horizontal integration, alignment of learning outcomes and assessment, and both veterinary and generic graduate attributes were enhanced. Areas for improvement were identified in the approach, most specifically relating to support for the development of generic teamwork skills.

  11. Applying a system approach to forecast the total hepatitis C virus-infected population size: model validation using US data.

    Science.gov (United States)

    Kershenobich, David; Razavi, Homie A; Cooper, Curtis L; Alberti, Alfredo; Dusheiko, Geoffrey M; Pol, Stanislas; Zuckerman, Eli; Koike, Kazuhiko; Han, Kwang-Hyub; Wallace, Carolyn M; Zeuzem, Stefan; Negro, Francesco

    2011-07-01

    Hepatitis C virus (HCV) infection is associated with chronic progressive liver disease. Its global epidemiology is still not well ascertained, and a greater disease burden is expected in the next decade. The goal of this study was to develop a tool that can be used to predict the future prevalence of the disease in different countries and, more importantly, to understand the cause-and-effect relationship between the key assumptions and future trends. A system approach was used to build a simulation model where each population was modeled with the appropriate inflows and outflows. Sensitivity analysis was used to identify the key drivers of future prevalence. The total HCV-infected population in the US was estimated to decline 24% from 3.15 million in 2005 to 2.47 million in 2021, while disease burden will increase as the remaining infected population ages. During the same period, the mortality rate was forecast to increase from 2.1 to 3.1%. The diagnosed population was 50% of the total infections, while less than 2% of the total infections were treated. We have created a framework to evaluate the HCV-infected populations in countries around the world. This model may help assess the impact of policies to meet the challenges predicted by the evolution of HCV infection and disease. This prediction tool may help to target new public health strategies. © 2011 John Wiley & Sons A/S.
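The system approach described here, populations modeled with inflows and outflows, can be caricatured in a few lines. Only the 2005 starting pool of 3.15 million comes from the abstract; the incidence, cure, and mortality rates below are illustrative placeholders, not the study's calibrated inputs:

```python
def simulate_hcv(pop0, years, new_infections, cure_rate, mortality_rate):
    """Toy inflow/outflow model of an HCV-infected population.

    Each year the pool gains incident infections (inflow) and loses
    cured and deceased patients (outflows)."""
    pop = pop0
    trajectory = [pop]
    for _ in range(years):
        pop += new_infections        # inflow: new incident cases
        pop -= cure_rate * pop       # outflow: successfully treated
        pop -= mortality_rate * pop  # outflow: deaths
        trajectory.append(pop)
    return trajectory

# Qualitatively reproduces the forecast direction: a declining pool.
traj = simulate_hcv(pop0=3.15e6, years=16, new_infections=20_000,
                    cure_rate=0.01, mortality_rate=0.021)
```

A sensitivity analysis in this setting amounts to re-running the loop with perturbed rates and comparing the resulting trajectories.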

  12. A Markovian Approach Applied to Reliability Modeling of Bidirectional DC-DC Converters Used in PHEVs and Smart Grids

    Directory of Open Access Journals (Sweden)

    M. Khalilzadeh

    2016-12-01

    Full Text Available In this paper, a stochastic approach is proposed for reliability assessment of bidirectional DC-DC converters, including fault-tolerant ones. This type of converter can be used in a smart DC grid, feeding DC loads such as home appliances and plug-in hybrid electric vehicles (PHEVs). The reliability of bidirectional DC-DC converters is of particular importance, given the key role and expected increasing utilization of DC grids in the modern Smart Grid. Markov processes are suggested for reliability modeling and, consequently, for calculating the expected effective lifetime of bidirectional converters. A three-leg bidirectional interleaved converter using data from the Toyota Prius 2012 hybrid electric vehicle is used as a case study. In addition, the influence of environment and ambient temperature on converter lifetime is studied. The impact of modeling the reliability of the converter, and of adding reliability constraints, on the technical design procedure of the converter is also investigated. To investigate the effect of increasing the number of legs on converter lifetime, single-leg to five-leg interleaved DC-DC converters are studied from an economic perspective, and the results are extrapolated to six- and seven-leg converters. The proposed method can be generalized to an arbitrary number of legs and of input and output capacitors.
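The Markov-process idea can be illustrated with a pure-death chain over the number of working converter legs. The k-out-of-n survival rule and the per-leg failure rate below are illustrative assumptions, not parameters from the paper:

```python
def mttf_k_out_of_n(n_legs, k_required, lam):
    """Mean time to failure from a pure-death Markov chain over the
    number of working legs: with i legs working the total failure
    intensity is i*lam, so the expected sojourn time in that state is
    1/(i*lam); the system fails once fewer than k_required legs remain."""
    return sum(1.0 / (i * lam) for i in range(k_required, n_legs + 1))

LAM = 1e-5  # assumed per-leg failure rate (failures/hour), illustrative

mttf_redundant = mttf_k_out_of_n(3, 2, LAM)  # tolerates one leg failure
mttf_plain = mttf_k_out_of_n(3, 3, LAM)      # any leg failure is fatal
```

In this toy setting, tolerating one leg failure multiplies the expected lifetime by 2.5, which is the kind of trade-off the paper weighs against the cost of extra legs.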

  13. Applying a System Dynamics Approach for Modeling Groundwater Dynamics to Depletion under Different Economical and Climate Change Scenarios

    Directory of Open Access Journals (Sweden)

    Hamid Balali

    2015-09-01

    Full Text Available In recent decades, groundwater levels have decreased in most plains of Iran due to many different factors, including climate change effects (warming and lower precipitation) as well as structural policies such as more intensive groundwater harvesting and the low price of irrigation water. The objective of this study is to model groundwater dynamics to depletion under different economic policies and climate change using a system dynamics approach. For this purpose, a dynamic hydro-economic model, which simultaneously simulates the farmers' economic behavior, aquifer dynamics, the study area's climatological factors, and government economic policies related to groundwater, is developed using STELLA 10.0.6. The vulnerability of the groundwater balance is forecast under three climate scenarios (Dry, Normal, and Wet) and different scenarios of irrigation water and energy pricing policies. Results show that economic policies on irrigation water and energy pricing can significantly affect groundwater exploitation and its volume balance. Increasing the irrigation water price along with the energy price improves groundwater exploitation, to the extent that in scenarios S15 and S16 the study area's aquifer balance is positive at the end of the planning horizon, even under Dry precipitation conditions. Results also indicate that climate change can affect groundwater recharge; it can generally be expected that increases in precipitation would produce greater aquifer recharge rates.
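The stock-and-flow structure behind such a model can be sketched in a few lines: the aquifer is a stock, recharge the inflow, and price-sensitive pumping the outflow. All numbers, including the price elasticity, are illustrative assumptions rather than values from the study:

```python
def simulate_aquifer(stock0, years, recharge, base_pumping,
                     water_price, price_elasticity=-0.5, ref_price=1.0):
    """Toy stock-and-flow model in the spirit of system dynamics:
    pumping responds to the water price through a constant elasticity."""
    pumping = base_pumping * (water_price / ref_price) ** price_elasticity
    stock = stock0
    for _ in range(years):
        stock += recharge - pumping  # net annual change of the stock
    return stock

# Doubling the water price curbs pumping and improves the final balance.
low_price = simulate_aquifer(1000.0, 20, recharge=40.0,
                             base_pumping=50.0, water_price=1.0)
high_price = simulate_aquifer(1000.0, 20, recharge=40.0,
                              base_pumping=50.0, water_price=2.0)
```

A STELLA-style model adds feedbacks (e.g. pumping cost rising as the water table falls), but the stock/flow bookkeeping is the same.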

  14. Applying an Inverse Model to Estimate Ammonia Emissions at Cattle Feedlots Using Three Different Observation-Based Approaches

    Science.gov (United States)

    Shonkwiler, K. B.; Ham, J. M.; Nash, C.

    2014-12-01

    from the inverse model (FIDES) using all three datasets will be compared to emissions from the bLS model (WindTrax) using only high speed data (laser; CRDS). Results may lend further validity to the conditional sampler approach for more easily and accurately monitoring NH3 fluxes from CAFOs and other strong areal sources.

  15. Evaluating tidal marsh sustainability in the face of sea-level rise: a hybrid modeling approach applied to San Francisco Bay.

    Science.gov (United States)

    Stralberg, Diana; Brennan, Matthew; Callaway, John C; Wood, Julian K; Schile, Lisa M; Jongsomjit, Dennis; Kelly, Maggi; Parker, V Thomas; Crooks, Stephen

    2011-01-01

    Tidal marshes will be threatened by increasing rates of sea-level rise (SLR) over the next century. Managers seek guidance on whether existing and restored marshes will be resilient under a range of potential future conditions, and on prioritizing marsh restoration and conservation activities. Building upon established models, we developed a hybrid approach that involves a mechanistic treatment of marsh accretion dynamics and incorporates spatial variation at a scale relevant for conservation and restoration decision-making. We applied this model to San Francisco Bay, using best-available elevation data and estimates of sediment supply and organic matter accumulation developed for 15 Bay subregions. Accretion models were run over 100 years for 70 combinations of starting elevation, mineral sediment, organic matter, and SLR assumptions. Results were applied spatially to evaluate eight Bay-wide climate change scenarios. Model results indicated that under a high rate of SLR (1.65 m/century), short-term restoration of diked subtidal baylands to mid marsh elevations (-0.2 m MHHW) could be achieved over the next century with sediment concentrations greater than 200 mg/L. However, suspended sediment concentrations greater than 300 mg/L would be required for 100-year mid marsh sustainability (i.e., no elevation loss). Organic matter accumulation had minimal impacts on this threshold. Bay-wide projections of marsh habitat area varied substantially, depending primarily on SLR and sediment assumptions. Across all scenarios, however, the model projected a shift in the mix of intertidal habitats, with a loss of high marsh and gains in low marsh and mudflats. Results suggest a bleak prognosis for long-term natural tidal marsh sustainability under a high-SLR scenario. To minimize marsh loss, we recommend conserving adjacent uplands for marsh migration, redistributing dredged sediment to raise elevations, and concentrating restoration efforts in sediment-rich areas. 
To assist land
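The race between accretion and sea-level rise that drives these results can be caricatured in a few lines. The coefficients below are invented for illustration (the study's accretion model is mechanistic and spatially calibrated); the sketch only reproduces the qualitative point that higher suspended-sediment concentration slows elevation loss under high SLR:

```python
def marsh_elevation(elev0, years, ssc_mg_l, slr_m_per_century,
                    accretion_coeff=2.0e-5, organic_m_per_yr=0.001):
    """Caricature of the accretion-vs-SLR race: mineral accretion grows
    with suspended-sediment concentration (SSC) and with inundation
    depth below MHHW, while sea-level rise lowers relative elevation."""
    elev = elev0  # metres relative to MHHW
    slr_per_yr = slr_m_per_century / 100.0
    for _ in range(years):
        inundation = max(0.0, -elev)  # deeper marsh traps more sediment
        mineral = accretion_coeff * ssc_mg_l * inundation
        elev += mineral + organic_m_per_yr - slr_per_yr
    return elev

# Under high SLR, the sediment-rich scenario loses less elevation.
rich = marsh_elevation(-0.2, 100, ssc_mg_l=300.0, slr_m_per_century=1.65)
poor = marsh_elevation(-0.2, 100, ssc_mg_l=200.0, slr_m_per_century=1.65)
```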

  16. From basic physics to mechanisms of toxicity: the "liquid drop" approach applied to develop predictive classification models for toxicity of metal oxide nanoparticles

    Science.gov (United States)

    Sizochenko, Natalia; Rasulev, Bakhtiyor; Gajewicz, Agnieszka; Kuz'min, Victor; Puzyn, Tomasz; Leszczynski, Jerzy

    2014-10-01

    Many metal oxide nanoparticles are able to cause persistent stress to live organisms, including humans, when discharged to the environment. To understand the mechanism of metal oxide nanoparticles' toxicity and reduce the number of experiments, the development of predictive toxicity models is important. In this study, performed on a series of nanoparticles, comparative quantitative structure-activity relationship (nano-QSAR) analyses of their toxicity towards E. coli and HaCaT cells were established. A new approach for representation of nanoparticles' structure is presented. For description of the supramolecular structure of nanoparticles the "liquid drop" model was applied. It is expected that the proposed novel approach could be of general use for predictions related to nanomaterials. In addition, in our study fragmental simplex descriptors and several ligand-metal binding characteristics were calculated. The developed nano-QSAR models were validated and reliably predict the toxicity of all studied metal oxide nanoparticles. Based on the comparative analysis of contributed properties in both models, the LDM-based descriptors were found to have a similar level of contribution to toxicity in both cases, while other parameters (van der Waals interactions, electronegativity and metal-ligand binding characteristics) have unequal contribution levels. In addition, the models developed here suggest different mechanisms of nanotoxicity for these two types of cells.
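The "liquid drop" representation treats a nanoparticle as a spherical drop of bulk density, from which size descriptors such as the radius of the elementary drop (the Wigner-Seitz radius) follow. A minimal sketch using the standard density-based formula; the TiO2 molar mass and density are textbook values, not inputs taken from this paper:

```python
import math

AVOGADRO = 6.02214076e23  # 1/mol

def wigner_seitz_radius_nm(molar_mass_g_mol, density_g_cm3):
    """Radius of the elementary 'drop' (one formula unit) when the
    nanoparticle is treated as a spherical liquid drop of bulk density."""
    vol_cm3 = molar_mass_g_mol / (density_g_cm3 * AVOGADRO)  # per unit
    r_cm = (3.0 * vol_cm3 / (4.0 * math.pi)) ** (1.0 / 3.0)
    return r_cm * 1.0e7  # cm -> nm

# TiO2 with textbook molar mass (79.87 g/mol) and density (4.23 g/cm3).
r_tio2 = wigner_seitz_radius_nm(79.87, 4.23)
```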

  17. Smart Kd-values, their uncertainties and sensitivities - Applying a new approach for realistic distribution coefficients in geochemical modeling of complex systems.

    Science.gov (United States)

    Stockmann, M; Schikora, J; Becker, D-A; Flügge, J; Noseck, U; Brendler, V

    2017-11-01

    One natural retardation process to be considered in risk assessment for contaminants in the environment is sorption on mineral surfaces. Realistic geochemical modeling is of high relevance in many application areas such as groundwater protection, environmental remediation, or disposal of hazardous waste. Most often, concepts with constant distribution coefficients (Kd-values) are applied in geochemical modeling, with the advantage of being simple and computationally fast, but not reflecting changes in geochemical conditions. In this paper, we describe an innovative and efficient method in which the smart Kd-concept, a mechanistic approach mainly based on surface complexation modeling, is used (and modified for complex geochemical models) to calculate and apply realistic distribution coefficients. Using the geochemical speciation code PHREEQC, multidimensional smart Kd-matrices are computed as a function of varying (or uncertain) environmental conditions. On the one hand, sensitivity and uncertainty statements for the distribution coefficients can be derived. On the other hand, smart Kd-matrices can be used in reactive transport (or migration) codes (not shown here). This strategy has various benefits: (1) rapid computation of Kd-values for large numbers of environmental parameter combinations; (2) variable geochemistry is taken into account more realistically; (3) efficiency in computing time is ensured; and (4) uncertainty and sensitivity analysis are accessible. Results are presented exemplarily for the sorption of uranium(VI) onto a natural sandy aquifer material and are compared to results based on the conventional Kd-concept. In general, the sorption behavior of U(VI) as a function of changing geochemical conditions is described quite well. Copyright © 2017 Elsevier Ltd. All rights reserved.
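The smart-Kd idea of precomputing distribution coefficients over a grid of environmental conditions can be sketched as a lookup-table builder. Here toy_kd stands in for a real PHREEQC surface-complexation run, and the axes and functional form are purely illustrative:

```python
import itertools

def build_kd_matrix(kd_func, axes):
    """Tabulate Kd over a grid of geochemical conditions, mimicking the
    smart-Kd idea of precomputing surface-complexation results.
    kd_func stands in for a real PHREEQC speciation calculation."""
    names = list(axes)
    return {combo: kd_func(dict(zip(names, combo)))
            for combo in itertools.product(*axes.values())}

def toy_kd(cond):
    """Purely illustrative stand-in for a mechanistic Kd calculation."""
    return 10.0 * cond["pH"] / (1.0 + cond["ionic_strength"])

axes = {"pH": [6.0, 7.0, 8.0], "ionic_strength": [0.01, 0.1]}
kd_matrix = build_kd_matrix(toy_kd, axes)
```

A transport code would then interpolate in kd_matrix instead of recomputing speciation at every grid cell and time step, which is where the efficiency gain comes from.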

  18. Applied Integer Programming Modeling and Solution

    CERN Document Server

    Chen, Der-San; Dang, Yu

    2011-01-01

    An accessible treatment of the modeling and solution of integer programming problems, featuring modern applications and software In order to fully comprehend the algorithms associated with integer programming, it is important to understand not only how algorithms work, but also why they work. Applied Integer Programming features a unique emphasis on this point, focusing on problem modeling and solution using commercial software. Taking an application-oriented approach, this book addresses the art and science of mathematical modeling related to the mixed integer programming (MIP) framework and
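The kind of model the book addresses can be made concrete with a tiny integer program. A brute-force enumeration (precisely what branch-and-bound in commercial solvers avoids) suffices at this scale; the coefficients are arbitrary:

```python
from itertools import product

def solve_small_mip(profits, weights, capacity, upper_bound):
    """Brute-force solution of a tiny integer program:
        maximize   sum(p_i * x_i)
        subject to sum(w_i * x_i) <= capacity, x_i in {0..upper_bound}.
    Commercial MIP solvers replace this enumeration with branch-and-bound,
    but the model being solved is the same."""
    best_value, best_x = float("-inf"), None
    for x in product(range(upper_bound + 1), repeat=len(profits)):
        if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
            value = sum(p * xi for p, xi in zip(profits, x))
            if value > best_value:
                best_value, best_x = value, x
    return best_value, best_x

best_value, best_x = solve_small_mip(profits=[5, 4], weights=[3, 2],
                                     capacity=9, upper_bound=3)
```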

  19. Evaluating Tidal Marsh Sustainability in the Face of Sea-Level Rise: A Hybrid Modeling Approach Applied to San Francisco Bay

    Science.gov (United States)

    Stralberg, Diana; Brennan, Matthew; Callaway, John C.; Wood, Julian K.; Schile, Lisa M.; Jongsomjit, Dennis; Kelly, Maggi; Parker, V. Thomas; Crooks, Stephen

    2011-01-01

    Background Tidal marshes will be threatened by increasing rates of sea-level rise (SLR) over the next century. Managers seek guidance on whether existing and restored marshes will be resilient under a range of potential future conditions, and on prioritizing marsh restoration and conservation activities. Methodology Building upon established models, we developed a hybrid approach that involves a mechanistic treatment of marsh accretion dynamics and incorporates spatial variation at a scale relevant for conservation and restoration decision-making. We applied this model to San Francisco Bay, using best-available elevation data and estimates of sediment supply and organic matter accumulation developed for 15 Bay subregions. Accretion models were run over 100 years for 70 combinations of starting elevation, mineral sediment, organic matter, and SLR assumptions. Results were applied spatially to evaluate eight Bay-wide climate change scenarios. Principal Findings Model results indicated that under a high rate of SLR (1.65 m/century), short-term restoration of diked subtidal baylands to mid marsh elevations (−0.2 m MHHW) could be achieved over the next century with sediment concentrations greater than 200 mg/L. However, suspended sediment concentrations greater than 300 mg/L would be required for 100-year mid marsh sustainability (i.e., no elevation loss). Organic matter accumulation had minimal impacts on this threshold. Bay-wide projections of marsh habitat area varied substantially, depending primarily on SLR and sediment assumptions. Across all scenarios, however, the model projected a shift in the mix of intertidal habitats, with a loss of high marsh and gains in low marsh and mudflats. Conclusions/Significance Results suggest a bleak prognosis for long-term natural tidal marsh sustainability under a high-SLR scenario. To minimize marsh loss, we recommend conserving adjacent uplands for marsh migration, redistributing dredged sediment to raise elevations, and

  20. Evaluating tidal marsh sustainability in the face of sea-level rise: a hybrid modeling approach applied to San Francisco Bay.

    Directory of Open Access Journals (Sweden)

    Diana Stralberg

    Full Text Available Tidal marshes will be threatened by increasing rates of sea-level rise (SLR) over the next century. Managers seek guidance on whether existing and restored marshes will be resilient under a range of potential future conditions, and on prioritizing marsh restoration and conservation activities. Building upon established models, we developed a hybrid approach that involves a mechanistic treatment of marsh accretion dynamics and incorporates spatial variation at a scale relevant for conservation and restoration decision-making. We applied this model to San Francisco Bay, using best-available elevation data and estimates of sediment supply and organic matter accumulation developed for 15 Bay subregions. Accretion models were run over 100 years for 70 combinations of starting elevation, mineral sediment, organic matter, and SLR assumptions. Results were applied spatially to evaluate eight Bay-wide climate change scenarios. Model results indicated that under a high rate of SLR (1.65 m/century), short-term restoration of diked subtidal baylands to mid marsh elevations (-0.2 m MHHW) could be achieved over the next century with sediment concentrations greater than 200 mg/L. However, suspended sediment concentrations greater than 300 mg/L would be required for 100-year mid marsh sustainability (i.e., no elevation loss). Organic matter accumulation had minimal impacts on this threshold. Bay-wide projections of marsh habitat area varied substantially, depending primarily on SLR and sediment assumptions. Across all scenarios, however, the model projected a shift in the mix of intertidal habitats, with a loss of high marsh and gains in low marsh and mudflats. Results suggest a bleak prognosis for long-term natural tidal marsh sustainability under a high-SLR scenario. To minimize marsh loss, we recommend conserving adjacent uplands for marsh migration, redistributing dredged sediment to raise elevations, and concentrating restoration efforts in sediment-rich areas.

  1. Evaluation of uncertainties originating from the different modeling approaches applied to analyze regional groundwater flow in the Tono area of Japan

    Science.gov (United States)

    Ijiri, Yuji; Saegusa, Hiromitsu; Sawada, Atsushi; Ono, Makoto; Watanabe, Kunio; Karasaki, Kenzi; Doughty, Christine; Shimo, Michito; Fumimura, Kenichi

    2009-01-01

    Qualitative evaluation of the effects of uncertainties originating from scenario development, modeling approaches, and parameter values is an important subject in the area of safety assessment for high-level nuclear waste disposal sites. In this study, regional-scale groundwater flow analyses for the Tono area, Japan, were conducted using three continuous models designed to handle heterogeneous porous media. We evaluated the simulation results to quantitatively analyze uncertainties originating from modeling approaches. We found that porous media heterogeneity is the main factor causing uncertainties. We also found that uncertainties originating from modeling approaches greatly depend on the types of hydrological structures and the heterogeneity of hydraulic conductivity values in the domain assigned by modelers. Uncertainties originating from modeling approaches decrease as the amount of labor and time spent increases and as the iteration between investigation and analysis increases.

  2. IRECCSEM: Evaluating Clare Basin potential for onshore carbon sequestration using magnetotelluric data (Preliminary results). New approaches applied for processing, modeling and interpretation

    Science.gov (United States)

    Campanya i Llovet, J.; Ogaya, X.; Jones, A. G.; Rath, V.

    2014-12-01

    The IRECCSEM project (www.ireccsem.ie) is a Science Foundation Ireland Investigator Project that is funded to evaluate Ireland's potential for onshore carbon sequestration in saline aquifers by integrating new electromagnetic data with existing geophysical and geological data. The main goals of the project are to determine porosity-permeability values of the potential reservoir formation as well as to evaluate the integrity of the seal formation. During the summer of 2014 a magnetotelluric (MT) survey was carried out at the Clare basin (Ireland). A total of 140 sites were acquired, including audiomagnetotelluric (AMT), broadband magnetotelluric (BBMT) and long period magnetotelluric (LMT) data. The nominal spacing between sites is 0.6 km for AMT sites, 1.2 km for BBMT sites and 8 km for LMT sites. To evaluate the carbon sequestration potential of the Clare basin, three methodological advances in electromagnetic techniques were applied. First, processing of the MT data was improved following the recently published ELICIT methodology. Second, during the inversion process, the electrical resistivity distribution of the subsurface was constrained by combining three different tensor relationships: impedances (Z), induction arrows (TIP) and multi-site horizontal magnetic transfer functions (HMT). Results from synthetic models were used to evaluate the sensitivity and properties of each tensor relationship. Finally, a computer code was developed that employs a stabilized least-squares approach to estimate the cementation exponent in the generalized Archie law formulated by Glover (2010). This allows relating MT-derived electrical resistivity models to porosity distributions. The final aim of this procedure is to generalize the porosity-permeability values measured in the boreholes to regional scales. This methodology will contribute to the evaluation of possible sequestration targets in the study area.
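The last step, relating resistivity to porosity, can be illustrated with the classic one-exponent Archie relation rho_bulk = rho_fluid * phi**(-m). The project itself fits the cementation exponent within Glover's (2010) generalized law, which this sketch does not reproduce; the numbers are illustrative:

```python
def porosity_from_resistivity(rho_bulk, rho_fluid, m):
    """Invert the classic Archie relation rho_bulk = rho_fluid * phi**(-m)
    for porosity phi, given a cementation exponent m."""
    return (rho_fluid / rho_bulk) ** (1.0 / m)

# A 50 ohm.m formation saturated with 0.5 ohm.m brine, with m = 2,
# implies a porosity of about 10%.
phi = porosity_from_resistivity(50.0, 0.5, 2.0)
```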

  3. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    Science.gov (United States)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely, Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and pharmaceutical dosage form by processing the UV spectral data. A 3-factor 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
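The spirit of resolving a ternary mixture from overlapping UV spectra can be shown with a much simpler classical-least-squares baseline (Beer-Lambert mixing plus a linear solve), not the PLS/ANN models of the paper; the absorptivity matrix below is invented for illustration:

```python
def solve_linear(A, b):
    """Plain Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Beer-Lambert mixing: rows = wavelengths, columns = AML, VAL, HCT.
E = [[0.9, 0.2, 0.1],
     [0.3, 0.8, 0.2],
     [0.1, 0.3, 0.7]]
true_c = [1.0, 2.0, 0.5]
absorbances = [sum(E[i][j] * true_c[j] for j in range(3)) for i in range(3)]
concentrations = solve_linear(E, absorbances)  # recovers true_c
```

PLS earns its keep when there are many more wavelengths than components and the absorptivities are unknown; this baseline only works because E is known and square.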

  4. Comparative review of three cost-effectiveness models for rotavirus vaccines in national immunization programs; a generic approach applied to various regions in the world

    Directory of Open Access Journals (Sweden)

    Tu Hong-Anh

    2011-07-01

    Full Text Available Abstract Background This study aims to critically review available cost-effectiveness models for rotavirus vaccination, compare their designs using a standardized approach and compare similarities and differences in cost-effectiveness outcomes using a uniform set of input parameters. Methods We identified various models used to estimate the cost-effectiveness of rotavirus vaccination. From these, results using a standardized dataset for four regions in the world could be obtained for three specific applications. Results Despite differences in the approaches and individual constituting elements, including costs, QALYs (Quality-Adjusted Life Years) and deaths, cost-effectiveness results of the models were quite similar. Differences between the models on the individual components of cost-effectiveness could be related to some specific features of the respective models. Sensitivity analysis revealed that cost-effectiveness of rotavirus vaccination is highly sensitive to vaccine prices, rotavirus-associated mortality and discount rates, in particular that for QALYs. Conclusions The comparative approach followed here is helpful in understanding the various models selected and will thus benefit (low-income) countries in designing their own cost-effectiveness analyses using new or adapted existing models. Potential users of the models in low- and middle-income countries need to consider results from existing studies and reviews. There will be a need for contextualization, including the use of country-specific data inputs. However, given that the underlying biological and epidemiological mechanisms do not change between countries, users are likely to be able to adapt existing model designs rather than developing completely new approaches. Also, the communication established between the individual researchers involved in the three models is helpful in the further development of these individual models.
Therefore, we recommend that this kind of comparative study
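The headline quantity these models produce, cost per QALY gained, reduces to an incremental cost-effectiveness ratio over discounted QALY streams. All inputs below are illustrative, not taken from any of the three reviewed models:

```python
def discounted_qalys(qalys_per_year, years, rate):
    """Present value of a constant annual QALY stream (discrete discounting)."""
    return sum(qalys_per_year / (1.0 + rate) ** t for t in range(1, years + 1))

def icer(cost_new, cost_old, qalys_new, qalys_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Hypothetical cohort member: vaccination costs more but yields more QALYs.
q_vacc = discounted_qalys(0.95, years=10, rate=0.03)
q_none = discounted_qalys(0.90, years=10, rate=0.03)
cost_per_qaly = icer(cost_new=120.0, cost_old=40.0,
                     qalys_new=q_vacc, qalys_old=q_none)
```

The sensitivity to the QALY discount rate noted in the abstract is easy to see here: re-running with rate=0.0 enlarges the QALY gain in the denominator and lowers the ratio.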

  5. Commercial Consolidation Model Applied to Transport Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Guilherme de Aragão, J.J.; Santos Fontes Pereira, L. dos; Yamashita, Y.

    2016-07-01

    Since the 1990s, transport concessions, including public-private partnerships (PPPs), have been increasingly adopted by governments as an alternative for financing and operations in public investments, especially in transport infrastructure. The advantage pointed out by proponents of these models lies in merging the expertise and capital of the private sector with the public interest. Several arrangements are possible and have been employed in different cases. After the expiration of the first PPP contracts in transportation, many authors have analyzed the success and failure factors of partnerships. The occurrence of failures in some stages of the process can greatly encumber the public administration, incurring losses to the fiscal responsibility of the competent bodies. This article aims to propose a new commercial consolidation model applied to transport infrastructure to ensure fiscal sustainability and overcome the weaknesses of current models. Initially, a systematic review of the literature covering studies on transport concessions between 1990 and 2015 is offered, where the different approaches of various countries are compared and the critical success factors indicated in the studies are identified. In the subsequent part of the paper, an approach for the commercial consolidation of infrastructure concessions is presented, in which the concessionaire is paid following a finalistic performance model that includes the overall fiscal balance of regional growth. Finally, the paper analyses the usefulness of the model in coping with the critical success factors explained before. (Author)

  6. Uncertainty in a chemistry-transport model due to physical parameterizations and numerical approximations: An ensemble approach applied to ozone modeling

    OpenAIRE

    Mallet , Vivien; Sportisse , Bruno

    2006-01-01

    International audience; This paper estimates the uncertainty in the outputs of a chemistry-transport model due to physical parameterizations and numerical approximations. An ensemble of 20 simulations is generated from a reference simulation in which one key parameterization (chemical mechanism, dry deposition parameterization, turbulent closure, etc.) or one numerical approximation (grid size, splitting method, etc.) is changed at a time. Intercomparisons of the simulations and comparisons w...
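    The ensemble-spread idea in this record can be sketched numerically: given outputs from perturbed-physics simulations, the per-location standard deviation across members serves as an estimate of model uncertainty. The member names and ozone values below are hypothetical, not taken from the paper.

```python
import statistics

# Hypothetical peak-ozone fields (ppb) at three monitoring sites: a reference
# run plus perturbed runs, each changing one parameterization or approximation.
ensemble = {
    "reference":      [92.0, 81.0, 104.0],
    "chem_mechanism": [95.0, 79.0, 101.0],
    "dry_deposition": [90.0, 83.0, 108.0],
    "coarser_grid":   [97.0, 85.0,  99.0],
}

def ensemble_stats(runs):
    """Per-site ensemble mean and standard deviation (spread ~ uncertainty)."""
    sites = zip(*runs.values())  # regroup the values by site
    return [(statistics.mean(v), statistics.stdev(v)) for v in sites]

for i, (mean, spread) in enumerate(ensemble_stats(ensemble)):
    print(f"site {i}: mean = {mean:.1f} ppb, spread = {spread:.1f} ppb")
```

    A large spread at a site indicates that the simulated ozone there is sensitive to the choice of parameterization or numerical approximation.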

  7. Comparative review of three cost-effectiveness models for rotavirus vaccines in national immunization programs; a generic approach applied to various regions in the world

    NARCIS (Netherlands)

    Postma, Maarten J.; Jit, Mark; Rozenbaum, Mark H.; Standaert, Baudouin; Tu, Hong-Anh; Hutubessy, Raymond C. W.

    2011-01-01

    Background: This study aims to critically review available cost-effectiveness models for rotavirus vaccination, compare their designs using a standardized approach and compare similarities and differences in cost-effectiveness outcomes using a uniform set of input parameters. Methods: We identified

  8. Modeling prosody: Different approaches

    Science.gov (United States)

    Carmichael, Lesley M.

    2002-11-01

    Prosody pervades all aspects of a speech signal, both in terms of raw acoustic outcomes and linguistically meaningful units, from the phoneme to the discourse unit. It is carried in the suprasegmental features of fundamental frequency, loudness, and duration. Several models have been developed to account for the way prosody organizes speech, and they vary widely in terms of their theoretical assumptions, organizational primitives, actual procedures of application to speech, and intended use (e.g., to generate speech from text vs. to model the prosodic phonology of a language). In many cases, these models overtly contradict one another with regard to their fundamental premises or their identification of the perceptible objects of linguistic prosody. These competing models are directly compared. Each model is applied to the same speech samples. This parallel analysis allows for a critical inspection of each model and its efficacy in assessing the suprasegmental behavior of the speech. The analyses illustrate how different approaches are better equipped to account for different aspects of prosody. Viewing the models and their successes from an objective perspective allows for creative possibilities in terms of combining strengths from models which might otherwise be considered fundamentally incompatible.

  9. Applying a gaming approach to IP strategy.

    Science.gov (United States)

    Gasnier, Arnaud; Vandamme, Luc

    2010-02-01

    Adopting an appropriate IP strategy is an important but complex area, particularly in the pharmaceutical and biotechnology sectors, in which aspects such as regulatory submissions, high competitive activity, and public health and safety information requirements limit the amount of information that can be protected effectively through secrecy. As a result, and considering the existing time limits for patent protection, decisions on how to approach IP in these sectors must be made with knowledge of the options and consequences of IP positioning. Because of the specialized nature of IP, it is necessary to impart knowledge regarding the options and impact of IP to decision-makers, whether at the level of inventors, marketers or strategic business managers. This feature review provides some insight on IP strategy, with a focus on the use of a new 'gaming' approach for transferring the skills and understanding needed to make informed IP-related decisions; the game Patentopolis is discussed as an example of such an approach. Patentopolis involves interactive activities with IP-related business decisions, including the exploitation and enforcement of IP rights, and can be used to gain knowledge on the impact of adopting different IP strategies.

  10. Applying the Sport Education Model to Tennis

    Science.gov (United States)

    Ayvazo, Shiri

    2009-01-01

    The physical education field abounds with theoretically sound curricular approaches such as fitness education, the skill theme approach, the tactical approach, and sport education. In an era that emphasizes authentic sport experiences, the Sport Education Model includes unique features that set it apart from other curricular models and can be a valuable…

  11. A novel approach for modeling malaria incidence using complex categorical household data: The minimum message length (MML) method applied to Indonesian data

    Directory of Open Access Journals (Sweden)

    Gerhard Visser

    2012-09-01

    We investigated the application of a Minimum Message Length (MML) modeling approach to identify the simplest model that would explain two target malaria incidence variables, incidence in the short term and on the longer-term average, in two areas in Indonesia, based on a range of ecological variables including environmental and socio-economic ones. The approach is suitable for dealing with a variety of problems, such as complexity and missing values in the data. It can detect weak relations, is resistant to overfitting and can show the way in which many variables, working together, contribute to explaining malaria incidence. This last point is a major strength of the method, as it allows many variables to be analysed. Data were obtained at household level by questionnaire for villages in West Timor and Central Java. Data were collected on 26 variables in nine categories: stratum (a village-level variable based on the API/AMI categories), ecology, occupation, preventative measures taken, health care facilities, the immediate environment, household characteristics, socio-economic status and perception of malaria cause. Several models were used and the simplest (best) model, that is, the one with the minimum message length, was selected for each area. The results showed that consistent predictors of malaria included combinations of ecology (coastal), preventative measures (clean backyard) and environment (mosquito breeding place, garden and rice cultivation). The models also showed that most of the other variables were not good predictors, and this is discussed in the paper. We conclude that the method has potential for identifying simple predictors of malaria and that it could be used to focus malaria management on combinations of variables rather than relying on single ones that may not be consistently reliable.
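    The two-part message-length idea can be illustrated with a toy version of the comparison described here: is incidence better explained by one overall rate or by separate rates per ecology stratum? The household counts and the parameter-cost term (0.5·log2(n) bits per fitted parameter, a common MML/MDL approximation) are illustrative, not taken from the study.

```python
import math

def data_bits(cases, households):
    """Code length (bits) of binary incidence data under a Bernoulli rate."""
    p = cases / households
    non_cases = households - cases
    bits = 0.0
    if cases:
        bits -= cases * math.log2(p)
    if non_cases:
        bits -= non_cases * math.log2(1 - p)
    return bits

def message_length(groups):
    """Two-part message: ~0.5*log2(n) bits per fitted rate, plus data bits."""
    return sum(0.5 * math.log2(h) + data_bits(c, h) for c, h in groups)

# Hypothetical counts: (malaria cases, households surveyed) per stratum.
coastal, inland = (18, 30), (3, 30)

single = message_length([(coastal[0] + inland[0], 60)])  # one shared rate
per_group = message_length([coastal, inland])            # rate per stratum

best = "per-group" if per_group < single else "single-rate"
print(f"single: {single:.1f} bits, per-group: {per_group:.1f} bits -> {best}")
```

    The model with the shorter total message wins; the extra parameter of the per-group model is only accepted because it shortens the encoding of the data by more than its own cost.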

  12. An Integrated Numerical Modelling-Discrete Fracture Network Approach Applied to the Characterisation of Rock Mass Strength of Naturally Fractured Pillars

    Science.gov (United States)

    Elmo, Davide; Stead, Doug

    2010-02-01

    Naturally fractured mine pillars provide an excellent example of the importance of accurately determining rock mass strength. Failure in slender pillars is predominantly controlled by naturally occurring discontinuities, their influence diminishing with increasing pillar width, with wider pillars failing through a combination of brittle and shearing processes. To accurately simulate this behaviour by numerical modelling, the current analysis incorporates a more realistic representation of the mechanical behaviour of discrete fracture systems. This involves realistic simulation and representation of fracture networks, either as individual entities or as a collective system of fracture sets, or a combination of both. By using an integrated finite element/discrete element-discrete fracture network approach it is possible to study the failure of rock masses in tension and compression, both along pre-existing fractures and through intact rock bridges, and incorporating complex kinematic mechanisms. The proposed modelling approach fully captures the anisotropic and inhomogeneous effects of natural jointing and is considered to be more realistic than methods relying solely on continuum or discontinuum representation. The paper concludes with a discussion on the development of synthetic rock mass properties, with the intention of providing a more robust link between rock mass strength and rock mass classification systems.

  13. Large-scale characterization of drought pattern: a continent-wide modelling approach applied to the Australian wheatbelt--spatial and temporal trends.

    Science.gov (United States)

    Chenu, Karine; Deihimfard, Reza; Chapman, Scott C

    2013-05-01

    Plant response to drought is complex, so that traits adapted to a specific drought type can confer disadvantage in another drought type. Understanding which type(s) of drought to target is of prime importance for crop improvement. Modelling was used to quantify seasonal drought patterns for a check variety across the Australian wheatbelt, using 123 yr of weather data for representative locations and managements. Two other genotypes were used to simulate the impact of maturity on drought pattern. Four major environment types summarized the variability in drought pattern over time and space. Severe stress beginning before flowering was common (44% of occurrences), with (24%) or without (20%) relief during grain filling. High variability occurred from year to year, differing with geographical region. With few exceptions, all four environment types occurred in most seasons, for each location, management system and genotype. Applications of such environment characterization are proposed to assist breeding and research to focus on germplasm, traits and genes of interest for target environments. The method was applied at a continental scale to highly variable environments and could be extended to other crops, to other drought-prone regions around the world, and to quantify potential changes in drought patterns under future climates. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
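    The environment-type classification summarized in this record can be mimicked with a simple rule-based sketch: a simulated water-stress trace for one season is binned by whether severe stress starts before flowering and whether it is relieved during grain filling. The stress index, thresholds and type labels below are illustrative only, not the paper's definitions.

```python
def classify_season(stress, flowering_idx, severe=0.6, relieved=0.3):
    """Bin one season's water-stress trace (0 = none, 1 = extreme) into an
    environment type, keyed on stress before flowering vs. grain filling.
    Thresholds are hypothetical, for illustration."""
    pre = max(stress[:flowering_idx])                    # worst pre-flowering stress
    fill = sum(stress[flowering_idx:]) / len(stress[flowering_idx:])
    if pre >= severe:
        return ("ET3: early severe, relieved" if fill < relieved
                else "ET4: early severe, unrelieved")
    return "ET1: low stress" if fill < relieved else "ET2: late stress"

# Two hypothetical seasons (weekly stress index, flowering at week 5).
wet_finish = [0.2, 0.4, 0.7, 0.8, 0.7, 0.2, 0.1, 0.1]
terminal   = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 0.9]

print(classify_season(wet_finish, 5))  # early severe stress, relieved
print(classify_season(terminal, 5))    # stress building late in the season
```

    Run over many seasons, locations and sowing dates, the frequency of each type characterizes the target population of environments for breeding.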

  14. SCOPE model applied for rapeseed in Spain

    NARCIS (Netherlands)

    Pardo, Nuria; Sánchez, M. Luisa; Su, Zhongbo; Pérez, Isidro A.; García, M. Angeles

    2018-01-01

    The integrated SCOPE (Soil, Canopy Observation, Photochemistry and Energy balance) model, coupling radiative transfer theory and biochemistry, was applied to a biodiesel crop grown in a Spanish agricultural area. Energy fluxes and CO2 exchange were simulated with this model for the period spanning

  15. Applying the WEAP Model to Water Resource

    DEFF Research Database (Denmark)

    Gao, Jingjing; Christensen, Per; Li, Wei

    Water resources assessment is a tool to provide decision makers with an appropriate basis to make informed judgments regarding the objectives and targets to be addressed during the Strategic Environmental Assessment (SEA) process. The study shows how water resources assessment can be applied in SEA to assess the effects on water resources, using a case study on a Coal Industry Development Plan in an arid region in North Western China. In the case, the WEAP model (Water Evaluation And Planning System) was used to simulate various scenarios using a diversity of technological instruments like irrigation efficiency and treatment and reuse of water. The WEAP model was applied to the Ordos catchment, where it was used for the first time in China. The changes in water resource utilization in the Ordos basin were assessed with the model. It was found that the WEAP model is a useful tool for water resource assessment.

  16. Applied probability models with optimization applications

    CERN Document Server

    Ross, Sheldon M

    1992-01-01

    Concise advanced-level introduction to stochastic processes that frequently arise in applied probability. Largely self-contained text covers the Poisson process, renewal theory, Markov chains, inventory theory, Brownian motion and continuous-time optimization models, much more. Problems and references at chapter ends. "Excellent introduction." - Journal of the American Statistical Association. Bibliography. 1970 edition.

  17. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    The geomagnetic field varies on a variety of time- and length scales, which are only rudimentarily considered in most present field models. The part of the observed field that cannot be explained by a given model, the model residuals, is often considered as an estimate of the data uncertainty (which consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based on 5 years of Ørsted and CHAMP data, and includes secular variation and acceleration, as well as low-degree external (magnetospheric) and induced fields. The analysis is done in order to find the statistical behaviour of the space-time structure of the residuals, as a proxy for the data covariances.

  18. Applied research in uncertainty modeling and analysis

    CERN Document Server

    Ayyub, Bilal

    2005-01-01

    Uncertainty has been a concern to engineers, managers, and scientists for many years. For a long time uncertainty has been considered synonymous with random, stochastic, statistic, or probabilistic. Since the early sixties views on uncertainty have become more heterogeneous. In the past forty years numerous tools that model uncertainty, above and beyond statistics, have been proposed by several engineers and scientists. The tool/method to model uncertainty in a specific context should really be chosen by considering the features of the phenomenon under consideration, not independent of what is known about the system and what causes uncertainty. In this fascinating overview of the field, the authors provide broad coverage of uncertainty analysis/modeling and its application. Applied Research in Uncertainty Modeling and Analysis presents the perspectives of various researchers and practitioners on uncertainty analysis and modeling outside their own fields and domain expertise. Rather than focusing explicitly on...

  19. Applied Mathematics, Modelling and Computational Science

    CERN Document Server

    Kotsireas, Ilias; Makarov, Roman; Melnik, Roderick; Shodiev, Hasan

    2015-01-01

    The Applied Mathematics, Modelling, and Computational Science (AMMCS) conference aims to promote interdisciplinary research and collaboration. The contributions in this volume cover the latest research in mathematical and computational sciences, modeling, and simulation as well as their applications in natural and social sciences, engineering and technology, industry, and finance. The 2013 conference, the second in a series of AMMCS meetings, was held August 26–30 and organized in cooperation with AIMS and SIAM, with support from the Fields Institute in Toronto, and Wilfrid Laurier University. There were many young scientists at AMMCS-2013, both as presenters and as organizers. This proceedings contains refereed papers contributed by the participants of the AMMCS-2013 after the conference. This volume is suitable for researchers and graduate students, mathematicians and engineers, industrialists, and anyone who would like to delve into the interdisciplinary research of applied and computational mathematics ...

  20. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment.

  1. Applying business management models in health care.

    Science.gov (United States)

    Trisolini, Michael G

    2002-01-01

    Most health care management training programmes and textbooks focus on only one or two models or conceptual frameworks, but the increasing complexity of health care organizations and their environments worldwide means that a broader perspective is needed. This paper reviews five management models developed for business organizations and analyses issues related to their application in health care. Three older, more 'traditional' models are first presented. These include the functional areas model, the tasks model and the roles model. Each is shown to provide a valuable perspective, but to have limitations if used in isolation. Two newer, more 'innovative' models are next discussed. These include total quality management (TQM) and reengineering. They have shown potential for enabling dramatic improvements in quality and cost, but have also been found to be more difficult to implement. A series of 'lessons learned' are presented to illustrate key success factors for applying them in health care organizations. In sum, each of the five models is shown to provide a useful perspective for health care management. Health care managers should gain experience and training with a broader set of business management models.

  2. Focus Groups: A Practical and Applied Research Approach for Counselors

    Science.gov (United States)

    Kress, Victoria E.; Shoffner, Marie F.

    2007-01-01

    Focus groups are becoming a popular research approach that counselors can use as an efficient, practical, and applied method of gathering information to better serve clients. In this article, the authors describe focus groups and their potential usefulness to professional counselors and researchers. Practical implications related to the use of…

  3. Applying incentive sensitization models to behavioral addiction.

    Science.gov (United States)

    Rømer Thomsen, Kristine; Fjorback, Lone O; Møller, Arne; Lou, Hans C

    2014-09-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Applied data analysis and modeling for energy engineers and scientists

    CERN Document Server

    Reddy, T Agami

    2011-01-01

    "Applied Data Analysis and Modeling for Energy Engineers and Scientists" discusses mathematical models, data analysis, and decision analysis in modeling. The approach taken in this volume focuses on the modeling and analysis of thermal systems in an engineering environment, while also covering a number of other critical areas. Other material covered includes the tools that researchers and engineering professionals will need in order to explore different analysis methods, use critical assessment skills and reach sound engineering conclusions. The book also covers process and system design and

  5. Applying Bayesian Approach to Combinatorial Problem in Chemistry.

    Science.gov (United States)

    Okamoto, Yasuharu

    2017-05-04

    A Bayesian optimization procedure, in combination with density functional theory calculations, was applied to a combinatorial problem in chemistry. As a specific example, we examined the stable structures of lithium-graphite intercalation compounds (Li-GICs). We found that this approach efficiently identified the stable structure of stage-I and -II Li-GICs by calculating 4-6% of the full search space. We expect that this approach will be helpful in solving problems in chemistry that can be regarded as a kind of combinatorial problem.
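    A minimal sketch of such a procedure in plain Python: a tiny Gaussian-process surrogate with an RBF kernel plus an upper-confidence-bound acquisition rule decides which candidate to evaluate next, so that only part of the search space needs the expensive evaluation. Everything below is illustrative — the mock stability objective, kernel length-scale and candidate grid stand in for the DFT energies and Li-GIC configurations of the paper.

```python
import math

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel (length-scale ls is an assumption)."""
    return math.exp(-((a - b) ** 2) / (2 * ls ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, x, noise=1e-6):
    """GP posterior mean and variance at x, given observations (xs, ys)."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    k = [rbf(a, x) for a in xs]
    mean = sum(ki * ai for ki, ai in zip(k, solve(K, ys)))
    var = rbf(x, x) - sum(ki * vi for ki, vi in zip(k, solve(K, k)))
    return mean, max(var, 0.0)

def bayes_opt(f, candidates, n_init=2, n_iter=6, kappa=2.0):
    """Evaluate the candidate maximizing the upper confidence bound."""
    xs = list(candidates[:n_init])
    ys = [f(x) for x in xs]
    for _ in range(n_iter):
        def ucb(c):
            m, v = gp_posterior(xs, ys, c)
            return m + kappa * math.sqrt(v)
        pick = max((c for c in candidates if c not in xs), key=ucb)
        xs.append(pick)
        ys.append(f(pick))
    i = max(range(len(ys)), key=lambda j: ys[j])
    return xs[i], ys[i]

def stability(x):
    """Mock stability score with a peak at x = 0.3 (hypothetical objective)."""
    return -(x - 0.3) ** 2

candidates = [i / 10 for i in range(11)]
best_x, best_y = bayes_opt(stability, candidates)
print(f"best candidate: {best_x} (score {best_y:.3f})")
```

    Only 8 of the 11 candidates are evaluated here; on the much larger combinatorial spaces of the paper, the same exploit-explore trade-off is what brings the evaluated fraction down to a few percent.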

  6. Forecast model applied to quality control with autocorrelational data

    Directory of Open Access Journals (Sweden)

    Adriano Mendonça Souza

    2013-11-01

    This research approaches the prediction models applied to industrial processes, in order to check the stability of the process by means of control charts applied to the residuals from linear modeling. The data used for analysis refer to the moisture content, permeability and green compression resistance (RCV), belonging to the green sand molding casting process in Company A, which operates in casting and machining, for which a dynamic multivariate regression model was fitted. As the observations were autocorrelated, it was necessary to seek a mathematical model that produces independent and identically distributed residuals. The models found make it possible to understand the variables' behavior, assisting in the production of forecasts and in the monitoring of the referred process. Thus, it can be stated that the moisture content is very unstable compared to the other variables.
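    The recipe in this record — fit a time-series model, then chart the residuals — can be sketched as follows: an AR(1) model is fitted by least squares to an autocorrelated series (synthetic here, standing in for the moisture readings), and residuals outside the usual three-sigma Shewhart limits are flagged. The series and the location of the injected process upset are made up.

```python
import math

def ar1_fit(x):
    """Least-squares fit of x[t] = c + phi * x[t-1] + e[t]."""
    prev, curr = x[:-1], x[1:]
    n = len(prev)
    mp, mc = sum(prev) / n, sum(curr) / n
    phi = (sum((a - mp) * (b - mc) for a, b in zip(prev, curr))
           / sum((a - mp) ** 2 for a in prev))
    return mc - phi * mp, phi              # intercept, slope

def residual_chart(x):
    """Flag times whose AR(1) residual falls outside mean +/- 3 sigma."""
    c, phi = ar1_fit(x)
    res = [x[t] - (c + phi * x[t - 1]) for t in range(1, len(x))]
    mu = sum(res) / len(res)
    sd = math.sqrt(sum((r - mu) ** 2 for r in res) / (len(res) - 1))
    return [t + 1 for t, r in enumerate(res) if abs(r - mu) > 3 * sd]

# Synthetic autocorrelated "moisture" series with a process upset at t = 20.
x = [10.0]
for t in range(1, 40):
    x.append(10 + 0.8 * (x[-1] - 10) + 0.2 * math.sin(t)
             + (3.0 if t == 20 else 0.0))

print(residual_chart(x))  # the upset at t = 20 is flagged
```

    Charting the raw autocorrelated series instead would either inflate the control limits or trigger false alarms, which is exactly why the abstract insists on independent, identically distributed residuals.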

  7. HEDR modeling approach

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1992-07-01

    This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and proposes an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered

  8. Applying Digital Sensor Technology: A Problem-Solving Approach

    Science.gov (United States)

    Seedhouse, Paul; Knight, Dawn

    2016-01-01

    There is currently an explosion in the number and range of new devices coming onto the technology market that use digital sensor technology to track aspects of human behaviour. In this article, we present and exemplify a three-stage model for the application of digital sensor technology in applied linguistics that we have developed, namely,…

  9. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
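    One of the classical criteria mentioned here can be demonstrated concretely: selecting the order of a polynomial regression model with BIC, whose penalty term embodies the fit-versus-complexity trade-off that matters when data are limited. The synthetic data and candidate orders are illustrative, not from the report.

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def poly_fit_rss(xs, ys, order):
    """Least-squares polynomial fit via normal equations; returns the RSS."""
    X = [[x ** j for j in range(order + 1)] for x in xs]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(order + 1)]
           for i in range(order + 1)]
    Xty = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(order + 1)]
    beta = solve(XtX, Xty)
    return sum((y - sum(b * x ** j for j, b in enumerate(beta))) ** 2
               for x, y in zip(xs, ys))

def bic(rss, n, k):
    """Bayesian information criterion for a Gaussian-error regression."""
    return n * math.log(rss / n) + k * math.log(n)

# Synthetic data: a straight line plus small alternating "noise".
xs = [i / 30 for i in range(30)]
ys = [1.0 + 2.0 * x + (0.1 if i % 2 == 0 else -0.1) for i, x in enumerate(xs)]

scores = {m: bic(poly_fit_rss(xs, ys, m), len(xs), m + 1) for m in range(4)}
best_order = min(scores, key=scores.get)
print(best_order)  # BIC selects the linear model
```

    The decision-theoretic approach developed in the report goes further: it would replace the generic BIC penalty with a utility function reflecting the consequences of choosing the wrong model for the intended use.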

  10. Object Oriented Business Process Modelling in RFID Applied Computing Environments

    Science.gov (United States)

    Zhao, Xiaohui; Liu, Chengfei; Lin, Tao

    As a tracking technology, Radio Frequency Identification (RFID) is now widely applied to enhance the context awareness of enterprise information systems. Such awareness provides great opportunities to facilitate business process automation and thereby improve operation efficiency and accuracy. With the aim to incorporate business logics into RFID-enabled applications, this book chapter addresses how RFID technologies impact current business process management and the characteristics of object-oriented business process modelling. This chapter first discusses the rationality and advantages of applying object-oriented process modelling in RFID applications, then addresses the requirements and guidelines for RFID data management and process modelling. Two typical solutions are introduced to further illustrate the modelling and incorporation of business logics/business processes into RFID edge systems. To demonstrate the applicability of these two approaches, a detailed case study is conducted within a distribution centre scenario.

  11. The hybrid thermography approach applied to architectural structures

    Science.gov (United States)

    Sfarra, S.; Ambrosini, D.; Paoletti, D.; Nardi, I.; Pasqualoni, G.

    2017-07-01

    This work contains an overview of the infrared thermography (IRT) method and its applications to the investigation of architectural structures. In this method, the passive approach is usually used in civil engineering, since it provides a panoramic view of the thermal anomalies to be interpreted, also thanks to the use of photographs focused on the region of interest (ROI). The active approach is more suitable for laboratory or indoor inspections, as well as for objects having a small size. The external stress to be applied is thermal, coming from non-natural apparatus such as lamps or hot/cold air jets. In addition, the latter permits quantitative information to be obtained about defects not detectable to the naked eye. Very recently, the hybrid thermography (HIRT) approach has been introduced to the attention of the scientific panorama. It can be applied when the radiation coming from the sun arrives directly (i.e., possibly without the shadow cast effect) on a surface exposed to the air. A large number of thermograms must be collected and a post-processing analysis is subsequently applied via advanced algorithms. An appraisal of the defect depth can then be obtained by passing through the calculation of the combined thermal diffusivity of the materials above the defect. The approach is validated herein by working, in a first stage, on a mosaic sample having known defects and, in a second stage, on a church built in L'Aquila (Italy) and covered with a particular masonry structure called apparecchio aquilano. The results obtained appear promising.
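    The depth-appraisal step mentioned at the end of this record is, in transient thermography, commonly based on the thermal diffusion-length relation z ≈ C·sqrt(α·t), where t is the time at which the defect's thermal contrast peaks and α the combined diffusivity of the material above it. The calibration constant C and the numbers below are illustrative assumptions, not values from the study.

```python
import math

def defect_depth(alpha, t_peak, C=1.0):
    """Estimate defect depth (m) from combined thermal diffusivity alpha
    (m^2/s) and observed peak-contrast time t_peak (s).
    Uses z ~ C * sqrt(alpha * t_peak); C is a calibration constant
    (order unity, to be fitted on a sample with known defects)."""
    return C * math.sqrt(alpha * t_peak)

# Hypothetical masonry: alpha = 5e-7 m^2/s, contrast peaking after 50 s.
z = defect_depth(5e-7, 50.0)
print(f"estimated depth: {z * 1000:.1f} mm")  # -> 5.0 mm
```

    This is how a known-defect sample (like the mosaic mentioned above) serves the method: it pins down C and the combined diffusivity before the relation is applied to the real structure.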

  12. Terahertz spectroscopy applied to food model systems

    DEFF Research Database (Denmark)

    Møller, Uffe

    Water plays a crucial role in the quality of food. Apart from the natural water content of a food product, the state of that water is very important. Water can be found integrated into the biological material or it can be added during production of the product. Currently it is difficult to differentiate between these types of water in subsequent quality controls. This thesis describes terahertz time-domain spectroscopy applied to aqueous food model systems, with particular focus on ethanol-water mixtures and confined water pools in inverse micelles.

  13. Fuzzy model predictive control algorithm applied in nuclear power plant

    International Nuclear Information System (INIS)

    Zuheir, Ahmad

    2006-01-01

    The aim of this paper is to design a predictive controller based on a fuzzy model. The Takagi-Sugeno fuzzy model with an adaptive B-splines neuro-fuzzy implementation is used and incorporated as a predictor in a predictive controller. An optimization approach with a simplified gradient technique is used to calculate predictions of the future control actions. In this approach, adaptation of the fuzzy model using dynamic process information is carried out to build the predictive controller. The easy description of the fuzzy model and the easy computation of the gradient vector during the optimization procedure are the main advantages of the computation algorithm. The algorithm is applied to the control of a U-tube steam generator (UTSG) used for electricity generation. (author)
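    The Takagi-Sugeno structure used as the predictor here can be illustrated in miniature: each rule pairs a fuzzy membership on the input with a linear consequent, and the model output is the membership-weighted average of the consequents. For brevity the record's B-spline memberships are replaced by Gaussians; the rules and numbers are invented.

```python
import math

def gauss(x, c, s):
    """Gaussian membership function centred at c with width s."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

# Two Takagi-Sugeno rules: membership parameters + linear consequent a*x + b.
rules = [
    {"centre": 0.0,  "width": 3.0, "a": 2.0,  "b": 1.0},   # "x is LOW"
    {"centre": 10.0, "width": 3.0, "a": -1.0, "b": 10.0},  # "x is HIGH"
]

def ts_infer(x):
    """Weighted average of rule consequents (Takagi-Sugeno defuzzification)."""
    w = [gauss(x, r["centre"], r["width"]) for r in rules]
    y = [r["a"] * x + r["b"] for r in rules]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

print(ts_infer(0.0))  # dominated by the LOW rule, close to 1
print(ts_infer(5.0))  # equal memberships: average of 11 and 5 -> 8.0
```

    In the predictive-control setting, a model of this form is evaluated repeatedly over the prediction horizon, and the gradient of the predicted cost with respect to the future control actions is what the paper's simplified gradient technique computes.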

  14. A Cooperation Model Applied in a Kindergarten

    Directory of Open Access Journals (Sweden)

    Jose I. Rodriguez

    2011-10-01

    The need for collaboration in a global world has become a key success factor for many organizations and individuals. However, in several regions and organizations of the world it has not yet taken hold. One of the settings where major obstacles to collaboration occur is the business arena, mainly because of competitive beliefs that cooperation could hurt profitability. We have found such behavior in a wide variety of countries, in advanced and developing economies. Such cultural behaviors or traits characterize entrepreneurs who work in isolation, forgoing the possibility of building clusters to promote regional development. The need to improve the essential abilities that make up cooperation is evident. It is also very difficult to change such conduct in adults, so we decided to work with children, to prepare future generations to live in a cooperative world so badly hit by greed and individualism nowadays. We have validated that working with children at an early age improves such behavior. This paper develops a model to enhance the essential abilities needed to improve cooperation. The model has been validated by applying it at a kindergarten school.

  15. Applying a new ensemble approach to estimating stock status of marine fisheries around the world

    DEFF Research Database (Denmark)

    Rosenberg, Andrew A.; Kleisner, Kristin M.; Afflerbach, Jamie

    2018-01-01

    The exploitation status of marine fisheries stocks worldwide is of critical importance for food security, ecosystem conservation, and fishery sustainability. Applying a suite of data-limited methods to global catch data, combined through an ensemble modeling approach, we provide quantitative ... substantial yield. Our results enable managers to consider more detailed information than simply a categorization of stocks as "fully" or "over" exploited. Our approach is reproducible, allows consistent application to a broad range of stocks, and can be easily updated as new data become available. Applied on an ongoing basis, this approach can provide critical, more detailed information for resource management for more exploited fish stocks than currently available.
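    The ensemble idea can be sketched as follows: several data-limited methods each yield an estimate of B/B_MSY for a stock, and a combined estimate drives the status call. For brevity the combination here is a plain median, whereas the study trained a superensemble on simulated stocks; the method names and values are invented.

```python
import statistics

# Hypothetical B/Bmsy estimates per stock from three data-limited methods.
estimates = {
    "stock_A": {"cmsy": 0.45, "costello": 0.60, "ssp": 0.50},
    "stock_B": {"cmsy": 1.10, "costello": 0.95, "ssp": 1.30},
}

def ensemble_status(per_method, low=0.8, high=1.2):
    """Median across methods, mapped to a coarse status category."""
    b = statistics.median(per_method.values())
    if b < low:
        status = "overexploited"
    elif b > high:
        status = "underexploited"
    else:
        status = "fully exploited"
    return b, status

for stock, per_method in estimates.items():
    b, status = ensemble_status(per_method)
    print(f"{stock}: B/Bmsy ~ {b:.2f} ({status})")
```

    Keeping the continuous B/B_MSY estimate alongside the category is what gives managers "more detailed information than simply a categorization", as the abstract puts it.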

  16. Applying Model Based Systems Engineering to NASA's Space Communications Networks

    Science.gov (United States)

    Bhasin, Kul; Barnes, Patrick; Reinert, Jessica; Golden, Bert

    2013-01-01

    Systems engineering practices for complex systems and networks now require that requirements, architecture, and concept-of-operations product development teams simultaneously harmonize their activities to provide timely, useful and cost-effective products. When dealing with complex systems of systems, traditional systems engineering methodology quickly falls short of achieving project objectives, encumbered as it is by a number of disparate hardware and software tools, spreadsheets and documents used to grasp the concept of the network design and operation. In the case of NASA's space communication networks, the networks are geographically distributed, and so are their subject matter experts, so the team is challenged to create a common language and tools to produce its products. Using Model Based Systems Engineering methods and tools allows for a unified representation of the system in a model that enables a highly related level of detail. To date, the Program System Engineering (PSE) team has been able to model each network from its top-level operational activities and system functions down to the atomic level through relational modeling decomposition. These models allow for a better understanding of the relationships between NASA's stakeholders and internal organizations, and of the impacts to all related entities due to integration and sustainment of existing systems. Understanding the existing systems is essential to an accurate and detailed study of the integration options being considered. In this paper, we identify the challenges the PSE team faced in its quest to unify complex legacy space communications networks and their operational processes. We describe the initial approaches undertaken and the evolution toward model based systems engineering applied to produce Space Communication and Navigation (SCaN) PSE products. We will demonstrate the practice of Model Based System Engineering applied to integrating space communication networks and the summary of its

  17. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  18. Applying a Modified Triad Approach to Investigate Wastewater Lines

    International Nuclear Information System (INIS)

    Pawlowicz, R.; Urizar, L.; Blanchard, S.; Jacobsen, K.; Scholfield, J.

    2006-01-01

    Approximately 20 miles of wastewater lines lie below grade at an active military Base. This piping network feeds or fed domestic or industrial wastewater treatment plants on the Base. Past wastewater line investigations indicated potential contaminant releases to soil and groundwater, and further environmental assessment was recommended to characterize the lines. A Remedial Investigation (RI) using random sampling, or sampling points spaced at predetermined distances along the entire length of the wastewater lines, would however be inefficient and cost prohibitive. To accomplish the RI goals efficiently and within budget, a modified Triad approach was used to design a defensible sampling and analysis plan and perform the investigation. The RI task was successfully executed and resulted in a reduced fieldwork schedule and lower sampling and analytical costs. Results indicated that no major releases occurred at the biased sampling points. It was reasonably extrapolated that since releases did not occur at the most likely locations, the entire length of a particular wastewater line segment was unlikely to have contaminated soil or groundwater, and it was recommended for no further action. A determination of no further action was recommended for the majority of the waste lines after completing the investigation. The modified Triad approach was successful, and a similar approach could be applied to investigate wastewater lines at other United States Department of Defense or Department of Energy facilities. (authors)

  19. Material Modelling - Composite Approach

    DEFF Research Database (Denmark)

    Nielsen, Lauge Fuglsang

    1997-01-01

    This report is part of a research project on "Control of Early Age Cracking" - which, in turn, is part of the major research programme, "High Performance Concrete - The Contractor's Technology (HETEK)", coordinated by the Danish Road Directorate, Copenhagen, Denmark, 1997.A composite-rheological ......This report is part of a research project on "Control of Early Age Cracking" - which, in turn, is part of the major research programme, "High Performance Concrete - The Contractor's Technology (HETEK)", coordinated by the Danish Road Directorate, Copenhagen, Denmark, 1997.A composite......-rheological model of concrete is presented by which consistent predictions of creep, relaxation, and internal stresses can be made from known concrete composition, age at loading, and climatic conditions. No other existing "creep prediction method" offers these possibilities in one approach.The model...... in this report is that cement paste and concrete behave practically as linear-viscoelastic materials from an age of approximately 10 hours. This is a significant age extension relative to earlier studies in the literature where linear-viscoelastic behavior is only demonstrated from ages of a few days. Thus...

  20. Molecular modeling: An open invitation for applied mathematics

    Science.gov (United States)

    Mezey, Paul G.

    2013-10-01

    Molecular modeling methods provide a very wide range of challenges for innovative mathematical and computational techniques, where often high dimensionality, large sets of data, and complicated interrelations imply a multitude of iterative approximations. The physical and chemical basis of these methodologies involves quantum mechanics with several non-intuitive aspects, where classical interpretation and classical analogies are often misleading or outright wrong. Hence, instead of the everyday, common-sense approaches which work so well in engineering, in molecular modeling one often needs to rely on rather abstract mathematical constraints and conditions, again emphasizing the high level of reliance on applied mathematics. Yet the interdisciplinary aspects of the field of molecular modeling also generate some inertia and a perhaps too conservative reliance on tried and tested methodologies, at least partially caused by less than up-to-date involvement in the newest developments in applied mathematics. It is expected that as more applied mathematicians take up the challenge of employing the latest advances of their field in molecular modeling, important breakthroughs may follow. In this presentation some of the current challenges of molecular modeling are discussed.

  1. A new kinetic biphasic approach applied to biodiesel process intensification

    Energy Technology Data Exchange (ETDEWEB)

    Russo, V.; Tesser, R.; Di Serio, M.; Santacesaria, E. [Naples Univ. (Italy). Dept. of Chemistry

    2012-07-01

    Many papers have been published on the kinetics of the transesterification of vegetable oil with methanol in the presence of alkaline catalysts to produce biodiesel. All the proposed approaches are based on the assumption of a pseudo-monophasic system, and as a consequence some experimental aspects cannot be described. For the reaction performed in batch conditions, for example, the monophasic approach is not able to reproduce the different plateaus obtained with different amounts of catalyst, or the induction time observed at low stirring rates. Moreover, it has been observed in continuous reactors that micromixing has a dramatic effect on the reaction rate. On this subject, we have recently observed that it is possible to obtain complete conversion to biodiesel in less than 10 seconds of reaction time. This observation is confirmed by other authors using different types of reactors, such as static mixers, micro-reactors, oscillatory flow reactors, cavitational reactors, microwave reactors and centrifugal contactors. In this work we show that a recently proposed biphasic kinetic approach is able to describe all the aspects mentioned above that cannot be described by the monophasic kinetic model. In particular, we show that the biphasic kinetic model can describe both the induction time observed in batch reactors at low stirring rates and the very high conversions obtainable in a micro-channel reactor. The adopted biphasic kinetic model is based on a reliable reaction mechanism that is validated by the experimental evidence reported in this work. (orig.)
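    For contrast, the pseudo-monophasic baseline that the abstract criticizes can be sketched as a single second-order rate law integrated with explicit Euler. The rate constant and concentrations below are invented for illustration; the paper's biphasic model adds mass transfer between the oil and methanol phases, which this sketch deliberately omits.

```python
# Toy pseudo-monophasic second-order kinetics for transesterification:
# TG + 3 MeOH -> 3 FAME + glycerol, treated as one well-mixed phase.
# Rate constant and initial concentrations are illustrative only.

def integrate(k, tg0, meoh0, dt=0.001, t_end=10.0):
    tg, meoh, fame = tg0, meoh0, 0.0
    t = 0.0
    while t < t_end:
        r = k * tg * meoh          # d[TG]/dt = -k [TG][MeOH]
        tg -= r * dt
        meoh -= 3 * r * dt         # 3 mol methanol per mol triglyceride
        fame += 3 * r * dt
        t += dt
    return tg, fame

tg_end, fame_end = integrate(k=0.5, tg0=1.0, meoh0=6.0)
conversion = 1.0 - tg_end / 1.0
print(round(conversion, 3))  # fractional conversion of triglyceride
```

    A model like this always predicts the same conversion curve for a given k, which is exactly why it cannot reproduce stirring-rate-dependent induction times: those require a second phase and an interphase mass-transfer term.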

  2. Online traffic flow model applying dynamic flow-density relation

    International Nuclear Information System (INIS)

    Kim, Y.

    2002-01-01

    This dissertation describes a new approach to online traffic flow modelling, based on the hydrodynamic traffic flow model and an online process that adapts the flow-density relation dynamically. The new modelling approach was tested on real traffic situations in various homogeneous motorway sections and in a motorway section with ramps, and gave encouraging simulation results. The work is composed of two parts: first, the analysis of traffic flow characteristics, and second, the development of a new online traffic flow model applying these characteristics. For homogeneous motorway sections, traffic flow is classified into six different traffic states with different characteristics, and delimitation criteria were developed to separate these states. The hysteresis phenomena were analysed during the transitions between these traffic states. The traffic states and the transitions are represented on a states diagram with a flow axis and a density axis. For motorway sections with ramps, the complicated traffic flow is simplified and classified into three traffic states depending on the propagation of congestion; these states are represented on a phase diagram with an upstream-demand axis and an interaction-strength axis defined in this research. The states diagram and the phase diagram provide the basis for the development of the dynamic flow-density relation. The first-order hydrodynamic traffic flow model was programmed according to the cell-transmission scheme, extended by the modification of flow-dependent sending/receiving functions, the classification of cells, and the determination strategy for the flow-density relation in the cells. The unreasonable results of macroscopic traffic flow models, which may occur in the first and last cells under certain conditions, are alleviated by applying buffer cells between the traffic data and the model. The sending/receiving functions of the cells are determined dynamically based on the classification of the
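    The cell-transmission scheme at the core of such a model is compact enough to sketch: each cell can *send* up to its occupancy (capped by capacity) and *receive* up to its spare jam capacity, and the flow between neighbours is the minimum of the two. A static triangular flow-density relation is assumed here; the dissertation's contribution is making that relation adapt online, which this sketch does not attempt.

```python
# Minimal cell-transmission scheme (first-order hydrodynamic model)
# for a homogeneous stretch. All parameters are illustrative.

N_CELLS = 10
Q_MAX = 0.5      # capacity per time step (veh)
N_JAM = 4.0      # jam occupancy per cell (veh)
W_OVER_V = 0.5   # backward wave speed / free-flow speed

def step(n, inflow):
    """Advance cell occupancies n by one time step."""
    # Sending function: what each cell can pass downstream.
    send = [min(x, Q_MAX) for x in n]
    # Receiving function: what each cell can accept from upstream.
    recv = [min(Q_MAX, W_OVER_V * (N_JAM - x)) for x in n]
    flows = [min(inflow, recv[0])]
    for i in range(1, N_CELLS):
        flows.append(min(send[i - 1], recv[i]))
    flows.append(send[-1])  # unrestricted outflow downstream
    return [n[i] + flows[i] - flows[i + 1] for i in range(N_CELLS)]

cells = [0.0] * N_CELLS
for _ in range(100):
    cells = step(cells, inflow=0.4)
print([round(x, 2) for x in cells])  # settles at the demand level
```

    With a constant sub-capacity demand the occupancies settle to a uniform free-flow state; replacing the fixed Q_MAX and N_JAM with values estimated online from detector data is the kind of extension the dissertation describes.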

  3. A general diagnostic model applied to language testing data.

    Science.gov (United States)

    von Davier, Matthias

    2008-11-01

    Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.
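    The Rasch model named in the abstract as a special case of the GDM is simple enough to estimate with standard ML, which is the paper's point. The sketch below fits a single examinee's ability by Newton-Raphson; item difficulties and responses are invented for illustration.

```python
# Maximum-likelihood ability estimation under the Rasch model, one of
# the special cases of the general diagnostic model (GDM) family.
# Item difficulties and the response pattern are made up.
import math

def p_correct(theta, b):
    """Rasch probability of a correct response: logistic(theta - b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def ml_ability(responses, difficulties, theta=0.0):
    for _ in range(50):
        ps = [p_correct(theta, b) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, ps))  # score function
        info = sum(p * (1.0 - p) for p in ps)             # Fisher information
        theta += grad / info                              # Newton step
        if abs(grad) < 1e-10:
            break
    return theta

b = [-1.0, -0.5, 0.0, 0.5, 1.0]   # item difficulties
x = [1, 1, 1, 0, 1]               # observed right/wrong answers
theta_hat = ml_ability(x, b)
print(round(theta_hat, 3))
```

    At the ML solution the expected number correct equals the observed score, the classic estimating equation for the Rasch model; the GDM generalizes this to polytomous items and multiple discrete skill levels.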

  4. Spectral methods applied to Ising models

    International Nuclear Information System (INIS)

    DeFacio, B.; Hammer, C.L.; Shrauner, J.E.

    1980-01-01

    Several applications of Ising models are reviewed. A 2-d Ising model is studied, and the problem of describing an interface boundary in a 2-d Ising model is addressed. Spectral methods are used to formulate a soluble model for the surface tension of a many-Fermion system
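    As generic background for the model being reviewed (not the paper's spectral method), a 2-d Ising model can be simulated in a few lines with Metropolis Monte Carlo; lattice size, temperature, and sweep count below are arbitrary.

```python
# Tiny Metropolis Monte Carlo simulation of the 2-d Ising model with
# periodic boundaries. Parameters are illustrative, not the paper's.
import math
import random

L = 8            # lattice side
BETA = 1.0       # inverse temperature (ordered phase, beta_c ~ 0.44)
random.seed(0)
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def delta_e(i, j):
    """Energy change from flipping spin (i, j)."""
    s = spins[i][j]
    nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
          spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    return 2.0 * s * nb

for _ in range(20000):
    i, j = random.randrange(L), random.randrange(L)
    dE = delta_e(i, j)
    if dE <= 0 or random.random() < math.exp(-BETA * dE):
        spins[i][j] = -spins[i][j]

m = abs(sum(sum(row) for row in spins)) / (L * L)
print(round(m, 3))   # magnetization per spin
```

    The interface-boundary and surface-tension questions the abstract mentions concern exactly how domains of +1 and -1 spins in such a configuration meet.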

  5. Geothermal potential assessment for a low carbon strategy: A new systematic approach applied in southern Italy

    NARCIS (Netherlands)

    Trumpy, E.; Botteghi, S.; Caiozzi, F.; Donato, A.; Gola, G.; Montanari, D.; Pluymaekers, M.P.D.; Santilano, A.; Wees, J.D. van; Manzella, A.

    2016-01-01

    In this study a new approach to geothermal potential assessment was set up and applied in four regions in southern Italy. Our procedure, VIGORThermoGIS, relies on the volume method of assessment and uses a 3D model of the subsurface to integrate thermal, geological and petro-physical data. The
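    The volume method underlying the procedure reduces, per reservoir volume, to a heat-in-place estimate Q = rho_c * V * (T_reservoir - T_ref), scaled by a recovery factor. The parameter values below are illustrative placeholders, not figures from the study.

```python
# Sketch of the volume-method heat-in-place estimate used in
# geothermal potential assessment. All values are illustrative.

RHO_C = 2.5e6        # volumetric heat capacity of rock, J/(m^3 K)
volume = 10.0e9      # reservoir volume, m^3 (10 km^3)
t_res = 150.0        # reservoir temperature, deg C
t_ref = 25.0         # reference (rejection) temperature, deg C
recovery = 0.10      # recoverable fraction (assumed)

heat_in_place = RHO_C * volume * (t_res - t_ref)   # joules
recoverable = recovery * heat_in_place
print(f"{heat_in_place:.3e} J in place, {recoverable:.3e} J recoverable")
```

    The GIS side of the procedure amounts to evaluating this product cell by cell over a 3D temperature model rather than for one bulk volume.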

  7. Agent-Based Modelling applied to a 5D model of HIV infection

    Directory of Open Access Journals (Sweden)

    Toufik Laroum

    2016-12-01

    The simplest model was the 3D mathematical model. But the complexity of this phenomenon and the diversity of cells and actors affecting its evolution require the use of new approaches, such as the multi-agent approach applied in this paper. The results of our simulator on the 5D model are promising because they are consistent with biological knowledge. The proposed approach is therefore well suited to the study of population dynamics in general and could help to understand and predict the dynamics of HIV infection.
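    The 3D starting point the abstract refers to is the classic target-cell model (healthy T cells, infected cells, free virus); a minimal Euler integration of it is sketched below with textbook-style illustrative parameters. The 5D agent-based extension adds further cell populations, which this sketch omits.

```python
# The basic 3D HIV infection model: target cells T, infected cells I,
# free virus V, integrated with explicit Euler. Parameter values are
# illustrative, textbook-style numbers, not the paper's.

LAM, D = 10.0, 0.01      # T-cell production and natural death rates
BETA = 5.0e-5            # infection rate
DELTA = 0.5              # infected-cell death rate
P, C = 100.0, 5.0        # virion production and clearance rates

def simulate(t_end=200.0, dt=0.01):
    T, I, V = 1000.0, 0.0, 1.0e-3
    for _ in range(int(t_end / dt)):
        dT = LAM - D * T - BETA * T * V
        dI = BETA * T * V - DELTA * I
        dV = P * I - C * V
        T += dT * dt
        I += dI * dt
        V += dV * dt
    return T, I, V

T, I, V = simulate()
print(round(T, 1), round(I, 2), round(V, 2))
```

    With these values the basic reproduction number BETA*P*T0/(DELTA*C) is 2, so the infection persists and the system approaches a chronic set point; an agent-based simulator reproduces the same mean-field behaviour while letting individual cells carry state.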

  8. A stochastic root finding approach: the homotopy analysis method applied to Dyson-Schwinger equations

    Science.gov (United States)

    Pfeffer, Tobias; Pollet, Lode

    2017-04-01

    We present the construction and stochastic summation of rooted-tree diagrams, based on the expansion of a root finding algorithm applied to the Dyson-Schwinger equations. The mathematical formulation shows superior convergence properties compared to the bold diagrammatic Monte Carlo approach and the developed algorithm allows one to tackle generic high-dimensional integral equations, to avoid the curse of dealing explicitly with high-dimensional objects and to access non-perturbative regimes. The sign problem remains the limiting factor, but it is not found to be worse than in other approaches. We illustrate the method for φ⁴ theory but note that it applies in principle to any model.
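    A deterministic toy version conveys the root-finding idea: a Dyson-like self-consistency equation g = g0 + lam*g^2 for a "dressed" quantity g is solved by Newton iteration. The equation and its coefficients are invented stand-ins; the paper's method instead samples the rooted-tree expansion of such an equation stochastically.

```python
# Toy root-finding on a Dyson-like self-consistency equation
# g = g0 + lam * g**2, solved by Newton iteration. The equation is a
# scalar stand-in for the functional Dyson-Schwinger equations.

def solve(g0=1.0, lam=0.2, tol=1e-12):
    g = g0
    for _ in range(100):
        f = g0 + lam * g * g - g        # residual of the equation
        fprime = 2.0 * lam * g - 1.0    # derivative of the residual
        step = f / fprime
        g -= step
        if abs(step) < tol:
            break
    return g

g = solve()
print(round(g, 6))  # physical root of 0.2*g^2 - g + 1 = 0
```

    Newton converges here to the root continuously connected to the non-interacting limit g0; the stochastic version must reach the same fixed point while every evaluation of the residual is itself a high-dimensional Monte Carlo estimate.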

  9. Applied Creativity: The Creative Marketing Breakthrough Model

    Science.gov (United States)

    Titus, Philip A.

    2007-01-01

    Despite the increasing importance of personal creativity in today's business environment, few conceptual creativity frameworks have been presented in the marketing education literature. The purpose of this article is to advance the integration of creativity instruction into marketing classrooms by presenting an applied creative marketing…

  10. Applying FEATHERS for Travel Demand Analysis: Model Considerations

    Directory of Open Access Journals (Sweden)

    Qiong Bao

    2018-01-01

    Activity-based models of travel demand have received considerable attention in transportation planning and forecasting over the last few decades. FEATHERS (The Forecasting Evolutionary Activity-Travel of Households and their Environmental Repercussions), developed by the Transportation Research Institute of Hasselt University, Belgium, is a micro-simulation framework developed to facilitate the implementation of activity-based models for transport demand forecasting. In this paper, we focus on several model considerations when applying this framework. First, the way to apply FEATHERS on a more disaggregated geographical level is investigated, with the purpose of obtaining more detailed travel demand information. Next, to reduce the computation time when applying FEATHERS on a more detailed geographical level, an iteration approach is proposed to identify the minimum size of the study area needed. In addition, the effect of stochastic errors inherently included in the FEATHERS framework is investigated, and the concept of confidence intervals is applied to determine the minimum number of model runs needed to minimize this effect. In the application, the FEATHERS framework is used to investigate the potential impact of light rail initiatives on travel demand at a local network in Flanders, Belgium. In doing so, all the aforementioned model considerations are taken into account. The results indicate that by integrating a light rail network into the current public transport network, there would be a relatively positive impact on public transport-related trips, but a relatively negative impact on the non-motorized-mode trips in this area. However, no significant change is found for car-related trips.
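    The confidence-interval criterion for the minimum number of stochastic runs can be sketched generically: keep adding runs until the 95% CI half-width of the mean output falls below a tolerance. The "model run" below is a stand-in random variable, not FEATHERS output, and all numbers are illustrative.

```python
# Sketch: choosing the minimum number of stochastic model runs via a
# confidence-interval stopping rule. model_run() is a hypothetical
# stand-in for one micro-simulation run's output.
import random
import statistics

random.seed(42)

def model_run():
    return random.gauss(100.0, 5.0)   # hypothetical trip count

def runs_needed(tol=1.0, z=1.96, max_runs=10000):
    samples = [model_run() for _ in range(10)]   # small warm-up batch
    while len(samples) < max_runs:
        half = z * statistics.stdev(samples) / len(samples) ** 0.5
        if half < tol:                # 95% CI half-width small enough
            break
        samples.append(model_run())
    return len(samples), statistics.mean(samples)

n, mean = runs_needed()
print(n, round(mean, 2))
```

    With a standard deviation near 5 and a tolerance of 1, the rule stops in the vicinity of (1.96 * 5 / 1)^2 ≈ 96 runs; averaging that many runs is what suppresses the framework's inherent stochastic error.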

  11. Applying the stakeholder approach to the strategic management of territorial development

    Directory of Open Access Journals (Sweden)

    Ilshat Azamatovich Tazhitdinov

    2013-06-01

    In this paper, aspects of the strategic management of the socioeconomic development of territories are discussed in terms of the stakeholder approach. The author's interpretation of the concept of a sub-region stakeholder is proposed, and a classification into stakeholders internal and external to the territorial socioeconomic system of the sub-regional level is offered. The types of stakeholder interests and resources in the sub-region are identified; correlating interests and resources makes it possible to determine groups (alliances) of stakeholders that ensure a balance of interests depending on the particular objectives of the association. A conceptual stakeholder-agent model of the management of strategic territorial development within the hierarchical system «region — sub-region — municipal formation» is proposed. All stakeholders are considered as influence agents directing their own resources to provide a comprehensive approach to managing territorial development. The interaction between the influence agents of the «region — sub-region — municipal formation» system is provided vertically and horizontally through the initialization of the development and implementation of strategic documents of the sub-region. Vertical interaction occurs between stakeholders such as government and municipal authorities acting in a directive role, and horizontal interaction between the rest of them acting as partners. Within the proposed model, concurrent engineering is implemented as a form of inter-municipal strategic cooperation of local governments for forming and analyzing a set of alternative project activities in the sub-region in order to choose the best options. The proposed approach was tested in the development of the medium-term comprehensive program of socioeconomic development of the Zauralye and North-East sub-regions of the Republic of Bashkortostan (2011–2015).

  12. A Log Logistic Survival Model Applied to Hypobaric Decompression Sickness

    Science.gov (United States)

    Conkin, Johnny

    2001-01-01

    Decompression sickness (DCS) is a complex, multivariable problem. A mathematical description or model of the likelihood of DCS requires a large amount of quality research data, ideas on how to define a decompression dose using physical and physiological variables, and an appropriate analytical approach. It also requires a high-performance computer with specialized software. I have used published DCS data to develop my decompression doses, which are variants of equilibrium expressions for evolved gas plus other explanatory variables. My analytical approach is survival analysis, where the time of DCS occurrence is modeled. My conclusions can be applied to simple hypobaric decompressions - ascents lasting from 5 to 30 minutes - and, after minutes to hours, to denitrogenation (prebreathing). They are also applicable to long or short exposures, and can be used whether the sufferer of DCS is at rest or exercising at altitude. Ultimately I would like my models to be applied to astronauts to reduce the risk of DCS during spacewalks, as well as to future spaceflight crews on the Moon and Mars.
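    The building block of such a model is the log-logistic survival function S(t) and its hazard h(t); in a full model the decompression dose would enter through the parameters. The sketch below uses the standard log-logistic forms with invented parameter values.

```python
# Log-logistic survival function S(t) and hazard h(t), the building
# blocks of a survival model for DCS onset time. Parameter values are
# illustrative only; a real model ties them to decompression dose.
import math

def survival(t, alpha, beta):
    """P(no DCS by time t); alpha = scale, beta = shape."""
    return 1.0 / (1.0 + (t / alpha) ** beta)

def hazard(t, alpha, beta):
    num = (beta / alpha) * (t / alpha) ** (beta - 1.0)
    return num / (1.0 + (t / alpha) ** beta)

alpha, beta = 120.0, 2.0        # e.g. minutes at altitude
s60 = survival(60.0, alpha, beta)
print(round(s60, 3), round(1.0 - s60, 3))  # P(no DCS), P(DCS) at t=60
```

    For beta > 1 the log-logistic hazard rises and then falls, which suits DCS: risk peaks some time into the exposure rather than growing without bound.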

  13. Applying mechanistic models in bioprocess development

    DEFF Research Database (Denmark)

    Lencastre Fernandes, Rita; Bodla, Vijaya Krishna; Carlquist, Magnus

    2013-01-01

    The available knowledge on the mechanisms of a bioprocess system is central to process analytical technology. In this respect, mechanistic modeling has gained renewed attention, since a mechanistic model can provide an excellent summary of available process knowledge. Such a model therefore......, experimental data from Saccharomyces cerevisiae fermentations are used. The data are described with the well-known model of Sonnleitner and Käppeli (Biotechnol Bioeng 28:927-937, 1986) and the model is analyzed further. The methods used are generic, and can be transferred easily to other, more complex case...

  14. Fractional calculus model of electrical impedance applied to human skin.

    Science.gov (United States)

    Vosika, Zoran B; Lazovic, Goran M; Misevic, Gradimir N; Simic-Krstic, Jovana B

    2013-01-01

    Fractional calculus is a mathematical approach dealing with derivatives and integrals of arbitrary and complex orders. Therefore, it adds a new dimension for understanding and describing the basic nature and behavior of complex systems in an improved way. Here we use fractional calculus for modeling the electrical properties of biological systems. We derived a new class of generalized models for electrical impedance and applied them to human skin by experimental data fitting. The primary model introduces new generalizations of: 1) the Weyl fractional derivative operator, 2) the Cole equation, and 3) the Constant Phase Element (CPE). These generalizations were described by a novel equation which introduced a parameter [Formula: see text] related to remnant memory and corrected four essential parameters [Formula: see text]. We further generalized the single generalized element by introducing a specific partial sum of the Maclaurin series determined by parameters [Formula: see text]. We defined the individual primary model elements and their serial combination models by the appropriate equations and electrical schemes. The Cole equation is a special case of our generalized class of models for [Formula: see text]. Previous bioimpedance data analyses of living systems using the basic Cole and serial Cole models show significant imprecisions. Our new class of models considerably improves the quality of fitting, evaluated by mean square errors, for bioimpedance data obtained from human skin. Our models, with new parameters presented in a specific partial sum of the Maclaurin series, also extend the representation, understanding and description of the electrical properties of complex systems in terms of remnant memory effects.
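    The Cole equation, which the paper generalizes, is itself a two-line computation: Z(w) = R_inf + (R_0 - R_inf) / (1 + (j*w*tau)^alpha), where the fractional exponent alpha is what the Constant Phase Element contributes. The parameter values below are typical-looking placeholders for skin, not fitted values from the study.

```python
# The Cole impedance model, the special case of the paper's
# generalized model class. Parameter values are placeholders.
import math

def cole_impedance(freq_hz, r0, r_inf, tau, alpha):
    """Complex impedance Z(w) = R_inf + (R0 - R_inf)/(1 + (jw*tau)^a)."""
    w = 2.0 * math.pi * freq_hz
    return r_inf + (r0 - r_inf) / (1.0 + (1j * w * tau) ** alpha)

R0, R_INF, TAU, ALPHA = 1.0e5, 1.0e3, 1.0e-3, 0.8   # ohms, s, unitless

z_low = cole_impedance(0.01, R0, R_INF, TAU, ALPHA)
z_high = cole_impedance(1.0e6, R0, R_INF, TAU, ALPHA)
print(abs(z_low), abs(z_high))   # tends to R0 at DC, R_inf at high f
```

    Fitting R0, R_INF, TAU, and ALPHA to measured spectra is the "basic Cole" analysis whose imprecision motivates the paper's extra remnant-memory parameters.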

  16. Applying Mathematical Models to Surgical Patient Planning

    NARCIS (Netherlands)

    J.M. van Oostrum (Jeroen)

    2009-01-01

    textabstractOn a daily basis surgeons, nurses, and managers face cancellation of surgery, peak demands on wards, and overtime in operating rooms. Moreover, the lack of an integral planning approach for operating rooms, wards, and intensive care units causes low resource utilization and makes patient

  17. Views on Montessori Approach by Teachers Serving at Schools Applying the Montessori Approach

    Science.gov (United States)

    Atli, Sibel; Korkmaz, A. Merve; Tastepe, Taskin; Koksal Akyol, Aysel

    2016-01-01

    Problem Statement: Further studies on Montessori teachers are required on the grounds that the Montessori approach, which, having been applied throughout the world, holds an important place in the alternative education field. Yet it is novel for Turkey, and there are only a limited number of studies on Montessori teachers in Turkey. Purpose of…

  18. Modelling and Applying OSS Adoption Strategies

    NARCIS (Netherlands)

    Lopez, L.; Costal, D.; Ayala, C.P.; Haaland, K.; Glott, R.

    2015-01-01

    Increasing adoption of Open Source Software (OSS) in information system engineering has led to the emergence of different OSS business strategies that affect and shape organizations’ business models. In this context, organizational modeling needs to reconcile efficiently OSS adoption strategies with

  19. Crop growth model WOFOST applied to potatoes

    NARCIS (Netherlands)

    Koning, de G.H.J.; Diepen, van C.A.; Reinds, G.J.

    1995-01-01

    The WOFOST model was calibrated with an experiment on yield effects of drought in potatoes, using data on weather, soil moisture and crop calendar. Then, crop growth and development were predicted for the next year, using planting date and weather data. The model is described. The adjustments in the

  20. Applying Olap Model On Public Finance Management

    OpenAIRE

    Dorde Pavlovic; Branko Gledovic

    2011-01-01

    Budget control derives from one of the main functions of the budget: the budget serves as a control instrument for the acquisition and spending of budgetary resources. The OLAP model is an instrument that finds its place in the budget planning process, the executive phases of the budget, accountancy, etc. There is a direct correlation between the OLAP model and the public finance management process.

  1. A suggested approach toward measuring sorption and applying sorption data to repository performance assessment

    International Nuclear Information System (INIS)

    Rundberg, R.S.

    1992-01-01

    The prediction of radionuclide migration for the purpose of assessing the safety of a nuclear waste repository will be based on collective knowledge of the hydrologic and geochemical properties of the surrounding rock and groundwater. This knowledge, along with assumptions about the interactions of radionuclides with groundwater and minerals, forms the scientific basis for a model capable of accurately predicting the repository's performance. Because the interaction of radionuclides in geochemical systems is known to be complicated, several fundamental and empirical approaches to measuring the interaction between radionuclides and the geologic barrier have been developed. The approaches applied to the measurement of sorption involve the use of pure minerals and intact or crushed rock in dynamic and static experiments. Each approach has its advantages and disadvantages. There is no single best method for providing sorption data for performance assessment models that can be applied without invoking information derived from multiple experiments. 53 refs., 12 figs
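    The standard route from a batch sorption experiment to a performance-assessment parameter can be sketched: a distribution coefficient Kd from initial and final solution concentrations, then a retardation factor R = 1 + (rho_b / theta) * Kd for the transport model. All numbers below are illustrative.

```python
# Sketch: batch-sorption Kd and the retardation factor it feeds into
# a transport model. Values are illustrative, not site data.

def kd_batch(c_init, c_final, v_solution_ml, m_solid_g):
    """Kd (mL/g) from initial/final solution concentrations."""
    sorbed_per_g = (c_init - c_final) * v_solution_ml / m_solid_g
    return sorbed_per_g / c_final

def retardation(kd_ml_g, bulk_density_g_cm3, porosity):
    """R = 1 + (rho_b / theta) * Kd for linear equilibrium sorption."""
    return 1.0 + (bulk_density_g_cm3 / porosity) * kd_ml_g

kd = kd_batch(c_init=1.0, c_final=0.2, v_solution_ml=30.0, m_solid_g=1.0)
R = retardation(kd, bulk_density_g_cm3=1.8, porosity=0.3)
print(round(kd, 6), round(R, 6))   # → 120.0 721.0
```

    The linear-equilibrium assumption baked into this formula is exactly what the abstract cautions about: a single Kd rarely captures the complicated geochemistry, which is why data from multiple experimental approaches are needed.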

  2. Applied modelling and computing in social science

    CERN Document Server

    Povh, Janez

    2015-01-01

    In social science, outstanding results are yielded by advanced simulation methods based on state-of-the-art software technologies and an appropriate combination of qualitative and quantitative methods. This book presents examples of successful applications of modelling and computing in social science: business and logistics process simulation and optimization, deeper knowledge extraction from big data, better understanding and prediction of social behaviour, and modelling of health and environmental changes.

  3. Applying Machine Trust Models to Forensic Investigations

    Science.gov (United States)

    Wojcik, Marika; Venter, Hein; Eloff, Jan; Olivier, Martin

    Digital forensics involves the identification, preservation, analysis and presentation of electronic evidence for use in legal proceedings. In the presence of contradictory evidence, forensic investigators need a means to determine which evidence can be trusted. This is particularly true in a trust model environment where computerised agents may make trust-based decisions that influence interactions within the system. This paper focuses on the analysis of evidence in trust-based environments and the determination of the degree to which evidence can be trusted. The trust model proposed in this work may be implemented in a tool for conducting trust-based forensic investigations. The model takes into account the trust environment and parameters that influence interactions in a computer network being investigated. Also, it allows for crimes to be reenacted to create more substantial evidentiary proof.

  4. Applying artificial intelligence to clinical guidelines: the GLARE approach.

    Science.gov (United States)

    Terenziani, Paolo; Montani, Stefania; Bottrighi, Alessio; Molino, Gianpaolo; Torchio, Mauro

    2008-01-01

    We present GLARE, a domain-independent system for acquiring, representing and executing clinical guidelines (GL). GLARE is characterized by the adoption of Artificial Intelligence (AI) techniques in the definition and implementation of the system. First of all, a high-level and user-friendly knowledge representation language has been designed. Second, a user-friendly acquisition tool, which provides expert physicians with various forms of help, has been implemented. Third, a tool for executing GL on a specific patient has been made available. At all the levels above, advanced AI techniques have been exploited, in order to enhance flexibility and user-friendliness and to provide decision support. Specifically, this chapter focuses on the methods we have developed in order to cope with (i) automatic resource-based adaptation of GL, (ii) representation and reasoning about temporal constraints in GL, (iii) decision making support, and (iv) model-based verification. We stress that, although we have devised such techniques within the GLARE project, they are mostly system-independent, so that they might be applied to other guideline management systems.

  5. Uncertainty models applied to the substation planning

    Energy Technology Data Exchange (ETDEWEB)

    Fontoura Filho, Roberto N. [ELETROBRAS, Rio de Janeiro, RJ (Brazil); Aires, Joao Carlos O.; Tortelly, Debora L.S. [Light Servicos de Eletricidade S.A., Rio de Janeiro, RJ (Brazil)

    1994-12-31

    The selection of reinforcements for a power system expansion becomes a difficult task in an environment of uncertainties. These uncertainties can be classified according to their sources as endogenous and exogenous. The first kind is associated with the elements of the generation, transmission and distribution systems. The exogenous uncertainty is associated with external aspects, such as the financial resources, the time required to build the installations, equipment prices and the load level. The load uncertainty is extremely sensitive to the behaviour of economic conditions. Although uncertainty cannot be removed completely, the endogenous kind can be conveniently treated and the exogenous kind can be compensated for. This paper describes an uncertainty treatment methodology and a practical application to a group of substations belonging to LIGHT, the Rio de Janeiro electric utility. The equipment performance uncertainty is treated by adopting a probabilistic approach. The uncertainty associated with load growth is considered by using technical analysis of scenarios and choice criteria based on Decision Theory. In this paper the Savage Method and the Fuzzy Set Method were used to select the best medium-term reinforcement plan. (author) 7 refs., 4 figs., 6 tabs.
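The scenario-based choice criteria mentioned above can be illustrated with Savage's minimax-regret rule: for each plan, compute its worst regret relative to the best plan in each scenario, then pick the plan whose worst regret is smallest. The plan names and costs below are hypothetical, not data from the paper.

```python
def minimax_regret(costs):
    """costs[plan] = list of costs, one per load scenario.
    Returns the plan minimizing the maximum regret, plus all regrets."""
    n_scenarios = len(next(iter(costs.values())))
    # Best (lowest) achievable cost in each scenario.
    best = [min(c[s] for c in costs.values()) for s in range(n_scenarios)]
    # Worst-case regret of each plan across scenarios.
    regret = {p: max(c[s] - best[s] for s in range(n_scenarios))
              for p, c in costs.items()}
    return min(regret, key=regret.get), regret

# Hypothetical reinforcement plans under a low-load and a high-load scenario.
costs = {"plan_A": [10, 30], "plan_B": [18, 20], "plan_C": [25, 21]}
choice, regret = minimax_regret(costs)   # -> "plan_B" (worst regret 8)
```

Plan B is never the cheapest, but it is the safest hedge: its cost is never far from the scenario optimum, which is exactly the trade-off the Savage criterion formalizes.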

  6. Applying the community partnership approach to human biology research.

    Science.gov (United States)

    Ravenscroft, Julia; Schell, Lawrence M; Cole, Tewentahawih'tha'

    2015-01-01

    Contemporary human biology research employs a unique skillset for biocultural analysis. This skillset is highly appropriate for the study of health disparities because disparities result from the interaction of social and biological factors over one or more generations. Health disparities research almost always involves disadvantaged communities owing to the relationship between social position and health in stratified societies. Successful research with disadvantaged communities involves a specific approach, the community partnership model, which creates a relationship beneficial for researcher and community. Paramount is the need for trust between partners. With trust established, partners share research goals, agree on research methods and produce results of interest and importance to all partners. Results are shared with the community as they are developed; community partners also provide input on analyses and interpretation of findings. This article describes a partnership-based, 20 year relationship between community members of the Akwesasne Mohawk Nation and researchers at the University at Albany. As with many communities facing health disparity issues, research with Native Americans and indigenous peoples generally is inherently politicized. For Akwesasne, the contamination of their lands and waters is an environmental justice issue in which the community has faced unequal exposure to, and harm by environmental toxicants. As human biologists engage in more partnership-type research, it is important to understand the long term goals of the community and what is at stake so the research circle can be closed and 'helicopter' style research avoided. © 2014 Wiley Periodicals, Inc.

  7. Private healthcare quality: applying a SERVQUAL model.

    Science.gov (United States)

    Butt, Mohsin Muhammad; de Run, Ernest Cyril

    2010-01-01

    This paper seeks to develop and test the SERVQUAL model scale for measuring Malaysian private health service quality. The study consists of 340 randomly selected participants visiting a private healthcare facility during a three-month data collection period. Data were analyzed using means, correlations, principal component and confirmatory factor analysis to establish the modified SERVQUAL scale's reliability, underlying dimensionality and convergent, discriminant validity. Results indicate a moderate negative quality gap for overall Malaysian private healthcare service quality. Results also indicate a moderate negative quality gap on each service quality scale dimension. However, scale development analysis yielded excellent results, which can be used in wider healthcare policy and practice. Respondents were skewed towards a younger population, causing concern that the results might not represent all Malaysian age groups. The study's major contribution is that it offers a way to assess private healthcare service quality. Second, it successfully develops a scale that can be used to measure health service quality in Malaysian contexts.
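The "quality gap" reported in this abstract is, in the SERVQUAL framework, the perception score minus the expectation score, averaged per dimension; a negative gap means expectations exceed perceptions. A minimal sketch follows, with hypothetical 7-point Likert responses and an invented item-to-dimension mapping (the paper's actual items and dimensions are not reproduced here).

```python
def servqual_gaps(expectations, perceptions, dims):
    """Mean perception-minus-expectation gap per dimension.
    Negative values indicate a service-quality shortfall."""
    gaps = {}
    for dim, items in dims.items():
        diffs = [perceptions[i] - expectations[i] for i in items]
        gaps[dim] = sum(diffs) / len(diffs)
    return gaps

# Hypothetical responses for six items grouped into two dimensions.
E = [6, 7, 6, 5, 6, 7]   # expectation scores
P = [5, 6, 6, 5, 5, 6]   # perception scores
gaps = servqual_gaps(E, P, {"tangibles": [0, 1, 2], "reliability": [3, 4, 5]})
```

Both hypothetical dimensions come out moderately negative, the same qualitative pattern the abstract reports for Malaysian private healthcare.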

  8. TCSC impedance regulator applied to the second benchmark model

    Energy Technology Data Exchange (ETDEWEB)

    Hamel, J.P.; Dessaint, L.A. [Ecole de Technologie Superieure, Montreal, PQ (Canada). Dept. of Electrical Engineering; Champagne, R. [Ecole de Technologie Superieure, Montreal, PQ (Canada). Dept. of Software and IT Engineering; Pare, D. [Institut de Recherche d' Hydro-Quebec, Varennes, PQ (Canada)

    2008-07-01

    Due to the combination of electrical demand growth and the high cost of building new power transmission lines, series compensation is increasingly used in power systems all around the world. Series compensation has been proposed as a new way to transfer more power on existing lines. By adding series compensation to an existing line (a relatively small change), the power transfer can be increased significantly. One of the means used for line compensation is the addition of capacitive elements in series with the line. This paper presented a thyristor-controlled series capacitor (TCSC) model that used impedance as reference, had individual controls for each phase, included a linearization module and considered only the fundamental frequency for impedance computations, without using any filter. The model's dynamic behavior was validated by applying it to the second benchmark model for subsynchronous resonance (SSR). Simulation results from the proposed model, obtained using EMTP-RV and SimPowerSystems, were presented. It was concluded that SSR was mitigated by the proposed approach. 19 refs., 19 figs.

  9. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Giebel, G.; Landberg, L. [Risoe National Lab., Roskilde (Denmark); Madsen, H.; Nielsen, H.A. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as better possibilities to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: Extended Kalman Filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
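The adaptive MOS correction described here can be sketched with a standard recursive least squares update with a forgetting factor: regress the observed wind speed on the NWP forecast, discounting old data so the coefficients track slow changes in the model bias. All numbers and the linear forecast-to-observation relationship below are synthetic illustrations, not data from the paper.

```python
def rls_update(theta, P, x, y, lam=0.99):
    """One recursive-least-squares step with forgetting factor lam < 1.
    Updates coefficients theta so that theta . x tracks observation y."""
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [v / denom for v in Px]                      # gain vector
    err = y - sum(theta[i] * x[i] for i in range(n)) # prediction error
    theta = [theta[i] + k[i] * err for i in range(n)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# Correct hypothetical NWP forecasts f against observations obs = 1.2 + 0.8*f.
theta, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
for t in range(200):
    f = (t % 10) + 1.0            # synthetic NWP wind-speed forecast (m/s)
    obs = 1.2 + 0.8 * f           # synthetic local observation
    theta, P = rls_update(theta, P, [1.0, f], obs)
# theta converges towards the true bias correction [1.2, 0.8].
```

The forgetting factor is the key adaptive ingredient: with lam = 1 this reduces to ordinary recursive least squares, while lam < 1 lets the correction follow NWP model changes over time.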

  10. Applying Meta-Analysis to Structural Equation Modeling

    Science.gov (United States)

    Hedges, Larry V.

    2016-01-01

    Structural equation models play an important role in the social sciences. Consequently, there is an increasing use of meta-analytic methods to combine evidence from studies that estimate the parameters of structural equation models. Two approaches are used to combine evidence from structural equation models: A direct approach that combines…

  11. An applied research model for the sport sciences.

    Science.gov (United States)

    Bishop, David

    2008-01-01

    Sport science can be thought of as a scientific process used to guide the practice of sport with the ultimate aim of improving sporting performance. However, despite this goal, the general consensus is that the translation of sport-science research to practice is poor. Furthermore, researchers have been criticised for failing to study problems relevant to practitioners and for disseminating findings that are difficult to implement within a practical setting. This paper proposes that the situation may be improved by the adoption of a model that guides the direction of research required to build our evidence base about how to improve performance. Central to the Applied Research Model for the Sport Sciences (ARMSS) described in this report is the idea that only research leading to practices that can and will be adopted can improve sporting performance. The eight stages of the proposed model are (i) defining the problem; (ii) descriptive research; (iii) predictors of performance; (iv) experimental testing of predictors; (v) determinants of key performance predictors; (vi) efficacy studies; (vii) examination of barriers to uptake; and (viii) implementation studies in a real sporting setting. It is suggested that, from the very inception, researchers need to consider how their research findings might ultimately be adapted to the intended population, in the actual sporting setting, delivered by persons with diverse training and skills, and using the available resources. It is further argued in the model that a greater understanding of the literature and more mechanistic studies are essential to inform subsequent research conducted in real sporting settings. The proposed ARMSS model therefore calls for a fundamental change in the way in which many sport scientists think about the research process. While there is no guarantee that application of this proposed research model will improve actual sports performance, anecdotal evidence suggests that sport-science research is

  12. Problems of Applying the Individual Differentiated Approach in Teaching English

    Directory of Open Access Journals (Sweden)

    Madina Zh. Tussupbekova

    2011-12-01

    Full Text Available The transformation to the new multilevel system of higher education in Kazakhstan requires introducing an individual differentiated approach in teaching English. The goal and task of teaching English in higher education institutions is the practical acquisition of colloquial and professional speech for active use in real-life and professional conversation.

  13. Applied approach slab settlement research, design/construction : final report.

    Science.gov (United States)

    2013-08-01

    Approach embankment settlement is a pervasive problem in Oklahoma and many other states. The bump and/or abrupt slope change poses a danger to traffic and can cause increased dynamic loads on the bridge. Frequent and costly maintenance may be needed ...

  14. Applying Socio-Semiotics to Organizational Communication: A New Approach.

    Science.gov (United States)

    Cooren, Francois

    1999-01-01

    Argues that a socio-semiotic approach to organizational communication opens up a middle course leading to a reconciliation of the functionalist and interpretive movements. Outlines and illustrates three premises to show how they enable scholars to reconceptualize the opposition between functionalism and interpretivism. Concludes that organizations…

  15. Dialogical Approach Applied in Group Counselling: Case Study

    Science.gov (United States)

    Koivuluhta, Merja; Puhakka, Helena

    2013-01-01

    This study utilizes structured group counselling and a dialogical approach to develop a group counselling intervention for students beginning a computer science education. The study assesses the outcomes of group counselling from the standpoint of the development of the students' self-observation. The research indicates that group counselling…

  16. Conflicts, development and natural resources : An applied game theoretic approach

    NARCIS (Netherlands)

    Wick, A.K.

    2008-01-01

    This thesis also provides a critical view on a part of preceding resource curse results, namely the negative association between resources and economic performance. Arguing that the empirical literature on the topic up until now has ignored serious econometric concerns, a different approach is

  17. Major accident prevention through applying safety knowledge management approach.

    Science.gov (United States)

    Kalatpour, Omid

    2016-01-01

    Many scattered resources of knowledge are available for chemical accident prevention purposes. The common approach to process safety management, including using databases and referring to the available knowledge, has some drawbacks. The main goal of this article was to devise a new knowledge base (KB) for the chemical accident prevention domain. The scattered sources of safety knowledge were identified and scanned. Then, the collected knowledge was formalized through a computerized program. The Protégé software was used to formalize and represent the stored safety knowledge. The domain knowledge, as well as related data and information, could then be retrieved. This approach improved the safety and health knowledge management (KM) process and resolved some typical problems in the KM process. Upgrading traditional safety databases into KBs can improve the interaction between users and the knowledge repository.

  18. The effective action approach applied to nuclear matter (2)

    International Nuclear Information System (INIS)

    Tran Huu Phat; Nguyen Tuan Ahn.

    1997-05-01

    Within the framework of the effective action approach we present numerical calculations based on the approximation in which all interacting meson propagators are replaced by their free counterparts. This is an improved Hartree-Fock (HF) approximation, since it contains both the quantum corrections to the mean-field theory and higher order effects beyond the traditional HF method. (author). 6 refs, 5 figs, 1 tab

  19. A Multi-Criterion Evolutionary Approach Applied to Phylogenetic Reconstruction

    OpenAIRE

    Cancino, W.; Delbem, A.C.B.

    2010-01-01

    In this paper, we proposed an MOEA approach, called PhyloMOEA, which solves the phylogenetic inference problem using maximum parsimony and maximum likelihood criteria. PhyloMOEA's development was motivated by several studies in the literature (Huelsenbeck, 1995; Jin & Nei, 1990; Kuhner & Felsenstein, 1994; Tateno et al., 1994), which point out that various phylogenetic inference methods lead to inconsistent solutions. Techniques using parsimony and likelihood criteria yield different tr...
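The maximum parsimony criterion mentioned in this abstract can be illustrated with Fitch's small-parsimony algorithm, which counts the minimum number of state changes a given tree requires for one character. The toy tree and states below are invented for illustration; this is the scoring criterion, not PhyloMOEA itself.

```python
def fitch(tree, leaf_states):
    """Minimum number of state changes on a rooted binary tree for one
    character site. tree: nested 2-tuples of leaf names; leaf_states:
    mapping from leaf name to character state."""
    changes = 0
    def walk(node):
        nonlocal changes
        if isinstance(node, str):             # leaf: its observed state
            return {leaf_states[node]}
        left, right = walk(node[0]), walk(node[1])
        if left & right:                      # states agree: keep overlap
            return left & right
        changes += 1                          # disagreement: one change
        return left | right
    walk(tree)
    return changes

# Hypothetical 5-taxon tree and one nucleotide site.
tree = ((("a", "b"), "c"), ("d", "e"))
states = {"a": "A", "b": "A", "c": "G", "d": "G", "e": "G"}
score = fitch(tree, states)                   # -> 1 change (A -> G once)
```

A parsimony-based search (evolutionary or otherwise) would score many candidate trees this way and prefer those with the fewest total changes across all sites.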

  20. Applying a Problem Based Learning Approach to Land Management Education

    DEFF Research Database (Denmark)

    Enemark, Stig

    Land management covers a wide range of activities associated with the management of land and natural resources that are required to fulfil political objectives and achieve sustainable development. This paper presents an overall understanding of the land management paradigm and the benefits of good...... land governance to society. A land administration system provides a country with the infrastructure to implement land-related policies and land management strategies. By applying this land management profile to surveying education, this paper suggests that there is a need to move away from an exclusive...

  1. Applying open source innovation approaches in developing business innovation

    DEFF Research Database (Denmark)

    Aagaard, Annabeth; Lindgren, Peter

    2015-01-01

    More and more companies are pursuing continuous innovation through different types of open source innovation and across different partners. The growing interest in open innovation (OI) originates from the academic community as well as from practitioners, motivating further investigation...... and managed effectively in developing business model innovation. The aim of this paper is therefore to close this research gap and provide new knowledge within the research field of OI and OI applications. Thus, in the present study we explore the facilitation and management of open source innovation...... in developing business model innovation in the context of an international OI contest across five international case companies. The findings reveal six categories of key antecedents in effective facilitation and management of OI in developing business model innovation....

  2. Cortical complexity in bipolar disorder applying a spherical harmonics approach.

    Science.gov (United States)

    Nenadic, Igor; Yotter, Rachel A; Dietzek, Maren; Langbein, Kerstin; Sauer, Heinrich; Gaser, Christian

    2017-05-30

    Recent studies using surface-based morphometry of structural magnetic resonance imaging data have suggested that some changes in bipolar disorder (BP) might be neurodevelopmental in origin. We applied a novel analysis of cortical complexity based on fractal dimensions in high-resolution structural MRI scans of 18 bipolar disorder patients and 26 healthy controls. Our region-of-interest based analysis revealed increases in fractal dimensions (in patients relative to controls) in left lateral orbitofrontal cortex and right precuneus, and decreases in right caudal middle frontal, entorhinal cortex, and right pars orbitalis, and left fusiform and posterior cingulate cortices. While our analysis is preliminary, it suggests that early neurodevelopmental pathologies might contribute to bipolar disorder, possibly through genetic mechanisms. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
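The fractal dimension used in this study is computed from spherical harmonic reconstructions of the cortical surface; a common and much simpler analogue, shown here purely for illustration, is the box-counting estimator, where the dimension is the slope of log N(eps) against log(1/eps) for the number of boxes N(eps) of size eps needed to cover the set. The point set below is synthetic.

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate the fractal dimension of a 2-D point set as the
    least-squares slope of log N(eps) versus log(1/eps)."""
    logs = []
    for eps in sizes:
        # Count distinct grid cells of side eps occupied by the points.
        boxes = {(int(x / eps), int(y / eps)) for x, y in points}
        logs.append((math.log(1 / eps), math.log(len(boxes))))
    n = len(logs)
    mx = sum(u for u, _ in logs) / n
    my = sum(v for _, v in logs) / n
    return (sum((u - mx) * (v - my) for u, v in logs)
            / sum((u - mx) ** 2 for u, _ in logs))

# Synthetic check: points on a straight line have dimension 1.
line_pts = [(i / 1000.0, i / 1000.0) for i in range(1000)]
dim = box_counting_dimension(line_pts, [0.1, 0.05, 0.02, 0.01])   # -> ~1.0
```

A smooth surface patch would score near 2, while a highly folded cortex scores somewhere between 2 and 3 in the 3-D version of the same idea, which is what the group differences in the study quantify.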

  3. Hierarchic stochastic modelling applied to intracellular Ca(2+) signals.

    Directory of Open Access Journals (Sweden)

    Gregor Moenke

    Full Text Available Important biological processes like cell signalling and gene expression have noisy components and are very complex at the same time. Mathematical analysis of such systems has often been limited to the study of isolated subsystems, or approximations are used that are difficult to justify. Here we extend a recently published method (Thurley and Falcke, PNAS 2011) which is formulated in observable system configurations instead of molecular transitions. This reduces the number of system states by several orders of magnitude and avoids fitting of kinetic parameters. The method is applied to Ca(2+) signalling. Ca(2+) is a ubiquitous second messenger transmitting information by stochastic sequences of concentration spikes, which arise by coupling of subcellular Ca(2+) release events (puffs). We derive analytical expressions for a mechanistic Ca(2+) model, based on recent data from live cell imaging, and calculate Ca(2+) spike statistics in dependence on cellular parameters like stimulus strength or number of Ca(2+) channels. The new approach substantiates a generic Ca(2+) model, which is a very convenient way to simulate Ca(2+) spike sequences with correct spiking statistics.
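Stochastic Ca(2+) spike sequences of the kind described here can be mimicked, at the very simplest level, by a renewal process: each interspike interval is a deterministic refractory period plus an exponentially distributed stochastic waiting time. This is a generic sketch with made-up parameters, not the paper's hierarchic puff-coupling model.

```python
import random

def simulate_spike_times(rate, t_ref, n_spikes, seed=1):
    """Generate Ca2+ spike times: each interspike interval is a fixed
    refractory period t_ref plus an exponential waiting time (mean 1/rate)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_spikes):
        t += t_ref + rng.expovariate(rate)
        times.append(t)
    return times

# Hypothetical parameters: 5 s refractory period, mean stochastic wait 10 s.
times = simulate_spike_times(rate=0.1, t_ref=5.0, n_spikes=20000)
isis = [b - a for a, b in zip([0.0] + times[:-1], times)]
mean_isi = sum(isis) / len(isis)    # ~ t_ref + 1/rate = 15 s
```

Changing the stimulus strength in such a model shifts the mean and variance of the waiting time, which is the kind of dependence of spike statistics on cellular parameters the paper derives analytically.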

  4. Nonlinear Eddy Viscosity Models applied to Wind Turbine Wakes

    DEFF Research Database (Denmark)

    Laan, van der, Paul Maarten; Sørensen, Niels N.; Réthoré, Pierre-Elouan

    2013-01-01

    The linear k−ε eddy viscosity model and modified versions of two existing nonlinear eddy viscosity models are applied to single wind turbine wake simulations using a Reynolds Averaged Navier-Stokes code. Results are compared with field wake measurements. The nonlinear models give better results...

  5. Undiscovered resource evaluation: Towards applying a systematic approach to uranium

    International Nuclear Information System (INIS)

    Fairclough, M.; Katona, L.

    2014-01-01

    Evaluations of potential mineral resource supply range from spatial to aspatial, and everything in between, across a range of scales. They also range from qualitative to quantitative, with similar hybrid examples across the spectrum. These can comprise detailed deposit-specific reserve and resource calculations, target generative processes and estimates of potential endowments in a broad geographic or geological area. All are estimates until the ore has been discovered and extracted. Contemporary national or provincial scale evaluations of mineral potential are relatively advanced and some include uranium, such as those for South Australia undertaken by the State Geological Survey. These play an important role in land-use planning as well as attracting exploration investment, and range from data- to knowledge-driven approaches. Studies have been undertaken for the Mt Painter region, as well as for adjacent basins. The process of estimating large-scale potential mineral endowments is critical for national and international planning purposes but is a relatively recent and less common undertaking. In many cases, except at a general level, the data and knowledge for a relatively immature terrain are lacking, requiring assessment by analogy with other areas. Commencing in the 1980s, the United States Geological Survey, and subsequently the Geological Survey of Canada, evaluated a range of commodities ranging from copper to hydrocarbons with a view to security of supply. They developed innovative approaches to, as far as practical, reduce the uncertainty and maximise the reproducibility of the calculations in information-poor regions. Yet the approach to uranium was relatively ad hoc and incomplete (such as the US Department of Energy NURE project). Other historic attempts, such as the IAEA-NEA International Uranium Resource Evaluation Project (IUREP) in the 1970s, were mainly qualitative.
While there is still no systematic global evaluation of undiscovered uranium resources

  6. Software engineering techniques applied to agricultural systems an object-oriented and UML approach

    CERN Document Server

    Papajorgji, Petraq J

    2014-01-01

    Software Engineering Techniques Applied to Agricultural Systems presents cutting-edge software engineering techniques for designing and implementing better agricultural software systems based on the object-oriented paradigm and the Unified Modeling Language (UML). The focus is on the presentation of rigorous step-by-step approaches for modeling flexible agricultural and environmental systems, starting with a conceptual diagram representing elements of the system and their relationships. Furthermore, diagrams such as sequence and collaboration diagrams are used to explain the dynamic and static aspects of the software system. This second edition includes: a new chapter on the Object Constraint Language (OCL), a new section dedicated to the Model-View-Controller (MVC) design pattern, new chapters presenting details of two MDA-based tools (the Virtual Enterprise and Olivia Nova), and a new chapter with exercises on conceptual modeling. It may be highly useful to undergraduate and graduate students as t...

  7. 7. Professor of Art Paradigms: Theoretical and Applied Approaches

    Directory of Open Access Journals (Sweden)

    Bularga Tatiana

    2017-03-01

    Full Text Available The present article describes the purposes, learning content and requirements of an academic and postgraduate educational process (internships for teachers), focused on teacher training with respect to the most subtle and valuable framework for education: the achievement of the individual potential of each pupil, regarded as a unique personality. It then proposes a synthesis of a formative program geared towards the assimilation, by future and current teachers of artistic disciplines (music, choreography, painting), of the action and behavioural models appropriate to the domain, for the effective organization of an individualized educational process.

  8. Introduction to semiconductor lasers for optical communications an applied approach

    CERN Document Server

    Klotzkin, David J

    2014-01-01

    This textbook provides a thorough and accessible treatment of semiconductor lasers from a design and engineering perspective. It includes both the physics of devices and the engineering, design, and testing of practical lasers. The material is presented clearly with many examples provided. Readers of the book will come to understand the finer aspects of the theory, design, fabrication, and test of these devices and gain an excellent background for further study of optoelectronics. This book also: provides a multi-faceted approach to explaining the theories behind semiconductor lasers, utilizing mathematical examples, illustrations, and written theoretical presentations; offers a balance of relevant optoelectronic topics, with specific attention given to distributed feedback lasers, growth techniques, and waveguide cavity design; provides a summary of every chapter, worked examples, and problems for readers to solve; emphasizes...

  9. Quantum particle swarm approaches applied to combinatorial problems

    International Nuclear Information System (INIS)

    Nicolau, Andressa dos S.; Schirru, Roberto; Lima, Alan M.M. de

    2017-01-01

    Quantum Particle Swarm Optimization (QPSO) is a global convergence algorithm that combines the classical PSO philosophy and quantum mechanics to improve the performance of PSO. Unlike PSO, it has only the 'measurement' of the position equation for all particles. The process of 'measurement' in quantum mechanics obeys classical laws, while the particle itself follows quantum rules. QPSO works like PSO in search ability but has fewer control parameters. In order to improve QPSO performance, some strategies have been proposed in the literature. Weighted QPSO (WQPSO) is a version of QPSO in which a weight parameter is inserted into the calculation of the balance between the global and local searching of the algorithm. It has been shown to perform well in finding the optimal solutions for many optimization problems. In this article, random confinement is introduced in WQPSO. The WQPSO with random confinement was tested on two combinatorial problems. First, we executed the model on the Travelling Salesman Problem (TSP) to find the parameter values resulting in good solutions in general. Finally, the model was tested on the Nuclear Reactor Reload Problem, and its performance was compared with standard QPSO. (author)

  10. Quantum particle swarm approaches applied to combinatorial problems

    Energy Technology Data Exchange (ETDEWEB)

    Nicolau, Andressa dos S.; Schirru, Roberto; Lima, Alan M.M. de, E-mail: andressa@lmp.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2017-07-01

    Quantum Particle Swarm Optimization (QPSO) is a global convergence algorithm that combines the classical PSO philosophy and quantum mechanics to improve the performance of PSO. Unlike PSO, it has only the 'measurement' of the position equation for all particles. The process of 'measurement' in quantum mechanics obeys classical laws, while the particle itself follows quantum rules. QPSO works like PSO in search ability but has fewer control parameters. In order to improve QPSO performance, some strategies have been proposed in the literature. Weighted QPSO (WQPSO) is a version of QPSO in which a weight parameter is inserted into the calculation of the balance between the global and local searching of the algorithm. It has been shown to perform well in finding the optimal solutions for many optimization problems. In this article, random confinement is introduced in WQPSO. The WQPSO with random confinement was tested on two combinatorial problems. First, we executed the model on the Travelling Salesman Problem (TSP) to find the parameter values resulting in good solutions in general. Finally, the model was tested on the Nuclear Reactor Reload Problem, and its performance was compared with standard QPSO. (author)
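The QPSO 'measurement' of the position equation referred to in these two records is commonly written as x_new = p +/- beta * |mbest - x| * ln(1/u), where p is a random attractor between the particle's personal best and the global best, mbest is the swarm's mean best position, and u is uniform on (0, 1). A one-dimensional sketch of that single update follows; all parameter values are illustrative.

```python
import math
import random

def qpso_position(x, pbest, gbest, mbest, beta, rng):
    """One QPSO 'measurement' of a one-dimensional particle position:
    the particle collapses around a random attractor p between pbest and
    gbest, with a spread controlled by beta * |mbest - x|."""
    phi, u = rng.random(), rng.random()
    p = phi * pbest + (1.0 - phi) * gbest          # random attractor
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return p + sign * beta * abs(mbest - x) * math.log(1.0 / u)

rng = random.Random(0)
# With beta = 0 the new position is exactly the attractor p in [pbest, gbest].
x_new = qpso_position(x=0.0, pbest=1.0, gbest=2.0, mbest=1.5, beta=0.0, rng=rng)
```

The single parameter beta (the contraction-expansion coefficient) is what "fewer control parameters" refers to: it replaces PSO's inertia and acceleration coefficients, and WQPSO's weight modifies how mbest is formed.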

  11. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni

    2016-01-01

    In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in an application requiring precision manoeuvres in a complex environment. Therefore, some additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One of the contributions of this work is to design and develop a novel technique for estimating feature depth which is based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Due to the above assumption, the overall problem is simplified and focused on the position estimation of the aerial vehicle. Also, the tracking process of visual features is made easier due to the stabilized video. Another contribution of this work is to demonstrate that the integration of very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
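The triangulation of feature depth can be illustrated, in its simplest deterministic form, as depth from parallax between two camera positions: depth = baseline * focal_length / disparity. The paper's stochastic technique refines this idea to cope with noisy measurements; the numbers below are hypothetical.

```python
def depth_from_parallax(baseline_m, focal_px, u1, u2):
    """Depth of a tracked feature seen from two camera positions separated
    by a known horizontal baseline (simplified deterministic triangulation).
    u1, u2: the feature's image x-coordinates (pixels) in the two views."""
    disparity = u1 - u2               # pixel shift of the feature
    if abs(disparity) < 1e-9:
        raise ValueError("no parallax: depth is unobservable")
    return baseline_m * focal_px / disparity

# Hypothetical numbers: 0.5 m baseline, 500 px focal length, 10 px disparity.
depth = depth_from_parallax(0.5, 500.0, 110.0, 100.0)   # -> 25.0 m
```

The inverse relationship between disparity and depth is why nearby features are well constrained while distant ones have large depth uncertainty, which is exactly what a stochastic (delayed) initialization of landmarks is designed to handle.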

  12. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguia

    Full Text Available In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the position estimation problem cannot be solved in some scenarios, even when a GPS signal is available, for instance in applications requiring precision manoeuvres in a complex environment. Therefore, additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One contribution of this work is the design and development of a novel technique for estimating feature depth, based on a stochastic technique of triangulation. In the proposed method the camera is mounted on a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Under this assumption, the overall problem is simplified and focused on the position estimation of the aerial vehicle. The tracking of visual features is also made easier by the stabilized video. Another contribution of this work is to demonstrate that integrating very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.

  13. The effective action approach applied to nuclear matter (1)

    International Nuclear Information System (INIS)

    Tran Huu Phat; Nguyen Tuan Anh.

    1996-11-01

    Within the framework of the Walecka model (QHD-I), the application of the Cornwall-Jackiw-Tomboulis (CJT) effective action to nuclear matter is presented. The main feature is the treatment of the meson condensates for a system at finite nuclear density. The system of coupled Schwinger-Dyson (SD) equations is derived. It is shown that SD equations for sigma-omega mixings are absent in this formalism. Instead, the energy density of the nuclear ground state explicitly contains the contributions from the ring diagrams, among others. In the bare-vertex approximation, the expression for the energy density is written down for numerical computation in the next paper. (author). 14 refs, 3 figs

  14. Applying the health action process approach (HAPA) to the choice of health products: An exploratory study

    DEFF Research Database (Denmark)

    Krutulyte, Rasa; Grunert, Klaus G.; Scholderer, Joachim

    This paper presents the results of a qualitative pilot study that aimed to uncover Danish consumers' motives for choosing health food. Schwarzer's (1992) health action process approach (HAPA) was applied to understand the process by which people choose health products. The research focused on the role of behavioural intention predictors such as risk perception, outcome expectations and self-efficacy. The model proved to be a useful framework for understanding how consumers choose health food and is relevant to further applications on dietary choice issues.

  15. Semantic Approaches Applied to Scientific Ocean Drilling Data

    Science.gov (United States)

    Fils, D.; Jenkins, C. J.; Arko, R. A.

    2012-12-01

    The application of Linked Open Data methods to 40 years of data from scientific ocean drilling is providing users with several new methods for rich-content data search and discovery. Data from the Deep Sea Drilling Project (DSDP), Ocean Drilling Program (ODP) and Integrated Ocean Drilling Program (IODP) have been translated and placed in RDF triple stores to provide access via SPARQL, linked open data patterns, and embedded structured data through schema.org / RDFa. Existing search services have been re-encoded in this environment, which allows the new and established architectures to be contrasted. Vocabularies, including computed semantic relations between concepts, allow separate but related data sets to be connected through their concepts and resources even when these are expressed somewhat differently. Scientific ocean drilling produces a wide range of data types and data sets: borehole logging file-based data, images, measurements, visual observations and physical sample data. The steps involved in connecting these data to concepts using vocabularies will be presented, including the connection of data sets through the Vocabulary of Interlinked Datasets (VoID) and open entity collections such as Freebase and dbPedia. Demonstrated examples will include: (i) using RDF Schema for inferencing and in federated searches across NGDC and IODP data, (ii) using structured data in the data.oceandrilling.org web site, (iii) association through semantic methods of age models and depth-recorded data to facilitate age-based searches for data recorded by depth only.
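
    The triple-pattern matching that underlies SPARQL queries over such stores can be sketched in a few lines of plain Python. This is a toy stand-in for an RDF store, and the identifiers below are invented for illustration; they are not real DSDP/IODP resource IRIs.

```python
# A tiny in-memory "triple store": (subject, predicate, object) tuples.
# All identifiers here are hypothetical examples, not actual drilling-data IRIs.
triples = {
    ("dsdp:leg96-core3", "rdf:type", "odp:Core"),
    ("dsdp:leg96-core3", "odp:lithology", "concept:Turbidite"),
    ("iodp:exp308-u1322", "odp:lithology", "concept:Turbidite"),
    ("concept:Turbidite", "skos:broader", "concept:SedimentaryDeposit"),
}

def match(pattern, store):
    """Match one (s, p, o) pattern; None plays the role of a SPARQL variable."""
    s, p, o = pattern
    return [(ts, tp, to) for ts, tp, to in store
            if (s is None or s == ts)
            and (p is None or p == tp)
            and (o is None or o == to)]

# "Which records, from any drilling program, share the lithology concept?"
hits = sorted(t[0] for t in match((None, "odp:lithology", "concept:Turbidite"),
                                  triples))
```

    The point of the shared vocabulary is visible even at this scale: records from two different programs are joined purely through the common concept term.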

  16. An extended gravity model with substitution applied to international trade

    NARCIS (Netherlands)

    Bikker, J.A.

    The traditional gravity model has been applied many times to international trade flows, especially in order to analyze trade creation and trade diversion. However, there are two fundamental objections to the model: it cannot describe substitution between flows, and it lacks a cogent theoretical foundation.

  17. Dealing with missing predictor values when applying clinical prediction models.

    NARCIS (Netherlands)

    Janssen, K.J.; Vergouwe, Y.; Donders, A.R.T.; Harrell Jr, F.E.; Chen, Q.; Grobbee, D.E.; Moons, K.G.

    2009-01-01

    BACKGROUND: Prediction models combine patient characteristics and test results to predict the presence of a disease or the occurrence of an event in the future. In the event that test results (predictors) are unavailable, a strategy is needed to help users applying a prediction model to deal with such missing values.

  18. Fuzzy Control Technique Applied to Modified Mathematical Model ...

    African Journals Online (AJOL)

    In this paper, fuzzy control technique is applied to the modified mathematical model for malaria control presented by the authors in an earlier study. Five Mamdani fuzzy controllers are constructed to control the input (some epidemiological parameters) to the malaria model simulated by 9 fully nonlinear ordinary differential equations.

  19. A new approach for developing adjoint models

    Science.gov (United States)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternative approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies.
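
    The 'tape' idea, recording each operation and its dependencies so gradients can later be propagated backwards, can be illustrated with a minimal reverse-mode sketch. This is a generic AD toy in scalar form, not the library the abstract describes, which operates at the level of whole linear solves.

```python
import math

class Var:
    """Tiny reverse-mode AD sketch: every operation records its parents and
    local derivatives, i.e. an implicit 'tape' of dependencies."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0
    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])
    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])
    def sin(self):
        return Var(math.sin(self.value), [(self, math.cos(self.value))])
    def backward(self):
        # Simple stack traversal; sufficient for this tree-shaped expression.
        # A full implementation would process nodes in reverse topological order.
        self.grad = 1.0
        stack = [self]
        while stack:
            node = stack.pop()
            for parent, local in node.parents:
                parent.grad += local * node.grad
                stack.append(parent)

x = Var(2.0)
y = x * x + x.sin()   # f(x) = x^2 + sin(x)
y.backward()          # accumulates df/dx = 2x + cos(x) into x.grad
```

    Raising the abstraction from such primitive operations to whole operators and linear solves is exactly the move the paragraph above advocates.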

  20. Applying a Conceptual Model in Sport Sector Work-Integrated Learning Contexts

    Science.gov (United States)

    Agnew, Deborah; Pill, Shane; Orrell, Janice

    2017-01-01

    This paper applies a conceptual model for work-integrated learning (WIL) in a multidisciplinary sports degree program. Two examples of WIL in sport will be used to illustrate how the conceptual WIL model is being operationalized. The implications for practice are that curriculum design must recognize a highly flexible approach to the nature of…

  1. Distribution function approach to redshift space distortions. Part IV: perturbation theory applied to dark matter

    Science.gov (United States)

    Vlah, Zvonimir; Seljak, Uroš; McDonald, Patrick; Okumura, Teppei; Baldauf, Tobias

    2012-11-01

    We develop a perturbative approach to redshift space distortions (RSD) using the phase space distribution function approach and apply it to the dark matter redshift space power spectrum and its moments. RSD can be written as a sum over density-weighted velocity moment correlators, with the lowest order being density, momentum density and stress energy density. We use standard and extended perturbation theory (PT) to determine their auto and cross correlators, comparing them to N-body simulations. We show which of the terms can be modeled well with standard PT and which need additional terms that include higher order corrections which cannot be modeled in PT. Most of these additional terms are related to small scale velocity dispersion effects, the so-called finger of god (FoG) effects, which affect some, but not all, of the terms in this expansion, and which can be approximately modeled using a simple physically motivated ansatz such as the halo model. We point out that there are several velocity dispersions that enter into the detailed RSD analysis with very different amplitudes, which can be approximately predicted by the halo model. In contrast to previous models, our approach systematically includes all of the terms at a given order in PT and provides a physical interpretation for the small scale dispersion values. We investigate the RSD power spectrum as a function of μ, the cosine of the angle between the Fourier mode and the line of sight, focusing on the lowest order powers of μ and the multipole moments which dominate the observable RSD power spectrum. Overall we find considerable success in modeling many, but not all, of the terms in this expansion. This is similar to the situation in real space, but predicting the power spectrum in redshift space is more difficult because of the explicit influence of small scale dispersion-type effects in RSD, which extend to very large scales.
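
    As a point of reference for the expansion described above: at lowest (linear) order, where only the density and momentum-density moments contribute, the moment sum reduces for dark matter to the familiar Kaiser limit, which the higher-order and dispersion terms then correct.

```latex
% Lowest-order (linear, Kaiser) limit of the redshift-space power spectrum:
P^{ss}(k,\mu) = \left(1 + f\mu^{2}\right)^{2} P_{\delta\delta}(k),
% where f = d\ln D / d\ln a is the linear growth rate, \mu = \hat{k}\cdot\hat{n},
% and P_{\delta\delta}(k) is the real-space matter power spectrum.
```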

  2. Continuum modeling an approach through practical examples

    CERN Document Server

    Muntean, Adrian

    2015-01-01

    This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws and (3) modeling boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced with nonstandard rheological topics like those typically arising in the food industry.

  3. LEARNING SEMANTICS-ENHANCED LANGUAGE MODELS APPLIED TO UNSUPERVISED WSD

    Energy Technology Data Exchange (ETDEWEB)

    VERSPOOR, KARIN [Los Alamos National Laboratory; LIN, SHOU-DE [Los Alamos National Laboratory

    2007-01-29

    An N-gram language model aims at capturing statistical syntactic word order information from corpora. Although the concept of language models has been applied extensively to handle a variety of NLP problems with reasonable success, the standard model does not incorporate semantic information, and consequently limits its applicability to semantic problems such as word sense disambiguation. We propose a framework that integrates semantic information into the language model schema, allowing a system to exploit both syntactic and semantic information to address NLP problems. Furthermore, acknowledging the limited availability of semantically annotated data, we discuss how the proposed model can be learned without annotated training examples. Finally, we report on a case study showing how the semantics-enhanced language model can be applied to unsupervised word sense disambiguation with promising results.
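
    A toy example of the purely syntactic statistics a standard n-gram model captures, and of their semantic blindness, which motivates the semantics-enhanced extension above. The corpus and smoothing choice are illustrative only.

```python
from collections import Counter

# Minimal bigram language model over a toy corpus, with add-one smoothing.
corpus = "the bank approved the loan the river bank flooded the bank".split()
bigrams = Counter(zip(corpus, corpus[1:]))   # word-pair counts
unigrams = Counter(corpus)                   # word counts
vocab = len(unigrams)                        # vocabulary size for smoothing

def prob(w2, w1):
    """P(w2 | w1) with add-one (Laplace) smoothing."""
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)

# Pure word-order statistics: "bank" is a likelier continuation of "the"
# than "flooded" -- but nothing here distinguishes the two senses of "bank".
p_bank = prob("bank", "the")
p_flooded = prob("flooded", "the")
```

    The model ranks continuations by frequency alone; disambiguating the financial from the riverside sense of "bank" requires exactly the kind of semantic information the proposed framework injects.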

  4. Towards new approaches in phenological modelling

    Science.gov (United States)

    Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas

    2014-05-01

    For many decades, the modelling of phenological stages has been based on temperature sums, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. Now, there is a growing body of opinion that asks for new methods in phenological modelling and more in-depth studies on dormancy release of woody plants. This requirement is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, a number of parameters in phenological models still need to be optimised against observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). The limiting factor for a fundamental improvement of these models is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution for this problem and allow both the validation of currently used phenological models and the development of mechanistic approaches. In order to develop such models, changes in metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr.) and high-resolution (weekly probes between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot study on sweet cherries. The suggested methodology is not limited to the flowering of fruit trees; it can also be applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
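
    The growing degree-day concept attributed to Reaumur can be stated in a few lines: accumulate daily mean temperature above a base threshold until a fitted forcing requirement is reached. The base temperature and the example values below are arbitrary illustrative choices, not fitted parameters.

```python
def growing_degree_days(daily_min_max, t_base=5.0):
    """Classic forcing sum: accumulate daily mean temperature (deg C)
    above a base threshold t_base (illustrative value)."""
    total = 0.0
    for t_min, t_max in daily_min_max:
        mean = (t_min + t_max) / 2.0
        total += max(0.0, mean - t_base)   # days below t_base contribute nothing
    return total

# One week of (min, max) temperatures; a phenological stage is predicted
# when the running sum crosses a species-specific forcing requirement.
week = [(2, 8), (4, 12), (6, 14), (1, 5), (7, 15)]
gdd = growing_degree_days(week)
```

    Semi-mechanistic models wrap exactly such sums in chilling and forcing sub-models whose parameters are optimised against observations, which is the practice the text above seeks to move beyond.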

  5. An Equilibrium Assembly Model Applied to Murine Polyomavirus

    OpenAIRE

    Keef, T.

    2005-01-01

    In Keef et al., Assembly Models for Papovaviridae based on Tiling Theory (submitted to J. Phys. Biol.), 2005 [1], we extended an equilibrium assembly model to the (pseudo-)T = 7 viral capsids in the family of Papovaviridae, providing assembly pathways for the most likely or primary intermediates and computing their concentrations. Here this model is applied to Murine Polyomavirus based on the association energies provided by the VIPER web page (Reddy et al., "Virus particle explorer (VIPER), a web...").

  6. Analytic model of Applied-B ion diode impedance behavior

    International Nuclear Information System (INIS)

    Miller, P.A.; Mendel, C.W. Jr.

    1987-01-01

    An empirical analysis of impedance data from Applied-B ion diodes used in seven inertial confinement fusion research experiments was published recently. The diodes all operated with impedance values well below the Child's-law value. The analysis uncovered an unusual unifying relationship among data from the different experiments. The analysis suggested that closure of the anode-cathode gap by electrode plasma was not a dominant factor in the experiments, but was not able to elaborate the underlying physics. Here we present a new analytic model of Applied-B ion diodes coupled to accelerators. A critical feature of the diode model is based on magnetic insulation theory. The model successfully describes impedance behavior of these diodes and supports stimulating new viewpoints of the physics of Applied-B ion diode operation

  7. The workshop on ecosystems modelling approaches for South ...

    African Journals Online (AJOL)

    roles played by models in the OMP approach, and raises questions about the costs of the data collection. (in particular) needed to apply a multispecies modelling approach in South African fisheries management. It then summarizes the deliberations of workshops held by the Scientific Committees of two international ma-.

  8. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    Full Text Available In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be solved by introducing a priori estimations. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs, in particular characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
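
    A simplified per-cell version of such Bayesian merging is precision-weighted Gaussian fusion: each DSM height is treated as a noisy observation, and an optional prior (here standing in for the smooth-roof assumption) is combined in the same way. This is a stand-in sketch under strong independence and Gaussian assumptions, not the paper's full model.

```python
def fuse_heights(z1, var1, z2, var2, z_prior=None, var_prior=None):
    """Fuse two DSM height observations of one cell (Gaussian posterior),
    optionally including a prior height estimate."""
    terms = [(z1, var1), (z2, var2)]
    if z_prior is not None:
        terms.append((z_prior, var_prior))
    precision = sum(1.0 / v for _, v in terms)          # posterior precision
    mean = sum(z / v for z, v in terms) / precision     # precision-weighted mean
    return mean, 1.0 / precision                        # posterior mean, variance

# Two noisy observations of the same roof cell plus a weak prior (all values
# in metres; the numbers are invented for illustration):
z, var = fuse_heights(52.0, 1.0, 53.0, 4.0, z_prior=52.5, var_prior=9.0)
```

    The fused variance is always smaller than that of the best single input, which is the formal sense in which merging "improves the quality" of the DSMs.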

  9. Applying different quality and safety models in healthcare improvement work: Boundary objects and system thinking

    International Nuclear Information System (INIS)

    Wiig, Siri; Robert, Glenn; Anderson, Janet E.; Pietikainen, Elina; Reiman, Teemu; Macchi, Luigi; Aase, Karina

    2014-01-01

    A number of theoretical models can be applied to help guide quality improvement and patient safety interventions in hospitals. However, there are often significant differences between such models and, therefore, in their potential contribution when applied in diverse contexts. The aim of this paper is to explore how two such models have been applied by hospitals to improve quality and safety. We describe and compare the models: (1) the Organizing for Quality (OQ) model, and (2) the Design for Integrated Safety Culture (DISC) model. We analyze the theoretical foundations of the models and show, using a retrospective comparative case study approach from two European hospitals, how these models have been applied to improve quality and safety. The analysis shows that differences appear in the theoretical foundations, practical approaches and applications of the models. Nevertheless, the case studies indicate that the choice between the OQ and DISC models is of less importance for guiding the practice of quality and safety improvement work, as both are systemic and share some important characteristics. The main contribution of the models lies in their role as boundary objects directing attention towards organizational and systems thinking, culture, and collaboration.

  10. Applying an Integrated Approach to Vehicle and Crew Scheduling in Practice

    NARCIS (Netherlands)

    R. Freling (Richard); D. Huisman (Dennis); A.P.M. Wagelmans (Albert)

    2000-01-01

    This paper deals with a practical application of an integrated approach to vehicle and crew scheduling that we have developed previously. Computational results have shown that our approach can be applied to problems of practical size. However, application of the approach to the actual

  11. MODELS OF TECHNOLOGY ADOPTION: AN INTEGRATIVE APPROACH

    Directory of Open Access Journals (Sweden)

    Andrei OGREZEANU

    2015-06-01

    Full Text Available The interdisciplinary study of information technology adoption has developed rapidly over the last 30 years. Various theoretical models have been developed and applied, such as the Technology Acceptance Model (TAM), Innovation Diffusion Theory (IDT), and the Theory of Planned Behavior (TPB). The result of these many years of research is thousands of contributions to the field, which, however, remain highly fragmented. This paper develops a theoretical model of technology adoption by integrating major theories in the field: primarily IDT, TAM, and TPB. To do so while avoiding further fragmentation, an approach that goes back to basics in developing independent variable types is proposed, emphasizing: (1) the logic of classification, and (2) the psychological mechanisms behind variable types. Once developed, these types are then populated with variables originating in empirical research. Conclusions are drawn on which types are underpopulated and present potential for future research. I end with a set of methodological recommendations for future application of the model.

  12. Remarks on orthotropic elastic models applied to wood

    Directory of Open Access Journals (Sweden)

    Nilson Tadeu Mascia

    2006-09-01

    Full Text Available Wood is generally considered an anisotropic material. In terms of engineering elastic models, wood is usually treated as an orthotropic material. This paper presents an analysis of two principal anisotropic elastic models that are usually applied to wood. The first one, the linear orthotropic model, where the material axes L (longitudinal), R (radial) and T (tangential) coincide with the Cartesian axes (x, y, z), is the more widely accepted elastic model for wood. The other, the cylindrical orthotropic model, is better suited to the growth characteristics of wood but mathematically more complex to adopt in practical terms. Owing to its importance for the elastic parameters of wood, this paper deals with the influence of fiber orientation in these models through an adequate transformation of coordinates. As a final result, some examples of the linear model are presented, showing the variation of the elastic moduli, i.e., Young's modulus and shear modulus, with fiber orientation.
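
    For the linear orthotropic model, the coordinate transformation referred to above gives a standard closed form for Young's modulus at an angle to the grain. The material constants below are rough, wood-like illustrative values (in MPa), not measurements from the paper.

```python
import math

def youngs_modulus_off_axis(theta_deg, E_L, E_T, G_LT, nu_LT):
    """Off-axis Young's modulus for an orthotropic material loaded at an
    angle theta (degrees) to the longitudinal (grain) direction, from the
    standard compliance transformation."""
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    inv_E = (c**4 / E_L + s**4 / E_T
             + (1.0 / G_LT - 2.0 * nu_LT / E_L) * c**2 * s**2)
    return 1.0 / inv_E

# Illustrative wood-like constants (MPa): stiff along the grain, soft across it.
props = dict(E_L=12000.0, E_T=600.0, G_LT=700.0, nu_LT=0.4)
E0 = youngs_modulus_off_axis(0.0, **props)    # along the grain
E45 = youngs_modulus_off_axis(45.0, **props)  # at 45 degrees
E90 = youngs_modulus_off_axis(90.0, **props)  # across the grain
```

    The rapid drop in stiffness between 0 and 45 degrees is the practical reason fiber orientation matters so much in these models.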

  13. Applying the Flipped Classroom Model to English Language Arts Education

    Science.gov (United States)

    Young, Carl A., Ed.; Moran, Clarice M., Ed.

    2017-01-01

    The flipped classroom method, particularly when used with digital video, has recently attracted many supporters within the education field. Now more than ever, language arts educators can benefit tremendously from incorporating flipped classroom techniques into their curriculum. "Applying the Flipped Classroom Model to English Language Arts…

  14. The limitations of applying rational decision-making models to ...

    African Journals Online (AJOL)

    The aim of this paper is to show the limitations of rational decision-making models as applied to child spacing and more specifically to the use of modern methods of contraception. In the light of factors known to influence low uptake of child spacing services in other African countries, suggestions are made to explain the ...

  15. An applied general equilibrium model for Dutch agribusiness policy analysis

    NARCIS (Netherlands)

    Peerlings, J.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular, the effects on inter-industry transactions, factor demand, income, and trade are of interest.

  16. Knowledge Growth: Applied Models of General and Individual Knowledge Evolution

    Science.gov (United States)

    Silkina, Galina Iu.; Bakanova, Svetlana A.

    2016-01-01

    The article considers the mathematical models of the growth and accumulation of scientific and applied knowledge since it is seen as the main potential and key competence of modern companies. The problem is examined on two levels--the growth and evolution of objective knowledge and knowledge evolution of a particular individual. Both processes are…

  17. Nonlinear Modeling of the PEMFC Based On NNARX Approach

    OpenAIRE

    Shan-Jen Cheng; Te-Jen Chang; Kuang-Hsiung Tan; Shou-Ling Kuo

    2015-01-01

    The Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system. The traditional linear modeling approach struggles to estimate the structure of the PEMFC system correctly. For this reason, this paper presents a nonlinear modeling of the PEMFC using the Neural Network Auto-regressive model with eXogenous inputs (NNARX) approach. The multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accuracy...
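
    The NNARX structure, a regressor of lagged outputs and inputs fed to an MLP, can be sketched as follows. The lag orders, the network size, and the randomly initialized (untrained) weights are illustrative assumptions; a real model would fit them to PEMFC data.

```python
import math, random

def nnarx_regressor(y, u, na=2, nb=2):
    """Build the NNARX input vector [y(t-1..t-na), u(t-1..t-nb)]
    from output history y and input history u (t = present)."""
    t = len(y)
    return ([y[t - i] for i in range(1, na + 1)]
            + [u[t - j] for j in range(1, nb + 1)])

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer perceptron (tanh) mapping the regressor to y_hat."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2

random.seed(0)
na = nb = 2
y_hist = [0.1, 0.2, 0.3, 0.4]   # past cell outputs (e.g. stack voltage, scaled)
u_hist = [1.0, 0.8, 0.9, 1.1]   # past inputs (e.g. current demand, scaled)
x = nnarx_regressor(y_hist, u_hist, na, nb)

hidden = 5
W1 = [[random.uniform(-0.5, 0.5) for _ in range(na + nb)] for _ in range(hidden)]
b1 = [0.0] * hidden
W2 = [random.uniform(-0.5, 0.5) for _ in range(hidden)]
y_hat = mlp_forward(x, W1, b1, W2, 0.0)   # one-step-ahead prediction
```

    Training would adjust W1, b1, W2 to minimize the one-step prediction error over recorded PEMFC trajectories; only the regressor structure is the point here.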

  18. Neural network approaches for noisy language modeling.

    Science.gov (United States)

    Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid

    2013-11-01

    Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with the computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habits, and typing performance. These features are particularly evident in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory, and applies neural networks to one of its specific applications: the typing stream. This paper experimentally uses a neural network approach to analyze disabled users' typing streams, both in general and in specific ways, to identify their typing behaviors and, subsequently, to make typing predictions and typing corrections. In this paper, a focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network (PNN) model are developed. A 38% first hitting rate (HR) and a 53% first-three HR in symbol prediction are obtained based on the analysis of a user's typing history through FTDNN language modeling, while the modeling results using the time gap prediction model and the PNN model demonstrate that the correction rates lie predominantly between 65% and 90% with the current testing samples, with 70% of all test scores above basic correction rates, respectively. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool for analyzing a noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.

  19. Applying a social network analysis (SNA) approach to understanding radiologists' performance in reading mammograms

    Science.gov (United States)

    Tavakoli Taba, Seyedamir; Hossain, Liaquat; Heard, Robert; Brennan, Patrick; Lee, Warwick; Lewis, Sarah

    2017-03-01

    Rationale and objectives: Observer performance has been widely studied by examining the characteristics of individuals. Applying a systems perspective, by contrast, requires studying the interactions between observers in order to understand the system's output. This research describes a mixed-methods approach that applies social network analysis (SNA) together with the more traditional approach of examining personal/individual characteristics in understanding observer performance in mammography. Materials and Methods: Using social network theories and measures to understand observer performance, we designed a social networks survey instrument for collecting personal and network data about observers involved in mammography performance studies. We present the results of a study by our group in which 31 Australian breast radiologists originally reviewed 60 mammographic cases (comprising 20 abnormal and 40 normal cases) and then completed an online questionnaire about their social networks and personal characteristics. A jackknife free response operating characteristic (JAFROC) method was used to measure the performance of the radiologists. JAFROC was tested against various personal and network measures to verify the theoretical model. Results: The results from this study suggest a strong association between social networks and observer performance for Australian radiologists. Network factors accounted for 48% of the variance in observer performance, compared to 15.5% for personal characteristics in this study group. Conclusion: This study suggests a strong new direction for research into improving observer performance. Future studies of observer performance should consider the influence of social networks as part of their research paradigm, with equal or greater vigour than traditional constructs of personal characteristics.

  20. An applied model for the evaluation of multiple physiological stressors.

    Science.gov (United States)

    Constable, S H; Sherry, C J; Walters, T J

    1991-01-01

    In everyday life, a human is likely to be exposed to the combined effects of a number of different stressors simultaneously. Consequently, if an applied model is to ultimately provide the best 'fit' between the modeling and modeled phenomena, it must be able to accommodate the evaluation of multiple stressors. Therefore, a multidimensional, primate model is described that can fully accommodate a large number of conceivably stressful, real life scenarios that may be encountered by civilian or military workers. A number of physiological measurements were made in female rhesus monkeys in order to validate the model against previous reports. These evaluations were further expanded to include the experimental perturbation of physical work (exercise). Physiological profiles during activity were extended with the incorporation of radio telemetry. In conclusion, this model allows maximal extrapolation of the potential deleterious or ergogenic effects on systemic physiological function under conditions of realistic operational demands and environments.

  1. HEDR modeling approach: Revision 1

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1994-05-01

    This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies.

  2. HEDR modeling approach: Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Shipler, D.B.; Napier, B.A.

    1994-05-01

    This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies.

  3. Applying a realistic evaluation model to occupational safety interventions

    DEFF Research Database (Denmark)

    Pedersen, Louise Møller

    2018-01-01

    Background: Recent literature characterizes occupational safety interventions as complex social activities, applied in complex and dynamic social systems. Hence, the actual outcomes of an intervention will vary, depending on the intervention, the implementation process, context, personal characteristics of key actors (defined mechanisms), and the interplay between them, and can be categorized as expected or unexpected. However, little is known about 'how' to include context and mechanisms in evaluations of intervention effectiveness. A revised realistic evaluation model has been introduced ... and qualitative methods. This revised model has, however, not been applied in a real-life context. Method: The model is applied in a controlled, four-component, integrated behaviour-based and safety culture-based safety intervention study (2008-2010) in a medium-sized wood manufacturing company. The interventions ...

  4. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. 
The focus in the early design process shifts from the development and ...

  5. An equilibrium approach to modelling social interaction

    Science.gov (United States)

    Gallo, Ignacio

    2009-07-01

    The aim of this work is to put forward a statistical mechanics theory of social interaction, generalizing econometric discrete choice models. After showing the formal equivalence linking econometric multinomial logit models to equilibrium statistical mechanics, a multi-population generalization of the Curie-Weiss model for ferromagnets is considered as a starting point in developing a model capable of describing sudden shifts in aggregate human behaviour. Existence of the thermodynamic limit for the model is shown by an asymptotic sub-additivity method and factorization of correlation functions is proved almost everywhere. The exact solution of the model is provided in the thermodynamic limit by finding converging upper and lower bounds for the system's pressure, and the solution is used to prove an analytic result regarding the number of possible equilibrium states of a two-population system. The work stresses the importance of linking regimes predicted by the model to real phenomena, and to this end it proposes two possible procedures to estimate the model's parameters starting from micro-level data. These are applied to three case studies based on census type data: though these studies are found to be ultimately inconclusive on an empirical level, considerations are drawn that encourage further refinements of the chosen modelling approach.
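For readers unfamiliar with the Curie-Weiss mean-field picture, the equilibrium states of such multi-population models solve self-consistency equations of the following standard form (the notation is ours, assumed for illustration, not copied from the paper):

```latex
% Mean-field equations of a k-population Curie-Weiss-type model:
% m_l  : mean "opinion"/magnetization of population l,
% a_k' : relative size of population k',
% J    : matrix of interaction strengths, h_l : external fields.
m_\ell \;=\; \tanh\!\Big(\sum_{k'} J_{\ell k'}\,\alpha_{k'}\, m_{k'} \;+\; h_\ell\Big),
\qquad \ell = 1,\dots,k .
```

Multiple simultaneous solutions of these equations are what allow the model to describe sudden shifts in aggregate behaviour.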

  6. Agrochemical fate models applied in agricultural areas from Colombia

    Science.gov (United States)

    Garcia-Santos, Glenda; Yang, Jing; Andreoli, Romano; Binder, Claudia

    2010-05-01

    The misuse of pesticides, mainly in agricultural catchments, can lead to severe problems for humans and the environment. Especially in developing countries, where overuse of agrochemicals is common and water quality monitoring at local and regional levels is incipient or lacking, models are needed for decision making and hot-spot identification. However, the complexity of the water cycle contrasts strongly with the scarce data availability, limiting the number of analyses, techniques, and models available to researchers. There is therefore a strong need for model simplification that keeps model complexity appropriate while still representing the processes. We have developed a new model, Westpa-Pest, to improve water quality management of an agricultural catchment located in the highlands of Colombia. Westpa-Pest is based on the fully distributed hydrologic model Wetspa and a pesticide fate module. We applied a multi-criteria analysis for model selection under the conditions and data availability found in the region and compared the outcome with the newly developed Westpa-Pest model. Furthermore, both models were empirically calibrated and validated. The following questions were addressed: i) what are the strengths and weaknesses of the models?, ii) which are the most sensitive parameters of each model?, iii) what happens with uncertainties in soil parameters?, and iv) how sensitive are the transfer coefficients?

  7. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set-theoretic methods to conceptualize, develop, and empirically derive maturity models, and provide a demonstration of its application on a social media maturity data-set. Specifically, we employ Necessary Condition Analysis (NCA) to identify maturity stage boundaries as necessary conditions and Qualitative Comparative Analysis (QCA) to arrive at multiple configurations that can be equally effective in progressing to higher maturity stages.

  8. Applying Buddhist Practices to Advocacy: The Advocacy-Serving Model

    Science.gov (United States)

    Warren, Jane; Klepper, Konja K.; Lambert, Serena; Nunez, Johnna; Williams, Susan

    2011-01-01

    Creating and retaining empathic connections with the most disenfranchised among us can take a toll on the wellness of counselor advocates. The Advocacy-Serving Model is introduced as a creative approach to strengthening the ability of advocates to serve through enhancing awareness, focusing actions, and connecting to community. The model…

  9. Modeling Approaches in Planetary Seismology

    Science.gov (United States)

    Weber, Renee; Knapmeyer, Martin; Panning, Mark; Schmerr, Nick

    2014-01-01

    Of the many geophysical means that can be used to probe a planet's interior, seismology remains the most direct. Given that the seismic data gathered on the Moon over 40 years ago revolutionized our understanding of the Moon and are still being used today to produce new insight into the state of the lunar interior, it is no wonder that many future missions, both real and conceptual, plan to take seismometers to other planets. To best facilitate the return of high-quality data from these instruments, as well as to further our understanding of the dynamic processes that modify a planet's interior, various modeling approaches are used to quantify parameters such as the amount and distribution of seismicity, tidal deformation, and seismic structure on and of the terrestrial planets. In addition, recent advances in wavefield modeling have permitted a renewed look at seismic energy transmission and the effects of attenuation and scattering, as well as the presence and effect of a core, on recorded seismograms. In this chapter, we will review these approaches.

  10. Studying, Teaching and Applying Sustainability Visions Using Systems Modeling

    Directory of Open Access Journals (Sweden)

    David M. Iwaniec

    2014-07-01

    Full Text Available The objective of articulating sustainability visions through modeling is to enhance the outcomes and process of visioning in order to successfully move the system toward a desired state. Models emphasize approaches to develop visions that are viable and resilient and are crafted to adhere to sustainability principles. This approach is largely assembled from visioning processes (resulting in descriptions of desirable future states generated from stakeholder values and preferences and participatory modeling processes (resulting in systems-based representations of future states co-produced by experts and stakeholders. Vision modeling is distinct from normative scenarios and backcasting processes in that the structure and function of the future desirable state is explicitly articulated as a systems model. Crafting, representing and evaluating the future desirable state as a systems model in participatory settings is intended to support compliance with sustainability visioning quality criteria (visionary, sustainable, systemic, coherent, plausible, tangible, relevant, nuanced, motivational and shared in order to develop rigorous and operationalizable visions. We provide two empirical examples to demonstrate the incorporation of vision modeling in research practice and education settings. In both settings, vision modeling was used to develop, represent, simulate and evaluate future desirable states. This allowed participants to better identify, explore and scrutinize sustainability solutions.

  11. The DPSIR approach applied to marine eutrophication in LCIA as a learning tool

    DEFF Research Database (Denmark)

    Cosme, Nuno Miguel Dias; Olsen, Stig Irving

    -economic secondary drivers. The nitrogen exported to marine coastal ecosystems (P), after point and nonpoint source emissions, promotes changes in the environmental conditions (S), such as low dissolved oxygen levels, that cause the effects (I) on biota. These stimulate society into designing actions (R) to modify D...... the State (S) of the ecosystem, causing the Impacts (I) on these, and contributing to the management strategies and Responses (R). The latter are designed to modify the drivers, minimise the pressures and restore the state of the receiving ecosystem. In our opinion the DPSIR provides a good conceptual...... understanding that is well suited for sustainability teaching and communication purposes. Life Cycle Impact Assessment (LCIA) indicators aim at modelling the P-S-I parts and provide a good background for understanding D and R. As an example, the DPSIR approach was applied to the LCIA indicator marine...

  12. Organic chemistry. A data-intensive approach to mechanistic elucidation applied to chiral anion catalysis.

    Science.gov (United States)

    Milo, Anat; Neel, Andrew J; Toste, F Dean; Sigman, Matthew S

    2015-02-13

    Knowledge of chemical reaction mechanisms can facilitate catalyst optimization, but extracting that knowledge from a complex system is often challenging. Here, we present a data-intensive method for deriving and then predictively applying a mechanistic model of an enantioselective organic reaction. As a validating case study, we selected an intramolecular dehydrogenative C-N coupling reaction, catalyzed by chiral phosphoric acid derivatives, in which catalyst-substrate association involves weak, noncovalent interactions. Little was previously understood regarding the structural origin of enantioselectivity in this system. Catalyst and substrate substituent effects were probed by means of systematic physical organic trend analysis. Plausible interactions between the substrate and catalyst that govern enantioselectivity were identified and supported experimentally, indicating that such an approach can afford an efficient means of leveraging mechanistic insight so as to optimize catalyst design. Copyright © 2015, American Association for the Advancement of Science.

  13. Flipped Classroom Adapted to the ARCS Model of Motivation and Applied to a Physics Course

    Science.gov (United States)

    Asiksoy, Gülsüm; Özdamli, Fezile

    2016-01-01

    This study aims to determine the effect on the achievement, motivation and self-sufficiency of students of the flipped classroom approach adapted to Keller's ARCS (Attention, Relevance, Confidence and Satisfaction) motivation model and applied to a physics course. The study involved 66 students divided into two classes of a physics course. The…

  14. Applying the Cultural Formulation Approach to Career Counseling with Latinas/os

    Science.gov (United States)

    Flores, Lisa Y.; Ramos, Karina; Kanagui, Marlen

    2010-01-01

    In this article, the authors present two hypothetical cases, one of a Mexican American female college student and one of a Mexican immigrant adult male, and apply a culturally sensitive approach to career assessment and career counseling with each of these clients. Drawing from Leong, Hardin, and Gupta's cultural formulation approach (CFA) to…

  15. Branding approach and valuation models

    Directory of Open Access Journals (Sweden)

    Mamula Tatjana

    2006-01-01

    Full Text Available Much of the skill of marketing and branding nowadays is concerned with building equity for products whose characteristics, pricing, distribution and availability are really quite close to each other. Brands allow the consumer to shop with confidence. The real power of successful brands is that they meet the expectations of those that buy them or, to put it another way, they represent a promise kept. As such they are a contract between a seller and a buyer: if the seller keeps to its side of the bargain, the buyer will be satisfied; if not, the buyer will in future look elsewhere. Understanding consumer perceptions and associations is an important first step to understanding brand preferences and choices. In this paper, we discuss different models for measuring brand value, following a couple of well-known approaches requested by companies. We rely upon several empirical examples.

  16. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Science.gov (United States)

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
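The flavour of the approach can be sketched with a much-simplified instance-based memory in the spirit of ACT-R's base-level learning, plus a crude grid search over one parameter. All numbers and the target objective below are hypothetical illustrations, not the article's model, task, or optimization methods (which use mathematical reformulations well beyond a grid search):

```python
import math

def base_level_activation(lags, d):
    """Simplified ACT-R base-level learning: A = ln(sum_j t_j^(-d)),
    where t_j are the times since each past presentation of an instance
    and d is the memory decay parameter."""
    return math.log(sum(t ** (-d) for t in lags))

def retrieval_prob(activations, s=0.4):
    """Softmax (Boltzmann) retrieval probabilities with noise parameter s."""
    ws = [math.exp(a / s) for a in activations]
    z = sum(ws)
    return [w / z for w in ws]

# Two stored instances: one presented recently and twice, one long ago.
acts = [base_level_activation([2.0, 10.0], d=0.5),
        base_level_activation([30.0], d=0.5)]
probs = retrieval_prob(acts)

# Crude derivative-free grid search for the decay d that reproduces a target
# activation of 0.3 -- a toy stand-in for the parameter optimization theme.
best_d = min((d / 100 for d in range(10, 100)),
             key=lambda d: (base_level_activation([2.0, 10.0], d) - 0.3) ** 2)
```

The article's point is precisely that such brute-force search scales poorly, motivating reformulations that admit efficient derivative-based optimization.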

  17. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  18. Biomechanical abdominal wall model applied to hernia repair.

    Science.gov (United States)

    Lyons, M; Mohan, H; Winter, D C; Simms, C K

    2015-01-01

    Most surgical innovations require extensive preclinical testing before employment in the operative environment. There is currently no way to develop and test innovations for abdominal wall surgery that is cheap, repeatable and easy to use. In hernia repair, the required mesh overlap relative to defect size is not established. The aims of this study were to develop a biomechanical model of the abdominal wall based on in vivo pressure measurements, and to apply this to study mesh overlap in hernia repair. An observational study of intra-abdominal pressure (IAP) levels throughout abdominal surgery was conducted to identify the peak perioperative IAP in vivo. This was then applied in the development of a surrogate abdominal wall model. An in vitro study of mesh overlap for various defect sizes was then conducted using this clinically relevant surrogate abdomen model. The mean peak perioperative IAP recorded in the clinical study was 1740 Pa, and occurred during awakening from anaesthesia. This was reproduced in the surrogate abdomen model, which was also able to replicate incisional hernia formation. Using this model, the mesh overlap necessary to prevent hernia formation up to 20 kPa was found, independent of anatomical variations, to be 2 × (defect diameter) + 25 mm. This study demonstrated that a surgically relevant surrogate abdominal wall model is a useful translational tool in the study of hernia repair. Surgical relevance This study examined the mesh overlap requirements for hernia repair, evaluated in a biomechanical model of the abdomen. Currently, mesh size is selected based on empirical evidence and may underpredict the requirement for large meshes. The study proposes a relationship between the defect size and mesh size to select the appropriate mesh size. Following further trials and investigations, this could be used in clinical practice to reduce the incidence of hernia recurrence. © 2015 BJS Society Ltd. Published by John Wiley & Sons Ltd.
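The reported overlap rule is straightforward to apply programmatically. The overlap function below encodes the relationship stated in the abstract; the mesh-diameter helper adds one plausible interpretation (overlap added on each side of a circular defect) that the abstract does not state:

```python
def required_mesh_overlap_mm(defect_diameter_mm):
    """Overlap rule reported in the study: overlap = 2 x defect diameter + 25 mm
    (found to prevent hernia formation up to 20 kPa in the surrogate model)."""
    return 2 * defect_diameter_mm + 25

def required_mesh_diameter_mm(defect_diameter_mm):
    """Assumes a circular mesh centred on a circular defect, with the overlap
    applied on each side (an interpretation, not stated in the abstract)."""
    return defect_diameter_mm + 2 * required_mesh_overlap_mm(defect_diameter_mm)

print(required_mesh_overlap_mm(30))  # 85 mm of overlap for a 30 mm defect
```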

  19. Estimation of plant diversity at landscape level: a methodological approach applied to three Spanish rural areas.

    Science.gov (United States)

    Ortega, M; Elena-Roselló, R; García del Barrio, J M

    2004-07-01

    Approaches linking biodiversity assessment with landscape structure are necessary in the framework of sustainable rural development. The present paper describes a methodology to estimate plant diversity involving landscape structure as a proportional weight associated with the different plant communities found in the landscape mosaic. The area occupied by a plant community, its patch number and its spatial distribution of patches are variables that can be reflected in the gamma plant diversity of a territory. The methodology applies (1) remote sensing information, to identify land cover and land use types; (2) aspect, to discriminate the composition of plant communities in each land cover type; (3) multi-scale field techniques, to assess plant diversity; (4) affinity analysis of plant community composition, to validate the stratified random sampling design; and (5) the additive model that partitions gamma diversity into its alpha and beta components. The method was applied to three Spanish rural areas and was able to record 150-260 species per ha. Species richness, the Shannon information index and the Simpson concentration index were used to measure diversity in each area. The estimation using the Shannon diversity index, with the product of patch number and patch interspersion as the weighting of plant community diversity, was found to be the most appropriate method of measuring plant diversity at the landscape level.
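The two core quantities mentioned, the Shannon index and the additive partition of gamma diversity into alpha and beta components, can be sketched in a few lines; the plot data below are invented for illustration (here with species richness as the diversity measure):

```python
import math

def shannon_index(counts):
    """Shannon information index H' = -sum p_i ln p_i over species abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def additive_partition(plot_species):
    """Additive diversity model: gamma = mean(alpha) + beta, so
    beta = gamma - mean(alpha). Returns (alpha, beta, gamma) for richness."""
    gamma = len({sp for plot in plot_species for sp in plot})
    alpha = sum(len(plot) for plot in plot_species) / len(plot_species)
    return alpha, gamma - alpha, gamma

# Toy data: species sets observed in three sample plots.
plots = [{"a", "b", "c"}, {"b", "c", "d"}, {"a", "d", "e"}]
alpha, beta, gamma = additive_partition(plots)
print(alpha, beta, gamma)            # mean within-plot vs. between-plot diversity
print(shannon_index([10, 10, 5]))    # Shannon index for one abundance vector
```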

  20. Multivariate curve resolution-alternating least squares and kinetic modeling applied to near-infrared data from curing reactions of epoxy resins: mechanistic approach and estimation of kinetic rate constants.

    Science.gov (United States)

    Garrido, M; Larrechi, M S; Rius, F X

    2006-02-01

    This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
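As a minimal illustration of the kinetic-modeling step, one candidate rate law for an A + B reaction (second order overall, which is *not* necessarily the mechanistic model the study selected; the abstract does not give it) can be integrated forward and compared against concentration profiles recovered by curve resolution:

```python
def simulate_second_order(a0, b0, k, dt=0.01, t_end=10.0):
    """Euler integration of d[A]/dt = d[B]/dt = -k[A][B] for A + B -> P.
    Returns a list of (time, [A]) pairs, the concentration profile of A."""
    a, b, t = a0, b0, 0.0
    profile = [(t, a)]
    while t < t_end:
        rate = k * a * b
        a, b, t = a - rate * dt, b - rate * dt, t + dt
        profile.append((t, a))
    return profile

# Hypothetical initial molar ratio and rate constant (illustrative values).
profile = simulate_second_order(a0=1.0, b0=1.5, k=0.3)
```

In the study's workflow, the rate constant k would be the quantity fitted by nonlinear least squares so that profiles like this match the MCR-ALS-recovered concentration profiles.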

  1. Improving Credit Scorecard Modeling Through Applying Text Analysis

    OpenAIRE

    Omar Ghailan; Hoda M.O. Mokhtar; Osman Hegazy

    2016-01-01

    In the credit card scoring and loans management, the prediction of the applicant’s future behavior is an important decision support tool and a key factor in reducing the risk of Loan Default. A lot of data mining and classification approaches have been developed for the credit scoring purpose. For the best of our knowledge, building a credit scorecard by analyzing the textual data in the application form has not been explored so far. This paper proposes a comprehensive credit scorecard model ...

  2. Remote sensing applied to numerical modelling. [water resources pollution

    Science.gov (United States)

    Sengupta, S.; Lee, S. S.; Veziroglu, T. N.; Bland, R.

    1975-01-01

    Progress and remaining difficulties in the construction of predictive mathematical models of large bodies of water as ecosystems are reviewed. Surface temperature is at present the only variable that can be measured accurately and reliably by remote sensing techniques, but satellite infrared data are of sufficient resolution for macro-scale modeling of oceans and large lakes, and airborne radiometers are useful in meso-scale analysis (of lakes, bays, and thermal plumes). Finite-element and finite-difference techniques applied to the solution of the relevant coupled time-dependent nonlinear partial differential equations are compared, and the specific problem of the Biscayne Bay and environs ecosystem is tackled in a finite-difference treatment using the rigid-lid model and a rigid-line grid system.
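The finite-difference idea can be illustrated with its simplest instance, explicit time-stepping of one-dimensional diffusion (a toy stand-in for the coupled nonlinear equations the abstract refers to; all values below are hypothetical):

```python
def diffuse_1d(u, alpha, dx, dt, steps):
    """Explicit finite-difference stepping of du/dt = alpha * d2u/dx2.
    Stable only for r = alpha*dt/dx**2 <= 0.5; boundary values held fixed."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this time step"
    for _ in range(steps):
        u = [u[0]] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

# Hypothetical temperature profile (deg C) across a thermal-plume cross-section,
# smoothing toward the 20 deg C ambient water at the fixed boundaries.
profile = diffuse_1d([20, 20, 28, 20, 20], alpha=1e-6, dx=0.1, dt=1000, steps=50)
```

Real models of bays and plumes couple advection, buoyancy and surface exchange on 2-D or 3-D grids, but each term is discretized in essentially this way.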

  3. Liquid-drop model applied to heavy ions irradiation

    International Nuclear Information System (INIS)

    De Cicco, Hernan; Alurralde, Martin A.; Saint-Martin, Maria L. G.; Bernaola, Omar A.

    1999-01-01

    The liquid-drop model, previously applied in the study of radiation damage in metals, is used here in an energy range not covered by molecular dynamics, in order to understand experimental data on particle tracks in an organic material (Makrofol E) that cannot be accurately described by the existing theoretical methods. The nuclear and electronic energy depositions are considered for each ion, and the evolution of the thermal explosion is evaluated. The experimental observation of particle tracks in a region previously considered 'prohibited' is justified. Although the model used has free parameters and some discrepancies with the experimental diametrical values exist, the agreement obtained is markedly better than that of other existing models. (author)

  4. Datamining approaches for modeling tumor control probability.

    Science.gov (United States)

    Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D

    2010-11-01

    Tumor control probability (TCP) in radiotherapy is determined by complex interactions between tumor biology, the tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher-order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed, including dose-volume metrics, equivalent uniform dose, a mechanistic Poisson model, and model-building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and the cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
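The evaluation statistic used above, Spearman's rank correlation, is simply the Pearson correlation of the ranks. A minimal pure-Python version (with average ranks for ties) can be sketched as follows; the example data are invented:

```python
def rank(values):
    """Average 1-based ranks, assigning tied values their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

print(spearman([1, 2, 3, 4], [10, 20, 25, 40]))  # perfectly monotonic pairs
```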

  5. A variable age of onset segregation model for linkage analysis, with correction for ascertainment, applied to glioma

    DEFF Research Database (Denmark)

    Sun, Xiangqing; Vengoechea, Jaime; Elston, Robert

    2012-01-01

    We propose a 2-step model-based approach, with correction for ascertainment, to linkage analysis of a binary trait with variable age of onset and apply it to a set of multiplex pedigrees segregating for adult glioma....

  6. Inverse geothermal modelling applied to Danish sedimentary basins

    Science.gov (United States)

    Poulsen, Søren E.; Balling, Niels; Bording, Thue S.; Mathiesen, Anders; Nielsen, Søren B.

    2017-10-01

    This paper presents a numerical procedure for predicting subsurface temperatures and heat-flow distribution in 3-D using inverse calibration methodology. The procedure is based on a modified version of the groundwater code MODFLOW by taking advantage of the mathematical similarity between confined groundwater flow (Darcy's law) and heat conduction (Fourier's law). Thermal conductivity, heat production and exponential porosity-depth relations are specified separately for the individual geological units of the model domain. The steady-state temperature model includes a model-based transient correction for the long-term palaeoclimatic thermal disturbance of the subsurface temperature regime. Variable model parameters are estimated by inversion of measured borehole temperatures with uncertainties reflecting their quality. The procedure facilitates uncertainty estimation for temperature predictions. The modelling procedure is applied to Danish onshore areas containing deep sedimentary basins. A 3-D voxel-based model, with 14 lithological units from surface to 5000 m depth, was built from digital geological maps derived from combined analyses of reflection seismic lines and borehole information. Matrix thermal conductivity of model lithologies was estimated by inversion of all available deep borehole temperature data and applied together with prescribed background heat flow to derive the 3-D subsurface temperature distribution. Modelled temperatures are found to agree very well with observations. The numerical model was utilized for predicting and contouring temperatures at 2000 and 3000 m depths and for two main geothermal reservoir units, the Gassum (Lower Jurassic-Upper Triassic) and Bunter/Skagerrak (Triassic) reservoirs, both currently utilized for geothermal energy production. Temperature gradients to depths of 2000-3000 m are generally around 25-30 °C km-1, locally up to about 35 °C km-1. Large regions have geothermal reservoirs with characteristic temperatures
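The conduction physics underlying the procedure can be illustrated with the classical 1-D steady-state solution of Fourier's law with uniform radiogenic heat production. The parameter values below are illustrative round numbers for a sedimentary setting, not the calibrated Danish model parameters, though they reproduce gradients in the reported 25-30 °C km-1 range:

```python
def steady_state_temperature(z_m, surface_t=8.0, q=0.065, k=2.5, a=1.0e-6):
    """1-D steady-state conductive temperature with uniform heat production:
        T(z) = T0 + (q/k) z - (A / 2k) z^2
    q : surface heat flow (W/m^2), k : thermal conductivity (W/m/K),
    a : radiogenic heat production (W/m^3). Illustrative values only."""
    return surface_t + (q / k) * z_m - (a / (2 * k)) * z_m ** 2

# Predicted temperatures at the two reservoir-relevant depths in the abstract.
print(round(steady_state_temperature(2000.0), 1))
print(round(steady_state_temperature(3000.0), 1))
```

The paper's 3-D inverse procedure effectively calibrates k (and heat flow) per lithological unit against borehole temperatures instead of assuming them, and adds a palaeoclimatic correction.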

  7. Active lubrication applied to radial gas journal bearings. Part 2: Modelling improvement and experimental validation

    DEFF Research Database (Denmark)

    Pierart, Fabián G.; Santos, Ilmar F.

    2016-01-01

    Actively-controlled lubrication techniques are applied to radial gas bearings aiming at mitigating one of their most critical drawbacks, their lack of damping. A model-based control design approach is presented using simple feedback control laws, i.e. proportional controllers. The design approach...... by finite element method and the global model is used as control design tool. Active lubrication allows for significant increase in damping factor of the rotor-bearing system. Very good agreement between theory and experiment is obtained, supporting the multi-physic design tool developed....

  8. Applied Bounded Model Checking for Interlocking System Designs

    DEFF Research Database (Denmark)

    Haxthausen, Anne Elisabeth; Peleska, Jan; Pinger, Ralf

    2013-01-01

    In this article the verification and validation of interlocking systems is investigated. Reviewing both geographical and route-related interlocking, the verification objectives can be structured from a perspective of computer science into (1) verification of static semantics, and (2) verification...... of behavioural (operational) semantics. The former checks that the plant model – that is, the software components reflecting the physical components of the interlocking system – has been set up in an adequate way. The latter investigates trains moving through the network, with the objective to uncover potential...... safety violations. From a formal methods perspective, these verification objectives can be approached by theorem proving, global, or bounded model checking. This article explains the techniques for application of bounded model checking techniques, and discusses their advantages in comparison...

  9. A parameter estimation and identifiability analysis methodology applied to a street canyon air pollution model

    DEFF Research Database (Denmark)

    Ottosen, T. B.; Ketzel, Matthias; Skov, H.

    2016-01-01

    Mathematical models are increasingly used in environmental science, thus increasing the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model – the Operational Street...... of the identifiability analysis, showed that some model parameters were significantly more sensitive than others. The application of the determined optimal parameter values was shown to successfully equilibrate the model biases among the individual streets and species. It was also shown that the frequentist approach...
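    A minimal sketch of the screening idea behind such an analysis: rank parameters by normalised local sensitivities obtained from finite-difference perturbations. The three-parameter expression below is a hypothetical stand-in for the actual street canyon model, not OSPM itself:

```python
import numpy as np

def concentration(theta, wind=3.0):
    # toy street-level concentration model (hypothetical stand-in)
    emission, decay, mixing_height = theta
    return emission / (wind * mixing_height) * np.exp(-decay)

theta0 = np.array([10.0, 0.1, 2.0])       # nominal parameter values
base = concentration(theta0)
sensitivity = []
for i in range(len(theta0)):
    th = theta0.copy()
    th[i] *= 1.01                          # +1 % perturbation of one parameter
    rel_change = (concentration(th) - base) / base
    sensitivity.append(abs(rel_change) / 0.01)   # ~|d ln y / d ln theta_i|
# emission and mixing height dominate; the decay parameter is
# near-insensitive here, so it would be a candidate for fixing
```

Parameters with near-zero normalised sensitivity cannot be estimated reliably from the output and are typically fixed before calibration, which is the spirit of the screening step described above.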

  10. A multi-label, semi-supervised classification approach applied to personality prediction in social media.

    Science.gov (United States)

    Lima, Ana Carolina E S; de Castro, Leandro Nunes

    2014-10-01

    Social media allow web users to create and share content pertaining to different subjects, exposing their activities, opinions, feelings and thoughts. In this context, online social media have attracted the interest of data scientists seeking to understand behaviours and trends, whilst collecting statistics for social sites. One potential application for these data is personality prediction, which aims to understand a user's behaviour within social media. Traditional personality prediction relies on users' profiles, their status updates, the messages they post, etc. Here, a personality prediction system for social media data is introduced that differs from most approaches in the literature in that it works with groups of texts instead of single texts, and does not take users' profiles into account. Also, the proposed approach extracts meta-attributes from texts and does not work directly with the content of the messages. The set of possible personality traits is taken from the Big Five model and allows the problem to be characterised as a multi-label classification task. The problem is then transformed into a set of five binary classification problems and solved by means of a semi-supervised learning approach, due to the difficulty in annotating the massive amounts of data generated in social media. In our implementation, the proposed system was trained with three well-known machine-learning algorithms, namely a Naïve Bayes classifier, a Support Vector Machine, and a Multilayer Perceptron neural network. The system was applied to predict personality from tweets taken from three datasets available in the literature, achieving approximately 83% prediction accuracy, with some of the personality traits presenting better individual classification rates than others. Copyright © 2014 Elsevier Ltd. All rights reserved.
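    The binary-relevance decomposition plus self-training described above can be sketched as follows. The nearest-centroid classifier and the synthetic features are stand-ins for the Naive Bayes/SVM/MLP models and the real meta-attributes extracted from tweets; only the structure (five binary problems, pseudo-labelling of unlabelled data) follows the paper:

```python
import numpy as np

class CentroidBinary:
    """Tiny binary classifier: assign to the nearer class centroid."""
    def fit(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)
        self.c1 = X[y == 1].mean(axis=0)
        return self
    def predict(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return (d1 < d0).astype(int)

def self_train(X_lab, y_lab, X_unlab, rounds=3):
    """Semi-supervised loop: pseudo-label the unlabelled pool, refit."""
    clf = CentroidBinary().fit(X_lab, y_lab)
    for _ in range(rounds):
        pseudo = clf.predict(X_unlab)
        X = np.vstack([X_lab, X_unlab])
        y = np.concatenate([y_lab, pseudo])
        clf = CentroidBinary().fit(X, y)
    return clf

# synthetic "meta-attribute" vectors; each Big Five trait becomes one
# independent binary problem (binary relevance)
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
Y_lab = np.zeros((40, 5), dtype=int)
Y_lab[20:] = 1
X_unlab = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(3, 1, (30, 4))])

models = [self_train(X_lab, Y_lab[:, t], X_unlab) for t in range(5)]
pred = np.column_stack([m.predict(X_lab) for m in models])
```

Each of the five trait classifiers is trained and applied independently; the multi-label prediction is just the column-stack of the five binary outputs.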

  11. Applying the reasoned action approach to understanding health protection and health risk behaviors.

    Science.gov (United States)

    Conner, Mark; McEachan, Rosemary; Lawton, Rebecca; Gardner, Peter

    2017-12-01

    The Reasoned Action Approach (RAA) developed out of the Theory of Reasoned Action and Theory of Planned Behavior but has not yet been widely applied to understanding health behaviors. The present research employed the RAA in a prospective design to test predictions of intention and action for groups of protection and risk behaviors separately in the same sample. To test the RAA for health protection and risk behaviors. Measures of RAA components plus past behavior were taken in relation to eight protection and six risk behaviors in 385 adults. Self-reported behavior was assessed one month later. Multi-level modelling showed instrumental attitude, experiential attitude, descriptive norms, capacity and past behavior were significant positive predictors of intentions to engage in protection or risk behaviors. Injunctive norms were only significant predictors of intention in protection behaviors. Autonomy was a significant positive predictor of intentions in protection behaviors and a negative predictor in risk behaviors (the latter relationship became non-significant when controlling for past behavior). Multi-level modelling showed that intention, capacity, and past behavior were significant positive predictors of action for both protection and risk behaviors. Experiential attitude and descriptive norm were additional significant positive predictors of risk behaviors. The RAA has utility in predicting both protection and risk health behaviors although the power of predictors may vary across these types of health behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Fuzzy uncertainty modeling applied to AP1000 nuclear power plant LOCA

    International Nuclear Information System (INIS)

    Ferreira Guimaraes, Antonio Cesar; Franklin Lapa, Celso Marcelo; Lamego Simoes Filho, Francisco Fernando; Cabral, Denise Cunha

    2011-01-01

    Research highlights: → This article presents an uncertainty modelling study using a fuzzy approach. → The AP1000 Westinghouse NPP was used and it is provided of passive safety systems. → The use of advanced passive safety systems in NPP has limited operational experience. → Failure rates and basic events probabilities used on the fault tree analysis. → Fuzzy uncertainty approach was employed to reliability of the AP1000 large LOCA. - Abstract: This article presents an uncertainty modeling study using a fuzzy approach applied to the Westinghouse advanced nuclear reactor. The AP1000 Westinghouse Nuclear Power Plant (NPP) is provided of passive safety systems, based on thermo physics phenomenon, that require no operating actions, soon after an incident has been detected. The use of advanced passive safety systems in NPP has limited operational experience. As it occurs in any reliability study, statistically non-significant events report introduces a significant uncertainty level about the failure rates and basic events probabilities used on the fault tree analysis (FTA). In order to model this uncertainty, a fuzzy approach was employed to reliability analysis of the AP1000 large break Loss of Coolant Accident (LOCA). The final results have revealed that the proposed approach may be successfully applied to modeling of uncertainties in safety studies.
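    The fuzzy propagation through a fault tree can be illustrated with triangular fuzzy numbers pushed through AND/OR gates; both gate functions are monotone increasing in each operand, so applying them bound-by-bound is valid. The basic events and their numbers below are purely illustrative, not AP1000 data:

```python
# triangular fuzzy probability: (lower, most likely, upper)

def f_and(p, q):
    # AND gate: product of independent event probabilities,
    # applied bound-by-bound
    return tuple(pi * qi for pi, qi in zip(p, q))

def f_or(p, q):
    # OR gate: 1 - (1-p)(1-q), also monotone, applied bound-by-bound
    return tuple(1 - (1 - pi) * (1 - qi) for pi, qi in zip(p, q))

# hypothetical fuzzy basic-event probabilities (per demand)
valve_fails  = (1e-4, 5e-4, 1e-3)
sensor_fails = (2e-4, 8e-4, 2e-3)
pump_fails   = (1e-3, 3e-3, 6e-3)

# hypothetical top event: pump fails AND (valve fails OR sensor fails)
top = f_and(pump_fails, f_or(valve_fails, sensor_fails))
```

The top-event result is itself a triangular fuzzy number, so the spread between its lower and upper bounds directly expresses the uncertainty inherited from the sparse failure data.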

  13. Enhanced PID vs model predictive control applied to BLDC motor

    Science.gov (United States)

    Gaya, M. S.; Muhammad, Auwal; Aliyu Abdulkadir, Rabiu; Salim, S. N. S.; Madugu, I. S.; Tijjani, Aminu; Aminu Yusuf, Lukman; Dauda Umar, Ibrahim; Khairi, M. T. M.

    2018-01-01

    A BrushLess Direct Current (BLDC) motor is a multivariable and highly complex nonlinear system. Variation of internal parameter values with the environment or the reference signal increases the difficulty of controlling the BLDC motor effectively. Advanced control strategies (like model predictive control) often have to be integrated to satisfy the control requirements, while enhancing or properly tuning a conventional algorithm can also achieve the desired performance. This paper presents a performance comparison of an enhanced PID (PSO-PID) and Model Predictive Control (MPC) applied to a brushless direct current motor. The simulation results demonstrated that the PSO-PID is slightly better than the conventional PID and the MPC in tracking the trajectory of the reference signal. The proposed scheme could be a useful algorithm for such systems.
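    A minimal sketch of the kind of tracking comparison performed: a discrete PID loop driving an assumed first-order plant standing in for the BLDC speed dynamics. Gains and plant constants are illustrative, not the paper's PSO-tuned values:

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=500):
    """Discrete PID on an assumed first-order plant tau*dy/dt = -y + u."""
    tau = 0.1                     # assumed plant time constant (s)
    y, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt                     # integral of the error
        deriv = (err - prev_err) / dt         # backward-difference derivative
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt / tau * (-y + u)              # explicit Euler plant update
    return y

# illustrative gains; a PSO-tuned controller would search this space
final = simulate_pid(kp=2.0, ki=5.0, kd=0.05)
```

With integral action the steady-state tracking error goes to zero; comparing rise time, overshoot and settling time across tuned-PID and MPC runs is exactly the kind of evaluation the paper reports.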

  14. Synthetic modelling of acoustical propagation applied to seismic oceanography experiments

    Science.gov (United States)

    Kormann, Jean; Cobo, Pedro; Biescas, Berta; Sallarés, Valentí; Papenberg, Cord; Recuero, Manuel; Carbonell, Ramón

    2010-03-01

    Recent work shows that multichannel seismic (MCS) systems provide detailed information on the oceans' finestructure. The aim of this paper is to analyze if high order numerical algorithms are suitable to accurately model the extremely weak wavefield scattered by the oceans' finestructures. For this purpose, we generate synthetic shot records along a coincident seismic and oceanographic profile acquired across a Mediterranean salt lens in the Gulf of Cadiz. We apply a 2D finite-difference time-domain propagation model, together with second-order Complex Frequency Shifted Perfectly Matched Layers at the numerical boundaries, using as reference a realistic sound speed map with the lateral resolution of the seismic data. We show that our numerical propagator creates an acoustical image of the ocean finestructures including the salt lens that reproduces with outstanding detail the real acquired one.
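    The propagation scheme can be illustrated in one dimension: a second-order finite-difference time-domain update with a CFL-stable step and a weak sound-speed anomaly standing in for the salt lens. There are no perfectly matched layers here; the run simply stops before the wave reaches the right edge, and all numbers are illustrative:

```python
import numpy as np

nx, dx = 600, 2.5                       # 1.5 km of water column
c = np.full(nx, 1500.0)                 # background sound speed (m/s)
c[180:220] += 5.0                       # weak "finestructure" anomaly (~0.3 %)
dt = 0.4 * dx / c.max()                 # CFL-stable time step

x = np.arange(nx) * dx
p = np.exp(-((x - 100 * dx) ** 2) / (2 * 50.0 ** 2))  # smooth initial pulse
p_prev = p.copy()                       # zero initial velocity

for _ in range(400):
    lap = np.zeros(nx)
    lap[1:-1] = p[2:] - 2.0 * p[1:-1] + p[:-2]        # discrete Laplacian
    p_next = 2.0 * p - p_prev + (c * dt / dx) ** 2 * lap
    p_prev, p = p, p_next
# the pulse has split into two halves; the right-going half is near x ~ 650 m
```

The extremely weak impedance contrasts of oceanic finestructure are what make the high-order schemes and CPML boundaries of the paper necessary in practice; this sketch only shows the core update stencil.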

  15. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
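    For a finite set of candidate models, treating model uncertainty as parameter uncertainty amounts to placing a prior over a "which model is true" indicator and updating it with data. A minimal numeric sketch, with made-up failure-rate models and Poisson count data:

```python
import math

def poisson_like(lam, t, k):
    # likelihood of observing k events in exposure time t at rate lam
    return math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)

models = {"low": 1e-3, "high": 5e-3}      # two hypothesised rates (per hour)
prior  = {"low": 0.5, "high": 0.5}        # prior over the model indicator

t_obs, k_obs = 2000.0, 6                  # invented data: 6 events in 2000 h
post_unnorm = {m: prior[m] * poisson_like(lam, t_obs, k_obs)
               for m, lam in models.items()}
z = sum(post_unnorm.values())
posterior = {m: v / z for m, v in post_unnorm.items()}

# model-averaged rate: expectation over the posterior model weights
lam_bar = sum(posterior[m] * models[m] for m in models)
```

The posterior model weights play exactly the role of an uncertain discrete parameter, which is the equivalence the abstract refers to.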

  16. A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, Andrew P.; Kabilan, Senthil; Carson, James P.; Corley, Richard A.; Einstein, Daniel R.

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton’s Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple
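    The flavour of such an accelerated coupling iteration can be shown in the scalar case: a secant update on the residual f(x) = g(x) - x, which is the depth-1 case of Anderson/nonlinear-Krylov mixing. Here g is a toy contraction standing in for one pass through the coupled system, not the actual pressure-drop residual:

```python
import math

def picard(g, x0, tol=1e-12, maxit=500):
    """Plain fixed-point iteration, counting residual evaluations."""
    x, evals = x0, 0
    for _ in range(maxit):
        x_new = g(x)
        evals += 1
        if abs(x_new - x) < tol:
            return x_new, evals
        x = x_new
    return x, evals

def accelerated(g, x0, x1, tol=1e-12, maxit=50):
    """Secant step on f(x) = g(x) - x (depth-1 Anderson mixing)."""
    f0, f1 = g(x0) - x0, g(x1) - x1
    evals = 2
    while abs(f1) >= tol and evals < maxit:
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant update on the residual
        x0, f0 = x1, f1
        x1, f1 = x2, g(x2) - x2
        evals += 1
    return x1, evals

root_p, n_p = picard(math.cos, 0.5)
root_a, n_a = accelerated(math.cos, 0.5, 0.8)
```

On this toy problem the accelerated solve reaches the fixed point of cos(x) in a handful of residual evaluations versus dozens for plain iteration, mirroring the economy (roughly one residual evaluation per timestep) that makes the coupling nearly free.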

  17. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, A.P., E-mail: andrew.kuprat@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Kabilan, S., E-mail: senthil.kabilan@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Carson, J.P., E-mail: james.carson@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Corley, R.A., E-mail: rick.corley@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Einstein, D.R., E-mail: daniel.einstein@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States)

    2013-07-01

    pressure applied to the multiple sets of ODEs. In both the simplified geometry and in the imaging-based geometry, the performance of the method was comparable to that of monolithic schemes, in most cases requiring only a single CFD evaluation per time step. Thus, this new accelerator allows us to begin combining pulmonary CFD models with lower-dimensional models of pulmonary mechanics with little computational overhead. Moreover, because the CFD and lower-dimensional models are totally separate, this framework affords great flexibility in terms of the type and breadth of the adopted lower-dimensional model, allowing the biomedical researcher to appropriately focus on model design. Research funded by the National Heart, Lung, and Blood Institute Award 1RO1HL073598.

  18. A bidirectional coupling procedure applied to multiscale respiratory modeling

    International Nuclear Information System (INIS)

    Kuprat, A.P.; Kabilan, S.; Carson, J.P.; Corley, R.A.; Einstein, D.R.

    2013-01-01

    pressure applied to the multiple sets of ODEs. In both the simplified geometry and in the imaging-based geometry, the performance of the method was comparable to that of monolithic schemes, in most cases requiring only a single CFD evaluation per time step. Thus, this new accelerator allows us to begin combining pulmonary CFD models with lower-dimensional models of pulmonary mechanics with little computational overhead. Moreover, because the CFD and lower-dimensional models are totally separate, this framework affords great flexibility in terms of the type and breadth of the adopted lower-dimensional model, allowing the biomedical researcher to appropriately focus on model design. Research funded by the National Heart, Lung, and Blood Institute Award 1RO1HL073598

  19. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches; Modelisation de la rupture sismique, prediction du mouvement fort, et evaluation de l'alea sismique: approches fondamentale et appliquee

    Energy Technology Data Exchange (ETDEWEB)

    Berge-Thierry, C

    2007-05-15

    The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. This synthesis covers the two years spent as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of seismic risk, and particularly of seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for a specific structure (conventional structure or high-risk construction), seismic hazard assessment requires: identifying and locating the seismic sources (zones or faults), characterizing their activity, and evaluating the seismic motion the structure has to resist (including site effects). I specialized in numerical strong-motion prediction using high-frequency seismic source modelling, and joining IRSN allowed me to work quickly across the different tasks of seismic hazard assessment. Through expert practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have also been able to work on empirical strong-motion prediction, including site effects. Specific questions at the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of input ground motion for designing structures or verifying their stability. (author)

  20. A nationwide modelling approach to decommissioning - 16182

    International Nuclear Information System (INIS)

    Kelly, Bernard; Lowe, Andy; Mort, Paul

    2009-01-01

    In this paper we describe a proposed UK national approach to modelling decommissioning. For the first time, we shall have an insight into optimizing the safety and efficiency of a national decommissioning strategy. To do this we use the General Case Integrated Waste Algorithm (GIA), a universal model of decommissioning nuclear plant, power plant, waste arisings and the associated knowledge capture. The model scales from individual items of plant through cells, groups of cells, buildings and whole sites, up to a national scale. We describe the national vision for GIA, which can be broken down into three levels: 1) the capture of the chronological order of activities that an experienced decommissioner would use to decommission any nuclear facility anywhere in the world - this is Level 1 of GIA; 2) the construction of an Operational Research (OR) model based on Level 1 to allow 'what if' scenarios to be tested quickly (Level 2); 3) the construction of a state-of-the-art knowledge capture capability that allows future generations to learn from our current decommissioning experience (Level 3). We show the progress to date in developing GIA at Levels 1 and 2. As part of Level 1, GIA has assisted in the development of an IMechE professional decommissioning qualification. Furthermore, we describe GIA as the basis of a UK-owned database of decommissioning norms for costs, productivity, durations, etc. From Level 2, we report on a pilot study that has successfully tested the basic principles for the OR numerical simulation of the algorithm. We then highlight the advantages of applying the OR modelling approach nationally. In essence, a series of 'what if...' scenarios can be tested that will improve the safety and efficiency of decommissioning. (authors)

  1. Multimodal Approach for Automatic Emotion Recognition Applied to the Tension Levels Study in TV Newscasts

    Directory of Open Access Journals (Sweden)

    Moisés Henrique Ramos Pereira

    2015-12-01

    Full Text Available This article addresses a multimodal approach to automatic emotion recognition in participants of TV newscasts (presenters, reporters, commentators and others), intended to support the study of tension levels in narratives of events in this television genre. The methodology applies state-of-the-art computational methods to process and analyze facial expressions as well as speech signals. The proposed approach contributes to the semiodiscoursive study of TV newscasts and their enunciative praxis, assisting, for example, in identifying the communication strategy of these programs. To evaluate its effectiveness, the approach was applied to a video of a report broadcast on a Brazilian TV newscast of great popularity in the state of Minas Gerais. The experimental results on the recognition of emotions in the facial expressions of telejournalists are promising and are in accordance with the distribution of audiovisual indicators extracted over the TV newscast, demonstrating the potential of the approach to support TV journalistic discourse analysis.

  2. Applying a Methodological Approach to the Development of a Natural Interaction System

    Science.gov (United States)

    Del Valle-Agudo, David; Rivero-Espinosa, Jessica; Calle-Gómez, Francisco Javier; Cuadra-Fernández, Dolores

    This work describes the methodology used to design a Natural Interaction System for guiding services. A national research project was the framework where the approach was applied. The aim of that system is interacting with clients of a hotel for providing diverse services. Apart from the description of the methodology, a case study is added to the paper in order to outline strengths of the approach, and limits that should lead to future research.

  3. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny

    OpenAIRE

    Maddock, Simon T.; Briscoe, Andrew G.; Wilkinson, Mark; Waeschenbach, Andrea; San Mauro, Diego; Day, Julia J.; Littlewood, D. Tim J.; Foster, Peter G.; Nussbaum, Ronald A.; Gower, David J.

    2016-01-01

    Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a ‘traditional’ Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing pla...

  4. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

    The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration...... and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring......) identifiability problems. By using the presented approach it is possible to detect potential identifiability problems and avoid pointless calibration (and experimental!) effort....

  5. Consideration of an applied model of public health program infrastructure.

    Science.gov (United States)

    Lavinghouze, René; Snyder, Kimberly; Rieker, Patricia; Ottoson, Judith

    2013-01-01

    Systemic infrastructure is key to public health achievements. Individual public health program infrastructure feeds into this larger system. Although program infrastructure is rarely defined, it needs to be operationalized for effective implementation and evaluation. The Ecological Model of Infrastructure (EMI) is one approach to defining program infrastructure. The EMI consists of 5 core (Leadership, Partnerships, State Plans, Engaged Data, and Managed Resources) and 2 supporting (Strategic Understanding and Tactical Action) elements that are enveloped in a program's context. We conducted a literature search across public health programs to determine support for the EMI. Four of the core elements were consistently addressed, and the other EMI elements were intermittently addressed. The EMI provides an initial and partial model for understanding program infrastructure, but additional work is needed to identify evidence-based indicators of infrastructure elements that can be used to measure success and link infrastructure to public health outcomes, capacity, and sustainability.

  6. Approaches and models of intercultural education

    Directory of Open Access Journals (Sweden)

    Iván Manuel Sánchez Fontalvo

    2013-10-01

    Full Text Available Aware of the need to build an intercultural society, this awareness must be assumed in all social spheres, among which education plays a leading role. A transcendental role, since education must open spaces that form people with the virtues and capacities to live together in multicultural and socially diverse (and sometimes unequal) contexts in an increasingly globalized and interconnected world, and must foster shared feelings of civic belonging to the neighborhood, city, region and country, together with concern for, and critical judgement of, marginalization, poverty, misery and the inequitable distribution of wealth, which are causes of structural violence, and a will to work for the welfare and transformation of these scenarios. From these premises, it is important to know the approaches and models of intercultural education developed so far, analysing their impact on the educational contexts where they are applied.

  7. An effective model for ergonomic optimization applied to a new automotive assembly line

    Energy Technology Data Exchange (ETDEWEB)

    Duraccio, Vincenzo [University Niccolò Cusano, Rome Via Don Gnocchi,00166, Roma Italy (Italy); Elia, Valerio [Dept. of Innovation Engineering - University of Salento Via Monteroni, 73100, Lecce (Italy); Forcina, Antonio [University Parthenope, Dep. of Engineering Centro Direzionale - Isola C4 80143 - Naples - Italy (Italy)

    2016-06-08

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper, a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work under correct conditions. The model includes a schematic and systematic method for analysing the operations, and identifies all the ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  8. An effective model for ergonomic optimization applied to a new automotive assembly line

    International Nuclear Information System (INIS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-01-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper, a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work under correct conditions. The model includes a schematic and systematic method for analysing the operations, and identifies all the ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  9. An effective model for ergonomic optimization applied to a new automotive assembly line

    Science.gov (United States)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper, a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work under correct conditions. The model includes a schematic and systematic method for analysing the operations, and identifies all the ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  10. Applying Atmospheric Measurements to Constrain Parameters of Terrestrial Source Models

    Science.gov (United States)

    Hyer, E. J.; Kasischke, E. S.; Allen, D. J.

    2004-12-01

    Quantitative inversions of atmospheric measurements have been widely applied to constrain atmospheric budgets of a range of trace gases. Experiments of this type have revealed persistent discrepancies between 'bottom-up' and 'top-down' estimates of source magnitudes. The most common atmospheric inversion uses the absolute magnitude as the sole parameter for each source, and returns the optimal value of that parameter. In order for atmospheric measurements to be useful for improving 'bottom-up' models of terrestrial sources, information about other properties of the sources must be extracted. As the density and quality of atmospheric trace gas measurements improve, examination of higher-order properties of trace gas sources should become possible. Our model of boreal forest fire emissions is parameterized to permit flexible examination of the key uncertainties in this source. Using output from this model together with the UM CTM, we examined the sensitivity of CO concentration measurements made by the MOPITT instrument to various uncertainties in the boreal source: geographic distribution of burned area, fire type (crown fires vs. surface fires), and fuel consumption in above-ground and ground-layer fuels. Our results indicate that carefully designed inversion experiments have the potential to help constrain not only the absolute magnitudes of terrestrial sources, but also the key uncertainties associated with 'bottom-up' estimates of those sources.
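    In its simplest (linear, magnitude-only) form, the inversion described above is a least-squares solve of y = Kx for the source parameters x. A synthetic sketch, with a random Jacobian standing in for CTM sensitivities and invented source magnitudes:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_src = 200, 3
# hypothetical transport Jacobian: sensitivity of each observation
# (ppb of CO) to each source magnitude (Tg/yr); a real K would come
# from chemical transport model runs
K = rng.uniform(0.1, 1.0, (n_obs, n_src))
x_true = np.array([120.0, 45.0, 80.0])        # invented "true" sources (Tg/yr)
y = K @ x_true + rng.normal(0, 2.0, n_obs)    # synthetic noisy observations

# ordinary least-squares inversion for the source magnitudes
x_hat, *_ = np.linalg.lstsq(K, y, rcond=None)
```

Extending x beyond absolute magnitudes, for example separate parameters for crown-fire versus surface-fire fractions or ground-layer fuel consumption, is exactly the higher-order constraint the abstract argues becomes feasible as measurement density improves.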

  11. Linear model applied to the evaluation of pharmaceutical stability data

    Directory of Open Access Journals (Sweden)

    Renato Cesar Souza

    2013-09-01

    Full Text Available The expiry date on the packaging of a product gives the consumer confidence that the product will retain its identity, content, quality and purity throughout the period of validity of the drug. The definition of this term in the pharmaceutical industry is based on stability data obtained during product registration. Accordingly, this work aims to apply linear regression according to the guideline ICH Q1E (2003) to evaluate some aspects of a product undergoing registration in Brazil. To this end, the evaluation was carried out at the development center of a multinational company in Brazil, with samples of three different batches containing two active pharmaceutical ingredients in two different packages. Based on the preliminary results obtained, it was possible to observe the difference in degradation tendency of the product in the two packages and the relationship between the variables studied, adding knowledge so that new linear models can be applied and developed for other products.
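
    As a rough sketch of the ICH Q1E idea mentioned above (all numbers invented, not the product's actual stability data), the shelf life can be estimated as the earliest time point at which the one-sided 95% confidence bound on the mean regression line crosses the specification limit:

```python
import numpy as np
from scipy import stats

# Illustrative stability data: assay (% of label claim) versus time.
t = np.array([0, 3, 6, 9, 12, 18])                    # months
y = np.array([100.2, 99.5, 99.1, 98.4, 97.9, 96.8])   # assay results
spec = 95.0                                           # lower specification limit

slope, intercept, r, p, se = stats.linregress(t, y)
n = len(t)
resid = y - (intercept + slope * t)
s_res = np.sqrt(np.sum(resid**2) / (n - 2))           # residual std. error
t_crit = stats.t.ppf(0.95, n - 2)                     # one-sided 95%

def lower_bound(tm):
    """One-sided 95% lower confidence bound on the mean response at time tm."""
    se_mean = s_res * np.sqrt(1.0 / n
                              + (tm - t.mean())**2 / np.sum((t - t.mean())**2))
    return intercept + slope * tm - t_crit * se_mean

# First month at which the confidence bound falls below the specification.
months = np.arange(0, 61)
shelf_life = int(months[np.argmax(lower_bound(months) < spec)])
```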

  12. Applying Program Theory-Driven Approach to Design and Evaluate a Teacher Professional Development Program

    Science.gov (United States)

    Lin, Su-ching; Wu, Ming-sui

    2016-01-01

    This study reports the first year of a two-year project which applied a program theory-driven approach to evaluating the impact of teachers' professional development interventions on students' learning, using a mix of methods: qualitative inquiry and quasi-experimental design. The current study presents the results of using the method of…

  13. A Geometric Approach to Diagnosis Applied to A Ship Propulsion Problem

    DEFF Research Database (Denmark)

    Lootsma, T.F.; Izadi-Zamanabadi, Roozbeh; Nijmeijer, H.

    A geometric approach to FDI diagnosis for input-affine nonlinear systems is briefly described and applied to a ship propulsion benchmark. The analysis method is used to examine the possibility of detecting and isolating predefined faults in the system. The considered faults cover sensor, actuator...

  14. PhysioSoft--an approach in applying computer technology in biofeedback procedures.

    Science.gov (United States)

    Havelka, Mladen; Havelka, Juraj; Delimar, Marko

    2009-09-01

    The paper presents a description of an original biofeedback computer program called PhysioSoft. It has been designed on the basis of experience in the development of biofeedback techniques by an interdisciplinary team of experts from the Department of Health Psychology of the University of Applied Health Studies, the Faculty of Electrical Engineering and Computing, University of Zagreb, and "Mens Sana", a private biofeedback practice in Zagreb. Interest in the possibility of producing direct and voluntary effects on autonomic body functions has increased gradually as the Cartesian model of the body-mind relationship was abandoned. The psychosomatic approach and studies carried out in the 1950s, together with research on conditioned and operant learning, proved the close interdependence between the physical and the mental, and also the possibility of training individuals to consciously act on their autonomic physiological functions. This new knowledge resulted in the development of biofeedback techniques around the 1970s and has been the basis of many studies indicating the significance of biofeedback techniques in clinical practice for many symptoms of health disorders. The digitalization of biofeedback instruments and the development of user-friendly computer software enable the use of biofeedback at the individual level as an efficient procedure for a patient's active approach to self-care of his own health. As such software makes biofeedback instruments broadly accessible, the authors have designed the PhysioSoft computer program as a contribution to the development and broad use of biofeedback.

  15. Stakeholder Theory As an Ethical Approach to Effective Management: applying the theory to multiple contexts

    Directory of Open Access Journals (Sweden)

    Jeffrey S. Harrison

    2015-09-01

    Full Text Available Objective – This article provides a brief overview of stakeholder theory, clears up some widely held misconceptions, explains the importance of examining stakeholder theory from a variety of international perspectives and how this type of research will advance management theory, and introduces the other articles in the special issue. Design/methodology/approach – Some of the foundational ideas of stakeholder theory are discussed, leading to arguments about the importance of the theory to management research, especially in an international context. Findings – Stakeholder theory is found to be a particularly useful perspective for addressing some of the important issues in business from an international perspective. It offers an opportunity to reinterpret a variety of concepts, models and phenomena across many different disciplines. Practical implications – The concepts explored in this article may be applied in many contexts, domestically and internationally, and across business disciplines as diverse as economics, public administration, finance, philosophy, marketing, law, and management. Originality/value – Research on stakeholder theory in an international context is both lacking and sorely needed. This article and the others in this special issue aim to help fill that void.

  16. Multidisciplinary Management: Model of Excellence in the Management Applied to Products and Services

    OpenAIRE

    Guerreiro , Evandro ,; Costa Neto , Pedro ,; Moreira Filho , Ulysses ,

    2014-01-01

    Part 1: Knowledge-Based Performance Improvement; International audience; The Multidisciplinary Management is the guiding vision of modern organizations and the systems thinking which requires new approaches to organizational excellence and quality management process. The objective of this article is to present a model for multidisciplinary management of quality applied to products and services based on American, Japanese, and Brazilian National Quality Awards. The methodology used to build th...

  17. Models, validation, and applied geochemistry: Issues in science, communication, and philosophy

    Science.gov (United States)

    Nordstrom, D. Kirk

    2012-01-01

    Models have become so fashionable that many scientists and engineers cannot imagine working without them. The predominant use of computer codes to execute model calculations has blurred the distinction between code and model. The recent controversy regarding model validation has brought into question what we mean by a ‘model’ and by ‘validation.’ It has become apparent that the usual meaning of validation may be common in engineering practice and seems useful in legal practice but it is contrary to scientific practice and brings into question our understanding of science and how it can best be applied to such problems as hazardous waste characterization, remediation, and aqueous geochemistry in general. This review summarizes arguments against using the phrase model validation and examines efforts to validate models for high-level radioactive waste management and for permitting and monitoring open-pit mines. Part of the controversy comes from a misunderstanding of ‘prediction’ and the need to distinguish logical from temporal prediction. Another problem stems from the difference in the engineering approach contrasted with the scientific approach. The reductionist influence on the way we approach environmental investigations also limits our ability to model the interconnected nature of reality. Guidelines are proposed to improve our perceptions and proper utilization of models. Use of the word ‘validation’ is strongly discouraged when discussing model reliability.

  18. Blood transfusion determines postoperative morbidity in pediatric cardiac surgery applying a comprehensive blood-sparing approach.

    Science.gov (United States)

    Redlin, Matthias; Kukucka, Marian; Boettcher, Wolfgang; Schoenfeld, Helge; Huebler, Michael; Kuppe, Hermann; Habazettl, Helmut

    2013-09-01

    Recently we suggested a comprehensive blood-sparing approach in pediatric cardiac surgery that resulted in no transfusion in 71 infants (25%), postoperative transfusion only in 68 (24%), and intraoperative transfusion in 149 (52%). We analyzed the effects of transfusion on postoperative morbidity and mortality in the same cohort of patients. The effect of transfusion on the length of mechanical ventilation and intensive care unit stay was assessed using Kaplan-Meier curves. To assess whether transfusion independently determined the length of mechanical ventilation and length of intensive care unit stay, a multivariate model was applied. Additionally, in the subgroup of transfused infants, the effect of the applied volume of packed red blood cells was assessed. The median length of mechanical ventilation was 11 hours (interquartile range, 9-18 hours), 33 hours (interquartile range, 18-80 hours), and 93 hours (interquartile range, 34-161 hours) in the no transfusion, postoperative transfusion only, and intraoperative transfusion groups, respectively (P < .00001). The corresponding median lengths of intensive care unit stay were 1 day (interquartile range, 1-2 days), 3.5 days (interquartile range, 2-5 days), and 8 days (interquartile range, 3-9 days; P < .00001). The multivariate hazard ratio for early extubation was 0.24 (95% confidence interval, 0.16-0.35) and 0.37 (95% confidence interval, 0.25-0.55) for the intraoperative transfusion and postoperative transfusion only groups, respectively (P < .00001). In addition, cardiopulmonary bypass time, body weight, need for reoperation, and hemoglobin during cardiopulmonary bypass affected the length of mechanical ventilation. Similar results were obtained for the length of intensive care unit stay. In the subgroup of transfused infants, the volume of packed red blood cells also independently affected both the length of mechanical ventilation and the length of intensive care unit stay. The incidence and volume of blood transfusion markedly affect postoperative morbidity in pediatric cardiac surgery. These…
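
    The Kaplan-Meier curves used in the study can be illustrated with a minimal estimator. The ventilation times below are invented (hours, with one censored observation), not the cohort data: the survival probability drops at each observed event time by the fraction of at-risk patients extubated then.

```python
import numpy as np

def kaplan_meier(times, events):
    """times: observation times; events: 1 = event observed, 0 = censored."""
    times, events = np.asarray(times), np.asarray(events)
    surv, out_t, out_s = 1.0, [], []
    for u in np.unique(times):
        d = np.sum((times == u) & (events == 1))  # events at time u
        at_risk = np.sum(times >= u)              # at risk just before u
        if d > 0:
            surv *= 1.0 - d / at_risk
            out_t.append(int(u))
            out_s.append(surv)
    return out_t, out_s

# Illustrative ventilation times; the last patient is censored at 80 h.
km_t, km_s = kaplan_meier([11, 9, 18, 33, 18, 80], [1, 1, 1, 1, 1, 0])
```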

  19. Simulation of Road Traffic Applying Model-Driven Engineering

    Directory of Open Access Journals (Sweden)

    Alberto FERNÁNDEZ-ISABEL

    2016-05-01

    Full Text Available Road traffic is an important phenomenon in modern societies. The study of its different aspects in the multiple scenarios where it happens is relevant for a huge number of problems. At the same time, its scale and complexity make it hard to study. Traffic simulations can alleviate these difficulties, simplifying the scenarios to consider and controlling their variables. However, their development also presents difficulties. The main ones come from the need to integrate the way of working of researchers and developers from multiple fields. Model-Driven Engineering (MDE) addresses these problems using Modelling Languages (MLs) and semi-automatic transformations to organise and describe the development, from requirements to code. This paper presents a domain-specific MDE framework for simulations of road traffic. It comprises an extensible ML, support tools, and development guidelines. The ML adopts an agent-based approach, which is focused on the roles of individuals in road traffic and their decision-making. A case study shows the process to model a traffic theory with the ML, and how to specialise that specification for an existing target platform and its simulations. The results are the basis for comparison with related work.

  20. A theoretical intellectual capital model applied to cities

    Directory of Open Access Journals (Sweden)

    José Luis Alfaro Navarro

    2013-06-01

    Full Text Available New Management Information Systems (MIS) are necessary at the local level as the main source of wealth creation. Therefore, tools and approaches that provide a full future vision of any organization should be a strategic priority for economic development. In this line, cities are “centers of knowledge and sources of growth and innovation”, and integrated urban development policies are necessary. These policies support communication networks and optimize location structures as strategies that provide opportunities for social and democratic participation for the citizens. This paper proposes a theoretical model to measure and evaluate cities' intellectual capital, allowing us to determine what must be taken into account to make cities a source of wealth, prosperity, welfare and future growth. Furthermore, local intellectual capital provides a long-run vision. Thus, in this paper we develop and explain how to implement a model to estimate intellectual capital in cities. In this sense, our proposal is to provide a model for measuring and managing intellectual capital using socio-economic indicators for cities. These indicators offer a long-term picture supported by a comprehensive strategy for those who occupy the local space, infrastructure for implementation and management of the environment for its development.

  1. Learning Action Models: Qualitative Approach

    NARCIS (Netherlands)

    Bolander, T.; Gierasimczuk, N.; van der Hoek, W.; Holliday, W.H.; Wang, W.-F.

    2015-01-01

    In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying the learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite…

  2. Applying the health action process approach to bicycle helmet use and evaluating a social marketing campaign.

    Science.gov (United States)

    Karl, Florian M; Smith, Jennifer; Piedt, Shannon; Turcotte, Kate; Pike, Ian

    2017-08-05

    Bicycle injuries are of concern in Canada. Since helmet use was mandated in 1996 in the province of British Columbia, Canada, use has increased and head injuries have decreased. Despite the law, many cyclists do not wear a helmet. The health action process approach (HAPA) model explains intention and behaviour through self-efficacy, risk perception, outcome expectancy and planning constructs. The present study examines the impact of a social marketing campaign on HAPA constructs in the context of bicycle helmet use. A questionnaire was administered to identify factors determining helmet use. Intention to obey the law, and perceived risk of being caught if not obeying the law, were included as additional constructs. Path analysis was used to extract the strongest influences on intention and behaviour. The social marketing campaign was evaluated through t-test comparisons after propensity score matching, and generalised linear modelling (GLM) was applied to adjust for the same covariates. A total of 400 cyclists aged 25-54 years completed the questionnaire. Self-efficacy was most predictive of intention to wear a helmet, which, moderated by planning, strongly predicted behaviour. Perceived risk and outcome expectancies had no significant impact on intention. GLM showed that exposure to the campaign was significantly associated with higher values in self-efficacy, intention and bicycle helmet use. Self-efficacy and planning are important points of action for promoting helmet use. Social marketing campaigns that remind people of appropriate preventive action have an impact on behaviour. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
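
    The matched comparison described above can be sketched on synthetic data. For simplicity the matching below is done on the age covariate directly rather than on a fitted propensity score, and the exposure effect, sample size and outcome model are all invented:

```python
import numpy as np
from scipy import stats

# Simulate cyclists: age, campaign exposure, and an outcome (e.g. a
# self-efficacy score) that depends on both age and exposure.
rng = np.random.default_rng(1)
n = 200
age = rng.uniform(25, 54, n)
exposed = rng.random(n) < 0.5
outcome = 0.05 * age + 1.0 * exposed + rng.normal(0.0, 1.0, n)

treated = np.where(exposed)[0]
controls = np.where(~exposed)[0]
# Nearest-neighbour matching (with replacement) on age.
matched = np.array([controls[np.argmin(np.abs(age[controls] - age[i]))]
                    for i in treated])

# Paired t-test on the matched outcomes.
t_stat, p_value = stats.ttest_rel(outcome[treated], outcome[matched])
effect = outcome[treated].mean() - outcome[matched].mean()
```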

  3. Lessons learned for applying a paired-catchment approach in drought analysis

    Science.gov (United States)

    Van Loon, Anne; Rangecroft, Sally; Coxon, Gemma; Agustín Breña Naranjo, José; Van Ogtrop, Floris; Croghan, Danny; Van Lanen, Henny

    2017-04-01

    Ongoing research is looking to quantify the human impact on hydrological drought using observed data. One potentially suitable method is the paired-catchment approach. Paired catchments have been successfully used for quantifying the impact of human actions (e.g. forest treatment and wildfires) on various components of a catchment's water balance. However, it is unclear whether this method can successfully be applied to drought. In this study, we used a paired-catchment approach to quantify the effects of reservoirs, groundwater abstraction and urbanisation on hydrological drought in the UK, Mexico, and Australia. Following recommendations in the literature, we undertook a thorough catchment selection and identified catchments of similar size, climate, geology, and topography, where one catchment of each pair was affected by either reservoirs, groundwater abstraction or urbanisation. For the selected catchment pairs, we standardised streamflow time series to catchment area, calculated a drought threshold from the natural catchment and applied it to the human-influenced catchment. The underlying assumption is that differences in drought severity between the catchments can then be attributed to the anthropogenic activity. In some catchments we had local knowledge about human influences, and could therefore compare our paired-catchment results with hydrological model scenarios. However, we found that detailed data on human influences are usually not well recorded. The results showed that it is important to account for the variation in average annual precipitation between the paired catchments in order to transfer the drought threshold of the natural catchment to the human-influenced catchment. This can be achieved by scaling the discharge by the difference in annual average precipitation.
    We also found that the temporal distribution of precipitation is important, because if meteorological droughts differ between the paired catchments, this may mask changes caused…
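
    The scaling step described above can be sketched in one line, with invented numbers: a drought threshold derived from the natural catchment is transferred to the human-influenced catchment by correcting for the difference in mean annual precipitation between the pair.

```python
# Illustrative values only: a standardized discharge threshold from the
# natural catchment, and the mean annual precipitation of each catchment.
natural_threshold = 12.0                 # standardized discharge units
p_natural, p_influenced = 800.0, 600.0   # mean annual precipitation, mm

# Scale the threshold by the ratio of mean annual precipitation before
# applying it to the human-influenced catchment.
scaled_threshold = natural_threshold * (p_influenced / p_natural)
```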

  4. modeling, observation and control, a multi-model approach

    OpenAIRE

    Elkhalil, Mansoura

    2011-01-01

    This thesis is devoted to the control of systems whose dynamics can be suitably described by a multimodel approach, starting from an investigation of performance enhancement for model reference adaptive control. Four multimodel control approaches have been proposed. The first approach is based on an output reference model control design. A successful experimental validation involving a chemical reactor has been carried out. The second approach is based on a suitable partial state model reference ...

  5. Applying a Bayesian Approach to Identification of Orthotropic Elastic Constants from Full Field Displacement Measurements

    Directory of Open Access Journals (Sweden)

    Le Riche R.

    2010-06-01

    Full Text Available A major challenge in the identification of material properties is handling the different sources of uncertainty in the experiment and in the modelling of the experiment, and estimating the resulting uncertainty in the identified properties. Numerous improvements in identification methods have provided increasingly accurate estimates of various material properties. However, characterizing the uncertainty in the identified properties is still relatively crude. Different material properties obtained from a single test are not obtained with the same confidence; typically the highest uncertainty is associated with the properties to which the experiment is least sensitive. In addition, the uncertainty in different properties can be strongly correlated, so that obtaining only variance estimates may be misleading. A possible approach for handling the different sources of uncertainty and estimating the uncertainty in the identified properties is the Bayesian method. This method was introduced in the late 1970s in the context of identification [1] and has been applied since to different problems, notably identification of elastic constants from plate vibration experiments [2]-[4]. The applications of the method to these classical pointwise tests involved only a small number of measurements (typically ten natural frequencies in the previously cited vibration tests), which facilitated the application of the Bayesian approach. For identifying elastic constants, full field strain or displacement measurements provide a high number of measured quantities (one measurement per image pixel) and hence a promise of smaller uncertainties in the properties. However, the high number of measurements also represents a major computational challenge in applying the Bayesian approach to full field measurements. To address this challenge we propose an approach based on the proper orthogonal decomposition (POD) of the full fields in order to drastically reduce their…
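
    The POD reduction proposed above can be sketched with synthetic data (not the authors' displacement fields): stack full-field snapshots as columns, take the SVD, and keep the dominant modes so the Bayesian identification can work on a few POD coefficients instead of one value per pixel.

```python
import numpy as np

# Build synthetic rank-3 "full field" data: three underlying spatial
# modes combined with random coefficients over 20 snapshots.
rng = np.random.default_rng(0)
true_modes = rng.normal(size=(10_000, 3))   # 10,000 "pixels", 3 fields
coeffs = rng.normal(size=(3, 20))           # 20 snapshots
snapshots = true_modes @ coeffs             # pixels x snapshots matrix

# POD via the singular value decomposition.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :3]                            # truncated POD basis
reduced = basis.T @ snapshots               # 3 coefficients per snapshot

# Rank-3 data is recovered (to round-off) from only 3 modes.
rel_err = (np.linalg.norm(snapshots - basis @ reduced)
           / np.linalg.norm(snapshots))
```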

  6. A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen

    2007-01-01

    This paper presents a decision support system for the assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertainties…

  7. Uncharted territory: A complex systems approach as an emerging paradigm in applied linguistics

    Directory of Open Access Journals (Sweden)

    Weideman, Albert J

    2009-12-01

    Full Text Available Developing a theory of applied linguistics is a top priority for the discipline today. The emergence of a new paradigm - a complex systems approach - in applied linguistics presents us with a unique opportunity to give prominence to the development of a foundational framework for this design discipline. Far from being a mere philosophical exercise, such a framework will find application in the training and induction of new entrants into the discipline within the developing context of South Africa, as well as internationally.

  8. Applying an integrated fuzzy gray MCDM approach: A case study on mineral processing plant site selection

    Directory of Open Access Journals (Sweden)

    Ezzeddin Bakhtavar

    2017-12-01

    Full Text Available The accurate selection of a processing plant site can result in a decreased total mining cost. This problem can be solved by multi-criteria decision-making (MCDM) methods. This research introduces a new approach that integrates fuzzy AHP and gray MCDM methods to solve such decision-making problems. The approach is applied in the case of a copper mine area. The critical criteria considered are adjacency to the crusher, adjacency to the tailings dam, adjacency to a power source, distance from blasting sources, the availability of sufficient land, and safety against floods. After studying the mine map, six feasible alternatives are prioritized using the integrated approach. Results indicated that sites A, B, and E take the first three ranks. The separate results of fuzzy AHP and gray MCDM confirm that alternatives A and B have the first two ranks. Moreover, the field investigations confirmed the results obtained by the approach.
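
    A gray-relational ranking of alternatives, one ingredient of hybrid approaches like the one above, can be sketched as follows. The decision matrix and weights are invented for illustration, and this is plain gray relational analysis, not the paper's exact fuzzy-AHP/gray hybrid:

```python
import numpy as np

# Rows = alternative sites, columns = benefit criteria (higher is better).
X = np.array([[0.8, 0.7, 0.9],    # site A
              [0.9, 0.6, 0.7],    # site B
              [0.5, 0.8, 0.6]])   # site C
w = np.array([0.5, 0.3, 0.2])     # criteria weights (e.g. from fuzzy AHP)

ref = X.max(axis=0)               # ideal reference alternative
delta = np.abs(X - ref)           # distance to the ideal, per criterion
rho = 0.5                         # distinguishing coefficient

# Gray relational coefficients and weighted gray relational grades.
coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grades = coef @ w
ranking = np.argsort(-grades)     # indices of sites, best first
```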

  9. Applying the archetype approach to the database of a biobank information management system.

    Science.gov (United States)

    Späth, Melanie Bettina; Grimson, Jane

    2011-03-01

    The purpose of this study is to investigate the feasibility of applying the openEHR archetype approach to modelling the data in the database of an existing proprietary biobank information management system. A biobank information management system stores the clinical/phenotypic data of the sample donor and sample related information. The clinical/phenotypic data is potentially sourced from the donor's electronic health record (EHR). The study evaluates the reuse of openEHR archetypes that have been developed for the creation of an interoperable EHR in the context of biobanking, and proposes a new set of archetypes specifically for biobanks. The ultimate goal of the research is the development of an interoperable electronic biomedical research record (eBMRR) to support biomedical knowledge discovery. The database of the prostate cancer biobank of the Irish Prostate Cancer Research Consortium (PCRC), which supports the identification of novel biomarkers for prostate cancer, was taken as the basis for the modelling effort. First the database schema of the biobank was analyzed and reorganized into archetype-friendly concepts. Then, archetype repositories were searched for matching archetypes. Some existing archetypes were reused without change, some were modified or specialized, and new archetypes were developed where needed. The fields of the biobank database schema were then mapped to the elements in the archetypes. Finally, the archetypes were arranged into templates specifically to meet the requirements of the PCRC biobank. A set of 47 archetypes was found to cover all the concepts used in the biobank. Of these, 29 (62%) were reused without change, 6 were modified and/or extended, 1 was specialized, and 11 were newly defined. These archetypes were arranged into 8 templates specifically required for this biobank. A number of issues were encountered in this research. Some arose from the immaturity of the archetype approach, such as immature modelling support tools

  10. Global energy modeling - A biophysical approach

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
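
    A dynamic EROI function with technological learning, of the kind outlined above, can be sketched as follows. The functional form and parameter values are assumptions for illustration, not the paper's calibrated model: a power-law learning term raises EROI with cumulative output, while a depletion term lowers it as resource quality declines.

```python
def eroi(cumulative_output, eroi0=10.0, learning_exp=0.2, depletion_rate=0.01):
    """Illustrative dynamic EROI: learning gains vs. resource depletion."""
    learning = cumulative_output ** learning_exp          # technology gains
    depletion = 1.0 / (1.0 + depletion_rate * cumulative_output)
    return eroi0 * learning * depletion
```

    With these assumed parameters, EROI first rises with cumulative output while learning dominates, then falls as depletion takes over, which is the qualitative behaviour such scenario models explore.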

  11. Learning Actions Models: Qualitative Approach

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2015-01-01

    We consider two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power: they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning methods suited for finite identifiability of particular types of deterministic actions.

  12. Applying Quality Function Deployment Model in Burn Unit Service Improvement.

    Science.gov (United States)

    Keshtkaran, Ali; Hashemi, Neda; Kharazmi, Erfan; Abbasi, Mehdi

    2016-01-01

    Quality function deployment (QFD) is one of the most effective quality design tools. This study applies QFD technique to improve the quality of the burn unit services in Ghotbedin Hospital in Shiraz, Iran. First, the patients' expectations of burn unit services and their priorities were determined through Delphi method. Thereafter, burn unit service specifications were determined through Delphi method. Further, the relationships between the patients' expectations and service specifications and also the relationships between service specifications were determined through an expert group's opinion. Last, the final importance scores of service specifications were calculated through simple additive weighting method. The findings show that burn unit patients have 40 expectations in six different areas. These expectations are in 16 priority levels. Burn units also have 45 service specifications in six different areas. There are four-level relationships between the patients' expectations and service specifications and four-level relationships between service specifications. The most important burn unit service specifications have been identified in this study. The QFD model developed in the study can be a general guideline for QFD planners and executives.
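
    The simple additive weighting step described above can be illustrated with an invented miniature example (3 expectations by 3 service specifications; the study used 40 expectations and 45 specifications): the final importance of each specification is the priority-weighted sum of its relationship strengths with the patients' expectations.

```python
import numpy as np

# Relationship matrix: rows = patient expectations, columns = service
# specifications, entries on QFD's usual 0/1/3/9 strength scale.
R = np.array([[9, 3, 0],
              [3, 9, 1],
              [1, 3, 9]])
w = np.array([0.5, 0.3, 0.2])  # expectation priority weights (sum to 1)

# Simple additive weighting: final importance score per specification.
scores = w @ R
top_spec = int(np.argmax(scores))
```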

  13. Applied genre analysis: a multi-perspective model

    Directory of Open Access Journals (Sweden)

    Vijay K Bhatia

    2002-04-01

    Full Text Available Genre analysis can be viewed from two different perspectives: it may be seen as a reflection of the complex realities of the world of institutionalised communication, or it may be seen as a pedagogically effective and convenient tool for the design of language teaching programmes, often situated within simulated contexts of classroom activities. This paper makes an attempt to understand and resolve the tension between these two seemingly contentious perspectives to answer the question: "Is generic description a reflection of reality, or a convenient fiction invented by applied linguists?". The paper also discusses issues related to the nature and use of linguistic description in a genre-based educational enterprise, claiming that instead of using generic descriptions as models for linguistic reproduction of conventional forms to respond to recurring social contexts, as is often the case in many communication based curriculum contexts, they can be used as analytical resource to understand and manipulate complex inter-generic and multicultural realisations of professional discourse, which will enable learners to use generic knowledge to respond to novel social contexts and also to create new forms of discourse to achieve pragmatic success as well as other powerful human agendas.

  14. Clutter Suppression by Means of Digital MTI as Applied to Precision Approach Radar

    Science.gov (United States)

    1974-12-01

    Clutter suppression by means of digital MTI as applied to precision approach radar. …signal, but rather a number related to its amplitude and frequency. A separate digitized number of this type is provided for each range resolution cell. …radar return signals are weighted and summed. This weighting operation corresponds to processing the signals through a non-recursive digital filter.

  15. Reynolds stress turbulence model applied to two-phase pressurized thermal shocks in nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Mérigoux, Nicolas, E-mail: nicolas.merigoux@edf.fr; Laviéville, Jérôme; Mimouni, Stéphane; Guingo, Mathieu; Baudry, Cyril

    2016-04-01

    Highlights: • NEPTUNE-CFD is used to model two-phase PTS. • The k-ε model did produce some satisfactory results but also highlighted some weaknesses. • A more advanced turbulence model has been developed, validated and applied for PTS. • Coupled with LIM, the first results confirmed the increased accuracy of the approach. - Abstract: Nuclear power plants are subjected to a variety of ageing mechanisms and, at the same time, exposed to potential pressurized thermal shock (PTS) – characterized by a rapid cooling of the internal Reactor Pressure Vessel (RPV) surface. In this context, NEPTUNE-CFD is used to model two-phase PTS and give an assessment of the structural integrity of the RPV. The first available choice was to use a standard first-order turbulence model (k-ε) to model the high-Reynolds-number flows encountered in Pressurized Water Reactor (PWR) primary circuits. In a first attempt, the use of the k-ε model did produce some satisfactory results in terms of condensation rate and temperature field distribution on integral experiments, but it also highlighted some weaknesses in the modelling of highly anisotropic turbulence. One way to improve the turbulence prediction – and consequently the temperature field distribution – is to opt for a more advanced Reynolds Stress turbulence Model. After various verification and validation steps on separate-effects cases – co-current air/steam-water stratified flows in rectangular channels, water jet impingements on water pool free surfaces – this Reynolds Stress turbulence Model (R_ij-ε SSG) has been applied for the first time to thermal free surface flows under industrial conditions on the COSI and TOPFLOW-PTS experiments. Coupled with the Large Interface Model, the first results confirmed the adequacy and increased accuracy of the approach in an industrial context.

  16. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of a separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and arguing why we…

  17. A strategy to apply a graded approach to a new research reactor I and C design

    International Nuclear Information System (INIS)

    Suh, Yong Suk; Park, Jae Kwan; Kim, Taek Kyu; Bae, Sang Hoon; Baang, Dane; Kim, Young Ki

    2012-01-01

    A project for the development of a new research reactor (NRR) was launched by KAERI in 2012. It has two purposes: 1) providing a facility for radioisotope production, neutron transmutation doping, and semiconductor wafer doping, and 2) obtaining a standard model for exporting a research reactor (RR). The instrumentation and control (I and C) design should present an architecture appropriate for the NRR export. The adoption of a graded approach (GA) was taken into account in designing the I and C architecture. Although the GA for RRs is currently under development by the IAEA, it has been recommended and applied in many areas of nuclear facilities. The Canadian Nuclear Safety Commission allows the use of a GA for RRs to meet the safety requirements. Germany applied the GA to a decommissioning project, categorizing the level of complexity of the decommissioning project using the GA. In the case of 10 C.F.R. Part 830, Section 830.7, a contractor must use a GA to implement the requirements of the part, document the basis of the GA used, and submit that document to the U.S. DOE. It mentions that one challenge is the inconsistent application of the GA across DOE programs. RG 1.176 states that graded quality assurance brings benefits of resource allocation based on the safety significance of the items. The U.S. NRC also applied the GA to decommissioning small facilities. NASA published a handbook for risk-informed decision making conducted using a GA. ISA-TR67.04.09-2005 supplements ANSI/ISA-67.04.01-2000 and ISA-RP67.04.02-2000 in determining setpoints using a GA. The GA is defined as a risk-informed approach that, without compromising safety, allows safety requirements to be implemented in such a way that the level of design, analysis, and documentation is commensurate with the potential risks of the reactor. The IAEA is developing a GA through DS351 and has recommended applying it to a reactor design according to power and hazard level. Owing to the wide range of RR

  18. International Conference on Applied Mathematics, Modeling and Computational Science & Annual meeting of the Canadian Applied and Industrial Mathematics

    CERN Document Server

    Bélair, Jacques; Kunze, Herb; Makarov, Roman; Melnik, Roderick; Spiteri, Raymond J

    2016-01-01

    Focusing on five main groups of interdisciplinary problems, this book covers a wide range of topics in mathematical modeling, computational science and applied mathematics. It presents a wealth of new results in the development of modeling theories and methods, advancing diverse areas of applications and promoting interdisciplinary interactions between mathematicians, scientists, engineers and representatives from other disciplines. The book offers a valuable source of methods, ideas, and tools developed for a variety of disciplines, including the natural and social sciences, medicine, engineering, and technology. Original results are presented on both the fundamental and applied level, accompanied by an ample number of real-world problems and examples emphasizing the interdisciplinary nature and universality of mathematical modeling, and providing an excellent outline of today’s challenges. Mathematical modeling, with applied and computational methods and tools, plays a fundamental role in modern science a...

  19. Dynamic model reduction using data-driven Loewner-framework applied to thermally morphing structures

    Science.gov (United States)

    Phoenix, Austin A.; Tarazaga, Pablo A.

    2017-05-01

    The work herein proposes the use of the data-driven Loewner framework for reduced order modeling, applied to dynamic Finite Element Models (FEM) of thermally morphing structures. The Loewner-based modeling approach is computationally efficient and accurately constructs reduced models using analytical output data from a FEM. This paper details the two-step process of the Loewner approach. First, a random vibration FEM simulation is used as the input for the development of a Single Input Single Output (SISO) data-based dynamic Loewner state space model. Second, an SVD-based truncation is applied to the Loewner state space model, such that a minimal, dynamically representative state space model is achieved. For this second step, varying levels of reduction are generated and compared. The method can be extended to model generation from experimental measurements by replacing the FEM output data in the first step and following the same procedure. It is demonstrated on two thermally morphing structures: a rigidly fixed hexapod in multiple geometric configurations, and a low mass anisotropic morphing boom. The paper details the method and identifies the benefits of the reduced-model methodology.
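
    The two-step process above can be sketched on a toy problem. The following is a minimal illustration, not the paper's implementation: an assumed first-order transfer function H(s) = 1/(s + 1) stands in for the FEM frequency-response data, the Loewner and shifted-Loewner matrices are assembled from sampled data, and an SVD reveals the minimal order for truncation.

    ```python
    import numpy as np

    # Toy stand-in for FEM output data: an order-1 SISO transfer function.
    H = lambda s: 1.0 / (s + 1.0)

    mu  = 1j * np.array([0.1, 0.5, 1.0, 2.0])   # left interpolation points
    lam = 1j * np.array([0.2, 0.7, 1.5, 3.0])   # right interpolation points
    v, w = H(mu), H(lam)                        # sampled responses

    # Step 1: Loewner (L) and shifted-Loewner (Ls) matrices from the data.
    L  = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
    Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / (mu[:, None] - lam[None, :])

    # Step 2: SVD-based truncation to the numerical rank r of L.
    Y, sv, Xh = np.linalg.svd(L)
    r = int(np.sum(sv > 1e-10 * sv[0]))         # = 1 for this order-1 system
    Yr, Xr = Y[:, :r], Xh[:r, :].conj().T

    Er, Ar = Yr.conj().T @ L @ Xr, Yr.conj().T @ Ls @ Xr
    Br, Cr = Yr.conj().T @ v, w @ Xr

    def H_reduced(s):
        # Reduced descriptor realization: H_r(s) = Cr (Ar - s Er)^{-1} Br
        return (Cr @ np.linalg.solve(Ar - s * Er, Br)).item()
    ```

    Because the numerical rank of L matches the underlying system order, the truncated realization reproduces H even at frequencies outside the sample sets.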

  20. An approach for evaluating the integrity of fuel applied in Innovative Nuclear Energy Systems

    International Nuclear Information System (INIS)

    Nakae, Nobuo; Ozawa, Takayuki; Ohta, Hirokazu; Ogata, Takanari; Sekimoto, Hiroshi

    2014-01-01

    One of the important issues in the study of Innovative Nuclear Energy Systems is evaluating the integrity of the fuel applied in them. An approach for evaluating fuel integrity is discussed here, based on the procedure currently used in the integrity evaluation of fast reactor fuel. The fuel failure modes determining fuel lifetime were reviewed, and fuel integrity was analyzed and compared with the failure criteria. Metal and nitride fuels with austenitic and ferritic stainless steel (SS) cladding tubes were examined in this study. For representative irradiation behavior analyses of the fuel for Innovative Nuclear Energy Systems, correlations for the cladding characteristics were modeled based on the well-known characteristics of austenitic modified 316 SS (PNC316), ferritic–martensitic steel (PNC–FMS) and oxide dispersion strengthened steel (PNC–ODS). The analysis showed that, in the case of austenitic steel cladding, fuel lifetime is limited by channel fracture, a non-ductile (brittle) failure mode associated with a high level of irradiation-induced swelling. In the case of ferritic steel, on the other hand, fuel lifetime is controlled by cladding creep rupture. The lifetime evaluated here is limited to 200 GW d/t, which is lower than the target burnup value of 500 GW d/t. Possible measures to extend the lifetime include reducing the fuel smeared density and ventilating fission gas in the plenum for metal fuel, and reducing the maximum cladding temperature from 650 to 600 °C for both metal and nitride fuel.

  1. Challenges and opportunities for integrating lake ecosystem modelling approaches

    Science.gov (United States)

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  2. Szekeres models: a covariant approach

    Science.gov (United States)

    Apostolopoulos, Pantelis S.

    2017-05-01

    We exploit the 1 + 1 + 2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an average scale length can be defined covariantly, which satisfies a 2d equation of motion driven by the effective gravitational mass (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field E_ab. We show that the quasi-symmetric property of the Szekeres models is justified through the existence of 3 independent intrinsic Killing vector fields (IKVFs). In addition, the notions of the apparent and absolute apparent horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used to express Sachs' optical equations in covariant form and to analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with observational data.

  3. Cosmological Model A New Approach

    Directory of Open Access Journals (Sweden)

    Francisco Martínez Flores

    2015-08-01

    Full Text Available ABSTRACT It is shown, making use of Special Relativity and applying the Doppler Effect, that the motion of galaxies is not radial but transversal. Linking relativistic energy with the Doppler Effect, we may explain that the Cosmic Background Radiation is produced by a sufficiently large number of distant galaxies located in accordance with the requirement of homogeneity and isotropy of the Universe. The existence of dark matter can be understood by distinguishing between a real or inertial mass, responsible for Newtonian Mechanics and Gravitation, and a virtual, electromagnetic relativistic mass, which is acceptable to Quantum Theory. The so-called black holes and the cosmic scale factor do not follow from a correct interpretation of the Schwarzschild and Robertson-Walker metrics respectively, which, together with the inability to quantize Gravitation, introduces more than reasonable doubts about the reliability of the General Theory. The Universe does not expand but is in a steady state, which can only be explained in the context of Quantum Theory.

  4. The development of a curved beam element model applied to finite elements method

    International Nuclear Information System (INIS)

    Bento Filho, A.

    1980-01-01

    A procedure for evaluating the stiffness matrix of a thick curved beam element is developed by means of the minimum potential energy principle, applied to finite elements. The displacement field is prescribed through polynomial expansions, and the interpolation model is determined by comparing results obtained with a sample of different expansions. As a limiting case of the curved beam, three cases of straight beams with different dimensional ratios are analysed employing the proposed approach. Finally, an interpolation model is proposed and applied to a curved beam with large curvature. Displacements and internal stresses are determined and the results are compared with those found in the literature. (Author) [pt
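
    As a point of comparison for the straight-beam limiting cases mentioned above, the classical Euler-Bernoulli beam element stiffness matrix, which follows from the same minimum potential energy principle with cubic displacement interpolation, can be sketched as follows; the material and geometry values are illustrative only, not taken from the paper.

    ```python
    import numpy as np

    # 2-node Euler-Bernoulli beam element, DOFs per node: (deflection, rotation).
    def beam_stiffness(E, I, L):
        k = E * I / L**3
        return k * np.array([[ 12.0,  6*L,   -12.0,  6*L  ],
                             [ 6*L,   4*L*L, -6*L,   2*L*L],
                             [-12.0, -6*L,    12.0, -6*L  ],
                             [ 6*L,   2*L*L, -6*L,   4*L*L]])

    # One-element cantilever: clamp node 1, apply tip load P at node 2.
    E, I, L, P = 210e9, 1.0e-6, 2.0, 1000.0
    K = beam_stiffness(E, I, L)
    d = np.linalg.solve(K[2:, 2:], np.array([P, 0.0]))
    tip_deflection = d[0]   # matches the textbook value P*L**3 / (3*E*I)
    ```

    A single element already reproduces the exact cantilever tip deflection, because the cubic interpolation contains the exact solution for end loading.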

  5. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    Multiple model approaches have appeal in building systems which operate robustly over a wide range of operating conditions, by decomposing them into a number of simpler linear modelling or control problems, even for nonlinear modelling or control problems. This appeal has been a factor in the development of increasingly popular 'local' methods, which have been applied to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning. The underlying question is 'How should we partition the system - what is "local"?'. This book presents alternative ways of bringing submodels together, which lead to varying levels of performance and insight. Some are further developed for autonomous learning of parameters from data, while others have focused…

  6. Modeling software behavior a craftsman's approach

    CERN Document Server

    Jorgensen, Paul C

    2009-01-01

    A common problem with most texts on requirements specifications is that they emphasize structural models to the near exclusion of behavioral models, focusing on what the software is rather than what it does. If they do cover behavioral models, the coverage is brief and usually focused on a single model. Modeling Software Behavior: A Craftsman's Approach provides detailed treatment of various models of software behavior that support early analysis, comprehension, and model-based testing. It is based on the popular and continually evolving course on requirements specification models taught by the author.

  7. A new approach to modeling aviation accidents

    Science.gov (United States)

    Rao, Arjun Harsha

    The proposed approach views aviation accidents as a set of hazardous states of a system (pilot and aircraft), and triggers that cause the system to move between hazardous states. I used the NTSB's accident coding manual (which contains nearly 4000 different codes) to develop a "dictionary" of hazardous states, triggers, and information codes. Then, I created the "grammar", or set of rules, that: (1) orders the hazardous states in each accident; and (2) links the hazardous states using the appropriate triggers. This approach: (1) provides a more correct count of the causes for accidents in the NTSB database; and (2) checks for gaps or omissions in NTSB accident data, and fills in some of these gaps using logic-based rules. These rules also help identify and count causes for accidents that were not discernible from previous analyses of historical accident data. I apply the model to 6200 helicopter accidents that occurred in the US between 1982 and 2015. First, I identify the states and triggers that are most likely to be associated with fatal and non-fatal accidents. The results suggest that non-fatal accidents, which account for approximately 84% of the accidents, provide valuable opportunities to learn about the causes of accidents. Next, I investigate the causes of in-flight loss of control (LOC) using both a conventional approach and the state-based approach. The conventional analysis provides little insight into the causal mechanism for LOC. For instance, the top cause of LOC is "aircraft control/directional control not maintained", which does not provide any insight. In contrast, the state-based analysis showed that pilots' tendency to clip objects frequently triggered LOC (16.7% of LOC accidents); this finding was not directly discernible from conventional analyses. Finally, I investigate the causes of improper autorotations using both a conventional approach and the state-based approach. The conventional approach uses modifiers (e.g., "improper", "misjudged") associated with "24520
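
    The state-and-trigger representation described above can be sketched as a chain of hazardous states linked by triggers; counting the triggers that lead into a given state then becomes a simple traversal. All state and trigger names below are hypothetical stand-ins, not actual NTSB codes.

    ```python
    from collections import Counter

    # Each accident is an ordered chain of (hazardous state, outgoing trigger);
    # the state and trigger names are invented for illustration.
    accidents = [
        [("cruise", "object_clipped"), ("loss_of_control", "impact"), ("crash", None)],
        [("low_altitude_ops", "object_clipped"), ("loss_of_control", "impact"), ("crash", None)],
        [("cruise", "engine_failure"), ("autorotation", "misjudged_flare"), ("hard_landing", None)],
    ]

    # Count which triggers most often lead *into* the loss-of-control state.
    into_loc = Counter()
    for chain in accidents:
        for (state, trigger), (nxt, _) in zip(chain, chain[1:]):
            if nxt == "loss_of_control":
                into_loc[trigger] += 1

    print(into_loc.most_common(1))   # [('object_clipped', 2)]
    ```

    The same traversal, run over the full coded dataset, is what would surface a finding like "clipped objects frequently trigger LOC".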

  8. Applying nonlinear MODM model to supply chain management with quantity discount policy under complex fuzzy environment

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2014-06-01

    Full Text Available Purpose: The aim of this paper is to deal with supply chain management (SCM) with a quantity discount policy under a complex fuzzy environment, characterized by bi-fuzzy variables. Taking into account the strategy and the process of decision making, a bi-fuzzy nonlinear multiple objective decision making (MODM) model is presented to solve the proposed problem. Design/methodology/approach: The bi-fuzzy variables in the MODM model are transformed into trapezoidal fuzzy variables by the DMs' degrees of optimism ?1 and ?2, which are subsequently defuzzified by the expected value index. For solving the complex nonlinear model, a multi-objective adaptive particle swarm optimization algorithm (MO-APSO) is designed as the solution method. Findings: The proposed model and algorithm are applied to a typical example of an SCM problem to illustrate their effectiveness. Based on a sensitivity analysis of the results, the bi-fuzzy nonlinear MODM SCM model is shown to be sensitive to the possibility level ?1. Practical implications: The study focuses on SCM under a complex fuzzy environment, which has great practical significance. The bi-fuzzy MODM model and MO-APSO can therefore be further applied to SCM problems with quantity discount policies. Originality/value: The bi-fuzzy variable is employed in the nonlinear MODM model of SCM to characterize the hybrid uncertain environment, and this work is original. In addition, a hybrid crisp approach is proposed to transform the model into an equivalent crisp one via the DMs' degrees of optimism and the expected value index. Since the MODM model considers the bi-fuzzy environment and the quantity discount policy, the paper has great practical significance.
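
    The paper's MO-APSO is multi-objective and adaptive; as a rough sketch of the underlying swarm mechanics it builds on, a minimal single-objective PSO looks like the following. All parameter values (inertia w, accelerations c1/c2, bounds) are conventional defaults, not the authors' settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Minimal single-objective particle swarm optimizer: each particle tracks
    # its personal best, and all particles are attracted to the global best.
    def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
        x = rng.uniform(-5.0, 5.0, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pval = x.copy(), np.array([f(p) for p in x])
        g = pbest[pval.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            val = np.array([f(p) for p in x])
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()].copy()
        return g, float(pval.min())
    ```

    Minimizing the sphere function f(x) = Σ x_i² drives the swarm toward the origin; the multi-objective variant replaces the single global best with an archive of non-dominated solutions.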

  9. System Behavior Models: A Survey of Approaches

    Science.gov (United States)

    2016-06-01

    A comparison of the approaches produced identical state space results. The combined state space graph of the Petri model allowed a quick assessment of all potential states, but was more cumbersome to build than the MP model.

  10. The possibilities of applying a risk-oriented approach to the NPP reliability and safety enhancement problem

    Science.gov (United States)

    Komarov, Yu. A.

    2014-10-01

    An analysis and some generalizations of approaches to risk assessments are presented. Interconnection between different interpretations of the "risk" notion is shown, and the possibility of applying the fuzzy set theory to risk assessments is demonstrated. A generalized formulation of the risk assessment notion is proposed in applying risk-oriented approaches to the problem of enhancing reliability and safety in nuclear power engineering. The solution of problems using the developed risk-oriented approaches aimed at achieving more reliable and safe operation of NPPs is described. The results of studies aimed at determining the need (advisability) to modernize/replace NPP elements and systems are presented together with the results obtained from elaborating the methodical principles of introducing the repair concept based on the equipment technical state. The possibility of reducing the scope of tests and altering the NPP systems maintenance strategy is substantiated using the risk-oriented approach. A probabilistic model for estimating the validity of boric acid concentration measurements is developed.

  11. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Full Text Available Abstract Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
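
    Of the dynamic-model categories listed above, the simplest to sketch is a discrete (Boolean) network, in which each gene is on or off and all genes update synchronously. The three-gene rules below are invented for illustration, not taken from the article.

    ```python
    # Toy discrete (Boolean) gene regulatory network: three genes, synchronous
    # update. Each rule says how a gene's next state depends on the others.
    def step(state):
        a, b, c = state
        return (not c, a and not b, a or b)   # invented regulatory logic

    state = (True, False, False)
    trajectory = [state]
    for _ in range(7):
        state = step(state)
        trajectory.append(state)
    # The dynamics settle into a length-4 cycle (a discrete attractor).
    ```

    Even this tiny example exhibits the attractor behaviour that discrete network models are used to study; continuous and hybrid models (such as the Finite State Linear Model mentioned above) refine this picture with real-valued dynamics.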

  12. Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation

    Science.gov (United States)

    Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill

    2012-06-01

    Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious, and automatic image analysis can expedite this task. Segmenting a WCE video into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes, using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of segments, models prior knowledge. This prior knowledge, together with inter-frame differences, serves as the global constraint driven by the underlying observations of each WCE video, which is fitted by a Gaussian distribution to constrain the transition probabilities of the hidden Markov model. Experimental results demonstrated the effectiveness of the approach.
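
    The HMM decoding step can be sketched as follows, assuming a 4-state left-to-right topology for the four GI-tract segments. The transition values and toy emission scores are illustrative, and the paper's Poisson/Gaussian global constraints are omitted for brevity.

    ```python
    import numpy as np

    # Left-to-right HMM: a capsule can stay in a segment or advance one segment.
    A = np.array([[0.95, 0.05, 0.0,  0.0 ],
                  [0.0,  0.95, 0.05, 0.0 ],
                  [0.0,  0.0,  0.95, 0.05],
                  [0.0,  0.0,  0.0,  1.0 ]])
    pi = np.array([1.0, 0.0, 0.0, 0.0])   # videos start in the first segment

    def viterbi(emission_logp):
        """Most likely state path given per-frame log-scores, shape (T, 4)."""
        T, n = emission_logp.shape
        logA = np.log(A + 1e-300)
        d = np.log(pi + 1e-300) + emission_logp[0]
        back = np.zeros((T, n), dtype=int)
        for t in range(1, T):
            cand = d[:, None] + logA          # cand[i, j]: best path ending i -> j
            back[t] = cand.argmax(axis=0)
            d = cand.max(axis=0) + emission_logp[t]
        path = [int(d.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]
    ```

    Feeding the per-frame SVM scores (as log-probabilities) into `viterbi` yields a segmentation that respects the left-to-right anatomy even when individual frames are misclassified.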

  13. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Directory of Open Access Journals (Sweden)

    Nadia Said

    Full Text Available Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  14. A new approach for structural health monitoring by applying anomaly detection on strain sensor data

    Science.gov (United States)

    Trichias, Konstantinos; Pijpers, Richard; Meeuwissen, Erik

    2014-03-01

    Structural Health Monitoring (SHM) systems help to monitor critical infrastructures (bridges, tunnels, etc.) remotely and provide up-to-date information about their physical condition. In addition, they help to predict the structure's life and required maintenance in a cost-efficient way. Typically, inspection data gives insight into structural health. The global structural behavior, and predominantly the structural loading, is generally measured with vibration and strain sensors. Acoustic emission sensors are increasingly used for measuring global crack activity near critical locations. In this paper, we present a procedure for local structural health monitoring by applying Anomaly Detection (AD) on data from strain sensors placed in the expected crack path. Sensor data is analyzed by automatic anomaly detection in order to find crack activity at an early stage. This approach targets the monitoring of critical structural locations, such as welds, near which strain sensors can be applied during construction, and/or locations with limited inspection possibilities during structural operation. We investigate several anomaly detection techniques to detect changes in statistical properties that indicate structural degradation. The most effective is a novel polynomial fitting technique, which tracks slow changes in sensor data. Our approach has been tested on a representative test structure (bridge deck) in a lab environment, under constant and variable amplitude fatigue loading. In both cases, the evolving cracks at the monitored locations were successfully detected, autonomously, by our AD monitoring tool.
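
    The polynomial-fitting idea can be sketched as follows: fit a low-order polynomial to each sliding window of strain samples and use the residual spread as an anomaly score. The window size, polynomial degree and synthetic signal are assumptions made for illustration, not the authors' tuned values.

    ```python
    import numpy as np

    # Sliding-window polynomial-fit anomaly score: a smooth strain trend fits
    # a low-order polynomial well, while a crack-induced change does not.
    def poly_anomaly_scores(x, window=50, degree=2):
        t = np.arange(window)
        scores = np.zeros(len(x) - window + 1)
        for i in range(len(scores)):
            seg = x[i:i + window]
            coeffs = np.polyfit(t, seg, degree)
            scores[i] = (seg - np.polyval(coeffs, t)).std()  # residual spread
        return scores

    # Smooth "load" signal with an injected jump at sample 200 (crack onset).
    x = np.sin(np.linspace(0.0, 3.0, 300))
    x[200:] += 0.5
    scores = poly_anomaly_scores(x)
    # Windows straddling the jump fit poorly, so the score peaks near sample 200.
    ```

    In practice a detection threshold would be calibrated on data from the undamaged structure, so that slow environmental drift is absorbed by the polynomial while abrupt changes are flagged.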

  15. Evaluation model applied to TRANSPETRO's Marine Terminals Standardization Program

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Maria Fatima Ludovico de; Mueller, Gabriela [Pontificia Universidade Catolica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, RJ (Brazil). Instituto Tecnologico; Garcia, Luciano Maldonado [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper describes an innovative evaluation model applied to TRANSPETRO's 'Marine Terminals Standardization Program', based on up-to-date approaches to program evaluation and organizational learning. Since the program was launched in 2004, the need for an evaluation model able to track its implementation progress, to measure the degree of standards compliance and to assess its potential economic, social and environmental impacts has become evident. Within a vision of safe and environmentally responsible operation of marine terminals, this evaluation model was jointly designed by TRANSPETRO and PUC-Rio to promote continuous improvement and learning in operational practices and in the standardization process itself. TRANSPETRO believes that standardization supports its services and management innovation capability by creating objective and internationally recognized parameters, targets and metrology for its business activities. The conceptual model and application guidelines for this important tool are presented in this paper, as well as the next steps towards its implementation. (author)

  16. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    Energy Technology Data Exchange (ETDEWEB)

    Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica

    2015-07-01

    In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, delayed neutron precursor concentrations and temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the subsequent intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness of the equations. In this way, one varies the time step size of the Polynomial Approach Method and performs an analysis of the precision and computational time. Moreover, we compare the method with different orders (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
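
    The power-series idea can be sketched for the simplest case: one delayed-neutron group, constant reactivity and no temperature feedback. The parameter values below are illustrative, not those of the paper. On each short interval, the kinetics equations generate the series coefficients recursively, and analytic continuation restarts the series from the previous endpoint.

    ```python
    # Point kinetics, one delayed-neutron group, constant reactivity, no feedback:
    #   dn/dt = ((rho - beta)/Lambda) n + lambda C
    #   dC/dt = (beta/Lambda) n - lambda C
    rho, beta, lam, Lam = 0.003, 0.0065, 0.08, 1.0e-4   # illustrative parameters

    def step(n, C, h, order=6):
        # Series coefficients a_k = n^(k)(0)/k!, b_k = C^(k)(0)/k!, generated
        # from the ODEs: (k+1) a_{k+1} = ((rho-beta)/Lam) a_k + lam b_k, etc.
        a, b = [n], [C]
        for k in range(order):
            a.append((((rho - beta) / Lam) * a[k] + lam * b[k]) / (k + 1))
            b.append(((beta / Lam) * a[k] - lam * b[k]) / (k + 1))
        n_new = sum(ak * h**k for k, ak in enumerate(a))
        C_new = sum(bk * h**k for k, bk in enumerate(b))
        return n_new, C_new

    # March to t = 0.1 s from equilibrium precursors, restarting the series
    # at each step (analytic continuation).
    n, C = 1.0, beta / (lam * Lam)
    for _ in range(1000):
        n, C = step(n, C, 1.0e-4)
    ```

    Because each step re-expands the solution locally, the series order and step size can be traded off against each other, which is the precision/cost analysis the abstract describes.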

  17. The Lag Model Applied to High Speed Flows

    Science.gov (United States)

    Olsen, Michael E.; Coakley, Thomas J.; Lillard, Randolph P.

    2005-01-01

    The Lag model has shown great promise in the prediction of low speed and transonic separations. The predictions of the model, along with those of other models (Spalart-Allmaras and Menter SST), are assessed for various high speed flowfields. In addition to skin friction and separation predictions, predictions of heat transfer are compared among these models, and some fundamental building-block flowfields are investigated.

  18. Creating patient value in glaucoma care : applying quality costing and care delivery value chain approaches

    NARCIS (Netherlands)

    D.F. de Korne (Dirk); J.C.A. Sol (Kees); T. Custers (Thomas); E. van Sprundel (Esther); B.M. van Ineveld (Martin); H.G. Lemij (Hans); N.S. Klazinga (Niek)

    2009-01-01

    textabstractPurpose: The purpose of this paper is to explore in a specific hospital care process the applicability in practice of the theories of quality costing and value chains. Design/methodology/approach: In a retrospective case study an in-depth evaluation of the use of a quality cost model

  19. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both continuous and quantal data, facilitating benchmark dose estimation in general for a wide range of candidate models commonly used in toxicology. Moreover, the proposed framework provides a convenient means for extending benchmark dose concepts through the use of model averaging and random effects modeling, and it provides slightly conservative, yet useful, estimates of the benchmark dose lower limit under realistic scenarios.

  20. A multicriteria decision making approach applied to improving maintenance policies in healthcare organizations.

    Science.gov (United States)

    Carnero, María Carmen; Gómez, Andrés

    2016-04-23

    Healthcare organizations have far greater maintenance needs for their medical equipment than other organizations, as much of this equipment is used directly with patients. However, the literature on asset management in healthcare organizations is very limited. The aim of this research is to provide a more rational application of maintenance policies, leading to an increase in quality of care. This article describes a multicriteria decision-making approach which integrates Markov chains with the multicriteria Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH) to facilitate the best choice of a combination of maintenance policies using the judgements of a multi-disciplinary decision group. The proposed approach takes into account the level of acceptance that a given alternative would have among professionals, as well as criteria related to cost, quality of care and impact of care cover. This multicriteria approach is applied to four dialysis subsystems: patients infected with hepatitis C, patients infected with hepatitis B, acute and chronic; in all cases, the maintenance strategy obtained consists of applying corrective and preventive maintenance plus two reserve machines. The added value in decision-making practices from this research comes from: (i) integrating the use of Markov chains to obtain the alternatives to be assessed by a multicriteria methodology; (ii) proposing the use of MACBETH to make rational decisions on asset management in healthcare organizations; (iii) applying the multicriteria approach to select a set or combination of maintenance policies in four dialysis subsystems of a healthcare organization. In the proposed multicriteria decision-making approach, economic criteria have been used, related to the quality of care desired for patients (availability), and the acceptance each alternative would have considering the maintenance and healthcare resources existing in the organization, with the inclusion of a

  1. Model predictive control based on reduced order models applied to belt conveyor system.

    Science.gov (United States)

    Chen, Wei; Li, Xin

    2016-11-01

    In this paper, a model predictive controller based on a reduced-order model is proposed to control a belt conveyor system, an electro-mechanical complex system with a long visco-elastic body. First, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Second, an MPC algorithm based on the reduced-order model of the belt conveyor system is presented. Because of the error bound between the full-order and reduced-order models, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that balanced truncation can significantly reduce the model order with high accuracy, and that model predictive control based on the reduced model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
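
    Balanced truncation, the reduction method named above, keeps the states that are jointly most controllable and observable. A minimal sketch of the square-root algorithm for a stable LTI system follows; the toy matrices are illustrative and are not the paper's conveyor model:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

    def balanced_truncation(A, B, C, r):
        """Reduce a stable LTI system (A, B, C) to order r, keeping the
        states with the largest Hankel singular values."""
        # Gramians: A P + P A^T = -B B^T  and  A^T Q + Q A = -C^T C
        P = solve_continuous_lyapunov(A, -B @ B.T)
        Q = solve_continuous_lyapunov(A.T, -C.T @ C)
        # Square-root method: an SVD of the Cholesky-factor product balances the system.
        Lp = cholesky(P, lower=True)
        Lq = cholesky(Q, lower=True)
        U, s, Vt = svd(Lq.T @ Lp)                 # s = Hankel singular values
        S = np.diag(1.0 / np.sqrt(s[:r]))
        T = Lp @ Vt[:r].T @ S                     # right projection
        Ti = S @ U[:, :r].T @ Lq.T                # left projection (Ti @ T = I)
        return Ti @ A @ T, Ti @ B, C @ T, s

    # Toy two-state system with one fast, weakly coupled mode to discard.
    A = np.array([[-1.0, 0.0], [0.0, -50.0]])
    B = np.array([[1.0], [0.1]])
    C = np.array([[1.0, 0.1]])
    Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=1)
    ```

    The discarded Hankel singular values bound the H-infinity error of the reduction; this is the kind of model-reduction error the abstract's Kalman estimators are there to compensate for.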

  2. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  3. Validation of Modeling Flow Approaching Navigation Locks

    Science.gov (United States)

    2013-08-01

    USACE, Pittsburgh District (LRP) requested that the US Army Engineer Research and Development Center, Coastal and Hydraulics Laboratory (ERDC/CHL) … approaching the lock and dam. The second set of experiments considered a design, referred to as the Plan B lock approach, which contained the weir field in … conditions and model parameters. A discharge of 1.35 cfs was set as the inflow boundary condition at the upstream end of the model. The outflow boundary was

  4. Robust Decision-making Applied to Model Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Laboratory

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate ever-increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework is adopted anchored in info-gap decision theory. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  5. Developing and applying heterogeneous phylogenetic models with XRate.

    Directory of Open Access Journals (Sweden)

    Oscar Westesson

    Full Text Available Modeling sequence evolution on phylogenetic trees is a useful technique in computational biology. Especially powerful are models which take account of the heterogeneous nature of sequence evolution according to the "grammar" of the encoded gene features. However, beyond a modest level of model complexity, manual coding of models becomes prohibitively labor-intensive. We demonstrate, via a set of case studies, the new built-in model-prototyping capabilities of XRate (macros and Scheme extensions). These features allow rapid implementation of phylogenetic models which would previously have been far more labor-intensive. XRate's new capabilities for lineage-specific models, ancestral sequence reconstruction, and improved annotation output are also discussed. XRate's flexible model-specification capabilities and computational efficiency make it well suited to developing and prototyping phylogenetic grammar models. XRate is available as part of the DART software package: http://biowiki.org/DART.

  6. Empirical modeling and data analysis for engineers and applied scientists

    CERN Document Server

    Pardo, Scott A

    2016-01-01

    This textbook teaches advanced undergraduate and first-year graduate students in Engineering and Applied Sciences to gather and analyze empirical observations (data) in order to aid in making design decisions. While science is about discovery, the primary paradigm of engineering and "applied science" is design. Scientists are in the discovery business and want, in general, to understand the natural world rather than to alter it. In contrast, engineers and applied scientists design products, processes, and solutions to problems. That said, statistics, as a discipline, is mostly oriented toward the discovery paradigm. Young engineers come out of their degree programs having taken courses such as "Statistics for Engineers and Scientists" without any clear idea as to how they can use statistical methods to help them design products or processes. Many seem to think that statistics is only useful for demonstrating that a device or process actually does what it was designed to do. Statistics courses emphasize creati...

  7. Metamodelling Approach and Software Tools for Physical Modelling and Simulation

    Directory of Open Access Journals (Sweden)

    Vitaliy Mezhuyev

    2015-02-01

    Full Text Available In computer science, the metamodelling approach is becoming more and more popular for software systems development. In this paper, we discuss the applicability of the metamodelling approach to the development of software tools for physical modelling and simulation. To define a metamodel for physical modelling, an analysis of physical models is carried out. The results of this analysis show the invariant physical structures that we propose to use as the basic abstractions of the physical metamodel. It is a system of geometrical objects, allowing one to build a spatial structure of physical models and to set a distribution of physical properties. To such a geometry of distributed physical properties, different mathematical methods can be applied. To validate the proposed metamodelling approach, we consider the developed prototypes of software tools.

  8. Pure and Applied: Christopher Clavius’s Unifying Approach to Jesuit Mathematics Pedagogy

    OpenAIRE

    Price, Audrey

    2017-01-01

    This dissertation examines the pedagogical project of Christopher Clavius (1538-1612) as a key step in the development of modern mathematics. In it, I show that Clavius united two contemporary approaches to mathematics: one that saw the field as an abstract way of discovering universal truths, and one that saw the field as an art, that is a tool for practical purposes. To do so, he combined pure and applied mathematics throughout his textbooks. The union of mathematics as a science and mat...

  9. An integrated modeling approach to age invariant face recognition

    Science.gov (United States)

    Alvi, Fahad Bashir; Pears, Russel

    2015-03-01

    This research study proposes a novel method for face recognition based on anthropometric features that makes use of an integrated approach comprising a global model and personalized models. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers individual aging patterns, while a global model captures general aging patterns in the database. We introduce a de-aging factor that de-ages each individual in the database test and training sets. We used the k-nearest-neighbor approach for building the personalized and global models, and regression analysis was applied to fit them. During the test phase, we resort to voting on different features. We used the FG-Net database for checking the results of our technique and achieved a 65 percent Rank-1 identification rate.

  10. Effects produced by oscillations applied to nonlinear dynamic systems: a general approach and examples

    DEFF Research Database (Denmark)

    Blekhman, I. I.; Sorokin, V. S.

    2016-01-01

    A general approach to study effects produced by oscillations applied to nonlinear dynamic systems is developed. It implies a transition from the initial governing equations of motion to much simpler equations describing only the main slow component of motion (the vibro-transformed dynamics equations). The approach is named oscillatory strobodynamics, since motions are perceived as if under a stroboscopic light. The vibro-transformed dynamics equations comprise terms that capture the averaged effect of oscillations. The method of direct separation of motions appears to be an efficient and simple tool to derive these equations. A modification of the method applicable to problems that do not imply restrictions on the spectrum of excitation frequencies is proposed. It also allows one to abandon other restrictions usually introduced when employing the classical asymptotic methods…

  11. Adequateness of applying the Zmijewski model on Serbian companies

    Directory of Open Access Journals (Sweden)

    Pavlović Vladan

    2012-12-01

    Full Text Available The aim of this paper is to determine the predictive accuracy of the Zmijewski model in Serbia on an eligible sample. At the same time, the paper identifies the model's strengths, weaknesses, and the limitations of its possible application. Bearing in mind that the economic environment in Serbia is not similar to that of the United States at the time the model was developed, the Zmijewski model is surprisingly accurate in the case of Serbian companies. The accuracy was slightly weaker than the model's results in the U.S. in its original form, but much better than the results the model gave in the U.S. in the periods 1988-1991 and 1992-1999. The model also gave better results in Serbia than in Croatia, even though in Croatia the model was adjusted.

  12. Preconcentration modeling for the optimization of a micro gas preconcentrator applied to environmental monitoring.

    Science.gov (United States)

    Camara, Malick; Breuil, Philippe; Briand, Danick; Viricelle, Jean-Paul; Pijolat, Christophe; de Rooij, Nico F

    2015-04-21

    This paper presents the optimization of a micro gas preconcentrator (μ-GP) system applied to atmospheric pollution monitoring, with the help of a complete modeling of the preconcentration cycle. Two different approaches based on kinetic equations are used to illustrate the behavior of the micro gas preconcentrator for given experimental conditions. The need for high adsorption flow and heating rate and for low desorption flow and detection volume is demonstrated in this paper. Preliminary to this optimization, the preconcentration factor is discussed and a definition is proposed.

  13. Applying a synthetic approach to the resilience of Finnish reindeer herding as a changing livelihood

    Directory of Open Access Journals (Sweden)

    Simo Sarkki

    2016-12-01

    Full Text Available Reindeer herding is an emblematic livelihood for Northern Finland, culturally important for local people and valuable in tourism marketing. We examine the livelihood resilience of Finnish reindeer herding by narrowing the focus of general resilience on social-ecological systems (SESs) to a specific livelihood, while also acknowledging the wider contexts in which reindeer herding is embedded. The questions for specified resilience can be combined with the applied DPSIR approach (Drivers; Pressures: resilience to what; State: resilience of what; Impacts: resilience for whom; Responses: resilience by whom and how). This paper is based on a synthesis of the authors' extensive anthropological fieldwork on reindeer herding and other land uses in Northern Finland. Our objective is to synthesize the various opportunities and challenges that underpin the resilience of reindeer herding as a viable livelihood. The DPSIR approach, applied here as a three-step procedure, helps focus the analysis on different components of the SES and their dynamic interactions. First, various land-use-related DPSIR factors and their relations (synergies and trade-offs) to reindeer herding are mapped. Second, detailed DPSIR factors underpinning the resilience of reindeer herding are identified. Third, examples of interrelations between DPSIR factors are explored, revealing the key dynamics between Pressures, State, Impacts, and Responses related to the livelihood resilience of reindeer herding. In the Discussion section, we recommend that future applications of the DPSIR approach in examining livelihood resilience should (1) address cumulative pressures, (2) consider the state dimension as more tuned toward the social side of the SES, (3) assess both the negative and positive impacts of environmental change on the examined livelihood by a combination of science-led top-down and participatory bottom-up approaches, and (4) examine and propose governance solutions as well as local adaptations by

  14. Applying the Job Characteristics Model to the College Education Experience

    Science.gov (United States)

    Kass, Steven J.; Vodanovich, Stephen J.; Khosravi, Jasmine Y.

    2011-01-01

    Boredom is one of the most common complaints among university students, with studies suggesting its link to poor grades, drop out, and behavioral problems. Principles borrowed from industrial-organizational psychology may help prevent boredom and enrich the classroom experience. In the current study, we applied the core dimensions of the job…

  15. Applying Orem's self care model in empowering secondary school ...

    African Journals Online (AJOL)

    The purpose of this study was to apply the Orem's self care theory in empowering secondary school girls' knowledge and attitudes towards contraception in Thulamela municipality of Limpopo Province, South Africa. A quantitative descriptive study design was used and respondents were selected by means of convenience ...

  16. A suggested approach to applying IAEA safeguards to plutonium in weapons components

    International Nuclear Information System (INIS)

    Lu, M.S.; Allentuck, J.

    1998-01-01

    It is the announced policy of the United States to make fissile material removed from its nuclear weapons stockpile subject to the US-IAEA voluntary safeguards agreement. Much of this material is plutonium in the form of pits. The application of traditional IAEA safeguards would reveal Restricted Data to unauthorized persons, which is prohibited by US law and international treaties. Prior to the availability of a facility for converting the plutonium in the pits to a non-sensitive form, this obvious long-term solution to the problem is foreclosed. An alternative near-term approach to applying IAEA safeguards while preserving the necessary degree of confidentiality is required. This paper identifies such an approach. It presents in detail the form of the US declaration; the safeguards objectives which are met; the inspection techniques which are utilized; and the conclusion which the IAEA could reach concerning the contents of each item and the aggregate of all items. The approach would reveal the number of containers and the aggregate mass of plutonium in a set of n containers presented to the IAEA for verification, while protecting data on the isotopic composition and plutonium mass of individual components. The suggested approach provides for traceability from the time the containers are sealed until the conversion of the plutonium to a non-sensitive form

  17. Hybrid approaches to physiologic modeling and prediction

    Science.gov (United States)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
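
    The linear (autoregressive) variant of this hybrid idea can be sketched generically: fit an AR model to the residuals between observations and the first-principles prediction, then use it to forecast and correct the next residual. This is an illustrative simplification, not the paper's exact formulation:

    ```python
    import numpy as np

    def hybrid_correct(physics_pred, observed, order=2):
        """Fit an AR(order) model to the residuals of a first-principles
        prediction and return a one-step-ahead residual forecast, which
        would be added to the next physics-based prediction."""
        r = np.asarray(observed) - np.asarray(physics_pred)   # residual series
        # Least-squares AR fit: r[t] ~ a1*r[t-1] + ... + ap*r[t-p]
        X = np.column_stack(
            [r[order - i - 1 : len(r) - i - 1] for i in range(order)]
        )
        a, *_ = np.linalg.lstsq(X, r[order:], rcond=None)
        return float(a @ r[-1 : -order - 1 : -1])             # forecast r[t+1]
    ```

    In the paper's setting the "physics prediction" is the core-temperature model output and the residuals come from each subject's rectal-temperature data; the names above are generic placeholders.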

  18. Applying theory-driven approaches to understanding and modifying clinicians' behavior: what do we know?

    Science.gov (United States)

    Perkins, Matthew B; Jensen, Peter S; Jaccard, James; Gollwitzer, Peter; Oettingen, Gabriele; Pappadopulos, Elizabeth; Hoagwood, Kimberly E

    2007-03-01

    Despite major recent research advances, large gaps exist between accepted mental health knowledge and clinicians' real-world practices. Although hundreds of studies have successfully utilized basic behavioral science theories to understand, predict, and change patients' health behaviors, the extent to which these theories, most notably the theory of reasoned action (TRA) and its extension, the theory of planned behavior (TPB), have been applied to understand and change clinician behavior is unclear. This article reviews the application of theory-driven approaches to understanding and changing clinician behaviors. MEDLINE and PsycINFO databases were searched, along with bibliographies, textbooks on health behavior or public health, and references from experts, to find article titles that describe theory-driven approaches (TRA or TPB) to understanding and modifying health professionals' behavior. A total of 19 articles detailing 20 studies described the use of TRA or TPB and clinicians' behavior. Eight articles describe the use of TRA or TPB with physicians, four relate to nurses, three relate to pharmacists, and two relate to health workers. Only two articles applied TRA or TPB to mental health clinicians. The body of work shows that different constructs of TRA or TPB predict intentions and behavior among different groups of clinicians and for different behaviors and guidelines. The number of studies on this topic is extremely limited, but they offer a rationale and a direction for future research as well as a theoretical basis for increasing the specificity and efficiency of clinician-targeted interventions.

  19. Challenges and Limitations of Applying an Emotion-driven Design Approach on Elderly Users

    DEFF Research Database (Denmark)

    Andersen, Casper L.; Gudmundsson, Hjalte P.; Achiche, Sofiane

    2011-01-01

    Population ageing is without parallel in human history, and the twenty-first century will witness even more rapid ageing than did the century just past. Understanding the user needs of the elderly and how to design better products for this segment of the population is crucial, as it can offer a competitive advantage for companies. In this paper, the challenges of applying an emotion-driven design approach to elderly people, in order to identify their user needs towards walking frames, are discussed. The discussion is based on the experiences and results obtained from a case study. To measure the emotional responses of the elderly, a questionnaire was designed and adapted from P.M.A. Desmet's product-emotion measurement instrument, PrEmo. During the case study it was observed that there were several challenges when carrying out the user survey, and that those challenges particularly related to the participants' age and cognitive abilities. The challenges encountered are discussed, and guidelines on what should be taken into account to facilitate an emotion-driven design approach for elderly people are proposed.

  20. Applying Model Checking to Industrial-Sized PLC Programs

    CERN Document Server

    AUTHOR|(CDS)2079190; Darvas, Daniel; Blanco Vinuela, Enrique; Tournier, Jean-Charles; Bliudze, Simon; Blech, Jan Olaf; Gonzalez Suarez, Victor M

    2015-01-01

    Programmable logic controllers (PLCs) are embedded computers widely used in industrial control systems. Ensuring that PLC software complies with its specification is a challenging task. Formal verification has become a recommended practice to ensure the correctness of safety-critical software but is still underused in industry due to the complexity of building and managing formal models of real applications. In this paper, we propose a general methodology to perform automated model checking of complex properties expressed in temporal logics (e.g., CTL, LTL) on PLC programs. This methodology is based on an intermediate model (IM), meant to transform PLC programs written in various standard languages (ST, SFC, etc.) to the different modeling languages of verification tools. We present the syntax and semantics of the IM and the transformation rules from the ST and SFC languages to the nuXmv model checker, passing through the intermediate model. Finally, two real case studies of CERN PLC programs, written mainly in th...

  1. Polarimetric SAR interferometry applied to land ice: modeling

    DEFF Research Database (Denmark)

    Dall, Jørgen; Papathanassiou, Konstantinos; Skriver, Henning

    2004-01-01

    This paper introduces a few simple scattering models intended for the application of polarimetric SAR interferometry to land ice. The principal aim is to eliminate the penetration bias hampering ice sheet elevation maps generated with single-channel SAR interferometry. The polarimetric coherent scattering models are similar to the oriented-volume model and the random-volume-over-ground model used in vegetation studies, but the ice models are adapted to the different geometry of land ice. Also, due to compaction, land ice is not uniform, a fact that must be taken into account for large penetration depths. The validity of the scattering models is examined using L-band polarimetric interferometric SAR data acquired with the EMISAR system over an ice cap located in the percolation zone of the Greenland ice sheet. Radar reflectors were deployed on the ice surface prior to the data acquisition in order...

  2. Applied exposure modeling for residual radioactivity and release criteria

    International Nuclear Information System (INIS)

    Lee, D.W.

    1989-01-01

    The protection of public health and the environment from the release of materials with residual radioactivity for recycle or disposal as wastes without radioactive contents of concern presents a formidable challenge. Existing regulatory criteria are based on technical judgment concerning detectability and simple modeling. Recently, exposure modeling methodologies have been developed to provide a more consistent level of health protection. Release criteria derived from the application of exposure modeling methodologies share the same basic elements of analysis but are developed to serve a variety of purposes. Models for the support of regulations for all applications rely on conservative interpretations of generalized conditions while models developed to show compliance incorporate specific conditions not likely to be duplicated at other sites. Research models represent yet another type of modeling which strives to simulate the actual behavior of released material. In spite of these differing purposes, exposure modeling permits the application of sound and reasoned principles of radiation protection to the release of materials with residual levels of radioactivity. Examples of the similarities and differences of these models are presented and an application to the disposal of materials with residual levels of uranium contamination is discussed. 5 refs., 2 tabs

  3. Calcium dependent plasticity applied to repetitive transcranial magnetic stimulation with a neural field model.

    Science.gov (United States)

    Wilson, M T; Fung, P K; Robinson, P A; Shemmell, J; Reynolds, J N J

    2016-08-01

    The calcium dependent plasticity (CaDP) approach to the modeling of synaptic weight change is applied using a neural field approach to realistic repetitive transcranial magnetic stimulation (rTMS) protocols. A spatially-symmetric nonlinear neural field model consisting of populations of excitatory and inhibitory neurons is used. The plasticity between excitatory cell populations is then evaluated using a CaDP approach that incorporates metaplasticity. The direction and size of the plasticity (potentiation or depression) depends on both the amplitude of stimulation and duration of the protocol. The breaks in the inhibitory theta-burst stimulation protocol are crucial to ensuring that the stimulation bursts are potentiating in nature. Tuning the parameters of a spike-timing dependent plasticity (STDP) window with a Monte Carlo approach to maximize agreement between STDP predictions and the CaDP results reproduces a realistically-shaped window with two regions of depression in agreement with the existing literature. Developing understanding of how TMS interacts with cells at a network level may be important for future investigation.
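
    The kind of spike-timing dependent plasticity window being tuned can be illustrated with the classic two-exponential form. The parameter values below are illustrative placeholders, and the window fitted in the paper differs by showing a second region of depression at large positive lags:

    ```python
    import math

    def stdp_window(dt_ms, a_plus=0.8, a_minus=0.4, tau_plus=15.0, tau_minus=30.0):
        """Weight change as a function of spike-timing difference
        dt = t_post - t_pre (ms): potentiation when the presynaptic spike
        leads the postsynaptic one, depression when it lags."""
        if dt_ms >= 0:
            return a_plus * math.exp(-dt_ms / tau_plus)       # pre before post
        return -a_minus * math.exp(dt_ms / tau_minus)         # post before pre
    ```

    Monte Carlo tuning, as described above, amounts to sampling candidate parameter sets (`a_plus`, `tau_plus`, ...) and keeping those whose predicted weight changes best match the CaDP results.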

  4. FAILURES AND DEFECTS IN THE BUILDING PROCESS – APPLYING THE BOW-TIE APPROACH

    DEFF Research Database (Denmark)

    Jørgensen, Kirsten

    2009-01-01

    site was observed from the very start to the very end, and all failures and defects of a certain size were recorded and analysed. The methodological approach used in this analysis was the bow-tie model from the area of safety research. It combines critical-event analysis for both causes and effects … with event-tree analysis. The paper describes this analytical approach as an introduction to a new concept for understanding failures and defects in construction. Analysing the many critical events in the building process with the bow-tie model visualises the complexity of causes. This visualisation offers the possibility of a much more direct and focused discussion of what needs doing, by whom and when, not only to reduce the number of defects in the final product, but also to make the building process flow much better and reduce the need for damage control.

  5. Surface-bounded growth modeling applied to human mandibles

    DEFF Research Database (Denmark)

    Andresen, Per Rønsholt; Brookstein, F. L.; Conradsen, Knut

    2000-01-01

    automatically using shape features and a new algorithm called geometry-constrained diffusion. The semilandmarks are mapped into Procrustes space. Principal component analysis extracts a one-dimensional subspace, which is used to construct a linear growth model. The worst case mean modeling error in a cross...

  6. Geographically Weighted Logistic Regression Applied to Credit Scoring Models

    Directory of Open Access Journals (Sweden)

    Pedro Henrique Melo Albuquerque

    Full Text Available This study used real data from a Brazilian financial institution on transactions involving Consumer Direct Credit (CDC) granted to clients residing in the Distrito Federal (DF) to construct credit scoring models via Logistic Regression and Geographically Weighted Logistic Regression (GWLR) techniques. The aims were: to verify whether the factors that influence credit risk differ according to the borrower's geographic location; to compare the set of models estimated via GWLR with the global model estimated via Logistic Regression, in terms of predictive power and financial losses for the institution; and to verify the viability of using the GWLR technique to develop credit scoring models. The metrics used to compare the models developed via the two techniques were the AICc informational criterion, the accuracy of the models, the percentage of false positives, the sum of the value of false-positive debt, and the expected monetary value of portfolio default compared with the monetary value of defaults observed. The models estimated for each region in the DF were distinct in their variables and coefficients (parameters), leading to the conclusion that credit risk was influenced differently in each region in the study. The Logistic Regression and GWLR methodologies presented very close results in terms of predictive power and financial losses for the institution, and the study demonstrated the viability of using the GWLR technique to develop credit scoring models for the target population in the study.
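
    The core of GWLR is an ordinary logistic regression whose observations are down-weighted by their distance from a focal location, so each region gets its own coefficients. A minimal sketch using a Gaussian kernel and weighted Newton (IRLS) steps; the function name, kernel choice, and parameters are illustrative assumptions, not the study's implementation:

    ```python
    import numpy as np

    def gwlr_fit(X, y, coords, focal, bandwidth, iters=25):
        """Fit a logistic regression localized at `focal`: each observation
        is weighted by a Gaussian kernel of its distance to the focal point,
        then standard weighted Newton (IRLS) steps are taken."""
        d = np.linalg.norm(coords - focal, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)       # spatial kernel weights
        Xb = np.column_stack([np.ones(len(y)), X])    # prepend an intercept
        beta = np.zeros(Xb.shape[1])
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-Xb @ beta))      # fitted probabilities
            grad = Xb.T @ (w * (y - p))               # weighted score
            H = (Xb * (w * p * (1 - p))[:, None]).T @ Xb + 1e-8 * np.eye(Xb.shape[1])
            beta += np.linalg.solve(H, grad)          # Newton/IRLS update
        return beta
    ```

    Fitting this at each borrower location (or at a grid of focal points) yields the region-specific coefficient sets that the study compares against the single global model.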

  7. An electricity billing model | Adetona | Journal of Applied Science ...

    African Journals Online (AJOL)

    Linear regression analysis has been employed to develop a model for accurately predicting the electricity billing for commercial consumers in Ogun State (Nigeria) at a faster rate. The electricity billing model was implemented, executed and tested using embedded MATLAB function blocks. The correlations between the ...

  8. Applied model for the growth of the daytime mixed layer

    DEFF Research Database (Denmark)

    Batchvarova, E.; Gryning, Sven-Erik

    1991-01-01

    A slab model is proposed for the growth of the height of the mixed layer capped by stable air aloft. The model equations are closed by relating the consumption of energy (potential and kinetic) at the top of the mixed layer to the production of convective and mechanical turbulent kinetic energy with...
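
    The slab-model idea can be sketched by integrating the convective part of the growth equation, where the inversion rises as the surface heat flux erodes the stable layer above. All values and the closure constant below are illustrative, and the full model also includes a mechanical (friction-velocity) production term that is omitted here:

    ```python
    import math

    def mixed_layer_height(heat_flux, gamma, dt=60.0, A=0.2, h0=50.0):
        """Euler-integrate dh/dt = (1 + 2A) * F(t) / (gamma * h), where F is
        the kinematic surface heat flux (K m/s), gamma the potential-
        temperature lapse rate above the layer (K/m), A an entrainment
        closure constant, and h0 the initial mixed-layer height (m)."""
        h = h0
        for F in heat_flux:                 # one flux sample per time step dt
            if F > 0:                       # the layer only grows under heating
                h += dt * (1 + 2 * A) * F / (gamma * h)
        return h
    ```

    For constant flux this reproduces the familiar square-root-of-time growth, h(t) = sqrt(h0² + 2(1+2A)F t / γ), which serves as a quick sanity check on the integration.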

  9. Robust model identification applied to type 1 diabetes

    DEFF Research Database (Denmark)

    Finan, Daniel Aaron; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2010-01-01

    In many realistic applications, process noise is known to be neither white nor normally distributed. When identifying models in these cases, it may be more effective to minimize a different penalty function than the standard sum of squared errors (as in a least-squares identification method). This paper investigates model identification based on two different penalty functions: the 1-norm of the prediction errors and a Huber-type penalty function. For data characteristic of some realistic applications, model identification based on these latter two penalty functions is shown to result in more accurate estimates of parameters than the standard least-squares solution, and more accurate model predictions for test data. The identification techniques are demonstrated on a simple toy problem as well as a physiological model of type 1 diabetes.
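    The contrast the authors draw can be reproduced in miniature: under outlier-laden noise, a Huber-type penalty (minimized here by iteratively reweighted least squares) recovers a model gain more accurately than plain least squares. The model, noise levels, and tuning constant below are illustrative, not the paper's diabetes model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Identify the gain of a static model y = a*x from data whose noise is
# non-Gaussian: mostly small errors plus occasional large outliers.
a_true = 2.0
x = np.linspace(1, 10, 60)
y = a_true * x + rng.normal(0, 0.2, x.size)
y[::10] += 8.0                      # every 10th sample is a gross outlier

a_ls = (x @ y) / (x @ x)            # standard least-squares estimate

def huber_fit(x, y, delta=1.0, iters=50):
    """Minimize the Huber penalty by iteratively reweighted least squares."""
    a = a_ls
    for _ in range(iters):
        r = y - a * x
        # Huber weights: 1 inside the quadratic zone, delta/|r| outside it.
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        a = ((w * x) @ y) / ((w * x) @ x)
    return a

a_huber = huber_fit(x, y)
```

    The outliers pull the least-squares gain upward, while the Huber fit down-weights them, which is the paper's point in one dimension.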

  10. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    Full Text Available We extend the nonstandard finite difference method of solution to the study of pharmacokinetic–pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusion) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response of these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
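    For the linear elimination part of a PK model, dC/dt = -kC, the nonstandard trick is to replace the raw step size h by a denominator function phi(h) = (1 - exp(-k*h))/k, which keeps the scheme positive and, for this scalar equation, reproduces the continuous decay exactly at any step size. A sketch with invented parameter values (the paper treats fuller I.V. and oral transfer systems):

```python
import math

# One-compartment linear elimination dC/dt = -k*C after an I.V. bolus.
k, C0, h, steps = 0.5, 10.0, 1.0, 10   # illustrative values only

def standard_fd(C, h):
    return C * (1 - k * h)             # forward Euler: unstable for large k*h

def nsfd(C, h):
    phi = (1 - math.exp(-k * h)) / k   # nonstandard denominator function
    return C * (1 - k * phi)           # algebraically equals C * exp(-k*h)

c_euler, c_nsfd = C0, C0
for _ in range(steps):
    c_euler = standard_fd(c_euler, h)
    c_nsfd = nsfd(c_nsfd, h)

c_exact = C0 * math.exp(-k * h * steps)
```

    With h = 1 the Euler update halves the concentration each step instead of multiplying it by exp(-0.5), while the NSFD iterate tracks the exact solution to machine precision, which is the dynamic consistency the abstract refers to.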

  11. Risk Modelling for Passages in Approach Channel

    Directory of Open Access Journals (Sweden)

    Leszek Smolarek

    2013-01-01

    Full Text Available Methods of multivariate statistics, stochastic processes, and simulation are used to identify and assess risk measures. This paper presents the use of generalized linear models and Markov models to study risks to ships along an approach channel. These models, combined with simulation testing, are used to determine the time required for continuous monitoring of endangered objects or the period at which the level of risk should be verified.
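    A minimal version of the Markov part: with a hypothetical three-state chain (safe, degraded, accident; the transition probabilities below are invented, not the paper's), one can step the state distribution forward until the accident probability crosses a risk threshold, giving the verification period the abstract mentions.

```python
import numpy as np

# Hypothetical per-passage transition matrix; "accident" is absorbing.
P = np.array([[0.95, 0.04, 0.01],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0])    # ship starts in the safe state
risk_threshold = 0.10                # illustrative acceptable accident probability

steps_to_verify = 0
while state[2] < risk_threshold:     # step the distribution until risk exceeds it
    state = state @ P
    steps_to_verify += 1
```

    `steps_to_verify` is then the number of passages after which the risk level should be re-verified under this toy chain.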

  12. A Hybrid Approach to the Valuation of RFID/MEMS technology applied to ordnance inventory

    OpenAIRE

    Doerr, Kenneth H.; Gates, William R.; Mutty, John E.

    2006-01-01

    We report on an analysis of the costs and benefits of fielding Radio Frequency Identification / MicroElectroMechanical System (RFID/MEMS) technology for the management of ordnance inventory. A factorial model of these benefits is proposed. Our valuation approach combines a multi-criteria tool for the valuation of qualitative factors with a Monte Carlo simulation of anticipated financial factors. In a sample survey, qualitative factors are shown to account for over half of the anticipated bene...
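    The financial half of such a valuation is straightforward to sketch as a Monte Carlo net-present-value simulation. All cost and savings figures below are invented for illustration; the qualitative multi-criteria half of the method is not modeled here.

```python
import numpy as np

# Uncertain up-front cost and uncertain annual savings make NPV a distribution.
rng = np.random.default_rng(3)
n_draws, years, discount = 10_000, 5, 0.07

cost = rng.normal(1.0e6, 1.0e5, n_draws)                 # up-front cost ($), invented
savings = rng.normal(3.0e5, 5.0e4, (n_draws, years))     # annual savings ($), invented
disc = (1 + discount) ** -np.arange(1, years + 1)        # discount factors per year
npv = (savings * disc).sum(axis=1) - cost                # one NPV per draw

expected_npv = npv.mean()
prob_positive = (npv > 0).mean()                         # chance the fielding pays off
```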

  13. Applying Within-Family Differences Approaches to Enhance Understanding of the Complexity of Intergenerational Relations.

    Science.gov (United States)

    Suitor, J Jill; Gilligan, Megan; Pillemer, Karl; Fingerman, Karen L; Kim, Kyungmin; Silverstein, Merril; Bengtson, Vern L

    2017-12-15

    The role of family relationships in the lives of older adults has received substantial attention in recent decades. Scholars have increasingly looked beyond simple models of family relations to approaches that recognize the complex and sometimes contradictory nature of these ties. One of the most exciting conceptual and methodological developments is the application of within-family differences approaches. In this paper, we focus on the ways in which such within-family approaches can extend the understanding of patterns and consequences of intergenerational ties in adulthood. Following a review of the conceptual underpinnings of within-family differences approaches, we provide empirical illustrations of these approaches from three projects conducted in the United States: the Family Exchanges Study (FES), the Longitudinal Study of Generations (LSOG), and the Within-Family Differences Study (WFDS). Analyses from the FES, LSOG, and WFDS reveal differences in the consequences of patterns of intergenerational relations found when using within-family compared to between-family approaches. In particular, these analyses demonstrate considerable variation within families that shapes patterns and consequences of parent-adult child ties that is masked when such variations are not taken into account. Within-family differences approaches have been shown to shed new light on intergenerational relations. Despite the value of within-family designs, their use may be limited by the higher investment of finances and time required to implement such studies. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Their developments, however, are largely due to experiment-based trial and error approaches and, while they do not require validation, they can be time consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply a model-based synthesis method to systematically generate and evaluate alternatives in the first stage and an experiment-model based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based...

  15. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Full Text Available Presently, the popularity of cloud computing is increasing day by day. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. From the findings of the research, it was observed that security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.

  16. The Intensive Dysphagia Rehabilitation Approach Applied to Patients With Neurogenic Dysphagia: A Case Series Design Study.

    Science.gov (United States)

    Malandraki, Georgia A; Rajappa, Akila; Kantarcigil, Cagla; Wagner, Elise; Ivey, Chandra; Youse, Kathleen

    2016-04-01

    To examine the effects of the Intensive Dysphagia Rehabilitation approach on physiological and functional swallowing outcomes in adults with neurogenic dysphagia. Intervention study; before-after trial with 4-week follow-up through an online survey. Outpatient university clinics. A consecutive sample of subjects (N=10) recruited from outpatient university clinics. All subjects were diagnosed with adult-onset neurologic injury or disease. Dysphagia diagnosis was confirmed through clinical and endoscopic swallowing evaluations. No subjects withdrew from the study. Participants completed the 4-week Intensive Dysphagia Rehabilitation protocol, including 2 oropharyngeal exercise regimens, a targeted swallowing routine using salient stimuli, and caregiver participation. Treatment included hourly sessions twice per week and home practice for approximately 45 min/d. Outcome measures assessed pre- and posttreatment included airway safety using an 8-point Penetration Aspiration Scale, lingual isometric pressures, self-reported swallowing-related quality of life (QOL), and level of oral intake. Also, patients were monitored for adverse dysphagia-related effects. QOL and adverse effects were also assessed at the 4-week follow-up (online survey). The Intensive Dysphagia Rehabilitation approach was effective in improving maximum and mean Penetration Aspiration Scale scores. The approach was safe and improved physiological and some functional swallowing outcomes in our sample; however, further investigation is needed before it can be widely applied. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  17. Trailing edge noise model applied to wind turbine airfoils

    Energy Technology Data Exchange (ETDEWEB)

    Bertagnolio, F.

    2008-01-15

    The aim of this work is firstly to provide a quick introduction to the theory of noise generation mechanisms relevant to wind turbine technology, with focus on trailing edge noise. Secondly, the so-called TNO trailing edge noise model developed by Parchen [1] is described in more detail. The model is tested and validated by comparison with other results from the literature. Finally, this model is used in the optimization process of two reference airfoils in order to reduce their noise signature: the RISOE-B1-18 and the S809 airfoils. (au)

  18. The Cheshire Cat principle applied to hybrid bag models

    International Nuclear Information System (INIS)

    Nielsen, H.B.; Wirzba, A.

    1987-05-01

    It is argued here for the Cheshire Cat point of view, according to which the bag (itself) has only notational, but no physical, significance. It is explained in a 1+1 dimensional exact Cheshire Cat model how a fermion can escape from the bag by means of an anomaly. We also suggest that suitably constructed hybrid bag models may be used to fix such parameters of effective Lagrangians as can otherwise be obtained only from experiments. This idea is illustrated in a calculation of the mass of the pseudoscalar η' meson in 1+1 dimensions. Thus there is hope of finding a construction principle for a phenomenologically sensible model. (orig.)

  19. Predicting pharmacy students' intention to apply for a residency: A systematic theory of planned behavior approach.

    Science.gov (United States)

    Hickerson, Stephen C; Fleming, Marc L; Sawant, Ruta V; Ordonez, Nancy D; Sansgiry, Sujit S

    The current literature has identified many motivating factors and barriers influencing pharmacy students' decision to apply for residency training. Despite a growing need for residency-trained pharmacists to advance the profession, it is not clear why only about one in four pharmacy students decides to pursue a residency, or which of these factors have the most influence on student decision-making. This study examines the factors associated with pharmacy students' intention to apply for a postgraduate residency using the theory of planned behavior (TPB) framework. Second- and third-year students from four Texas pharmacy schools were surveyed using an online questionnaire based on the TPB. Descriptive statistics and multiple linear regression analyses were utilized to assess the study objectives. A total of 251 completed responses were received. Attitude, subjective norms (SN), and perceived behavioral control (PBC) were significant predictors of intention to apply for a pharmacy residency (β = 0.32, 0.58, and 0.36, respectively). Among specific beliefs, the social influence of faculty members (β = 0.10, p = 0.003) and family (β = 0.08, p = 0.02) significantly predicted intention; believing that financial obligations (β = 0.20, p = 0.006) and fear of the competition and/or not matching (β = 0.24) make it more difficult to apply for a residency were also significant. The TPB model was useful in predicting pharmacy students' intention to apply for a residency, and all TPB constructs were significant predictors. Therefore, interventions that target students' attitude, SN, and PBC may be valuable to increase their intention, especially the specific beliefs identified to significantly predict intention. Future research into methods by which these motivating factors can be encouraged and perceived barriers addressed by pharmacy stakeholders will increase interest and participation in residency training. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Applying the social relations model to self and peer evaluations

    NARCIS (Netherlands)

    Greguras, G.J.; Robie, C.; Born, M.Ph.

    2001-01-01

    Peer evaluations of performance increasingly are being used to make organizational decisions and to provide individuals with performance related feedback. Using Kenny's social relations model (SRM), data from 14 teams of undergraduate students who completed performance ratings of themselves and

  1. Pressure Sensitive Paint Applied to Flexible Models Project

    Science.gov (United States)

    Schairer, Edward T.; Kushner, Laura Kathryn

    2014-01-01

    One gap in current pressure-measurement technology is a high-spatial-resolution method for accurately measuring pressures on spatially and temporally varying wind-tunnel models such as Inflatable Aerodynamic Decelerators (IADs), parachutes, and sails. Conventional pressure taps only provide sparse measurements at discrete points and are difficult to integrate with the model structure without altering structural properties. Pressure Sensitive Paint (PSP) provides pressure measurements with high spatial resolution, but its use has been limited to rigid or semi-rigid models. Extending the use of PSP from rigid surfaces to flexible surfaces would allow direct, high-spatial-resolution measurements of the unsteady surface pressure distribution. Once developed, this new capability will be combined with existing stereo photogrammetry methods to simultaneously measure the shape of a dynamically deforming model in a wind tunnel. Presented here are the results and methodology for using PSP on flexible surfaces.

  2. Applying Functional Modeling for Accident Management of Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper investigates applications of functional modeling for accident management in complex industrial plants, with special reference to nuclear power production. Main applications for information sharing among decision makers and decision support are identified. An overview of Multilevel Flow...

  4. Applying Time Series Analysis Model to Temperature Data in Greenhouses

    Directory of Open Access Journals (Sweden)

    Abdelhafid Hasni

    2011-03-01

    Full Text Available The objective of the research is to find an appropriate Seasonal Auto-Regressive Integrated Moving Average (SARIMA) model for fitting the inside air temperature (Tin) of a naturally ventilated greenhouse under Mediterranean conditions by considering the minimum of the Akaike Information Criterion (AIC). The results of fitting were as follows: the best SARIMA model for fitting the air temperature of the greenhouse is SARIMA (1,0,0)(1,0,2)24.
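    Full SARIMA fitting is best left to a library (e.g. statsmodels' SARIMAX class), but the AIC-minimizing order search at the heart of the study can be illustrated with plain autoregressive fits on a synthetic series. The model orders, coefficients, and the AIC expression used here are illustrative stand-ins for the paper's SARIMA grid search.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic temperature-like series: the true process is AR(2).
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def ar_aic(y, p):
    """Least-squares AR(p) fit; AIC = n*log(RSS/n) + 2*(p+1)."""
    Y = y[p:]
    X = np.column_stack([y[p - i:len(y) - i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = np.sum((Y - X @ coef) ** 2)
    return len(Y) * np.log(rss / len(Y)) + 2 * (p + 1)

# Pick the order with the smallest AIC, as the paper does over SARIMA orders.
best_p = min(range(1, 6), key=lambda p: ar_aic(y, p))
```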

  5. Applying OGC Standards to Develop a Land Surveying Measurement Model

    Directory of Open Access Journals (Sweden)

    Ioannis Sofos

    2017-02-01

    Full Text Available The Open Geospatial Consortium (OGC) is committed to developing quality open standards for the global geospatial community, thus enhancing the interoperability of geographic information. In the domain of sensor networks, the Sensor Web Enablement (SWE) initiative has been developed to define the necessary context by introducing modeling standards, like ‘Observation & Measurement’ (O&M), and services to provide interaction, like the ‘Sensor Observation Service’ (SOS). Land surveying measurements, on the other hand, comprise a domain where observation information structures and services have not been aligned to the OGC observation model. In this paper, an OGC-compatible model for land surveying observations, aligned to the ‘Observation and Measurements’ standard, has been developed and discussed. Furthermore, a case study instantiates the above model, and an SOS implementation has been developed based on the 52°North SOS platform. Finally, a visualization schema is used to produce ‘Web Map Service’ (WMS) observation maps. Even though there are elements that differentiate this work from classic ‘O&M’ modeling cases, the proposed model and flows are developed in order to provide the benefits of standardizing land surveying measurement data (cost reduction through reusability, a higher precision level, data fusion of multiple sources, access to a spatiotemporal repository of raw observations, and development of Measurement-Based GIS (MBGIS)) to the geoinformation community.

  6. Numerical modelling approach for mine backfill

    Indian Academy of Sciences (India)

    Muhammad Zaka Emad

    2017-07-24

    Jul 24, 2017 ... pulse is applied as a stress history on the CRF stope. Blast wave data obtained from the on-site monitoring are very complex. They require processing before interpretation and use in numerical models. Generally, mining companies hire geophysics experts for interpretation of such data. The blast wave ...

  7. APPLIED DIAGNOSTIC MODULE FOR DETERMINING COGNITIVE MODEL PARAMETERS OF SUBJECTS OF EDUCATION IN AN ADAPTIVE ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Anatoly N. Vetrov

    2017-01-01

    Full Text Available Abstract. Objectives: To increase the functional efficiency of information and educational environments created by automated training systems by realising individually oriented formation of knowledge, using adaptive generation of heterogeneous educational influences based on an innovative block of parametric cognitive models and a set of programs to support the automation of research tasks. Method: System analysis and modeling of the information and educational environment. In the process of automating the diagnosis of the individual personality characteristics of the subject of education, each method of investigation determines the input: localisation of the research method, name of the block of questions (subtest), textual explanatory content, formulation of the question and answer variants, nominal value of the time interval for displaying the formulation of the question, as well as the graphical accompaniment of a specific question and the answers thereto. Results: The applied diagnostic module acts as a component of the automated learning system with adaptation properties on the basis of the innovative block of parametric cognitive models. The training system implements the generation of an ordered sequence of informational and educational influences that reflect the content of the subject of study. Conclusion: The applied diagnostic module is designed to automate the study of physiological, psychological and linguistic parameters of the cognitive model of the subject of education, to provide a systematic analysis of the information and educational environment, and to realise adaptive generation of educational influences by using training automation approaches that take the individual characteristics of trainees into account.

  8. Applying artificial vision models to human scene understanding.

    Science.gov (United States)

    Aminoff, Elissa M; Toneva, Mariya; Shrivastava, Abhinav; Chen, Xinlei; Misra, Ishan; Gupta, Abhinav; Tarr, Michael J

    2015-01-01

    How do we understand the complex patterns of neural responses that underlie scene understanding? Studies of the network of brain regions held to be scene-selective (the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area (TOS)) have typically focused on single visual dimensions (e.g., size), rather than on the high-dimensional feature space in which scenes are likely to be neurally represented. Here we leverage well-specified artificial vision systems to explicate a more complex understanding of how scenes are encoded in this functional network. We correlated similarity matrices within three different scene-spaces arising from: (1) BOLD activity in scene-selective brain regions; (2) behaviorally-measured judgments of visually-perceived scene similarity; and (3) several different computer vision models. These correlations revealed: (1) models that relied on mid- and high-level scene attributes showed the highest correlations with the patterns of neural activity within the scene-selective network; (2) NEIL and SUN, the models that best accounted for the patterns obtained from PPA and TOS, were different from the GIST model that best accounted for the pattern obtained from RSC; (3) the best-performing models outperformed behaviorally-measured judgments of scene similarity in accounting for neural data. One computer vision method, NEIL ("Never-Ending-Image-Learner"), which incorporates visual features learned as statistical regularities across web-scale numbers of scenes, showed significant correlations with neural activity in all three scene-selective regions and was one of the two models best able to account for variance in the PPA and TOS. We suggest that these results are a promising first step in explicating more fine-grained models of neural scene understanding, including developing a clearer picture of the division of labor among the components of the functional scene-selective brain network.
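    The matrix-correlation procedure described here amounts to representational similarity analysis: vectorize the upper triangle of each scene-by-scene similarity matrix and correlate across sources. A sketch with random stand-ins for the neural and model similarity spaces (no real BOLD or model data are used):

```python
import numpy as np

rng = np.random.default_rng(8)

# Random symmetric "neural" similarity matrix and a correlated "model" matrix.
n_scenes = 20
base = rng.normal(size=(n_scenes, n_scenes))
neural = (base + base.T) / 2
model = neural + rng.normal(0, 0.3, neural.shape)   # model space tracks neural space
model = (model + model.T) / 2

# Correlate the vectorized upper triangles (diagonal excluded).
iu = np.triu_indices(n_scenes, k=1)
rsa_corr = np.corrcoef(neural[iu], model[iu])[0, 1]
```

    In the study, `rsa_corr` computed per region and per model is what ranks NEIL, SUN, and GIST against the PPA, TOS, and RSC patterns.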

  9. A Genetic Algorithm Approach for Modeling a Grounding Electrode

    Science.gov (United States)

    Mishra, Arbind Kumar; Nagaoka, Naoto; Ametani, Akihiro

    This paper proposes a genetic-algorithm-based approach to determine a grounding electrode model circuit composed of resistances, inductances and capacitances. The proposed methodology determines the model circuit parameters, based on a general ladder circuit, directly from a measured result. Transient voltages of some electrodes were measured when applying a step-like current. An EMTP simulation of the transient voltage on the grounding electrode has been carried out by adopting the proposed model circuits. The accuracy of the proposed method has been confirmed to be high in comparison with the measured transient voltage.
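    A toy version of the idea: use a genetic algorithm to fit the parameters of a lumped circuit so that its step response matches a "measured" transient. Here the circuit is a single R-C stage with invented values; the paper fits a full R-L-C ladder to field measurements and verifies it in EMTP.

```python
import numpy as np

rng = np.random.default_rng(5)

# "Measured" step response of a hypothetical single R-C stage:
# v(t) = R * I0 * (1 - exp(-t/tau)), plus measurement noise.
t = np.linspace(0, 5e-6, 100)
R_true, tau_true, I0 = 20.0, 1e-6, 1.0
v_meas = R_true * I0 * (1 - np.exp(-t / tau_true)) + rng.normal(0, 0.05, t.size)

def sse(params):
    R, tau = params
    return np.sum((R * I0 * (1 - np.exp(-t / tau)) - v_meas) ** 2)

# Simple GA: truncation selection, blend crossover, multiplicative mutation.
pop = np.column_stack([rng.uniform(1, 100, 60), rng.uniform(1e-7, 1e-5, 60)])
for _ in range(80):
    fitness = np.array([sse(p) for p in pop])
    parents = pop[np.argsort(fitness)][:20]          # keep the best third
    mates = parents[rng.integers(0, 20, 60)]
    partners = parents[rng.integers(0, 20, 60)]
    alpha = rng.random((60, 1))
    pop = alpha * mates + (1 - alpha) * partners     # blend crossover
    pop *= 1 + rng.normal(0, 0.02, pop.shape)        # mutation keeps signs positive

R_fit, tau_fit = pop[np.argmin([sse(p) for p in pop])]
```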

  10. Applying a Systems Approach to Monitoring and Assessing Climate Change Mitigation Potential in Mexico's Forest Sector

    Science.gov (United States)

    Olguin-Alvarez, M. I.; Wayson, C.; Fellows, M.; Birdsey, R.; Smyth, C.; Magnan, M.; Dugan, A.; Mascorro, V.; Alanís, A.; Serrano, E.; Kurz, W. A.

    2017-12-01

    Since 2012, the Mexican government through its National Forestry Commission, with support from the Commission for Environmental Cooperation, the Forest Services of Canada and the USA, the SilvaCarbon Program and research institutes in Mexico, has made important progress towards the use of carbon dynamics models (the "gain-loss" approach) for greenhouse gas (GHG) emissions monitoring and projections into the future. Here we assess the biophysical mitigation potential of policy alternatives identified by the Mexican government (e.g. net zero deforestation rate, sustainable forest management) based on a systems approach that models carbon dynamics in forest ecosystems, harvested wood products and substitution benefits in two contrasting states of Mexico. We provide key messages and results derived from the use of the Carbon Budget Model of the Canadian Forest Sector and a harvested wood products model, parameterized with input data from Mexico's National Forest Monitoring System (e.g. forest inventories, remote sensing, disturbance data). The ultimate goal of this tri-national effort is to develop data and tools for carbon assessment in strategic landscapes in North America, emphasizing the need to include multiple sectors and types of collaborators (scientific and policy-maker communities) to design more comprehensive portfolios for climate change mitigation in accordance with the Paris Agreement of the United Nations Framework Convention on Climate Change (e.g. Mid-Century Strategy, NDC goals).

  11. GIS-Based Population Model Applied to Nevada Transportation Routes

    International Nuclear Information System (INIS)

    Mills, G.S.; Neuhauser, K.S.

    1999-01-01

    Recently, a model based on geographic information system (GIS) processing of US Census Block data has made high-resolution population analysis for transportation risk analysis technically and economically feasible. Population density bordering each kilometer of a route may be tabulated, with specific route sections falling into each of three categories (Rural, Suburban or Urban) identified for separate risk analysis. In addition to the improvement in resolution of Urban areas along a route, the model provides a statistically-based correction to population densities in Rural and Suburban areas where Census Block dimensions may greatly exceed the 800-meter scale of interest. A semi-automated application of the GIS model to a subset of routes in Nevada (related to the Yucca Mountain project) is presented, and the results are compared to previous models, including a model based on published Census and other data. These comparisons demonstrate that meaningful improvement in the accuracy and specificity of transportation risk analyses is dependent on correspondingly accurate and geographically-specific population density data.
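    The per-kilometer classification step reduces to binning a population density into the three categories used for separate risk analysis. The cutoffs below (139 and 3326 persons per square kilometer) are placeholders of the kind used in US transportation risk studies, not necessarily the values of this model.

```python
import numpy as np

# Illustrative rural/suburban and suburban/urban density cutoffs (persons/km^2).
RURAL_MAX, SUBURBAN_MAX = 139.0, 3326.0

def classify(density):
    """Bin one route-kilometer's bordering population density."""
    if density <= RURAL_MAX:
        return "Rural"
    if density <= SUBURBAN_MAX:
        return "Suburban"
    return "Urban"

# Hypothetical densities for four kilometers of route.
densities = np.array([5.0, 250.0, 4800.0, 120.0])
categories = [classify(d) for d in densities]
km_per_category = {c: categories.count(c) for c in ("Rural", "Suburban", "Urban")}
```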

  12. Hidden multidimensional social structure modeling applied to biased social perception

    Science.gov (United States)

    Maletić, Slobodan; Zhao, Yi

    2018-02-01

    Intricacies of the structure of social relations are realized by representing a collection of overlapping opinions as a simplicial complex, thus building latent multidimensional structures through which agents virtually move as they exchange opinions. The influence of the opinion space structure on the distribution of opinions is demonstrated by modeling consensus phenomena when the opinion exchange between individuals may be affected by the false consensus effect. The results indicate that in the cases with and without bias, the road toward consensus is influenced by the structure of the multidimensional space of opinions, and in the biased case, complete consensus is achieved. The applications of the proposed modeling framework can easily be generalized, as they transcend opinion formation modeling.

  13. A new inverse regression model applied to radiation biodosimetry

    Science.gov (United States)

    Higueras, Manuel; Puig, Pedro; Ainsbury, Elizabeth A.; Rothkamm, Kai

    2015-01-01

    Biological dosimetry based on chromosome aberration scoring in peripheral blood lymphocytes enables timely assessment of the ionizing radiation dose absorbed by an individual. Here, new Bayesian-type count data inverse regression methods are introduced for situations where responses are Poisson or two-parameter compound Poisson distributed. Our Poisson models are calculated in a closed form, by means of Hermite and negative binomial (NB) distributions. For compound Poisson responses, complete and simplified models are provided. The simplified models are also expressible in a closed form and involve the use of compound Hermite and compound NB distributions. Three examples of applications are given that demonstrate the usefulness of these methodologies in cytogenetic radiation biodosimetry and in radiotherapy. We provide R and SAS codes which reproduce these examples. PMID:25663804
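    The inverse-regression idea can be sketched without the closed-form machinery: with a quadratic Poisson calibration curve for aberration yield, the posterior over absorbed dose given an observed count follows from Bayes' rule on a grid. The curve coefficients and counts below are invented; the paper instead derives closed-form results via Hermite and negative binomial distributions.

```python
import numpy as np

# Hypothetical calibration curve: yield per cell lambda(D) = c + alpha*D + beta*D^2.
c, alpha, beta = 0.001, 0.05, 0.06       # per Gy and per Gy^2, invented
cells, observed = 500, 120               # cells scored, aberrations counted

dose = np.linspace(0, 6, 601)            # dose grid in Gy
lam = cells * (c + alpha * dose + beta * dose ** 2)   # expected total count
log_post = observed * np.log(lam) - lam  # Poisson log-likelihood, flat prior
post = np.exp(log_post - log_post.max())
post /= post.sum()                       # normalized posterior over the grid

dose_map = dose[np.argmax(post)]         # maximum a posteriori dose estimate
```

    Credible intervals then come directly from cumulative sums of `post`, which is the practical payoff of the inverse-regression formulation.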

  14. An approach for quantifying small effects in regression models.

    Science.gov (United States)

    Bedrick, Edward J; Hund, Lauren

    2018-04-01

    We develop a novel approach for quantifying small effects in regression models. Our method is based on variation in the mean function, in contrast to methods that focus on regression coefficients. Our idea applies in diverse settings such as testing for a negligible trend and quantifying differences in regression functions across strata. Straightforward Bayesian methods are proposed for inference. Four examples are used to illustrate the ideas.

  15. Agent-based modelling in applied ethology: an exploratory case study of behavioural dynamics in tail biting in pigs

    NARCIS (Netherlands)

    Boumans, I.J.M.M.; Hofstede, G.J.; Bolhuis, J.E.; Boer, de I.J.M.; Bokkers, E.A.M.

    2016-01-01

    Understanding behavioural dynamics in pigs is important to assess pig welfare in current intensive pig production systems. Agent-based modelling (ABM) is an approach to gain insight into behavioural dynamics in pigs, but its use in applied ethology and animal welfare science has been limited so far.

  16. Software sensors based on the grey-box modelling approach

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.; Strube, Rune

    1996-01-01

    In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box models may contain terms which can be estimated on-line by use of the models and measurements. In this paper, it is demonstrated that many storage basins in sewer systems can be used as an on-line flow measurement provided that the basin is monitored on-line with a level transmitter and that a grey...

  17. Extended Hildebrand solubility approach applied to some structurally related sulfonamides in ethanol + water mixtures

    Directory of Open Access Journals (Sweden)

    Daniel R. Delgado

    2016-01-01

    Full Text Available Extended Hildebrand Solubility Approach (EHSA) was applied to evaluate the solubility of sulfadiazine, sulfamerazine, and sulfamethazine in some ethanol + water mixtures at 298.15 K. Reported experimental equilibrium solubilities and some fusion properties of these drugs were used for the calculations. In particular, a good predictive character of EHSA (with mean deviations lower than 3.0%) has been found by using regular polynomials of order four correlating the interaction parameter W with the Hildebrand solubility parameter of the drug-free solvent mixtures. However, the predictive character of EHSA was the same as that obtained by direct correlation of drug solubilities with the same polarity descriptor of the cosolvent mixtures.
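    The regression step is just a fourth-order polynomial fit of the interaction parameter W against the Hildebrand solubility parameter of the solvent mixture, followed by a mean-deviation check of the back-calculated values. All numbers below are invented; the paper's W values come from measured sulfonamide solubilities.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical solvent polarity range (MPa^0.5) and invented W "data".
delta1 = np.linspace(26.5, 47.8, 10)
W_obs = 450 + 12 * delta1 - 0.08 * delta1 ** 2 + rng.normal(0, 0.5, 10)

# Order-four polynomial correlating W with the solvent solubility parameter.
coeffs = np.polyfit(delta1, W_obs, 4)
W_pred = np.polyval(coeffs, delta1)

# Mean percentage deviation, the kind of metric behind the paper's "<3.0%" claim.
mean_dev = np.mean(np.abs(W_pred - W_obs) / np.abs(W_obs)) * 100
```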

  18. Exploiting neurovascular coupling: a Bayesian sequential Monte Carlo approach applied to simulated EEG fNIRS data

    Science.gov (United States)

    Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria

    2017-08-01

Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS provides measurements of two hemodynamic variables: oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies, and, by taking advantage of a physical model of the coupling between electrical and hemodynamic response, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.
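A particle filter of the kind named above can be sketched generically: predict each particle through the state model, weight by the observation likelihood, resample. The toy scalar state below stands in for the far richer coupled EEG/fNIRS state-space model; all dynamics and noise values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(obs, n_particles=2000, q=0.1, r=0.5):
    """Bootstrap particle filter (sequential importance resampling) for a toy
    scalar state x_t = 0.95*x_{t-1} + process noise, observed as y_t = x_t + noise.
    A generic sketch, not the paper's neurovascular state-space model."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        particles = 0.95 * particles + rng.normal(0.0, q, n_particles)  # predict
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)                   # Gaussian likelihood
        w /= w.sum()
        estimates.append(np.sum(w * particles))                         # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)                 # resample
        particles = particles[idx]
    return np.array(estimates)

# Synthetic run: track a slowly decaying state from noisy observations.
x = 2.0 * 0.95 ** np.arange(50)
obs = x + rng.normal(0.0, 0.5, 50)
x_hat = particle_filter(obs)
```

The filtered estimate should track the true trajectory with error well below the raw observation noise, which is the same benefit the paper reports when fusing the two modalities.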

  19. Applying the knowledge creation model to the management of ...

    African Journals Online (AJOL)

    In present-day society, the need to manage indigenous knowledge is widely recognised. However, there is a debate in progress on whether or not indigenous knowledge can be easily managed. The purpose of this paper is to examine the possibility of using knowledge management models like knowledge creation theory ...

  20. Applying the social relations model to self and peer evaluations

    NARCIS (Netherlands)

    G.J. Greguras; C. Robie; M.Ph. Born (Marise)

    2001-01-01

    textabstractPeer evaluations of performance increasingly are being used to make organizational decisions and to provide individuals with performance related feedback. Using Kenny’s social relations model (SRM), data from 14 teams of undergraduate students who completed performance ratings of

  1. Applying the elastic model for various nucleus-nucleus fusion

    International Nuclear Information System (INIS)

    HASSAN, G.S.; RAGAB, H.S.; SEDDEEK, M.K.

    2000-01-01

The Elastic Model with two free parameters m and d given by Scalia has been used over wider energy regions to fit the available experimental data for potential barriers and cross sections. In order to generalize Scalia's formula in both the sub- and above-barrier regions, we calculated m and d for pairs other than those given by Scalia and compared the calculated cross sections with the experimental data. This generalizes the Elastic Model description of the fusion process. On the other hand, Scalia's range of interacting systems was 24 ≤ A ≤ 194, where A is the compound nucleus mass number. Our extension of the model includes an example of pairs with A larger than his upper limit, aiming to make it a general formula for any type of reactants: light, intermediate or heavy systems. A significant point is the comparison of Elastic Model calculations with the well-known methods for studying complete fusion and compound nucleus formation, namely with the results of using the proximity potential with either sharp or smooth cut-off approximations.

  2. Applying an Employee-Motivation Model to Prevent Student Plagiarism.

    Science.gov (United States)

    Malouff, John M.; Sims, Randi L.

    1996-01-01

    A model based on Vroom's expectancy theory of employee motivation posits that instructors can prevent plagiarism by ensuring that students understand the rules of ethical writing, expect assignments to be manageable and have personal benefits, and expect plagiarism to be difficult and have important personal costs. (SK)

  3. Dynamics Model Applied to Pricing Options with Uncertain Volatility

    Directory of Open Access Journals (Sweden)

    Lorella Fatone

    2012-01-01

model is proposed. The data used to test the calibration problem included observations of asset prices over a finite set of (known) equispaced discrete time values. Statistical tests were used to estimate the statistical significance of the two parameters of the Black-Scholes model: the volatility and the drift. The effects of these estimates on the option pricing problem were investigated. In particular, the pricing of an option with uncertain volatility in the Black-Scholes framework was revisited, and a statistical significance was associated with the price intervals determined using the Black-Scholes-Barenblatt equations. Numerical experiments involving synthetic and real data were presented. The real data considered were the daily closing values of the S&P500 index and the associated European call and put option prices in the year 2005. The method proposed here for calibrating the Black-Scholes dynamics model could be extended to other science and engineering models that may be expressed in terms of stochastic dynamical systems.
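The calibration step described above amounts to estimating the Black-Scholes drift and volatility from equispaced log returns and attaching a significance statistic to each. A minimal sketch on synthetic geometric Brownian motion (the exact tests in the paper are not reproduced):

```python
import numpy as np

def bs_param_stats(prices, dt=1/252):
    """Estimate Black-Scholes volatility and drift from log returns, plus a
    t-statistic for the hypothesis of zero mean log return (illustrative)."""
    r = np.diff(np.log(prices))
    sigma = r.std(ddof=1) / np.sqrt(dt)          # annualized volatility
    mu = r.mean() / dt + 0.5 * sigma**2          # annualized drift
    t_drift = r.mean() / (r.std(ddof=1) / np.sqrt(len(r)))
    return mu, sigma, t_drift

# Synthetic geometric Brownian motion with known mu = 0.05, sigma = 0.20.
rng = np.random.default_rng(1)
n, dt, mu_true, sigma_true = 100_000, 1/252, 0.05, 0.20
steps = (mu_true - 0.5*sigma_true**2)*dt \
        + sigma_true*np.sqrt(dt)*rng.standard_normal(n)
prices = np.concatenate(([100.0], 100.0*np.exp(np.cumsum(steps))))
mu_hat, sigma_hat, t_drift = bs_param_stats(prices, dt)
```

The volatility is estimated far more precisely than the drift, which is exactly why the paper attaches statistical significance to the drift and propagates the resulting uncertainty into the option price intervals.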

  4. Existing Soil Carbon Models Do Not Apply to Forested Wetlands

    Science.gov (United States)

    Carl C. Trettin; B. Song; M.F. Jurgensen; C. Li

    2001-01-01

When assessing the biological, geological, and chemical cycling of nutrients and elements — or when assessing carbon dynamics with respect to global change — modeling and simulation are necessary. Although wetlands occupy a relatively small proportion of Earth's terrestrial surface (

  5. Applying Hybrid Heuristic Approach to Identify Contaminant Source Information in Transient Groundwater Flow Systems

    Directory of Open Access Journals (Sweden)

    Hund-Der Yeh

    2014-01-01

Simultaneous identification of the source location and release history in aquifers is complicated and time-consuming when the release of a groundwater contaminant source varies in time. This paper presents an approach called SATSO-GWT to solve complicated source release problems involving three unknown location coordinates and several irregular release periods and concentrations. SATSO-GWT combines an ordinal optimization algorithm (OOA), a roulette wheel approach, and a source identification algorithm called SATS-GWT. SATS-GWT was developed based on simulated annealing, tabu search, and the three-dimensional groundwater flow and solute transport model MD2K-GWT. The OOA and the roulette wheel method are utilized mainly to reduce the size of the feasible solution domain and accelerate the identification of the source information. A hypothetical site with one contaminant source location and two release periods is designed to assess the applicability of the present approach. The results indicate that the performance of SATSO-GWT is superior to that of SATS-GWT. In addition, the present approach works very effectively in dealing with cases that have different initial guesses of the source location and measurement errors at the monitoring points, as well as problems with large suspicious areas and several source release periods and concentrations.
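The simulated-annealing ingredient named above can be sketched on a toy version of the problem: search a 2-D domain for the source location that minimizes a misfit function. Here the misfit is a simple squared distance standing in for the transport-model/observation mismatch; the true location and all tuning values are invented.

```python
import math
import random

random.seed(42)

def misfit(loc, true_loc=(35.0, 60.0)):
    """Toy objective: squared distance from the (hypothetical) true source."""
    return (loc[0] - true_loc[0])**2 + (loc[1] - true_loc[1])**2

def simulated_annealing(bounds, n_iter=4000, T0=100.0):
    """Basic simulated annealing with Gaussian proposals and geometric cooling."""
    x = [random.uniform(*b) for b in bounds]
    f = misfit(x)
    best, fbest = list(x), f
    for k in range(n_iter):
        T = T0 * 0.995**k                         # geometric cooling schedule
        cand = [min(max(xi + random.gauss(0.0, 1.0), b[0]), b[1])
                for xi, b in zip(x, bounds)]
        fc = misfit(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if fc < f or random.random() < math.exp(-(fc - f) / T):
            x, f = cand, fc
            if f < fbest:
                best, fbest = list(x), f
    return best, fbest

loc, err = simulated_annealing([(0.0, 100.0), (0.0, 100.0)])
```

SATSO-GWT layers ordinal optimization and roulette-wheel selection on top of this basic scheme precisely to shrink the search domain before the expensive annealing/tabu phase.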

  6. A generic approach to haptic modeling of textile artifacts

    Science.gov (United States)

    Shidanshidi, H.; Naghdy, F.; Naghdy, G.; Wood Conroy, D.

    2009-08-01

Haptic modeling of textiles has attracted significant interest over the last decade. In spite of extensive research, no generic system has been proposed. Previous work mainly assumes that textile has a 2D planar structure and requires time-consuming measurement of textile properties for construction of the mechanical model. A novel approach for haptic modeling of textile is proposed to overcome these shortcomings. The method is generic, assumes a 3D structure for the textile, and deploys computational intelligence to estimate the mechanical properties of the textile. The approach is designed primarily for display of textile artifacts in museums. The haptic model is constructed by superimposing the mechanical model of the textile over its geometrical model. Digital image processing is applied to a still image of the textile to identify its pattern and structure through a fuzzy rule-based algorithm. The 3D geometric model of the artifact is automatically generated in VRML based on the identified pattern and structure obtained from the textile image. Selected mechanical properties of the textile are estimated by an artificial neural network, deploying the textile's geometric characteristics and yarn properties as inputs. The estimated mechanical properties are then deployed in the construction of the textile mechanical model. The proposed system is introduced and the developed algorithms are described. Validation of the method indicates the feasibility of the approach and its superiority to other haptic modeling algorithms.

  7. A systematic approach for fine-tuning of fuzzy controllers applied to WWTPs

    DEFF Research Database (Denmark)

    Ruano, M.V.; Ribes, J.; Sin, Gürkan

    2010-01-01

A systematic approach for fine-tuning fuzzy controllers has been developed and evaluated for an aeration control system implemented in a WWTP. The challenge with the application of fuzzy controllers to WWTPs is simply that they contain many parameters, which need to be adjusted for different WWTP applications. To this end, a methodology based on model simulations is used that employs three statistical methods: (i) a Monte-Carlo procedure, to find proper initial conditions; (ii) identifiability analysis, to find an identifiable parameter subset of the fuzzy controller; and (iii) a minimization algorithm, to fine-tune the identifiable parameter subset of the controller. Indeed, the initial location found by the Monte-Carlo simulations provided better results than a trial-and-error approach when identifying parameters of the fuzzy controller. The identifiable subset was reduced to 4 parameters from a total…
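Steps (i) and (iii) of the methodology can be sketched together: Monte-Carlo sampling of the parameter space to find a good starting point, followed by local refinement of the retained subset. The cost function below is a made-up surrogate for the closed-loop WWTP simulation, and the 4-parameter set is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def control_cost(params):
    """Toy surrogate for a closed-loop simulation cost: penalizes deviation
    from a hypothetical well-tuned parameter set (invented for the sketch)."""
    target = np.array([1.5, 0.4, 2.0, 0.8])
    return float(np.sum((params - target) ** 2))

# (i) Monte-Carlo screening: sample the parameter space, keep the best point.
samples = rng.uniform(0.0, 3.0, size=(500, 4))
costs = np.array([control_cost(s) for s in samples])
p0 = samples[np.argmin(costs)]

# (iii) Local refinement (here: simple coordinate descent) starting from the
# Monte-Carlo winner, standing in for the paper's minimization algorithm.
p = p0.copy()
for _ in range(200):
    for i in range(4):
        for step in (0.05, -0.05):
            trial = p.copy()
            trial[i] += step
            if control_cost(trial) < control_cost(p):
                p = trial
```

Step (ii), identifiability analysis, would sit between the two stages and fix the non-identifiable parameters at their screened values so that only the identifiable subset is refined.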

  8. Applying a learning design methodology in the flipped classroom approach – empowering teachers to reflect

    DEFF Research Database (Denmark)

    Triantafyllou, Evangelia; Kofoed, Lise; Purwins, Hendrik

    2016-01-01

One of the recent developments in teaching that heavily relies on current technology is the “flipped classroom” approach. In a flipped classroom the traditional lecture and homework sessions are inverted. Students are provided with online material in order to gain necessary knowledge before class … through flipped classroom designs. In order to discuss the opportunities arising from this approach, the different components of the Learning Design – Conceptual Map (LD-CM) are presented and examined in the context of the flipped classroom. It is shown that viewing the flipped classroom through the lens …, tools and resources used in specific flipped classroom models, and it can make educators more aware of the decisions that have to be taken and the people who have to be involved when designing a flipped classroom. By using the LD-CM, this paper also draws attention to the importance of characteristics…

  9. Motor fuel demand analysis - applied modelling in the European union

    International Nuclear Information System (INIS)

    Chorazewiez, S.

    1998-01-01

Motor fuel demand in Europe amounts to almost half of petroleum product consumption and to thirty percent of total final energy consumption. This study considers, firstly, the energy policies of different European countries and the ways in which the consumption of motor gasoline and automotive gas oil has developed. Secondly, it provides a survey of demand models in the energy sector, illustrating their specific characteristics. It then proposes an economic model of automotive fuel consumption, treating motor gasoline and automotive gas oil separately over a period of thirty years (1960-1993) for five main countries in the European Union. Finally, forecasts of gasoline and diesel consumption up to the year 2020 are given for different scenarios. (author)
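Demand models of the kind surveyed here are often specified in log-log form, so the regression coefficients are read directly as price and income elasticities. A minimal sketch with simulated data (the specification and all coefficients are invented, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical log-log fuel demand: ln Q = a + e_p*ln P + e_y*ln Y + noise.
n = 300
ln_p = rng.normal(0.0, 0.3, n)            # log real fuel price
ln_y = rng.normal(0.0, 0.2, n)            # log real income
ln_q = 2.0 - 0.25*ln_p + 0.9*ln_y + rng.normal(0.0, 0.05, n)

# Ordinary least squares via the normal equations (numpy lstsq).
X = np.column_stack([np.ones(n), ln_p, ln_y])
beta, *_ = np.linalg.lstsq(X, ln_q, rcond=None)
price_elasticity, income_elasticity = beta[1], beta[2]
```

With the fitted elasticities in hand, scenario forecasts like those in the study follow by plugging projected price and income paths into the estimated equation.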

  10. Applying ecological modeling to parenting for Australian refugee families.

    Science.gov (United States)

    Grant, Julian; Guerin, Pauline B

    2014-10-01

    Children in families with parents from refugee backgrounds are often viewed as a vulnerable group with increased risks of developing physical or psychological problems. However, there is very little research regarding the strategies that parents might use to parent their children in a new country while they also manage the interrelated challenges of poverty, social isolation, maternal stress, and mental ill health that often go along with resettlement. We explore the application of ecological modeling, specifically at individual, institutional, and policy levels, within an Australian context to critique the factors that shape the development of parenting capacity within refugee families settling in a new Western country. Ecological modeling enables examination of how public policy at local state and national levels influences the individual and family directly and through the organizations that are given the task of implementing many of the policy recommendations. Recommendations for health practice and research are made. © The Author(s) 2014.

  11. Essays on environmental policy analysis: Computable general equilibrium approaches applied to Sweden

    International Nuclear Information System (INIS)

    Hill, M.

    2001-01-01

This thesis consists of three essays within the field of applied environmental economics, with the common basic aim of analyzing effects of Swedish environmental policy. Starting out from Swedish environmental goals, the thesis assesses a range of policy-related questions. The objective is to quantify policy outcomes by constructing and applying numerical models especially designed for environmental policy analysis. Static and dynamic multi-sectoral computable general equilibrium models are developed in order to analyze the following issues: the costs and benefits of a domestic carbon dioxide (CO2) tax reform, with special attention to how these costs and benefits depend on the structure of the tax system and on policy-induced changes in 'secondary' pollutants; the effects of allowing for emission permit trading through time when the long-term domestic environmental goal is specified in terms of the CO2 stock; and the effects on long-term projected economic growth and welfare that are due to damages from emission flow and accumulation of 'local' pollutants (nitrogen oxides and sulfur dioxide), as well as the outcome of environmental policy when costs and benefits are considered in an integrated environmental-economic framework.

  12. Applying CIPP Model for Learning-Object Management

    Science.gov (United States)

    Morgado, Erla M. Morales; Peñalvo, Francisco J. García; Martín, Carlos Muñoz; Gonzalez, Miguel Ángel Conde

The knowledge management process needs evaluation in order to determine its suitable functionality, yet there is no clear definition of the stages at which LOs (learning objects) need to be evaluated, nor of the specific metrics for continuously promoting their quality. This paper presents a proposal for LO evaluation during their management in e-learning systems. To achieve this, we suggest specific steps for LO design, implementation and evaluation within the four stages proposed by the CIPP model (Context, Input, Process, Product).

  13. Market Manipulation? Applying the Propaganda Model to Financial Media Reporting

    OpenAIRE

    Thompson, Peter A

    2009-01-01

    Herman and Chomsky’s Propaganda Model (PM) has emphasized how the various ‘filters’ can lead to news reports misrepresenting the vested political and economic interests that underpin US foreign policy. However, there has been relatively little attention paid to the implications of the PM for media operations in another key dimension of capitalism: financial markets. Traders require timely and accurate information about changing market conditions. However, financial news announcements influenc...

  14. Bayesian network modeling applied to coastal geomorphology: lessons learned from a decade of experimentation and application

    Science.gov (United States)

    Plant, N. G.; Thieler, E. R.; Gutierrez, B.; Lentz, E. E.; Zeigler, S. L.; Van Dongeren, A.; Fienen, M. N.

    2016-12-01

We evaluate the strengths and weaknesses of Bayesian networks that have been used to address scientific and decision-support questions related to coastal geomorphology. We will provide an overview of coastal geomorphology research that has used Bayesian networks and describe what this approach can do and when it works (or fails to work). Over the past decade, Bayesian networks have been formulated to analyze the multi-variate structure and evolution of coastal morphology and associated human and ecological impacts. The approach relates observable system variables to each other by estimating discrete correlations. The resulting Bayesian networks make predictions that propagate errors, conduct inference via Bayes' rule, or both. In scientific applications, the model results are useful for hypothesis testing, using confidence estimates to gauge the strength of tests, while applications to coastal resource management are aimed at decision support, where the probabilities of desired ecosystem outcomes are evaluated. The range of Bayesian-network applications to coastal morphology includes emulation of high-resolution wave transformation models to make oceanographic predictions, morphologic response to storms and/or sea-level rise, groundwater response to sea-level rise and morphologic variability, habitat suitability for endangered species, and assessment of monetary or human-life risk associated with storms. All of these examples are based on vast observational data sets, numerical model output, or both. We will discuss the progression of our experiments, which has included testing whether the Bayesian-network approach can be implemented and is appropriate for addressing basic and applied scientific problems, and evaluating the hindcast and forecast skill of these implementations. We will present and discuss calibration/validation tests that are used to assess the robustness of Bayesian-network models, and we will compare these results to tests of other models. This will…
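The core inference step of such networks is just Bayes' rule over discrete states. A minimal two-node sketch in the spirit of the coastal applications above (a sea-level-rise scenario node and an observed-erosion node; all probabilities are purely illustrative):

```python
# Minimal discrete Bayesian-network inference by Bayes' rule.
# Numbers are invented for illustration, not from any coastal study.
prior = {"low": 0.5, "high": 0.5}            # sea-level-rise scenario
likelihood = {                               # P(observed erosion | scenario)
    "low":  {"minor": 0.7, "severe": 0.3},
    "high": {"minor": 0.2, "severe": 0.8},
}

def posterior(observation):
    """P(scenario | observation) via Bayes' rule with normalization."""
    unnorm = {s: prior[s] * likelihood[s][observation] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

post = posterior("severe")                   # update on a severe-erosion observation
```

Real applications chain many such conditional tables together; observing severe erosion here shifts the posterior toward the high sea-level-rise scenario (0.5 to about 0.73), which is the error-propagating, belief-updating behavior the abstract describes.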

  15. Applying a Virtual Economy Model in Mexico's Oil Sector

    International Nuclear Information System (INIS)

    Baker, G.

    1994-01-01

The state of Mexico's oil industry, including the accomplishments of Pemex, Mexico's national oil company, was discussed, with particular reference to the progress made in the period of 1988-1994, and the outlook for innovations in the post-Salinas era. The concept of an evolutionary trend from a command economy (State as sole producer), towards market (State as regulator) or mixed economies (State as business partner) in developing countries, was introduced, placing Pemex within this evolutionary model as moving away from centralized control of oil production and distribution, while achieving international competitiveness. The concept of a 'virtual market economy' was also discussed. This model retains the legal basis of a command economy, while instituting modernization programs in order to stimulate market-economic conditions. This type of economy was considered particularly useful in this instance, since it would allow Pemex units to operate within international performance and price benchmarks while maintaining the state monopoly. Specific details of how Pemex could transform itself to a virtual market economy were outlined. It was recommended that Pemex experiment with the virtual mixed economy model; in essence, making the state a co-producer, co-transporter, and co-distributor of hydrocarbons. The effects of such a move would be to bring non-debt funding to oil and gas production, transmission, and associated industrial activities.

  16. Applying fuzzy analytic network process in quality function deployment model

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Afsharkazemi

    2012-08-01

In this paper, we propose an empirical study of QFD implementation when fuzzy numbers are used to handle the uncertainty associated with different components of the proposed model. We implement a fuzzy analytic network to find the relative importance of various criteria, and using fuzzy numbers we calculate the relative importance of these factors. The proposed model uses a fuzzy matrix and the house of quality to study product development in QFD, as well as the second phase, i.e. part deployment. In most studies, the primary focus in implementing quality function deployment is only on customer requirements (CRs), and other criteria, such as production and manufacturing costs, are disregarded. The results of using the fuzzy analytic network process based on the QFD model in the Daroupat packaging company to develop PVDC show that the most important indexes are waterproofing, pill-package resistance, and production cost. In addition, the PVDC coating is the most important index from the point of view of the company's experts.

  17. Mathematical model of gas plasma applied to chronic wounds

    Energy Technology Data Exchange (ETDEWEB)

    Wang, J. G.; Liu, X. Y.; Liu, D. W.; Lu, X. P. [State Key Lab of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, WuHan, HuBei 430074 (China); Zhang, Y. T. [Shandong Provincial Key Lab of UHV Technology and Gas Discharge Physics, School of Electrical Engineering, Shandong University, Jinan, Shandong Province 250061 (China)

    2013-11-15

Chronic wounds are a major burden for worldwide health care systems, and patients suffer pain and discomfort from this type of wound. Recently, gas plasmas have been shown to safely speed the healing of chronic wounds. In this paper, we develop a deterministic mathematical model formulated as eight-species reaction-diffusion equations, and use it to analyze the plasma treatment process. The model follows the spatial and temporal concentrations within the wound of oxygen, chemoattractants, capillary sprouts, blood vessels, fibroblasts, extracellular matrix material, nitric oxide (NO), and inflammatory cells. Two effects of plasma, increasing NO concentration and reducing bacteria load, are considered in this model. The plasma treatment decreases the complete healing time from 25 days (normal wound healing) to 17 days, and the contributions of increasing NO concentration and reducing bacteria load are about 1/4 and 3/4, respectively. Increasing plasma treatment frequency from twice to three times per day accelerates the healing process. Finally, the response of chronic wounds of different etiologies to treatment with gas plasmas is analyzed.
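The structure of such a reaction-diffusion model can be illustrated with a drastic two-species caricature: NO diffuses in from the treated wound edge and decays, and a "repair" variable grows logistically where NO is present. This is explicit finite differences on a 1-D grid with invented parameters; the paper's eight-species model is far richer.

```python
import numpy as np

def heal_fraction(no_boost=1.0, days=30, nx=50):
    """Two-species caricature of the wound model: NO concentration n diffuses
    and decays; tissue repair u grows logistically where n is present.
    All parameters are illustrative, not the paper's."""
    dx, dt = 1.0 / nx, 0.001          # nondimensional grid spacing, time step
    steps_per_day = 100               # one "day" compressed to 100 steps
    n = np.zeros(nx)
    u = np.zeros(nx)
    n[0] = no_boost                   # NO supplied at the treated wound edge
    for _ in range(days * steps_per_day):
        lap = np.zeros(nx)
        lap[1:-1] = (n[2:] - 2*n[1:-1] + n[:-2]) / dx**2
        n = n + dt * (0.01 * lap - 0.5 * n)   # diffusion + first-order decay
        n[0] = no_boost                        # boundary held by the treatment
        u = u + dt * (2.0 * n * (1 - u))       # NO-driven logistic repair
    return float(u.mean())

treated = heal_fraction(no_boost=1.0)     # strong NO supply (plasma-treated)
untreated = heal_fraction(no_boost=0.2)   # weak NO supply (untreated)
```

Even this caricature reproduces the qualitative finding: raising the NO boundary supply (the plasma's main modelled effect) increases the mean repaired fraction over the same treatment window.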

  18. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny.

    Directory of Open Access Journals (Sweden)

    Simon T Maddock

Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a 'traditional' Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing platforms (Illumina's HiSeq and MiSeq, Roche's 454 GS FLX, and Life Technologies' Ion Torrent) to produce seven (near-)complete mitogenomes from six species that form a small radiation of caecilian amphibians from the Seychelles. The fastest, most accurate method of obtaining mitogenome sequences that we tested was direct sequencing of genomic DNA (shotgun sequencing) using the MiSeq platform. Bayesian inference and maximum likelihood analyses using seven different partitioning strategies were unable to resolve compellingly all phylogenetic relationships among the Seychelles caecilian species, indicating the need for additional data in this case.

  19. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny.

    Science.gov (United States)

    Maddock, Simon T; Briscoe, Andrew G; Wilkinson, Mark; Waeschenbach, Andrea; San Mauro, Diego; Day, Julia J; Littlewood, D Tim J; Foster, Peter G; Nussbaum, Ronald A; Gower, David J

    2016-01-01

    Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a 'traditional' Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing platforms (Illumina's HiSeq and MiSeq, Roche's 454 GS FLX, and Life Technologies' Ion Torrent) to produce seven (near-) complete mitogenomes from six species that form a small radiation of caecilian amphibians from the Seychelles. The fastest, most accurate method of obtaining mitogenome sequences that we tested was direct sequencing of genomic DNA (shotgun sequencing) using the MiSeq platform. Bayesian inference and maximum likelihood analyses using seven different partitioning strategies were unable to resolve compellingly all phylogenetic relationships among the Seychelles caecilian species, indicating the need for additional data in this case.

  20. Applying A Multi-Objective Based Procedure to SWAT Modelling in Alpine Catchments

    Science.gov (United States)

    Tuo, Y.; Disse, M.; Chiogna, G.

    2017-12-01

In alpine catchments, water management practices can lead to conflicts between upstream and downstream stakeholders, as in the Adige river basin (Italy). A correct prediction of available water resources plays an important part, for example, in defining how much water can be stored for hydropower production in upstream reservoirs without affecting agricultural activities downstream. Snow is a crucial hydrological component that strongly shapes the seasonal behavior of streamflow. Therefore, a realistic representation of snow dynamics is fundamental for water management operations in alpine catchments. The Soil and Water Assessment Tool (SWAT) model has been applied in alpine catchments worldwide. However, during model calibration of catchment-scale applications, snow parameters have generally been estimated from streamflow records rather than from snow measurements. This may lead to streamflow predictions with the wrong snow melt contribution. This work highlights the importance of considering snow measurements in the calibration of the SWAT model for alpine hydrology and compares various calibration methodologies. In addition to discharge records, snow water equivalent time series at both the subbasin scale and individual monitoring stations were utilized to evaluate model performance against the SWAT subbasin and elevation-band snow outputs. Comparing model results obtained by calibrating against discharge data only and against discharge data along with snow water equivalent data, we show that the latter approach improves the reliability of snow simulations while maintaining good estimations of streamflow. With a more reliable representation of snow dynamics, the hydrological model can provide more accurate references for proposing adequate water management solutions. This study offers the wide SWAT user community an effective approach to improve streamflow predictions in alpine catchments and hence support decision makers in water allocation.
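Multi-objective calibration of this kind is commonly scored with the Nash-Sutcliffe efficiency (NSE) computed separately on discharge and snow water equivalent, then aggregated. A minimal sketch of such a score (the weighting scheme and data are invented, not the paper's):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means no better than
    predicting the observed mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

def multi_objective(sim_q, obs_q, sim_swe, obs_swe, w_snow=0.5):
    """Weighted aggregate of discharge and snow-water-equivalent skill; a
    simple stand-in for a multi-objective calibration score."""
    return (1 - w_snow) * nse(sim_q, obs_q) + w_snow * nse(sim_swe, obs_swe)

# Illustrative observation series (m^3/s discharge, mm snow water equivalent).
obs_q = np.array([5.0, 9.0, 14.0, 11.0, 6.0])
obs_swe = np.array([120.0, 90.0, 40.0, 10.0, 0.0])
score_perfect = multi_objective(obs_q, obs_q, obs_swe, obs_swe)
```

A calibration driven only by the discharge term can score well while getting the snow term badly wrong; including the SWE term in the aggregate is exactly what penalizes wrong snow-melt contributions.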

  1. Modeling external constraints: Applying expert systems to nuclear plants

    International Nuclear Information System (INIS)

    Beck, C.E.; Behera, A.K.

    1993-01-01

Artificial Intelligence (AI) applications in nuclear plants have received much attention over the past decade. Specific applications that have been addressed include development of models and knowledge bases, plant maintenance, operations, procedural guidance, risk assessment, and design tools. This paper examines the issue of external constraints, with a focus on the use of AI and expert systems as design tools. It also provides several suggested methods for addressing these constraints within the AI framework. These methods include a State Matrix scheme, a layered structure for the knowledge base, and application of the dynamic parameter concept.

  2. Morphometric model of lymphocyte as applied to scanning flow cytometry

    Science.gov (United States)

    Loiko, Valery A.; Ruban, Gennady I.; Gritsai, Olga A.; Gruzdev, Alexey D.; Kosmacheva, Svetlana M.; Goncharova, Natalia V.; Miskevich, Alexander A.

    2006-11-01

The peripheral blood lymphocytes of normal individuals are investigated by methods of specialized light microscopy. Lymphocytes as a whole and the T-cell subpopulation are considered. Lymphocyte structure is characterized with reference to polarizing scanning flow cytometry. The lymphocyte and lymphocyte nucleus shapes are analyzed. A linear correlation between the size of a lymphocyte and that of its nucleus is found. A morphometric model of a lymphocyte is constructed using the obtained data. The findings can be used, for instance, as input parameters for solving the direct and inverse light-scattering problems of turbidimetry, nephelometry, and flow cytometry.

  3. Inverse geothermal modelling applied to Danish sedimentary basins

    DEFF Research Database (Denmark)

    Poulsen, Soren E.; Balling, Niels; Bording, Thue S.

    2017-01-01

…The numerical model was utilized for predicting and contouring temperatures at 2000 and 3000 m depths and for two main geothermal reservoir units, the Gassum (Lower Jurassic-Upper Triassic) and Bunter/Skagerrak (Triassic) reservoirs, both currently utilized for geothermal energy production. Temperature gradients to depths of 2000-3000 m are generally around 25-30 degrees C km(-1), locally up to about 35 degrees C km(-1). Large regions have geothermal reservoirs with characteristic temperatures ranging from ca. 40-50 degrees C, at 1000-1500 m depth, to ca. 80-110 degrees C, at 2500-3500 m, however…

  4. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

…Their developments, however, are largely due to experiment-based trial-and-error approaches, and while they do not require validation, they can be time-consuming and resource-intensive. Also, one may ask: can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply … for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based synthesis method must employ models at lower levels of aggregation and, through combination rules for phenomena, generate (synthesize) new intensified unit operations. An efficient solution procedure for the synthesis problem is needed to tackle the potentially large number of options that would be obtained…

  5. A Conceptual Modeling Approach for OLAP Personalization

    Science.gov (United States)

    Garrigós, Irene; Pardillo, Jesús; Mazón, Jose-Norberto; Trujillo, Juan

    Data warehouses rely on multidimensional models in order to provide decision makers with appropriate structures to intuitively analyze data with OLAP technologies. However, data warehouses may be potentially large and multidimensional structures become increasingly complex to be understood at a glance. Even if a departmental data warehouse (also known as a data mart) is used, these structures would also be too complex. As a consequence, acquiring the required information is more costly than expected and decision makers using OLAP tools may get frustrated. In this context, current approaches for data warehouse design are focused on deriving a unique OLAP schema for all analysts from their previously stated information requirements, which is not enough to lighten the complexity of the decision making process. To overcome this drawback, we argue for personalizing multidimensional models for OLAP technologies according to the continuously changing user characteristics, context, requirements and behaviour. In this paper, we present a novel approach to personalizing OLAP systems at the conceptual level based on the underlying multidimensional model of the data warehouse, a user model and a set of personalization rules. The great advantage of our approach is that a personalized OLAP schema is provided for each decision maker, contributing to better satisfy their specific analysis needs. Finally, we show the applicability of our approach through a sample scenario based on our CASE tool for data warehouse development.
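
A toy sketch of the rule-based personalization idea described above: a user model and a set of personalization rules prune the multidimensional schema for one analyst. Schema, profile, and rules are invented for illustration.

```python
# Hypothetical multidimensional schema: fact table -> dimensions.
schema = {"Sales": ["Time", "Product", "Store", "Customer", "Promotion"]}
user = {"role": "store_manager"}  # invented user model

# Invented personalization rules: role -> dimensions relevant to that analyst.
rules = {
    "store_manager": {"Time", "Product", "Store"},
    "marketing_analyst": {"Time", "Customer", "Promotion"},
}

# Derive a personalized schema for this decision maker.
personalized = {fact: [d for d in dims if d in rules[user["role"]]]
                for fact, dims in schema.items()}
print(personalized)
```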

  6. Neighborhood Design, Physical Activity, and Wellbeing: Applying the Walkability Model.

    Science.gov (United States)

    Zuniga-Teran, Adriana A; Orr, Barron J; Gimblett, Randy H; Chalfoun, Nader V; Guertin, David P; Marsh, Stuart E

    2017-01-13

    Neighborhood design affects lifestyle physical activity, and ultimately human wellbeing. There are, however, a limited number of studies that examine neighborhood design types. In this research, we examine four types of neighborhood designs: traditional development, suburban development, enclosed community, and cluster housing development, and assess their level of walkability and their effects on physical activity and wellbeing. We examine significant associations through a questionnaire (n = 486) distributed in Tucson, Arizona using the Walkability Model. Among the tested neighborhood design types, traditional development showed significant associations and the highest value for walkability, as well as for each of the two types of walking (recreation and transportation) representing physical activity. Suburban development showed significant associations and the highest mean values for mental health and wellbeing. Cluster housing showed significant associations and the highest mean value for social interactions with neighbors and for perceived safety from crime. Enclosed community did not obtain the highest means for any wellbeing benefit. The Walkability Model proved useful in identifying the walkability categories associated with physical activity and perceived crime. For example, the experience category was strongly and inversely associated with perceived crime. This study provides empirical evidence of the importance of including vegetation, particularly trees, throughout neighborhoods in order to increase physical activity and wellbeing. Likewise, the results suggest that regular maintenance is an important strategy to improve mental health and overall wellbeing in cities.

  7. Neighborhood Design, Physical Activity, and Wellbeing: Applying the Walkability Model

    Directory of Open Access Journals (Sweden)

    Adriana A. Zuniga-Teran

    2017-01-01

    Full Text Available Neighborhood design affects lifestyle physical activity, and ultimately human wellbeing. There are, however, a limited number of studies that examine neighborhood design types. In this research, we examine four types of neighborhood designs: traditional development, suburban development, enclosed community, and cluster housing development, and assess their level of walkability and their effects on physical activity and wellbeing. We examine significant associations through a questionnaire (n = 486) distributed in Tucson, Arizona using the Walkability Model. Among the tested neighborhood design types, traditional development showed significant associations and the highest value for walkability, as well as for each of the two types of walking (recreation and transportation) representing physical activity. Suburban development showed significant associations and the highest mean values for mental health and wellbeing. Cluster housing showed significant associations and the highest mean value for social interactions with neighbors and for perceived safety from crime. Enclosed community did not obtain the highest means for any wellbeing benefit. The Walkability Model proved useful in identifying the walkability categories associated with physical activity and perceived crime. For example, the experience category was strongly and inversely associated with perceived crime. This study provides empirical evidence of the importance of including vegetation, particularly trees, throughout neighborhoods in order to increase physical activity and wellbeing. Likewise, the results suggest that regular maintenance is an important strategy to improve mental health and overall wellbeing in cities.

  8. Applying the Health Belief Model to college students' health behavior

    Science.gov (United States)

    Kim, Hak-Seon; Ahn, Joo

    2012-01-01

    The purpose of this research was to investigate how university students' nutrition beliefs influence their health behavioral intention. This study used an online survey engine (Qualtrics.com) to collect data from college students. Out of 253 questionnaires collected, 251 questionnaires (99.2%) were used for the statistical analysis. Confirmatory Factor Analysis (CFA) revealed that the dimensions "Nutrition Confidence," "Susceptibility," "Severity," "Barrier," "Benefit," "Behavioral Intention to Eat Healthy Food," and "Behavioral Intention to do Physical Activity" had construct validity; Cronbach's alpha coefficients and composite reliabilities were tested for item reliability. The results validate that objective nutrition knowledge was a good predictor of college students' nutrition confidence. The results also clearly showed that the two direct measures were significant predictors of behavioral intentions, as hypothesized. Perceived benefit of and perceived barrier to eating healthy food had significant effects on behavioral intentions and were valid measures for determining behavioral intentions. These findings can enhance the extant literature on the universal applicability of the model and serve as useful references for further investigations of the validity of the model within other health care or foodservice settings and for other health behavioral categories. PMID:23346306
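
Item reliability in studies like this one is commonly summarized with Cronbach's alpha; a minimal sketch follows, using a hypothetical response matrix rather than the study's survey data.

```python
import numpy as np

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/variance of totals).
def cronbach_alpha(items):
    """items: rows = respondents, columns = items on one survey dimension."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses (6 respondents x 3 items).
responses = np.array([
    [4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5], [3, 2, 3],
])
print(round(cronbach_alpha(responses), 3))  # -> 0.91
```

Values above roughly 0.7 are conventionally taken as acceptable item reliability.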

  9. Mathematical Modeling Applied to Prediction of Landslides in Southern Brazil

    Science.gov (United States)

    Silva, Lúcia; Araújo, João; Braga, Beatriz; Fernandes, Nelson

    2013-04-01

    Mass movements are natural phenomena that occur on slopes and are important agents in landscape development. These movements have caused serious damage to infrastructure and properties. In addition to the mass movements occurring on natural slopes, there is also a large number of accidents induced by human action in the landscape. The conversion of land use and land cover for agriculture is one example of a change that has affected the stability of slopes. Land use and/or land cover changes have direct and indirect effects on slope stability and frequently represent a major factor controlling the occurrence of man-induced mass movements. In Brazil, especially in the southern and southeastern regions, areas of original natural rain forest have been continuously replaced by agriculture during the last decades, leading to important modifications in soil mechanical properties and to major changes in hillslope hydrology. In these regions, such effects are amplified due to the steep hilly topography, intense summer rainfall events and dense urbanization. In November 2008, a major landslide event took place in a rural area with intensive agriculture in the state of Santa Catarina (Morro do Baú), where many catastrophic landslides were triggered after a long rainy period. In this area, the natural forest has been replaced by huge banana and pine plantations. In recent decades, the state of Santa Catarina has been the scene of several such mass-movement events. In this study, based on field mapping and modeling, we characterize the role played by geomorphological and geological factors in controlling the spatial distribution of landslides in the Morro do Baú area. To attain this objective, a digital elevation model of the basin was generated on a 10 m grid, from which the topographic parameters were obtained.
The spatial distribution of the scars from this major event was mapped from another image, obtained immediately
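
Topographic parameters such as slope can be extracted from a gridded DEM as sketched below; the tiny synthetic grid stands in for the 10 m elevation model of the basin.

```python
import numpy as np

# Synthetic 3x3 elevation grid (metres) sloping uniformly downhill,
# illustrative only; the study uses a real 10 m DEM of the basin.
dem = np.array([
    [100.0, 100.0, 100.0],
    [ 95.0,  95.0,  95.0],
    [ 90.0,  90.0,  90.0],
])
cell = 10.0  # grid spacing in metres

# Central-difference gradients, then slope angle from the gradient magnitude.
dz_dy, dz_dx = np.gradient(dem, cell)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
print(slope_deg[1, 1])  # 5 m drop per 10 m cell -> arctan(0.5) ~ 26.6 degrees
```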

  10. An assessment of econometric models applied to fossil fuel power generation

    International Nuclear Information System (INIS)

    Gracceva, F.; Quercioli, R.

    2001-01-01

    The main purpose of this report is to provide a general view of those studies in which the econometric approach is applied to the selection of fuel in fossil-fired power generation, focusing attention on the key role played by fuel prices. The report consists of a methodological analysis and a survey of the studies available in the literature. The methodological analysis makes it possible to assess the adequacy of the econometric approach for electrical power utility policy. With this purpose, the fundamentals of microeconomics, which are the basis of the econometric models, are pointed out and discussed, and the hypotheses that must be assumed to comply with economic theory are then verified in their actual implementation in the power generation sector. The survey of the available studies provides a detailed description of the Translog and Logit models, and the results achieved with their application. From these results, the estimated models fit the data with good approximation, show a certain degree of interfuel substitution, and show a meaningful reaction to prices on the demand side [it]
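
A minimal sketch of the logit-type fuel-share models surveyed: each fuel's share of generation falls as its own price rises. The price-sensitivity coefficient below is illustrative, not an estimate from the studies.

```python
import math

# Multinomial logit fuel-share sketch with a single (assumed) price elasticity.
def fuel_shares(prices, beta=-2.0):
    """Return logit shares from fuel prices; higher price -> lower share."""
    utilities = {f: beta * math.log(p) for f, p in prices.items()}
    denom = sum(math.exp(u) for u in utilities.values())
    return {f: math.exp(u) / denom for f, u in utilities.items()}

# Illustrative relative prices (coal cheapest, oil dearest).
shares = fuel_shares({"coal": 1.0, "gas": 1.5, "oil": 2.0})
print({f: round(s, 3) for f, s in shares.items()})
```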

  11. Virtual building environments (VBE) - Applying information modeling to buildings

    Energy Technology Data Exchange (ETDEWEB)

    Bazjanac, Vladimir

    2004-06-21

    A Virtual Building Environment (VBE) is a "place" where building industry project staff can get help in creating Building Information Models (BIM) and in the use of virtual buildings. It consists of a group of industry software that is operated by industry experts who are also experts in the use of that software. The purpose of a VBE is to facilitate expert use of appropriate software applications in conjunction with each other to efficiently support multidisciplinary work. This paper defines BIM and virtual buildings, and describes VBE objectives, set-up and characteristics of operation. It reports on the VBE Initiative and the benefits from a couple of early VBE projects.

  12. Electrostatic Model Applied to ISS Charged Water Droplet Experiment

    Science.gov (United States)

    Stevenson, Daan; Schaub, Hanspeter; Pettit, Donald R.

    2015-01-01

    The electrostatic force can be used to create novel relative motion between charged bodies if it can be isolated from the stronger gravitational and dissipative forces. Recently, Coulomb orbital motion was demonstrated on the International Space Station by releasing charged water droplets in the vicinity of a charged knitting needle. In this investigation, the Multi-Sphere Method, an electrostatic model developed to study active spacecraft position control by Coulomb charging, is used to simulate the complex orbital motion of the droplets. When atmospheric drag is introduced, the simulated motion closely mimics that seen in the video footage of the experiment. The electrostatic force's inverse dependency on separation distance near the center of the needle lends itself to analytic predictions of the radial motion.
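
The inverse-square Coulomb interaction underlying the droplet motion can be sketched directly; the charges and separations below are illustrative, not values from the ISS experiment.

```python
# Inverse-square electrostatic force between two point charges.
K = 8.9875517923e9  # Coulomb constant, N m^2 / C^2

def coulomb_force(q1, q2, r):
    """Magnitude of the Coulomb force at separation r (metres)."""
    return K * abs(q1 * q2) / r**2

# Illustrative droplet/needle charges; doubling the separation
# reduces the force by a factor of four.
f_near = coulomb_force(1e-11, 5e-9, 0.01)
f_far = coulomb_force(1e-11, 5e-9, 0.02)
print(f_near / f_far)  # -> 4.0
```

The Multi-Sphere Method in the paper generalizes this by representing the charged needle as many such point charges fitted to its geometry.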

  13. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, the analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects, reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our model not only supports selection of the best project, but can also be used to analyze the gaps between existing performance values and aspiration levels in each dimension and criterion, based on the influential network relation map.
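
The VIKOR step of the hybrid model can be sketched as follows, assuming benefit-type criteria throughout; the score matrix and weights are hypothetical, not the Taiwanese case data.

```python
import numpy as np

# VIKOR compromise ranking: lower Q is better.
def vikor(scores, weights, v=0.5):
    """scores: alternatives x criteria (all benefit-type); v weights group utility."""
    scores = np.asarray(scores, dtype=float)
    best, worst = scores.max(axis=0), scores.min(axis=0)
    norm = weights * (best - scores) / (best - worst)
    S = norm.sum(axis=1)   # group utility
    R = norm.max(axis=1)   # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q

scores = [[7, 8, 6], [9, 6, 7], [5, 9, 8]]    # 3 projects x 3 criteria (invented)
weights = np.array([0.5, 0.3, 0.2])           # invented criterion weights
Q = vikor(scores, weights)
print(Q.argmin())  # index of the best-ranked project
```

In the full hybrid model, DEMATEL and ANP would supply the criterion weights that are simply assumed here.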

  14. [The bioethical principlism model applied in pain management].

    Science.gov (United States)

    Souza, Layz Alves Ferreira; Pessoa, Ana Paula da Costa; Barbosa, Maria Alves; Pereira, Lilian Varanda

    2013-03-01

    An integrative literature review was developed with the purpose to analyze the scientific production regarding the relationships between pain and the principles of bioethics (autonomy, beneficence, nonmaleficence and justice). Controlled descriptors were used in three international data sources (LILACS, SciELO, MEDLINE), in April of 2012, totaling 14 publications categorized by pain and autonomy, pain and beneficence, pain and nonmaleficence, pain and justice. The adequate relief of pain is a human right and a moral issue directly related with the bioethical principlism standard model (beneficence, non-maleficence, autonomy and justice). However, many professionals overlook the pain of their patients, ignoring their ethical role when facing suffering. It was concluded that principlism has been neglected in the care of patients in pain, showing the need for new practices to change this setting.

  15. Applying the Network Simulation Method for testing chaos in a resistively and capacitively shunted Josephson junction model

    Directory of Open Access Journals (Sweden)

    Fernando Gimeno Bellver

    Full Text Available In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. This numerical approach establishes a formal equivalence between physical transport processes and electrical networks, and hence it can be applied to deal efficiently with a wide range of differential systems. The generality underlying that electrical equivalence allows circuit theory to be applied to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes and the calculations have been carried out in PSpice, an electrical circuit simulation package. Overall, it holds that this numerical approach leads to fast computational solution of Josephson differential models. An empirical application regarding the study of the Josephson model completes the paper. Keywords: Electrical analogy, Network Simulation Method, Josephson junction, Chaos indicator, Fast Fourier Transform
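
The dimensionless RCSJ equation behind such junction models, beta_c * phi'' + phi' + sin(phi) = i_dc, can also be integrated directly as a cross-check; the paper instead solves an equivalent electrical network in PSpice, and the parameters below are illustrative.

```python
import math

# Explicit Euler integration of the RCSJ equation; the time-averaged
# phase velocity plays the role of the junction voltage.
def rcsj_voltage(i_dc, beta_c=0.1, dt=1e-3, steps=200_000):
    phi, dphi = 0.0, 0.0
    total, count = 0.0, 0
    for n in range(steps):
        ddphi = (i_dc - math.sin(phi) - dphi) / beta_c
        dphi += ddphi * dt
        phi += dphi * dt
        if n >= steps // 2:          # average over the second half only
            total += dphi
            count += 1
    return total / count

v_sub = rcsj_voltage(0.5)   # below critical current: phase locks, ~zero voltage
v_run = rcsj_voltage(2.0)   # above critical current: finite average voltage
print(round(v_sub, 3), round(v_run, 3))
```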

  16. Nonspherical Radiation Driven Wind Models Applied to Be Stars

    Science.gov (United States)

    Arauxo, F. X.

    1990-11-01

    ABSTRACT. In this work we present a model for the structure of a radiatively driven wind in the meridional plane of a hot star. Rotation effects and simulation of viscous forces were included in the motion equations. The line radiation force is considered with the inclusion of the finite disk correction in self-consistent computations which also contain gravity darkening as well as distortion of the star by rotation. An application to a typical B1V star leads to mass-flux ratios between equator and pole of the order of 10 and mass loss rates in the range 5.10 to 10 Mo/yr. Our envelope models are flattened towards the equator and the wind terminal velocities in that region are rather high (1000 km/s). However, in the region near the star the equatorial velocity field is dominated by rotation. RESUMEN. Se presenta un modelo de la estructura de un viento empujado radiativamente en el plano meridional de una estrella caliente. Se incluyeron en las ecuaciones de movimiento los efectos de rotación y la simulación de fuerzas viscosas. Se consideró la fuerza de las líneas de radiación incluyendo la corrección de disco finito en cálculos autoconsistentes los cuales incluyen oscurecimiento gravitacional así como distorsión de la estrella por rotación. La aplicación a una estrella típica B1V lleva a cocientes de flujo de masa entre el ecuador y el polo del orden de 10 y de pérdida de masa en el intervalo 5.10 a 10 Mo/año. Nuestros modelos de envolvente están achatados hacia el ecuador y las velocidades terminales del viento en esa región son bastante altas (1000 km/s). Sin embargo, en la región cercana a la estrella el campo de velocidad ecuatorial está dominado por la rotación. Key words: STARS-BE -- STARS-WINDS

  17. Applying quantitative structure–activity relationship approaches to nanotoxicology: Current status and future potential

    International Nuclear Information System (INIS)

    Winkler, David A.; Mombelli, Enrico; Pietroiusti, Antonio; Tran, Lang; Worth, Andrew; Fadeel, Bengt; McCall, Maxine J.

    2013-01-01

    The potential (eco)toxicological hazard posed by engineered nanoparticles is a major scientific and societal concern since several industrial sectors (e.g. electronics, biomedicine, and cosmetics) are exploiting the innovative properties of nanostructures resulting in their large-scale production. Many consumer products contain nanomaterials and, given their complex life-cycle, it is essential to anticipate their (eco)toxicological properties in a fast and inexpensive way in order to mitigate adverse effects on human health and the environment. In this context, the application of the structure–toxicity paradigm to nanomaterials represents a promising approach. Indeed, according to this paradigm, it is possible to predict toxicological effects induced by chemicals on the basis of their structural similarity with chemicals for which toxicological endpoints have been previously measured. These structure–toxicity relationships can be quantitative or qualitative in nature and they can predict toxicological effects directly from the physicochemical properties of the entities (e.g. nanoparticles) of interest. Therefore, this approach can aid in prioritizing resources in toxicological investigations while reducing the ethical and monetary costs that are related to animal testing. The purpose of this review is to provide a summary of recent key advances in the field of QSAR modelling of nanomaterial toxicity, to identify the major gaps in research required to accelerate the use of quantitative structure–activity relationship (QSAR) methods, and to provide a roadmap for future research needed to achieve QSAR models useful for regulatory purposes

  18. Applying attachment theory to effective practice with hard-to-reach youth: the AMBIT approach.

    Science.gov (United States)

    Bevington, Dickon; Fuggle, Peter; Fonagy, Peter

    2015-01-01

    Adolescent Mentalization-Based Integrative Treatment (AMBIT) is a developing approach to working with "hard-to-reach" youth burdened with multiple co-occurring morbidities. This article reviews the core features of AMBIT, exploring applications of attachment theory to understand what makes young people "hard to reach," and provide routes toward increased security in their attachment to a worker. Using the theory of the pedagogical stance and epistemic ("pertaining to knowledge") trust, we show how it is the therapeutic worker's accurate mentalizing of the adolescent that creates conditions for new learning, including the establishment of alternative (more secure) internal working models of helping relationships. This justifies an individual keyworker model focused on maintaining a mentalizing stance toward the adolescent, but simultaneously emphasizing the critical need for such keyworkers to remain well connected to their wider team, avoiding activation of their own attachment behaviors. We consider the role of AMBIT in developing a shared team culture (shared experiences, shared language, shared meanings), toward creating systemic contexts supportive of such relationships. We describe how team training may enhance the team's ability to serve as a secure base for keyworkers, and describe an innovative approach to treatment manualization, using a wiki format as one way of supporting this process.

  19. Heat transfer modeling an inductive approach

    CERN Document Server

    Sidebotham, George

    2015-01-01

    This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering discipl...
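
The "1-node lumped model" the book leads with has the closed-form solution T(t) = T_inf + (T0 - T_inf) exp(-t/tau) with time constant tau = m*c/(h*A); a sketch with illustrative numbers:

```python
import math

# Lumped-capacitance cooling: a body at uniform temperature T relaxing
# toward ambient. All parameter values are illustrative.
def lumped_temperature(t, T0=100.0, T_inf=25.0, m=1.0, c=900.0, h=15.0, A=0.06):
    """T(t) for a 1-node lumped body: mass m (kg), specific heat c (J/kg K),
    convection coefficient h (W/m^2 K), surface area A (m^2)."""
    tau = m * c / (h * A)  # thermal time constant, seconds
    return T_inf + (T0 - T_inf) * math.exp(-t / tau)

tau = 1.0 * 900.0 / (15.0 * 0.06)        # 1000 s for these numbers
print(round(lumped_temperature(tau), 2))  # about 52.6 degrees C after one tau
```

The point of the book's approach is that this closed form is the baseline; numerical multi-node techniques are reached for only when a single lumped node is no longer adequate.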

  20. Optical Neural Network Models Applied To Logic Program Execution

    Science.gov (United States)

    Stormon, Charles D.

    1988-05-01

    Logic programming is being used extensively by Artificial Intelligence researchers to solve problems including natural language processing and expert systems. These languages, of which Prolog is the most widely used, promise to revolutionize software engineering, but much greater performance is needed. Researchers have demonstrated the applicability of neural network models to the solution of certain NP-complete problems, but these methods are not obviously applicable to the execution of logic programs. This paper outlines the use of neural networks in four aspects of the logic program execution cycle, and discusses results of a simulation of three of these. Four neural network functional units are described, called the substitution agent, the clause filter, the structure processor, and the heuristics generator, respectively. Simulation results suggest that the system described may provide several orders of magnitude improvement in execution speed for large logic programs. However, practical implementation of the proposed architecture will require the application of optical computing techniques due to the large number of neurons required, and the need for massive, adaptive connectivity.

  1. Applying revised gap analysis model in measuring hotel service quality.

    Science.gov (United States)

    Lee, Yu-Cheng; Wang, Yu-Che; Chien, Chih-Hung; Wu, Chia-Huei; Lu, Shu-Chiung; Tsai, Sang-Bing; Dong, Weiwei

    2016-01-01

    The number of tourists visiting Taiwan has grown by 10-20 % since 2010, an increase driven largely by foreign tourists, particularly after deregulation admitted tour groups, and later individual travellers, from mainland China. The purpose of this study is to propose a revised gap model to evaluate and improve service quality in the Taiwanese hotel industry. Thus, service quality could be clearly measured through gap analysis, which was more effective for offering direction in developing and improving service quality. The HOLSERV instrument was used to identify and analyze service gaps from the perceptions of internal and external customers. The sample for this study included three main categories of respondents: tourists, employees, and managers. The results show that five gaps influenced tourists' evaluations of service quality. In particular, the study revealed that Gap 1 (management perceptions vs. customer expectations) and Gap 9 (service provider perceptions of management perceptions vs. service delivery) were more critical than the others in affecting perceived service quality, making service delivery the main area of improvement. This study contributes an evaluation of the service quality of the Taiwanese hotel industry from the perspectives of customers, service providers, and managers, which is considerably valuable for hotel managers. It was the aim of this study to explore all of these together in order to better understand the possible gaps in the hotel industry in Taiwan.
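
At its core, gap analysis reduces to perception-minus-expectation scores, as in this sketch; the survey means below are hypothetical, not data from the Taiwanese study.

```python
# Hypothetical mean scores per service dimension (7-point scale).
expectations = {"tangibles": 6.2, "reliability": 6.5, "responsiveness": 6.1}
perceptions = {"tangibles": 5.8, "reliability": 5.6, "responsiveness": 6.0}

# A service-quality gap is perception minus expectation; negative = shortfall.
gaps = {k: round(perceptions[k] - expectations[k], 2) for k in expectations}
worst = min(gaps, key=gaps.get)  # most negative gap = top improvement priority
print(gaps, worst)
```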

  2. Assessing risk factors for dental caries: a statistical modeling approach.

    Science.gov (United States)

    Trottini, Mario; Bossù, Maurizio; Corridore, Denise; Ierardo, Gaetano; Luzzi, Valeria; Saccucci, Matteo; Polimeni, Antonella

    2015-01-01

    The problem of identifying potential determinants and predictors of dental caries is of key importance in caries research and it has received considerable attention in the scientific literature. From the methodological side, a broad range of statistical models is currently available to analyze dental caries indices (DMFT, dmfs, etc.). These models have been applied in several studies to investigate the impact of different risk factors on the cumulative severity of dental caries experience. However, in most of the cases (i) these studies focus on a very specific subset of risk factors; and (ii) in the statistical modeling only few candidate models are considered and model selection is at best only marginally addressed. As a result, our understanding of the robustness of the statistical inferences with respect to the choice of the model is very limited; the richness of the set of statistical models available for analysis is only marginally exploited; and inferences could be biased due to the omission of potentially important confounding variables in the model's specification. In this paper we argue that these limitations can be overcome considering a general class of candidate models and carefully exploring the model space using standard model selection criteria and measures of global fit and predictive performance of the candidate models. Strengths and limitations of the proposed approach are illustrated with a real data set. In our illustration the model space contains more than 2.6 million models, which require inferences to be adjusted for 'optimism'.
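
The model-space exploration the authors advocate can be sketched with a standard selection criterion such as AIC; the synthetic data and candidate polynomial models below are illustrative stand-ins for caries indices and risk factors.

```python
import numpy as np

# Synthetic data: the true relation is linear in a single risk factor.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, x.size)

def aic(y, y_hat, k):
    """Gaussian AIC: n * log(RSS/n) + 2k, with k fitted parameters."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Candidate models: polynomials of increasing degree (degree 0 = intercept only).
candidates = {}
for degree in (0, 1, 2, 3):
    coeffs = np.polyfit(x, y, degree)
    candidates[degree] = aic(y, np.polyval(coeffs, x), degree + 1)
print({d: round(v, 1) for d, v in candidates.items()})
```

The linear model should beat the intercept-only model by a wide AIC margin, while higher-degree models gain little fit relative to their parameter penalty.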

  3. Modeling the QSAR of ACE-Inhibitory Peptides with ANN and Its Applied Illustration

    Directory of Open Access Journals (Sweden)

    Ronghai He

    2012-01-01

    Full Text Available A quantitative structure-activity relationship (QSAR) model of angiotensin-converting enzyme- (ACE-) inhibitory peptides was built with an artificial neural network (ANN) approach based on structural and activity data of 58 dipeptides (including peptide activity, hydrophilic amino acid content, three-dimensional shape, size, and electrical parameters); the overall correlation coefficient of the predicted versus actual data points is 0.928, and the model was applied to ACE-inhibitory peptide preparation from defatted wheat germ protein (DWGP). According to the QSAR model, the C-terminal of the peptide was found to have principal importance for ACE-inhibitory activity; that is, if the C-terminal residue is a hydrophobic amino acid, the peptide's ACE-inhibitory activity will be high, and proteins which contain abundant hydrophobic amino acids are suitable for producing ACE-inhibitory peptides. According to the model, DWGP is a good protein material for producing ACE-inhibitory peptides because it contains 42.84% hydrophobic amino acids, and structural information analysis from the QSAR model showed that the proteases Alcalase and Neutrase were suitable candidates for ACE-inhibitory peptide preparation from DWGP. Considering the higher degree of hydrolysis (DH) and similar ACE-inhibitory activity of its hydrolysate compared with Neutrase, Alcalase was finally selected through experimental study.
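
A toy version of such a QSAR network, a one-hidden-layer ANN trained by gradient descent on invented descriptor data (not the 58-dipeptide set):

```python
import numpy as np

# Invented training set: 30 "peptides" x 3 descriptors (e.g. C-terminal
# hydrophobicity, size, charge), with activity dominated by descriptor 0.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (30, 3))
y = 0.8 * X[:, 0] + 0.2 * X[:, 1]

W1 = rng.normal(0, 0.5, (3, 5)); b1 = np.zeros(5)   # hidden layer (5 units)
W2 = rng.normal(0, 0.5, 5); b2 = 0.0                # linear output

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

losses, lr = [], 0.1
for _ in range(500):
    pred, h = forward(X)
    err = pred - y
    losses.append(np.mean(err ** 2))
    # backpropagated (scaled) gradients of the squared-error loss
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ gh / len(y); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
print(round(losses[0], 4), round(losses[-1], 4))  # loss before vs. after training
```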

  4. A nonlinear adaptive backstepping approach applied to a three phase PWM AC-DC converter feeding induction heating

    Science.gov (United States)

    Hadri-Hamida, A.; Allag, A.; Hammoudi, M. Y.; Mimoune, S. M.; Zerouali, S.; Ayad, M. Y.; Becherif, M.; Miliani, E.; Miraoui, A.

    2009-04-01

    This paper presents a new control strategy for a three-phase PWM converter, which consists of applying an adaptive nonlinear control. The input-output feedback linearization approach is based on exact cancellation of the nonlinearity; for this reason the technique loses effectiveness when system parameters vary. First, a nonlinear system model is derived with the input current and the output voltage as state variables, using the power balance between input and output; the nonlinear adaptive backstepping control can then compensate for the nonlinearities in the nominal system and for the uncertainties. Simulation results are obtained using Matlab/Simulink. These results show how the adaptive backstepping law updates the system parameters and provides an efficient control design, both for tracking and for regulation, in order to improve the power factor.

  5. Modeling software with finite state machines a practical approach

    CERN Document Server

    Wagner, Ferdinand; Wagner, Thomas; Wolstenholme, Peter

    2006-01-01

    Modeling Software with Finite State Machines: A Practical Approach explains how to apply finite state machines to software development. It provides a critical analysis of using finite state machines as a foundation for executable specifications to reduce software development effort and improve quality. This book discusses the design of a state machine and of a system of state machines. It also presents a detailed analysis of development issues relating to behavior modeling with design examples and design rules for using finite state machines. This volume describes a coherent and well-tested fr
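
A minimal transition-table state machine in the spirit the book develops, using the classic turnstile example (not taken from the book):

```python
# States, events, and a transition table drive all behavior.
class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # (state, event) -> next state

    def handle(self, event):
        # Unlisted (state, event) pairs leave the state unchanged.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

turnstile = StateMachine("locked", {
    ("locked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
})
print(turnstile.handle("push"))  # locked: pushing a locked turnstile does nothing
print(turnstile.handle("coin"))  # unlocked
print(turnstile.handle("push"))  # locked again
```

Because the behavior lives entirely in the table, the executable specification and the design document are the same artifact, which is the book's central argument.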

  6. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
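
The surrogate-plus-Bayesian-update workflow can be sketched end to end; the "expensive model" below is a cheap stand-in rather than an ocean model, and the drag-like parameter, noise level, and observation are all invented.

```python
import numpy as np

# Stand-in for an expensive forward model of one output vs. one parameter c.
def expensive_model(c):
    return 2.0 * c + 0.5 * c**2

# 1) Build a cheap polynomial surrogate from a handful of model runs
#    (non-intrusive regression over sampled inputs).
train_c = np.linspace(0.0, 2.0, 6)
coeffs = np.polyfit(train_c, [expensive_model(c) for c in train_c], 2)

def surrogate(c):
    return np.polyval(coeffs, c)

# 2) Bayesian update of c from one noisy observation, evaluating only the
#    surrogate on a parameter grid (flat prior, Gaussian likelihood).
c_true, sigma = 1.2, 0.1
obs = expensive_model(c_true) + 0.05
grid = np.linspace(0.0, 2.0, 401)
log_post = -0.5 * ((obs - surrogate(grid)) / sigma) ** 2
c_map = grid[np.argmax(log_post)]
print(round(c_map, 3))
```

The talk's MCMC and adjoint-based alternatives replace the grid search here, but the surrogate's role, standing in for the costly forward model during the update, is the same.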

  7. A multiscale modeling approach for biomolecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)

    2015-04-15

    This paper presents a new multiscale molecular dynamic model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton’s second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein’s movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.

  8. A simple model for fatigue crack growth in concrete applied to a hinge beam model

    DEFF Research Database (Denmark)

    Skar, Asmus; Poulsen, Peter Noe; Olesen, John Forbes

    2017-01-01

    In concrete structures, fatigue is one of the major causes of material deterioration. Repeated loads result in the formation of cracks. Propagation of these cracks causes internal progressive damage within the concrete material, which ultimately leads to failure. This paper presents a simplified general concept for non-linear analysis of concrete subjected to cyclic loading. The model is based on the fracture mechanics concepts of the fictitious crack model, considering a fiber of concrete material, and a simple energy-based approach for estimating the bridging stress under cyclic loading. Further...

  9. Quasirelativistic quark model in quasipotential approach

    CERN Document Server

    Matveev, V A; Savrin, V I; Sissakian, A N

    2002-01-01

    The interaction of relativistic particles is described within the framework of the quasipotential approach. The presentation is based on the so-called covariant simultaneous formulation of quantum field theory, whereby the theory is considered on a space-like three-dimensional hypersurface in Minkowski space. Special attention is paid to methods of constructing various quasipotentials as well as to applications of the quasipotential approach to describing the characteristics of relativistic particle interactions in quark models, namely: the elastic scattering amplitudes of hadrons, the mass spectra and widths of meson decays, and the cross sections of deep inelastic lepton scattering on hadrons

  10. Applying Distributed Constraint Optimization Approach to the User Association Problem in Heterogeneous Networks.

    Science.gov (United States)

    Duan, Peibo; Zhang, Changsheng; Mao, Guoqiang; Zhang, Bin

    2017-09-22

    User association has emerged as a distributed resource allocation problem in heterogeneous networks (HetNets). Although an approximate solution is obtainable using approaches such as combinatorial optimization and game-theoretic schemes, these techniques can easily be trapped in local optima. Furthermore, these approaches do not explore how the quality of the solution depends on the parameters of the HetNet [e.g., the number of users and base stations (BSs)], which impairs their practicability in real-world deployments. To address these issues, this paper investigates how to model the problem as a distributed constraint optimization problem (DCOP) from the point of view of the multiagent system. More specifically, we develop two models named each connection as variable (ECAV) and each BS and user as variable (EBUAV). We then propose a DCOP solver which not only sets up the model in a distributed way but also enables us to efficiently obtain the solution by means of a complete DCOP algorithm based on distributed message-passing. Both theoretical analysis and simulation show that different qualitative solutions can be obtained in terms of an introduced parameter η, which has a close relation with the parameters in the HetNet. The DCOP solver achieves a 6% throughput improvement over its counterparts when η=3. In particular, it demonstrates up to an 18% increase in the number of users that BSs can serve when the number of users is above 200 and the available resource blocks (RBs) are limited. In addition, the distribution of RBs allocated to users by BSs improves as the volume of RBs at the macro BS varies.
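    For intuition, user association can be posed as a constrained assignment problem. The toy below solves a three-user, two-BS instance by centralized brute force; the rates and capacities are invented numbers. The paper's contribution is solving such models distributively via message-passing, which this sketch does not attempt:

```python
from itertools import product

# toy utilities: rates[u][b] = achievable rate of user u on base station b (assumed)
rates = [[3.0, 1.0],
         [2.0, 2.5],
         [1.0, 3.0]]
capacity = [2, 2]   # maximum users each BS can serve (assumed)

best_assign, best_val = None, float("-inf")
for assign in product(range(2), repeat=3):       # each user picks one BS
    load = [assign.count(b) for b in range(2)]
    if any(l > c for l, c in zip(load, capacity)):
        continue                                 # capacity constraint violated
    val = sum(rates[u][assign[u]] for u in range(3))
    if val > best_val:
        best_assign, best_val = assign, val
```

    Exhaustive search is exponential in the number of users; the DCOP formulations (ECAV, EBUAV) exist precisely to decompose this search across agents.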

  11. An approach to applying quality assurance to nuclear fuel waste disposal

    International Nuclear Information System (INIS)

    Cooper, R.B.; Abel, R.

    1996-12-01

    An approach to developing and applying a quality assurance program for a nuclear fuel waste disposal facility is described. The proposed program would be based on N286-series standards used for quality assurance programs in nuclear power plants, and would cover all aspects of work across all stages of the project, from initial feasibility studies to final closure of the vault. A quality assurance manual describing the overall quality assurance program and its elements would be prepared at the outset. Planning requirements of the quality assurance program would be addressed in a comprehensive plan for the project. Like the QA manual, this plan would be prepared at the outset of the project and updated at each stage. Particular attention would be given to incorporating the observational approach in procedures for underground engineering, where the ability to adapt designs and mining techniques to changing ground conditions would be essential. Quality verification requirements would be addressed through design reviews, peer reviews, inspections and surveillance, equipment calibration and laboratory analysis checks, and testing programs. Regular audits and program reviews would help to assess the state of implementation, degree of conformance to standards, and effectiveness of the quality assurance program. Audits would be particularly useful in assessing the quality systems of contractors and suppliers, and in verifying the completion of work at the end of stages. 
Since a nuclear fuel waste disposal project would span a period of about 90 years, a key function of the quality assurance program would be to ensure the continuity of knowledge and the transfer of experience from one stage to another. This would be achieved by maintaining a records management system throughout the life of the project, by ensuring that work procedures were documented and kept current with new technologies and practices, and by instituting training programs that made use of experience gained

  12. An Effective Risk Minimization Strategy Applied to an Outdoor Music Festival: A Multi-Agency Approach.

    Science.gov (United States)

    Luther, Matt; Gardiner, Fergus; Lenson, Shane; Caldicott, David; Harris, Ryan; Sabet, Ryan; Malloy, Mark; Perkins, Jo

    2018-04-01

    Specific Event Identifiers a. Event type: Outdoor music festival. b. Event onset date: December 3, 2016. c. Location of event: Regatta Point, Commonwealth Park. d. Geographical coordinates: Canberra, Australian Capital Territory (ACT), Australia (-35.289002, 149.131957, 600m). e. Dates and times of observation in latitude, longitude, and elevation: December 3, 2016, 11:00-23:00. f. Response type: Event medical support. Introduction: Young adult patrons are vulnerable to risk-taking behavior, including drug taking, at outdoor music festivals. Therefore, the aim of this field report is to discuss the on-site medical response during a music festival, and subsequently highlight observed strategies aimed at minimizing substance abuse harm. The observed outdoor music festival was held in Canberra (Australian Capital Territory [ACT], Australia) during the early summer of 2016, with an attendance of 23,008 patrons. First aid and on-site medical treatment data were obtained from the relevant treatment area and service. The integrated first aid service provided support to 292 patients. Final analysis consisted of 286 patients' records, with 119 (41.6%) males and 167 (58.4%) females. Results indicated that drug intoxication was an observed event issue, with 15 (5.1%) treated on site and 13 emergency department (ED) presentations, primarily related to trauma or medical conditions requiring further diagnostics. This report details an important public health need, which could be met by providing a coordinated approach, including a robust on-site medical service, accepting intrinsic risk-taking behavior. This may include on-site drug-checking, providing reliable information on drug content with associated education. Luther M , Gardiner F , Lenson S , Caldicott D , Harris R , Sabet R , Malloy M , Perkins J . An effective risk minimization strategy applied to an outdoor music festival: a multi-agency approach. Prehosp Disaster Med. 2018;33(2):220-224.

  13. METHODOLOGICAL APPROACHES FOR MODELING THE RURAL SETTLEMENT DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Gorbenkova Elena Vladimirovna

    2017-10-01

    Full Text Available Subject: the paper describes the research results on validation of a rural settlement developmental model. The basic methods and approaches for solving the problem of assessment of the urban and rural settlement development efficiency are considered. Research objectives: determination of methodological approaches to modeling and creating a model for the development of rural settlements. Materials and methods: domestic and foreign experience in modeling the territorial development of urban and rural settlements and settlement structures was generalized. The motivation for using the Pentagon-model for solving similar problems was demonstrated. Based on a systematic analysis of existing development models of urban and rural settlements as well as the authors-developed method for assessing the level of agro-towns development, the systems/factors that are necessary for a rural settlement sustainable development are identified. Results: we created the rural development model which consists of five major systems that include critical factors essential for achieving a sustainable development of a settlement system: ecological system, economic system, administrative system, anthropogenic (physical system and social system (supra-structure. The methodological approaches for creating an evaluation model of rural settlements development were revealed; the basic motivating factors that provide interrelations of systems were determined; the critical factors for each subsystem were identified and substantiated. Such an approach was justified by the composition of tasks for territorial planning of the local and state administration levels. The feasibility of applying the basic Pentagon-model, which was successfully used for solving the analogous problems of sustainable development, was shown. Conclusions: the resulting model can be used for identifying and substantiating the critical factors for rural sustainable development and also become the basis of

  14. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, would cause misleading statistical inferences and analysis. Therefore, our goal is to examine the relationship of the outcome variable and the unobserved exposure variable given the observed mismeasured surrogate by applying the Bayesian formulation to the EIV model. We shall extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model which is the Poisson regression model. We shall then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.
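    The misleading inference caused by uncorrected measurement error can be demonstrated with a simulated linear EIV model and a classical moment correction. This is a frequentist illustration of the attenuation problem, not the Bayesian Poisson treatment the paper develops; all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 5000, 2.0
x_true = rng.normal(0.0, 1.0, n)          # unobserved exposure variable
w = x_true + rng.normal(0.0, 0.5, n)      # mismeasured surrogate, known error sd 0.5
y = beta * x_true + rng.normal(0.0, 0.3, n)

# the naive slope regressing y on w is attenuated toward zero
naive = np.cov(w, y)[0, 1] / np.var(w)

# the classical correction divides by the reliability ratio var(x) / var(w)
reliability = (np.var(w) - 0.5 ** 2) / np.var(w)
corrected = naive / reliability
```

    Here the naive estimate lands near β·0.8 = 1.6 rather than the true β = 2, which is exactly the bias an EIV model (Bayesian or otherwise) is designed to remove.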

  15. Applying horizontal diffusion on pressure surface to mesoscale models on terrain-following coordinates

    Science.gov (United States)

    Hann-Ming Henry Juang; Ching-Teng Lee; Yongxin Zhang; Yucheng Song; Ming-Chin Wu; Yi-Leng Chen; Kevin Kodama; Shyh-Chin Chen

    2005-01-01

    The National Centers for Environmental Prediction regional spectral model and mesoscale spectral model (NCEP RSM/MSM) use a spectral computation on perturbation. The perturbation is defined as a deviation between RSM/MSM forecast value and their outer model or analysis value on model sigma-coordinate surfaces. The horizontal diffusion used in the models applies...

  16. An interdisciplinary and experimental approach applied to an analysis of the communication of influence

    Directory of Open Access Journals (Sweden)

    Brigitte JUANALS

    2013-07-01

    Full Text Available This paper describes the added value of an interdisciplinary and experimental approach applied to an analysis of the inter-organizational communication of influence. The field analyzed is the international industrial standardization of societal security. A communicational problem has been investigated with an experimental method based on natural language processing and knowledge management tools. The purpose of the methodological framework is to clarify the way international standards are designed and the policies that are supported by these standards. Furthermore, strategies of influence of public and private stakeholders involved in the NGOs which produce these texts have also been studied. The means of inter-organizational communication between organizations (companies or governmental authorities and NGOs can be compared to the lobbying developed in the context of the construction of Europe and globalization. Understanding the prescriptive process has become a crucial issue for States, organizations and citizens. This research contributes to the critical assessment of the new industrial policies currently being developed from the point of view of their characteristics and the way they have been designed.

  17. Lactic Acid Bacteria Selection for Biopreservation as a Part of Hurdle Technology Approach Applied on Seafood

    Directory of Open Access Journals (Sweden)

    Norman Wiernasz

    2017-05-01

    Full Text Available As fishery and seafood products are fragile food commodities, their microbial and organoleptic qualities can quickly deteriorate. In this context, improving microbial quality and safety along the whole food processing chain (from catch to plate, using hurdle technology, a combination of mild preserving technologies such as biopreservation, modified atmosphere packaging, and superchilling, is of great interest. As natural flora and producers of antimicrobial metabolites, lactic acid bacteria (LAB are commonly studied for food biopreservation. Thirty-five LAB known to possess interesting antimicrobial activity were selected for their potential application as bioprotective agents as a part of hurdle technology applied to fishery products. The selection approach was based on seven criteria including antimicrobial activity, alteration potential, tolerance to chitosan coating and the superchilling process, cross inhibition, biogenic amine production (histamine, tyramine, and antibiotic resistance. Antimicrobial activity was assessed against six common spoilage bacteria in fishery products (Shewanella baltica, Photobacterium phosphoreum, Brochothrix thermosphacta, Lactobacillus sakei, Hafnia alvei, Serratia proteamaculans and one pathogenic bacterium (Listeria monocytogenes in co-culture inhibitory assays miniaturized in 96-well microtiter plates. Antimicrobial activity and spoilage evaluation, both performed in cod and salmon juice, highlighted the existence of sensory signatures and inhibition profiles, which seem to be species related. Finally, six LAB with no unusual antibiotic resistance profile nor histamine production ability were selected as bioprotective agents for further in situ inhibitory assays in cod and salmon based products, alone or in combination with other hurdles (chitosan, modified atmosphere packaging, and superchilling.

  18. An explorative chemometric approach applied to hyperspectral images for the study of illuminated manuscripts

    Science.gov (United States)

    Catelli, Emilio; Randeberg, Lise Lyngsnes; Alsberg, Bjørn Kåre; Gebremariam, Kidane Fanta; Bracci, Silvano

    2017-04-01

    Hyperspectral imaging (HSI) is a fast non-invasive imaging technology recently applied in the field of art conservation. With the help of chemometrics, important information about the spectral properties and spatial distribution of pigments can be extracted from HSI data. With the intent of expanding the applications of chemometrics to the interpretation of hyperspectral images of historical documents, and, at the same time, to study the colorants and their spatial distribution on ancient illuminated manuscripts, an explorative chemometric approach is here presented. The method makes use of chemometric tools for spectral de-noising (minimum noise fraction (MNF)) and image analysis (multivariate image analysis (MIA) and iterative key set factor analysis (IKSFA)/spectral angle mapper (SAM)) which have given an efficient separation, classification and mapping of colorants from visible-near-infrared (VNIR) hyperspectral images of an ancient illuminated fragment. The identification of colorants was achieved by extracting and interpreting the VNIR spectra as well as by using a portable X-ray fluorescence (XRF) spectrometer.
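    Of the chemometric tools named, the spectral angle mapper (SAM) is simple enough to sketch directly: each pixel spectrum is assigned to the reference endmember with the smallest spectral angle. This generic sketch does not reproduce the paper's pipeline (no MNF de-noising or IKSFA endmember extraction), and any reference spectra would come from the actual hyperspectral data:

```python
import numpy as np

def spectral_angle(spectrum, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference spectrum."""
    s = np.asarray(spectrum, dtype=float)
    r = np.asarray(reference, dtype=float)
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify(pixel, endmembers):
    """Assign the pixel to the endmember with the smallest spectral angle."""
    angles = {name: spectral_angle(pixel, ref) for name, ref in endmembers.items()}
    return min(angles, key=angles.get)
```

    Because the angle ignores vector magnitude, SAM is insensitive to illumination differences across the illuminated page, which is why it suits the mapping of colorants.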

  19. An approach to primary prevention from the aspect of applied physiology.

    Science.gov (United States)

    Frenkl, R; Pavlik, G

    1999-01-01

    The main reason for our decreasing population number--a most remarkable indicator of the inadequacy of our health culture--is the high rate of overall mortality. In its background one finds a number of highly prevalent risk factors, such as hypertension, addiction pathology, reduced stress tolerance, and physical and psychic inactivity. Positive patterns of life are scarce and as yet neither attractive nor effective. The spirit of primary prevention is still far from permeating medicine; the most the clinical side has realized is a recognition of the population's need for regular medical screenings. A completely new approach that involves prevention programs embracing the whole of society, and an elaboration of new strategies, are badly needed to achieve a desirable change in the present set of values. One of the already available remedies is to give full and science-based support to the positive life patterns in our culture, for instance by demonstrating how physiology can be applied to human life, by putting the latter within a broader scope, namely that of psychophysiology and social psychology. In this framework the elements to be discussed are such aspects of culture as dietary habits, physical exercise, and mental and sexual hygiene. Placing greater emphasis on sports and intense habitual physical exercise can promote a healthier lifestyle, above all in our youth.

  20. Applied tagmemics: A heuristic approach to the use of graphic aids in technical writing

    Science.gov (United States)

    Brownlee, P. P.; Kirtz, M. K.

    1981-01-01

    In technical report writing, two needs must be met if reports are to be usable by an audience: the language needs and the technical needs of that particular audience. A heuristic analysis helps to decide the most suitable format for information; that is, whether the information should be presented verbally or visually. The report writing process should be seen as an organic whole which can be divided and subdivided according to the writer's purpose, but which always functions as a totality. The tagmemic heuristic, because it itself follows a process of deconstructing and reconstructing information, lends itself to being a useful approach to the teaching of technical writing. By applying the abstract questions this heuristic asks to specific parts of the report, the writer can analyze the language and technical needs of the audience, examining the viability of the solution within the givens of the corporate structure and deciding which graphic or verbal format will best suit the writer's purpose. By following such a method, answers are found which are both specific and thorough in their range of application.

  1. Didactical suggestion for a Dynamic Hybrid Intelligent e-Learning Environment (DHILE) applying the PENTHA ID Model

    Science.gov (United States)

    dall'Acqua, Luisa

    2011-08-01

    The teleology of our research is to propose a solution to the request of "innovative, creative teaching", proposing a methodology to educate creative Students in a society characterized by multiple reference points and hyper dynamic knowledge, continuously subject to reviews and discussions. We apply a multi-prospective Instructional Design Model (PENTHA ID Model), defined and developed by our research group, which adopts a hybrid pedagogical approach, consisting of elements of didactical connectivism intertwined with aspects of social constructivism and enactivism. The contribution proposes an e-course structure and approach, applying the theoretical design principles of the above mentioned ID Model, describing methods, techniques, technologies and assessment criteria for the definition of lesson modes in an e-course.

  2. Study of fusion mechanism of halo nuclear 11Be+208Pb by applying QMD model

    International Nuclear Information System (INIS)

    Wang Ning; Li Zhuxia

    2001-01-01

    The authors have studied the fusion reaction for 11Be + 208Pb near the barrier by applying the QMD model, and find that in fusion reactions induced by halo nuclei two competing mechanisms exist simultaneously. On one hand, 11Be is a weakly bound nuclear system that is easily broken up by the interaction with the target as it approaches, so the fusion cross section is suppressed. On the other hand, several neutrons of 11Be transfer into 208Pb and interact with it, causing the local radius of 208Pb to increase and resulting in an enhancement of the fusion cross section. The calculated fusion cross sections show an enhancement near the barrier, and the calculated results agree reasonably well with the experimental data

  3. Residential Demand Response Behaviour Modeling applied to Cyber-physical Intrusion Detection

    DEFF Research Database (Denmark)

    Heussen, Kai; Tyge, Emil; Kosek, Anna Magdalena

    2017-01-01

    A real-time demand response system can be viewed as a cyber-physical system, with physical systems dependent on cyber infrastructure for coordination and control, which may be vulnerable to cyber-attacks. The time-domain dynamic behaviour of individual residential demand responses is governed by a mix of physical system parameters, exogenous influences, user behaviour and preferences, which can be characterized by unstructured models such as a time-varying finite impulse response. In this study, which is based on field data, it is shown how these characteristic response behaviours can be identified and how the characterization can be updated continuously. Finally, we propose an approach to apply this behaviour characterization to the identification of anomalous and potentially malicious behaviour modifications as part of a cyber-physical intrusion detection mechanism.

  4. Knowledge Creation and Conversion in Military Organizations: How the SECI Model is Applied Within Armed Forces

    Directory of Open Access Journals (Sweden)

    Andrzej Lis

    2014-01-01

    Full Text Available The aim of the paper is to analyze the knowledge creation and conversion processes in military organizations using the SECI model as a framework. First of all, knowledge creation activities in military organizations are identified and categorized. Then, knowledge socialization, externalization, combination and internalization processes are analyzed. The paper studies methods, techniques and tools applied by NATO and the U.S. Army to support the aforementioned processes. As regards the issue of knowledge socialization, counseling, coaching, mentoring and communities of practice are discussed. Lessons Learned systems and After Action Reviews illustrate the military approaches to knowledge externalization. Producing doctrines in the process of operational standardization is presented as a solution used by the military to combine knowledge in order to codify it. Finally, knowledge internalization through training and education is explored.

  5. [Methodological novelties applied to the anthropology of food: agent-based models and social networks analysis].

    Science.gov (United States)

    Díaz Córdova, Diego

    2016-01-01

    The aim of this article is to introduce two methodological strategies that have not often been utilized in the anthropology of food: agent-based models and social networks analysis. In order to illustrate these methods in action, two cases based in materials typical of the anthropology of food are presented. For the first strategy, fieldwork carried out in Quebrada de Humahuaca (province of Jujuy, Argentina) regarding meal recall was used, and for the second, elements of the concept of "domestic consumption strategies" applied by Aguirre were employed. The underlying idea is that, given that eating is recognized as a "total social fact" and, therefore, as a complex phenomenon, the methodological approach must also be characterized by complexity. The greater the number of methods utilized (with the appropriate rigor), the better able we will be to understand the dynamics of feeding in the social environment.

  6. Applying an orographic precipitation model to improve mass balance modeling of the Juneau Icefield, AK

    Science.gov (United States)

    Roth, A. C.; Hock, R.; Schuler, T.; Bieniek, P.; Aschwanden, A.

    2017-12-01

    Mass loss from glaciers in Southeast Alaska is expected to alter downstream ecological systems as runoff patterns change. To investigate these potential changes under future climate scenarios, distributed glacier mass balance modeling is required. However, the spatial resolution gap between global or regional climate models and the requirements of glacier mass balance modeling studies must be addressed first. We have used a linear theory (LT) of orographic precipitation model to downscale precipitation from both the Weather Research and Forecasting (WRF) model and ERA-Interim to the Juneau Icefield region over the period 1979-2013. This implementation of the LT model is a unique parameterization that relies on the specification of snow fall speed and rain fall speed as tuning parameters to calculate the cloud time delay, τ. We assessed the LT model results by considering winter precipitation so that the effect of melt was minimized. The downscaled precipitation pattern produced by the LT model captures the orographic precipitation pattern absent from the coarse-resolution WRF and ERA-Interim precipitation fields. Observational data constraints limited our ability to determine a unique parameter combination and calibrate the LT model to glaciological observations. We established a reference run of parameter values based on the literature and performed a sensitivity analysis of the LT model parameters, horizontal resolution, and climate input data on the average winter precipitation. The results of the reference run showed reasonable agreement with the available glaciological measurements. The precipitation pattern produced by the LT model was consistent regardless of parameter combination, horizontal resolution, and climate input data, but the precipitation amount varied strongly with these factors.
Due to the consistency of the winter precipitation pattern and the uncertainty in precipitation amount, we suggest a precipitation index map approach to be used in combination with

  7. Comparative flood damage model assessment: towards a European approach

    Science.gov (United States)

    Jongman, B.; Kreibich, H.; Apel, H.; Barredo, J. I.; Bates, P. D.; Feyen, L.; Gericke, A.; Neal, J.; Aerts, J. C. J. H.; Ward, P. J.

    2012-12-01

    There is a wide variety of flood damage models in use internationally, differing substantially in their approaches and economic estimates. Since these models are being used more and more as a basis for investment and planning decisions on an increasingly large scale, there is a need to reduce the uncertainties involved and develop a harmonised European approach, in particular with respect to the EU Flood Risks Directive. In this paper we present a qualitative and quantitative assessment of seven flood damage models, using two case studies of past flood events in Germany and the United Kingdom. The qualitative analysis shows that modelling approaches vary strongly, and that current methodologies for estimating infrastructural damage are not as well developed as methodologies for the estimation of damage to buildings. The quantitative results show that the model outcomes are very sensitive to uncertainty in both vulnerability (i.e. depth-damage functions) and exposure (i.e. asset values), whereby the first has a larger effect than the latter. We conclude that care needs to be taken when using aggregated land use data for flood risk assessment, and that it is essential to adjust asset values to the regional economic situation and property characteristics. We call for the development of a flexible but consistent European framework that applies best practice from existing models while providing room for including necessary regional adjustments.
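    The depth-damage functions on which these models rest map inundation depth to a damage fraction that scales an asset value. A minimal sketch with an invented curve follows; real models use land-use-, region-, and property-specific curves and region-adjusted asset values, as the paper stresses:

```python
import numpy as np

# illustrative depth-damage curve: flood depth (m) -> damage fraction (assumed values)
depths = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
fractions = np.array([0.0, 0.15, 0.30, 0.55, 0.85])

def building_damage(depth_m, asset_value):
    """Interpolate the damage fraction at a given depth and scale by asset value."""
    frac = np.interp(depth_m, depths, fractions)  # clamps outside the curve's range
    return frac * asset_value
```

    The paper's sensitivity result maps directly onto these two ingredients: uncertainty in `fractions` (vulnerability) was found to matter more than uncertainty in `asset_value` (exposure).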

  8. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information in historical data to estimate model errors is a frontier research topic. In this study, we investigate such a problem using the classic Lorenz (1963 equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM, including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Accordingly, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors; in fact, it realizes a combination of statistics and dynamics to a certain extent.
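    Generating "observational data" from the classic Lorenz (1963) equations, as the study does, only requires integrating the system. Below is a standard fourth-order Runge-Kutta sketch with the usual parameter values; the periodic evolutionary perturbation that the paper adds to represent reality is not included:

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) system with the classic parameters."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, steps=1000):
    """Fourth-order Runge-Kutta integration of the Lorenz system."""
    traj = [np.asarray(state, dtype=float)]
    for _ in range(steps):
        s = traj[-1]
        k1 = lorenz_rhs(s)
        k2 = lorenz_rhs(s + 0.5 * dt * k1)
        k3 = lorenz_rhs(s + 0.5 * dt * k2)
        k4 = lorenz_rhs(s + dt * k3)
        traj.append(s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(traj)

obs = integrate([1.0, 1.0, 1.0])  # synthetic "observations" from the reference model
```

    In an EM-style error-correction experiment, a trajectory like `obs` from the "accurate" model plays the role of observations against which the imperfect prediction model is compared.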

  9. Reformulation linearization technique based branch-and-reduce approach applied to regional water supply system planning

    Science.gov (United States)

    Lan, Fujun; Bayraksan, Güzin; Lansey, Kevin

    2016-03-01

    A regional water supply system design problem that determines pipe and pump design parameters and water flows over a multi-year planning horizon is considered. A non-convex nonlinear model is formulated and solved by a branch-and-reduce global optimization approach. The lower bounding problem is constructed via a three-pronged effort that involves transforming the space of certain decision variables, polyhedral outer approximations, and the Reformulation Linearization Technique (RLT). Range reduction techniques are employed systematically to speed up convergence. Computational results demonstrate the efficiency of the proposed algorithm, in particular the critical role that range reduction techniques can play in RLT-based branch-and-bound methods. Results also indicate that using reclaimed water not only saves freshwater sources but is also a cost-effective non-potable water source in arid regions. Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/0305215X.2015.1016508.
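
    The RLT linearization step can be illustrated on a single bilinear term: multiplying pairs of the bound constraints, e.g. (x − xL)(y − yL) ≥ 0, and replacing the product x·y by a new variable w yields four linear inequalities (the McCormick envelopes), which over a box are the tightest linear relaxation of w = xy. The bounds below are illustrative stand-ins (say, for a flow-head product); this shows only the linearization idea, not the paper's full branch-and-reduce algorithm.

```python
import random

def mccormick_bounds(x, y, xL, xU, yL, yU):
    """Linear under/over-estimators for the bilinear term w = x*y,
    obtained by multiplying pairs of bound constraints (RLT) and
    replacing x*y by w."""
    lower = max(xL * y + x * yL - xL * yL,   # from (x - xL)(y - yL) >= 0
                xU * y + x * yU - xU * yU)   # from (xU - x)(yU - y) >= 0
    upper = min(xU * y + x * yL - xU * yL,   # from (xU - x)(y - yL) >= 0
                xL * y + x * yU - xL * yU)   # from (x - xL)(yU - y) >= 0
    return lower, upper

# Sanity check: the envelopes sandwich the true product over the box.
random.seed(1)
xL, xU, yL, yU = 0.0, 10.0, -2.0, 5.0   # hypothetical variable bounds
for _ in range(1000):
    x, y = random.uniform(xL, xU), random.uniform(yL, yU)
    lo, hi = mccormick_bounds(x, y, xL, xU, yL, yU)
    assert lo <= x * y <= hi
```

    Range reduction tightens (xL, xU, yL, yU) during the search, which directly tightens these envelopes; that is why it matters so much in RLT-based branch-and-bound.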

  10. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    Science.gov (United States)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulated and/or experimentally measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the nonlinear element in the problem with a priori knowledge about its position.

  11. APPLYING PROFESSIONALLY ORIENTED PROBLEMS OF MATHEMATICAL MODELING IN TEACHING STUDENTS OF ENGINEERING DEPARTMENTS

    Directory of Open Access Journals (Sweden)

    Natal’ya Yur’evna Gorbunova

    2017-06-01

    Full Text Available We describe several aspects of organizing student research work, as well as solving a number of mathematical modeling problems: professionally oriented, multi-stage, etc. We underline the importance of their economic content, give samples of using such problems in teaching Mathematics at an agricultural university, and discuss questions connected with the selection of information material and the peculiarities of applying research problems. Purpose. The author aims to show the possibility and necessity of using professionally oriented problems of mathematical modeling in teaching Mathematics at an agricultural university. The subject of analysis is the inclusion of such problems in the educational process. Methodology. The main research method is the dialectical method of finding approaches to the selection, writing and use of mathematical modeling and professionally oriented problems in the educational process; the methodology is the study of these methods. Results. Analysis of the literature, students' opinions and observation of students' work, together with personal teaching experience, leads to the conclusion that using mathematical modeling problems is important: it helps to systematize theoretical knowledge, apply it in practice, and raise students' motivation to study in the engineering sphere. Practical implications. The results of the research can be of interest to teachers of Mathematics preparing Bachelor's and Master's students of engineering departments of agricultural universities, both for theoretical research and for the modernization of study courses.

  12. Power to the People! Meta-algorithmic modelling in applied data science

    NARCIS (Netherlands)

    Spruit, M.; Jagesar, R.

    2016-01-01

    This position paper first defines the research field of applied data science at the intersection of domain expertise, data mining, and engineering capabilities, with particular attention to analytical applications. We then propose a meta-algorithmic approach for applied data science with societal

  13. Interfacial Fluid Mechanics A Mathematical Modeling Approach

    CERN Document Server

    Ajaev, Vladimir S

    2012-01-01

    Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in the rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations, and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: discusses mathematical models in the context of actual applications such as electrowetting; includes unique material on fluid flow near structured surfaces and phase change phenomena; and shows readers how to solve modeling problems related to microscale multiphase flows. Interfacial Fluid Me...

  14. Applying a learning design methodology in the flipped classroom approach – empowering teachers to reflect and design for learning

    Directory of Open Access Journals (Sweden)

    Evangelia Triantafyllou

    2016-05-01

    Full Text Available One of the recent developments in teaching that relies heavily on current technology is the “flipped classroom” approach. In a flipped classroom the traditional lecture and homework sessions are inverted: students are provided with online material in order to gain the necessary knowledge before class, while class time is devoted to clarification and application of this knowledge. The hypothesis is that deep and creative discussions can take place when teacher and students physically meet. This paper discusses how the learning design methodology can be applied to represent, share and guide educators through flipped classroom designs. In order to discuss the opportunities arising from this approach, the different components of the Learning Design – Conceptual Map (LD-CM) are presented and examined in the context of the flipped classroom. It is shown that viewing the flipped classroom through the lens of learning design can promote the use of theories and methods to evaluate its effect on the achievement of learning objectives, and that it may draw attention to the employment of methods to gather learner responses. Moreover, a learning design approach can enforce the detailed description of activities, tools and resources used in specific flipped classroom models, and it can make educators more aware of the decisions that have to be taken and the people who have to be involved when designing a flipped classroom. By using the LD-CM, this paper also draws attention to the importance of the characteristics and values of different stakeholders (i.e. institutions, educators, learners, and external agents), which influence the design and success of flipped classrooms. Moreover, it looks at the teaching cycle from a flipped instruction model perspective and adjusts it to cater for the reflection loops educators are involved in when designing, implementing and re-designing a flipped classroom. Finally, it highlights the effect of learning design on the guidance

  15. Worldline approach to the Grosse-Wulkenhaar model

    Science.gov (United States)

    Viñas, Sebastián Franchino; Pisani, Pablo

    2014-11-01

    We apply the worldline formalism to the Grosse-Wulkenhaar model and obtain an expression for the one-loop effective action which provides an efficient way for computing Schwinger functions in this theory. Using this expression we obtain the quantum corrections to the effective background and the β-functions, which are known to vanish at the self-dual point. The case of degenerate noncommutativity is also considered. Our main result can be straightforwardly applied to any polynomial self-interaction of the scalar field and we consider that the worldline approach could be useful for studying effective actions of noncommutative gauge fields as well as in other non-local models or in higher-derivative field theories.

  16. Developing a parameterization approach of soil erodibility for the Rangeland Hydrology and Erosion Model (RHEM)

    Science.gov (United States)

    Soil erodibility is a key factor for estimating soil erosion using physically based models. In this study, a new parameterization approach for estimating erodibility was developed for the Rangeland Hydrology and Erosion Model (RHEM). The approach uses empirical equations that were developed by apply...

  17. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
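
    As a toy illustration of the averaging step (with made-up membership probabilities and BIC values, not the study's data or its exact weighting scheme), the two clusterings can be combined as a mixture weighted by approximate posterior model probabilities:

```python
import numpy as np

# Hypothetical posterior membership probabilities for 5 subjects x 2
# phenotype classes from two clustering models (e.g. latent class analysis
# and grade of membership); values are illustrative only.
p_lca = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.2, 0.8], [0.6, 0.4]])
p_gom = np.array([[0.7, 0.3], [0.9, 0.1], [0.4, 0.6], [0.1, 0.9], [0.4, 0.6]])

# Approximate posterior model probabilities via BIC weights:
# w_k proportional to exp(-BIC_k / 2), normalised over candidate models.
bic = np.array([812.4, 818.9])          # assumed BIC values for the two models
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()

# Model-averaged phenotype probabilities: a weighted mixture of the two.
p_avg = w[0] * p_lca + w[1] * p_gom
phenotype = p_avg.argmax(axis=1)        # averaged phenotype assignment
```

    The averaged assignments, rather than either model's alone, would then feed the linkage analysis.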

  18. Model Proposition for the Fiscal Policies Analysis Applied in Economic Field

    Directory of Open Access Journals (Sweden)

    Larisa Preda

    2007-05-01

    Full Text Available This paper presents a study of fiscal policy as applied to economic development. Correlations between macroeconomic and fiscal indicators are the first step in our analysis. The next step is the proposal of a new model for fiscal and budgetary choices. This model is applied to data from the Romanian case.

  19. Improving Hydrological Models by Applying Air Mass Boundary Identification in a Precipitation Phase Determination Scheme

    Science.gov (United States)

    Feiccabrino, James; Lundberg, Angela; Sandström, Nils

    2013-04-01

    Many hydrological models determine precipitation phase using surface weather station data. However, there are a declining number of augmented weather stations reporting manually observed precipitation phases, and a large number of automated observing systems (AOS) which do not report precipitation phase. Automated precipitation phase determination suffers from low accuracy in the precipitation phase transition zone (PPTZ), i.e. the temperature range -1 °C to 5 °C where rain, snow and mixed precipitation are possible. Therefore, it is valuable to revisit surface-based precipitation phase determination schemes (PPDS) while manual verification is still widely available. Hydrological and meteorological approaches to PPDS are vastly different. Most hydrological models apply surface meteorological data in one of two main PPDS approaches. The first is a single rain/snow threshold temperature (TRS); the second uses a formula to describe how the mixed precipitation phase changes between the threshold temperatures TS (below this temperature all precipitation is considered snow) and TR (above this temperature all precipitation is considered rain). However, both approaches ignore the effect of lower tropospheric conditions on surface precipitation phase. An alternative could be to apply a meteorological approach in a hydrological model. Many meteorological approaches rely on weather balloon data to determine initial precipitation phase, and latent heat transfer for the melting or freezing of precipitation falling through the lower troposphere. These approaches can improve hydrological PPDS, but would require additional input data. Therefore, it would be beneficial to link expected lower tropospheric conditions to AOS data already used by the model. In a single air mass, rising air can be assumed to cool at a steady rate due to a decrease in atmospheric pressure. When two air masses meet, warm air is forced to ascend over the denser cold air. This causes a thin sharp warming (frontal
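
    The two surface-based PPDS approaches mentioned above are easy to state concretely. A minimal sketch, with illustrative threshold values rather than those of any particular model:

```python
def snow_fraction(t_air, t_snow=-1.0, t_rain=3.0):
    """Dual-threshold scheme: all snow below t_snow (TS), all rain above
    t_rain (TR), and a linear mix in between. Thresholds are illustrative."""
    if t_air <= t_snow:
        return 1.0
    if t_air >= t_rain:
        return 0.0
    return (t_rain - t_air) / (t_rain - t_snow)

def snow_fraction_single(t_air, t_rs=1.0):
    """Single rain/snow threshold temperature (TRS) scheme."""
    return 1.0 if t_air <= t_rs else 0.0
```

    Linking AOS data to air-mass boundaries would then amount to adjusting these thresholds, or the initial phase aloft, based on the diagnosed frontal structure.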

  20. Modeling and numerical simulation of the dynamics of nanoparticles applied to free and confined atmospheres

    International Nuclear Information System (INIS)

    Devilliers, Marion

    2012-01-01

    It is necessary to adapt existing models in order to simulate the number concentration, and correctly account for nanoparticles, in both free and confined atmospheres. A model of particle dynamics capable of following accurately the number as well as the mass concentration of particles, with an optimal calculation time, has been developed. The dynamics of particles depends on various processes, the most important ones being condensation/evaporation, followed by nucleation, coagulation, and deposition phenomena. These processes are well known for fine and coarse particles, but some additional phenomena must be taken into account for nanoparticles, such as the Kelvin effect for condensation/evaporation and the van der Waals forces for coagulation. This work focused first on condensation/evaporation, which is the most numerically challenging process. Particles were assumed to be of spherical shape. The Kelvin effect has been taken into account as it becomes significant for particles with diameter below 50 nm. The numerical schemes are based on a sectional approach: the particle size range is discretized in sections characterized by a representative diameter. A redistribution algorithm is used after condensation/evaporation has occurred, in order to keep the representative diameter between the boundaries of the section. The redistribution can be conducted in terms of mass or number. The key point in such algorithms is to choose which quantity has to be redistributed over the fixed sections. We have developed a hybrid algorithm that redistributes the relevant quantity for each section. This new approach has been tested and shows significant improvements with respect to most existing models over a wide range of conditions. The process of coagulation for nanoparticles has also been solved with a sectional approach. Coagulation is driven by the Brownian motion of nanoparticles. This approach is shown to be more efficient if the coagulation rate is evaluated
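
    The redistribution step of the sectional approach can be sketched as a two-moment split: particles whose representative volume has drifted off the fixed grid after condensation are divided between the two bracketing sections so that both number and mass are conserved. The grid and values below are illustrative, and the paper's hybrid per-section mass-or-number choice is not reproduced.

```python
import numpy as np

# Fixed sectional grid: representative particle volumes (arbitrary units,
# assumed geometric spacing).
v_grid = np.array([1.0, 2.0, 4.0, 8.0, 16.0])

def redistribute(number, v_new, v_grid):
    """Split `number` particles of (grown) volume `v_new` between the two
    fixed sections bracketing v_new, conserving number AND mass."""
    out = np.zeros_like(v_grid)
    i = np.searchsorted(v_grid, v_new) - 1
    i = np.clip(i, 0, len(v_grid) - 2)
    v_lo, v_hi = v_grid[i], v_grid[i + 1]
    # Solve f_lo + f_hi = 1 and f_lo*v_lo + f_hi*v_hi = v_new.
    f_hi = (v_new - v_lo) / (v_hi - v_lo)
    out[i] = number * (1.0 - f_hi)
    out[i + 1] = number * f_hi
    return out

n = redistribute(1000.0, 3.0, v_grid)   # 1000 particles grown to volume 3.0
```

    Conserving two moments at once is what lets such schemes track number concentration (crucial for nanoparticles) without losing mass.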

  1. Features of applying systems approach for evaluating the reliability of cryogenic systems for special purposes

    Directory of Open Access Journals (Sweden)

    E. D. Chertov

    2016-01-01

    Full Text Available Summary. The analysis of cryogenic installations confirms an objective trend: the number of tasks solved by special-purpose systems keeps growing. One of the most important directions in the development of cryogenics is the creation of installations for producing air separation products, namely oxygen and nitrogen. Modern aviation complexes require these gases in large quantities, both in the gaseous and in the liquid state. The onboard gas systems used in aircraft of the Russian Federation are subdivided into: the oxygen system; the air (nitrogen) system; the neutral gas system; and the fire-protection system. The technological schemes of ADI are largely determined by the pressure of the compressed air or, more generally, by the refrigerating cycle. In the majority of ADI the working body of the refrigerating cycle is the separated air itself, that is, the technological and refrigerating cycles of the installation are integrated. By this principle one distinguishes installations of low pressure; of medium and high pressure; with expander; and with preliminary chilling. There is also a small number of ADI types in which the refrigerating and technological cycles are separated; these are installations with external chilling. To solve the tasks of monitoring the technical condition of the BRV hardware in real time and estimating reliability indicators, the use of multi-agent technologies is proposed. The multi-agent approach is the most suitable basis for a decision support system (SPPR) for reliability assessment, as it allows: redistributing information processing over the elements of the system, which increases overall performance; solving the problem of accumulating, storing and reusing knowledge, which significantly increases the efficiency of reliability assessment tasks; and considerably reducing human intervention in the functioning of the system, which saves the time of the decision maker (PMD) and does not require special skills in working with the system.

  2. Hybrid sequencing approach applied to human fecal metagenomic clone libraries revealed clones with potential biotechnological applications.

    Directory of Open Access Journals (Sweden)

    Mária Džunková

    Full Text Available Natural environments represent an incredible source of microbial genetic diversity. Discovery of novel biomolecules involves biotechnological methods that often require the design and implementation of biochemical assays to screen clone libraries. However, when an assay is applied to thousands of clones, one may eventually end up with very few positive clones which, in most of the cases, have to be "domesticated" for downstream characterization and application, and this makes screening both laborious and expensive. The negative clones, which are not considered by the selected assay, may also have biotechnological potential; however, unfortunately they would remain unexplored. Knowledge of the clone sequences provides important clues about potential biotechnological application of the clones in the library; however, the sequencing of clones one-by-one would be very time-consuming and expensive. In this study, we characterized the first metagenomic clone library from the feces of a healthy human volunteer, using a method based on 454 pyrosequencing coupled with a clone-by-clone Sanger end-sequencing. Instead of whole individual clone sequencing, we sequenced 358 clones in a pool. The medium-large insert (7-15 kb) cloning strategy allowed us to assemble these clones correctly, and to assign the clone ends to maintain the link between the position of a living clone in the library and the annotated contig from the 454 assembly. Finally, we found several open reading frames (ORFs) with previously described potential medical application. The proposed approach allows planning ad-hoc biochemical assays for the clones of interest, and the appropriate sub-cloning strategy for gene expression in suitable vectors/hosts.

  3. Applying an Ensemble Classification Tree Approach to the Prediction of Completion of a 12-Step Facilitation Intervention with Stimulant Abusers

    Science.gov (United States)

    Doyle, Suzanne R.; Donovan, Dennis M.

    2014-01-01

    Aims: The purpose of this study was to explore the selection of predictor variables in the evaluation of drug treatment completion using an ensemble approach with classification trees. The basic methodology is reviewed and the subagging procedure of random subsampling is applied. Methods: Among 234 individuals with stimulant use disorders randomized to a 12-Step facilitative intervention shown to increase stimulant use abstinence, 67.52% were classified as treatment completers. A total of 122 baseline variables were used to identify factors associated with completion. Findings: The number of types of self-help activity involvement prior to treatment was the predominant predictor. Other effective predictors included better coping self-efficacy for substance use in high-risk situations, more days of prior meeting attendance, greater acceptance of the Disease model, higher confidence for not resuming use following discharge, lower ASI Drug and Alcohol composite scores, negative urine screens for cocaine or marijuana, and fewer employment problems. Conclusions: The application of an ensemble subsampling regression tree method utilizes the fact that classification trees are unstable but, on average, produce an improved prediction of the completion of drug abuse treatment. The results support the notion that there are early indicators of treatment completion that may allow for modification of approaches more tailored to fitting the needs of individuals and potentially provide more successful treatment engagement and improved outcomes. PMID:25134038
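
    Subagging (subsample aggregating) replaces the bootstrap of bagging with random subsamples drawn without replacement: each tree is unstable on its own, but the majority vote is stabilized. A self-contained sketch with synthetic data standing in for the 234 subjects and their baseline predictors, and depth-1 stumps standing in for full classification trees to keep it short:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 234 subjects, 5 baseline predictors, binary completion
# outcome driven mainly by the first predictor (e.g. prior self-help activity).
X = rng.normal(size=(234, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=234) > 0).astype(int)

def fit_stump(X, y):
    """Best single-feature threshold classifier (a depth-1 tree)."""
    best_acc, best = 0.0, None
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            for pol in (1, -1):
                pred = (pol * (X[:, j] - thr) > 0).astype(int)
                acc = (pred == y).mean()
                if acc > best_acc:
                    best_acc, best = acc, (j, thr, pol)
    return best

def predict_stump(stump, X):
    j, thr, pol = stump
    return (pol * (X[:, j] - thr) > 0).astype(int)

# Subagging: fit each tree on a random half-subsample drawn WITHOUT
# replacement, then aggregate by majority vote.
stumps = []
for _ in range(50):
    idx = rng.choice(len(X), size=len(X) // 2, replace=False)
    stumps.append(fit_stump(X[idx], y[idx]))

votes = np.mean([predict_stump(s, X) for s in stumps], axis=0)
y_hat = (votes >= 0.5).astype(int)
accuracy = (y_hat == y).mean()   # in-sample, for illustration only
```

    With 122 candidate predictors, as in the study, the ensemble's vote shares also give a stability-based ranking of which baseline variables matter.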

  4. Computer Modeling of Violent Intent: A Content Analysis Approach

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

  5. Crime Modeling using Spatial Regression Approach

    Science.gov (United States)

    Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.

    2018-01-01

    Acts of criminality in Indonesia have increased in both variety and quantity every year: murder, rape, assault, vandalism, theft, fraud, fencing, and other cases that make people feel unsafe. The risk of society being exposed to crime is measured by the number of cases reported to the police; the higher the number of reports to the police, the higher the crime in the region. In this research, criminality in South Sulawesi, Indonesia is modelled with society's exposure to crime risk as the dependent variable. Modelling is done with an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables used are population density, the number of poor inhabitants, GDP per capita, unemployment and the human development index (HDI). The spatial regression analysis shows that there is no spatial dependence, in either the lag or the error terms, in South Sulawesi.
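
    The spatial-dependence diagnosis that motivates choosing between SAR and SEM is typically based on Moran's I. A minimal sketch with a made-up contiguity structure, not the South Sulawesi data:

```python
import numpy as np

def morans_i(x, W):
    """Moran's I spatial autocorrelation statistic for values x and a
    spatial weights matrix W (zeros on the diagonal)."""
    n = len(x)
    z = x - x.mean()
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Toy example: 4 regions on a line, rook-contiguity weights (assumed data).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
crime_rate = np.array([10.0, 12.0, 30.0, 31.0])   # spatially clustered values
I = morans_i(crime_rate, W)                        # positive: clustering
```

    Values near zero, as the abstract's finding of no spatial dependence implies, suggest that ordinary regression suffices and neither the SAR lag term nor the SEM error term is needed.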

  6. The ABC's of Suicide Risk Assessment: Applying a Tripartite Approach to Individual Evaluations.

    Directory of Open Access Journals (Sweden)

    Keith M Harris

    Full Text Available There is considerable need for accurate suicide risk assessment for clinical, screening, and research purposes. This study applied the tripartite affect-behavior-cognition theory, the suicidal barometer model, classical test theory, and item response theory (IRT) to develop a brief self-report measure of suicide risk that is theoretically grounded, reliable and valid. An initial survey (n = 359) employed an iterative process on an item pool, resulting in the six-item Suicidal Affect-Behavior-Cognition Scale (SABCS). Three additional studies tested the SABCS and a highly endorsed comparison measure. Studies included two online surveys (Ns = 1007 and 713) and one prospective clinical survey (n = 72; Time 2, n = 54). Factor analyses demonstrated SABCS construct validity through unidimensionality. Internal reliability was high (α = .86-.93, split-half = .90-.94). The scale was predictive of future suicidal behaviors and suicidality (r = .68 and .73, respectively), showed convergent validity, and the SABCS-4 demonstrated clinically relevant sensitivity to change. IRT analyses revealed the SABCS captured more information than the comparison measure, and better defined participants at low, moderate, and high risk. The SABCS is the first suicide risk measure to demonstrate no differential item functioning by sex, age, or ethnicity. In all comparisons, the SABCS showed incremental improvements over a highly endorsed scale through stronger predictive ability, reliability, and other properties. The SABCS is in the public domain, with this publication, and is suitable for clinical evaluations, public screening, and research.
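
    The internal-reliability figures quoted (α = .86-.93) are Cronbach's alpha, which is straightforward to compute. A sketch with simulated one-factor data, illustrative rather than actual SABCS responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated responses to a 6-item scale: one latent severity factor
# plus independent item noise (parameters are assumptions).
rng = np.random.default_rng(42)
latent = rng.normal(size=(300, 1))
scores = latent + 0.6 * rng.normal(size=(300, 6))
alpha = cronbach_alpha(scores)   # high, since items share one factor
```

    High alpha is consistent with the unidimensionality the factor analyses found; IRT-based information curves go further by showing where on the risk continuum the scale measures best.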

  7. Plenary lecture: innovative modeling approaches applicable to risk assessments.

    Science.gov (United States)

    Oscar, T P

    2011-06-01

    Proper identification of safe and unsafe food at the processing plant is important for maximizing the public health benefit of food by ensuring both its consumption and safety. Risk assessment is a holistic approach to food safety that consists of four steps: 1) hazard identification; 2) exposure assessment; 3) hazard characterization; and 4) risk characterization. Risk assessments are modeled by mapping the risk pathway as a series of unit operations and associated pathogen events and then using probability distributions and a random sampling method to simulate the rare, random, variable and uncertain nature of pathogen events in the risk pathway. To model pathogen events, a rare event modeling approach is used that links a discrete distribution for incidence of the pathogen event with a continuous distribution for extent of the pathogen event. When applied to risk assessment, rare event modeling leads to the conclusion that the most highly contaminated food at the processing plant does not necessarily pose the highest risk to public health because of differences in post-processing risk factors among distribution channels and consumer populations. Predictive microbiology models for individual pathogen events can be integrated with risk assessment models using the rare event modeling method. Published by Elsevier Ltd.
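
    The rare-event linkage described, a discrete distribution for incidence of a pathogen event coupled with a continuous distribution for its extent, can be sketched with Monte Carlo sampling. All parameters (prevalence, contamination levels, the dose-response constant) are illustrative assumptions, not values from any assessment:

```python
import numpy as np

rng = np.random.default_rng(7)

n_servings = 100_000
p_contaminated = 0.01                      # incidence: assumed 1% of servings
incidence = rng.random(n_servings) < p_contaminated

# Extent (log CFU per serving), drawn only for contaminated servings.
extent = np.zeros(n_servings)
extent[incidence] = rng.normal(loc=1.0, scale=0.8, size=incidence.sum())

# Simple exponential dose-response stand-in: illness probability grows
# with the (linear-scale) dose.
p_ill = np.where(incidence, 1.0 - np.exp(-(10.0 ** extent) * 1e-4), 0.0)
expected_illnesses = p_ill.sum()
```

    Because risk is concentrated in the rare contaminated tail, post-processing factors (distribution channel, consumer handling) can shift it more than the plant-level contamination level alone, which is the abstract's central point.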

  8. Safety constraints applied to an adaptive Bayesian condition-based maintenance optimization model

    International Nuclear Information System (INIS)

    Flage, Roger; Coit, David W.; Luxhøj, James T.; Aven, Terje

    2012-01-01

    A model is described that determines an optimal inspection and maintenance scheme for a deteriorating unit with a stochastic degradation process with independent and stationary increments and for which the parameters are uncertain. This model and the resulting maintenance plans offer some distinct benefits compared to prior research because the uncertainty of the degradation process is accommodated by a Bayesian approach and two new safety constraints have been applied to the problem: (1) with a given subjective probability (degree of belief), the limiting relative frequency of one or more failures during a fixed time interval is bounded; or (2) the subjective probability of one or more failures during a fixed time interval is bounded. In the model, the parameter(s) of a condition-based inspection scheduling function and a preventive replacement threshold are jointly optimized upon each replacement and inspection so as to minimize the expected long-run cost per unit of time, while also considering one of the specified safety constraints. A numerical example is included to illustrate the effect of imposing each of the two different safety constraints.
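
    Stripped of the Bayesian parameter updating and the optimization, the core of such a model is easy to simulate: a gamma-process degradation path (independent, stationary increments), periodic condition inspections with a preventive threshold, and a Monte Carlo estimate of the failure frequency that enters the safety constraint. Every number below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

shape_per_step, scale = 0.5, 1.0   # gamma increment parameters per time step
failure_level = 30.0               # unit fails when degradation exceeds this
preventive_level = 22.0            # replace preventively at inspection
inspect_every = 5                  # fixed inspection interval (time steps)

def simulate(horizon=200):
    """Return (n_failures, n_preventive_replacements) over one history."""
    x, failures, preventive = 0.0, 0, 0
    for t in range(1, horizon + 1):
        x += rng.gamma(shape_per_step, scale)
        if x >= failure_level:                # failure between inspections
            failures += 1
            x = 0.0
        elif t % inspect_every == 0 and x >= preventive_level:
            preventive += 1                   # condition-based replacement
            x = 0.0
    return failures, preventive

# Monte Carlo estimate of the failure frequency used in a safety
# constraint, e.g. bounding expected failures per horizon.
runs = [simulate() for _ in range(200)]
mean_failures = np.mean([r[0] for r in runs])
```

    In the full model, the gamma parameters would be updated in a Bayesian way after each inspection, and the inspection schedule and preventive threshold re-optimized subject to the chosen constraint.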

  9. Applying Data Envelopment Analysis and Grey Model for the Productivity Evaluation of Vietnamese Agroforestry Industry

    Directory of Open Access Journals (Sweden)

    Chia-Nan Wang

    2016-11-01

    Full Text Available Agriculture and forestry play important roles in Vietnam, particularly as they contribute to the creation of food, conservation of forest resources, and improvement of soil fertility. Therefore, understanding the performances of relevant enterprises in this field contributes to the sustainable development of this country’s agroforestry industry. This research proposes a hybrid model, which includes a grey model (GM) and a Malmquist productivity index (MPI), to assess the performances of Vietnamese agroforestry enterprises over several time periods. After collecting the data of selected input and output variables for 10 Vietnamese agroforestry enterprises in the period of 2011–2014, GM is used to forecast the future values of these input and output variables for the 10 agroforestry enterprises in 2015 and 2016. Following the results of GM, the MPI is used to measure the performance of these enterprises. The MPI scores showed that some enterprises will become more efficient, while others will become less efficient. The proposed model gives past–present–future insights to help decision-makers sustain agroforestry development in Vietnam. This hybrid approach can be applied to performance analysis of other industries as well.
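
    The grey-forecasting half of the hybrid can be sketched with the classic GM(1,1) recipe: accumulate the series (AGO), estimate the development coefficient a and grey input b by least squares, and invert the time-response function. The series below is made up for illustration; the study applies this per input/output variable per enterprise:

```python
import numpy as np

def gm11_forecast(x, n_ahead=2):
    """GM(1,1) grey forecasting: fit on series x, predict n_ahead values."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                        # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])             # background (mean) values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    def x1_hat(k):                           # time-response function, k >= 0
        return (x[0] - b / a) * np.exp(-a * k) + b / a
    ks = np.arange(len(x), len(x) + n_ahead)
    # Inverse AGO: differences of consecutive accumulated predictions.
    return np.array([x1_hat(k) - x1_hat(k - 1) for k in ks])

# Illustrative annual series for one variable (e.g. an enterprise's assets,
# 2011-2014); values are made up, not the study's data.
series = [120.0, 132.0, 145.0, 160.0]
forecast = gm11_forecast(series, n_ahead=2)   # estimates for 2015, 2016
```

    The forecast inputs and outputs for all enterprises then feed the DEA-based Malmquist index, which decomposes productivity change into efficiency and technology shifts.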

  10. A satellite simulator for TRMM PR applied to climate model simulations

    Science.gov (United States)

    Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.

    2017-12-01

    Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR in order to evaluate simulations with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) performed within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the application of the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for the application of satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions being chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties, a comprehensive comparison with sub-grid scale convective precipitation variability deduced from TRMM PR observations is carried out.

  11. Regional LLRW [low-level radioactive waste] processing alternatives applying the DOE REGINALT systems analysis model

    International Nuclear Information System (INIS)

    Beers, G.H.

    1987-01-01

    The DOE Low-Level Waste Management Program has developed a computer-based decision support system of models that may be used by nonprogrammers to evaluate a comprehensive approach to commercial low-level radioactive waste (LLRW) management. REGINALT (Regional Waste Management Alternatives Analysis Model) implementation will be described as the model is applied to a hypothetical regional compact for the purpose of examining the technical and economic potential of two waste processing alternatives. Using waste from a typical regional compact, two specific regional waste processing centers will be compared for feasibility. Example 1 will assume that a regional supercompaction facility is being developed for the region. Example 2 will assume that a regional facility with both supercompaction and incineration is specified. Both examples will include identical disposal facilities, except that capacity may differ due to variation in volume reduction achieved. The two examples will be compared with regard to volume reduction achieved, estimated occupational exposure for the processing facilities, and life cycle costs per unit of waste generated. A base case will also illustrate current disposal practices. The results of the comparisons will be evaluated, and other steps, if necessary, for additional decision support will be identified

  12. Methodology for Applying Cyber Security Risk Evaluation from BN Model to PSA Model

    International Nuclear Information System (INIS)

    Shin, Jin Soo; Heo, Gyun Young; Kang, Hyun Gook; Son, Han Seong

    2014-01-01

    There are several advantages to using digital equipment, such as cost, convenience, and availability. The replacement of analog I and C equipment with digital systems is inevitable, and nuclear facilities have already started applying digital systems to I and C. However, nuclear facilities have to change their I and C systems even though adopting digital equipment is difficult because of the high level of safety required, irradiation embrittlement, and cyber security. Cyber security, one of the important concerns in using digital equipment, can affect the whole integrity of nuclear facilities. For instance, cyber-attacks such as the SQL Slammer worm, Stuxnet, Duqu, and Flame have occurred at nuclear facilities. The regulatory authorities have published many regulatory requirement documents, such as U.S. NRC Regulatory Guides 5.71 and 1.152, IAEA guide NSS-17, IEEE standards, and the KINS Regulatory Guide. One of the important problems of cyber security research for nuclear facilities is the difficulty of obtaining data through penetration experiments. Therefore, we build a cyber security risk evaluation model with a Bayesian network (BN) for the nuclear reactor protection system (RPS), one of the safety-critical systems that trips the reactor when an accident happens at the facility. BN can be used to overcome these problems. We propose a method to apply the BN cyber security model to the probabilistic safety assessment (PSA) model, which has been used for safety assessment of the systems, structures, and components of a facility. The proposed method will be able to provide insight into safety as well as cyber risk to the facility
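The BN computation underlying such a model is chain-rule factorization plus marginalization over unobserved nodes. A toy version, with entirely hypothetical conditional probabilities (not values from the study):

```python
def p_trip_failure(p_attack, p_defeat_given_attack, p_fail_given_defeat, p_fail_nominal):
    """P(F) = sum over D of P(F | D) P(D), where D = 'defenses defeated'.
    Assumes P(D | no attack) = 0 for brevity; all CPT numbers are placeholders."""
    p_d = p_attack * p_defeat_given_attack                    # marginal P(D)
    return p_d * p_fail_given_defeat + (1.0 - p_d) * p_fail_nominal

# Hypothetical entries: 10% attack chance, 50% defeat rate given attack,
# 20% trip failure if defeated, 0.1% nominal trip failure
print(p_trip_failure(0.1, 0.5, 0.2, 0.001))
```

A PSA fault tree can then consume the resulting failure probability as a basic-event value, which is the kind of linkage the proposed method formalizes.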

  13. RISCOM Applied to the Belgian Partnership Model: More and Deeper Levels

    Energy Technology Data Exchange (ETDEWEB)

    Bombaerts, Gunter; Bovy, Michel; Laes, Erik [SCKCEN, Mol (Belgium). PISA

    2006-09-15

    Technology participation is not a new concept. It has been applied in different settings in different countries. In this article, we report a comparative analysis of the RISCOM model in Sweden and the Belgian partnership model for low- and intermediate-level short-lived nuclear waste. After a brief description of the partnerships and the RISCOM model, we apply the latter to the former and arrive at recommendations for the partnership model. The strength of the partnership approach is at the community level. In one of the villages, up to one percent of the population was motivated to discuss the nuts and bolts of the repository concept at least once a month for four years. The stress on the community level and the lack of a guardian, however, also entail weaknesses. First of all, if communities come into competition, the inter-community discussions can start resembling local politics and can become less transparent. Local actors are concerned actors, but actors at the national level are concerned as well. The local decisions influence how the waste will be transported. The local decisions also determine an extra cost of electricity. We therefore recommend a broad (in terms of territory) public debate on the participation experiments preceding and concluding the local participation process, in which this local process maintains an important position. The conclusions of our comparative analysis are: (1) The guardian of the process at the national level is missing. Since the Belgian nuclear regulator plays a controlling role after the process, we recommend a technology assessment institute at the federal level. (2) We state that stretching in the partnership model can happen more profoundly and recommend a 'counter institute' at the European level. The role of non-participative actors should be valued. (3) Recursion levels can be taken as a point of departure for discussion about the problem framing. If people accept them, there is no problem. If people clearly mention issues

  14. RISCOM Applied to the Belgian Partnership Model: More and Deeper Levels

    International Nuclear Information System (INIS)

    Bombaerts, Gunter; Bovy, Michel; Laes, Erik

    2006-01-01

    Technology participation is not a new concept. It has been applied in different settings in different countries. In this article, we report a comparative analysis of the RISCOM model in Sweden and the Belgian partnership model for low- and intermediate-level short-lived nuclear waste. After a brief description of the partnerships and the RISCOM model, we apply the latter to the former and arrive at recommendations for the partnership model. The strength of the partnership approach is at the community level. In one of the villages, up to one percent of the population was motivated to discuss the nuts and bolts of the repository concept at least once a month for four years. The stress on the community level and the lack of a guardian, however, also entail weaknesses. First of all, if communities come into competition, the inter-community discussions can start resembling local politics and can become less transparent. Local actors are concerned actors, but actors at the national level are concerned as well. The local decisions influence how the waste will be transported. The local decisions also determine an extra cost of electricity. We therefore recommend a broad (in terms of territory) public debate on the participation experiments preceding and concluding the local participation process, in which this local process maintains an important position. The conclusions of our comparative analysis are: (1) The guardian of the process at the national level is missing. Since the Belgian nuclear regulator plays a controlling role after the process, we recommend a technology assessment institute at the federal level. (2) We state that stretching in the partnership model can happen more profoundly and recommend a 'counter institute' at the European level. The role of non-participative actors should be valued. (3) Recursion levels can be taken as a point of departure for discussion about the problem framing. If people accept them, there is no problem. If people clearly mention issues that are

  15. Setting conservation management thresholds using a novel participatory modeling approach.

    Science.gov (United States)

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future.
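A weighted additive aggregation of the kind described fits in a few lines; the objectives, weights, and consequence estimates below are invented for illustration, not the workshop's elicited values:

```python
def decision_score(consequences, weights):
    """Weighted additive model: consequence estimates normalized to [0, 1]
    are combined across objectives using importance weights."""
    total = sum(weights.values())
    return sum(weights[k] * consequences[k] for k in weights) / total

# Hypothetical alternatives for the trampling threat
alternatives = {
    "no intervention":  {"ecological": 0.2, "cost": 1.0},
    "seasonal closure": {"ecological": 0.9, "cost": 0.3},
}
weights = {"ecological": 0.7, "cost": 0.3}
ranked = sorted(alternatives, key=lambda a: decision_score(alternatives[a], weights),
                reverse=True)
print(ranked)
```

In the participatory setting, spread in participants' estimates propagates into a spread of decision scores, which is how the model outputs express uncertainty.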

  16. Quantum correlated cluster mean-field theory applied to the transverse Ising model.

    Science.gov (United States)

    Zimmer, F M; Schmidt, M; Maziero, Jonas

    2016-06-01

    Mean-field theory (MFT) is one of the main available tools for analytical calculations entailed in investigations regarding many-body systems. Recently, there has been a surge of interest in ameliorating this kind of method, mainly with the aim of incorporating geometric and correlation properties of these systems. The correlated cluster MFT (CCMFT) is an improvement that succeeded quite well in doing that for classical spin systems. Nevertheless, even the CCMFT presents some deficiencies when applied to quantum systems. In this article, we address this issue by proposing the quantum CCMFT (QCCMFT), which, in contrast to its former approach, uses general quantum states in its self-consistent mean-field equations. We apply the introduced QCCMFT to the transverse Ising model in honeycomb, square, and simple cubic lattices and obtain fairly good results both for the Curie temperature of thermal phase transition and for the critical field of quantum phase transition. Actually, our results match those obtained via exact solutions, series expansions or Monte Carlo simulations.
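For the simplest single-site version of such a mean-field treatment at zero temperature, the self-consistency condition can be iterated directly. The sketch below uses plain uncorrelated MFT, not the QCCMFT clusters of the paper:

```python
import math

def mft_magnetization(J, z, gamma, tol=1e-12, max_iter=100000):
    """Solve m = J*z*m / sqrt((J*z*m)^2 + gamma^2) by fixed-point iteration.
    A nonzero m exists below the mean-field critical field gamma_c = J*z."""
    m = 1.0
    for _ in range(max_iter):
        h = J * z * m                      # longitudinal part of the effective field
        m_new = h / math.hypot(h, gamma)   # spin aligns with the total effective field
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# Square lattice (z = 4), J = 1: ordered below gamma_c = 4, disordered above
print(mft_magnetization(1.0, 4, 2.0))  # analytic value sqrt(1 - (2/4)^2) ~ 0.866
```

The cluster approaches the abstract discusses improve on exactly this kind of single-site estimate by treating short-range correlations inside the cluster exactly.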

  17. Applying an Archetype-Based Approach to Electroencephalography/Event-Related Potential Experiments in the EEGBase Resource.

    Science.gov (United States)

    Papež, Václav; Mouček, Roman

    2017-01-01

    The purpose of this study is to investigate the feasibility of applying openEHR (an archetype-based approach for electronic health records representation) to modeling data stored in EEGBase, a portal for experimental electroencephalography/event-related potential (EEG/ERP) data management. The study evaluates re-usage of existing openEHR archetypes and proposes a set of new archetypes together with the openEHR templates covering the domain. The main goals of the study are to (i) link existing EEGBase data/metadata and openEHR archetype structures and (ii) propose a new openEHR archetype set describing the EEG/ERP domain since this set of archetypes currently does not exist in public repositories. The main methodology is based on the determination of the concepts obtained from EEGBase experimental data and metadata that are expressible structurally by the openEHR reference model and semantically by openEHR archetypes. In addition, templates as the third openEHR resource allow us to define constraints over archetypes. Clinical Knowledge Manager (CKM), a public openEHR archetype repository, was searched for the archetypes matching the determined concepts. According to the search results, the archetypes already existing in CKM were applied and the archetypes not existing in the CKM were newly developed. openEHR archetypes support linkage to external terminologies. To increase semantic interoperability of the new archetypes, binding with the existing odML electrophysiological terminology was assured. Further, to increase structural interoperability, also other current solutions besides EEGBase were considered during the development phase. Finally, a set of templates using the selected archetypes was created to meet EEGBase requirements. A set of eleven archetypes that encompassed the domain of experimental EEG/ERP measurements were identified. Of these, six were reused without changes, one was extended, and four were newly created. All archetypes were arranged in the

  18. Applying the competence-based approach to management in the aerospace industry

    OpenAIRE

    Arpentieva Mariam; Duvalina Olga; Braitseva Svetlana; Gorelova Irina; Rozhnova Anna

    2018-01-01

    Problems of management in aerospace manufacturing are similar to those we observe in other sectors, the chief among them being the flattening of strategic management. The main reason lies in the attitude towards the human resources of the organization. The aerospace industry employs 250 thousand people, who need an individual approach. Such an individual approach can be offered by the competence-based approach to management. The purpose of the study is proof of the benefits of the competency approach to human resource ...

  19. A Modeling Approach for Marine Observatory

    Directory of Open Access Journals (Sweden)

    Charbel Geryes Aoun

    2015-02-01

    Full Text Available The infrastructure of a Marine Observatory (MO) is an UnderWater Sensor Network (UW-SN) that performs collaborative monitoring tasks over a given area. This observation should take into consideration the environmental constraints, since it may require specific tools, materials and devices (cables, servers, etc.). The logical and physical components that are used in these observatories provide data exchanged between the various devices of the environment (Smart Sensor, Data Fusion). These components provide new functionalities or services due to the long running period of the network. In this paper, we present our approach to extending modeling languages to include new domain-specific concepts and constraints. Thus, we propose a meta-model that is used to generate a new design tool (ArchiMO). We illustrate our proposal with an example from the MO domain on object localization with several acoustic sensors. Additionally, we generate the corresponding simulation code for a standard network simulator using our self-developed domain-specific model compiler. Our approach helps to reduce the complexity and time of the design activity of a Marine Observatory. It provides a way to share the different viewpoints of the designers in the MO domain and obtain simulation results to estimate the network capabilities.

  20. Practical Findings from Applying the PSD Model for Evaluating Software Design Specifications

    Science.gov (United States)

    Räisänen, Teppo; Lehto, Tuomas; Oinas-Kukkonen, Harri

    This paper presents practical findings from applying the PSD model to evaluating the support for persuasive features in software design specifications for a mobile Internet device. On the one hand, our experiences suggest that the PSD model fits relatively well for evaluating design specifications. On the other hand, the model would benefit from more specific heuristics for evaluating each technique to avoid unnecessary subjectivity. Better distinction between the design principles in the social support category would also make the model easier to use. Practitioners who have no theoretical background can apply the PSD model to increase the persuasiveness of the systems they design. The greatest benefit of the PSD model for researchers designing new systems may be achieved when it is applied together with a sound theory, such as the Elaboration Likelihood Model. Using the ELM together with the PSD model, one may increase the chances for attitude change.

  1. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely more are applied in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions, on model outcomes and thus ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  2. An applied artificial intelligence approach towards assessing building performance simulation tools

    Energy Technology Data Exchange (ETDEWEB)

    Yezioro, Abraham [Faculty of Architecture and Town Planning, Technion IIT (Israel); Dong, Bing [Center for Building Performance and Diagnostics, School of Architecture, Carnegie Mellon University (United States); Leite, Fernanda [Department of Civil and Environmental Engineering, Carnegie Mellon University (United States)

    2008-07-01

    With the development of modern computer technology, a large number of building energy simulation tools are available in the market. When choosing which simulation tool to use in a project, the user must consider the tool's accuracy and reliability, considering the building information they have at hand, which will serve as input for the tool. This paper presents an approach for comparing building performance simulation results with actual measurements, using artificial neural networks (ANN) to predict building energy performance. Training and testing of the ANN were carried out with energy consumption data acquired over 1 week in a case-study building called the Solar House. The predicted results show a good fit with the mathematical model, with a mean absolute error of 0.9%. Moreover, four building simulation tools were selected in this study in order to compare their results with the ANN-predicted energy consumption: Energy-10, Green Building Studio web tool, eQuest and EnergyPlus. The results showed that the more detailed simulation tools have the best simulation performance in terms of heating and cooling electricity consumption, within 3% of mean absolute error. (author)

  3. Supervised Learning and Knowledge-Based Approaches Applied to Biomedical Word Sense Disambiguation.

    Science.gov (United States)

    Antunes, Rui; Matos, Sérgio

    2017-12-13

    Word sense disambiguation (WSD) is an important step in biomedical text mining, which is responsible for assigning an unequivocal concept to an ambiguous term, improving the accuracy of biomedical information extraction systems. In this work we followed supervised and knowledge-based disambiguation approaches, with the best results obtained by supervised means. In the supervised method we used bag-of-words as local features and word embeddings as global features. In the knowledge-based method we combined word embeddings, concept textual definitions extracted from the UMLS database, and concept association values calculated from the MeSH co-occurrence counts from MEDLINE articles. Also, in the knowledge-based method, we tested different word embedding averaging functions to calculate the surrounding context vectors, with the goal of giving more importance to the words closest to the ambiguous term. The MSH WSD dataset, the most common dataset used for evaluating biomedical concept disambiguation, was used to evaluate our methods. We obtained a top accuracy of 95.6% by supervised means, while the best knowledge-based accuracy was 87.4%. Our results show that word embedding models improved the disambiguation accuracy, proving to be a powerful resource in the WSD task.
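One common realization of the distance-weighted averaging mentioned above is exponential decay with token distance; the decay factor and the tiny embedding table here are hypothetical, not the paper's configuration:

```python
import numpy as np

def context_vector(tokens, target_idx, embeddings, decay=0.5):
    """Weighted average of surrounding word embeddings: a token at distance d
    from the ambiguous term gets weight decay**d, so closer words count more."""
    vecs, weights = [], []
    for i, tok in enumerate(tokens):
        if i == target_idx or tok not in embeddings:
            continue  # skip the ambiguous term itself and out-of-vocabulary tokens
        vecs.append(np.asarray(embeddings[tok], dtype=float))
        weights.append(decay ** abs(i - target_idx))
    return np.average(vecs, axis=0, weights=weights) if vecs else None

# Toy 2-d embeddings; "cold" is the ambiguous target term
emb = {"patient": [1.0, 0.0], "has": [0.0, 1.0], "symptoms": [1.0, 1.0]}
print(context_vector(["patient", "has", "cold", "symptoms"], 2, emb))
```

Disambiguation then reduces to comparing this context vector against a vector for each candidate sense (e.g. by cosine similarity).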

  4. Probabilistic approaches applied to damage and embrittlement of structural materials in nuclear power plants

    International Nuclear Information System (INIS)

    Vincent, L.

    2012-01-01

    The present study deals with the long-term mechanical behaviour and damage of structural materials in nuclear power plants. An experimental approach is first followed to study the thermal fatigue of austenitic stainless steels, with a focus on the effects of mean stress and bi-axiality. Furthermore, the measurement of displacement fields by Digital Image Correlation techniques has been successfully used to detect early crack initiation during high cycle fatigue tests. A probabilistic model based on the shielding zones surrounding existing cracks is proposed to describe the development of crack networks. A more numerical approach is then followed to study the embrittlement consequences of the irradiation hardening of the bainitic steel that constitutes nuclear pressure vessels. A crystalline plasticity law, developed in agreement with lower-scale results (Dislocation Dynamics), is introduced in a Finite Element code in order to run simulations on aggregates and obtain the distributions of the maximum principal stress inside a Representative Volume Element. These distributions are then used to improve the classical Local Approach to Fracture, which estimates the probability for a microstructural defect to be loaded up to a critical level. (author) [fr

  5. A comparison of economic evaluation models as applied to geothermal energy technology

    Science.gov (United States)

    Ziman, G. M.; Rosenberg, L. S.

    1983-01-01

    Several cost estimation and financial cash flow models have been applied to a series of geothermal case studies. In order to draw conclusions about relative performance and applicability of these models to geothermal projects, the consistency of results was assessed. The model outputs of principal interest in this study were net present value, internal rate of return, or levelized breakeven price. The models used were VENVAL, a venture analysis model; the Geothermal Probabilistic Cost Model (GPC Model); the Alternative Power Systems Economic Analysis Model (APSEAM); the Geothermal Loan Guarantee Cash Flow Model (GCFM); and the GEOCOST and GEOCITY geothermal models. The case studies to which the models were applied include a geothermal reservoir at Heber, CA; a geothermal electric power plant to be located at the Heber site; an alcohol fuels production facility to be built at Raft River, ID; and a direct-use, district heating system in Susanville, CA.

  6. A multiscale approach for modeling atherosclerosis progression.

    Science.gov (United States)

    Exarchos, Konstantinos P; Carpegianni, Clara; Rigas, Georgios; Exarchos, Themis P; Vozzi, Federico; Sakellarios, Antonis; Marraccini, Paolo; Naka, Katerina; Michalis, Lambros; Parodi, Oberdan; Fotiadis, Dimitrios I

    2015-03-01

    Progression of atherosclerotic process constitutes a serious and quite common condition due to accumulation of fatty materials in the arterial wall, consequently posing serious cardiovascular complications. In this paper, we assemble and analyze a multitude of heterogeneous data in order to model the progression of atherosclerosis (ATS) in coronary vessels. The patient's medical record, biochemical analytes, monocyte information, adhesion molecules, and therapy-related data comprise the input for the subsequent analysis. As indicator of coronary lesion progression, two consecutive coronary computed tomography angiographies have been evaluated in the same patient. To this end, a set of 39 patients is studied using a twofold approach, namely, baseline analysis and temporal analysis. The former approach employs baseline information in order to predict the future state of the patient (in terms of progression of ATS). The latter is based on an approach encompassing dynamic Bayesian networks whereby snapshots of the patient's status over the follow-up are analyzed in order to model the evolvement of ATS, taking into account the temporal dimension of the disease. The quantitative assessment of our work has resulted in 93.3% accuracy for the case of baseline analysis, and 83% overall accuracy for the temporal analysis, in terms of modeling and predicting the evolvement of ATS. It should be noted that the application of the SMOTE algorithm for handling class imbalance and the subsequent evaluation procedure might have introduced an overestimation of the performance metrics, due to the employment of synthesized instances. The most prominent features found to play a substantial role in the progression of the disease are: diabetes, cholesterol and cholesterol/HDL. Among novel markers, the CD11b marker of leukocyte integrin complex is associated with coronary plaque progression.

  7. A coordination chemistry approach for modeling trace element adsorption

    International Nuclear Information System (INIS)

    Bourg, A.C.M.

    1986-01-01

    The traditional distribution coefficient, Kd, is highly dependent on the water chemistry and the surface properties of the geological system being studied and is therefore quite inappropriate for use in predictive models. Adsorption, one of the many processes included in Kd values, is described here using a coordination chemistry approach. The concept of adsorption of cationic trace elements by solid hydrous oxides can be applied to natural solids. The adsorption process is thus understood in terms of a classical complexation leading to the formation of surface (heterogeneous) ligands. Applications of this concept to some freshwater, estuarine and marine environments are discussed. (author)
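The contrast between a fixed Kd and a surface-complexation view can be made concrete with a one-reaction mass-action sketch; the stoichiometry, constant, and site concentration below are all assumed for illustration:

```python
def kd_surface_complexation(log_k, site_mol_per_l, pH):
    """For the assumed reaction  =SOH + M2+ <-> =SOM+ + H+  with constant K,
    Kd = sorbed/dissolved = K * [=SOH] / [H+]. The 'constant' Kd therefore
    rises tenfold per pH unit under these assumptions, which is why a single
    measured Kd transfers poorly between waters of different chemistry."""
    return (10.0 ** log_k) * site_mol_per_l / (10.0 ** -pH)

# Hypothetical parameters: same solid and constant, two pH values
print(kd_surface_complexation(-2.0, 1e-3, 6.0))
print(kd_surface_complexation(-2.0, 1e-3, 7.0))
```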

  8. Stability of Rotor Systems: A Complex Modelling Approach

    DEFF Research Database (Denmark)

    Kliem, Wolfhard; Pommer, Christian; Stoustrup, Jakob

    1996-01-01

    A large class of rotor systems can be modelled by a complex matrix differential equation of second order. The angular velocity of the rotor plays the role of a parameter. We apply the Lyapunov matrix equation in a complex setting and prove two new stability results, which are compared with the results of the classical approach using Rayleigh quotients. Several rotor systems are tested: a simple Laval rotor, a Laval rotor with additional elasticity and damping in the bearings, and a number of rotor systems with complex symmetric 4x4 randomly generated matrices.
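A basic numerical counterpart to such stability results is to test the second-order system M z'' + D z' + K z = 0 via the eigenvalues of its first-order companion matrix (a generic check, not the Lyapunov-equation method of the paper):

```python
import numpy as np

def is_asymptotically_stable(M, D, K):
    """All eigenvalues of the companion matrix of M z'' + D z' + K z = 0
    must have negative real part (M, D, K may be complex matrices)."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ D]])
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# A damped oscillator is stable; flipping the sign of the damping destabilizes it
print(is_asymptotically_stable(np.eye(2), 0.1 * np.eye(2), np.eye(2)))
print(is_asymptotically_stable(np.eye(2), -0.1 * np.eye(2), np.eye(2)))
```

The analytical criteria in the paper avoid exactly this eigenvalue computation, which is useful when the angular velocity enters the matrices as a free parameter.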

  9. Modelling and simulating retail management practices: a first approach

    OpenAIRE

    Siebers, Peer-Olaf; Aickelin, Uwe; Celia, Helen; Clegg, Chris

    2010-01-01

    Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop-floor and retail performance. Despite the fact we are working within a relatively novel and complex domain, it is clear that using an agent-based approach offers great potential for improving organizati...

  10. HESS Opinions "Should we apply bias correction to global and regional climate model data?"

    Directory of Open Access Journals (Sweden)

    J. Liebert

    2012-09-01

    Full Text Available Despite considerable progress in recent years, output of both global and regional circulation models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem, bias correction (BC, i.e. the correction of model output towards observations in a post-processing step) has now become a standard procedure in climate change impact studies. In this paper we argue that BC is currently often used in an invalid way: it is added to the GCM/RCM model chain without sufficient proof that the consistency of the latter (i.e. the agreement between model dynamics/model output and our judgement) as well as the generality of its applicability increases. BC methods often impair the advantages of circulation models by altering spatiotemporal field consistency, relations among variables and by violating conservation principles. Currently used BC methods largely neglect feedback mechanisms, and it is unclear whether they are time-invariant under climate change conditions. Applying BC increases agreement of climate model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this hides rather than reduces uncertainty, which may lead to avoidable forejudging of end users and decision makers. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction and propose ways to cope with biased output of circulation models in the short term and how to reduce the bias in the long term. 
The most promising strategy for improved future global and regional circulation model simulations is the increase in model resolution to the convection-permitting scale in combination with
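For reference, the most common BC variant criticized above, empirical quantile mapping, fits in a few lines; the arrays below are invented:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: pass each future model value through the
    historical model CDF, then through the inverse observed CDF."""
    mh, oh = np.sort(model_hist), np.sort(obs_hist)
    q = np.interp(model_fut, mh, np.linspace(0.0, 1.0, len(mh)))  # model CDF
    return np.interp(q, np.linspace(0.0, 1.0, len(oh)), oh)       # obs inverse CDF

# A constant +2 model bias is removed exactly inside the calibrated range
print(quantile_map([3, 4, 5, 6, 7], [1, 2, 3, 4, 5], [4.0, 5.0, 6.0]))
```

Note how the mapping is purely statistical: it adjusts each variable's marginal distribution independently, which is precisely why the paper warns that spatiotemporal consistency, inter-variable relations and conservation principles can be violated.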

  11. Modeling in transport phenomena a conceptual approach

    CERN Document Server

    Tosun, Ismail

    2007-01-01

    Modeling in Transport Phenomena, Second Edition presents and clearly explains with example problems the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach is presented between analysis and synthesis; students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, for students to easily understand and follow the material. There is a strong incentive in science and engineering to

  12. Model approach brings multi-level success.

    Science.gov (United States)

    Howell, Mark

    2012-08-01

    In an article that first appeared in the US magazine Medical Construction & Design, Mark Howell, senior vice-president of Skanska USA Building, based in Seattle, describes the design and construction of a new nine-storey, 350,000 ft² extension to the Good Samaritan Hospital in Puyallup, Washington state. He explains how the use of an Integrated Project Delivery (IPD) approach by the key players, and extensive use of building information modelling (BIM), combined to deliver a healthcare facility that he believes should meet the needs of patients, families, and the clinical care team, 'well into the future'.

  13. A Single, Continuously Applied Control Policy for Modeling Reaching Movements with and without Perturbation.

    Science.gov (United States)

    Li, Zhe; Mazzoni, Pietro; Song, Sen; Qian, Ning

    2018-02-01

    It has been debated whether kinematic features, such as the number of peaks or decomposed submovements in a velocity profile, indicate the number of discrete motor impulses or result from a continuous control process. The debate is particularly relevant for tasks involving target perturbation, which can alter movement kinematics. To simulate such tasks, finite-horizon models require two preset movement durations to compute two control policies before and after the perturbation. Another model employs infinite- and finite-horizon formulations to determine, respectively, movement durations and control policies, which are updated every time step. We adopted an infinite-horizon optimal feedback control model that, unlike previous approaches, does not preset movement durations or use multiple control policies. It contains both control-dependent and independent noises in system dynamics, state-dependent and independent noises in sensory feedbacks, and different delays and noise levels for visual and proprioceptive feedbacks. We analytically derived an optimal solution that can be applied continuously to move an effector toward a target regardless of whether, when, or where the target jumps. This single policy produces different numbers of peaks and "submovements" in velocity profiles for different conditions and trials. Movements that are slower or perturbed later appear to have more submovements. The model is also consistent with the observation that subjects can perform the perturbation task even without detecting the target jump or seeing their hands during reaching. Finally, because the model incorporates Weber's law via a state representation relative to the target, it explains why initial and terminal visual feedback are, respectively, less and more effective in improving end-point accuracy. Our work suggests that the number of peaks or submovements in a velocity profile does not necessarily reflect the number of motor impulses and that the difference between

  14. A multi-region approach to modeling subsurface flow

    International Nuclear Information System (INIS)

    Gwo, J.P.; Yeh, G.T.; Wilson, G.V.

    1990-01-01

    In this approach the media are assumed to contain n pore-regions at any physical point. Each region has a different pore size and different hydrologic parameters. Inter-region exchange is approximated by a linear transfer process. Based on the mass balance principle, a system of equations governing the flow and mass exchange in structured or aggregated soils is derived. This system of equations is coupled through linear transfer terms representing the interchange among different pore regions. A numerical MUlti-Region Flow (MURF) model, using the Galerkin finite element method to facilitate the treatment of local and field-scale heterogeneities, is developed to solve the system of equations. A sparse matrix solver is used to solve the resulting matrix equation, which makes the application of MURF to large field problems feasible in terms of CPU time and storage limitations. MURF is first verified by applying it to a ponding infiltration problem over a hill slope, which is a single-region problem that has been previously simulated by a single-region model. Very good agreement is obtained between the results from the two different models, thus partially verifying the MURF code. It is then applied to a two-region fractured medium to investigate the effects of the multi-region approach on the flow field. The results are comparable to those obtained by other investigators. (Author) (15 refs., 6 figs., tab.)
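    The linear inter-region transfer at the heart of this formulation can be illustrated with a toy two-region sketch. All names and parameter values below are our own illustration; MURF's actual Galerkin finite-element formulation is far richer.

    ```python
    # Two pore regions at a point exchange water through a linear transfer
    # term alpha * (h1 - h2); explicit-Euler integration of the storage.

    def simulate_two_region(h1, h2, alpha, q1, q2, c1, c2, dt, steps):
        """Pressure heads h1, h2 of two coupled pore regions.

        c1, c2 : storage capacities of the macro- and micro-pore regions
        q1, q2 : external inflow rates into each region
        alpha  : linear inter-region transfer coefficient
        """
        for _ in range(steps):
            transfer = alpha * (h1 - h2)      # flux from region 1 to 2
            h1 += dt * (q1 - transfer) / c1
            h2 += dt * (q2 + transfer) / c2
        return h1, h2

    # With no external inflow the two regions relax to a common head,
    # while total storage is conserved (here c1 = c2 = 1).
    h1, h2 = simulate_two_region(2.0, 0.0, 0.5, 0.0, 0.0, 1.0, 1.0, 0.01, 5000)
    ```

    The linear transfer term is what couples the otherwise independent region equations, mirroring the coupling described in the abstract.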

  15. Book Review: Statistics and Data with R: An Applied Approach Through Examples by Yosef Cohen and Jeremiah Y. Cohen

    OpenAIRE

    Fricker, Ronald D., Jr.

    2010-01-01

    Statistics and Data with R: An Applied Approach Through Examples is a well-written and nicely organized book that, as the title clearly states, uses applied examples done in R to illustrate and motivate statistical methods and ideas. The book is divided into three main parts. Part I: Data in Statistics and R is a brief (90 pages) introduction to R; Part II: Probability, Densities and Distributions covers the usual probability topics contained in introductory probability and statistics texts; ...

  16. Method developments approaches in supercritical fluid chromatography applied to the analysis of cosmetics.

    Science.gov (United States)

    Lesellier, E; Mith, D; Dubrulle, I

    2015-12-04

    Analyses of complex samples of cosmetics, such as creams or lotions, are generally achieved by HPLC. These analyses often require multistep gradients, due to the presence of compounds with a large range of polarity. For instance, the bioactive compounds may be polar, while the matrix contains lipid components that are rather non-polar; thus, cosmetic formulations are usually oil-water emulsions. Supercritical fluid chromatography (SFC) uses mobile phases composed of carbon dioxide and organic co-solvents, allowing for good solubility of both the active compounds and the matrix excipients. Moreover, the classical and well-known properties of these mobile phases yield fast analyses and ensure rapid method development. However, due to the large number of stationary phases available for SFC and to the varied additional parameters acting on both retention and separation factors (co-solvent nature and percentage, temperature, backpressure, flow rate, column dimensions and particle size), a simplified approach can be followed to ensure fast method development. First, suitable stationary phases should be carefully selected for an initial screening, and then the other operating parameters can be limited to the co-solvent nature and percentage, keeping the oven temperature and back-pressure constant. To describe simple method development guidelines in SFC, three sample applications are discussed in this paper: UV filters (sunscreens) in sunscreen cream, glyceryl caprylate in eye liner and caffeine in eye serum. Firstly, five stationary phases (ACQUITY UPC(2)) are screened with isocratic elution conditions (10% methanol in carbon dioxide). Complementarity of the stationary phases is assessed based on our spider diagram classification, which compares a large number of stationary phases based on five molecular interactions. Secondly, the one or two best stationary phases are retained for further optimization of mobile phase composition, with isocratic elution conditions or, when

  17. Sulfur partitioning applied to LIP magmatism - A new approach for quantifying sulfur concentration in basaltic melts

    Science.gov (United States)

    Marzoli, A.; Callegaro, S.; Baker, D. R.; De Min, A.; Cavazzini, G.; Martin, W.; Renne, P. R.; Svensen, H.

    2017-12-01

    Magmatism from Large Igneous Provinces (LIPs) has often been demonstrated to be synchronous with mass extinctions. Prominent examples in the Phanerozoic are the end-Permian, end-Triassic and end-Cretaceous extinctions, associated with, respectively, the Siberian Traps, the CAMP and the Deccan Traps. Despite the growing body of evidence for causal and temporal links between these events, it is not yet entirely clear how a LIP can severely affect the global environment. Degassing of volatile species such as S, C and halogen compounds, directly from LIP magmas and from contact metamorphism of volatile-rich sediments heated by the intrusions, appears to be the most realistic mechanism. Modeling the atmospheric response to LIP gas loads requires quantitative constraints on the degassed volatiles and emission rates, but these are challenging to obtain for magmatic systems from the geologic past. We therefore propose a new method to calculate the sulfur load of basaltic melts, by measuring the sulfur content in natural minerals (clinopyroxene and plagioclase) and combining it with experimentally determined partition coefficients (KD). We measured the partitioning of sulfur between crystals and melt by ion microprobe (Nordsim, Stockholm) on experimentally produced crystals and glasses. Piston cylinder experiments were performed at conditions typical of basaltic, andesitic and dacitic melts (800 or 1000 MPa; 1000°-1350°C), to constrain KD variations as a function of melt composition, oxidation state and water content. We obtained a clinopyroxene/melt sulfur KD of 0.001 for basaltic melts, which can be applied to natural continental flood basalts. Preliminary results from thoroughly-dated lava piles from the Deccan Traps and from the Siberian Traps sills confirm that most of the basalts were at or close to sulfide saturation (ca. 2000 ppm for low fO2 melts). These results can be compared with the scenario modeled by Schmidt et al. 
(2016) for Deccan Traps magmatism, for which sulfur from

  18. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    Full Text Available The paper deals with some current problems of modeling the dynamics of the subject-features development of the individual. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Offered are two steps for singling out a particular stage and the algorithm for developing an integrative model for it. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and for realistic prediction of pedagogical phenomena.

  19. Integrated modelling of nitrate loads to coastal waters and land rent applied to catchment scale water management

    DEFF Research Database (Denmark)

    Jacosen, T.; Refsgaard, A.; Jacobsen, Brian H.

    agricultural production and leakage of nitrate constitute a major pollution problem with respect to groundwater aquifers (drinking water), fresh surface water systems (water quality of lakes) and coastal receiving waters (eutrophication). The case study presented illustrates an advanced modelling approach applied...... in river basin management. Point sources (e.g. sewage treatment plant discharges) and distributed diffuse sources (nitrate leakage) are included to provide a modelling tool capable of simulating pollution transport from source to recipient, to analyse effects of specific, localized basin water management...... plans. The paper also includes a land rent modelling approach which can be used to choose the most cost-effective measures and the location of these measures. As a forerunner to the use of basin scale models in WFD basin water management plans this project demonstrates potential and limitations...

  20. Making Faces - State-Space Models Applied to Multi-Modal Signal Processing

    DEFF Research Database (Denmark)

    Lehn-Schiøler, Tue

    2005-01-01

    The two main focus areas of this thesis are State-Space Models and multi modal signal processing. The general State-Space Model is investigated and an addition to the class of sequential sampling methods is proposed. This new algorithm is denoted as the Parzen Particle Filter. Furthermore...... optimizer can be applied to speed up convergence. The linear version of the State-Space Model, the Kalman Filter, is applied to multi modal signal processing. It is demonstrated how a State-Space Model can be used to map from speech to lip movements. Besides the State-Space Model and the multi modal...
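    The linear State-Space Model mentioned above, the Kalman filter, can be sketched in scalar form. The parameter values and the synthetic data below are illustrative assumptions, not those of the thesis.

    ```python
    import random

    # Scalar linear-Gaussian state-space model: a hidden state
    # x_t = a*x_{t-1} + w is tracked from noisy observations y_t = c*x_t + v.

    def kalman_1d(ys, a=0.9, c=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
        """q, r: process and observation noise variances."""
        x, p, estimates = x0, p0, []
        for y in ys:
            x, p = a * x, a * a * p + q          # predict
            k = p * c / (c * c * p + r)          # Kalman gain
            x = x + k * (y - c * x)              # update with innovation
            p = (1.0 - k * c) * p
            estimates.append(x)
        return estimates

    # Synthetic demo: the filtered track is closer to the hidden state
    # than the raw observations are.
    random.seed(0)
    xs, ys, x = [], [], 0.0
    for _ in range(500):
        x = 0.9 * x + random.gauss(0.0, 0.1)     # process noise, std 0.1
        xs.append(x)
        ys.append(x + random.gauss(0.0, 0.5))    # observation noise, std 0.5
    est = kalman_1d(ys)
    mse_raw = sum((y - t) ** 2 for t, y in zip(xs, ys)) / len(xs)
    mse_kf = sum((e - t) ** 2 for t, e in zip(xs, est)) / len(xs)
    ```

    In the speech-to-lips application described in the abstract, the hidden state would be vector-valued lip parameters and the observations speech features; the predict/update structure is the same.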

  1. Modelling the progression of bird migration with conditional autoregressive models applied to ringing data.

    Science.gov (United States)

    Ambrosini, Roberto; Borgoni, Riccardo; Rubolini, Diego; Sicurella, Beatrice; Fiedler, Wolfgang; Bairlein, Franz; Baillie, Stephen R; Robinson, Robert A; Clark, Jacquie A; Spina, Fernando; Saino, Nicola

    2014-01-01

    Migration is a fundamental stage in the life history of several taxa, including birds, and is under strong selective pressure. At present, the only data that may allow for both an assessment of patterns of bird migration and for retrospective analyses of changes in migration timing are the databases of ring recoveries. We used ring recoveries of the Barn Swallow Hirundo rustica collected from 1908-2008 in Europe to model the calendar date at which a given proportion of birds is expected to have reached a given geographical area ('progression of migration') and to investigate the change in timing of migration over the same areas between three time periods (1908-1969, 1970-1990, 1991-2008). The analyses were conducted using binomial conditional autoregressive (CAR) mixed models. We first concentrated on data from the British Isles and then expanded the models to western Europe and north Africa. We produced maps of the progression of migration that disclosed local patterns of migration consistent with those obtained from the analyses of the movements of ringed individuals. Timing of migration estimated from our model is consistent with data on migration phenology of the Barn Swallow available in the literature, but in some cases it is later than that estimated by data collected at ringing stations, which, however, may not be representative of migration phenology over large geographical areas. The comparison of median migration date estimated over the same geographical area among time periods showed no significant advancement of spring migration over the whole of Europe, but a significant advancement of autumn migration in southern Europe. Our modelling approach can be generalized to any records of ringing date and locality of individuals including those which have not been recovered subsequently, as well as to geo-referenced databases of sightings of migratory individuals.

  2. A 3D Full-Stokes Calving Model Applied to a West Greenland Outlet Glacier

    Science.gov (United States)

    Todd, Joe; Christoffersen, Poul; Zwinger, Thomas; Råback, Peter; Chauché, Nolwenn; Hubbard, Alun; Toberg, Nick; Luckman, Adrian; Benn, Doug; Slater, Donald; Cowton, Tom

    2017-04-01

    Iceberg calving from outlet glaciers accounts for around half of all mass loss from both the Greenland and Antarctic ice sheets. The diverse nature of calving and its complex links to both internal dynamics and external climate make it challenging to incorporate into models of glaciers and ice sheets. Consequently, calving represents one of the most significant uncertainties in predictions of future sea level rise. Here, we present results from a new 3D full-Stokes calving model developed in Elmer/Ice and applied to Store Glacier, the second largest outlet glacier in West Greenland. The calving model implements the crevasse depth criterion, which states that calving occurs when surface and basal crevasses penetrate the full thickness of the glacier. The model also implements a new 3D rediscretization approach and a time-evolution scheme which allow the calving front to evolve realistically through time. We use the model to test Store's sensitivity to two seasonal environmental processes believed to significantly influence calving: submarine melt undercutting and ice mélange buttressing. Store Glacier discharges 13.9 km3 of ice annually, and this calving rate shows a strong seasonal trend. We aim to reproduce this seasonal trend by forcing the model with present day levels of submarine melting and ice mélange buttressing. Sensitivity to changes in these frontal processes was also investigated by forcing the model with a) increased submarine melt rates acting over longer periods of time and b) decreased mélange buttressing force acting over a reduced period. The model displays a range of observed calving behaviour and provides a good match to the observed seasonal evolution of Store's terminus. The results indicate that ice mélange is the primary driver of the observed seasonal advance of the terminus and the associated seasonal variation in calving rate. The model also demonstrates a significant influence from submarine melting on calving rate. The results
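    A highly simplified sketch of the crevasse depth criterion named in the abstract: here only surface crevasses under Nye's zero-stress approximation are considered (the full model also tracks basal crevasses), and all numbers are hypothetical, not Store Glacier values.

    ```python
    # Ice and freshwater densities (kg m^-3) and gravity (m s^-2).
    RHO_ICE, RHO_FW, G = 917.0, 1000.0, 9.81

    def surface_crevasse_depth(tensile_stress, crevasse_water_depth=0.0):
        """Nye zero-stress depth: a crevasse deepens until the tensile
        stress (plus any water pressure filling it) is balanced by the
        ice overburden pressure."""
        return (tensile_stress + RHO_FW * G * crevasse_water_depth) / (RHO_ICE * G)

    def calving_occurs(tensile_stress, thickness, crevasse_water_depth=0.0):
        """Calving when crevasse penetration reaches the full ice
        thickness (surface crevasses only in this sketch; the criterion
        in the abstract adds basal crevasses)."""
        return surface_crevasse_depth(tensile_stress, crevasse_water_depth) >= thickness

    # 200 kPa of tension opens crevasses of roughly 22 m: enough to
    # cleave thin ice, but a 300 m terminus needs water-filled crevasses
    # (or basal crevassing) to fail.
    d = surface_crevasse_depth(2e5)
    ```

    The water term shows why surface melt can deepen crevasses and promote calving, one of the seasonal forcings the study investigates.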

  3. Applying the competence-based approach to management in the aerospace industry

    Directory of Open Access Journals (Sweden)

    Arpentieva Mariam

    2018-01-01

    Full Text Available Problems of management in aerospace manufacturing are similar to those observed in other sectors, the main one being the flattening of strategic management. The main reason lies in the attitude towards the human resources of the organization. The aerospace industry employs 250 thousand people, who need an individual approach, and such an individual approach is what competence-based management can offer. The purpose of the study is to demonstrate the benefits of the competence-based approach to human resource management in the context of strategic management of an aerospace organization. To achieve this goal, the method of comparative analysis is used. The article compares two approaches to personnel management. The transition to competence-based human resource management means (a) a different understanding of the object of management; (b) involvement of the employee's «knowledge – skills – abilities» in all functions of human resource management; and (c) a change in the approach to strategic management of the aerospace industry.

  4. Neural networks-based modeling applied to a process of heavy metals removal from wastewaters.

    Science.gov (United States)

    Suditu, Gabriel D; Curteanu, Silvia; Bulgariu, Laura

    2013-01-01

    This article approaches the problem of environment pollution with heavy metals from disposal of industrial wastewaters, namely removal of these metals by means of biosorbents, particularly with Romanian peat (from Poiana Stampei). The study is carried out by simulation using feed-forward and modular neural networks with one or two hidden layers, pursuing the influence of certain operating parameters (metal nature, sorbent dose, pH, temperature, initial concentration of metal ion, contact time) on the amount of metal ions retained on the unit mass of sorbent. In neural network modeling, a consistent data set was used, including five metals: lead, mercury, cadmium, nickel and cobalt, the quantification of the metal nature being done by its electronegativity. Even if based on successive trials, the method of designing neural models was systematically conducted, recording and comparing the errors obtained with different types of neural networks, having various numbers of hidden layers and neurons, number of training epochs, or using various learning methods. The errors with values under 5% make clear the efficiency of the applied method.
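    The kind of feed-forward network used in such studies can be sketched minimally as a one-hidden-layer regression net trained by stochastic gradient descent. The toy saturating "isotherm" target, network size and learning rate below are our own illustrative assumptions, not the study's models or data.

    ```python
    import math, random

    random.seed(1)
    H = 8                                     # hidden neurons
    w1 = [random.uniform(-1, 1) for _ in range(H)]
    b1 = [0.0] * H
    w2 = [random.uniform(-1, 1) for _ in range(H)]
    b2 = 0.0

    def forward(x):
        """One input -> tanh hidden layer -> linear output."""
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
        return sum(w2[j] * h[j] for j in range(H)) + b2, h

    def mse(samples):
        return sum((forward(x)[0] - t) ** 2 for x, t in samples) / len(samples)

    def train(samples, lr=0.05, epochs=3000):
        """Stochastic gradient descent on squared error."""
        global b2
        for _ in range(epochs):
            for x, t in samples:
                y, h = forward(x)
                err = y - t
                for j in range(H):
                    grad_h = err * w2[j] * (1 - h[j] ** 2)   # backprop to hidden
                    w2[j] -= lr * err * h[j]
                    w1[j] -= lr * grad_h * x
                    b1[j] -= lr * grad_h
                b2 -= lr * err

    # Toy target: retained amount saturates with the input, loosely
    # mimicking a sorption isotherm shape.
    data = [(x / 10, (x / 10) / (1 + x / 10)) for x in range(11)]
    mse_before = mse(data)
    train(data)
    mse_after = mse(data)
    ```

    In the study the inputs are the listed operating parameters (sorbent dose, pH, temperature, etc.) rather than a single variable, but the training principle is the same.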

  5. Applying Interpretive Structural Modeling to Cost Overruns in Construction Projects in the Sultanate of Oman

    Directory of Open Access Journals (Sweden)

    K. Alzebdeh

    2015-06-01

    Full Text Available Cost overruns in construction projects are a problem faced by project managers, engineers, and clients throughout the Middle East. Globally, several studies in the literature have focused on identifying the causes of these overruns and used statistical methods to rank them according to their impacts. None of these studies have considered the interactions among these factors. This paper examines interpretive structural modelling (ISM) as a viable technique for modelling complex interactions among factors responsible for cost overruns in construction projects in the Sultanate of Oman. In particular, thirteen interrelated factors associated with cost overruns were identified, along with their contextual interrelationships. Application of ISM leads to organizing these factors in a hierarchical structure which effectively demonstrates their interactions in a simple way. Four factors were found to be at the root of cost overruns: instability of the US dollar, changes in governmental regulations, faulty cost estimation, and poor coordination among projects’ parties. Taking appropriate actions to minimize the influence of these factors can ultimately lead to better control of future project costs. This study is of value to managers and decision makers because it provides a powerful yet easy-to-apply approach for investigating the problem of cost overruns and other similar issues.
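    The core ISM computation — transitive closure of the direct-influence matrix followed by level partitioning — can be sketched as follows. The three toy factors are ours for illustration, not the thirteen from the study.

    ```python
    # Factor 0 drives factor 1, which drives factor 2 (a toy chain).

    def transitive_closure(adj):
        """Warshall's algorithm: reachability matrix from direct influences."""
        n = len(adj)
        r = [row[:] for row in adj]
        for i in range(n):
            r[i][i] = 1                       # each factor reaches itself
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    r[i][j] = r[i][j] or (r[i][k] and r[k][j])
        return r

    def level_partition(r):
        """Iteratively peel off factors whose reachability set (within the
        remaining factors) is contained in their antecedent set; the first
        level holds the most dependent factors, the last the root causes."""
        n = len(r)
        remaining, levels = set(range(n)), []
        while remaining:
            level = {i for i in remaining
                     if {j for j in remaining if r[i][j]}
                     <= {j for j in remaining if r[j][i]}}
            levels.append(sorted(level))
            remaining -= level
        return levels

    adj = [[0, 1, 0],
           [0, 0, 1],
           [0, 0, 0]]
    levels = level_partition(transitive_closure(adj))   # [[2], [1], [0]]
    ```

    Factors extracted in the last level (here factor 0) play the role the study assigns to its four root factors, such as faulty cost estimation.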

  6. THE 3C COOPERATION MODEL APPLIED TO THE CLASSICAL REQUIREMENT ANALYSIS

    Directory of Open Access Journals (Sweden)

    Vagner Luiz Gava

    2012-08-01

    Full Text Available Aspects related to the users' cooperative work are not considered in the traditional approach of software engineering, since the user is viewed independently of his/her workplace environment or group, with the individual model generalized to the study of the collective behavior of all users. This work proposes a software requirements process to address issues involving cooperative work in information systems, in which coordination of the users' actions is distributed and communication among them occurs indirectly, through the data entered while using the software. To achieve this goal, this research uses ergonomics, the 3C cooperation model, awareness and software engineering concepts. Action-research is used as the research methodology, applied in three cycles during the development of a corporate workflow system in a technological research company. This article discusses the third cycle, which corresponds to the process that deals with the refinement of the cooperative work requirements with the software in actual use in the workplace, where the inclusion of a computer system changes the users' workplace from face-to-face interaction to interaction mediated by the software. The results showed that a higher degree of users' awareness of their own activities and of other system users contributes to a decrease in their errors and in the inappropriate use of the system.

  7. Engineering of centrifugal dust-collectors based on parallel comparing tests applying computer modelling

    Science.gov (United States)

    Bulygin, Y. I.; Koronchik, D. A.; Abuzyarov, A. A.

    2015-09-01

    Researchers are currently giving serious consideration to questions related to atmosphere protection, in particular the effectiveness of new designs of cyclonic devices for cleaning gases of suspended particulate matter (SPM). Engineering such new devices is impossible without mathematical modelling methods, computer modelling and physical models of the processes under study, validated by full-scale tests.

  8. Dynamical properties of the Penna aging model applied to the population of wolves

    Science.gov (United States)

    Makowiec, Danuta

    1997-02-01

    The parameters of the Penna bit-string model of aging of biological systems are systematically tested to better understand the model itself, as well as the results arising from applying it to studies of the development of a stationary population of Alaska wolves.
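    A minimal sketch of the Penna bit-string model helps make the tested parameters concrete. The values below (genome length, mutation threshold T, reproduction age R, mutations per birth M, carrying capacity) are typical textbook choices, not necessarily those of the wolf study.

    ```python
    import random

    GENOME, T, R, M = 32, 3, 8, 1   # bits, death threshold, maturity, mutations
    CAPACITY = 10000                # environmental carrying capacity

    def step(pop):
        """One time step: ageing, Verhulst crowding, death by accumulated
        mutations, and reproduction. pop is a list of (genome_bits, age)."""
        new = []
        survive = 1.0 - len(pop) / CAPACITY        # logistic (Verhulst) factor
        for genome, age in pop:
            age += 1
            if age >= GENOME:
                continue                           # genome exhausted: death
            # bad mutations that have become active up to this age
            active = bin(genome & ((1 << age) - 1)).count("1")
            if active >= T or random.random() > survive:
                continue                           # mutational or crowding death
            new.append((genome, age))
            if age >= R:                           # mature individuals reproduce
                child = genome
                for _ in range(M):
                    child |= 1 << random.randrange(GENOME)   # inherited mutation
                new.append((child, 0))
        return new

    random.seed(2)
    pop = [(0, 0)] * 1000                          # mutation-free newborns
    for _ in range(200):
        pop = step(pop)                            # relaxes toward a stationary size
    ```

    Varying T, R, M and the capacity, as the record describes, shifts the stationary population size and its age structure.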

  9. School Food Environment Promotion Program: Applying the Socio-ecological Approach

    Directory of Open Access Journals (Sweden)

    Fatemeh Bakhtari Aghdam

    2018-01-01

    Full Text Available Background Although healthy nutrition recommendations have been offered in recent decades, research shows an increasing rate of unhealthy junk food consumption among primary school children. The aim of this study was to investigate the effects of a health promotion intervention on school food buffets and on changes in the nutritional behaviors of students. Materials and Methods In this quasi-interventional study, eight schools in Tabriz city, Iran, agreed to participate. The schools were randomly selected and divided into an intervention and a control group, and a pretest was given to both groups. A four-week interventional program based on the socio-ecological model was conducted in the eight randomly selected schools. A checklist was designed for the assessment of food items available at the schools' buffets, and a 60-item semi-quantitative food frequency questionnaire (FFQ) was used to assess the rate of food consumption and energy intake. Data were analyzed using the Wilcoxon, Mann-Whitney U and Chi-square tests. Results The findings revealed a reduction in junk food consumption in the intervention group between before and after the intervention, except for sweets consumption. The number of junk foods offered in the schools' buffets decreased in the intervention group. After the intervention, significant decreases were found in the intervention group in the intake of energy, fat and saturated fatty acids compared to the control group (p = 0.00). Conclusion In order to design effective school food environment promotion programs, school healthcare providers should consider multifaceted approaches.

  10. Modeling segregated in- situ combustion processes through a vertical displacement model applied to a Colombian field

    International Nuclear Information System (INIS)

    Guerra Aristizabal, Jose Julian; Grosso Vargas, Jorge Luis

    2005-01-01

    Recently, the incorporation of horizontal-well technologies into thermal EOR processes such as in-situ combustion (ISC) has been proposed. This has led to the conception of new recovery mechanisms, named here segregated in-situ combustion processes, which are conventional in-situ combustion processes with a segregated flow component. Top-Down Combustion, Combustion Override Split-production Horizontal-well and Toe-to-Heel Air Injection are three of these processes, which incorporate horizontal producers and gravity drainage phenomena. When applied to thick reservoirs, a process of this nature can be reasonably modeled using concepts of conventional in-situ combustion and crestal gas injection, especially for heavy oils that are mobile at reservoir conditions. Such a process has been studied through an analytic model conceived for the particular conditions of the Castilla field, a homogeneous thick anticline structure containing high-mobility heavy oil, which appears to be an excellent candidate for the application of these technologies

  11. A modular approach to inverse modelling of a district heating facility with seasonal thermal energy storage

    DEFF Research Database (Denmark)

    Tordrup, Karl Woldum; Poulsen, Uffe Vestergaard; Nielsen, Carsten

    2017-01-01

    We use a modular approach to develop a TRNSYS model for a district heating facility by applying inverse modelling to one year of operational data for individual components. We assemble the components into a single TRNSYS model for the full system using the accumulation tanks as a central hub...

  13. Carbonate rock depositional models: A microfacies approach

    Energy Technology Data Exchange (ETDEWEB)

    Carozzi, A.V.

    1988-01-01

    Carbonate rocks contain more than 50% by weight carbonate minerals such as calcite, dolomite, and siderite. Understanding how these rocks form can lead to more efficient methods of petroleum exploration. Microfacies analysis techniques can be used as a method of predicting models of sedimentation for carbonate rocks. Microfacies in carbonate rocks can be seen clearly only in thin sections under a microscope. Thin-section analysis of carbonate rocks is a tool that can be used to understand depositional environments, the diagenetic evolution of carbonate rocks, and the formation of porosity and permeability in carbonate rocks. Microfacies analysis techniques are applied to understanding the origin and formation of carbonate ramps, carbonate platforms, and carbonate slopes and basins. This book will be of interest to students and professionals concerned with the disciplines of sedimentary petrology, sedimentology, petroleum geology, and paleontology.

  14. Jugular valve function and petrosal sinuses pressure: a computational model applied to sudden sensorineural hearing loss

    Directory of Open Access Journals (Sweden)

    Mirko Tessari

    2017-04-01

    Full Text Available Reports of extra-cranial venous outflow disturbances have recently been linked to sudden sensorineural hearing loss (SSNHL). The aims of the present study are: (i) to quantify, with a mathematical model, the impact of jugular valve function on the pressure of the superior and inferior petrosal sinuses (SPS, IPS) and the main auricular veins; (ii) to verify the feasibility of applying the mathematical model in the clinical setting, in terms of consistency with the usual measures of SSNHL outcome. Extra-cranial venous outflow and post-analysis were respectively blindly assessed by echo colour-Doppler (ECD) and a validated mathematical model of the human circulation. The pilot study was conducted on 1 healthy control and a group of 4 patients with different outcomes of SSNHL. The main finding was the significantly increased pressure calculated in the SPS and IPS of patients with ipsilateral jugular obstruction due to immobile valve leaflets (6.55 mmHg), compared with the other subjects without complete extracranial obstruction (6.01 mmHg; P=0.0006). Moreover, we demonstrated an inverse correlation between the extrapolated pressure values in the SPS/IPS and the mean flow measured in the corresponding internal jugular vein (r = –0.87773; r-squared = 0.7697; P=0.0009). The proposed mathematical model can be applied to extra-cranial venous ECD investigation in order to derive novel clinical information on the drainage of the inner ear. Such clinical information seems to provide coherent parameters potentially capable of guiding the prognosis. This innovative approach was proven feasible by the present pilot investigation and warrants further studies with a larger sample of patients.

  15. An Optimisation Approach Applied to Design the Hydraulic Power Supply for a Forklift Truck

    DEFF Research Database (Denmark)

    Pedersen, Henrik Clemmensen; Andersen, Torben Ole; Hansen, Michael Rygaard

    2004-01-01

    -level optimisation approach, and is in the current paper exemplified through the design of the hydraulic power supply for a forklift truck. The paper first describes the prerequisites for the method and then explains the different steps in the approach to design the hydraulic system. Finally the results...

  16. Ecotoxicological modelling of cosmetics for aquatic organisms: A QSTR approach.

    Science.gov (United States)

    Khan, K; Roy, K

    2017-07-01

    In this study, externally validated quantitative structure-toxicity relationship (QSTR) models were developed for the toxicity of cosmetic ingredients to three ecotoxicologically relevant organisms, namely Pseudokirchneriella subcapitata, Daphnia magna and Pimephales promelas, following the OECD guidelines. The final models were developed with the partial least squares (PLS) regression technique, which is more robust than multiple linear regression. The obtained model for P. subcapitata shows that molecular size and complexity have significant impacts on the toxicity of cosmetics. In the case of P. promelas and D. magna, we found that the largest contributions to toxicity were made by hydrophobicity and van der Waals surface area, respectively. All models were validated using both internal and test compounds employing multiple strategies. For each QSTR model, applicability domain studies were also performed using the "Distance to Model in X-space" method. A comparison was made with the ECOSAR predictions to demonstrate the good predictive performance of our developed models. Finally, the individual models were applied to predict toxicity for an external set of 596 personal care products having no experimental data for at least one of the endpoints, and the compounds were ranked in decreasing order of toxicity using a scaling approach.

  17. Species distribution models for crop pollination: a modelling framework applied to Great Britain.

    Science.gov (United States)

    Polce, Chiara; Termansen, Mette; Aguirre-Gutiérrez, Jesus; Boatman, Nigel D; Budge, Giles E; Crowe, Andrew; Garratt, Michael P; Pietravalle, Stéphane; Potts, Simon G; Ramirez, Jorge A; Somerwill, Kate E; Biesmeijer, Jacobus C

    2013-01-01

    Insect pollination benefits over three quarters of the world's major crops. There is growing concern that observed declines in pollinators may impact on production and revenues from animal pollinated crops. Knowing the distribution of pollinators is therefore crucial for estimating their availability to pollinate crops; however, in general, we have an incomplete knowledge of where these pollinators occur. We propose a method to predict geographical patterns of pollination service to crops, novel in two elements: the use of pollinator records rather than expert knowledge to predict pollinator occurrence, and the inclusion of the managed pollinator supply. We integrated a maximum entropy species distribution model (SDM) with an existing pollination service model (PSM) to derive the availability of pollinators for crop pollination. We used nation-wide records of wild and managed pollinators (honey bees) as well as agricultural data from Great Britain. We first calibrated the SDM on a representative sample of bee and hoverfly crop pollinator species, evaluating the effects of different settings on model performance and on its capacity to identify the most important predictors. The importance of the different predictors was better resolved by SDM derived from simpler functions, with consistent results for bees and hoverflies. We then used the species distributions from the calibrated model to predict pollination service of wild and managed pollinators, using field beans as a test case. The PSM allowed us to spatially characterize the contribution of wild and managed pollinators and also identify areas potentially vulnerable to low pollination service provision, which can help direct local scale interventions. This approach can be extended to investigate geographical mismatches between crop pollination demand and the availability of pollinators, resulting from environmental change or policy scenarios.

  18. Modeling energy fluxes in heterogeneous landscapes employing a mosaic approach

    Science.gov (United States)

    Klein, Christian; Thieme, Christoph; Priesack, Eckart

    2015-04-01

    Recent studies show that uncertainties in regional and global climate and weather simulations are partly due to inadequate descriptions of the energy flux exchanges between the land surface and the atmosphere. One major shortcoming is the limitation of the grid-cell resolution, which is recommended to be at least about 3x3 km² in most models due to limitations in the model physics. To represent each individual grid cell, most models select one dominant soil type and one dominant land use type. This resolution, however, is often too coarse in regions where the spatial diversity of soil and land use types is high, e.g. in Central Europe. An elegant method to avoid this shortcoming of grid-cell resolution is the so-called mosaic approach. This approach is part of the recently developed ecosystem model framework Expert-N 5.0. The aim of this study was to analyze the impact of the characteristics of two managed fields, planted with winter wheat and potato, on near-surface soil moisture and on the energy flux exchanges at the soil-plant-atmosphere interface. The simulated energy fluxes were compared with eddy flux tower measurements taken between the respective fields at the research farm Scheyern, north-west of Munich, Germany. To perform these simulations, we coupled the ecosystem model Expert-N 5.0 to an analytical footprint model. The coupled model system can calculate the mixing ratio of the surface energy fluxes at a given point within one grid cell (in this case at the flux tower between the two fields). This approach accounts for the differences between the two soil types, land use managements, and canopy properties due to footprint size dynamics. Our preliminary simulation results show that a mosaic approach can improve the modeling and analysis of energy fluxes when the land surface is heterogeneous. In this case our applied method is a promising approach to extending weather and climate models on the regional and global scale.

  19. "Teamwork in hospitals": a quasi-experimental study protocol applying a human factors approach.

    Science.gov (United States)

    Ballangrud, Randi; Husebø, Sissel Eikeland; Aase, Karina; Aaberg, Oddveig Reiersdal; Vifladt, Anne; Berg, Geir Vegard; Hall-Lord, Marie Louise

    2017-01-01

    Effective teamwork and sufficient communication are critical to patient safety in today's specialized and complex healthcare services. Team training is important for improved efficiency in inter-professional teamwork within hospitals; however, the scientific rigor of studies must be strengthened, and more research is required to compare studies across samples, settings and countries. The aims of the study are to translate and validate teamwork questionnaires and investigate healthcare personnel's perception of teamwork in hospitals (Part 1), and further to explore the impact of an inter-professional teamwork intervention in a surgical ward on structure, process and outcome (Part 2). To address these aims, a descriptive and explorative design (Part 1) and a quasi-experimental interventional design (Part 2) will be applied. The study will be carried out in five different hospitals (A-E) in three hospital trusts in Norway. Frontline healthcare personnel in Hospitals A and B, from both acute and non-acute departments, will be invited to respond to three Norwegian-translated teamwork questionnaires (Part 1). An inter-professional teamwork intervention in line with the TeamSTEPPS-recommended Model of Change will be implemented in a surgical ward at Hospital C. All physicians, registered nurses and assistant nurses in the intervention ward and two control wards (Hospitals D and E) will be invited to survey their perception of teamwork, team decision making, safety culture and attitude towards teamwork before the intervention and after six and 12 months. Adult patients admitted to the intervention surgical unit will be invited to survey their perception of quality of care during their hospital stay before the intervention and after six and 12 months. Moreover, anonymous patient registry data from local registers and data from patients' medical records will be collected (Part 2). This study will help to understand the impact of an inter-professional teamwork

  20. What can the treatment of Parkinson's disease learn from dementia care; applying a bio-psycho-social approach to Parkinson's disease.

    Science.gov (United States)

    Gibson, Grant

    2017-12-01

    Within contemporary medical practice, Parkinson's disease (PD) is treated using a biomedical, neurological approach, which although bringing numerous benefits can struggle to engage with how people with PD experience the disease. A bio-psycho-social approach has not yet been established in PD; however, bio-psycho-social approaches adopted within dementia care practice could bring significant benefit to PD care. This paper summarises existing bio-psycho-social models of dementia care and explores how these models could also usefully be applied to care for PD. Specifically, this paper adapts the bio-psycho-social model for dementia developed by Spector and Orrell () to suggest a bio-psycho-social model that could be used to inform routine care in PD. This model conceptualises PD as a trajectory, in which several interrelated fixed and tractable factors influence both PD's symptomology and the various biological and psychosocial challenges individuals will face as their disease progresses. Using an individual case study, this paper then illustrates how such a model can assist clinicians in identifying suitable interventions for people living with PD. The paper concludes by discussing how a bio-psycho-social model could be used as a tool in PD's routine care. The model also encourages the development of a theoretical and practical framework for the future development of the role of the PD specialist nurse within routine practice. A bio-psycho-social approach to Parkinson's disease provides an opportunity to move towards a holistic model of care practice which addresses a wider range of factors affecting people living with PD. The paper puts forward a framework through which PD care practice can move towards a bio-psycho-social perspective. PD specialist nurses are particularly well placed to adopt such a model

  1. 3-D thermal modelling applied to stress-induced anisotropy of thermal conductivity

    Energy Technology Data Exchange (ETDEWEB)

    Pron, H.; Bissieux, C. [Universite de Reims, Unite de Thermique et Analyse Physique, EA 2061, Laboratoire de Thermophysique (URCA/UTAP/LTP), UFR Sciences, Moulin de la Housse, B.P 1039, 51687 cedex 2, Reims (France)

    2004-12-01

    The present work consists of the development of a three-dimensional model of heat diffusion in orthotropic media, based on numerical Fourier transforms and taking into account the extent of the source. This model has been applied, together with a Gauss-Newton parameter estimation procedure, to identify the components of the conductivity tensor of a steel bar under uniaxial loading. Variations of a few percent in the conductivity components have been observed for applied stresses remaining in the elastic domain. (authors)
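The Gauss-Newton parameter estimation step mentioned above can be illustrated generically; the exponential toy model below is an assumed stand-in for demonstration, not the authors' heat diffusion model:

```python
import numpy as np

def gauss_newton(f, jac, p0, t, y, iters=20):
    """Gauss-Newton: p <- p + (J^T J)^-1 J^T r, with r the residual vector."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - f(t, p)                       # residuals at current parameters
        J = jac(t, p)                         # Jacobian of the model w.r.t. p
        p = p + np.linalg.solve(J.T @ J, J.T @ r)
    return p

# Toy model y = a*exp(-b*t), standing in for a thermal response curve
f = lambda t, p: p[0] * np.exp(-p[1] * t)
jac = lambda t, p: np.column_stack([np.exp(-p[1] * t),
                                    -p[0] * t * np.exp(-p[1] * t)])
t = np.linspace(0.0, 2.0, 30)
y = f(t, [2.0, 1.3])                          # noiseless synthetic data
p_hat = gauss_newton(f, jac, [1.0, 1.0], t, y)
```

Starting from a rough guess, the iteration recovers the generating parameters; real identification problems add noise and regularisation on top of this core update.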

  2. Comparison of Science-Technology-Society Approach and Textbook Oriented Instruction on Students' Abilities to Apply Science Concepts

    Science.gov (United States)

    Kapici, Hasan Ozgur; Akcay, Hakan; Yager, Robert E.

    2017-01-01

    It is important for students to learn concepts and use them for solving problems and further learning. In this respect, the purpose of this study is to investigate students' abilities to apply science concepts that they have learned from a Science-Technology-Society-based approach or textbook-oriented instruction. The current study is based on…

  3. Characteristics of Computational Thinking about the Estimation of the Students in Mathematics Classroom Applying Lesson Study and Open Approach

    Science.gov (United States)

    Promraksa, Siwarak; Sangaroon, Kiat; Inprasitha, Maitree

    2014-01-01

    The objectives of this research were to study and analyze the characteristics of computational thinking about estimation among students in a mathematics classroom applying lesson study and open approach. The target group comprised 4th grade students in the 2011 academic year at Choomchon Banchonnabot School. The lesson plan used for data…

  4. Increasing oral absorption of polar neuraminidase inhibitors: a prodrug transporter approach applied to oseltamivir analogue.

    Science.gov (United States)

    Gupta, Deepak; Varghese Gupta, Sheeba; Dahan, Arik; Tsume, Yasuhiro; Hilfinger, John; Lee, Kyung-Dall; Amidon, Gordon L

    2013-02-04

    Poor oral absorption is one of the limiting factors in utilizing the full potential of polar antiviral agents. The neuraminidase target site requires a polar chemical structure for high-affinity binding, thus limiting the oral efficacy of many high-affinity ligands. The aim of this study was to overcome this poor oral absorption barrier by utilizing a prodrug approach to target the apical brush border peptide transporter 1 (PEPT1). Guanidine oseltamivir carboxylate (GOCarb) is a highly active polar antiviral agent with insufficient oral bioavailability (4%) to be an effective therapeutic agent. In this report we utilize a carrier-mediated targeted prodrug approach to improve the oral absorption of GOCarb. Acyloxy(alkyl) ester based amino acid linked prodrugs were synthesized and evaluated as potential substrates of mucosal transporters, e.g., PEPT1. Prodrugs were also evaluated for their chemical and enzymatic stability. PEPT1 transport studies included [3H]Gly-Sar uptake inhibition in Caco-2 cells and cellular uptake experiments using HeLa cells overexpressing PEPT1. The intestinal membrane permeabilities of the selected prodrugs and the parent drug were then evaluated for epithelial cell transport across Caco-2 monolayers and in the in situ rat intestinal jejunal perfusion model. Prodrugs exhibited pH-dependent stability, with higher stability at acidic pH. Significant inhibition of uptake (IC50 30-fold increase in affinity compared to GOCarb. The l-valyl prodrug exhibited significant enhancement of uptake in PEPT1/HeLa cells and compared favorably with the well-absorbed valacyclovir. Transepithelial permeability across Caco-2 monolayers showed that these amino acid prodrugs have a 2-5-fold increase in permeability compared to the parent drug and that the l-valyl prodrug (Papp = 1.7 × 10⁻⁶ cm/s) has the potential to be rapidly transported across the epithelial cell apical membrane. Significantly, only the parent drug (GOCarb) appeared in the basolateral
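The apparent permeability values quoted above (Papp, in cm/s) follow the standard Caco-2 relation Papp = (dQ/dt) / (A · C0); a minimal helper, with illustrative numbers that are not taken from the study:

```python
def apparent_permeability(dq_dt, area_cm2, c0):
    """Papp = (dQ/dt) / (A * C0).

    dq_dt    : rate of drug appearance in the receiver chamber, ug/s
    area_cm2 : monolayer surface area, cm^2
    c0       : initial donor concentration, ug/cm^3 (equivalently ug/mL)
    Returns Papp in cm/s.
    """
    return dq_dt / (area_cm2 * c0)

# Illustrative: a 1.12 cm^2 Transwell insert and a 100 ug/mL donor solution
papp = apparent_permeability(dq_dt=1.9e-4, area_cm2=1.12, c0=100.0)
# papp is on the order of 1.7e-6 cm/s, the magnitude quoted in the abstract
```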

  5. Approach to discover T- and B-cell antigens of intracellular pathogens applied to the design of Chlamydia trachomatis vaccines

    Science.gov (United States)

    Finco, Oretta; Frigimelica, Elisabetta; Buricchi, Francesca; Petracca, Roberto; Galli, Giuliano; Faenzi, Elisa; Meoni, Eva; Bonci, Alessandra; Agnusdei, Mauro; Nardelli, Filomena; Bartolini, Erika; Scarselli, Maria; Caproni, Elena; Laera, Donatello; Zedda, Luisanna; Skibinski, David; Giovinazzi, Serena; Bastone, Riccardo; Ianni, Elvira; Cevenini, Roberto; Grandi, Guido; Grifantini, Renata

    2011-01-01

    Natural immunity against obligate and/or facultative intracellular pathogens is usually mediated by both humoral and cellular immunity. The identification of those antigens stimulating both arms of the immune system is instrumental for vaccine discovery. Although high-throughput technologies have been applied for the discovery of antibody-inducing antigens, few examples of their application for T-cell antigens have been reported. We describe how the compilation of the immunome, here defined as the pool of immunogenic antigens inducing T- and B-cell responses in vivo, can lead to vaccine candidates against Chlamydia trachomatis. We selected 120 C. trachomatis proteins and assessed their immunogenicity using two parallel high-throughput approaches. Protein arrays were generated and screened with sera from C. trachomatis-infected patients to identify antibody-inducing antigens. Splenocytes from C. trachomatis-infected mice were stimulated with 79 proteins, and the frequency of antigen-specific CD4+/IFN-γ+ T cells was analyzed by flow cytometry. We identified 21 antibody-inducing antigens, 16 CD4+/IFN-γ+–inducing antigens, and five antigens eliciting both types of responses. Assessment of their protective activity in a mouse model of Chlamydia muridarum lung infection led to the identification of seven antigens conferring partial protection when administered with LTK63/CpG adjuvant. Protection was largely the result of cellular immunity as assessed by CD4+ T-cell depletion. The seven antigens provided robust additive protection when combined in four-antigen combinations. This study paves the way for the development of an effective anti-Chlamydia vaccine and provides a general approach for the discovery of vaccines against other intracellular pathogens. PMID:21628568

  6. Quantifying Vulnerability to Extreme Heat in Time Series Analyses: A Novel Approach Applied to Neighborhood Social Disparities under Climate Change.

    Science.gov (United States)

    Benmarhnia, Tarik; Grenier, Patrick; Brand, Allan; Fournier, Michel; Deguen, Séverine; Smargiassi, Audrey

    2015-09-22

    We propose a novel approach to examine vulnerability in the relationship between heat and years of life lost and apply it to neighborhood social disparities in Montreal and Paris. We used historical data from the summers of 1990 through 2007 for Montreal and from 2004 through 2009 for Paris to estimate daily years of life lost social disparities (DYLLD), summarizing social inequalities across groups. We used Generalized Linear Models to separately estimate relative risks (RR) for DYLLD in association with daily mean temperatures in both cities. We used 30 climate scenarios of daily mean temperature to estimate future temperature distributions (2021-2050). We performed random-effect meta-analyses to assess the impact of climate change by climate scenario for each city and compared the impact of climate change for the two cities using a meta-regression analysis. We show that an increase in ambient temperature leads to an increase in social disparities in daily years of life lost. The impact of climate change on DYLLD attributable to temperature was 2.06 (95% CI: 1.90, 2.25) in Montreal and 1.77 (95% CI: 1.61, 1.94) in Paris. The city explained a difference of 0.31 (95% CI: 0.14, 0.49) in the impact of climate change. We propose a new analytical approach for estimating vulnerability in the relationship between heat and health. Our results suggest that in Paris and Montreal, health disparities related to heat impacts exist today and will increase in the future.

  7. Quantifying Vulnerability to Extreme Heat in Time Series Analyses: A Novel Approach Applied to Neighborhood Social Disparities under Climate Change

    Directory of Open Access Journals (Sweden)

    Tarik Benmarhnia

    2015-09-01

    Full Text Available Objectives: We propose a novel approach to examine vulnerability in the relationship between heat and years of life lost and apply it to neighborhood social disparities in Montreal and Paris. Methods: We used historical data from the summers of 1990 through 2007 for Montreal and from 2004 through 2009 for Paris to estimate daily years of life lost social disparities (DYLLD), summarizing social inequalities across groups. We used Generalized Linear Models to separately estimate relative risks (RR) for DYLLD in association with daily mean temperatures in both cities. We used 30 climate scenarios of daily mean temperature to estimate future temperature distributions (2021–2050). We performed random effect meta-analyses to assess the impact of climate change by climate scenario for each city and compared the impact of climate change for the two cities using a meta-regression analysis. Results: We show that an increase in ambient temperature leads to an increase in social disparities in daily years of life lost. The impact of climate change on DYLLD attributable to temperature was 2.06 (95% CI: 1.90, 2.25) in Montreal and 1.77 (95% CI: 1.61, 1.94) in Paris. The city explained a difference of 0.31 (95% CI: 0.14, 0.49) in the impact of climate change. Conclusion: We propose a new analytical approach for estimating vulnerability in the relationship between heat and health. Our results suggest that in Paris and Montreal, health disparities related to heat impacts exist today and will increase in the future.

  8. Applying a Markov approach as a Lean Thinking analysis of waste elimination in a Rice Production Process

    Directory of Open Access Journals (Sweden)

    Eldon Glen Caldwell Marin

    2015-01-01

    Full Text Available The Markov chain model was proposed to analyze stochastic events when recursive cycles occur; for example, when rework in a continuous flow production affects the overall performance. Typically, the analysis of rework and scrap is done from a wasted-material cost perspective and not from the perspective of wasted capacity that reduces throughput and economic value added (EVA). Also, we cannot find many cases of this application in agro-industrial production in Latin America, given the complexity of the calculations and the need for robust applications. This scientific work presents the results of a quasi-experimental research approach in order to explain how to apply DOE methods and Markov analysis in a rice production process located in Central America, evaluating the global effects of a single reduction in rework and scrap in one part of the whole line. The results show that in this case it is possible to evaluate benefits from a global throughput and EVA perspective and not only from a cost-savings perspective, finding a relationship between operational indicators and corporate performance. However, it was found that it is necessary to analyze the Markov chain configuration with many rework points, and it is still relevant to take into account the effects on takt time and not only scrap costs.
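Rework analysis of the kind the abstract describes rests on absorbing Markov chains; a minimal sketch of the standard fundamental matrix computation, with made-up transition probabilities for a hypothetical two-station line:

```python
import numpy as np

# Transition probabilities between transient (processing) states only.
# The remaining probability mass at each station goes to the absorbing
# states "good output" and "scrap".
Q = np.array([
    [0.05, 0.90],   # station 1: 5% reworked in place, 90% forwarded to station 2
    [0.10, 0.08],   # station 2: 10% sent back to station 1, 8% reworked in place
])

# Fundamental matrix N = (I - Q)^-1: N[i, j] is the expected number of
# visits to station j for a unit that starts at station i, so values
# above 1 quantify capacity consumed by rework loops.
N = np.linalg.inv(np.eye(2) - Q)
expected_visits = N[0]   # expected passes per station, per unit entering at station 1
```

Multiplying these expected visit counts by per-pass processing times gives the effective capacity loss that a pure scrap-cost view misses.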

  9. A Regional Guidebook for Applying the Hydrogeomorphic Approach to Assessing Wetland Functions of Prairie Potholes

    National Research Council Canada - National Science Library

    Gilbert, Michael C; Whited, P. M; Clairain, Jr., Ellis J; Smith, R. D

    2006-01-01

    .... The HGM Approach was initially designed to be used in the context of the Clean Water Act, Section 404 Regulatory Program, permit review to analyze project alternatives, minimize impacts, assess...

  10. Integrated modelling of nitrate loads to coastal waters and land rent applied to catchment scale water management

    DEFF Research Database (Denmark)

    Jacosen, T.; Refsgaard, A.; Jacobsen, Brian H.

    Abstract The EU WFD requires an integrated approach to river basin management in order to meet environmental and ecological objectives. This paper presents concepts and full-scale application of an integrated modelling framework. The Ringkoebing Fjord basin is characterized by intensive agricultural production, and leakage of nitrate constitutes a major pollution problem with respect to groundwater aquifers (drinking water), fresh surface water systems (water quality of lakes) and coastal receiving waters (eutrophication). The case study presented illustrates an advanced modelling approach applied in river basin management. Point sources (e.g. sewage treatment plant discharges) and distributed diffuse sources (nitrate leakage) are included to provide a modelling tool capable of simulating pollution transport from source to recipient, to analyse effects of specific, localized basin water management...

  11. An Approach from Bankruptcy Rules Applied to the Apportionment Problem in Proportional Electoral Systems

    Directory of Open Access Journals (Sweden)

    Joaquín Sánchez-Soriano

    2016-01-01

    Full Text Available (Discrete) bankruptcy problems associated with apportionment problems are defined. The authors study which allocations for apportionment problems are obtained when (discrete) bankruptcy rules are applied to the associated bankruptcy problems. They show that the (discrete) constrained equal losses rule coincides with the greatest remainder method for apportionment problems. Furthermore, new properties related to governability are proposed for apportionment methods. Finally, several apportionment methods satisfying governability properties are applied to the case of the Spanish elections in 2015. (original abstract)
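The greatest remainder method mentioned in the abstract is straightforward to implement; a sketch with invented vote counts:

```python
from math import floor

def greatest_remainder(votes, seats):
    """Greatest remainder (Hare quota) apportionment of seats to parties."""
    quota = sum(votes) / seats                 # votes needed per whole seat
    shares = [v / quota for v in votes]
    alloc = [floor(s) for s in shares]         # each party's whole seats
    # Hand out leftover seats to the largest fractional remainders
    by_remainder = sorted(range(len(votes)),
                          key=lambda i: shares[i] - alloc[i], reverse=True)
    for i in by_remainder[:seats - sum(alloc)]:
        alloc[i] += 1
    return alloc

allocation = greatest_remainder([47000, 16000, 15800, 12000, 6100, 3100], 10)
# → [5, 2, 1, 1, 1, 0]
```

The paper's result says this allocation coincides with what the (discrete) constrained equal losses bankruptcy rule produces on the associated bankruptcy problem.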

  12. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    OpenAIRE

    Amany AlShawi

    2016-01-01

    Presently, the popularity of cloud computing is increasing steadily. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. From the findings of the research, it was observed that security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers...

  13. Applying the ISO 9126 Model to the Evaluation of an E-learning System in Iran

    OpenAIRE

    Hossein Pedram; Davood Karimzadegan Moghaddam; Zhaleh Asheghi

    2012-01-01

    One of the models presented in the field of e-learning quality systems is the ISO 9126 model, which was applied in this research to evaluate the e-learning system of Amirkabir University. This model provides six main variables for evaluation, each of which is measured through several indicators. The ISO 9126 model parameters were accordingly turned into a questionnaire, which was distributed and completed among a sample of 120 experts and students of Amirkabir University. Based on the re...

  14. Identification of water quality degradation hotspots in developing countries by applying large scale water quality modelling

    Science.gov (United States)

    Malsy, Marcus; Reder, Klara; Flörke, Martina

    2014-05-01

    Decreasing water quality is one of the main global issues posing risks to food security, the economy, and public health, and is consequently crucial for ensuring environmental sustainability. During the last decades access to clean drinking water has increased, but 2.5 billion people still do not have access to basic sanitation, especially in Africa and parts of Asia. In this context, not only connection to a sewage system is of high importance, but also treatment, as an increasing connection rate will lead to higher loadings and therefore higher pressure on water resources. Furthermore, poor people in developing countries use local surface waters for daily activities, e.g. bathing and washing. It is thus clear that water utilization and sewerage are inseparably connected. In this study, large-scale water quality modelling is used to point out hotspots of water pollution and to gain insight into potential environmental impacts, in particular in regions with a low observation density and data gaps in measured water quality parameters. We applied the global water quality model WorldQual to calculate biological oxygen demand (BOD) loadings from point and diffuse sources, as well as in-stream concentrations. The regional focus of this study is on developing countries, i.e. Africa, Asia, and South America, as they are most affected by water pollution. Model runs were conducted for the year 2010 to draw a picture of the recent status of surface water quality and to identify hotspots and the main causes of pollution. First results show that hotspots mainly occur in highly agglomerated regions where population density is high. Large urban areas are the main loading hotspots, and pollution prevention and control become increasingly important as point sources are subject to connection rates and treatment levels. Furthermore, river discharge plays a crucial role due to its dilution potential, especially in terms of seasonal variability. Highly varying shares of BOD sources across

  15. A multi-model ensemble approach to seabed mapping

    Science.gov (United States)

    Diesing, Markus; Stephens, David

    2015-06-01

    Seabed habitat mapping based on swath acoustic data and ground-truth samples is an emergent and active marine science discipline. Significant progress could be achieved by transferring techniques and approaches that have been successfully developed and employed in fields such as terrestrial land cover mapping. One such promising approach is the multiple classifier system, which aims at improving classification performance by combining the outputs of several classifiers. Here we present results of a multi-model ensemble applied to multibeam acoustic data covering more than 5000 km² of seabed in the North Sea, with the aim of deriving accurate spatial predictions of seabed substrate. A suite of six machine learning classifiers (k-Nearest Neighbour, Support Vector Machine, Classification Tree, Random Forest, Neural Network and Naïve Bayes) was trained with ground-truth sample data classified into seabed substrate classes, and their prediction accuracy was assessed with an independent set of samples. The three and five best-performing models were combined into classifier ensembles. Both ensembles led to increased prediction accuracy compared to the best-performing single classifier. The improvements were, however, not statistically significant at the 5% level. Although the three-model ensemble did not perform significantly better than its individual component models, we noticed that the five-model ensemble did perform significantly better than three of its five component models. A classifier ensemble might therefore be an effective strategy to improve classification performance. Another advantage is the fact that the agreement in predicted substrate class between the individual models of the ensemble could be used as a measure of confidence. We propose a simple and spatially explicit measure of confidence that is based on model agreement and prediction accuracy.
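The ensemble strategy and agreement-based confidence measure described above can be sketched with scikit-learn on synthetic data; the classifier choice and dataset here are illustrative assumptions, not the study's acoustic features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for acoustic predictors and substrate classes (binary here)
X, y = make_classification(n_samples=300, n_features=6, random_state=0)
X_train, y_train, X_test = X[:200], y[:200], X[200:]

models = [KNeighborsClassifier(),
          DecisionTreeClassifier(random_state=0),
          GaussianNB()]
preds = np.array([m.fit(X_train, y_train).predict(X_test) for m in models])

# Majority vote across the three classifiers (classes coded 0/1)
vote = (preds.mean(axis=0) >= 0.5).astype(int)
# Fraction of models agreeing with the vote: a per-sample confidence measure
agreement = (preds == vote).mean(axis=0)
```

Mapped back onto a spatial grid, the `agreement` values give exactly the kind of spatially explicit confidence layer the abstract proposes.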

  16. Effects of undercutting and sliding on calving: a global approach applied to Kronebreen, Svalbard

    Science.gov (United States)

    Vallot, Dorothée; Åström, Jan; Zwinger, Thomas; Pettersson, Rickard; Everett, Alistair; Benn, Douglas I.; Luckman, Adrian; van Pelt, Ward J. J.; Nick, Faezeh; Kohler, Jack

    2018-02-01

    In this paper, we study the effects of basal friction, sub-aqueous undercutting and glacier geometry on the calving process by combining six different models in an offline-coupled workflow: a continuum-mechanical ice flow model (Elmer/Ice), a climatic mass balance model, a simple subglacial hydrology model, a plume model, an undercutting model and a discrete particle model to investigate fracture dynamics (Helsinki Discrete Element Model, HiDEM). We demonstrate the feasibility of reproducing the observed calving retreat at the front of Kronebreen, a tidewater glacier in Svalbard, during a melt season by using the output from the first five models as input to HiDEM. Basal sliding and glacier motion are addressed using Elmer/Ice, while calving is modelled by HiDEM. A hydrology model calculates subglacial drainage paths and indicates two main outlets with different discharges. Depending on the discharge, the plume model computes frontal melt rates, which are iteratively projected to the actual front of the glacier at subglacial discharge locations. This produces undercutting of different sizes, as melt is concentrated close to the surface for high discharge and is more diffuse for low discharge. By testing different configurations, we show that undercutting plays a key role in glacier retreat and is necessary to reproduce observed retreat in the vicinity of the discharge locations during the melting season. Calving rates are also influenced by basal friction, through its effects on near-terminus strain rates and ice velocity.

  17. Artificial Life of Soybean Plant Growth Modeling Using Intelligence Approaches

    Directory of Open Access Journals (Sweden)

    Atris Suyantohadi

    2010-03-01

    Full Text Available The natural process of plant growth is a complex system whose characteristics can be studied with intelligent approaches within an artificial life framework. In this research, the natural growth process of soybean (Glycine max (L.) Merr.) was analysed and synthesised through modelling with Artificial Neural Network (ANN) and Lindenmayer System (L-System) methods. The research aimed to design and visualise growth models for soybean varieties, supporting the botanical study of plant growth as a function of fertilizer compositions of Nitrogen (N), Phosphorus (P) and Potassium (K). Soybean growth was analysed under different fertilizer-composition treatments in experimental trials in order to develop the growth model; the best N, P, K composition yielded the highest production of 2.074 tons/hectare. Using these models, artificial life simulations identifying and visualising the characteristics of soybean plant growth could be demonstrated and applied.
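An L-System, as used in the study above, generates plant-like structures by repeatedly rewriting each symbol of a string according to production rules. The sketch below shows the core rewriting step only; the specific axiom and branching rule are illustrative, not the ones fitted to the soybean data.

```python
def lsystem(axiom, rules, iterations):
    """Apply L-System production rules to the axiom for a number of iterations.

    Symbols without a rule are copied unchanged. Brackets [ and ] are the
    conventional push/pop symbols used to draw branches when the string is
    later interpreted by a turtle-graphics renderer.
    """
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s


# A simple branching rule producing a plant-like skeleton:
rules = {"F": "F[+F]F[-F]F"}
skeleton = lsystem("F", rules, 2)
```

Each iteration replaces every `F` (a stem segment) with a branched motif, so the string, and hence the rendered plant, grows in complexity with each growth step.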

  18. Applying MDA to SDR for Space to Model Real-time Issues

    Science.gov (United States)

    Blaser, Tammy M.

    2007-01-01

    NASA space communications systems face the challenge of designing software-defined radios (SDRs) with highly constrained Size, Weight and Power (SWaP) resources. A study is being conducted to assess the effectiveness of applying the Model-Driven Architecture (MDA) Platform-Independent Model (PIM) and one or more Platform-Specific Models (PSMs) specifically to NASA space-domain real-time issues. This paper summarizes our experiences with applying MDA to SDR for Space to model real-time issues. The real-time issues to be examined, measured, and analyzed are: meeting waveform timing requirements, efficiently applying Real-time Operating System (RTOS) scheduling algorithms, applying safety control measures, and verifying SWaP. Real-time waveform algorithms benchmarked under worst-case environmental conditions and the heaviest workload will drive the SDR for Space real-time PSM design.
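The abstract names RTOS scheduling as one of the real-time issues to model. A standard schedulability check for fixed-priority, rate-monotonic scheduling is the Liu-Layland utilization bound; the sketch below illustrates that test only, and is not drawn from the study itself.

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) rate-monotonic schedulability test.

    tasks : list of (worst_case_execution_time, period) pairs.
    Returns True when total CPU utilization is at or below the
    Liu-Layland bound n * (2**(1/n) - 1) for n periodic tasks.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1.0 / n) - 1)


# Two hypothetical waveform tasks: (execution time, period) in ms.
ok = rm_schedulable([(1, 4), (1, 8)])       # utilization 0.375
overloaded = rm_schedulable([(3, 4), (3, 8)])  # utilization 1.125
```

Because the test is only sufficient, a task set that fails it may still be schedulable; an exact response-time analysis would be needed for worst-case timing guarantees of the kind the study benchmarks.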

  19. Agribusiness model approach to territorial food development

    Directory of Open Access Journals (Sweden)

    Murcia Hector Horacio

    2011-04-01

    Full Text Available

    Several research efforts have been coordinated by the academic program of Agricultural Business Management at the University De La Salle (Bogota D.C.) toward the design and implementation of a sustainable agribusiness model applied to food development with territorial projection. Rural development is considered a process that aims to improve the current capacity and potential of the inhabitants of the sector, and it refers not only to production levels and productivity of agricultural items. It takes into account the guidelines of the United Nations "Millennium Development Goals" and the concept of sustainable food and agriculture development, including food security and nutrition, in an integrated, interdisciplinary context with a holistic and systemic dimension. The analysis is specified by a model emphasizing sustainable agribusiness production chains of agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension, food development concepts and methodologies of Participatory Action Research (PAR) are considered. Finally, the work addresses the need to link the results to low-income communities within the concepts of the "new rurality".

  20. Applying Research to Teacher Education: The University of Utah's Collaborative Approach. First Year Preliminary Report.

    Science.gov (United States)

    Driscoll, Amy

    In 1983, the National Institute of Education funded the Far West Laboratory for Educational Research and Development to conduct a study, Applying Research to Teacher Education (ARTE) Research Utilization in Elementary Teacher Education (RUETE). The ARTE:RUETE study's purpose is to develop preservice instruction incorporating current research…