WorldWideScience

Sample records for modelling approach applied

  1. Applied Regression Modeling: A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital for modeling the relationship between a response variable and one or more predictor variables, as well as for predicting a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a

  2. Continuous Molecular Fields Approach Applied to Structure-Activity Modeling

    CERN Document Server

    Baskin, Igor I

    2013-01-01

    The Method of Continuous Molecular Fields is a universal approach to predict various properties of chemical compounds, in which molecules are represented by means of continuous fields (such as electrostatic, steric, electron density functions, etc.). The essence of the proposed approach consists in performing statistical analysis of functional molecular data by means of joint application of kernel machine learning methods and special kernels which compare molecules by computing overlap integrals of their molecular fields. This approach is an alternative to traditional methods of building 3D structure-activity and structure-property models based on the use of fixed sets of molecular descriptors. The methodology of the approach is described in this chapter, followed by its application to building regression 3D-QSAR models and conducting virtual screening based on one-class classification models. The main directions of the further development of this approach are outlined at the end of the chapter.

  3. Modular Modelling and Simulation Approach - Applied to Refrigeration Systems

    DEFF Research Database (Denmark)

    Sørensen, Kresten Kjær; Stoustrup, Jakob

    2008-01-01

    This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...

  4. Modeling in applied sciences: a kinetic theory approach

    CERN Document Server

    Pulvirenti, Mario

    2000-01-01

    Modeling complex biological, chemical, and physical systems, in the context of spatially heterogeneous media, is a challenging task for scientists and engineers using traditional methods of analysis. Modeling in Applied Sciences is a comprehensive survey of modeling large systems using kinetic equations, and in particular the Boltzmann equation and its generalizations. An interdisciplinary group of leading authorities carefully develop the foundations of kinetic models and discuss the connections and interactions between model theories, qualitative and computational analysis, and real-world applications. This book provides a thoroughly accessible and lucid overview of the different aspects, models, computations, and methodology for the kinetic-theory modeling process. Topics and Features: * Integrated modeling perspective utilized in all chapters * Fluid dynamics of reacting gases * Self-contained introduction to kinetic models * Becker–Döring equations * Nonlinear kinetic models with chemical reactions * Kinet...

  5. Validation of models with constant bias: an applied approach

    Directory of Open Access Journals (Sweden)

    Salvador Medina-Peralta

    2014-06-01

    Full Text Available Objective. This paper presents extensions to the statistical validation method based on the procedure of Freese for the case where a model shows constant bias (CB) in its predictions, and illustrates the method with data from a new mechanistic model that predicts weight gain in cattle. Materials and methods. The extensions were the hypothesis tests and maximum anticipated error for the alternative approach, and the confidence interval for a quantile of the distribution of errors. Results. The model evaluated showed CB; once the CB is removed, and with a confidence level of 95%, the magnitude of the error does not exceed 0.575 kg. Therefore, the validated model can be used to predict the daily weight gain of cattle, although it will require an adjustment in its structure to account for the presence of CB and increase the accuracy of its forecasts. Conclusions. The confidence interval for the 1-α quantile of the distribution of errors after correcting the constant bias allows determining the upper limit for the magnitude of the prediction error and using it to evaluate the evolution of the model in forecasting the system. The confidence interval approach to validating a model is more informative than hypothesis tests for the same purpose.
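The correction described in this abstract can be sketched in a few lines (a hedged illustration, not the authors' code; the data and the `error_bound` helper are hypothetical): estimate the constant bias as the mean error, remove it, and take an empirical upper quantile of the absolute corrected errors as a bound on the magnitude of prediction error.

```python
import math

def error_bound(observed, predicted, alpha=0.05):
    """Constant-bias correction and empirical (1 - alpha) error bound."""
    errors = [o - p for o, p in zip(observed, predicted)]
    bias = sum(errors) / len(errors)            # constant bias (CB)
    corrected = [e - bias for e in errors]      # errors with CB removed
    abs_err = sorted(abs(e) for e in corrected)
    # index of the empirical (1 - alpha) quantile of |error|
    k = min(len(abs_err) - 1, math.ceil((1 - alpha) * len(abs_err)) - 1)
    return bias, abs_err[k]

# toy weight-gain data (kg): the model under-predicts by a constant ~0.35 kg
obs = [10.2, 11.1, 9.8, 10.5, 10.9, 11.4]
pred = [9.9, 10.7, 9.5, 10.1, 10.6, 11.0]
bias, bound = error_bound(obs, pred)
```

With real data one would use a proper quantile estimator with its confidence interval, as the paper does; the sketch only shows the mechanics of removing CB before bounding the error.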

  6. A Model-based Prognostics Approach Applied to Pneumatic Valves

    Directory of Open Access Journals (Sweden)

    Matthew J. Daigle

    2011-01-01

    Full Text Available Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction; therefore, algorithms are employed that help manage these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
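A minimal bootstrap particle filter in the spirit of the model-based prognostics described above (an illustrative sketch, not the authors' implementation; the degradation model, noise levels and failure threshold are all hypothetical): a scalar damage state grows over time, noisy measurements re-weight and resample the particles, and remaining useful life is the time for the mean state to reach a failure threshold.

```python
import math, random

def predict(particles, rate=0.1, proc_std=0.01):
    """Propagate each particle through a toy linear degradation model."""
    return [x + rate + random.gauss(0, proc_std) for x in particles]

def update(particles, z, meas_std=0.05):
    """Weight particles by measurement likelihood, then resample."""
    w = [math.exp(-0.5 * ((z - x) / meas_std) ** 2) for x in particles]
    total = sum(w) or 1e-300
    w = [wi / total for wi in w]
    return random.choices(particles, weights=w, k=len(particles))

def remaining_life(particles, threshold=1.0, rate=0.1):
    """Naive RUL estimate: steps until the mean state hits the threshold."""
    mean = sum(particles) / len(particles)
    return max(0.0, (threshold - mean) / rate)

random.seed(0)
particles = [random.gauss(0.0, 0.02) for _ in range(500)]
truth = 0.0
for _ in range(5):
    truth += 0.1                        # true damage growth per step
    z = truth + random.gauss(0, 0.05)   # noisy sensor measurement
    particles = update(predict(particles), z)

estimate = sum(particles) / len(particles)
```

A real prognostics filter would propagate the particles forward under the physics model to obtain a full end-of-life distribution rather than this single-point RUL estimate.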

  7. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    Science.gov (United States)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction; therefore, algorithms are employed that help manage these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.

  8. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  9. A Model-based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  10. Comparison of various modelling approaches applied to cholera case data

    CSIR Research Space (South Africa)

    Van Den Bergh, F

    2008-06-01

    Full Text Available The application of a methodology that proposes the use of spectral methods to inform the development of statistical forecasting models for cholera case data is explored in this paper. The seasonal behaviour of the target variable (cholera cases...

  11. An Analytical Model for Learning: An Applied Approach.

    Science.gov (United States)

    Kassebaum, Peter Arthur

    A mediated-learning package, geared toward non-traditional students, was developed for use in the College of Marin's cultural anthropology courses. An analytical model for learning was used in the development of the package, utilizing concepts related to learning objectives, programmed instruction, Gestalt psychology, cognitive psychology, and…

  12. A comparison of various modelling approaches applied to Cholera ...

    African Journals Online (AJOL)

    in the ability to assess results objectively via significance testing and other ... The focus in this paper is on the model fitting component, and not on the .... A popular way to gain insight into the dominant frequencies of a signal is to ... may involve noise generated by autoregressive processes, alternative algorithms such as.
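The snippet above mentions spectral methods for finding the dominant frequencies of a signal such as seasonal case data. A minimal periodogram via a direct DFT shows the idea (illustrative only; the series is synthetic, and a real analysis would use an FFT routine and handle autoregressive noise, as the snippet notes):

```python
import math

def periodogram(x):
    """Power at each Fourier frequency of a mean-removed series (direct DFT)."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    power = []
    for k in range(1, n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power.append((k, re * re + im * im))
    return power

# synthetic monthly case counts with a 12-month seasonal cycle
series = [100 + 30 * math.sin(2 * math.pi * t / 12) for t in range(48)]
dominant_k = max(periodogram(series), key=lambda kv: kv[1])[0]
cycle_length = 48 / dominant_k          # dominant period, in months
```

The peak frequency index then informs which seasonal terms to include in the forecasting model.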

  13. A generalised chemical precipitation modelling approach in wastewater treatment applied to calcite

    DEFF Research Database (Denmark)

    Mbamba, Christian Kazadi; Batstone, Damien J.; Flores Alsina, Xavier

    2015-01-01

    , the present study aims to identify a broadly applicable precipitation modelling approach. The study uses two experimental platforms applied to calcite precipitating from synthetic aqueous solutions to identify and validate the model approach. Firstly, dynamic pH titration tests are performed to define...... of the mineral particulate state (Xcryst) and, for calcite, have a 2nd order dependency (exponent n ¼ 2.05 ± 0.29) on thermodynamic supersaturation (s). Parameter analysis indicated that the model was more tolerant to a fast kinetic coefficient (kcryst) and so, in general, it is recommended that a large kcryst...

  14. Assessing switchability for biosimilar products: modelling approaches applied to children's growth.

    Science.gov (United States)

    Belleli, Rossella; Fisch, Roland; Renard, Didier; Woehling, Heike; Gsteiger, Sandro

    2015-01-01

    The present paper describes two statistical modelling approaches that have been developed to demonstrate switchability from the original recombinant human growth hormone (rhGH) formulation (Genotropin®) to a biosimilar product (Omnitrope®) in children suffering from growth hormone deficiency. Demonstrating switchability between rhGH products is challenging because the process of growth varies with the age of the child and across children. The first modelling approach aims at predicting individual height measured at several time-points after switching to the biosimilar. The second modelling approach provides an estimate of the deviation from the overall growth rate after switching to the biosimilar, which can be regarded as an estimate of switchability. The results after applying these approaches to data from a randomized clinical trial are presented. The accuracy and precision of the predictions made using the first approach and the small deviation from switchability estimated with the second approach provide sufficient evidence to conclude that switching from Genotropin® to Omnitrope® has a very small effect on growth, which is neither statistically significant nor clinically relevant.

  15. Addressing dependability by applying an approach for model-based risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjorn Axel [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: bjorn.axel.gran@hrp.no; Fredriksen, Rune [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: rune.fredriksen@hrp.no; Thunem, Atoosa P.-J. [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: atoosa.p-j.thunem@hrp.no

    2007-11-15

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied for addressing different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects such as system functions, component behaviours and intercommunications must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast to this, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and provided, through a series of seven trials, a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of a trust case development.

  16. Positive Mathematical Programming Approaches – Recent Developments in Literature and Applied Modelling

    Directory of Open Access Journals (Sweden)

    Thomas Heckelei

    2012-05-01

    Full Text Available This paper reviews and discusses the more recent literature and application of Positive Mathematical Programming in the context of agricultural supply models. Specifically, advances in the empirical foundation of parameter specifications as well as the economic rationalisation of PMP models – both criticized in earlier reviews – are investigated. Moreover, the paper provides an overview of a larger set of models with regular/repeated policy application that apply variants of PMP. Results show that most applications today avoid arbitrary parameter specifications and rely on exogenous information on supply responses to calibrate model parameters. However, only a few approaches use multiple observations to estimate parameters, which is likely due to the still considerable technical challenges associated with it. Equally, we found only limited reflection on the behavioural or technological assumptions that could rationalise the PMP model structure while still preserving the model's advantages.
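The core PMP calibration idea reviewed above can be illustrated in its simplest form (a hedged single-activity sketch with hypothetical numbers; real PMP applications recover the nonlinear cost term from the duals of calibration constraints in a linear program): choose a quadratic cost parameter so that the model's first-order condition exactly reproduces the observed activity level.

```python
def calibrate_q(gross_margin, observed_level):
    """PMP step: pick q so that marginal profit c - q*x is zero at x0."""
    return gross_margin / observed_level

def optimal_level(gross_margin, q):
    """Maximise c*x - (q/2)*x**2, giving x* = c / q."""
    return gross_margin / q

c, x0 = 500.0, 40.0                 # margin (EUR/ha) and observed hectares
q = calibrate_q(c, x0)
assert optimal_level(c, q) == x0    # calibrated model reproduces x0 exactly

# smooth supply response to a 10% margin increase under the calibrated model
x_new = optimal_level(1.1 * c, q)
```

The calibrated quadratic term is what gives PMP models their smooth, non-corner supply responses; the criticism in the literature concerns how q is pinned down when only one observation is available.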

  17. [Aquatic ecosystem modelling approach: temperature and water quality models applied to Oualidia and Nador lagoons].

    Science.gov (United States)

    Idrissi, J Lakhdar; Orbi, A; Hilmi, K; Zidane, F; Moncef, M

    2005-07-01

    The objective of this work is to develop an aquatic ecosystem model and apply it to Moroccan lagoon systems. This model will keep us abreast of the yearly development of the main parameters that characterize these ecosystems while integrating all the data that have so far been acquired. Within this framework, a simulation model of the thermal regime and a model of the water quality have been elaborated. These models, which have been run for the lagoon of Oualidia (north of Morocco) and validated on the lagoon of Nador (north-west Mediterranean), make it possible to forecast the cycles of surface temperature and the water quality parameters (dissolved oxygen and phytoplankton biomass) by using meteorological information, site-specific features and in situ measurements at the studied sites. The elaborated model, called zero-dimensional, simulates the average behaviour of the site over time for state variables representative of the studied ecosystem. This model provides answers about the studied phenomena and, thanks to its numerical simplicity, is a convenient working tool.
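A zero-dimensional model of the kind described resolves only the lagoon-averaged state. A toy lumped heat-balance sketch shows the structure (all coefficients hypothetical, chosen only for illustration): water temperature relaxes toward air temperature at a rate set by an exchange coefficient, forced by a diurnal solar term, and integrated with forward Euler.

```python
import math

def simulate(t_water0=18.0, hours=48, dt=1.0, k_exchange=0.05, solar_amp=0.3):
    """Forward-Euler integration of a lumped (zero-dimensional) heat balance."""
    t = t_water0
    series = []
    for h in range(hours):
        t_air = 20.0 + 5.0 * math.sin(2 * math.pi * h / 24)   # diurnal air temp
        solar = max(0.0, solar_amp * math.sin(2 * math.pi * h / 24))
        t += dt * (k_exchange * (t_air - t) + solar)          # Euler step
        series.append(t)
    return series

temps = simulate()
```

A real lagoon model would drive the balance with measured meteorological forcing and couple temperature to the oxygen and phytoplankton equations.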

  18. Fuel moisture content estimation: a land-surface modelling approach applied to African savannas

    Science.gov (United States)

    Ghent, D.; Spessa, A.; Kaduk, J.; Balzter, H.

    2009-04-01

    Despite the importance of fire to the global climate system, in terms of emissions from biomass burning, ecosystem structure and function, and changes to surface albedo, current land-surface models do not adequately estimate key variables affecting fire ignition and propagation. Fuel moisture content (FMC) is considered one of the most important of these variables (Chuvieco et al., 2004). Biophysical models, with appropriate plant functional type parameterisations, are the most viable option to adequately predict FMC over continental scales at high temporal resolution. However, the complexity of plant-water interactions, and the variability associated with short-term climate changes, means it is one of the most difficult fire variables to quantify and predict. Our work attempts to resolve this issue using a combination of satellite data and biophysical modelling applied to Africa. The approach we take is to represent live FMC as a surface dryness index, expressed as the ratio between the Normalised Difference Vegetation Index (NDVI) and land-surface temperature (LST). It has been argued in previous studies (Sandholt et al., 2002; Snyder et al., 2006) that this ratio displays a statistically stronger correlation to FMC than either of the variables considered separately. In this study, simulated FMC is constrained through the assimilation of remotely sensed LST and NDVI data into the land-surface model JULES (Joint UK Land Environment Simulator). Previous modelling studies of fire activity in African savannas, such as Lehsten et al. (2008), have reported significant levels of uncertainty associated with the simulations. This uncertainty is important because African savannas are among the most frequently burnt ecosystems and are a major source of greenhouse trace gases and aerosol emissions (Scholes et al., 1996). Furthermore, regional climate model studies indicate that many parts of the African savannas will experience drier and warmer conditions in the future
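The surface dryness index described above is simply the per-pixel ratio of NDVI to LST. A toy computation (pixel values are hypothetical; real inputs would be gridded satellite retrievals):

```python
def dryness_index(ndvi, lst_kelvin):
    """NDVI / LST ratio used as a proxy for live fuel moisture content."""
    if lst_kelvin <= 0:
        raise ValueError("LST must be positive (kelvin)")
    return ndvi / lst_kelvin

# illustrative dry-season vs wet-season savanna pixels
dry = dryness_index(0.25, 315.0)    # sparse vegetation, hot surface
wet = dryness_index(0.60, 298.0)    # dense vegetation, cooler surface
```

Lower values of the index correspond to drier fuel, which is the relationship the study exploits when constraining simulated FMC.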

  19. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-04-01

    Full Text Available Applications of data assimilation techniques have been widely used to improve hydrologic prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on Markov chain Monte Carlo (MCMC) is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP, is implemented for the sequential data assimilation through the updating of state variables. Particle filtering is parallelized and implemented in a multi-core computing environment via the message passing interface (MPI). We compare the performance of the particle filters in terms of model efficiency, predictive QQ plots and particle diversity. The improvement of model efficiency and the preservation of particle diversity are found in the lagged regularized particle filter.

  20. Testing of kinetic models: usefulness of the multiresponse approach as applied to chlorophyll degradation in foods

    NARCIS (Netherlands)

    Boekel, van M.A.J.S.

    1999-01-01

    Cascades of reactions, in which several reactants and products take part, frequently occur in foods. This work shows that kinetic modelling of such reactions having parameters in common is much more powerful when using a multiresponse rather than a uniresponse approach (i.e. analysing more than one
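The multiresponse idea above can be illustrated on a first-order step A → B, where the rate constant k appears in both the decay of A and the formation of B, so fitting both responses jointly uses more information than fitting A alone (a hedged sketch on noiseless synthetic data; a grid search keeps it dependency-free, whereas the paper uses proper multiresponse regression):

```python
import math

A0, k_true = 1.0, 0.30
times = [0, 1, 2, 4, 8]
conc_A = [A0 * math.exp(-k_true * t) for t in times]    # reactant data
conc_B = [A0 - a for a in conc_A]                       # product data

def sse(k, multiresponse=True):
    """Sum of squared errors; the rate constant k is shared by A and B."""
    err = 0.0
    for t, a, b in zip(times, conc_A, conc_B):
        a_hat = A0 * math.exp(-k * t)
        err += (a - a_hat) ** 2
        if multiresponse:               # also fit the product response
            err += (b - (A0 - a_hat)) ** 2
    return err

# grid search for the best-fitting shared rate constant
grid = [i / 1000 for i in range(1, 1000)]
k_fit = min(grid, key=sse)
```

With noisy data the joint (multiresponse) objective constrains the shared parameter more tightly than the single-response one, which is the point the abstract makes.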

  1. Investigation of Fire Growth and Spread in a Model-Scale Railcar Using an Applied Approach

    Directory of Open Access Journals (Sweden)

    Ali Kazemipour

    2016-01-01

    Full Text Available Fire is a potential hazard in public transportation facilities such as subways or road tunnels due to its contribution to a high number of deaths. To provide insight into fire development behavior in tunnels, which can serve as the basis for emergency ventilation design, a model-scale railcar fire is explored numerically in this research. Fire growth and spread are investigated by analyzing the HRR curve as the representative of fire behavior in different stages. Fire development has been predicted through a new approach using an Arrhenius-based pyrolysis model, established to predict the decomposition behavior of solid flammable materials exposed to heat flux. Using this approach, the model-scale railcar fire curve is obtained and compared with experimental data. Reasonable agreement is achieved in the two important stages of flashover and fully developed fire, confirming the accuracy of the presented approach. Moreover, the effects of railcar material type, the amount of available air, and the surroundings are also discussed. Detailed illustrations of physical phenomena and flow structures have been provided and justified with experimental findings for a better description of railcar fire behavior. The presented approach can be further used in other applications such as investigation of fire spread in a compartment, studying fire spread from a burning vehicle to another, and reconstruction of fire incidents.

  2. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-10-01

    Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process with the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows, due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.

  3. Developing scenarios to assess future landslide risks: a model-based approach applied to mountainous regions

    Science.gov (United States)

    Vacquie, Laure; Houet, Thomas

    2016-04-01

    In the last century, European mountain landscapes have experienced significant transformations. Natural and anthropogenic changes, climate change, touristic and industrial development, socio-economic interactions, and their implications in terms of LUCC (land use and land cover changes) have directly influenced the spatial organization and vulnerability of mountain landscapes. This study is conducted as part of the SAMCO project funded by the French National Science Agency (ANR). It aims at developing a methodological approach, combining various tools, modelling platforms and methods, to identify regions vulnerable to landslide hazards, accounting for future LUCC. It presents an integrated approach combining participative scenarios and a LUCC simulation model to assess the combined effects of LUCC and climate change on landslide risks in the Cauterets valley (French Pyrenees) up to 2100. Through vulnerability and risk mapping, the objective is to gather information to support landscape planning and implement land use strategies with local stakeholders for risk management. Four contrasting scenarios are developed, exhibiting contrasting trajectories of socio-economic development. Prospective scenarios are based on national and international socio-economic contexts relying on existing assessment reports. The methodological approach integrates knowledge from local stakeholders to refine each scenario during its construction and to reinforce its plausibility and relevance by accounting for local specificities, e.g. logging and pastoral activities, touristic development, urban planning, etc. A process-based model, the Forecasting Scenarios for Mountains (ForeSceM) model, developed on the Dinamica EGO modelling platform, is used to spatially allocate future LUCC for each prospective scenario. Concurrently, a spatial decision support tool, i.e. the SYLVACCESS model, is used to identify accessible areas for forestry in scenarios projecting logging

  4. Economic and ecological impacts of bioenergy crop production—a modeling approach applied in Southwestern Germany

    Directory of Open Access Journals (Sweden)

    Hans-Georg Schwarz-v. Raumer

    2017-03-01

    Full Text Available This paper considers scenarios of cultivating energy crops in the German federal state of Baden-Württemberg to identify potentials and limitations of sustainable bioenergy production. Trade-offs are analyzed among income and production structure in agriculture, bioenergy crop production, greenhouse gas emissions, and the interests of soil, water and species habitat protection. An integrated modelling approach (IMA) was implemented, coupling ecological and economic models in a model chain. IMA combines the Economic Farm Emission Model (EFEM; key input: parameter sets on farm production activities), the Environmental Policy Integrated Climate model (EPIC; key input: parameter sets on environmental cropping effects) and GIS geo-processing models. EFEM is a supply model that maximizes total gross margins at farm level with simultaneous calculation of greenhouse gas emissions from agricultural production. Calculations by EPIC result in estimates for soil erosion by water, nitrate leaching, soil organic carbon and greenhouse gas emissions from soil. GIS routines provide land suitability analyses and scenario settings concerning nature conservation and habitat models for target species, and help to produce spatially explicit results. The model chain is used to calculate scenarios representing different intensities of energy crop cultivation. To design scenarios which are detailed and in step with practice, comprehensive data research as well as fact and effect analyses were carried out. The scenarios indicate that, not in general but for specific farm types, the energy crop share increases sharply if not restricted, leading to an increase in income. Where this happens, it leads to significant increases in soil erosion by water, nitrate leaching and greenhouse gas emissions. It is to be expected that an extension of nature conservation leads to an intensification of the remaining grassland and of the arable land that was not part of nature conservation measures

  5. Effective site-energy model: A thermodynamic approach applied to size-mismatched alloys

    Science.gov (United States)

    Berthier, F.; Creuze, J.; Legrand, B.

    2017-06-01

    We present a novel energetic model that takes into account atomistic relaxations to describe the thermodynamic properties of A_cB_{1-c} binary alloys. It requires the calculation of the energies on each site of a random solid solution after relaxation as a function of both the local composition and the nominal concentration. These site energies are obtained by molecular statics simulations using N-body interatomic potentials derived from the second-moment approximation (SMA) of the tight-binding scheme. This new model allows us to determine the effective pair interactions (EPIs) that drive the short-range order (SRO) and to analyze the relative role of the EPIs' contribution to the mixing enthalpy, with respect to the contribution due to the lattice mismatch between the constituents. We apply this formalism to Au-Ni and Ag-Cu alloys, both of them tending to phase separate in the bulk and exhibiting a large size mismatch. Rigid-lattice Monte Carlo (MC) simulations lead to phase diagrams that are in good agreement with both those obtained by off-lattice SMA-MC simulations and the experimental ones. While the phase diagrams of Au-Ni and Ag-Cu alloys are very similar, we show that phase separation is mainly driven by the elastic contribution for Au-Ni and by the EPIs' contribution for Ag-Cu. Furthermore, for Au-Ni, the analysis of the SRO shows an inversion between the tendency to order and the tendency to phase separate as a function of the concentration.
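A rigid-lattice Monte Carlo simulation of the kind mentioned above can be sketched minimally (a hypothetical 1D toy chain with a single effective pair interaction V, nothing like the full SMA potentials of the paper): a positive V penalises unlike A-B neighbours, so Metropolis swaps at fixed composition drive the alternating chain toward phase separation.

```python
import math, random

def energy(lat, V=1.0):
    """Count unlike nearest-neighbour pairs (1D chain, periodic boundary)."""
    n = len(lat)
    return V * sum(lat[i] != lat[(i + 1) % n] for i in range(n))

def metropolis(lat, steps=20000, V=1.0, beta=2.0):
    """Swap unlike atoms at fixed composition (rigid lattice, canonical MC)."""
    n = len(lat)
    for _ in range(steps):
        i, j = random.randrange(n), random.randrange(n)
        if lat[i] == lat[j]:
            continue
        trial = lat[:]
        trial[i], trial[j] = trial[j], trial[i]
        dE = energy(trial, V) - energy(lat, V)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            lat = trial
    return lat

random.seed(2)
start = ['A', 'B'] * 20                 # maximally mixed 40-site chain
final = metropolis(start)
```

In the paper the EPIs extracted from relaxed site energies play the role of V, and the same sampling idea, on a 3D lattice, yields the phase diagrams.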

  6. Applying of an Ontology based Modeling Approach to Cultural Heritage Systems

    Directory of Open Access Journals (Sweden)

    POPOVICI, D.-M.

    2011-08-01

    Full Text Available Any virtual environment (VE) built in the classical way is dedicated to a very specific domain. Its modification, or even its adaptation to another domain, requires expensive human intervention measured in time and money. In this way, the product, that is the VE, returns to the first phases of the development process. In a previous work we proposed an approach that combines domain ontologies and conceptual modeling to construct more accurate VEs. Our method is based on the description of the domain knowledge in a standard format and the assisted creation (using these pieces of knowledge) of the VE. This permits the explanation, within the virtual reality (VR) simulation, of the semantics of the whole context and of each object. This knowledge may then be transferred to the public users. In this paper we prove the effectiveness of our method on the construction of a VE that simulates the organization of a Greek-Roman colony situated on the Black Sea coast and the economic and social activities of its people.

  7. A NURBS-based finite element model applied to geometrically nonlinear elastodynamics using a corotational approach

    KAUST Repository

    Espath, L. F. R.

    2015-02-03

    A numerical model to deal with nonlinear elastodynamics involving large rotations within the framework of the finite element method based on a NURBS (Non-Uniform Rational B-Spline) basis is presented. A comprehensive kinematical description using a corotational approach and an orthogonal tensor given by the exact polar decomposition is adopted. The state equation is written in terms of corotational variables according to the hypoelastic theory, relating the Jaumann derivative of the Cauchy stress to the Eulerian strain rate. The generalized-α (Gα) method and the Generalized Energy-Momentum Method with an additional parameter (GEMM+ξ) are employed in order to obtain a stable and controllable dissipative time-stepping scheme with algorithmic conservative properties for nonlinear dynamic analyses. The main contribution is to show that the energy-momentum conservation properties and numerical stability may be improved once a NURBS-based FEM is used in the spatial discretization. It is also shown that high continuity can postpone the numerical instability when GEMM+ξ with consistent mass is employed; likewise, increasing the continuity class yields a decrease in the numerical dissipation. A parametric study is carried out in order to show the stability and energy budget in terms of several properties such as continuity class, spectral radius, and lumped as well as consistent mass matrices.

  8. Modern software approaches applied to a Hydrological model: the GEOtop Open-Source Software Project

    Science.gov (United States)

    Cozzini, Stefano; Endrizzi, Stefano; Cordano, Emanuele; Bertoldi, Giacomo; Dall'Amico, Matteo

    2017-04-01

    The GEOtop hydrological scientific package is an integrated hydrological model that simulates the heat and water budgets at and below the soil surface. It describes the three-dimensional water flow in the soil and the energy exchange with the atmosphere, considering the radiative and turbulent fluxes. Furthermore, it reproduces the highly non-linear interactions between the water and energy balance during soil freezing and thawing, and simulates the temporal evolution of snow cover, soil temperature and moisture. The core components of the package were presented in the 2.0 version (Endrizzi et al, 2014), which was released as a Free Software Open-source project. However, despite the high scientific quality of the project, a modern software engineering approach was still missing. This weakness hindered its scientific potential and its use both as a standalone package and, more importantly, in an integrated way with other hydrological software tools. In this contribution we present our recent software re-engineering efforts to create a robust and stable scientific software package open to the hydrological community, easily usable by researchers and experts, and interoperable with other packages. The activity takes as its starting point the 2.0 version, scientifically tested and published. This version, together with several test cases based on recently published or available GEOtop applications (Cordano and Rigon, 2013, WRR; Kollet et al, 2016, WRR), provides the baseline code and a number of reference results as benchmarks. Comparison and scientific validation can then be performed for each software re-engineering activity performed on the package. To keep track of every single change, the package is published in its own GitHub repository geotopmodel.github.io/geotop/ under the GPL v3.0 license. A Continuous Integration mechanism by means of Travis-CI has been enabled on the repository's master and main development branches. The usage of CMake configuration tool

  9. A comparison of various modelling approaches applied to Cholera case data

    Directory of Open Access Journals (Sweden)

    F van den Bergh

    2008-06-01

    Full Text Available The application of a methodology that uses spectral methods to inform the development of statistical forecasting models for cholera case data is explored in this paper. The seasonal behaviour of the target variable (cholera cases) is analysed using singular spectrum analysis, followed by spectrum estimation using the maximum entropy method. This seasonal behaviour is compared to that of environmental variables (rainfall and temperature). The spectral analysis is refined by means of a cross-wavelet technique, which is used to compute lead times for co-varying variables and suggests transformations that enhance co-varying behaviour. Several statistical modelling techniques, including generalised linear models, ARIMA time series modelling, and dynamic regression, are investigated for the purpose of developing a cholera case forecast model fed by environmental variables. The analyses are demonstrated on data collected from Beira, Mozambique. Dynamic regression was found to be the preferred forecasting method for this data set.
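As a minimal illustration of the dynamic-regression idea (a forecast model fed by lagged environmental covariates), the sketch below regresses cases on their own lag and on rainfall at an assumed 4-week lead, using synthetic data with known coefficients. The lag, coefficients, and series are invented for illustration and are not taken from the Beira data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly series: cases respond to rainfall with an assumed
# 4-week lead (the kind of lead time a cross-wavelet analysis suggests).
n, lag = 300, 4
rain = 50 + 20 * np.sin(2 * np.pi * np.arange(n) / 52) + rng.normal(0, 2, n)
cases = np.zeros(n)
for t in range(1, n):
    r = rain[t - lag] if t >= lag else rain[0]
    cases[t] = 5.0 + 0.6 * cases[t - 1] + 0.3 * r + rng.normal(0, 1)

# Dynamic regression: cases_t ~ 1 + cases_{t-1} + rain_{t-lag}
y = cases[lag:]
X = np.column_stack([np.ones(n - lag),   # intercept
                     cases[lag - 1:-1],  # lagged dependent variable
                     rain[:n - lag]])    # rainfall at the 4-week lead
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Ordinary least squares recovers the generating coefficients (0.6 for persistence, 0.3 for lagged rainfall), which is the sense in which the fitted model can then be iterated forward as a forecast.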

  10. A single-column model ensemble approach applied to the TWP-ICE experiment

    Science.gov (United States)

    Davies, L.; Jakob, C.; Cheung, K.; Genio, A. Del; Hill, A.; Hume, T.; Keane, R. J.; Komori, T.; Larson, V. E.; Lin, Y.; Liu, X.; Nielsen, B. J.; Petch, J.; Plant, R. S.; Singh, M. S.; Shi, X.; Song, X.; Wang, W.; Whitall, M. A.; Wolf, A.; Xie, S.; Zhang, G.

    2013-06-01

    Single-column models (SCM) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale observations prescribed. Errors in estimating the observations result in uncertainty in the simulations. One method to address this uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCM and two cloud-resolving models (CRM). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCM and CRM. Differences are also apparent between the models in the ensemble-mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to forcing. The ensemble is further used to investigate cloud variables and precipitation, and identifies differences between CRM and SCM, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations and hence enables a more complete model investigation compared to using only the more traditional single best-estimate simulation.

  11. A Single Column Model Ensemble Approach Applied to the TWP-ICE Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Davies, Laura; Jakob, Christian; Cheung, K.; Del Genio, Anthony D.; Hill, Adrian; Hume, Timothy; Keane, R. J.; Komori, T.; Larson, Vincent E.; Lin, Yanluan; Liu, Xiaohong; Nielsen, Brandon J.; Petch, Jon C.; Plant, R. S.; Singh, M. S.; Shi, Xiangjun; Song, X.; Wang, Weiguo; Whitall, M. A.; Wolf, A.; Xie, Shaocheng; Zhang, Guang J.

    2013-06-27

    Single-column models (SCM) are useful testbeds for investigating the parameterisation schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale data prescribed. One method to address this uncertainty is to perform ensemble simulations of the SCM. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCM and 2 cloud-resolving models (CRM). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the moisture budget between the SCM and CRM. Systematic differences are also apparent in the ensemble-mean vertical structure of cloud variables. The ensemble is further used to investigate relations between cloud variables and precipitation, identifying large differences between CRM and SCM. This study highlights that additional information can be gained by performing ensemble simulations, enhancing the information derived from models beyond the more traditional single best-estimate simulation.
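The value of an ensemble over a single best-estimate run can be seen even in a toy column model. The sketch below drives an invented single-column moisture budget with an ensemble of perturbed forcings: because precipitation is a thresholded (nonlinear) response, the ensemble-mean precipitation differs from the best-estimate simulation even though the forcing perturbations average to zero. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_column(forcing, q0=20.0, q_sat=30.0, dt=1.0):
    """Toy single-column moisture budget: column moisture q is driven by
    the prescribed large-scale forcing and rains out whatever exceeds a
    fixed saturation threshold. Purely illustrative, not a real SCM."""
    q, precip = q0, 0.0
    for f in forcing:
        q += f * dt
        excess = max(q - q_sat, 0.0)   # thresholded rain: the nonlinearity
        precip += excess
        q -= excess
    return q, precip

n_steps, n_members = 100, 50
best = 0.10 * np.ones(n_steps)                       # best-estimate forcing
# ensemble members span an assumed observational uncertainty
members = best + rng.normal(0.0, 0.05, (n_members, n_steps))

q_best, p_best = toy_column(best)
p_members = [toy_column(m)[1] for m in members]
p_mean = float(np.mean(p_members))
```

Here the best-estimate forcing brings the column exactly to saturation, so the best-estimate run barely rains, while roughly half the ensemble members rain substantially; the ensemble mean therefore carries information the single run cannot.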

  12. Analytical Performance Verification of FCS-MPC Applied to Power Electronic Converters: A Model Checking Approach

    DEFF Research Database (Denmark)

    Novak, Mateja; Nyman, Ulrik Mathias; Dragicevic, Tomislav

    2017-01-01

    Since the introduction of finite control set model predictive control (FCS-MPC) in power electronics, the algorithm has been missing an important aspect that would speed up its implementation in industry: a simple method to verify the algorithm performance. This paper proposes to use a statistical

  13. Regression models for categorical, count, and related variables an applied approach

    CERN Document Server

    Hoffmann, John P, Dr

    2016-01-01

    Social science and behavioral science students and researchers are often confronted with data that are categorical, count a phenomenon, or have been collected over time. Sociologists examining the likelihood of interracial marriage, political scientists studying voting behavior, criminologists counting the number of offenses people commit, health scientists studying the number of suicides across neighborhoods, and psychologists modeling mental health treatment success are all interested in outcomes that are not continuous. Instead, they must measure and analyze these events and phenomena in a

  14. Investigation of Fire Growth and Spread in a Model-Scale Railcar Using an Applied Approach

    OpenAIRE

    Ali Kazemipour; Mahyar Pourghasemi; Hossein Afshin; Bijan Farhanieh

    2016-01-01

    Fire is a potential hazard in public transportation facilities such as subways or road tunnels due to its contribution to a high number of deaths. To provide insight into fire development behavior in tunnels, which can serve as the basis for emergency ventilation design, a model-scale railcar fire is explored numerically in this research. Fire growth and spread are investigated by analyzing the HRR curve as the representative of fire behavior in different stages. Fire developmen...

  15. A practical approach to parameter estimation applied to model predicting heart rate regulation

    DEFF Research Database (Denmark)

    Olufsen, Mette; Ottesen, Johnny T.

    2013-01-01

    baroreceptor feedback regulation of heart rate during head-up tilt. The three methods include: structured analysis of the correlation matrix, analysis via singular value decomposition followed by QR factorization, and identification of the subspace closest to the one spanned by eigenvectors of the model...... Hessian. Results showed that all three methods facilitate identification of a parameter subset. The “best” subset was obtained using the structured correlation method, though this method was also the most computationally intensive. Subsets obtained using the other two methods were easier to compute...

  16. The Best of All Possible Worlds: Applying the Model Driven Architecture Approach to a JC3IEDM OWL Ontology Modeled in UML

    Science.gov (United States)

    2014-06-01

    19th ICCRTS. The Best of All Possible Worlds: Applying the Model Driven Architecture Approach to a JC3IEDM OWL Ontology Modeled in UML. Topic...utilization of the Ontology Definition Metamodel (ODM), a UML profile for expressing OWL constructs in that language, one can combine the power of semantic...modeling done in OWL with the substantive advantages associated with the Model Driven Architecture (MDA) software design approach, namely, the

  17. An approach to computing discrete adjoints for MPI-parallelized models applied to Ice Sheet System Model 4.11

    Science.gov (United States)

    Larour, Eric; Utke, Jean; Bovin, Anton; Morlighem, Mathieu; Perez, Gilberto

    2016-11-01

    Within the framework of sea-level rise projections, there is a strong need for hindcast validation of the evolution of polar ice sheets in a way that tightly matches observational records (from radar, gravity, and altimetry observations mainly). However, the computational requirements for making hindcast reconstructions possible are severe and rely mainly on the evaluation of the adjoint state of transient ice-flow models. Here, we look at the computation of adjoints in the context of the NASA/JPL/UCI Ice Sheet System Model (ISSM), written in C++ and designed for parallel execution with MPI. We present the adaptations required in the way the software is designed and written, but also generic adaptations in the tools facilitating the adjoint computations. We concentrate on the use of operator overloading coupled with the AdjoinableMPI library to achieve the adjoint computation of the ISSM. We present a comprehensive approach to (1) carry out type changing through the ISSM, hence facilitating operator overloading, (2) bind to external solvers such as MUMPS and GSL-LU, and (3) handle MPI-based parallelism to scale the capability. We demonstrate the success of the approach by computing sensitivities of hindcast metrics such as the misfit to observed records of surface altimetry on the northeastern Greenland Ice Stream, or the misfit to observed records of surface velocities on Upernavik Glacier, central West Greenland. We also provide metrics for the scalability of the approach, and the expected performance. This approach has the potential to enable a new generation of hindcast-validated projections that make full use of the wealth of datasets currently being collected, or already collected, in Greenland and Antarctica.
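The "type changing through the code" that enables operator overloading can be illustrated with a minimal forward-mode dual number. ISSM itself uses C++ operator overloading in reverse mode together with the AdjoinableMPI library; this toy only shows the mechanism by which swapping the numeric type makes derivatives propagate through unmodified model code.

```python
class Dual:
    """Minimal forward-mode AD value carrying (value, derivative).

    Swapping the numeric type from float to Dual throughout a model
    makes every arithmetic operation propagate derivatives -- the
    'type change' that operator-overloading AD relies on. (A toy
    illustration only; ISSM uses C++ reverse-mode AD.)"""

    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    @staticmethod
    def _lift(x):
        return x if isinstance(x, Dual) else Dual(x)

    def __add__(self, other):
        o = Dual._lift(other)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __sub__(self, other):
        o = Dual._lift(other)
        return Dual(self.val - o.val, self.dot - o.dot)

    def __mul__(self, other):                      # product rule
        o = Dual._lift(other)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def misfit(h):
    """Toy 'hindcast metric': squared misfit to an observed value 3.0."""
    r = h - 3.0
    return r * r
```

Seeding the input with derivative 1.0 and calling the unmodified `misfit` yields both the metric and its sensitivity, e.g. at h = 5 the misfit is 4 and d(misfit)/dh is 4.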

  18. An Overview of Modeling Approaches Applied to Aggregation-Based Fleet Management and Integration of Plug-in Electric Vehicles †

    DEFF Research Database (Denmark)

    You, Shi; Hu, Junjie; Ziras, Charalampos

    2016-01-01

    and systems are seen as useful tools to support the related studies for different stakeholders in a tangible way. This paper presents an overview of modeling approaches applied to support aggregation-based management and integration of PEVs from the perspective of fleet operators and grid operators......, respectively. We start by explaining a structured modeling approach, i.e., a flexible combination of process models and system models, applied to different management and integration studies. A state-of-the-art overview of modeling approaches applied to represent several key processes, such as charging...... management, and key systems, such as the PEV fleet, is then presented, along with a detailed description of different approaches. Finally, we discuss several considerations that need to be well understood during the modeling process in order to assist modelers and model users in the appropriate decisions...

  19. Applied impulsive mathematical models

    CERN Document Server

    Stamova, Ivanka

    2016-01-01

    Using the theory of impulsive differential equations, this book focuses on mathematical models which reflect current research in biology, population dynamics, neural networks and economics. The authors provide the basic background from the fundamental theory and give a systematic exposition of recent results related to the qualitative analysis of impulsive mathematical models. Consisting of six chapters, the book presents many applicable techniques, making them available in a single source easily accessible to researchers interested in mathematical models and their applications. Serving as a valuable reference, this text is addressed to a wide audience of professionals, including mathematicians, applied researchers and practitioners.

  20. Applied Bayesian modelling

    CERN Document Server

    Congdon, Peter

    2014-01-01

    This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU

  1. Quantitative assessment of key parameters in qualitative vulnerability methods applied in karst systems based on an integrated numerical modelling approach

    Science.gov (United States)

    Doummar, Joanna; Kassem, Assaad

    2017-04-01

    In the framework of a three-year PEER (USAID/NSF) funded project, flow in a karst system in Lebanon (Assal) dominated by snow and semi-arid conditions was simulated and successfully calibrated using an integrated numerical model (MIKE SHE 2016) based on high-resolution input data and detailed catchment characterization. Point-source infiltration and fast flow pathways were simulated by a bypass function and a highly conductive lens, respectively. The approach consisted of identifying all the factors used in qualitative vulnerability methods (COP, EPIK, PI, DRASTIC, GOD) applied in karst systems and assessing their influence on recharge signals in the different hydrological karst compartments (atmosphere, unsaturated zone and saturated zone) based on the integrated numerical model. These parameters are usually attributed different weights according to their estimated impact on groundwater vulnerability. The aim of this work is to quantify the importance of each of these parameters and to outline parameters that are not accounted for in standard methods but might play a role in the vulnerability of a system. The spatial distribution of the detailed evapotranspiration, infiltration, and recharge signals from atmosphere to unsaturated zone to saturated zone was compared and contrasted among different surface settings and under varying flow conditions (e.g., varying slopes, land cover, precipitation intensity, and soil properties, as well as point-source infiltration). Furthermore, a sensitivity analysis of individual or coupled major parameters allows their impact on recharge, and indirectly on vulnerability, to be quantified. The preliminary analysis yields a new methodology that accounts for most of the factors influencing vulnerability while refining the weights attributed to each one of them, based on a quantitative approach.
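The weight-refinement idea can be sketched with a one-at-a-time sensitivity analysis on a toy recharge model. The model form, factors, and perturbation size below are invented stand-ins for the calibrated MIKE SHE setup; the point is only how sensitivity responses can be normalized into data-driven vulnerability weights.

```python
def recharge(precip, slope_deg, soil_k, veg_cover):
    """Toy recharge model: steeper slopes shed more runoff, permeable
    soils infiltrate more, vegetation intercepts some rainfall.
    Illustrative only; stands in for the calibrated numerical model."""
    runoff_frac = min(0.9, 0.02 * slope_deg)
    intercept = 0.1 * veg_cover
    infil = min(1.0, soil_k / 50.0)
    return precip * (1 - runoff_frac) * (1 - intercept) * infil

base = dict(precip=900.0, slope_deg=15.0, soil_k=40.0, veg_cover=0.5)

# One-at-a-time sensitivity: perturb each factor by +10% and turn the
# normalized recharge responses into quantitative weights.
r0 = recharge(**base)
sens = {}
for name in base:
    pert = dict(base, **{name: base[name] * 1.1})
    sens[name] = abs(recharge(**pert) - r0) / r0
total = sum(sens.values())
weights = {k: v / total for k, v in sens.items()}
```

In this toy setting precipitation and soil conductivity dominate the weights while vegetation cover barely matters, which is the kind of quantitative ranking the abstract proposes in place of fixed method-prescribed weights.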

  2. An Overview of Modeling Approaches Applied to Aggregation-Based Fleet Management and Integration of Plug-in Electric Vehicles †

    Directory of Open Access Journals (Sweden)

    Shi You

    2016-11-01

    Full Text Available The design and implementation of management policies for plug-in electric vehicles (PEVs) need to be supported by a holistic understanding of the functional processes, their complex interactions, and their response to various changes. Models developed to represent different functional processes and systems are seen as useful tools to support the related studies for different stakeholders in a tangible way. This paper presents an overview of modeling approaches applied to support aggregation-based management and integration of PEVs from the perspective of fleet operators and grid operators, respectively. We start by explaining a structured modeling approach, i.e., a flexible combination of process models and system models, applied to different management and integration studies. A state-of-the-art overview of modeling approaches applied to represent several key processes, such as charging management, and key systems, such as the PEV fleet, is then presented, along with a detailed description of different approaches. Finally, we discuss several considerations that need to be well understood during the modeling process in order to assist modelers and model users in the appropriate decisions of using existing, or developing their own, solutions for further applications.

  3. A Novel Approach Utilizing pnetCDF applying to the WRF-CMAQ two-way coupled model

    Science.gov (United States)

    Wong, David; Yang, Cheng-en; Mathur, Rohit; Pleim, Jonathan; Fu, Joshua; Wong, Kwai; Gao, Yang

    2014-05-01

    I/O is an integral part of a scientific model and takes up a significant portion of the simulation time; the newly developed WRF-CMAQ two-way coupled model at US EPA is no exception. This two-way coupled meteorology and air quality model is composed of the Weather Research and Forecasting (WRF) model and the Community Multiscale Air Quality (CMAQ) model. We are using it to evaluate how accurately it simulates the effects of aerosol loading on radiative forcing between 1990 and 2010, a period of substantial reductions in aerosol emissions such as SO2 and NOx in North America and Europe. The I/O scheme in the current model makes no use of a parallel file system or parallel I/O, and I/O takes about 15% - 28% of the entire simulation. Our novel approach not only utilizes the pnetCDF parallel I/O technique but goes one step further and aggregates the data locally, i.e., along the column or row dimension of the spatial domain. This reduces I/O traffic contention, and the aggregated data enhance I/O efficiency. In terms of I/O time, we have shown this method to be about 6 to 10 times faster than the existing I/O scheme in the model and about 20% to 3 times faster than a straightforward application of pnetCDF. We currently run the model on a Cray XE6 machine, and finding ways to reduce the overall simulation time is crucial to achieving our objective.
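The benefit of aggregating along a row dimension before writing can be shown without MPI or pnetCDF: in a row-major file layout, a per-process tile touches many noncontiguous file regions, while a row band gathered onto one aggregator is a single contiguous write. The 4x4 process grid below is a hypothetical example, not the WRF-CMAQ decomposition.

```python
import numpy as np

nrow_p, ncol_p, tile = 4, 4, 4            # hypothetical 4x4 process grid
N = nrow_p * tile                          # 16x16 global array
flat = np.arange(N * N).reshape(N, N)      # entry value == row-major file offset

def tile_offsets(i, j):
    """File offsets written when process (i, j) outputs its own tile."""
    return flat[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile].ravel()

def band_offsets(i):
    """Offsets after aggregating process row i into one full row band."""
    return flat[i * tile:(i + 1) * tile, :].ravel()

def is_contiguous(off):
    return bool(np.all(np.diff(off) == 1))

def n_segments(off):
    """Number of contiguous file segments, a proxy for write requests."""
    return int(np.sum(np.diff(off) != 1)) + 1
```

Each of the 16 tiles needs 4 separate contiguous segments (64 in total), while the four aggregated row bands are one contiguous segment apiece; fewer, larger requests are exactly the contention reduction the abstract describes.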

  4. Integrated Case-Based Applied Pathology (ICAP): a diagnostic-approach model for the learning and teaching of veterinary pathology.

    Science.gov (United States)

    Krockenberger, Mark B; Bosward, Katrina L; Canfield, Paul J

    2007-01-01

    Integrative Case-Based Applied Pathology (ICAP) cases form one component of learning and understanding the role of pathology in the veterinary diagnostic process at the Faculty of Veterinary Science, University of Sydney. It is a strategy that focuses on student-centered learning in a problem-solving context in the year 3 curriculum. Learning exercises use real case material and are primarily delivered online, providing flexibility for students with differing learning needs, who are supported by online, peer, and tutor support. The strategy relies heavily on the integration of pre-clinical and para-clinical information with the introduction of clinical material for the purposes of a logical three-level, problem-oriented approach to the diagnosis of disease. The focus is on logical diagnostic problem solving, primarily using gross pathology and histopathological material, with the inclusion of microbiological, parasitological, and clinical pathological data. The ICAP approach is linked to and congruent with the problem-oriented approach adopted in veterinary medicine and the case-based format used by one of the authors (PJC) for the teaching and learning of veterinary clinical pathology in year 4. Additionally, final-year students have the opportunity, during a diagnostic pathology rotation, to assist in the development and refinement of further ICAPs, which reinforces the importance of pathology in the veterinary diagnostic process. Evidence of the impact of the ICAP approach, based primarily on student surveys and staff peer feedback collected over five years, shows that discipline-specific learning, vertical and horizontal integration, alignment of learning outcomes and assessment, and both veterinary and generic graduate attributes were enhanced. Areas for improvement were identified in the approach, most specifically related to assistance in the development of generic teamwork skills.

  5. Position-space renormalization-group approach for driven diffusive systems applied to the asymmetric exclusion model.

    Science.gov (United States)

    Georgiev, Ivan T; McKay, Susan R

    2003-05-01

    This paper introduces a position-space renormalization-group approach for nonequilibrium systems and applies the method to a driven stochastic one-dimensional gas with open boundaries. The dynamics are characterized by three parameters: the probability alpha that a particle will flow into the chain to the leftmost site, the probability beta that a particle will flow out from the rightmost site, and the probability p that a particle will jump to the right if the site to the right is empty. The renormalization-group procedure is conducted within the space of these transition probabilities, which are relevant to the system's dynamics. The method yields a critical point at alpha(c)=beta(c)=1/2, in agreement with the exact values, and the critical exponent nu=2.71, as compared with the exact value nu=2.00.

  6. Position-space renormalization-group approach for driven diffusive systems applied to the asymmetric exclusion model

    Science.gov (United States)

    Georgiev, Ivan T.; McKay, Susan R.

    2003-05-01

    This paper introduces a position-space renormalization-group approach for nonequilibrium systems and applies the method to a driven stochastic one-dimensional gas with open boundaries. The dynamics are characterized by three parameters: the probability α that a particle will flow into the chain to the leftmost site, the probability β that a particle will flow out from the rightmost site, and the probability p that a particle will jump to the right if the site to the right is empty. The renormalization-group procedure is conducted within the space of these transition probabilities, which are relevant to the system’s dynamics. The method yields a critical point at αc=βc=1/2, in agreement with the exact values, and the critical exponent ν=2.71, as compared with the exact value ν=2.00.
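The driven lattice gas itself (though not the renormalization-group flow) is straightforward to simulate. The sketch below uses random-sequential updates with the three probabilities α, β, p from the abstract and measures the steady-state current, which approaches p/4 in the maximal-current phase (α, β > 1/2); system size and run length are illustrative choices.

```python
import random

def tasep_current(L, alpha, beta, p, attempts, seed=1):
    """Monte Carlo simulation of the open-boundary asymmetric exclusion
    process with random-sequential updates. Returns the time-averaged
    particle current through the middle bond. This only simulates the
    dynamics; the renormalization-group procedure is not reproduced."""
    rng = random.Random(seed)
    occ = [0] * L
    mid = L // 2
    hops = 0
    for _ in range(attempts):
        i = rng.randrange(L + 1)               # pick one of L+1 events
        if i == 0:                             # injection at the left
            if not occ[0] and rng.random() < alpha:
                occ[0] = 1
        elif i == L:                           # ejection at the right
            if occ[L - 1] and rng.random() < beta:
                occ[L - 1] = 0
        else:                                  # bulk hop from site i-1 to i
            if occ[i - 1] and not occ[i] and rng.random() < p:
                occ[i - 1], occ[i] = 0, 1
                if i == mid:
                    hops += 1
    # one unit of time corresponds to L+1 attempted events
    return hops * (L + 1) / attempts
```

With α = β = 1 and p = 1 the system sits deep in the maximal-current phase, and the measured current settles near 1/4 (with O(1/L) finite-size corrections), consistent with the phase boundary at α = β = 1/2 that the renormalization-group treatment recovers.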

  7. Probabilistic correction of precipitation measurement errors using a Bayesian Model Average Approach applied for the estimation of glacier accumulation

    Science.gov (United States)

    Moya Quiroga, Vladimir; Mano, Akira; Asaoka, Yoshihiro; Udo, Keiko; Kure, Shuichi; Mendoza, Javier

    2013-04-01

    Precipitation is a major component of the water cycle that returns atmospheric water to the ground; without it, rivers would eventually run dry for lack of fresh water. Although precipitation measurement seems an easy and simple procedure, it is affected by several systematic errors which lead to underestimation of the actual precipitation, so measurements should be corrected before use. Different correction approaches have been suggested, but focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this presentation we propose a Bayesian model average (BMA) approach for correcting rain gauge measurement errors. In the present study we used meteorological data recorded every 10 minutes at the Condoriri station in the Bolivian Andes. By comparing rain gauge measurements with totalisator rain measurements it was possible to estimate the rain underestimation. First, different deterministic models were optimized for the correction of precipitation considering wind effects and precipitation intensities; then, probabilistic BMA correction was performed. The corrected precipitation was separated into rainfall, snowfall and mixed precipitation using typical Andean temperature thresholds of -1°C and 3°C. Then, relating the total snowfall to the glacier ice density, it was possible to estimate the glacier accumulation. Results show a yearly glacier accumulation of 1200 mm/year. Besides, results confirm that in tropical glaciers winter is not an accumulation period but one of low ablation. Results show that neglecting such correction may induce an underestimation higher than 35 % of total precipitation. Besides, the uncertainty range may induce differences up
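The BMA combination step can be sketched as follows: each deterministic correction model receives a posterior weight proportional to its Gaussian likelihood on training data (equal priors assumed), and the corrected value is the weighted sum of the model predictions. The two correction models and the catch-ratio data below are invented for illustration, not the models calibrated in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented catch-ratio data: gauge undercatch worsens with wind speed.
wind = rng.uniform(0.0, 8.0, 200)                  # wind speed, m/s
true_ratio = 1.0 - 0.04 * wind                     # fraction of precip caught
obs_ratio = true_ratio + rng.normal(0.0, 0.02, 200)

# Two hypothetical deterministic correction models:
def model_a(w_speed):        # wind-dependent catch ratio
    return 1.0 - 0.04 * w_speed

def model_b(w_speed):        # constant catch ratio, ignores wind
    return np.full_like(w_speed, 0.85)

preds = [model_a(wind), model_b(wind)]

# BMA: posterior model weights from Gaussian likelihoods (equal priors),
# then the corrected value is the weighted prediction.
sigma = 0.02
loglik = np.array([-0.5 * np.sum(((obs_ratio - p) / sigma) ** 2) for p in preds])
w = np.exp(loglik - loglik.max())                  # subtract max for stability
w /= w.sum()
bma_pred = w[0] * preds[0] + w[1] * preds[1]
```

With these synthetic data nearly all posterior weight falls on the wind-dependent model, so the BMA prediction inherits its skill while still quantifying how much the competing model was discounted.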

  8. Apply the Communicative Approach in Listening Class

    Institute of Scientific and Technical Information of China (English)

    Wang Changxue; Su Na

    2014-01-01

    Speaking and listening are the two obstacles in the process of our learning, and they are also the most important abilities that we should possess. The communicative approach aims to develop learners' communicative competence; thus, applying the communicative approach in listening class is an effective element of English teaching procedure.

  10. Applying a System Dynamics Approach for Modeling Groundwater Dynamics to Depletion under Different Economical and Climate Change Scenarios

    Directory of Open Access Journals (Sweden)

    Hamid Balali

    2015-09-01

    Full Text Available In recent decades, due to many different factors, including climate change effects (warming and lower precipitation) as well as structural policies such as more intensive harvesting of groundwater and low irrigation water prices, groundwater levels have decreased in most plains of Iran. The objective of this study is to model groundwater dynamics under different economic policies and climate change using a system dynamics approach. For this purpose, a dynamic hydro-economic model which simultaneously simulates farmers' economic behavior, groundwater aquifer dynamics, the study area's climatology, and government economic policies related to groundwater is developed using STELLA 10.0.6. The vulnerability of the groundwater balance is forecast under three climate scenarios (Dry, Normal and Wet) and different scenarios of irrigation water and energy pricing policies. Results show that implementing economic policies on irrigation water and energy pricing can significantly affect groundwater exploitation and its volume balance. By increasing the irrigation water price along with the energy price, exploitation of groundwater improves, in so far as in scenarios S15 and S16 the study area's aquifer groundwater balance is positive at the end of the planning horizon, even under Dry precipitation conditions. Results also indicate that climate change can affect groundwater recharge; it can generally be expected that increases in precipitation would produce greater aquifer recharge rates.
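The core stock-and-flow idea can be sketched in a few lines: aquifer storage as a stock, recharge as an inflow, and price-elastic pumping as an outflow. The demand form, elasticity, and all numbers below are illustrative assumptions, not STELLA output or values calibrated to the studied plain.

```python
def simulate_aquifer(years, recharge, price, base_demand=120.0,
                     elasticity=-0.3, base_price=1.0, storage0=5000.0):
    """Stock-and-flow sketch: aquifer storage (the stock) gains the
    annual recharge and loses pumping, which falls as the irrigation
    water price rises (constant-elasticity demand). All parameter
    values are illustrative."""
    # higher price -> lower pumping, via the assumed demand elasticity
    pumping = base_demand * (price / base_price) ** elasticity
    storage = storage0
    for _ in range(years):
        storage += recharge - pumping   # annual stock update
    return storage

low_price = simulate_aquifer(30, recharge=80.0, price=1.0)
high_price = simulate_aquifer(30, recharge=80.0, price=3.0)
```

Tripling the water price cuts pumping enough to slow depletion markedly, the same qualitative effect the STELLA scenarios report; a full model would add feedbacks such as pumping costs rising as the water table falls.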

  11. Applying an Inverse Model to Estimate Ammonia Emissions at Cattle Feedlots Using Three Different Observation-Based Approaches

    Science.gov (United States)

    Shonkwiler, K. B.; Ham, J. M.; Nash, C.

    2014-12-01

    from the inverse model (FIDES) using all three datasets will be compared to emissions from the bLS model (WindTrax) using only high speed data (laser; CRDS). Results may lend further validity to the conditional sampler approach for more easily and accurately monitoring NH3 fluxes from CAFOs and other strong areal sources.

  12. North American paleoclimate reconstructions for the Last Glacial Maximum using an inverse modeling through iterative forward modeling approach applied to pollen data

    Science.gov (United States)

    Izumi, Kenji; Bartlein, Patrick J.

    2016-10-01

    The inverse modeling through iterative forward modeling (IMIFM) approach was used to reconstruct Last Glacial Maximum (LGM) climates from North American fossil pollen data. The approach was validated using modern pollen data and observed climate data. While the large-scale LGM temperature IMIFM reconstructions are similar to those calculated using conventional statistical approaches, the reconstructions of moisture variables differ between the two approaches. We used two vegetation models, BIOME4 and BIOME5-beta, with the IMIFM approach to evaluate the effects on the LGM climate reconstruction of differences in water use efficiency, carbon use efficiency, and atmospheric CO2 concentrations. Although lower atmospheric CO2 concentrations influence pollen-based LGM moisture reconstructions, they do not significantly affect temperature reconstructions over most of North America. This study implies that the LGM climate was very cold but not very much drier than present over North America, which is inconsistent with previous studies.
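Inverse modeling through iterative forward modeling amounts to searching climate space for the forward-model run whose simulated pollen spectrum best matches the observed one. The sketch below does this by brute-force grid search, with an invented three-taxon forward model standing in for BIOME4/BIOME5-beta; taxa, response curves, and grids are all illustrative.

```python
import numpy as np

def forward_pollen(temp, moist):
    """Toy forward vegetation model: relative abundances of three pollen
    taxa as Gaussian responses to mean temperature (degC) and moisture
    (mm). An invented stand-in for the real vegetation models."""
    optima = [(-5.0, 300.0), (5.0, 700.0), (15.0, 900.0)]
    t_w, m_w = 5.0, 200.0                       # assumed tolerance widths
    scores = np.array([np.exp(-0.5 * (((temp - t0) / t_w) ** 2
                                      + ((moist - m0) / m_w) ** 2))
                       for t0, m0 in optima])
    return scores / scores.sum()                # pollen percentages

# "Observed" fossil spectrum generated from a known target climate.
obs = forward_pollen(-3.0, 400.0)

# IMIFM: iterate the forward model over candidate climates and keep the
# one whose simulated spectrum best matches the observation.
temps = np.linspace(-15.0, 20.0, 71)            # 0.5 degC steps
moists = np.linspace(100.0, 1200.0, 111)        # 10 mm steps
_, t_hat, m_hat = min((float(np.sum((forward_pollen(t, m) - obs) ** 2)), t, m)
                      for t in temps for m in moists)
```

The search recovers the generating climate; in the real application each candidate is a full vegetation-model run, which is why the forward iteration, rather than a statistical transfer function, carries effects such as lowered CO2 into the reconstruction.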

  13. Human motion classification using a particle filter approach: multiple model particle filtering applied to the micro-Doppler spectrum

    NARCIS (Netherlands)

    Groot, S.; Harmanny, R.; Driessen, H.; Yarovoy, A.

    2013-01-01

    In this article, a novel motion model-based particle filter implementation is proposed to classify human motion and to estimate key state variables, such as motion type, i.e. running or walking, and the subject’s height. Micro-Doppler spectrum is used as the observable information. The system and

  14. From basic physics to mechanisms of toxicity: the "liquid drop" approach applied to develop predictive classification models for toxicity of metal oxide nanoparticles.

    Science.gov (United States)

    Sizochenko, Natalia; Rasulev, Bakhtiyor; Gajewicz, Agnieszka; Kuz'min, Victor; Puzyn, Tomasz; Leszczynski, Jerzy

    2014-11-21

    Many metal oxide nanoparticles are able to cause persistent stress to live organisms, including humans, when discharged to the environment. To understand the mechanism of metal oxide nanoparticles' toxicity and reduce the number of experiments, the development of predictive toxicity models is important. In this study, performed on a series of nanoparticles, the comparative quantitative-structure activity relationship (nano-QSAR) analyses of their toxicity towards E. coli and HaCaT cells were established. A new approach for representation of nanoparticles' structure is presented. For description of the supramolecular structure of nanoparticles the "liquid drop" model was applied. It is expected that a novel, proposed approach could be of general use for predictions related to nanomaterials. In addition, in our study fragmental simplex descriptors and several ligand-metal binding characteristics were calculated. The developed nano-QSAR models were validated and reliably predict the toxicity of all studied metal oxide nanoparticles. Based on the comparative analysis of contributed properties in both models the LDM-based descriptors were revealed to have an almost similar level of contribution to toxicity in both cases, while other parameters (van der Waals interactions, electronegativity and metal-ligand binding characteristics) have unequal contribution levels. In addition, the models developed here suggest different mechanisms of nanotoxicity for these two types of cells.

  15. Pesticide residues in heterogeneous plant populations, a model-based approach applied to nematicides in banana (Musa spp.).

    Science.gov (United States)

    Tixier, Philippe; Chabrier, Christian; Malézieux, Eric

    2007-03-21

    Nematicides are widely used to control plant-parasitic nematodes in intensive export banana (Musa spp.) cropping systems. Data show that the concentration of fosthiazate in banana fruits varies from zero to 0.035 mg kg-1, below the maximum residue limit (MRL = 0.05 mg kg-1). The fosthiazate concentration in fruit is described by a Gaussian envelope curve as a function of the interval between pesticide application and fruit harvest (the preharvest interval). The heterogeneity of phenological stages in a banana population increases over time, and thus the preharvest interval of fruits harvested after a pesticide application varies over time. A phenological model was used to simulate the long-term harvest dynamics of banana at field scale. Simulations show that the mean fosthiazate concentration in fruits varies according to the nematicide application program, climate (temperature), and planting date of the banana field. This method is used to assess the percentage of harvested bunches that exceed a residue threshold and to help farmers minimize fosthiazate residues in bananas.
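
    As a hedged sketch of the mechanism this record describes, the Gaussian envelope of residue versus preharvest interval can be combined with a simulated set of harvest dates to estimate the share of bunches exceeding a threshold. The envelope parameters below (peak interval, width) are illustrative assumptions, not the fitted values from the study.

    ```python
    from math import exp

    def fosthiazate_residue(phi, c_max=0.035, phi_peak=30.0, width=15.0):
        """Gaussian envelope of residue (mg/kg) vs. preharvest interval phi
        (days). c_max, phi_peak and width are illustrative, not fitted values."""
        return c_max * exp(-((phi - phi_peak) / width) ** 2)

    def share_over_threshold(intervals, threshold=0.05):
        """Fraction of harvested bunches whose simulated residue exceeds a
        limit, e.g. the MRL of 0.05 mg/kg."""
        residues = [fosthiazate_residue(p) for p in intervals]
        return sum(r > threshold for r in residues) / len(residues)

    # Preharvest intervals (days) for a simulated run of harvested bunches;
    # with c_max below the MRL, no bunch can exceed 0.05 mg/kg:
    print(share_over_threshold([10, 20, 30, 45, 60, 90]))
    ```

    In the study a phenological model supplies the distribution of preharvest intervals over time; here it is just a hand-written list.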

  16. Parameters Approach Applied on Nonlinear Oscillators

    Directory of Open Access Journals (Sweden)

    Najeeb Alam Khan

    2014-01-01

    Full Text Available We applied an approach to obtain the natural frequency of the generalized Duffing oscillator ü + u + α₃u³ + α₅u⁵ + α₇u⁷ + ⋯ + αₙuⁿ = 0 and of a nonlinear oscillator with a restoring force that is a function of a noninteger power of the deflection, ü + αu|u|ⁿ⁻¹ = 0. This approach is based on involved parameters, initial conditions, and collocation points. For any arbitrary power n, the approximate frequency analysis is carried out between the natural frequency and amplitude. The solution procedure is simple, and the results obtained are valid for the whole solution domain.
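
    The record above does not reproduce the collocation formulation itself. As a hedged illustration of the same oscillator, a first-order harmonic-balance estimate of the natural frequency can be sketched in a few lines; the coefficients follow only from assuming u = A·cos(ωt), and the paper's collocation method is more general.

    ```python
    from math import comb, sqrt

    def duffing_frequency(amplitude, alphas):
        """First-order harmonic-balance frequency estimate for
        u'' + u + sum_n alpha_n * u**n = 0 (odd n >= 3), assuming
        u = A*cos(w*t). Each odd power n contributes
        comb(n, (n-1)//2) / 2**(n-1) to the fundamental harmonic, giving
        w**2 = 1 + sum_n alpha_n * A**(n-1) * comb(n, (n-1)//2) / 2**(n-1)."""
        w2 = 1.0
        for n, a in alphas.items():
            w2 += a * amplitude ** (n - 1) * comb(n, (n - 1) // 2) / 2 ** (n - 1)
        return sqrt(w2)

    # Classic Duffing oscillator u'' + u + u**3 = 0 at amplitude A = 1:
    # harmonic balance gives w = sqrt(1 + 3/4) ~ 1.3229.
    print(round(duffing_frequency(1.0, {3: 1.0}), 4))
    ```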

  17. From basic physics to mechanisms of toxicity: the ``liquid drop'' approach applied to develop predictive classification models for toxicity of metal oxide nanoparticles

    Science.gov (United States)

    Sizochenko, Natalia; Rasulev, Bakhtiyor; Gajewicz, Agnieszka; Kuz'min, Victor; Puzyn, Tomasz; Leszczynski, Jerzy

    2014-10-01

    Many metal oxide nanoparticles are able to cause persistent stress to live organisms, including humans, when discharged to the environment. To understand the mechanism of metal oxide nanoparticles' toxicity and reduce the number of experiments, the development of predictive toxicity models is important. In this study, performed on a series of nanoparticles, the comparative quantitative-structure activity relationship (nano-QSAR) analyses of their toxicity towards E. coli and HaCaT cells were established. A new approach for representation of nanoparticles' structure is presented. For description of the supramolecular structure of nanoparticles the ``liquid drop'' model was applied. It is expected that a novel, proposed approach could be of general use for predictions related to nanomaterials. In addition, in our study fragmental simplex descriptors and several ligand-metal binding characteristics were calculated. The developed nano-QSAR models were validated and reliably predict the toxicity of all studied metal oxide nanoparticles. Based on the comparative analysis of contributed properties in both models the LDM-based descriptors were revealed to have an almost similar level of contribution to toxicity in both cases, while other parameters (van der Waals interactions, electronegativity and metal-ligand binding characteristics) have unequal contribution levels. In addition, the models developed here suggest different mechanisms of nanotoxicity for these two types of cells.

  18. Smart Kd-values, their uncertainties and sensitivities - Applying a new approach for realistic distribution coefficients in geochemical modeling of complex systems.

    Science.gov (United States)

    Stockmann, M; Schikora, J; Becker, D-A; Flügge, J; Noseck, U; Brendler, V

    2017-08-23

    One natural retardation process to be considered in risk assessment for contaminants in the environment is sorption on mineral surfaces. A realistic geochemical modeling is of high relevance in many application areas such as groundwater protection, environmental remediation, or disposal of hazardous waste. Most often, concepts with constant distribution coefficients (Kd-values) are applied in geochemical modeling; they have the advantage of being simple and computationally fast, but they do not reflect changes in geochemical conditions. In this paper, we describe an innovative and efficient method, where the smart Kd-concept, a mechanistic approach mainly based on surface complexation modeling, is used (and modified for complex geochemical models) to calculate and apply realistic distribution coefficients. Using the geochemical speciation code PHREEQC, multidimensional smart Kd-matrices are computed as a function of varying (or uncertain) environmental conditions. On the one hand, sensitivity and uncertainty statements for the distribution coefficients can be derived. On the other hand, smart Kd-matrices can be used in reactive transport (or migration) codes (not shown here). This strategy has various benefits: (1) rapid computation of Kd-values for large numbers of environmental parameter combinations; (2) variable geochemistry is taken into account more realistically; (3) efficiency in computing time is ensured, and (4) uncertainty and sensitivity analysis are accessible. Results are presented exemplarily for the sorption of uranium(VI) onto a natural sandy aquifer material and are compared to results based on the conventional Kd-concept. In general, the sorption behavior of U(VI) in dependence of changing geochemical conditions is described quite well. Copyright © 2017 Elsevier Ltd. All rights reserved.
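
    The core idea of the record above, precomputing condition-dependent Kd-values on a grid and looking them up later, can be sketched in miniature. The table values below come from a toy formula, not from PHREEQC surface-complexation runs, and the real smart Kd-matrices are multidimensional (pH, ionic strength, competing ions, etc.) rather than the single pH axis assumed here.

    ```python
    from bisect import bisect_right

    # Hypothetical precomputed "smart Kd" table over a pH grid. In the real
    # workflow these values come from PHREEQC surface-complexation runs for
    # many combinations of environmental parameters, not from this formula.
    ph_grid = [5.0 + 0.5 * k for k in range(9)]            # pH 5.0 .. 9.0
    kd_grid = [10 ** (0.5 * ph - 3) for ph in ph_grid]     # L/kg, illustrative

    def smart_kd(ph):
        """Linear interpolation into the precomputed Kd table, so a transport
        code can look up a condition-dependent Kd instead of one constant."""
        i = min(max(bisect_right(ph_grid, ph) - 1, 0), len(ph_grid) - 2)
        t = (ph - ph_grid[i]) / (ph_grid[i + 1] - ph_grid[i])
        return (1 - t) * kd_grid[i] + t * kd_grid[i + 1]

    print(round(smart_kd(7.25), 3))  # Kd at a pH between the grid nodes
    ```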

  19. Applying a managerial approach to day surgery.

    Science.gov (United States)

    Onetti, Alberto

    2008-01-01

    The present article explores the day surgery topic from a managerial perspective. If we assume such a perspective, day surgery can be considered a business model decision for care delivery and not just a surgical alternative to traditional procedures requiring patient hospitalization. In this article we highlight the main steps required to develop a strategic approach [Cotta Ramusino E, Onetti A. Strategia d'Impresa. Milano; Il Sole 24 Ore; Second Edition, 2007] at hospital level (Onetti A, Greulich A. Strategic management in hospitals: the balanced scorecard approach. Milano: Giuffé; 2003) and to make day surgery part of it. This means understanding: - how and when day surgery can improve a health care provider's overall performance, both in terms of clinical effectiveness and financial results, and - how to organize and integrate it with other hospital activities in order to make it work. Approaching day surgery as a business model decision requires addressing a list of potential issues in advance and necessitates continued audit to verify the results. When this is done, day surgery can be both safe and cost effective and can positively impact surgical patient satisfaction. We propose a sort of "check-up list" useful to hospital managers and doctors who are evaluating the option of introducing day surgery or trying to optimize it.

  20. Applied Integer Programming Modeling and Solution

    CERN Document Server

    Chen, Der-San; Dang, Yu

    2011-01-01

    An accessible treatment of the modeling and solution of integer programming problems, featuring modern applications and software In order to fully comprehend the algorithms associated with integer programming, it is important to understand not only how algorithms work, but also why they work. Applied Integer Programming features a unique emphasis on this point, focusing on problem modeling and solution using commercial software. Taking an application-oriented approach, this book addresses the art and science of mathematical modeling related to the mixed integer programming (MIP) framework and

  1. Evaluating tidal marsh sustainability in the face of sea-level rise: a hybrid modeling approach applied to San Francisco Bay.

    Directory of Open Access Journals (Sweden)

    Diana Stralberg

    Full Text Available BACKGROUND: Tidal marshes will be threatened by increasing rates of sea-level rise (SLR over the next century. Managers seek guidance on whether existing and restored marshes will be resilient under a range of potential future conditions, and on prioritizing marsh restoration and conservation activities. METHODOLOGY: Building upon established models, we developed a hybrid approach that involves a mechanistic treatment of marsh accretion dynamics and incorporates spatial variation at a scale relevant for conservation and restoration decision-making. We applied this model to San Francisco Bay, using best-available elevation data and estimates of sediment supply and organic matter accumulation developed for 15 Bay subregions. Accretion models were run over 100 years for 70 combinations of starting elevation, mineral sediment, organic matter, and SLR assumptions. Results were applied spatially to evaluate eight Bay-wide climate change scenarios. PRINCIPAL FINDINGS: Model results indicated that under a high rate of SLR (1.65 m/century, short-term restoration of diked subtidal baylands to mid marsh elevations (-0.2 m MHHW could be achieved over the next century with sediment concentrations greater than 200 mg/L. However, suspended sediment concentrations greater than 300 mg/L would be required for 100-year mid marsh sustainability (i.e., no elevation loss. Organic matter accumulation had minimal impacts on this threshold. Bay-wide projections of marsh habitat area varied substantially, depending primarily on SLR and sediment assumptions. Across all scenarios, however, the model projected a shift in the mix of intertidal habitats, with a loss of high marsh and gains in low marsh and mudflats. CONCLUSIONS/SIGNIFICANCE: Results suggest a bleak prognosis for long-term natural tidal marsh sustainability under a high-SLR scenario. 
To minimize marsh loss, we recommend conserving adjacent uplands for marsh migration, redistributing dredged sediment to raise
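
    The accretion dynamics described in this record can be caricatured as a yearly Euler recursion: mineral accretion scales with suspended sediment concentration and inundation depth, organic accumulation adds a constant, and sea level rises against both. The coefficients below are illustrative assumptions only, not the calibrated San Francisco Bay values from the hybrid model.

    ```python
    def marsh_trajectory(z0, ssc_mg_l, slr_m_per_century, years=100,
                         k_mineral=1.6e-4, organic_m_yr=0.002):
        """Toy accretion recursion: mineral accretion proportional to
        suspended sediment concentration (ssc) and inundation depth below
        MHHW, plus constant organic accumulation, against rising sea level.
        Coefficients are illustrative, not calibrated values."""
        z = z0                                  # elevation relative to MHHW (m)
        sea = 0.0                               # MHHW datum, rising with SLR
        for _ in range(years):
            sea += slr_m_per_century / 100.0
            depth = max(sea - z, 0.0)           # inundation depth (m)
            z += k_mineral * ssc_mg_l * depth + organic_m_yr
        return z - sea                          # final elevation rel. to MHHW

    # Same starting elevation (-0.2 m MHHW), high SLR (1.65 m/century),
    # two sediment supplies:
    for ssc in (100, 300):
        print(ssc, "mg/L ->", round(marsh_trajectory(-0.2, ssc, 1.65), 2), "m")
    ```

    Even in this caricature, the higher sediment supply holds the marsh near its equilibrium depth while the lower one lets it drown toward mudflat, which is the qualitative pattern the record reports.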

  2. Applying discursive approaches to health psychology.

    Science.gov (United States)

    Seymour-Smith, Sarah

    2015-04-01

    The aim of this paper is to outline the contribution of two strands of discursive research, glossed as 'macro' and 'micro,' to the field of health psychology. A further goal is to highlight some contemporary debates in methodology associated with the use of interview data versus more naturalistic data in qualitative health research. Discursive approaches provide a way of analyzing talk as a social practice that considers how descriptions are put together and what actions they achieve. A selection of recent examples of discursive research from one applied area of health psychology, studies of diet and obesity, is drawn upon in order to illustrate the specifics of both strands. 'Macro' discourse work in psychology incorporates a Foucauldian focus on the way that discourses regulate subjectivities, whereas the concept of interpretative repertoires affords more agency to the individual: both are useful for identifying the cultural context of talk. Both 'macro' and 'micro' strands focus on accountability to varying degrees. 'Micro' Discursive Psychology, however, pays closer attention to the sequential organization of constructions and focuses on naturalistic settings that allow for the inclusion of an analysis of the health professional. Diets are typically depicted as an individual responsibility in mainstream health psychology, but discursive research highlights how discourses are collectively produced and bound up with social practices. (c) 2015 APA, all rights reserved.

  3. Applying SF-Based Genre Approaches to English Writing Class

    Science.gov (United States)

    Wu, Yan; Dong, Hailin

    2009-01-01

    By exploring genre approaches in systemic functional linguistics and examining the analytic tools that can be applied to the process of English learning and teaching, this paper seeks to find a way of applying genre approaches to English writing class.

  4. Tennis Coaching: Applying the Game Sense Approach

    Science.gov (United States)

    Pill, Shane; Hewitt, Mitchell

    2017-01-01

    This article demonstrates the game sense approach for teaching tennis to novice players. In a game sense approach, learning is positioned within modified games to emphasize the way rules shape game behavior, tactical awareness, decision-making and the development of contextualized stroke mechanics.

  5. Comparative review of three cost-effectiveness models for rotavirus vaccines in national immunization programs; a generic approach applied to various regions in the world

    Directory of Open Access Journals (Sweden)

    Tu Hong-Anh

    2011-07-01

    Full Text Available Abstract Background This study aims to critically review available cost-effectiveness models for rotavirus vaccination, compare their designs using a standardized approach and compare similarities and differences in cost-effectiveness outcomes using a uniform set of input parameters. Methods We identified various models used to estimate the cost-effectiveness of rotavirus vaccination. From these, results using a standardized dataset for four regions in the world could be obtained for three specific applications. Results Despite differences in the approaches and individual constituting elements, including costs, QALYs (Quality Adjusted Life Years) and deaths, cost-effectiveness results of the models were quite similar. Differences between the models on the individual components of cost-effectiveness could be related to some specific features of the respective models. Sensitivity analysis revealed that cost-effectiveness of rotavirus vaccination is highly sensitive to vaccine prices, rotavirus-associated mortality and discount rates, in particular that for QALYs. Conclusions The comparative approach followed here is helpful in understanding the various models selected and will thus benefit low-income countries in designing their own cost-effectiveness analyses using new or adapted existing models. Potential users of the models in low and middle income countries need to consider results from existing studies and reviews. There will be a need for contextualization including the use of country specific data inputs. However, given that the underlying biological and epidemiological mechanisms do not change between countries, users are likely to be able to adapt existing model designs rather than developing completely new approaches. Also, the communication established between the individual researchers involved in the three models is helpful in the further development of these individual models. 
Therefore, we recommend that this kind of comparative study
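
    The sensitivity to discount rates noted in this record, especially the rate applied to QALYs, is easy to reproduce with a bare-bones incremental cost-effectiveness calculation. The ten-year incremental cost and QALY streams below are hypothetical, not taken from any of the three reviewed models.

    ```python
    def discounted(stream, rate):
        """Present value of a yearly stream (index 0 = first year)."""
        return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

    def icer(d_costs, d_qalys, rate_costs, rate_qalys):
        """Incremental cost-effectiveness ratio (cost per QALY gained) of
        vaccination vs. no vaccination, with separate discount rates for
        costs and health effects."""
        return discounted(d_costs, rate_costs) / discounted(d_qalys, rate_qalys)

    # Hypothetical 10-year incremental streams: up-front programme costs,
    # then constant yearly net costs and QALY gains.
    d_costs = [5e6] + [1e6] * 9
    d_qalys = [400] * 10
    for rate_qalys in (0.0, 0.015, 0.03):
        print(rate_qalys, round(icer(d_costs, d_qalys, 0.03, rate_qalys)))
    ```

    Discounting the QALY stream more heavily shrinks the denominator and raises the ICER, which is why results are particularly sensitive to that rate.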

  6. Modeling Variability in the Progression of Huntington's Disease A Novel Modeling Approach Applied to Structural Imaging Markers from TRACK‐HD

    Science.gov (United States)

    Sampaio, C

    2016-01-01

    We present a novel, general class of disease progression models for Huntington's disease (HD), a neurodegenerative disease caused by a cytosine‐adenine‐guanine (CAG) triplet repeat expansion on the huntingtin gene. Models are fit to a selection of structural imaging markers from the TRACK-HD 36‐month database. The models are of mixed effects type and should be useful in predicting any continuous marker of HD state as a function of age and CAG length (the genetic factor that drives HD pathology). The effects of age and CAG length are modeled using flexible regression splines. Variability not accounted for by age, CAG length, or covariates is modeled using terms that represent measurement error, population variability (random slopes/intercepts), and variability due to the dynamics of the disease process (random walk terms). A Kalman filter is used to estimate variances of the random walk terms. PMID:27481337

  7. Markov Model Applied to Gene Evolution

    Institute of Scientific and Technical Information of China (English)

    季星来; 孙之荣

    2001-01-01

    The study of nucleotide substitution is very important both to our understanding of gene evolution and to reliable estimation of phylogenetic relationships. In this paper nucleotide substitution is assumed to be random and the Markov model is applied to the study of the evolution of genes. A non-linear optimization approach is then proposed for estimating the substitution matrix from real sequences; this matrix is called the "Nucleotide State Transfer Matrix". One of the most important conclusions from this work is that gene sequence evolution conforms to a Markov process. Also, some theoretical evidence for random evolution is given from an energy analysis of DNA replication.
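
    As a hedged illustration of the "Nucleotide State Transfer Matrix" idea, a Markov transition matrix can be estimated from a pair of aligned sequences by simple counting. The paper itself uses non-linear optimization; counting relative frequencies is only the simplest special case, and the sequences below are a toy example.

    ```python
    from collections import Counter

    def substitution_matrix(seq_a, seq_b):
        """Estimate a nucleotide state-transfer (Markov transition) matrix
        from two aligned sequences: P[x][y] is the observed fraction of sites
        where base x in the first sequence became base y in the second."""
        bases = "ACGT"
        counts = Counter(zip(seq_a, seq_b))
        matrix = {}
        for x in bases:
            row_total = sum(counts[(x, y)] for y in bases)
            matrix[x] = {y: counts[(x, y)] / row_total if row_total else 0.0
                         for y in bases}
        return matrix

    # Toy aligned pair; each row of the estimated matrix sums to 1.
    P = substitution_matrix("ACGTACGTAAGG", "ACGTACGTGAGA")
    print(P["A"])   # probabilities that an A stayed A or changed
    ```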

  8. Uncertainty in a chemistry-transport model due to physical parameterizations and numerical approximations: An ensemble approach applied to ozone modeling

    OpenAIRE

    Mallet, Vivien; Sportisse, Bruno

    2006-01-01

    This paper estimates the uncertainty in the outputs of a chemistry-transport model due to physical parameterizations and numerical approximations. An ensemble of 20 simulations is generated from a reference simulation in which one key parameterization (chemical mechanism, dry deposition parameterization, turbulent closure, etc.) or one numerical approximation (grid size, splitting method, etc.) is changed at a time. Intercomparisons of the simulations and comparisons w...

  9. Applying a gaming approach to IP strategy.

    Science.gov (United States)

    Gasnier, Arnaud; Vandamme, Luc

    2010-02-01

    Adopting an appropriate IP strategy is an important but complex area, particularly in the pharmaceutical and biotechnology sectors, in which aspects such as regulatory submissions, high competitive activity, and public health and safety information requirements limit the amount of information that can be protected effectively through secrecy. As a result, and considering the existing time limits for patent protection, decisions on how to approach IP in these sectors must be made with knowledge of the options and consequences of IP positioning. Because of the specialized nature of IP, it is necessary to impart knowledge regarding the options and impact of IP to decision-makers, whether at the level of inventors, marketers or strategic business managers. This feature review provides some insight on IP strategy, with a focus on the use of a new 'gaming' approach for transferring the skills and understanding needed to make informed IP-related decisions; the game Patentopolis is discussed as an example of such an approach. Patentopolis involves interactive activities with IP-related business decisions, including the exploitation and enforcement of IP rights, and can be used to gain knowledge on the impact of adopting different IP strategies.

  10. Frequency and damping ratio assessment of high-rise buildings using an Automatic Model-Based Approach applied to real-world ambient vibration recordings

    Science.gov (United States)

    Nasser, Fatima; Li, Zhongyang; Gueguen, Philippe; Martin, Nadine

    2016-06-01

    This paper deals with the application of the Automatic Model-Based Approach (AMBA) over actual buildings subjected to real-world ambient vibrations. In a previous paper, AMBA was developed with the aim of automating the estimation process of the modal parameters and minimizing the estimation error, especially that of the damping ratio. It is applicable over a single-channel record, has no parameters to be set, and no manual initialization phase. The results presented in this paper should be regarded as further documentation of the approach over real-world ambient vibration signals.
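
    AMBA itself is a model-based estimator that works directly on single-channel ambient records. As a point of comparison, the classical logarithmic-decrement estimate of the damping ratio from a free-decay response (such as one extracted from ambient data by the random-decrement technique) can be sketched as follows; the 1.5 Hz frequency and 2% damping below are merely typical building values, not results from the paper.

    ```python
    from math import exp, pi, log, sqrt

    def log_decrement_damping(peaks):
        """Estimate the damping ratio from successive positive peaks of a
        decaying oscillation via the logarithmic decrement, a classical
        alternative to model-based estimators like AMBA."""
        deltas = [log(peaks[i] / peaks[i + 1]) for i in range(len(peaks) - 1)]
        d = sum(deltas) / len(deltas)
        return d / sqrt(4 * pi ** 2 + d ** 2)

    # Synthetic free-decay envelope: f = 1.5 Hz, damping ratio 2 %.
    zeta, f = 0.02, 1.5
    wn = 2 * pi * f
    wd = wn * sqrt(1 - zeta ** 2)            # damped natural frequency
    peaks = [exp(-zeta * wn * k * 2 * pi / wd) for k in range(6)]
    print(round(log_decrement_damping(peaks), 4))
    ```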

  11. Applying a weed risk assessment approach to GM crops.

    Science.gov (United States)

    Keese, Paul K; Robold, Andrea V; Myers, Ruth C; Weisman, Sarah; Smith, Joe

    2014-12-01

    Current approaches to environmental risk assessment of genetically modified (GM) plants are modelled on chemical risk assessment methods, which have a strong focus on toxicity. There are additional types of harms posed by plants that have been extensively studied by weed scientists and incorporated into weed risk assessment methods. Weed risk assessment uses robust, validated methods that are widely applied to regulatory decision-making about potentially problematic plants. They are designed to encompass a broad variety of plant forms and traits in different environments, and can provide reliable conclusions even with limited data. The knowledge and experience that underpin weed risk assessment can be harnessed for environmental risk assessment of GM plants. A case study illustrates the application of the Australian post-border weed risk assessment approach to a representative GM plant. This approach is a valuable tool to identify potential risks from GM plants.

  12. Applying phasor approach analysis of multiphoton FLIM measurements to probe the metabolic activity of three-dimensional in vitro cell culture models.

    Science.gov (United States)

    Lakner, Pirmin H; Monaghan, Michael G; Möller, Yvonne; Olayioye, Monilola A; Schenke-Layland, Katja

    2017-02-13

    Fluorescence lifetime imaging microscopy (FLIM) can measure and discriminate endogenous fluorophores present in biological samples. This study seeks to identify FLIM as a suitable method to non-invasively detect a shift in cellular metabolic activity towards glycolysis or oxidative phosphorylation in 3D Caco-2 models of colorectal carcinoma. These models were treated with potassium cyanide or hydrogen peroxide as controls, and epidermal growth factor (EGF) as a physiologically-relevant influencer of cell metabolic behaviour. Autofluorescence, attributed to nicotinamide adenine dinucleotide (NADH), was induced by two-photon laser excitation and its lifetime decay was analysed using a standard multi-exponential decay approach and also a novel custom-written code for phasor-based analysis. While both methods enabled detection of a statistically significant shift of metabolic activity towards glycolysis using potassium cyanide, and oxidative phosphorylation using hydrogen peroxide, employing the phasor approach required fewer initial assumptions to quantify the lifetimes of contributing fluorophores. 3D Caco-2 models treated with EGF had increased glucose consumption, production of lactate, and presence of ATP. FLIM analyses of these cultures revealed a significant shift in the contribution of protein-bound NADH towards free NADH, indicating increased glycolysis-mediated metabolic activity. These data demonstrate that FLIM is suitable for interpreting metabolic changes in 3D in vitro models.
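
    The phasor analysis this record mentions reduces each lifetime decay to two coordinates (g, s) via cosine and sine transforms at the laser repetition frequency; for a mono-exponential decay the lifetime can be read back as s / (g·ω). The sketch below assumes a synthetic mono-exponential decay with idealized sampling and no instrument response, which is far simpler than real FLIM data.

    ```python
    from math import cos, sin, exp, pi

    def phasor(decay, dt, f_rep):
        """Phasor coordinates (g, s) of a fluorescence decay sampled every
        dt seconds, at angular frequency w = 2*pi*f_rep."""
        w = 2 * pi * f_rep
        total = sum(decay)
        g = sum(I * cos(w * k * dt) for k, I in enumerate(decay)) / total
        s = sum(I * sin(w * k * dt) for k, I in enumerate(decay)) / total
        return g, s

    # Synthetic mono-exponential decay, tau = 2.5 ns, 80 MHz repetition rate:
    tau, dt, f = 2.5e-9, 1e-11, 80e6
    decay = [exp(-k * dt / tau) for k in range(1250)]   # one 12.5 ns period
    g, s = phasor(decay, dt, f)
    # For a mono-exponential decay the lifetime is recovered as s / (g * w):
    print(round(s / (g * 2 * pi * f) * 1e9, 2), "ns")
    ```

    Mixtures of fluorophores (e.g. free and protein-bound NADH) fall on the line segment between their pure phasors, which is what makes the metabolic shift in the study readable directly from the phasor plot.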

  13. Applying the Sport Education Model to Tennis

    Science.gov (United States)

    Ayvazo, Shiri

    2009-01-01

    The physical education field abounds with theoretically sound curricular approaches such as fitness education, skill theme approach, tactical approach, and sport education. In an era that emphasizes authentic sport experiences, the Sport Education Model includes unique features that set it apart from other curricular models and can be a valuable…

  14. A novel approach for modeling malaria incidence using complex categorical household data: The minimum message length (MML method applied to Indonesian data

    Directory of Open Access Journals (Sweden)

    Gerhard Visser

    2012-09-01

    Full Text Available We investigated the application of a Minimum Message Length (MML) modeling approach to identify the simplest model that would explain two target malaria incidence variables, incidence in the short term and on the longer-term average, in two areas in Indonesia, based on a range of ecological variables including environmental and socio-economic ones. The approach is suitable for dealing with a variety of problems, such as model complexity and missing values in the data. It can detect weak relations, is resistant to overfitting and can show the way in which many variables, working together, contribute to explaining malaria incidence. This last point is a major strength of the method as it allows many variables to be analysed. Data were obtained at household level by questionnaire for villages in West Timor and Central Java. Data were collected on 26 variables in nine categories: stratum (a village-level variable based on the API/AMI categories), ecology, occupation, preventative measures taken, health care facilities, the immediate environment, household characteristics, socio-economic status and perception of malaria cause. Several models were used and the simplest (best) model, that is the one with the minimum message length, was selected for each area. The results showed that consistent predictors of malaria included combinations of ecology (coastal), preventative measures (clean backyard) and environment (mosquito breeding place, garden and rice cultivation). The models also showed that most of the other variables were not good predictors, and this is discussed in the paper. We conclude that the method has potential for identifying simple predictors of malaria and that it could be used to focus malaria management on combinations of variables rather than relying on single ones that may not be consistently reliable.

  15. Applied groundwater modeling, 2nd Edition

    Science.gov (United States)

    Anderson, Mary P.; Woessner, William W.; Hunt, Randall J.

    2015-01-01

    This second edition is extensively revised throughout with expanded discussion of modeling fundamentals and coverage of advances in model calibration and uncertainty analysis that are revolutionizing the science of groundwater modeling. The text is intended for undergraduate and graduate level courses in applied groundwater modeling and as a comprehensive reference for environmental consultants and scientists/engineers in industry and governmental agencies.

  16. Critical Applied Linguistics: An Evaluative Interdisciplinary Approach in Criticism and Evaluation of Applied Linguistics’ Disciplines

    Directory of Open Access Journals (Sweden)

    H. Davari

    2015-11-01

    Full Text Available The emergence of significant critical approaches and directions in the field of applied linguistics from the mid-1980s onwards has met with various positive and negative reactions. On the basis of their strength and significance, such approaches and directions have challenged some of the claims, principles and assumptions of mainstream approaches. Among them, critical applied linguistics can be highlighted as a new approach developed by the Australian applied linguist Alastair Pennycook, whose aspects, domains and concerns were introduced in his 2001 book. Due to the undeniable importance of this approach, as well as the partial neglect it has received in the Iranian academic setting, this paper first introduces the approach as one that evaluates the various disciplines of applied linguistics through its own specific principles and interests. Then, in order to show its step-by-step application in the evaluation of different disciplines of applied linguistics, with a glance at its significance and appropriateness in Iranian society, two domains, namely English language education and language policy and planning, are introduced and evaluated in order to provide readers with a visible and practical picture of its interdisciplinary nature and evaluative functions. The findings indicate the efficacy of applying this interdisciplinary framework to any language-in-education policy and planning in accordance with the political, social and cultural context of the target society.

  17. Educational software design: applying models of learning

    Directory of Open Access Journals (Sweden)

    Stephen Richards

    1996-12-01

    Full Text Available The model of learning adopted within this paper is the 'spreading ripples' (SR) model proposed by Race (1994). This model was chosen for two important reasons. First, it makes use of accessible ideas and language, and is therefore simple. Second, Race suggests that the model can be used in the design of educational and training programmes (and can thereby be applied to the design of computer-based learning materials).

  18. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...

  19. Effects of stand composition and thinning in mixed-species forests : a modeling approach applied to Douglas-fir and beech

    NARCIS (Netherlands)

    Bartelink, H.H.

    2000-01-01

    Models estimating growth and yield of forest stands provide important tools for forest management. Pure stands have been modeled extensively and successfully for decades; however, relatively few models for mixed-species stands have been developed. A spatially explicit, mechanistic model (COMMIX) is

  20. Applying the WEAP Model to Water Resource

    DEFF Research Database (Denmark)

    Gao, Jingjing; Christensen, Per; Li, Wei

    Water resources assessment is a tool to provide decision makers with an appropriate basis to make informed judgments regarding the objectives and targets to be addressed during the Strategic Environmental Assessment (SEA) process. The study shows how water resources assessment can be applied in SEA in assessing the effects on water resources, using a case study on a Coal Industry Development Plan in an arid region in North Western China. In the case study the WEAP model (Water Evaluation And Planning System) was used to simulate various scenarios using a diversity of technological instruments like irrigation efficiency, treatment and reuse of water. The WEAP model was applied to the Ordos catchment, where it was used for the first time in China. The changes in water resource utilization in the Ordos basin were assessed with the model. It was found that the WEAP model is a useful tool for water resource assessment.

  1. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical...

  2. Applied probability models with optimization applications

    CERN Document Server

    Ross, Sheldon M

    1992-01-01

    Concise advanced-level introduction to stochastic processes that frequently arise in applied probability. The largely self-contained text covers the Poisson process, renewal theory, Markov chains, inventory theory, Brownian motion and continuous-time optimization models, and much more. Problems and references at chapter ends. "Excellent introduction." - Journal of the American Statistical Association. Bibliography. 1970 edition.

  3. Changes in fruit sugar concentrations in response to assimilate supply, metabolism and dilution: a modeling approach applied to peach fruit (Prunus persica).

    Science.gov (United States)

    Génard, M; Lescourret, F; Gomez, L; Habib, R

    2003-04-01

    The influence of assimilate supply, metabolism and dilution on sugar concentrations in the mesocarp of peach (Prunus persica (L.) Batsch) fruit during the main stage of fruit enlargement was analyzed with the SUGAR model of Génard and Souty (1996). The model predicts the partitioning of carbon into sucrose, sorbitol, glucose and fructose in the mesocarp of peach fruit. Based on measured data and the model, we determined values for the relative rates of sugar transformation. These rates were constant, varied with time or varied with relative fruit growth rate, depending on the type of sugar. Equations were derived to describe these rates and incorporated into the SUGAR model. The model simulated the effects of changing assimilate supply and fruit volume on sugar concentrations. The set of equations for the SUGAR model was rewritten to include the three components influencing sugar concentrations: assimilate supply, metabolism and dilution. The sugar types differed in sensitivity to these components. Sucrose was highly sensitive to changes in assimilate supply and to the dilution effect; it was not subject to intense metabolic transformation. Sorbitol was the most important carbohydrate in fruit metabolism, which explains why the sorbitol concentration was always low despite the strong positive effect of assimilate supply. The reducing sugars constituted a transitory storage pool and their concentrations were closely related to metabolism.

  4. Focus Groups: A Practical and Applied Research Approach for Counselors

    Science.gov (United States)

    Kress, Victoria E.; Shoffner, Marie F.

    2007-01-01

    Focus groups are becoming a popular research approach that counselors can use as an efficient, practical, and applied method of gathering information to better serve clients. In this article, the authors describe focus groups and their potential usefulness to professional counselors and researchers. Practical implications related to the use of…

  5. Fuzzy sets, rough sets, and modeling evidence: Theory and Application. A Dempster-Shafer based approach to compromise decision making with multiattributes applied to product selection

    Science.gov (United States)

    Dekorvin, Andre

    1992-01-01

    The Dempster-Shafer theory of evidence is applied to a multiattribute decision making problem whereby the decision maker (DM) must compromise with available alternatives, none of which exactly satisfies his ideal. The decision mechanism is constrained by the uncertainty inherent in the determination of the relative importance of each attribute element and the classification of existing alternatives. The classification of alternatives is addressed through expert evaluation of the degree to which each element is contained in each available alternative. The relative importance of each attribute element is determined through pairwise comparisons of the elements by the decision maker and implementation of a ratio scale quantification method. Then the 'belief' and 'plausibility' that an alternative will satisfy the decision maker's ideal are calculated and combined to rank order the available alternatives. Application to the problem of selecting computer software is given.
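To make the belief/plausibility machinery above concrete, here is a minimal sketch of Dempster's rule of combination on a toy frame of three alternatives. The frame, the two expert mass assignments and all numeric values are invented for illustration; only the combination rule and the Bel/Pl definitions follow the standard theory.

```python
# Toy frame of discernment: three candidate software alternatives (hypothetical).
FRAME = frozenset({"A", "B", "C"})

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) by Dempster's rule."""
    combined = {}
    conflict = 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2  # mass assigned to the empty set
    k = 1.0 - conflict  # normalise by the non-conflicting mass
    return {s: v / k for s, v in combined.items()}

def belief(m, hypothesis):
    """Bel(H): total mass of focal elements contained in H."""
    return sum(v for s, v in m.items() if s <= hypothesis)

def plausibility(m, hypothesis):
    """Pl(H): total mass of focal elements intersecting H."""
    return sum(v for s, v in m.items() if s & hypothesis)

# Hypothetical expert evidence: each expert assigns mass to subsets of FRAME.
expert1 = {frozenset({"A"}): 0.6, FRAME: 0.4}
expert2 = {frozenset({"A", "B"}): 0.7, FRAME: 0.3}

m = dempster_combine(expert1, expert2)
h = frozenset({"A"})
print(round(belief(m, h), 3), round(plausibility(m, h), 3))  # → 0.6 1.0
```

Ranking the alternatives by the resulting Bel/Pl interval is then straightforward.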

  6. Applied research in uncertainty modeling and analysis

    CERN Document Server

    Ayyub, Bilal

    2005-01-01

    Uncertainty has been a concern to engineers, managers, and scientists for many years. For a long time uncertainty has been considered synonymous with random, stochastic, statistic, or probabilistic. Since the early sixties views on uncertainty have become more heterogeneous. In the past forty years numerous tools that model uncertainty, above and beyond statistics, have been proposed by several engineers and scientists. The tool/method to model uncertainty in a specific context should really be chosen by considering the features of the phenomenon under consideration, not independent of what is known about the system and what causes uncertainty. In this fascinating overview of the field, the authors provide broad coverage of uncertainty analysis/modeling and its application. Applied Research in Uncertainty Modeling and Analysis presents the perspectives of various researchers and practitioners on uncertainty analysis and modeling outside their own fields and domain expertise. Rather than focusing explicitly on...

  7. Applied Mathematics, Modelling and Computational Science

    CERN Document Server

    Kotsireas, Ilias; Makarov, Roman; Melnik, Roderick; Shodiev, Hasan

    2015-01-01

    The Applied Mathematics, Modelling, and Computational Science (AMMCS) conference aims to promote interdisciplinary research and collaboration. The contributions in this volume cover the latest research in mathematical and computational sciences, modeling, and simulation as well as their applications in natural and social sciences, engineering and technology, industry, and finance. The 2013 conference, the second in a series of AMMCS meetings, was held August 26–30 and organized in cooperation with AIMS and SIAM, with support from the Fields Institute in Toronto, and Wilfrid Laurier University. There were many young scientists at AMMCS-2013, both as presenters and as organizers. This proceedings contains refereed papers contributed by the participants of the AMMCS-2013 after the conference. This volume is suitable for researchers and graduate students, mathematicians and engineers, industrialists, and anyone who would like to delve into the interdisciplinary research of applied and computational mathematics ...

  8. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment.

  9. Classical mechanics approach applied to analysis of genetic oscillators.

    Science.gov (United States)

    Vasylchenkova, Anastasiia; Mraz, Miha; Zimic, Nikolaj; Moskon, Miha

    2016-04-05

    Biological oscillators present a fundamental part of several regulatory mechanisms that control the response of various biological systems. Several analytical approaches for their analysis have been reported recently. They are, however, limited to specific oscillator topologies and/or give only qualitative answers, i.e., whether the dynamics of an oscillator is oscillatory for a given parameter space. Here we present a general analytical approach that can be applied to the analysis of biological oscillators. It relies on the projection of biological systems onto classical mechanics systems. The approach is able to provide relatively accurate results regarding the type of behaviour the system exhibits (i.e. oscillatory or not) and the periods of potential oscillations, without the need to conduct expensive numerical simulations. We demonstrate and verify the proposed approach on three different implementations of the amplified negative feedback oscillator.

  10. Applying computer simulation models as learning tools in fishery management

    Science.gov (United States)

    Johnson, B.L.

    1995-01-01

    Computer models can be powerful tools for addressing many problems in fishery management, but uncertainty about how to apply models and how they should perform can lead to a cautious approach to modeling. Within this approach, we expect models to make quantitative predictions but only after all model inputs have been estimated from empirical data and after the model has been tested for agreement with an independent data set. I review the limitations to this approach and show how models can be more useful as tools for organizing data and concepts, learning about the system to be managed, and exploring management options. Fishery management requires deciding what actions to pursue to meet management objectives. Models do not make decisions for us but can provide valuable input to the decision-making process. When empirical data are lacking, preliminary modeling with parameters derived from other sources can help determine priorities for data collection. When evaluating models for management applications, we should attempt to define the conditions under which the model is a useful, analytical tool (its domain of applicability) and should focus on the decisions made using modeling results, rather than on quantitative model predictions. I describe an example of modeling used as a learning tool for the yellow perch Perca flavescens fishery in Green Bay, Lake Michigan.

  11. Applying Digital Sensor Technology: A Problem-Solving Approach

    Science.gov (United States)

    Seedhouse, Paul; Knight, Dawn

    2016-01-01

    There is currently an explosion in the number and range of new devices coming onto the technology market that use digital sensor technology to track aspects of human behaviour. In this article, we present and exemplify a three-stage model for the application of digital sensor technology in applied linguistics that we have developed, namely,…

  12. Applied data analysis and modeling for energy engineers and scientists

    CERN Document Server

    Reddy, T Agami

    2011-01-01

    ""Applied Data Analysis and Modeling for Energy Engineers and Scientists"" discusses mathematical models, data analysis, and decision analysis in modeling. The approach taken in this volume focuses on the modeling and analysis of thermal systems in an engineering environment, while also covering a number of other critical areas. Other material covered includes the tools that researchers and engineering professionals will need in order to explore different analysis methods, use critical assessment skills and reach sound engineering conclusions. The book also covers process and system design and

  13. Information-theoretic model selection applied to supernovae data

    CERN Document Server

    Biesiada, M

    2007-01-01

    There are several different theoretical ideas invoked to explain dark energy, with relatively little guidance as to which one of them might be right. Therefore the emphasis of ongoing and forthcoming research in this field shifts from estimating specific parameters of a cosmological model to model selection. In this paper we apply an information-theoretic model selection approach based on the Akaike criterion as an estimator of the Kullback-Leibler entropy. In particular, we present the proper way of ranking the competing models based on Akaike weights (in Bayesian language, the posterior probabilities of the models). Out of many particular models of dark energy we focus on four: quintessence, quintessence with a time-varying equation of state, the brane-world model and the generalized Chaplygin gas model, and test them on Riess' Gold sample. As a result we obtain that the best model, in terms of the Akaike criterion, is the quintessence model. The odds suggest that although there exist differences in the support given to specific scenario...
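As a minimal illustration of Akaike-weight ranking, the sketch below computes AIC = χ² + 2k and the weights w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2) for four dark-energy models. The χ² values and parameter counts are invented placeholders, not the actual fits to Riess' Gold sample.

```python
import math

# Hypothetical best-fit chi-square values and free-parameter counts k for four
# dark-energy models (illustrative numbers only, not from the paper).
fits = {
    "quintessence":         (177.1, 2),
    "var-EoS quintessence": (176.5, 3),
    "brane-world":          (178.3, 2),
    "Chaplygin gas":        (177.9, 3),
}

# AIC = chi^2 + 2k; Akaike weights w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2)
aic = {name: chi2 + 2 * k for name, (chi2, k) in fits.items()}
aic_min = min(aic.values())
raw = {name: math.exp(-(a - aic_min) / 2) for name, a in aic.items()}
total = sum(raw.values())
weights = {name: w / total for name, w in raw.items()}

best = max(weights, key=weights.get)
print(best, {n: round(w, 3) for n, w in weights.items()})
```

The weights sum to one and can be read directly as the relative support (odds) for each model given the data and the candidate set.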

  14. Forecast model applied to quality control with autocorrelational data

    Directory of Open Access Journals (Sweden)

    Adriano Mendonça Souza

    2013-11-01

    Full Text Available This research applies prediction models to industrial processes in order to check process stability by means of control charts applied to the residuals from linear modeling. The data used for analysis refer to moisture content, permeability and green compression strength (RCV), from the green sand molding casting process at a company operating in casting and machining, for which a dynamic multivariate regression model was fitted. As the observations were autocorrelated, it was necessary to seek a mathematical model that produces independent and identically distributed residuals. The models found make it possible to understand the behavior of the variables, assisting in forecasting and in monitoring the process. Thus, it can be stated that the moisture content is very unstable compared to the other variables.
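The idea of control-charting the residuals of a fitted model can be sketched as follows: fit a simple autoregressive stand-in for the dynamic regression, then apply Shewhart 3-sigma limits to the residuals, which are approximately independent once the autocorrelation has been removed. The data series and the AR(1) choice below are illustrative assumptions, not the paper's model.

```python
import statistics

# Hypothetical autocorrelated process readings (e.g. moisture content, %).
x = [3.1, 3.3, 3.2, 3.5, 3.4, 3.6, 3.5, 3.7, 3.6, 3.8, 3.7, 5.0, 3.8, 3.9]

# Fit a first-order autoregressive model x_t = a + b * x_{t-1} by least squares
# (a stand-in for the paper's dynamic multivariate regression).
prev, curr = x[:-1], x[1:]
n = len(prev)
mp, mc = sum(prev) / n, sum(curr) / n
b = sum((p - mp) * (c - mc) for p, c in zip(prev, curr)) / sum((p - mp) ** 2 for p in prev)
a = mc - b * mp

# Control-chart the residuals: once the model removes the autocorrelation,
# standard Shewhart 3-sigma limits apply.
residuals = [c - (a + b * p) for p, c in zip(prev, curr)]
centre = statistics.mean(residuals)
sigma = statistics.stdev(residuals)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma
out_of_control = [i + 1 for i, r in enumerate(residuals) if r > ucl or r < lcl]
print(out_of_control)
```

Charting raw autocorrelated observations instead of residuals would make these limits unreliable, which is the motivation for the residual-based approach.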

  15. Image-Based Learning Approach Applied to Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    J. C. Chimal-Eguía

    2012-06-01

    Full Text Available In this paper, a new learning approach based on time-series image information is presented. In order to implement this new learning technique, a novel time-series input data representation is also defined. This input data representation is based on information obtained by dividing the image axes into boxes. The difference between this new input data representation and the classical one is that this technique is not time-dependent. This new information is implemented in the new Image-Based Learning Approach (IBLA), and by means of a probabilistic mechanism this learning technique is applied to the interesting problem of time series forecasting. The experimental results indicate that by using the methodology proposed in this article, it is possible to obtain better results than with classical techniques such as artificial neural networks and support vector machines.

  16. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  17. Terahertz spectroscopy applied to food model systems

    DEFF Research Database (Denmark)

    Møller, Uffe

    Water plays a crucial role in the quality of food. Apart from the natural water content of a food product, the state of that water is very important. Water can be found integrated into the biological material or it can be added during production of the product. Currently it is difficult to differentiate between these types of water in subsequent quality controls. This thesis describes terahertz time-domain spectroscopy applied on aqueous food model systems, with particular focus on ethanol-water mixtures and confined water pools in inverse micelles.

  18. Online traffic flow model applying dynamic flow-density relation

    CERN Document Server

    Kim, Y

    2002-01-01

    This dissertation describes a new approach of the online traffic flow modelling based on the hydrodynamic traffic flow model and an online process to adapt the flow-density relation dynamically. The new modelling approach was tested based on the real traffic situations in various homogeneous motorway sections and a motorway section with ramps and gave encouraging simulation results. This work is composed of two parts: first the analysis of traffic flow characteristics and second the development of a new online traffic flow model applying these characteristics. For homogeneous motorway sections traffic flow is classified into six different traffic states with different characteristics. Delimitation criteria were developed to separate these states. The hysteresis phenomena were analysed during the transitions between these traffic states. The traffic states and the transitions are represented on a states diagram with the flow axis and the density axis. For motorway sections with ramps the complicated traffic fl...
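For readers unfamiliar with flow-density relations, the classic static Greenshields form q(k) = v_f · k · (1 - k / k_jam) is sketched below. The dissertation's contribution is to adapt such a relation dynamically online, which this toy snippet does not attempt; the free-flow speed and jam density are assumed values.

```python
# Classic Greenshields flow-density relation, used here only as a static
# stand-in for the dynamically adapted relation developed in the dissertation.
V_FREE = 120.0   # free-flow speed, km/h (assumed)
K_JAM = 150.0    # jam density, veh/km (assumed)

def flow(k):
    """Traffic flow q (veh/h) at density k (veh/km): q = v_f * k * (1 - k / k_jam)."""
    return V_FREE * k * (1.0 - k / K_JAM)

# Under this parabolic relation, capacity occurs at the critical density
# k_c = k_jam / 2, which separates free-flow from congested traffic states.
k_c = K_JAM / 2.0
capacity = flow(k_c)
print(k_c, capacity)  # → 75.0 4500.0
```

Online adaptation would replace the fixed V_FREE and K_JAM with parameters re-estimated from current detector data, so the diagram tracks the prevailing traffic state.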

  19. Applying a Problem Based Learning Approach to Land Management Education

    DEFF Research Database (Denmark)

    Enemark, Stig

    ... engineering focus toward adopting an interdisciplinary and problem-based approach to ensure that academic programmes can cope with the wide range of land administration functions and challenges. An interdisciplinary approach to surveying education calls for the need to address issues and problems in a real-world context. The combination of different disciplines can be taught through a “learning-by-doing approach”. Problem solving skills can be taught through a project-oriented approach to surveying education with a focus on developing skills for “learning to learn”. The basic principles of this educational model are presented using the surveying programme at Aalborg University as an example. This paper is work in progress and draws from previous research. The paper supports the lecture on Problem Based Learning given at NUST 3 March 2016.

  20. A Cooperation Model Applied in a Kindergarten

    Directory of Open Access Journals (Sweden)

    Jose I. Rodriguez

    2011-10-01

    Full Text Available The need for collaboration in a global world has become a key success factor for many organizations and individuals. However, in several regions and organizations of the world it has not yet taken hold. One of the settings where major obstacles to collaboration occur is the business arena, mainly because of the competitive belief that cooperation could hurt profitability. We have found such behavior in a wide variety of countries, in advanced and developing economies. Such cultural behaviors or traits lead entrepreneurs to work in isolation, avoiding the possibility of building clusters to promote regional development. The need to improve the essential abilities that constitute cooperation is evident. It is also very difficult to change such conduct in adults, so we decided to work with children to prepare future generations to live in a cooperative world, so badly hit by greed and individualism nowadays. We have validated that working with children at an early age improves such behavior. This paper develops a model to enhance the essential abilities needed to improve cooperation. The model has been validated by applying it at a kindergarten school.

  1. The average acceleration approach applied to gravity coefficients recovery based on GOCE orbits

    Directory of Open Access Journals (Sweden)

    Huang Qiang

    2012-11-01

    Full Text Available The average acceleration approach was applied to recover a gravity field model, Model_ACA, from GOCE precise science orbits from September 2 to November 2, 2010, using a so-called sequential least-squares adjustment. The model was compared with other gravity field models based on CHAMP, GRACE and GOCE data. The results show that the model is superior to gravity field models based on CHAMP, and more accurate than other international gravity field models based on GOCE data alone up to degree 80. The degree geoid height of Model_ACA reaches 3 cm up to degree and order 90.
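The core of the average acceleration approach is to differentiate the orbit positions twice by central differences and then, in the real method, equate the result to the modelled gravitational acceleration in a least-squares adjustment for the gravity field coefficients. The sketch below checks only the differentiation step on a synthetic one-dimensional signal; the sampling interval and toy orbit are assumptions.

```python
import math

# Approximate the second time derivative of a position series by central
# differences: a_i ≈ (r_{i-1} - 2 r_i + r_{i+1}) / dt^2.
DT = 10.0  # sampling interval in seconds (assumed)
t = [i * DT for i in range(200)]
pos = [7000e3 * math.cos(1.1e-3 * ti) for ti in t]  # toy orbital coordinate, m

acc = [(pos[i - 1] - 2 * pos[i] + pos[i + 1]) / DT**2
       for i in range(1, len(pos) - 1)]

# Compare with the analytic second derivative at an interior epoch; the
# discretisation error of the central difference is O(dt^2).
i = 100
true_acc = -7000e3 * (1.1e-3) ** 2 * math.cos(1.1e-3 * t[i])
print(abs(acc[i - 1] - true_acc))  # small discretisation error
```

In the real adjustment, each differenced acceleration vector becomes one observation equation linear in the spherical harmonic coefficients, which is what makes a sequential least-squares formulation natural.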

  2. A new kinetic biphasic approach applied to biodiesel process intensification

    Energy Technology Data Exchange (ETDEWEB)

    Russo, V.; Tesser, R.; Di Serio, M.; Santacesaria, E. [Naples Univ. (Italy). Dept. of Chemistry

    2012-07-01

    Many different papers have been published on the kinetics of the transesterification of vegetable oil with methanol in the presence of alkaline catalysts to produce biodiesel. All the proposed approaches are based on the assumption of a pseudo-monophasic system. As a consequence, some experimental observations cannot be described. For the reaction performed in batch conditions, for example, the monophasic approach cannot reproduce the different plateaus obtained with different amounts of catalyst, or the induction time observed at low stirring rates. Moreover, it has been observed in continuous reactors that micromixing has a dramatic effect on the reaction rate. In this respect, we have recently observed that it is possible to obtain complete conversion to biodiesel in less than 10 seconds of reaction time. This observation is also confirmed by other authors using different types of reactors, such as static mixers, micro-reactors, oscillatory flow reactors, cavitational reactors, microwave reactors and centrifugal contactors. In this work we show that a recently proposed biphasic kinetic approach is able to describe all the aforementioned aspects that cannot be described by the monophasic kinetic model. In particular, we show that the biphasic kinetic model can describe both the induction time observed in batch reactors at low stirring rates and the very high conversions obtainable in a micro-channel reactor. The adopted biphasic kinetic model is based on a reliable reaction mechanism that is validated by the experimental evidence reported in this work. (orig.)

  3. Applying Model Based Systems Engineering to NASA's Space Communications Networks

    Science.gov (United States)

    Bhasin, Kul; Barnes, Patrick; Reinert, Jessica; Golden, Bert

    2013-01-01

    System engineering practices for complex systems and networks now require that requirements, architecture, and concept-of-operations product development teams simultaneously harmonize their activities to provide timely, useful and cost-effective products. When dealing with complex systems of systems, traditional systems engineering methodology quickly falls short of achieving project objectives. This approach is encumbered by the use of a number of disparate hardware and software tools, spreadsheets and documents to grasp the concept of the network design and operation. In the case of NASA's space communication networks, since the networks are geographically distributed, and so are their subject matter experts, the team is challenged to create a common language and tools to produce its products. Using Model Based Systems Engineering methods and tools allows for a unified representation of the system in a model that enables a highly interrelated level of detail. To date, the Program System Engineering (PSE) team has been able to model each network from its top-level operational activities and system functions down to the atomic level through relational modeling decomposition. These models allow for a better understanding of the relationships between NASA's stakeholders, internal organizations, and impacts to all related entities due to integration and sustainment of existing systems. Understanding the existing systems is essential to an accurate and detailed study of the integration options being considered. In this paper, we identify the challenges the PSE team faced in its quest to unify complex legacy space communications networks and their operational processes. We describe the initial approaches undertaken and the evolution toward model based systems engineering applied to produce Space Communication and Navigation (SCaN) PSE products. We will demonstrate the practice of Model Based System Engineering applied to integrating space communication networks and the summary of its

  4. Socio-optics: optical knowledge applied in modeling social phenomena

    Science.gov (United States)

    Chisleag, Radu; Chisleag Losada, Ioana-Roxana

    2011-05-01

    The term "Socio-optics" (as a natural part of Socio-physics), is rather not found in literature or at Congresses. In Optics books, there are not made references to optical models applied to explain social phenomena, in spite of Optics relying on the duality particle-wave which seems convenient to model relationships among society and its members. The authors, who have developed a few models applied to explain social phenomena based on knowledge in Optics, along with a few other models applying, in Social Sciences, knowledge from other branches of Physics, give their own examples of such optical models, f. e., of relationships among social groups and their sub-groups, by using kowledge from partially coherent optical phenomena or to explain by tunnel effect, the apparently impossible penetration of social barriers by individuals. They consider that the term "Socio-optics" may come to life. There is mentioned the authors' expertise in stimulating Socio-optics approach by systematically asking students taken courses in Optics to find applications of the newly got Wave and Photon Optics knowledge, to model social and even everyday life phenomena, eventually engaging in such activities other possibly interested colleagues.

  5. Guiding Mobile Robot by Applying Fuzzy Approach on Sonar Sensors

    Directory of Open Access Journals (Sweden)

    Ahmed Rahman Jasim

    2010-01-01

    Full Text Available This study describes how fuzzy logic control (FLC) can be applied to the sonars of a mobile robot. The fuzzy logic approach affects the navigation of mobile robots in a partially known environment, as used in different industrial and societal applications. Fuzzy logic provides a mechanism for combining sensor data from all sonar sensors, each of which presents different information. The FLC approach is implemented as a fuzzy-decision-making type of fuzzy logic controller. The proposed controller is responsible for obstacle avoidance while the mobile robot travels through a map from a home point to a goal point. The FLC is built as a subprogram based on the intelligent architecture (IA). The software uses the Advanced Robotics Interface for Applications (ARIA), is programmed with the C++ package (Visual C++ .NET), and networking software is used to set up a wireless TCP/IP Ethernet-to-serial connection between the robot and a PC. The results show that the developed mobile robot travels successfully from one location to another and reaches its goal after avoiding all obstacles located in its way. The mobile robot platform is a Pioneer 3-DX equipped with sonar sensors.
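A minimal Sugeno-style sketch of how a sonar reading can be fuzzified and combined into a steering command is shown below. The membership functions, the two-rule base and the output angles are invented for illustration and are not the controller described in the paper.

```python
# Minimal Sugeno-style fuzzy rule sketch for sonar-based obstacle avoidance.
# Membership shapes, rule set and output angles are illustrative assumptions.

def mu_near(d):
    """Membership of distance d (cm) in 'near': 1 below 30 cm, 0 above 100 cm,
    linear in between (triangular shoulder)."""
    if d <= 30:
        return 1.0
    if d >= 100:
        return 0.0
    return (100 - d) / 70.0

def mu_far(d):
    """Complementary membership in 'far'."""
    return 1.0 - mu_near(d)

def steering(front_distance):
    """Crisp turn angle (deg) from two rules, defuzzified by weighted average:
       IF front is near THEN turn 45 deg;  IF front is far THEN turn 0 deg."""
    w_near, w_far = mu_near(front_distance), mu_far(front_distance)
    return (w_near * 45.0 + w_far * 0.0) / (w_near + w_far)

print(steering(20.0), steering(65.0), steering(150.0))  # → 45.0 22.5 0.0
```

A real controller would combine several sonar readings (left, front, right) with a larger rule base, but the fuzzify-infer-defuzzify pipeline is the same.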

  6. Practical approach to apply range image sensors in machine automation

    Science.gov (United States)

    Moring, Ilkka; Paakkari, Jussi

    1993-10-01

    In this paper we propose a practical approach to apply range imaging technology in machine automation. The applications we are especially interested in are industrial heavy-duty machines like paper roll manipulators in harbor terminals, harvesters in forests and drilling machines in mines. Characteristic of these applications is that the sensing system has to be fast, mid-ranging, compact, robust, and relatively cheap. On the other hand the sensing system is not required to be generic with respect to the complexity of scenes and objects or number of object classes. The key in our approach is that just a limited range data set or as we call it, a sparse range image is acquired and analyzed. This makes both the range image sensor and the range image analysis process more feasible and attractive. We believe that this is the way in which range imaging technology will enter the large industrial machine automation market. In the paper we analyze as a case example one of the applications mentioned and, based on that, we try to roughly specify the requirements for a range imaging based sensing system. The possibilities to implement the specified system are analyzed based on our own work on range image acquisition and interpretation.

  7. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  8. Applying a new ensemble approach to estimating stock status of marine fisheries around the world

    DEFF Research Database (Denmark)

    Rosenberg, Andrew A.; Kleisner, Kristin M.; Afflerbach, Jamie

    2017-01-01

    The exploitation status of marine fisheries stocks worldwide is of critical importance for food security, ecosystem conservation, and fishery sustainability. Applying a suite of data-limited methods to global catch data, combined through an ensemble modeling approach, we provide quantitative esti...

  9. Geothermal potential assessment for a low carbon strategy : A new systematic approach applied in southern Italy

    NARCIS (Netherlands)

    Trumpy, E.; Botteghi, S.; Caiozzi, F.; Donato, A.; Gola, G.; Montanari, D.; Pluymaekers, M. P D; Santilano, A.; van Wees, J. D.; Manzella, A.

    2016-01-01

    In this study a new approach to geothermal potential assessment was set up and applied in four regions in southern Italy. Our procedure, VIGORThermoGIS, relies on the volume method of assessment and uses a 3D model of the subsurface to integrate thermal, geological and petro-physical data. The metho


  13. Molecular modeling: An open invitation for applied mathematics

    Science.gov (United States)

    Mezey, Paul G.

    2013-10-01

    Molecular modeling methods provide a very wide range of challenges for innovative mathematical and computational techniques, where high dimensionality, large data sets, and complicated interrelations often imply a multitude of iterative approximations. The physical and chemical basis of these methodologies involves quantum mechanics, with several non-intuitive aspects where classical interpretations and classical analogies are often misleading or outright wrong. Hence, instead of the everyday, common-sense approaches that work so well in engineering, in molecular modeling one often needs to rely on rather abstract mathematical constraints and conditions, again emphasizing the field's high reliance on applied mathematics. Yet the interdisciplinary nature of molecular modeling also generates some inertia and a perhaps too conservative reliance on tried and tested methodologies, at least partially caused by less than up-to-date involvement in the newest developments in applied mathematics. It is expected that as more applied mathematicians take up the challenge of employing the latest advances of their field in molecular modeling, important breakthroughs may follow. In this presentation some of the current challenges of molecular modeling are discussed.

  14. The applying stakeholder approach to strategic management of territories development

    Directory of Open Access Journals (Sweden)

    Ilshat Azamatovich Tazhitdinov

    2013-06-01

    Full Text Available In this paper, aspects of the strategic management of the socioeconomic development of territories are discussed in terms of the stakeholder approach. The author's interpretation of the concept of a sub-region stakeholder is proposed, along with a classification into stakeholders internal and external to the territorial socioeconomic system at the sub-regional level. The types of stakeholder interests and resources in the sub-region are identified; correlating interests with resources makes it possible to determine groups (alliances) of stakeholders that ensure a balance of interests depending on the specific objectives of the association. A conceptual stakeholder-agent model for managing strategic territorial development within the hierarchical system «region — sub-region — municipal formation» is proposed. All stakeholders are considered influence agents directing their own resources to provide a comprehensive approach to managing territorial development. Interaction among the influence agents of the «region — sub-region — municipal formation» system occurs both vertically and horizontally through the initialization, development and implementation of the sub-region's strategic documents. Vertical interaction occurs between stakeholders such as state and municipal authorities, acting as a guideline, and horizontal interaction among the rest, acting as a partnership. Within the proposed model, concurrent engineering is implemented as a form of inter-municipal strategic cooperation in which local governments jointly form and analyze a set of alternative project activities in the sub-region in order to choose the best options. The proposed approach was tested in the development of the medium-term comprehensive program of socioeconomic development of the Zauralye and North-East sub-regions of the Republic of Bashkortostan (2011–2015).

  15. A general diagnostic model applied to language testing data.

    Science.gov (United States)

    von Davier, Matthias

    2008-11-01

    Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.
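    The record above lists the Rasch model as a special case of the GDM. As a sketch of that special case (standard textbook material, not code from the paper), the following estimates a respondent's ability by Newton-Raphson given known, illustrative item difficulties.

```python
import math

# Rasch-model sketch: P(correct) is logistic in (theta - b), where theta is
# the respondent's ability and b the item difficulty. Difficulties and
# responses used below are illustrative assumptions.

def p_correct(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, difficulties, iters=50):
    """ML estimate of theta for a 0/1 response vector.

    Assumes a mixed response pattern (all-correct or all-wrong vectors
    have no finite ML estimate).
    """
    theta = 0.0
    for _ in range(iters):
        ps = [p_correct(theta, b) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, ps))   # score function
        info = sum(p * (1 - p) for p in ps)                # Fisher information
        theta += grad / info                               # Newton step
    return theta
```

At the ML estimate the score equation sum(x_i - p_i) = 0 holds, so the expected number correct matches the observed number correct.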

  16. Views on Montessori Approach by Teachers Serving at Schools Applying the Montessori Approach

    Science.gov (United States)

    Atli, Sibel; Korkmaz, A. Merve; Tastepe, Taskin; Koksal Akyol, Aysel

    2016-01-01

    Problem Statement: Further studies on Montessori teachers are required on the grounds that the Montessori approach, which, having been applied throughout the world, holds an important place in the alternative education field. Yet it is novel for Turkey, and there are only a limited number of studies on Montessori teachers in Turkey. Purpose of…

  17. Agent-Based Modelling applied to 5D model of the HIV infection

    Directory of Open Access Journals (Sweden)

    Toufik Laroum

    2016-12-01

    The simplest model was the 3D mathematical model, but the complexity of this phenomenon and the diversity of the cells and actors affecting its evolution require new approaches, such as the multi-agent approach we apply in this paper. The results of our simulator on the 5D model are promising, as they are consistent with biological knowledge. The proposed approach is therefore well suited to the study of population dynamics in general and could help in understanding and predicting the dynamics of HIV infection.

  18. Applying mechanistic models in bioprocess development

    DEFF Research Database (Denmark)

    Lencastre Fernandes, Rita; Bodla, Vijaya Krishna; Carlquist, Magnus

    2013-01-01

    models should be combined with proper model analysis tools, such as uncertainty and sensitivity analysis. When assuming distributed inputs, the resulting uncertainty in the model outputs can be decomposed using sensitivity analysis to determine which input parameters are responsible for the major part...... of the output uncertainty. Such information can be used as guidance for experimental work; i.e., only parameters with a significant influence on model outputs need to be determined experimentally. The use of mechanistic models and model analysis tools is demonstrated in this chapter. As a practical case study......, experimental data from Saccharomyces cerevisiae fermentations are used. The data are described with the well-known model of Sonnleitner and Käppeli (Biotechnol Bioeng 28:927-937, 1986) and the model is analyzed further. The methods used are generic, and can be transferred easily to other, more complex case...

  19. Applied Creativity: The Creative Marketing Breakthrough Model

    Science.gov (United States)

    Titus, Philip A.

    2007-01-01

    Despite the increasing importance of personal creativity in today's business environment, few conceptual creativity frameworks have been presented in the marketing education literature. The purpose of this article is to advance the integration of creativity instruction into marketing classrooms by presenting an applied creative marketing…

  20. Applying Modeling Tools to Ground System Procedures

    Science.gov (United States)

    Di Pasquale, Peter

    2012-01-01

    As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.

  1. Applying MDL to Learning Best Model Granularity

    CERN Document Server

    Gao, Q; Vitanyi, P; Gao, Qiong; Li, Ming; Vitanyi, Paul

    2000-01-01

    The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov complexity. We test how the theory behaves in practice on a general problem in model selection: that of learning the best model granularity. The performance of a model depends critically on the granularity, for example the choice of precision of the parameters. Too high precision generally involves modeling of accidental noise and too low precision may lead to confusion of models that should be distinguished. This precision is often determined ad hoc. In MDL the best model is the one that most compresses a two-part code of the data set: this embodies ``Occam's Razor.'' In two quite different experimental settings the theoretical value determined using MDL coincides with the best value found experimentally. In the first experiment the task is to recognize isolated handwritten characters in one subject's handwriting, irrespective of size and orientation. Based on a new modification of elastic...
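    The selection principle the abstract describes, picking the granularity that minimizes a two-part code length, can be sketched in a few lines (NumPy assumed; the crude code-length formula below, 0.5*log2(n) bits per parameter plus a Gaussian residual term, is a standard rough approximation, not the exact coding scheme of the paper):

```python
import numpy as np

# MDL sketch: choose the polynomial degree whose two-part code length
# L(model) + L(data | model) is smallest. Signal and formula are
# illustrative assumptions.

def description_length(x, y, degree):
    n = len(x)
    coeffs = np.polyfit(x, y, degree)
    sse = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    model_bits = 0.5 * (degree + 1) * np.log2(n)   # cost of the parameters
    data_bits = 0.5 * n * np.log2(sse / n)         # cost of the residuals
    return model_bits + data_bits

# Deterministic test signal: a quadratic plus a small alternating "noise".
x = np.linspace(-3.0, 3.0, 60)
y = 1.0 + 2.0 * x + 0.5 * x ** 2 + 0.3 * (-1.0) ** np.arange(60)

best = min(range(6), key=lambda d: description_length(x, y, d))
```

On this signal the code length is minimized at degree 2: lower degrees pay heavily in the data term, while higher degrees pay in the parameter term without compressing the residuals.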

  2. Biplot models applied to cancer mortality rates.

    Science.gov (United States)

    Osmond, C

    1985-01-01

    "A graphical method developed by Gabriel to display the rows and columns of a matrix is applied to tables of age- and period-specific cancer mortality rates. It is particularly useful when the pattern of age-specific rates changes with time. Trends in age-specific rates and changes in the age distribution are identified as projections. Three examples [from England and Wales] are given."
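    Gabriel's biplot can be sketched as a rank-2 SVD of the centered rate matrix (NumPy assumed; the age-by-period values below are synthetic, not the England and Wales data from the record):

```python
import numpy as np

# Biplot sketch: factor the column-centered age x period matrix by SVD and
# keep two components; row markers G and column markers H then satisfy
# G @ H.T == centered matrix when the matrix has rank <= 2.

def biplot_markers(M, k=2):
    """Return rank-k row and column markers for the column-centered matrix."""
    C = M - M.mean(axis=0)                 # center each column (period)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    G = U[:, :k] * s[:k]                   # row (age-group) markers
    H = Vt[:k].T                           # column (period) markers
    return G, H, C

# Synthetic log-rates: an age effect plus a period trend whose strength
# varies by age -- a rank-2 structure that a biplot recovers exactly.
age = np.array([1.0, 2.0, 3.5, 5.0, 7.0])      # 5 age groups
trend = np.array([0.0, 0.1, 0.2, 0.3])          # 4 periods
slope = np.array([0.5, 0.6, 0.9, 1.2, 1.5])     # age-specific trend strength
M = age[:, None] + slope[:, None] * trend[None, :]

G, H, C = biplot_markers(M)
```

Plotting the rows of G and of H in one plane displays age groups and periods together; changes of the age-specific pattern over time appear as the projections the abstract mentions.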

  3. Applying artificial intelligence to clinical guidelines: the GLARE approach.

    Science.gov (United States)

    Terenziani, Paolo; Montani, Stefania; Bottrighi, Alessio; Molino, Gianpaolo; Torchio, Mauro

    2008-01-01

    We present GLARE, a domain-independent system for acquiring, representing and executing clinical guidelines (GL). GLARE is characterized by the adoption of Artificial Intelligence (AI) techniques in the definition and implementation of the system. First of all, a high-level and user-friendly knowledge representation language has been designed. Second, a user-friendly acquisition tool, which provides expert physicians with various forms of help, has been implemented. Third, a tool for executing GL on a specific patient has been made available. At all the levels above, advanced AI techniques have been exploited, in order to enhance flexibility and user-friendliness and to provide decision support. Specifically, this chapter focuses on the methods we have developed in order to cope with (i) automatic resource-based adaptation of GL, (ii) representation and reasoning about temporal constraints in GL, (iii) decision making support, and (iv) model-based verification. We stress that, although we have devised such techniques within the GLARE project, they are mostly system-independent, so that they might be applied to other guideline management systems.

  4. Applied mathematics: Models, Discretizations, and Solvers

    Institute of Scientific and Technical Information of China (English)

    D.E. Keyes

    2007-01-01

    Computational plasma physicists inherit decades of developments in mathematical models, numerical algorithms, computer architecture, and software engineering, whose recent coming together marks the beginning of a new era of large-scale simulation.

  5. Applied modelling and computing in social science

    CERN Document Server

    Povh, Janez

    2015-01-01

    In social science, outstanding results are yielded by advanced simulation methods based on state-of-the-art software technologies and an appropriate combination of qualitative and quantitative methods. This book presents examples of successful applications of modelling and computing in social science: business and logistics process simulation and optimization, deeper knowledge extraction from big data, better understanding and prediction of social behaviour, and modelling of health and environmental changes.

  6. Applying the community partnership approach to human biology research.

    Science.gov (United States)

    Ravenscroft, Julia; Schell, Lawrence M; Cole, Tewentahawih'tha'

    2015-01-01

    Contemporary human biology research employs a unique skillset for biocultural analysis. This skillset is highly appropriate for the study of health disparities because disparities result from the interaction of social and biological factors over one or more generations. Health disparities research almost always involves disadvantaged communities owing to the relationship between social position and health in stratified societies. Successful research with disadvantaged communities involves a specific approach, the community partnership model, which creates a relationship beneficial for researcher and community. Paramount is the need for trust between partners. With trust established, partners share research goals, agree on research methods and produce results of interest and importance to all partners. Results are shared with the community as they are developed; community partners also provide input on analyses and interpretation of findings. This article describes a partnership-based, 20 year relationship between community members of the Akwesasne Mohawk Nation and researchers at the University at Albany. As with many communities facing health disparity issues, research with Native Americans and indigenous peoples generally is inherently politicized. For Akwesasne, the contamination of their lands and waters is an environmental justice issue in which the community has faced unequal exposure to, and harm by environmental toxicants. As human biologists engage in more partnership-type research, it is important to understand the long term goals of the community and what is at stake so the research circle can be closed and 'helicopter' style research avoided.

  7. The Levels of Conceptual Interoperability Model: Applying Systems Engineering Principles to M&S

    CERN Document Server

    WANG, Wenguang; WANG, Weiping

    2009-01-01

    This paper describes the use of the Levels of Conceptual Interoperability Model (LCIM) as a framework for conceptual modeling and its descriptive and prescriptive uses. LCIM is applied to show its potential and shortcomings in the current simulation interoperability approaches, in particular the High Level Architecture (HLA) and Base Object Models (BOM). It emphasizes the need to apply rigorous engineering methods and principles and replace ad-hoc approaches.

  8. Applying Machine Trust Models to Forensic Investigations

    Science.gov (United States)

    Wojcik, Marika; Venter, Hein; Eloff, Jan; Olivier, Martin

    Digital forensics involves the identification, preservation, analysis and presentation of electronic evidence for use in legal proceedings. In the presence of contradictory evidence, forensic investigators need a means to determine which evidence can be trusted. This is particularly true in a trust model environment where computerised agents may make trust-based decisions that influence interactions within the system. This paper focuses on the analysis of evidence in trust-based environments and the determination of the degree to which evidence can be trusted. The trust model proposed in this work may be implemented in a tool for conducting trust-based forensic investigations. The model takes into account the trust environment and parameters that influence interactions in a computer network being investigated. Also, it allows for crimes to be reenacted to create more substantial evidentiary proof.

  9. Multistructure Statistical Model Applied To Factor Analysis

    Science.gov (United States)

    Bentler, Peter M.

    1976-01-01

    A general statistical model for the multivariate analysis of mean and covariance structures is described. Matrix calculus is used to develop the statistical aspects of one new special case in detail. This special case separates the confounding of principal components and factor analysis. (DEP)

  10. Applying waste logistics modeling to regional planning

    Energy Technology Data Exchange (ETDEWEB)

    Holter, G.M.; Khawaja, A.; Shaver, S.R.; Peterson, K.L.

    1995-05-01

    Waste logistics modeling is a powerful analytical technique that can be used for effective planning of future solid waste storage, treatment, and disposal activities. Proper waste management is essential for preventing unacceptable environmental degradation from ongoing operations, and is also a critical part of any environmental remediation activity. Logistics modeling allows for analysis of alternate scenarios for future waste flowrates and routings, facility schedules, and processing or handling capacities. Such analyses provide an increased understanding of the critical needs for waste storage, treatment, transport, and disposal while there is still adequate lead time to plan accordingly. They also provide a basis for determining the sensitivity of these critical needs to the various system parameters. This paper discusses the application of waste logistics modeling concepts to regional planning. In addition to ongoing efforts to aid in planning for a large industrial complex, the Pacific Northwest Laboratory (PNL) is currently involved in implementing waste logistics modeling as part of the planning process for material recovery and recycling within a multi-city region in the western US.

  11. Support vector machine applied in QSAR modelling

    Institute of Scientific and Technical Information of China (English)

    MEI Hu; ZHOU Yuan; LIANG Guizhao; LI Zhiliang

    2005-01-01

    Support vector machines (SVM), partial least squares (PLS), and back-propagation artificial neural networks (ANN) were employed to establish QSAR models for two dipeptide datasets. In order to validate the predictive capabilities of the resulting models on external data, both internal and external validations were performed. The division of each dataset into training and test sets was carried out by D-optimal design. The results showed that the SVM behaved well in both calibration and prediction. For the dataset of 48 bitter-tasting dipeptides (BTD), the results obtained by support vector regression (SVR) were superior to those by PLS in both calibration and prediction. When compared with the BP artificial neural network, SVR showed less calibration power but more predictive capability. For the dataset of angiotensin-converting enzyme (ACE) inhibitors, the results obtained by SVM regression were equivalent to those by PLS and the BP artificial neural network. In both datasets, SVR using a linear kernel function behaved as well as SVR using a radial basis kernel function. The results indicate wide prospects for the application of SVM to QSAR modeling.

  12. The resilience approach to climate adaptation applied for flood risk

    NARCIS (Netherlands)

    Gersonius, B.

    2012-01-01

    This dissertation presents a potential way forward for adaptation to climate change, termed the resilience approach. This approach takes a dynamic perspective on adaptive processes and the effects of these processes at/across different spatio-temporal scales. Experience is provided with four methods

  13. Uncertainty models applied to the substation planning

    Energy Technology Data Exchange (ETDEWEB)

    Fontoura Filho, Roberto N. [ELETROBRAS, Rio de Janeiro, RJ (Brazil); Aires, Joao Carlos O.; Tortelly, Debora L.S. [Light Servicos de Eletricidade S.A., Rio de Janeiro, RJ (Brazil)

    1994-12-31

    The selection of reinforcements for a power system expansion becomes a difficult task in an environment of uncertainties. These uncertainties can be classified according to their sources as endogenous and exogenous. The first is associated with the elements of the generation, transmission and distribution systems. The exogenous uncertainty is associated with external aspects, such as the financial resources, the time needed to build the installations, equipment prices and the load level. The load uncertainty is extremely sensitive to the behaviour of economic conditions. Although uncertainty cannot be removed completely, the endogenous component can be treated conveniently and the exogenous component can be compensated for. This paper describes an uncertainty treatment methodology and a practical application to a group of substations belonging to LIGHT, the Rio de Janeiro electric utility. The equipment performance uncertainty is treated by adopting a probabilistic approach. The uncertainty associated with load growth is considered through technical analysis of scenarios and choice criteria based on decision theory. In this paper the Savage method and the fuzzy set method were used in order to select the best medium-term reinforcement plan. (author) 7 refs., 4 figs., 6 tabs.

  14. Characterization of Chilean Higher Education Institutions (HEI): an approach towards a future model of university (Main reforms applied in the Latin-American higher education model that made an impact in the commercialization of the universities)

    OpenAIRE

    Carmona-López, Rafael Jaime; Toro-Jaramillo, Iván Darío; Riascos-Gonzales, José Antonio

    2014-01-01

    This article seeks to show the different models of the university, tracing the evolution and development of universities and their management in Latin America, particularly in Chile, and how they affect the development of societies, both past and contemporary, highlighting some characteristics these models share and others in which they differ. It should be noted that the Chilean model of education has been considered to have a great impact on some countries in Latin America, especially in ...

  15. Applying the luminosity function statistics in the fireshell model

    Science.gov (United States)

    Rangel Lemos, L. J.; Bianco, C. L.; Ruffini, R.

    2015-12-01

    The luminosity function (LF) statistics applied to the data of BATSE, GBM/Fermi and BAT/Swift is the theme of this work. The LF is a powerful statistical tool for extracting useful information from astrophysical samples, and the key point of this statistical analysis is the detector sensitivity, which we have analysed carefully. We applied the LF statistics to the three GRB classes predicted by the Fireshell model, producing predicted distributions of peak photon flux N(F^ph_pk), redshift N(z) and peak luminosity N(Lpk) for the three classes; we also used three GRB rates. We looked for differences among the distributions, and indeed found them. We performed a comparison between the predicted and observed distributions (with and without redshifts), for which we built a list of 217 GRBs with known redshifts. Our goal is to turn GRBs into standard candles; one alternative is to find a correlation between the isotropic luminosity and the Band peak spectral energy (Liso - Epk).

  16. Laser therapy applying the differential approaches and biophotometrical elements

    Science.gov (United States)

    Mamedova, F. M.; Akbarova, Ju. A.; Umarova, D. A.; Yudin, G. A.

    1995-04-01

    The aim of the present paper is the presentation of biophotometrical data obtained from various anatomic-topographical areas of the mouth, to be used for the development of differential approaches to laser therapy in dentistry. Biophotometrical measurements were carried out using a portable biophotometer, part of a multifunctional system for laser therapy, acupuncture and biophotometry referred to as 'Aura-laser'. The results of the biophotometrical measurements allow the implementation of differential approaches to laser therapy of periodontitis and oral mucosa conditions, taking their clinical form and severity into account.

  17. Applied Approaches of Rough Set Theory to Web Mining

    Institute of Scientific and Technical Information of China (English)

    SUN Tie-li; JIAO Wei-wei

    2006-01-01

    Rough set theory is a new soft-computing tool that has received much attention from researchers around the world. It can deal with incomplete and uncertain information, and it has now been applied successfully in many areas. This paper introduces the basic concepts of rough sets and discusses their applications in Web mining. In particular, some applications of rough set theory to intelligent information processing are emphasized.

  18. Dialogical Approach Applied in Group Counselling: Case Study

    Science.gov (United States)

    Koivuluhta, Merja; Puhakka, Helena

    2013-01-01

    This study utilizes structured group counselling and a dialogical approach to develop a group counselling intervention for students beginning a computer science education. The study assesses the outcomes of group counselling from the standpoint of the development of the students' self-observation. The research indicates that group counselling…

  19. Conflicts, development and natural resources : An applied game theoretic approach

    NARCIS (Netherlands)

    Wick, A.K.

    2008-01-01

    This thesis also provides a critical view on a part of preceding resource-curse results, namely the negative association between resources and economic performance. Arguing that the empirical literature on the topic has until now ignored serious econometric concerns, a different approach is offered

  20. A Multi-Agent System Approach Applied to Light Raycasting

    Directory of Open Access Journals (Sweden)

    H. Andrade

    2012-08-01

    Full Text Available Light and the shadows caused by its interaction with objects are important features in computer graphics, usually taken into account to achieve realistic images. To simulate them, some attempts based on classical direct-illumination approaches such as shadow mapping and shadow volumes have been carried out. However, in their beginnings the classical approaches could not support semi-transparent objects, soft shadows, light interactions inside objects, or the possibility of updating a scene based on previous information. In this paper a novel shadow-casting approach is proposed to solve the previously mentioned problem, using an interactive cooperative multi-agent system to provide a better understanding and easy customization of the rendered scenes; for instance, scenes are represented with object agents that propagate rectilinear photon information through them, causing several changes in photon properties such as wavelength and intensity, among others. This system uses a two-dimensional space represented by pixels. Our multi-agent system (MAS) uses a blackboard architecture for storing and sharing data, together with the implicit-invocation design pattern. The system was developed to calculate direct illumination in a two-dimensional space; in addition, it supports point-light agents, opaque agents, semi-opaque agents and empty agents. A comparison is presented between the classical approaches and the one proposed in this work, on scenes composed of opaque and semi-opaque objects. The proposed approach, as opposed to the classical ones, allows shadows to be cast by light that passes through semi-opaque objects. The light is cast by one or many light agents, producing hard and soft shadows.

  1. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Giebel, G.; Landberg, L. [Risoe National Lab., Roskilde (Denmark); Madsen, H.; Nielsen, H.A. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as better possibilities to schedule fossil fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: extended Kalman filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
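One of the adaptive estimators the abstract mentions, recursive least squares with a forgetting factor, can be sketched in a few lines. This is the generic textbook algorithm, not the authors' modified variant; the linear wind-speed correction and all numbers below are invented for illustration.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.98):
    """One recursive-least-squares step with forgetting factor lam."""
    x = x.reshape(-1, 1)
    Px = P @ x
    k = Px / (lam + (x.T @ Px).item())   # gain vector
    e = y - (x.T @ theta).item()         # one-step prediction error
    theta = theta + k * e
    P = (P - k @ Px.T) / lam             # covariance update
    return theta, P

# Track a MOS-style linear correction y = a*u + b between NWP-predicted
# and observed wind speed (synthetic data, invented coefficients).
rng = np.random.default_rng(0)
theta = np.zeros((2, 1))
P = np.eye(2) * 1000.0
for _ in range(500):
    u = rng.uniform(0, 20)                   # "predicted" wind speed
    y = 0.8 * u + 1.5 + rng.normal(0, 0.1)   # "observed" wind speed
    theta, P = rls_update(theta, P, np.array([u, 1.0]), y)

print(theta.ravel())  # close to [0.8, 1.5]
```

The forgetting factor discounts old observations geometrically, which is what lets the estimator follow the time-dependent NWP bias described above.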

  2. TCSC impedance regulator applied to the second benchmark model

    Energy Technology Data Exchange (ETDEWEB)

    Hamel, J.P.; Dessaint, L.A. [Ecole de Technologie Superieure, Montreal, PQ (Canada). Dept. of Electrical Engineering; Champagne, R. [Ecole de Technologie Superieure, Montreal, PQ (Canada). Dept. of Software and IT Engineering; Pare, D. [Institut de Recherche d' Hydro-Quebec, Varennes, PQ (Canada)

    2008-07-01

    Due to the combination of electrical demand growth and the high cost of building new power transmission lines, series compensation is increasingly used in power systems all around the world. Series compensation has been proposed as a new way to transfer more power on existing lines. By adding series compensation to an existing line (a relatively small change), the power transfer can be increased significantly. One of the means used for line compensation is the addition of capacitive elements in series with the line. This paper presented a thyristor-controlled series capacitor (TCSC) model that used impedance as reference, had individual controls for each phase, included a linearization module and considered only the fundamental frequency for impedance computations, without using any filter. The model's dynamic behavior was validated by applying it to the second benchmark model for subsynchronous resonance (SSR). Simulation results from the proposed model, obtained using EMTP-RV and SimPowerSystems were demonstrated. It was concluded that SSR was mitigated by the proposed approach. 19 refs., 19 figs.

  3. Relative Binding Free Energy Calculations Applied to Protein Homology Models.

    Science.gov (United States)

    Cappel, Daniel; Hall, Michelle Lynn; Lenselink, Eelke B; Beuming, Thijs; Qi, Jun; Bradner, James; Sherman, Woody

    2016-12-27

    A significant challenge and potential high-value application of computer-aided drug design is the accurate prediction of protein-ligand binding affinities. Free energy perturbation (FEP) using molecular dynamics (MD) sampling is among the most suitable approaches to achieve accurate binding free energy predictions, due to the rigorous statistical framework of the methodology, correct representation of the energetics, and thorough treatment of the important degrees of freedom in the system (including explicit waters). Recent advances in sampling methods and force fields, coupled with vast increases in computational resources, have made FEP a viable technology to drive hit-to-lead and lead optimization, allowing for more efficient cycles of medicinal chemistry and the possibility to explore much larger chemical spaces. However, previous FEP applications have focused on systems with high-resolution crystal structures of the target as starting points, something that is not always available in drug discovery projects. As such, the ability to apply FEP to homology models would greatly expand the domain of applicability of FEP in drug discovery. In this work we apply a particular implementation of FEP, called FEP+, to congeneric ligand series binding to four diverse targets: a kinase (Tyk2), an epigenetic bromodomain (BRD4), a transmembrane GPCR (A2A), and a protein-protein interaction interface (BCL-2 family protein MCL-1). We apply FEP+ using both crystal structures and homology models as starting points and find that the performance using homology models is generally on a par with the results when using crystal structures. The robustness of the calculations to structural variations in the input models can likely be attributed to the conformational sampling in the molecular dynamics simulations, which allows the modeled receptor to adapt to the "real" conformation for each ligand in the series. This work exemplifies the advantages of using all-atom simulation methods with
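The bookkeeping behind relative binding free energies of the kind FEP+ reports follows the standard alchemical thermodynamic cycle: transform ligand A into B once in the complex and once free in solvent, and take the difference. The helper and the energies below are purely illustrative, not part of FEP+.

```python
def relative_binding_ddG(dG_complex, dG_solvent):
    """ddG_bind = dG_bind(B) - dG_bind(A), obtained from the thermodynamic
    cycle as the difference of the two alchemical transformation energies
    (A -> B in the bound complex, and A -> B free in solvent)."""
    return dG_complex - dG_solvent

# Hypothetical alchemical free energies in kcal/mol (invented numbers)
ddG = relative_binding_ddG(dG_complex=-3.2, dG_solvent=-1.1)
print(round(ddG, 2))  # negative ddG means ligand B binds more tightly
```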

  4. Geometrical approach to fluid models

    NARCIS (Netherlands)

    Kuvshinov, B. N.; Schep, T. J.

    1997-01-01

    Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical

  6. Model Oriented Approach for Industrial Software Development

    Directory of Open Access Journals (Sweden)

    P. D. Drobintsev

    2015-01-01

    Full Text Available The article considers the specifics of a model oriented approach to software development based on the usage of Model Driven Architecture (MDA), Model Driven Software Development (MDSD) and Model Driven Development (MDD) technologies. Benefits of using this approach in the software development industry are described. The main emphasis is put on system design, automated code generation for large systems, verification, proof of system properties and reduction of bug density. Drawbacks of the approach are also considered. The approach proposed in the article is specific to industrial software systems development. These systems are characterized by different levels of abstraction used in the modeling and code development phases. The approach allows the model to be detailed down to the level of the system code while preserving the verified model semantics and providing checking of the whole detailed model. Steps for translating abstract data structures (including transactions, signals and their parameters) into the data structures used in the detailed system implementation are presented. The grammar of a language for specifying rules that transform abstract model data structures into the real system's detailed data structures is also described. The results of applying the proposed method in an industrial technology are shown. The article is published in the authors’ wording.

  7. Applying a User-Centered Approach to Interactive Visualisation Design

    Science.gov (United States)

    Wassink, Ingo; Kulyk, Olga; van Dijk, Betsy; van der Veer, Gerrit; van der Vet, Paul

    Analysing users in their context of work and finding out how and why they use different information resources is essential to provide interactive visualisation systems that match their goals and needs. Designers should actively involve the intended users throughout the whole process. This chapter presents a user-centered approach for the design of interactive visualisation systems. We describe three phases of the iterative visualisation design process: the early envisioning phase, the global specification phase, and the detailed specification phase. The whole design cycle is repeated until some criterion of success is reached. We discuss different techniques for the analysis of users, their tasks and domain. Subsequently, the design of prototypes and evaluation methods in visualisation practice are presented. Finally, we discuss the practical challenges in design and evaluation of collaborative visualisation environments. Our own case studies and those of others are used throughout the whole chapter to illustrate various approaches.

  8. Major accident prevention through applying safety knowledge management approach.

    Science.gov (United States)

    Kalatpour, Omid

    2016-01-01

    Many scattered resources of knowledge are available for chemical accident prevention purposes. The common approach to process safety management, including using databases and referring to the available knowledge, has some drawbacks. The main goal of this article was to devise a new, emergent knowledge base (KB) for the chemical accident prevention domain. The scattered sources of safety knowledge were identified and scanned. Then, the collected knowledge was formalized through a computerized program. The Protégé software was used to formalize and represent the stored safety knowledge. The domain knowledge was retrieved, as well as data and information. This optimized approach improved the safety and health knowledge management (KM) process and resolved some typical problems in the KM process. Upgrading the traditional resources of safety databases into KBs can improve the interaction between users and the knowledge repository.

  9. The Case Study Approach: Some Theoretical, Methodological and Applied Considerations

    Science.gov (United States)

    2013-06-01

    a large manufacturing organisation in Malaysia. An in-depth case study process (specifically a qualitative approach) was used to illustrate the...researcher closely examined four leaders from generally diverse organisations, who had embraced the learning-organisation concept in order to improve...The researchers focused on the context of learning in the workplace, and they investigated the nature of learning and development opportunities that

  10. Applying Meta-Analysis to Structural Equation Modeling

    Science.gov (United States)

    Hedges, Larry V.

    2016-01-01

    Structural equation models play an important role in the social sciences. Consequently, there is an increasing use of meta-analytic methods to combine evidence from studies that estimate the parameters of structural equation models. Two approaches are used to combine evidence from structural equation models: A direct approach that combines…

  11. Applied Ethnobotany: Participatory Approach for Community Development and Conservation

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Applied ethnobotany is a new subject in the ethnobiological sciences, referring to the transferring, reviving and cultivating of ethnobotanical knowledge among different social groups within intra- and inter-communities. Much research related to biodiversity in many countries is largely devoted to the gathering of more academic information, rather than to more incisive studies focused on finding answers to pressing challenges related to the use of plants by communities. China is a country possessing rich biodiversity and cultural diversity. The long history of Chinese traditional medicine, the diversity of cultivated crops and the utilization of wild plant species are great cultural traditions of the country. Today, many societies of the country are still intricately linked to the natural environment, economically as well as socially. However, China is facing major changes through the modernization of its economy and through globalization to form part of the world exchange system. Increasingly high levels of consumption of natural plants, as well as national and international trade in plant products, have resulted in over-harvesting of wild resources and accelerated environmental degradation. Local social structures and cultural traditions have also changed in order to cope with policy changes. Against this background, over the last decade applied ethnobotany for rural community development and conservation has been employed in different field projects and ethnic minority communities in Yunnan province of China. The applied ethnobotany work has focused on the community level to achieve sustainable use of natural resources and conservation. This presentation discusses findings and lessons learned from projects on alternatives and innovations to shifting cultivation in Xishuangbanna, southwestern China.

  12. Hamiltonian and Lagrangian Dynamical Matrix Approaches Applied to Magnetic Nanostructures

    Directory of Open Access Journals (Sweden)

    Roberto Zivieri

    2012-01-01

    Full Text Available Two micromagnetic tools to study the spin dynamics are reviewed. Both approaches are based upon the so-called dynamical matrix method, a hybrid micromagnetic framework used to investigate the spin-wave normal modes of confined magnetic systems. The approach which was formulated first is the Hamiltonian-based dynamical matrix method. This method, used to investigate dynamic magnetic properties of conservative systems, was originally developed for studying spin excitations in isolated magnetic nanoparticles and it has recently been generalized to study the dynamics of periodic magnetic nanoparticles. The other one, the Lagrangian-based dynamical matrix method, was formulated as an extension of the previous one in order to also include dissipative effects. Such dissipative phenomena are associated not only with intrinsic damping but also with extrinsic damping caused by the injection of a spin current in the form of spin-transfer torque. This method is very accurate in identifying spin modes that become unstable under the action of a spin current. The analytical development of the system of the linearized equations of motion leads to a complex generalized Hermitian eigenvalue problem in the Hamiltonian dynamical matrix method and to a non-Hermitian one in the Lagrangian approach. In both cases, such systems have to be solved numerically.
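The generalized Hermitian eigenvalue problem that the Hamiltonian formulation leads to, A v = w B v with A Hermitian and B positive definite, can be solved numerically as sketched below. The matrices are random stand-ins, not a real micromagnetic dynamical matrix.

```python
import numpy as np
from scipy.linalg import eigh

n = 6
rng = np.random.default_rng(1)

# Toy stand-ins: A plays the Hermitian dynamical matrix, B the positive
# definite metric; neither comes from an actual micromagnetic discretization.
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (M + M.conj().T) / 2          # Hermitian
N = rng.normal(size=(n, n))
B = N @ N.T + n * np.eye(n)       # symmetric positive definite

w, V = eigh(A, B)                 # solves the generalized problem A v = w B v
print(w)                          # eigenvalues come out real
```

For Hermitian A and positive definite B the eigenvalues are guaranteed real, which is the numerical counterpart of the normal-mode frequencies being physical in the conservative (Hamiltonian) case.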

  13. Applying a Common-Sense Approach to Fighting Obesity

    Directory of Open Access Journals (Sweden)

    Jessica Y. Breland

    2012-01-01

    Full Text Available The obesity epidemic is a threat to the health of millions and to the economic viability of healthcare systems, governments, businesses, and nations. A range of answers come to mind if and when we ask, “What can we, health professionals (physicians, nurses, nutritionists, behavioral psychologists), do about this epidemic?” In this paper, we describe the Common-Sense Model of Self-Regulation as a framework for organizing existent tools and creating new tools to improve control of the obesity epidemic. Further, we explain how the Common-Sense Model can augment existing behavior-change models, with particular attention to the strength of the Common-Sense Model in addressing assessment and weight maintenance beyond initial weight loss.

  14. Applying a Common-Sense Approach to Fighting Obesity

    Science.gov (United States)

    Breland, Jessica Y.; Fox, Ashley M.; Horowitz, Carol R.; Leventhal, Howard

    2012-01-01

    The obesity epidemic is a threat to the health of millions and to the economic viability of healthcare systems, governments, businesses, and nations. A range of answers come to mind if and when we ask, “What can we, health professionals (physicians, nurses, nutritionists, behavioral psychologists), do about this epidemic?” In this paper, we describe the Common-Sense Model of Self-Regulation as a framework for organizing existent tools and creating new tools to improve control of the obesity epidemic. Further, we explain how the Common-Sense Model can augment existing behavior-change models, with particular attention to the strength of the Common-Sense Model in addressing assessment and weight maintenance beyond initial weight loss. PMID:22811889

  15. Benefits Approach to Leisure Applied to Sustainable Tourism Development

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The Benefits Approach to Leisure (BAL), recognized as a major and badly needed "paradigm shift" in the way we conceive of and manage recreation resources, is believed to play a big role in achieving the sustainable development of tourism industry. BAL has been proved to be effective and efficient in the field of leisure and tourism in terms of its three types of benefits and two stages of production. This paper, using Aqua Zoo in Friesland, the Netherlands as a case, investigates the significance of BAL to ...

  16. Finite element models applied in active structural acoustic control

    NARCIS (Netherlands)

    Oude Nijhuis, Marco H.H.; Boer, de André; Rao, Vittal S.

    2002-01-01

    This paper discusses the modeling of systems for active structural acoustic control. The finite element method is applied to model structures including the dynamics of piezoelectric sensors and actuators. A model reduction technique is presented to make the finite element model suitable for controll

  17. Applying generalized non deducibility on compositions (GNDC) approach in dependability

    NARCIS (Netherlands)

    Gnesi, Stefania; Lenzini, Gabriele; Martinelli, Fabio

    2004-01-01

    This paper presents a framework where dependable systems can be uniformly modeled and dependable properties analyzed within the Generalized Non Deducibility on Compositions (GNDC), a scheme that has been profitably used in definition and analysis of security properties. Precisely, our framework requ

  18. Fault Tolerant Robust Control Applied for Induction Motor (LMI approach)

    Directory of Open Access Journals (Sweden)

    Hamouda KHECHINI

    2007-09-01

    Full Text Available This paper foregrounds fault tolerant robust control of uncertain dynamic linear systems in the state space representation. In fact, industrial systems are more and more complex, and the diagnosis process becomes indispensable to guarantee their surety of functioning and availability; that is why a fault tolerant control law is imperative to achieve the diagnosis. In this paper, we address the problem of state feedback H2/H∞ control mixed with regional pole placement for linear continuous uncertain systems. Sufficient conditions for feasibility are derived for a general class of convex regions of the complex plane. The conditions are presented as a collection of linear matrix inequalities (LMIs). The efficiency and performance of this approach are then tested on the robust control of a three-phase induction motor drive with fluctuation of its parameters during operation.

  19. Finite strain response of crimped fibers under uniaxial traction: An analytical approach applied to collagen

    Science.gov (United States)

    Marino, Michele; Wriggers, Peter

    2017-01-01

    Composite materials reinforced by crimped fibers appear in a number of advanced structural applications. Accordingly, constitutive equations describing their anisotropic behavior and explicitly accounting for fiber properties are needed for modeling and design purposes. To this aim, the finite strain response of crimped beams under uniaxial traction is herein addressed by obtaining analytical relationships based on the Principle of Virtual Work. The model is applied to collagen fibers in soft biological tissues, coupling geometric nonlinearities related to fiber crimp with material nonlinearities due to nanoscale mechanisms. Several numerical applications are presented, addressing the influence of geometric and material features. Available experimental data for tendons are reproduced, integrating the proposed approach within an optimization procedure for data fitting. The obtained results highlight the effectiveness of the proposed approach in correlating fiber structure with composite material mechanics.

  20. Applying open source innovation approaches in developing business innovation

    DEFF Research Database (Denmark)

    Aagaard, Annabeth; Lindgren, Peter

    2015-01-01

    More and more companies are pursuing continuous innovation through different types of open source innovation and across different partners. The growing interest in open innovation (OI) originates both from the academic community and from practitioners, motivating further investigation of effective OI and its different applications. One of the OI applications receiving growing attention is the use of innovation contests in developing business model innovations. However, little theoretical knowledge exists and limited empirical research has been performed on OI and how it is facilitated and managed effectively in developing business model innovation. The aim of this paper is therefore to close this research gap and to provide new knowledge within the research field of OI and OI applications. Thus, in the present study we explore the facilitation and management of open source innovation...

  1. Software engineering techniques applied to agricultural systems an object-oriented and UML approach

    CERN Document Server

    Papajorgji, Petraq J

    2014-01-01

    Software Engineering Techniques Applied to Agricultural Systems presents cutting-edge software engineering techniques for designing and implementing better agricultural software systems based on the object-oriented paradigm and the Unified Modeling Language (UML). The focus is on the presentation of  rigorous step-by-step approaches for modeling flexible agricultural and environmental systems, starting with a conceptual diagram representing elements of the system and their relationships. Furthermore, diagrams such as sequential and collaboration diagrams are used to explain the dynamic and static aspects of the software system.    This second edition includes: a new chapter on Object Constraint Language (OCL), a new section dedicated to the Model-VIEW-Controller (MVC) design pattern, new chapters presenting details of two MDA-based tools – the Virtual Enterprise and Olivia Nova, and a new chapter with exercises on conceptual modeling.  It may be highly useful to undergraduate and graduate students as t...

  2. Applying electrical utility least-cost approach to transportation planning

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, G.A.; Growdon, K.; Lagerberg, B.

    1994-09-01

    Members of the energy and environmental communities believe that parallels exist between electrical utility least-cost planning and transportation planning. In particular, the Washington State Energy Strategy Committee believes that an integrated and comprehensive transportation planning process should be developed to fairly evaluate the costs of both demand-side and supply-side transportation options, establish competition between different travel modes, and select the mix of options designed to meet system goals at the lowest cost to society. Comparisons between travel modes are also required under the Intermodal Surface Transportation Efficiency Act (ISTEA). ISTEA calls for the development of procedures to compare demand management against infrastructure investment solutions and requires the consideration of efficiency, socioeconomic and environmental factors in the evaluation process. Several of the techniques and approaches used in energy least-cost planning and utility peak demand management can be incorporated into a least-cost transportation planning methodology. The concepts of avoided plants, expressing avoidable costs in levelized nominal dollars to compare projects with different on-line dates and service lives, the supply curve, and the resource stack can be directly adapted from the energy sector.
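Expressing avoidable costs in levelized dollars, as described above, is what lets projects with different on-line dates and service lives compete on equal terms. The core arithmetic is a capital recovery factor; the sketch below uses invented numbers and is not drawn from the report.

```python
def levelized_annual_cost(capital, annual_om, life_years, discount_rate):
    """Spread a project's capital cost over its service life using a
    capital recovery factor, so supply-side and demand-side options with
    different lifetimes compare on equal dollars-per-year terms."""
    r, n = discount_rate, life_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)   # capital recovery factor
    return capital * crf + annual_om

# Hypothetical comparison: a 30-year supply-side project vs a 10-year
# demand-management program (all figures invented for illustration).
supply = levelized_annual_cost(capital=10_000_000, annual_om=250_000,
                               life_years=30, discount_rate=0.07)
demand = levelized_annual_cost(capital=2_000_000, annual_om=600_000,
                               life_years=10, discount_rate=0.07)
print(round(supply), round(demand))
```

Without levelization, comparing the two raw capital costs would favor the short-lived program even when its per-year cost is higher.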

  3. Introduction to semiconductor lasers for optical communications an applied approach

    CERN Document Server

    Klotzkin, David J

    2014-01-01

    This textbook provides a thorough and accessible treatment of semiconductor lasers from a design and engineering perspective. It includes both the physics of devices as well as the engineering, designing, and testing of practical lasers. The material is presented clearly with many examples provided. Readers of the book will come to understand the finer aspects of the theory, design, fabrication, and test of these devices and have an excellent background for further study of optoelectronics. This book also: provides a multi-faceted approach to explaining the theories behind semiconductor lasers, utilizing mathematical examples, illustrations, and written theoretical presentations; offers a balance of relevant optoelectronic topics, with specific attention given to distributed feedback lasers, growth techniques, and waveguide cavity design; provides a summary of every chapter, worked examples, and problems for readers to solve; emphasizes...

  4. Applying a cloud computing approach to storage architectures for spacecraft

    Science.gov (United States)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.
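One responsibility the abstract assigns to the storage service, wear-leveling of the physical memory devices, can be sketched as a least-worn-block allocator. This is a hypothetical illustration of the general technique, not the proposed SpaceWire-based design.

```python
import heapq

class WearLevelAllocator:
    """Direct each write to the block with the fewest erases so far,
    keeping wear even across the device (illustrative sketch only)."""

    def __init__(self, n_blocks):
        # min-heap of (erase_count, block_id)
        self.heap = [(0, b) for b in range(n_blocks)]
        heapq.heapify(self.heap)

    def write(self):
        wear, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (wear + 1, block))
        return block

alloc = WearLevelAllocator(4)
blocks = [alloc.write() for _ in range(8)]
counts = {b: blocks.count(b) for b in range(4)}
print(counts)  # writes spread evenly: every block used twice
```

Hiding this policy behind the storage abstraction is exactly what lets applications remain ignorant of the physical memory devices, as the paper proposes.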

  5. Computational challenges of structure-based approaches applied to HIV.

    Science.gov (United States)

    Forli, Stefano; Olson, Arthur J

    2015-01-01

    Here, we review some of the opportunities and challenges that we face in computational modeling of HIV therapeutic targets and structural biology, both in terms of methodology development and structure-based drug design (SBDD). Computational methods have provided fundamental support to HIV research since the initial structural studies, helping to unravel details of HIV biology. Computational models have proved to be a powerful tool to analyze and understand the impact of mutations and to overcome their structural and functional influence in drug resistance. With the availability of structural data, in silico experiments have been instrumental in exploiting and improving interactions between drugs and viral targets, such as HIV protease, reverse transcriptase, and integrase. Issues such as viral target dynamics and mutational variability, as well as the role of water and estimates of binding free energy in characterizing ligand interactions, are areas of active computational research. Ever-increasing computational resources and theoretical and algorithmic advances have played a significant role in progress to date, and we envision a continually expanding role for computational methods in our understanding of HIV biology and SBDD in the future.

  6. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni

    2016-01-01

    In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation could not be solved in some scenarios, even when a GPS signal is available, for instance, an application requiring performing precision manoeuvres in a complex environment. Therefore, some additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One of the contributions of this work is to design and develop a novel technique for estimating feature depth which is based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Due to the above assumption, the overall problem is simplified and it is focused on the position estimation of the aerial vehicle. Also, the tracking process of visual features is made easier due to the stabilized video. Another contribution of this work is to demonstrate that the integration of very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of this proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
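The idea of recovering feature depth by triangulation can be illustrated with the textbook two-view parallax relation; this is a generic sketch, not the authors' stochastic triangulation technique, and the scene geometry below is invented.

```python
import math

def depth_from_parallax(baseline, alpha1, alpha2):
    """Depth of a feature from two bearing angles measured before and
    after a sideways camera translation `baseline` (generic two-view
    triangulation, not the paper's exact parametrization)."""
    return baseline / (math.tan(alpha1) - math.tan(alpha2))

# Feature at depth Z = 8 m, lateral offset X = 2 m; camera slides b = 0.5 m
X, Z, b = 2.0, 8.0, 0.5
alpha1 = math.atan2(X, Z)        # bearing from the first camera pose
alpha2 = math.atan2(X - b, Z)    # bearing from the second pose
print(depth_from_parallax(b, alpha1, alpha2))  # recovers the 8 m depth
```

The smaller the parallax (short baseline or distant feature), the more the quotient amplifies bearing noise, which is why a stochastic treatment of the triangulation is attractive for a small UAV.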

  8. Applying the health action process approach (HAPA) to the choice of health products: An exploratory study

    DEFF Research Database (Denmark)

    Krutulyte, Rasa; Grunert, Klaus G.; Scholderer, Joachim

    This paper presents the results of a qualitative pilot study that aimed to uncover Danish consumers' motives for choosing health food. Schwarzer's (1992) health action process approach (HAPA) was applied to understand the process by which people choose health products. The research focused on the role of behavioural intention predictors such as risk perception, outcome expectations and self-efficacy. The model proved to be a useful framework for understanding consumers choosing health food and is substantial in the further application of dietary choice issues.

  9. Hierarchic stochastic modelling applied to intracellular Ca(2+) signals.

    Directory of Open Access Journals (Sweden)

    Gregor Moenke

    Full Text Available Important biological processes like cell signalling and gene expression have noisy components and are very complex at the same time. Mathematical analysis of such systems has often been limited to the study of isolated subsystems, or approximations are used that are difficult to justify. Here we extend a recently published method (Thurley and Falcke, PNAS 2011), which is formulated in observable system configurations instead of molecular transitions. This reduces the number of system states by several orders of magnitude and avoids fitting of kinetic parameters. The method is applied to Ca(2+) signalling. Ca(2+) is a ubiquitous second messenger transmitting information by stochastic sequences of concentration spikes, which arise by coupling of subcellular Ca(2+) release events (puffs). We derive analytical expressions for a mechanistic Ca(2+) model, based on recent data from live cell imaging, and calculate Ca(2+) spike statistics in dependence on cellular parameters like stimulus strength or number of Ca(2+) channels. The new approach substantiates a generic Ca(2+) model, which is a very convenient way to simulate Ca(2+) spike sequences with correct spiking statistics.

  10. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    Why Multiple Models? This book presents a variety of approaches which produce complex models or controllers by piecing together a number of simpler subsystems. This divide-and-conquer strategy is a long-standing and general way of coping with complexity in engineering systems, nature and human probl...

  11. Teaching students to apply multiple physical modeling methods

    NARCIS (Netherlands)

    Wiegers, T.; Verlinden, J.C.; Vergeest, J.S.M.

    2014-01-01

    Design students should be able to explore a variety of shapes before elaborating one particular shape. Current modelling courses don’t address this issue. We developed the course Rapid Modelling, which teaches students to explore multiple shape models in a short time, applying different methods and

  13. Nonlinear Eddy Viscosity Models applied to Wind Turbine Wakes

    DEFF Research Database (Denmark)

    Laan, van der, Paul Maarten; Sørensen, Niels N.; Réthoré, Pierre-Elouan

    2013-01-01

    The linear k−ε eddy viscosity model and modified versions of two existing nonlinear eddy viscosity models are applied to single wind turbine wake simulations using a Reynolds Averaged Navier-Stokes code. Results are compared with field wake measurements. The nonlinear models give better results...

  14. Semantic Approaches Applied to Scientific Ocean Drilling Data

    Science.gov (United States)

    Fils, D.; Jenkins, C. J.; Arko, R. A.

    2012-12-01

    The application of Linked Open Data methods to 40 years of data from scientific ocean drilling is providing users with several new methods for rich-content data search and discovery. Data from the Deep Sea Drilling Project (DSDP), Ocean Drilling Program (ODP) and Integrated Ocean Drilling Program (IODP) have been translated and placed in RDF triple stores to provide access via SPARQL, linked open data patterns, and by embedded structured data through schema.org / RDFa. Existing search services have been re-encoded in this environment which allows the new and established architectures to be contrasted. Vocabularies including computed semantic relations between concepts, allow separate but related data sets to be connected on their concepts and resources even when they are expressed somewhat differently. Scientific ocean drilling produces a wide range of data types and data sets: borehole logging file-based data, images, measurements, visual observations and the physical sample data. The steps involved in connecting these data to concepts using vocabularies will be presented, including the connection of data sets through Vocabulary of Interlinked Datasets (VoID) and open entity collections such as Freebase and dbPedia. Demonstrated examples will include: (i) using RDF Schema for inferencing and in federated searches across NGDC and IODP data, (ii) using structured data in the data.oceandrilling.org web site, (iii) association through semantic methods of age models and depth recorded data to facilitate age based searches for data recorded by depth only.

  15. An extended master-equation approach applied to aggregation in freeway traffic

    Institute of Scientific and Technical Information of China (English)

    Li Jun-Wei; Lin Bo-Liang; Huang Yong-Chang

    2008-01-01

    We restudy the master-equation approach applied to aggregation in a one-dimensional freeway, where the decay transition probabilities for the jump processes are reconstructed based on a car-following model. According to the reconstructed transition probabilities, the clustering behaviours and the stochastic properties of the master equation in a one-lane freeway traffic model are investigated in detail. The numerical results show that clusters starting below the critical size of the unstable cluster and those starting above it all enter the same stable state, which accords with nucleation theory and with results of earlier work. Moreover, we obtain more reasonable parameters of the master equation based on some results of cellular automata models.
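The one-step master-equation dynamics described above can be sketched numerically. In this minimal illustration the growth and decay rates `w_plus` and `w_minus` are constants chosen for the example, not the car-following-based transition probabilities reconstructed in the paper:

```python
# Euler integration of a one-step master equation for cluster size n,
#   dP_n/dt = w_plus*P_{n-1} + w_minus*P_{n+1} - (w_plus + w_minus)*P_n,
# with reflecting boundaries so that total probability is conserved.

def evolve(p, w_plus, w_minus, dt, steps):
    n = len(p)
    for _ in range(steps):
        flux = [0.0] * n
        for i in range(n):
            out = 0.0
            if i + 1 < n:                # growth n -> n+1
                flux[i + 1] += w_plus * p[i]
                out += w_plus
            if i - 1 >= 0:               # decay n -> n-1
                flux[i - 1] += w_minus * p[i]
                out += w_minus
            flux[i] -= out * p[i]
        p = [p[i] + dt * flux[i] for i in range(n)]
    return p

p0 = [0.0] * 50
p0[5] = 1.0                              # cluster starts at size 5
p = evolve(p0, w_plus=1.2, w_minus=1.0, dt=0.01, steps=2000)
print(round(sum(p), 6))                  # -> 1.0
```

Probability is conserved because every outflow term has a matching inflow term; with `w_plus > w_minus` the distribution drifts toward larger cluster sizes.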

  16. Dynamical real space renormalization group applied to sandpile models.

    Science.gov (United States)

    Ivashkevich, E V; Povolotsky, A M; Vespignani, A; Zapperi, S

    1999-08-01

    A general framework for the renormalization group analysis of self-organized critical sandpile models is formulated. The usual real space renormalization scheme for lattice models when applied to nonequilibrium dynamical models must be supplemented by feedback relations coming from the stationarity conditions. On the basis of these ideas the dynamically driven renormalization group is applied to describe the boundary and bulk critical behavior of sandpile models. A detailed description of the branching nature of sandpile avalanches is given in terms of the generating functions of the underlying branching process.
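As context, the avalanche dynamics these renormalization schemes coarse-grain can be sketched with a standard Bak–Tang–Wiesenfeld-type sandpile: cells topple at a threshold, sending one grain to each neighbour, with dissipation at the open boundaries. The grid size and drive below are arbitrary choices for the illustration:

```python
import random

def relax(grid, size=8, threshold=4):
    """Topple until stable; return the avalanche size (number of topplings)."""
    topplings = 0
    unstable = [(i, j) for i in range(size) for j in range(size)
                if grid[i][j] >= threshold]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < threshold:
            continue
        grid[i][j] -= threshold
        topplings += 1
        if grid[i][j] >= threshold:      # may still be unstable
            unstable.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < size and 0 <= nj < size:
                grid[ni][nj] += 1        # grains off-grid are dissipated
                if grid[ni][nj] >= threshold:
                    unstable.append((ni, nj))
    return topplings

random.seed(0)
size = 8
grid = [[0] * size for _ in range(size)]
sizes = []
for _ in range(5000):                    # slow driving: one grain at a time
    i, j = random.randrange(size), random.randrange(size)
    grid[i][j] += 1
    sizes.append(relax(grid, size))
print(max(sizes) > 0, all(v < 4 for row in grid for v in row))  # -> True True
```

Each avalanche is a branching process of topplings, which is exactly the structure the generating-function description in the paper analyses.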

  17. Comparison of two multiaxial fatigue models applied to dental implants

    Directory of Open Access Journals (Sweden)

    JM. Ayllon

    2015-07-01

    Full Text Available This paper presents two multiaxial fatigue life prediction models applied to a commercial dental implant. One model is called Variable Initiation Length Model and takes into account both the crack initiation and propagation phases. The second model combines the Theory of Critical Distance with a critical plane damage model to characterise the initiation and initial propagation of micro/meso cracks in the material. This paper discusses which material properties are necessary for the implementation of these models and how to obtain them in the laboratory from simple test specimens. It also describes the FE models developed for the stress/strain and stress intensity factor characterisation in the implant. The results of applying both life prediction models are compared with experimental results arising from the application of ISO-14801 standard to a commercial dental implant.

  18. Applying the INN model to the MaxClique problem

    Energy Technology Data Exchange (ETDEWEB)

    Grossman, T.

    1993-09-01

    Max-Clique is the problem of finding the largest clique in a given graph. It is not only NP-hard, but, as recent results suggest, even hard to approximate. Nevertheless it is still very important to develop and test practical algorithms that will find approximate solutions for the maximum clique problem on various graphs stemming from numerous applications. Indeed, many different types of algorithmic approaches are applied to that problem. Several neural networks and related algorithms were applied recently to combinatorial optimization problems in general and to the Max-Clique problem in particular. These neural nets are dynamical systems which minimize a cost (or computational "energy") function that represents the optimization problem, the Max-Clique in our case. Therefore they all belong to the class of integer programming algorithms surveyed in the Pardalos and Xue review. The work presented here is a development and improvement of a neural network algorithm that was introduced recently. In previous work, we considered two Hopfield-type neural networks, the INN and the HcN, and their application to the max-clique problem. In this paper, I concentrate on the INN network and present an improved version of the t-A algorithm that was introduced in. The rest of this paper is organized as follows: in section 2, I describe the INN model and how it implements a given graph. In section 3, it is characterized in terms of graph theory. In particular, the stable states of the network are mapped to the maximal cliques of its underlying graph. In section 4, I present the t-Annealing algorithm and an improved version of it, the Adaptive t-Annealing. Several experiments done with these algorithms on benchmark graphs are reported in section 5, and the efficiency of the new algorithm is demonstrated. I conclude with a short discussion.

  19. Ontological Relations and the Capability Maturity Model Applied in Academia

    Science.gov (United States)

    de Oliveira, Jerônimo Moreira; Campoy, Laura Gómez; Vilarino, Lilian

    2015-01-01

    This work presents a new approach to the discovery, identification and connection of ontological elements within the domain of characterization in learning organizations. In particular, the study can be applied to contexts where organizations require planning, logic, balance, and cognition in knowledge creation scenarios, which is the case for the…

  20. New aspects of developing a dry powder inhalation formulation applying the quality-by-design approach.

    Science.gov (United States)

    Pallagi, Edina; Karimi, Keyhaneh; Ambrus, Rita; Szabó-Révész, Piroska; Csóka, Ildikó

    2016-09-10

    The current work outlines the application of an up-to-date and regulatory-based pharmaceutical quality management method, applied as a new development concept in the process of formulating dry powder inhalation systems (DPIs). According to the Quality by Design (QbD) methodology and Risk Assessment (RA) thinking, a mannitol based co-spray dried formula was produced as a model dosage form with meloxicam as the model active agent. The concept and the elements of the QbD approach (regarding its systemic, scientific, risk-based, holistic, and proactive nature with defined steps for pharmaceutical development), as well as the experimental drug formulation (including the technological parameters assessed and the methods and processes applied) are described in the current paper. Findings of the QbD based theoretical prediction and the results of the experimental development are compared and presented. Characteristics of the developed end-product were in correlation with the predictions, and all data were confirmed by the relevant results of the in vitro investigations. These results support the importance of using the QbD approach in new drug formulation, and prove its good usability in the early development process of DPIs. This innovative formulation technology and product appear to have a great potential in pulmonary drug delivery.

  1. Applying meta-analysis to structural equation modeling.

    Science.gov (United States)

    Hedges, Larry V

    2016-06-01

    Structural equation models play an important role in the social sciences. Consequently, there is an increasing use of meta-analytic methods to combine evidence from studies that estimate the parameters of structural equation models. Two approaches are used to combine evidence from structural equation models: a direct approach that combines structural coefficients, and an indirect approach that first combines correlation matrices and estimates structural coefficients from the combined correlation matrix. When there is no heterogeneity across studies, directly combining estimates of structural coefficients from several studies is an appealing approach. Heterogeneity of correlation matrices across studies presents both practical and conceptual problems. An alternative approach to heterogeneity is suggested as an example of how to better handle heterogeneity in this context. Copyright © 2016 John Wiley & Sons, Ltd.
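The indirect approach mentioned above is often implemented by pooling each correlation across studies before estimating structural coefficients; a minimal fixed-effect sketch using the standard Fisher-z transform (the study correlations and sample sizes are invented for the example):

```python
import math

def pooled_r(rs, ns):
    """Fixed-effect pooled correlation via Fisher z, weights n_i - 3."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]  # Fisher transform
    ws = [n - 3 for n in ns]
    z = sum(w * z_ for w, z_ in zip(ws, zs)) / sum(ws)
    return math.tanh(z)                                   # back-transform

# Three studies reporting the correlation r(x, y):
r_xy = pooled_r([0.30, 0.42, 0.35], [80, 120, 100])
print(0.30 < r_xy < 0.42)                                 # -> True
```

With all correlations of a matrix pooled this way, structural coefficients can then be estimated from the combined matrix; as the abstract notes, this simple fixed-effect pooling is only defensible when the matrices are homogeneous across studies.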

  2. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating...... of introduction of existing knowledge, as well as the ease of model interpretation. This book attempts to outline much of the common ground between the various approaches, encouraging the transfer of ideas. Recent progress in algorithms and analysis is presented, with constructive algorithms for automated model...

  3. Applying the ARCS Motivation Model in Technological and Vocational Education

    Science.gov (United States)

    Liao, Hung-Chang; Wang, Ya-huei

    2008-01-01

    This paper describes the incorporation of Keller's ARCS (Attention, Relevance, Confidence, and Satisfaction) motivation model into traditional classroom instruction-learning process. Viewing that technological and vocational students have low confidence and motivation in learning, the authors applied the ARCS motivation model not only in the…

  4. The HPT Model Applied to a Kayak Company's Registration Process

    Science.gov (United States)

    Martin, Florence; Hall, Herman A., IV; Blakely, Amanda; Gayford, Matthew C.; Gunter, Erin

    2009-01-01

    This case study describes the step-by-step application of the traditional human performance technology (HPT) model at a premier kayak company located on the coast of North Carolina. The HPT model was applied to address lost revenues related to three specific business issues: misinformed customers, dissatisfied customers, and guides not showing up…

  5. An applied general equilibrium model for Dutch agribusiness policy analysis.

    NARCIS (Netherlands)

    Peerlings, J.H.M.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of interest.The model is fairly

  6. Model Construct Based Enterprise Model Architecture and Its Modeling Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In order to support enterprise integration, a kind of model construct based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of a reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. Following the MC-based modeling approach, a case study with the background of one-kind-product machinery manufacturing enterprises is illustrated. It is shown that the proposed model construct based enterprise model architecture and modeling approach are practical and efficient.

  7. Distribution function approach to redshift space distortions. Part IV: perturbation theory applied to dark matter

    CERN Document Server

    Vlah, Zvonimir; McDonald, Patrick; Okumura, Teppei; Baldauf, Tobias

    2012-01-01

    We develop a perturbative approach to redshift space distortions (RSD) using the phase space distribution function approach and apply it to the dark matter redshift space power spectrum and its moments. RSD can be written as a sum over density weighted velocity moments correlators, with the lowest order being density, momentum density and stress energy density. We use standard and extended perturbation theory (PT) to determine their auto and cross correlators, comparing them to N-body simulations. We show which of the terms can be modeled well with the standard PT and which need additional terms that include higher order corrections which cannot be modeled in PT. Most of these additional terms are related to the small scale velocity dispersion effects, the so called finger of god (FoG) effects, which affect some, but not all, of the terms in this expansion, and which can be approximately modeled using a simple physically motivated ansatz such as the halo model. We point out that there are several velocity dis...

  8. SPH method applied to high speed cutting modelling

    OpenAIRE

    LIMIDO, Jérôme; Espinosa, Christine; Salaün, Michel; Lacome, Jean-Luc

    2007-01-01

    The purpose of this study is to introduce a new approach to high speed cutting numerical modelling. A Lagrangian smoothed particle hydrodynamics (SPH)-based model is carried out using the Ls-Dyna software. SPH is a meshless method, thus large material distortions that occur in the cutting problem are easily managed, and SPH contact control permits a "natural" workpiece/chip separation. The developed approach is compared to machining dedicated code results and experimental data. The SPH cutting...

  9. Applying Model Checking to Generate Model-Based Integration Tests from Choreography Models

    Science.gov (United States)

    Wieczorek, Sebastian; Kozyura, Vitaly; Roth, Andreas; Leuschel, Michael; Bendisposto, Jens; Plagge, Daniel; Schieferdecker, Ina

    Choreography models describe the communication protocols between services. Testing of service choreographies is an important task for the quality assurance of service-based systems as used e.g. in the context of service-oriented architectures (SOA). The formal modeling of service choreographies enables a model-based integration testing (MBIT) approach. We present MBIT methods for our service choreography modeling approach called Message Choreography Models (MCM). For the model-based testing of service choreographies, MCMs are translated into Event-B models and used as input for our test generator which uses the model checker ProB.

  10. The active analog approach applied to the pharmacophore identification of benzodiazepine receptor ligands

    Science.gov (United States)

    Tebib, Souhail; Bourguignon, Jean-Jacques; Wermuth, Camille-Georges

    1987-07-01

    Applied to seven potent benzodiazepine-receptor ligands belonging to chemically different classes, the active analog approach allowed the stepwise identification of the pharmacophoric pattern associated with the recognition by the benzodiazepine receptor. A unique pharmacophore model was derived which involves six critical zones: (a) a π-electron rich aromatic (PAR) zone; (b) two electron-rich zones δ1 and δ2 placed at 5.0 and 4.5 Å respectively from the reference centroid in the PAR zone; (c) a freely rotating aromatic ring (FRA) region; (d) an out-of-plane region (OPR), strongly associated with agonist properties; and (e) an additional hydrophobic region (AHR). The model accommodates all presently known ligands of the benzodiazepine receptor, identifies sensitivity to steric hindrance close to the δ1 zone, accounts for R and S differential affinities and distinguishes requirements for agonist versus non-agonist activity profiles.

  11. Defining Requirements and Applying Information Modeling for Protecting Enterprise Assets

    Science.gov (United States)

    Fortier, Stephen C.; Volk, Jennifer H.

    The advent of terrorist threats has heightened local, regional, and national governments' interest in emergency response and disaster preparedness. The threat of natural disasters also challenges emergency responders to act swiftly and in a coordinated fashion. When a disaster occurs, an ad hoc coalition of pre-planned groups usually forms to respond to the incident. History has shown that these “system of systems” do not interoperate very well. Communications between fire, police and rescue components either do not work or are inefficient. Government agencies, non-governmental organizations (NGOs), and private industry use a wide array of software platforms for managing data about emergency conditions, resources and response activities. Most of these are stand-alone systems with very limited capability for data sharing with other agencies or other levels of government. Information technology advances have facilitated the movement towards an integrated and coordinated approach to emergency management. Other communication mechanisms, such as video teleconferencing, digital television and radio broadcasting, are being utilized to combat the challenges of emergency information exchange. Recent disasters, such as Hurricane Katrina and the tsunami in Indonesia, have illuminated the weaknesses in emergency response. This paper will discuss the need for defining requirements for components of ad hoc coalitions which are formed to respond to disasters. A goal of our effort was to develop a proof of concept that applying information modeling to the business processes used to protect and mitigate potential loss of an enterprise was feasible. These activities would be modeled both pre- and post-incident.

  12. Continuum modeling an approach through practical examples

    CERN Document Server

    Muntean, Adrian

    2015-01-01

    This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws and (3) modeling boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced by nonstandard rheological topics like those typically arising in the food industry.

  13. LEARNING SEMANTICS-ENHANCED LANGUAGE MODELS APPLIED TO UNSUPERVISED WSD

    Energy Technology Data Exchange (ETDEWEB)

    VERSPOOR, KARIN [Los Alamos National Laboratory]; LIN, SHOU-DE [Los Alamos National Laboratory]

    2007-01-29

    An N-gram language model aims at capturing statistical syntactic word order information from corpora. Although the concept of language models has been applied extensively to handle a variety of NLP problems with reasonable success, the standard model does not incorporate semantic information, and consequently limits its applicability to semantic problems such as word sense disambiguation. We propose a framework that integrates semantic information into the language model schema, allowing a system to exploit both syntactic and semantic information to address NLP problems. Furthermore, acknowledging the limited availability of semantically annotated data, we discuss how the proposed model can be learned without annotated training examples. Finally, we report on a case study showing how the semantics-enhanced language model can be applied to unsupervised word sense disambiguation with promising results.
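For reference, the purely syntactic baseline that the proposed framework extends is the standard N-gram model; below is a minimal bigram version with add-one (Laplace) smoothing, on a toy corpus invented for the sketch:

```python
from collections import Counter

def train_bigram(sentences):
    """Return P(word | prev) for a Laplace-smoothed bigram model."""
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        tokens = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(tokens[:-1])         # contexts only
        bigrams.update(zip(tokens, tokens[1:]))
    vocab = {t for s in sentences for t in s.split()} | {"</s>"}
    def prob(prev, word):
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))
    return prob

prob = train_bigram(["the cat sat", "the dog sat", "a cat ran"])
print(prob("the", "cat") > prob("the", "ran"))   # -> True
```

The model captures word-order statistics only; nothing in the probabilities distinguishes word senses, which is the gap the semantics-enhanced model is designed to fill.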

  14. A parameter estimation and identifiability analysis methodology applied to a street canyon air pollution model

    DEFF Research Database (Denmark)

    Ottosen, Thor Bjørn; Ketzel, Matthias; Skov, Henrik

    2016-01-01

    Mathematical models are increasingly used in environmental science, thus increasing the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model – the Operational Street Pollution Model (OSPM®). To assess the predictive validity of the model, the data is split into an estimation and a prediction data set using two data splitting approaches, and data preparation techniques (clustering and outlier detection) are analysed. The sensitivity analysis, being part...

  15. Towards new approaches in phenological modelling

    Science.gov (United States)

    Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas

    2014-05-01

    Modelling of phenological stages has been based on temperature sums for many decades, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. Now, there is a growing body of opinion that asks for new methods in phenological modelling and more in-depth studies on dormancy release of woody plants. This requirement is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, a number of parameters in phenological models still need to be optimised on observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). Limiting, for a fundamental improvement of these models, is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution for this problem and allow both the validation of currently used phenological models and the development of mechanistic approaches. In order to develop this kind of model, changes of metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr.) and high-resolution (weekly probes between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot study on sweet cherries. Our suggested methodology is not limited to the flowering of fruit trees; it can also be applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.

  16. Manifold learning techniques and model reduction applied to dissipative PDEs

    CERN Document Server

    Sonday, Benjamin E; Gear, C William; Kevrekidis, Ioannis G

    2010-01-01

    We link nonlinear manifold learning techniques for data analysis/compression with model reduction techniques for evolution equations with time scale separation. In particular, we demonstrate a "nonlinear extension" of the POD-Galerkin approach to obtaining reduced dynamic models of dissipative evolution equations. The approach is illustrated through a reaction-diffusion PDE, and the performance of different simulators on the full and the reduced models is compared. We also discuss the relation of this nonlinear extension with the so-called "nonlinear Galerkin" methods developed in the context of Approximate Inertial Manifolds.

  17. Analysis of the institutional evaluation approach applied to the educational management of Costa Rica Christian School

    OpenAIRE

    Campos Campos, Ana Jenssie

    2011-01-01

    The following article corresponds to the synthesis of a research work about the analysis of the Institutional Evaluation Approach applied to the educational management of a private school in the San José Norte Educational Region. The objectives of the analysis lied in identifying theinstitutional evaluation approach from the characteristics of the three approaches proposed, as well as on determining the dimensions of the approach used, and a third objective was determining the staff perceptio...

  18. A novel optical calorimetry dosimetry approach applied to an HDR Brachytherapy source

    Science.gov (United States)

    Cavan, A.; Meyer, J.

    2013-06-01

    The technique of Digital Holographic Interferometry (DHI) is applied to the measurement of radiation absorbed dose distribution in water. An optical interferometer has been developed that captures the small variations in the refractive index of water due to the radiation-induced temperature increase ΔT. The absorbed dose D is then determined with high temporal and spatial resolution using the calorimetric relation D=cΔT (where c is the specific heat capacity of water). The method is capable of time-resolved 3D spatial calorimetry. As a proof-of-principle of the approach, a prototype DHI dosimeter was applied to the measurement of absorbed dose from a High Dose Rate (HDR) Brachytherapy source. Initial results are in agreement with modelled doses from the Brachyvision treatment planning system, demonstrating the viability of the system for high dose rate applications. Future work will focus on applying corrections for heat diffusion and geometric effects. The method has potential to contribute to the dosimetry of diverse high dose rate applications which require high spatial resolution, such as microbeam radiotherapy (MRT) or small-field proton beam dosimetry, but may potentially also be useful for interface dosimetry.
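The calorimetric relation D = cΔT can be made concrete with a small sketch (the numbers are illustrative, not measurements from the study); it also shows why interferometric sensitivity is needed: a 10 Gy dose heats water by only a few millikelvin.

```python
# Water calorimetry: absorbed dose D (Gy = J/kg) from temperature rise dT (K),
# using D = c * dT with c the specific heat capacity of water.

C_WATER = 4186.0                             # J / (kg K)

def dose_from_temperature_rise(delta_t_kelvin):
    """Absorbed dose in Gy from a radiation-induced temperature rise."""
    return C_WATER * delta_t_kelvin

# Temperature rise corresponding to a 10 Gy fraction:
delta_t = 10.0 / C_WATER
print(round(delta_t * 1e3, 2))               # -> 2.39 (mK)
print(round(dose_from_temperature_rise(delta_t), 6))  # -> 10.0
```

A real measurement would also need the refractive-index-to-temperature conversion and the heat-diffusion corrections mentioned in the abstract, which are omitted here.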

  19. Hydraulic Modeling of Lock Approaches

    Science.gov (United States)

    2016-08-01

    ...cation was that the guidewall design changed from a solid wall to one on pilings in which water was allowed to flow through and/or under the wall. ... magnitudes and directions at lock approaches for open river conditions. The meshes were developed using the Surface-water Modeling System. The two

  20. LP Approach to Statistical Modeling

    OpenAIRE

    Mukhopadhyay, Subhadeep; Parzen, Emanuel

    2014-01-01

    We present an approach to statistical data modeling and exploratory data analysis called 'LP Statistical Data Science.' It aims to generalize and unify traditional and novel statistical measures, methods, and exploratory tools. This article outlines fundamental concepts along with real-data examples to illustrate how the 'LP Statistical Algorithm' can systematically tackle different varieties of data types, data patterns, and data structures under a coherent theoretical framework. A fundament...

  1. Cellular Automata Models Applied to the Study of Landslide Dynamics

    Science.gov (United States)

    Liucci, Luisa; Melelli, Laura; Suteanu, Cristian

    2015-04-01

    Landslides are caused by complex processes controlled by the interaction of numerous factors. Increasing efforts are being made to understand the spatial and temporal evolution of this phenomenon, and the use of remote sensing data is making significant contributions to improving forecasts. This paper studies landslides seen as complex dynamic systems, in order to investigate their potential Self Organized Critical (SOC) behavior, and in particular, scale-invariant aspects of processes governing the spatial development of landslides and their temporal evolution, as well as the mechanisms involved in driving the system and keeping it in a critical state. For this purpose, we build Cellular Automata Models, which have been shown to be capable of reproducing the complexity of real world features using a small number of variables and simple rules, thus allowing for the reduction of the number of input parameters commonly used in the study of processes governing landslide evolution, such as those linked to the geomechanical properties of soils. This type of model has already been successfully applied in studying the dynamics of other natural hazards, such as earthquakes and forest fires. The basic structure of the model is composed of three modules: (i) An initialization module, which defines the topographic surface at time zero as a grid of square cells, each described by an altitude value; the surface is acquired from real Digital Elevation Models (DEMs). (ii) A transition function, which defines the rules used by the model to update the state of the system at each iteration. The rules use a stability criterion based on the slope angle and introduce a variable describing the weakening of the material over time, caused for example by rainfall. The weakening brings some sites of the system out of equilibrium thus causing the triggering of landslides, which propagate within the system through local interactions between neighboring cells. By using different rates of
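The transition rule described in (ii) can be sketched in a minimal one-dimensional form; the slope threshold, weakening rate and elevation profile below are invented for the illustration, not parameters from the study:

```python
# 1-D cellular-automaton sketch: a cell fails when its downslope gradient
# exceeds a threshold that decreases as the material weakens over time
# (e.g. through rainfall); failed material moves to the lower neighbour.

def step(elev, strength, critical_slope=1.0, weakening=0.01, cell=1.0):
    n = len(elev)
    for i in range(n):
        strength[i] = max(0.0, strength[i] - weakening)   # gradual weakening
    moved = 0
    for i in range(n - 1):                                # downslope -> right
        slope = (elev[i] - elev[i + 1]) / cell
        if slope > critical_slope * (1.0 + strength[i]):  # instability
            transfer = 0.5 * (elev[i] - elev[i + 1])
            elev[i] -= transfer                           # local redistribution
            elev[i + 1] += transfer
            moved += 1
    return moved

elev = [1.5 * (19 - i) for i in range(20)]  # uniform slope of 1.5
strength = [1.0] * 20
events = [step(elev, strength) for _ in range(300)]
print(sum(events) > 0)                      # -> True
```

The profile is initially stable; as `strength` decays the threshold drops below the actual slope, failures trigger, and each failure can destabilise its neighbour, producing the avalanche-like propagation described in the abstract.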

  2. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can thus be solved by introducing a priori estimations of the data. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West-End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs and some of their characteristics, such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
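The Bayesian merging idea, treating each DSM height as a noisy observation combined with a prior estimate (e.g. the smoothness-based roof heights), can be sketched with the standard conjugate normal-normal update. This is a generic precision-weighted fusion, not the paper's exact formulation; all heights and standard deviations below are made up.

```python
import numpy as np

def merge_dsms(h1, s1, h2, s2, h_prior, s_prior):
    """Merge two DSM height grids under a Gaussian model (illustrative sketch).

    Each height is a Gaussian observation with its own standard deviation; the
    prior supplies a third term. The posterior mean is the precision-weighted
    average, the standard conjugate normal-normal result.
    """
    w1, w2, wp = 1.0 / s1**2, 1.0 / s2**2, 1.0 / s_prior**2
    posterior_var = 1.0 / (w1 + w2 + wp)
    posterior_mean = posterior_var * (w1 * h1 + w2 * h2 + wp * h_prior)
    return posterior_mean, np.sqrt(posterior_var)

# two synthetic 2x2 height grids with different noise levels (metres)
h1 = np.array([[10.2, 10.1], [9.9, 10.0]])   # e.g. a WorldView-1 derived DSM
h2 = np.array([[10.6, 10.4], [10.3, 10.5]])  # e.g. a Pleiades derived DSM
merged, sigma = merge_dsms(h1, 0.5, h2, 1.0, h_prior=10.0, s_prior=2.0)
```

The posterior standard deviation is always smaller than that of the best single input, which is why the merged DSM can be better than either source on its own.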

  3. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    Full Text Available In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can thus be solved by introducing a priori estimations of the data. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West-End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs and some of their characteristics, such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.

  4. Mathematical models applied in inductive non-destructive testing

    Energy Technology Data Exchange (ETDEWEB)

    Wac-Wlodarczyk, A.; Goleman, R.; Czerwinski, D. [Technical University of Lublin, 20 618 Lublin, Nadbystrzycka St 38a (Poland); Gizewski, T. [Technical University of Lublin, 20 618 Lublin, Nadbystrzycka St 38a (Poland)], E-mail: t.gizewski@pollub.pl

    2008-10-15

    Non-destructive testing comprises a wide group of investigative methods for non-homogeneous materials. Computed tomography as well as ultrasonic, magnetic and inductive methods are still being developed and are widely applied in industry. In apparatus used for non-destructive tests, signals are analysed on the basis of the complex system's response, which is linearized by means of a model of the system under investigation. In this paper, the authors discuss applications of mathematical models in investigations of inductive magnetic materials. Statistical models, and others gathered in similarity classes, are taken into consideration. Investigation of mathematical models allows choosing the correct method, which in consequence leads to a precise representation of the inner structure of the examined object. Inductive testing of conductive media, especially those with ferromagnetic properties, is run with a high-frequency magnetic field (eddy-current method), which considerably decreases the penetration depth.
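The closing remark, that high-frequency eddy-current testing considerably decreases penetration depth, follows from the standard skin-depth formula delta = 1/sqrt(pi * f * mu * sigma). A quick numerical check; the material constants are round illustrative values, not data from the paper:

```python
import math

def skin_depth(freq_hz, mu_r, sigma):
    """Standard eddy-current penetration (skin) depth in metres.

    delta = 1 / sqrt(pi * f * mu_r * mu0 * sigma); depth shrinks with both test
    frequency and (ferromagnetic) relative permeability.
    """
    mu0 = 4e-7 * math.pi                      # vacuum permeability, H/m
    return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * mu0 * sigma)

# a steel-like ferromagnetic material: mu_r ~ 100, sigma ~ 5e6 S/m (illustrative)
d_low = skin_depth(1e3, 100, 5e6)    # 1 kHz
d_high = skin_depth(1e6, 100, 5e6)   # 1 MHz
```

Since depth scales as 1/sqrt(f), raising the frequency a thousandfold reduces the penetration depth by a factor of about 31.6.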

  5. Multiscale Model Approach for Magnetization Dynamics Simulations

    CERN Document Server

    De Lucia, Andrea; Tretiakov, Oleg A; Kläui, Mathias

    2016-01-01

    Simulations of magnetization dynamics in a multiscale environment enable rapid evaluation of the Landau-Lifshitz-Gilbert equation in a mesoscopic sample with nanoscopic accuracy in areas where such accuracy is required. We have developed a multiscale magnetization dynamics simulation approach that can be applied to large systems with spin structures that vary locally on small length scales. To implement this, the conventional micromagnetic simulation framework has been expanded to include a multiscale solving routine. The software selectively simulates different regions of a ferromagnetic sample according to the spin structures located within them, in order to employ a suitable discretization and use either a micromagnetic or an atomistic model. To demonstrate the validity of the multiscale approach, we simulate the spin wave transmission across the regions simulated with the two different models and different discretizations. We find that the interface between the regions is fully transparent for spin waves with f...

  6. Transtheoretical Model of Health Behavior Change Applied to Voice Therapy

    OpenAIRE

    2007-01-01

    Studies of patient adherence to health behavior programs, such as physical exercise, smoking cessation, and diet, have resulted in the formulation and validation of the Transtheoretical Model (TTM) of behavior change. Although widely accepted as a guide for the development of health behavior interventions, this model has not been applied to vocal rehabilitation. Because resolution of vocal difficulties frequently depends on a patient’s ability to make changes in vocal and health behaviors, th...

  7. Dynamical behavior of the Niedermayer algorithm applied to Potts models

    OpenAIRE

    Girardi, D.; Penna, T. J. P.; Branco, N. S.

    2012-01-01

    In this work we make a numerical study of the dynamic universality class of the Niedermayer algorithm applied to the two-dimensional Potts model with 2, 3, and 4 states. This algorithm updates clusters of spins and has a free parameter, $E_0$, which controls the size of these clusters, such that $E_0=1$ is the Metropolis algorithm and $E_0=0$ regains the Wolff algorithm, for the Potts model. For $-1

  8. An applied general equilibrium model for Dutch agribusiness policy analysis

    NARCIS (Netherlands)

    Peerlings, J.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of interest.

  9. An applied general equilibrium model for Dutch agribusiness policy analysis

    NARCIS (Netherlands)

    Peerlings, J.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of

  10. Knowledge Growth: Applied Models of General and Individual Knowledge Evolution

    Science.gov (United States)

    Silkina, Galina Iu.; Bakanova, Svetlana A.

    2016-01-01

    The article considers the mathematical models of the growth and accumulation of scientific and applied knowledge since it is seen as the main potential and key competence of modern companies. The problem is examined on two levels--the growth and evolution of objective knowledge and knowledge evolution of a particular individual. Both processes are…

  11. Remarks on orthotropic elastic models applied to wood

    Directory of Open Access Journals (Sweden)

    Nilson Tadeu Mascia

    2006-09-01

    Full Text Available Wood is generally considered an anisotropic material. In terms of engineering elastic models, wood is usually treated as an orthotropic material. This paper presents an analysis of two principal anisotropic elastic models that are usually applied to wood. The first one, the linear orthotropic model, where the material axes L (Longitudinal), R (Radial) and T (Tangential) are coincident with the Cartesian axes (x, y, z), is the more accepted wood elastic model. The other one, the cylindrical orthotropic model, is more adequate to the growth characteristics of wood but mathematically more complex to adopt in practical terms. Specifically due to its importance for wood elastic parameters, this paper deals with the influence of fiber orientation in these models through adequate transformation of coordinates. As a final result, some examples of the linear model are presented, showing the variation of the elastic moduli, i.e., Young's modulus and shear modulus, with fiber orientation.
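The fiber-orientation dependence discussed above can be illustrated with the standard off-axis transformation of Young's modulus for an orthotropic sheet. The elastic constants below are rough softwood-like values chosen for illustration, not data from the paper:

```python
import math

def young_off_axis(theta_deg, E_L, E_T, G_LT, nu_LT):
    """Off-axis Young's modulus of an orthotropic material (standard transformation).

    1/E(theta) = c^4/E_L + s^4/E_T + (1/G_LT - 2*nu_LT/E_L) * s^2 * c^2
    where theta is the angle between the load and the fiber (L) direction.
    """
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    inv_E = c**4 / E_L + s**4 / E_T + (1.0 / G_LT - 2.0 * nu_LT / E_L) * s**2 * c**2
    return 1.0 / inv_E

# illustrative softwood-like constants in MPa (nu_LT dimensionless)
E_L, E_T, G_LT, nu_LT = 12000.0, 600.0, 700.0, 0.4
E_0 = young_off_axis(0, E_L, E_T, G_LT, nu_LT)    # loading along the grain
E_90 = young_off_axis(90, E_L, E_T, G_LT, nu_LT)  # loading across the grain
```

The sharp drop of stiffness between 0 and 90 degrees is exactly why fiber orientation matters so much in wood elastic parameters.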

  12. The DPSIR approach applied to marine eutrophication in LCIA as a learning tool

    DEFF Research Database (Denmark)

    Cosme, Nuno Miguel Dias; Olsen, Stig Irving

    the State (S) of the ecosystem, causing the Impacts (I) on these, and contributing to the management strategies and Responses (R). The latter are designed to modify the drivers, minimise the pressures and restore the state of the receiving ecosystem. In our opinion the DPSIR provides a good conceptual...... understanding that is well suited for sustainability teaching and communication purposes. Life Cycle Impact Assessment (LCIA) indicators aim at modelling the P-S-I parts and provide a good background for understanding D and R. As an example, the DPSIR approach was applied to the LCIA indicator marine...... assessment and response design ultimately benefit from spatial differentiation in the results. DPSIR based on LCIA seems a useful tool to improve communication and learning, as it bridges science and management while promoting the basic elements of sustainable development in a practical educational...

  13. Applying Particle Tracking Model In The Coastal Modeling System

    Science.gov (United States)

    2011-01-01

    Figure 1. CMS domain, grid, and bathymetry. CMS-Flow is driven by...through the simulation. At the end of the simulation, about 65 percent of the released clay particles are considered "dead," which means that they are either permanently buried at the sea bed or have moved out of the model domain. Figure 11. Specifications of

  14. Applying clustering approach in predictive uncertainty estimation: a case study with the UNEEC method

    Science.gov (United States)

    Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga

    2014-05-01

    Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. There are several uncertainty analysis and/or prediction methods available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals occurring in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without such assumptions and is based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance in view of the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for 50% and 90% confidence levels) with those obtained from uniform interval and quantile regression methods. An important part of the basis by which the methods are compared is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality - weighted mean prediction interval (WMPI) - is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e. past error values in addition to discharge and effective rainfall are considered. The results show that UNEEC's notable methodological characteristic, i.e. applying clustering to predictor data in which catchment behaviour information is encapsulated, contributes to the increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
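The core idea of UNEEC, estimating distribution-free prediction intervals from clustered historical residuals rather than assuming normal, homoscedastic errors, can be sketched as follows. Simple quantile bins of the prediction stand in for the actual clustering step, so this is a simplification of the published method, not its implementation:

```python
import numpy as np

def clustered_intervals(pred, resid, n_clusters=3, level=0.90):
    """Distribution-free prediction intervals from clustered past errors.

    Past residuals are grouped by the predictor value (quantile bins stand in
    for the clustering step) and empirical quantiles per group give the local
    interval bounds, so heteroscedastic errors are handled without assumptions.
    """
    lo_q, hi_q = (1 - level) / 2, 1 - (1 - level) / 2
    edges = np.quantile(pred, np.linspace(0, 1, n_clusters + 1))
    labels = np.clip(np.searchsorted(edges, pred, side="right") - 1, 0, n_clusters - 1)
    bounds = {}
    for k in range(n_clusters):
        r = resid[labels == k]
        bounds[k] = (np.quantile(r, lo_q), np.quantile(r, hi_q))
    lo = pred + np.array([bounds[k][0] for k in labels])
    hi = pred + np.array([bounds[k][1] for k in labels])
    return lo, hi

rng = np.random.default_rng(1)
pred = rng.uniform(1, 100, 500)              # synthetic discharge predictions
resid = rng.normal(0, 0.05, 500) * pred      # heteroscedastic errors: grow with flow
lo, hi = clustered_intervals(pred, resid)
coverage = np.mean((pred + resid >= lo) & (pred + resid <= hi))
```

Because each cluster gets its own empirical quantiles, the intervals widen automatically for high-flow conditions where errors are larger.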

  15. Approaches to Modeling of Recrystallization

    Directory of Open Access Journals (Sweden)

    Håkan Hallberg

    2011-10-01

    Full Text Available Control of the material microstructure in terms of the grain size is a key component in tailoring material properties of metals and alloys and in creating functionally graded materials. To exert this control, reliable and efficient modeling and simulation of the recrystallization process whereby the grain size evolves is vital. The present contribution is a review paper, summarizing the current status of various approaches to modeling grain refinement due to recrystallization. The underlying mechanisms of recrystallization are briefly recollected and different simulation methods are discussed. Analytical and empirical models, continuum mechanical models and discrete methods as well as phase field, vertex and level set models of recrystallization will be considered. Such numerical methods have been reviewed previously, but with the present focus on recrystallization modeling and with a rapidly increasing amount of related publications, an updated review is called for. Advantages and disadvantages of the different methods are discussed in terms of applicability, underlying assumptions, physical relevance, implementation issues and computational efficiency.
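As a minimal instance of the analytical/empirical class of models reviewed above, the classical JMAK (Avrami) equation gives the recrystallized volume fraction X(t) = 1 - exp(-k * t^n); the rate constant and exponent below are arbitrary illustrative values, not fitted to any alloy:

```python
import math

def jmak_fraction(t, k, n):
    """JMAK (Avrami) recrystallized fraction X(t) = 1 - exp(-k * t**n).

    k lumps nucleation and growth rates; n is the Avrami exponent. Both values
    used below are purely illustrative.
    """
    return 1.0 - math.exp(-k * t**n)

X_early = jmak_fraction(1.0, k=0.01, n=2.0)   # early in the anneal
X_late = jmak_fraction(30.0, k=0.01, n=2.0)   # late in the anneal
```

The sigmoidal shape (slow start, rapid middle, saturation near complete recrystallization) is the behaviour the more elaborate continuum, phase-field and vertex models must reproduce in the mean.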

  16. A procedure for Applying a Maturity Model to Process Improvement

    Directory of Open Access Journals (Sweden)

    Elizabeth Pérez Mergarejo

    2014-09-01

    Full Text Available A maturity model is an evolutionary roadmap for implementing the vital practices from one or more domains of organizational process. The use of maturity models is poor in the Latin-American context. This paper presents a procedure for applying the Process and Enterprise Maturity Model developed by Michael Hammer [1]. The procedure is divided into three steps: Preparation, Evaluation and Improvement plan. Hammer's maturity model, together with the proposed procedure, can be used by organizations to improve their processes, involving managers and employees.

  17. Predictive control applied to an evaporator mathematical model

    Directory of Open Access Journals (Sweden)

    Daniel Alonso Giraldo Giraldo

    2010-07-01

    Full Text Available This paper outlines the design of a predictive control model (PCM) applied to a mathematical model of a falling film evaporator with mechanical steam compression, like those used in the dairy industry. The controller was designed using the Connoisseur software package and data gathered from the simulation of a non-linear mathematical model. A control law was obtained by minimising a cost function subject to dynamic system constraints, using a quadratic programming (QP) algorithm. A linear programming (LP) algorithm was used for finding a sub-optimal operating point for the process in the stationary state.
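The control law described, minimising a cost function over a prediction horizon subject to the system dynamics, can be sketched for a scalar linear plant. Without inequality constraints the QP reduces to a least-squares problem; the plant constants and weights below are illustrative, not the evaporator model:

```python
import numpy as np

def mpc_step(x0, a, b, ref, horizon=10, r=0.1):
    """One receding-horizon step for a scalar plant x[k+1] = a*x[k] + b*u[k].

    Minimises sum((x - ref)**2 + r*u**2) over the horizon. With no inequality
    constraints the QP collapses to this closed-form least-squares solution.
    """
    # prediction matrices: x = F*x0 + G @ u over the horizon
    F = np.array([a ** (i + 1) for i in range(horizon)])
    G = np.zeros((horizon, horizon))
    for i in range(horizon):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    H = G.T @ G + r * np.eye(horizon)          # the (unconstrained) QP Hessian
    u = np.linalg.solve(H, G.T @ (ref - F * x0))
    return u[0]                                # apply only the first move

# drive a stable first-order plant toward ref = 1.0 (illustrative constants)
x, a, b = 0.0, 0.9, 0.5
for _ in range(30):
    x = a * x + b * mpc_step(x, a, b, ref=1.0)
```

Re-solving the optimisation at every step and applying only the first control move is the receding-horizon principle underlying the paper's controller.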

  18. Opto-physiological modeling applied to photoplethysmographic cardiovascular assessment.

    Science.gov (United States)

    Hu, Sijung; Azorin-Peris, Vicente; Zheng, Jia

    2013-01-01

    This paper presents opto-physiological (OP) modeling and its application in cardiovascular assessment techniques based on photoplethysmography (PPG). Existing contact point measurement techniques, i.e., pulse oximetry probes, are compared with the next generation non-contact and imaging implementations, i.e., non-contact reflection and camera-based PPG. The further development of effective physiological monitoring techniques relies on novel approaches to OP modeling that can better inform the design and development of sensing hardware and applicable signal processing procedures. With the help of finite-element optical simulation, fundamental research into OP modeling of photoplethysmography is being exploited towards the development of engineering solutions for practical biomedical systems. This paper reviews a body of research comprising two OP models that have led to significant progress in the design of transmission mode pulse oximetry probes, and approaches to 3D blood perfusion mapping for the interpretation of cardiovascular performance.

  19. Opto-Physiological Modeling Applied to Photoplethysmographic Cardiovascular Assessment

    Directory of Open Access Journals (Sweden)

    Sijung Hu

    2013-01-01

    Full Text Available This paper presents opto-physiological (OP) modeling and its application in cardiovascular assessment techniques based on photoplethysmography (PPG). Existing contact point measurement techniques, i.e., pulse oximetry probes, are compared with the next generation non-contact and imaging implementations, i.e., non-contact reflection and camera-based PPG. The further development of effective physiological monitoring techniques relies on novel approaches to OP modeling that can better inform the design and development of sensing hardware and applicable signal processing procedures. With the help of finite-element optical simulation, fundamental research into OP modeling of photoplethysmography is being exploited towards the development of engineering solutions for practical biomedical systems. This paper reviews a body of research comprising two OP models that have led to significant progress in the design of transmission mode pulse oximetry probes, and approaches to 3D blood perfusion mapping for the interpretation of cardiovascular performance.

  20. A new multi criteria classification approach in a multi agent system applied to SEEG analysis.

    Science.gov (United States)

    Kinié, A; Ndiaye, M; Montois, J J; Jacquelet, Y

    2007-01-01

    This work is focused on the study of the organization of SEEG signals during epileptic seizures with a multi-agent system approach. This approach is based on cooperative mechanisms of self-organization at the micro level and the emergence of a global function at the macro level. In order to evaluate this approach we propose a distributed collaborative approach for the classification of the interesting signals. This new multi-criteria classification method is able to provide a relevant organisation of brain area structures and to bring out elements of epileptogenic networks. The method is compared to another classification approach, a fuzzy classification, and gives better results when applied to SEEG signals.

  1. Flipped Classroom Adapted to the ARCS Model of Motivation and Applied to a Physics Course

    Science.gov (United States)

    Asiksoy, Gülsüm; Özdamli, Fezile

    2016-01-01

    This study aims to determine the effect on the achievement, motivation and self-sufficiency of students of the flipped classroom approach adapted to Keller's ARCS (Attention, Relevance, Confidence and Satisfaction) motivation model and applied to a physics course. The study involved 66 students divided into two classes of a physics course. The…

  2. Size-specific sensitivity: Applying a new structured population model

    Energy Technology Data Exchange (ETDEWEB)

    Easterling, M.R.; Ellner, S.P.; Dixon, P.M.

    2000-03-01

    Matrix population models require the population to be divided into discrete stage classes. In many cases, especially when classes are defined by a continuous variable, such as length or mass, there are no natural breakpoints, and the division is artificial. The authors introduce the integral projection model, which eliminates the need for division into discrete classes, without requiring any additional biological assumptions. Like a traditional matrix model, the integral projection model provides estimates of the asymptotic growth rate, stable size distribution, reproductive values, and sensitivities of the growth rate to changes in vital rates. However, where the matrix model represents the size distributions, reproductive value, and sensitivities as step functions (constant within a stage class), the integral projection model yields smooth curves for each of these as a function of individual size. The authors describe a method for fitting the model to data, and they apply this method to data on an endangered plant species, northern monkshood (Aconitum noveboracense), with individuals classified by stem diameter. The matrix and integral models yield similar estimates of the asymptotic growth rate, but the reproductive values and sensitivities in the matrix model are sensitive to the choice of stage classes. The integral projection model avoids this problem and yields size-specific sensitivities that are not affected by stage duration. These general properties of the integral projection model will make it advantageous for other populations where there is no natural division of individuals into stage classes.
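The integral projection model can be sketched by discretising a kernel K(y, x) = s(x)g(y, x) + f(x)c(y) on a midpoint mesh and taking the dominant eigenvalue of the resulting matrix as the asymptotic growth rate. All vital-rate functions below are invented for illustration, not the monkshood fits from the paper:

```python
import numpy as np

def ipm_growth_rate(n_mesh=100, lower=0.0, upper=10.0):
    """Asymptotic growth rate of a toy integral projection model.

    The kernel K(y, x) = s(x)*g(y, x) + f(x)*c(y) combines survival/growth and
    fecundity; the midpoint rule turns the integral operator into a matrix whose
    dominant eigenvalue approximates the population growth rate.
    """
    h = (upper - lower) / n_mesh
    x = lower + h * (np.arange(n_mesh) + 0.5)      # midpoint mesh of sizes now
    y = x[:, None]                                 # sizes next year (rows)
    s = 1.0 / (1.0 + np.exp(-(x - 3.0)))           # survival increases with size
    g = np.exp(-0.5 * (y - (1.0 + 0.9 * x)) ** 2) / np.sqrt(2.0 * np.pi)  # growth
    f = 0.2 * x                                    # offspring per individual
    c = np.exp(-0.5 * ((y - 1.0) / 0.5) ** 2) / (0.5 * np.sqrt(2.0 * np.pi))  # recruits
    K = s * g + f * c                              # K[i, j]: from x[j] to y[i]
    return np.max(np.abs(np.linalg.eigvals(h * K)))

lam = ipm_growth_rate()
```

Because the mesh is only a numerical quadrature grid rather than a biological stage classification, refining it changes the accuracy but not the model, which is the paper's key contrast with matrix models.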

  3. Agrochemical fate models applied in agricultural areas from Colombia

    Science.gov (United States)

    Garcia-Santos, Glenda; Yang, Jing; Andreoli, Romano; Binder, Claudia

    2010-05-01

    The misuse of pesticides in mainly agricultural catchments can lead to severe problems for humans and the environment. Especially in developing countries, where overuse of agrochemicals and incipient or absent water-quality monitoring at local and regional levels are often found, models are needed for decision making and hot-spot identification. However, the complexity of the water cycle contrasts strongly with the scarce data availability, limiting the number of analyses, techniques, and models available to researchers. Therefore there is a strong need for model simplification able to find an appropriate model complexity that still represents the processes. We have developed a new model, Westpa-Pest, to improve water quality management of an agricultural catchment located in the highlands of Colombia. Westpa-Pest is based on the fully distributed hydrologic model Wetspa and a pesticide fate module. We applied a multi-criteria analysis for model selection under the conditions and data availability found in the region and compared the outcome with the newly developed Westpa-Pest model. Furthermore, both models were empirically calibrated and validated. The following questions were addressed: i) what are the strengths and weaknesses of the models?, ii) which are the most sensitive parameters of each model?, iii) what happens with uncertainties in soil parameters?, and iv) how sensitive are the transfer coefficients?

  4. Scientific Theories, Models and the Semantic Approach

    Directory of Open Access Journals (Sweden)

    Décio Krause

    2007-12-01

    Full Text Available According to the semantic view, a theory is characterized by a class of models. In this paper, we examine critically some of the assumptions that underlie this approach. First, we recall that models are models of something. Thus we cannot leave completely aside the axiomatization of the theories under consideration, nor can we ignore the metamathematics used to elaborate these models, for changes in the metamathematics often impose restrictions on the resulting models. Second, based on a parallel between van Fraassen’s modal interpretation of quantum mechanics and Skolem’s relativism regarding set-theoretic concepts, we introduce a distinction between relative and absolute concepts in the context of the models of a scientific theory. And we discuss the significance of that distinction. Finally, by focusing on contemporary particle physics, we raise the question: since there is no generally accepted unification of the parts of the standard model (namely, QED and QCD), we have no theory, in the usual sense of the term. This poses a difficulty: if there is no theory, how can we speak of its models? What are the latter models of? We conclude by noting that it is unclear that the semantic view can be applied to contemporary physical theories.

  5. Surface-bounded growth modeling applied to human mandibles

    DEFF Research Database (Denmark)

    Andresen, Per Rønsholt

    1999-01-01

    This thesis presents mathematical and computational techniques for three dimensional growth modeling applied to human mandibles. The longitudinal shape changes make the mandible a complex bone. The teeth erupt and the condylar processes change direction, from pointing predominantly backward...... to yield a spatially dense field. Different methods for constructing the sparse field are compared. Adaptive Gaussian smoothing is the preferred method since it is parameter free and yields good results in practice. A new method, geometry-constrained diffusion, is used to simplify...... The most successful growth model is linear and based on results from shape analysis and principal component analysis. The growth model is tested in a cross validation study with good results. The worst case mean modeling error in the cross validation study is 3.7 mm. It occurs when modeling the shape and size of a 12 years...

  6. Model Driven Mutation Applied to Adaptative Systems Testing

    CERN Document Server

    Bartel, Alexandre; Munoz, Freddy; Klein, Jacques; Mouelhi, Tejeddine; Traon, Yves Le

    2012-01-01

    Dynamically Adaptive Systems modify their behavior and structure in response to changes in their surrounding environment and according to an adaptation logic. Critical systems increasingly incorporate dynamic adaptation capabilities; examples include disaster relief and space exploration systems. In this paper, we focus on mutation testing of the adaptation logic. We propose a fault model for adaptation logics that classifies faults into environmental completeness and adaptation correctness. Since there are several adaptation logic languages relying on the same underlying concepts, the fault model is expressed independently from specific adaptation languages. Taking benefit from model-driven engineering technology, we express these common concepts in a metamodel and define the operational semantics of mutation operators at this level. Mutation is applied on model elements and model transformations are used to propagate these changes to a given adaptation policy in the chosen formalism. Preliminary resul...

  7. The systems approach for applying artificial intelligence to space station automation (Invited Paper)

    Science.gov (United States)

    Grose, Vernon L.

    1985-12-01

    The progress of technology is marked by fragmentation -- dividing research and development into ever narrower fields of specialization. Ultimately, specialists know everything about nothing. And hope for integrating those slender slivers of specialty into a whole fades. Without an integrated, all-encompassing perspective, technology becomes applied in a lopsided and often inefficient manner. A decisionary model, developed and applied for NASA's Chief Engineer toward establishment of commercial space operations, can be adapted to the identification, evaluation, and selection of optimum application of artificial intelligence for space station automation -- restoring wholeness to a situation that is otherwise chaotic due to increasing subdivision of effort. Issues such as functional assignments for space station task, domain, and symptom modules can be resolved in a manner understood by all parties rather than just the person with assigned responsibility -- and ranked by overall significance to mission accomplishment. Ranking is based on the three basic parameters of cost, performance, and schedule. This approach has successfully integrated many diverse specialties in situations like worldwide terrorism control, coal mining safety, medical malpractice risk, grain elevator explosion prevention, offshore drilling hazards, and criminal justice resource allocation -- all of which would have otherwise been subject to "squeaky wheel" emphasis and support of decision-makers.

  8. Differential Evolution algorithm applied to FSW model calibration

    Science.gov (United States)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
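A minimal DE/rand/1/bin optimiser shows the roles of the mutation scaling factor F and crossover rate CR studied in the abstract. The toy objective stands in for the CFD-model calibration misfit; the population size, generation count and parameter values are illustrative defaults, not the settings used for the weld:

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin minimiser.

    F scales the difference vector in mutation; CR controls how many genes the
    trial vector inherits from the mutant; selection is greedy one-to-one.
    """
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([obj(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = [k for k in range(pop_size) if k != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
            cross = rng.random(dim) < CR                   # binomial crossover
            cross[rng.integers(dim)] = True                # force one mutant gene
            trial = np.where(cross, mutant, pop[i])
            t_cost = obj(trial)
            if t_cost <= cost[i]:                          # greedy selection
                pop[i], cost[i] = trial, t_cost
    best = np.argmin(cost)
    return pop[best], cost[best]

# calibrate a 2-parameter toy "model" whose best fit is at (3, -2)
best, best_cost = differential_evolution(
    lambda p: (p[0] - 3.0) ** 2 + (p[1] + 2.0) ** 2, [(-10, 10), (-10, 10)])
```

For an expensive CFD misfit, each `obj` call would be a full simulation, which is why tuning F, CR and the strategy to cut the number of evaluations matters so much in the paper.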

  9. A Monte Carlo approach applied to ultrasonic non-destructive testing

    Science.gov (United States)

    Mosca, I.; Bilgili, F.; Meier, T.; Sigloch, K.

    2012-04-01

    Non-destructive testing based on ultrasound allows us to detect, characterize and size discrete flaws in geotechnical and architectural structures and materials. This information is needed to determine whether such flaws can be tolerated in future service. In typical ultrasonic experiments, only the first-arriving P-wave is interpreted, and the remainder of the recorded waveform is neglected. Our work aims at understanding surface waves, which are strong signals in the later wave train, with the ultimate goal of full waveform tomography. At present, even the structural estimation of layered media is still challenging because material properties of the samples can vary widely, and good initial models for inversion do not often exist. The aim of the present study is to combine non-destructive testing with a theoretical data analysis and hence to contribute to conservation strategies of archaeological and architectural structures. We analyze ultrasonic waveforms measured at the surface of a variety of samples, and define the behaviour of surface waves in structures of increasing complexity. The tremendous potential of ultrasonic surface waves becomes an advantage only if numerical forward modelling tools are available to describe the waveforms accurately. We compute synthetic full seismograms as well as group and phase velocities for the data. We invert them for the elastic properties of the sample via a global search of the parameter space, using the Neighbourhood Algorithm. Such a Monte Carlo approach allows us to perform a complete uncertainty and resolution analysis, but the computational cost is high and increases quickly with the number of model parameters. Therefore it is practical only for defining the seismic properties of media with a limited number of degrees of freedom, such as layered structures. We have applied this approach to both synthetic layered structures and real samples. The former contributed to benchmark the propagation of ultrasonic surface
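The global-search idea can be sketched with a plain uniform Monte Carlo search over the parameter space; the actual Neighbourhood Algorithm additionally resamples within the Voronoi cells of the best models, which is omitted here. The two-parameter toy misfit below stands in for the waveform/dispersion fit of a layered sample:

```python
import numpy as np

def monte_carlo_search(misfit, bounds, n_samples=5000, keep=0.01, seed=0):
    """Uniform Monte Carlo search over a bounded parameter space.

    Returns the best model and the per-parameter spread of the retained best
    fraction, a crude uncertainty/resolution estimate of the kind a full
    ensemble appraisal would refine.
    """
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    models = lo + rng.random((n_samples, len(bounds))) * (hi - lo)
    costs = np.array([misfit(m) for m in models])
    order = np.argsort(costs)
    retained = models[order[: max(1, int(keep * n_samples))]]
    return models[order[0]], retained.std(axis=0)

# toy two-parameter problem: recover (thickness, velocity) = (2.0, 1.5) from a
# quadratic misfit standing in for the surface-wave dispersion fit
best, spread = monte_carlo_search(
    lambda m: (m[0] - 2.0) ** 2 + (m[1] - 1.5) ** 2, [(0.5, 5.0), (0.5, 3.0)])
```

The cost of such sampling grows quickly with the number of parameters, which is the abstract's point about restricting the inversion to layered structures with few degrees of freedom.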

  10. Nonstandard Analysis Applied to Advanced Undergraduate Mathematics - Infinitesimal Modeling

    OpenAIRE

    Herrmann, Robert A.

    2003-01-01

    This is a Research and Instructional Development Project from the U. S. Naval Academy. In this monograph, the basic methods of nonstandard analysis for n-dimensional Euclidean spaces are presented. Specific rules are developed and these methods and rules are applied to rigorous integral and differential modeling. The topics include Robinson infinitesimals, limited and infinite numbers; convergence theory, continuity, *-transfer, internal definition, hyperfinite summation, Riemann-Stieltjes int...

  11. Mathematical modelling applied to LiDAR data

    Directory of Open Access Journals (Sweden)

    Javier Estornell

    2013-06-01

    Full Text Available The aim of this article is to explain the application of several mathematical calculations to LiDAR (Light Detection And Ranging) data to estimate vegetation parameters and model the relief of a forest area in the town of Chiva (Valencia). To represent the surface that describes the topography of the area, morphological filters were first applied iteratively to select LiDAR ground points. From these data, the Triangulated Irregular Network (TIN) structure was applied to model the relief of the area. From the LiDAR data the canopy height model (CHM) was also calculated. This model allowed obtaining bare soil, shrub and tree vegetation mapping in the study area. In addition, biomass was estimated from measurements taken in the field in 39 circular plots of radius 0.5 m and the 95th percentile of the LiDAR height data included in each plot. The results indicated a high relationship between the two variables (measured biomass and 95th percentile) with a coefficient of determination (R2) of 0.73. These results reveal the importance of using mathematical modelling to obtain information on the vegetation and land relief from LiDAR data.
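The percentile-plus-regression step described above can be sketched as follows. The plot data are invented for illustration; only the method (95th-percentile LiDAR height regressed against field-measured biomass) follows the abstract.

```python
# Sketch: relate a per-plot LiDAR height percentile to field biomass
# with simple linear regression. Sample data are hypothetical.

def percentile(values, q):
    """Linear-interpolation percentile (q in [0, 100])."""
    s = sorted(values)
    pos = (len(s) - 1) * q / 100.0
    lo = int(pos)
    frac = pos - lo
    return s[lo] if lo + 1 >= len(s) else s[lo] * (1 - frac) + s[lo + 1] * frac

def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical plots: (LiDAR return heights in the plot, measured biomass)
plots = [
    ([0.2, 1.1, 2.5, 3.0, 3.4], 12.0),
    ([0.1, 0.5, 1.2, 1.8, 2.0], 7.5),
    ([1.0, 2.2, 4.1, 5.0, 5.6], 19.0),
    ([0.3, 0.9, 1.5, 2.4, 2.7], 9.8),
]
p95 = [percentile(h, 95) for h, _ in plots]
biomass = [b for _, b in plots]
a, b, r2 = linear_fit(p95, biomass)
print(round(r2, 2))
```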

  12. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    Science.gov (United States)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all of the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability, because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
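As a sketch of the probabilistic idea (not the actual SPACE model or its inputs), a Monte Carlo loop can propagate input uncertainties through a toy power-capability function, yielding a distribution rather than a single-valued result:

```python
# Illustrative Monte Carlo uncertainty propagation through a toy
# solar-array power model; all parameter values are invented.
import random

random.seed(42)

def power_capability(array_area, efficiency, flux, line_loss):
    """Toy power model: generated power minus distribution losses (W)."""
    return array_area * efficiency * flux * (1.0 - line_loss)

samples = []
for _ in range(10_000):
    area = random.gauss(100.0, 2.0)     # m^2, uncertain
    eff = random.gauss(0.14, 0.005)     # cell efficiency, uncertain
    flux = random.gauss(1361.0, 10.0)   # W/m^2 solar flux
    loss = random.uniform(0.02, 0.06)   # line losses
    samples.append(power_capability(area, eff, flux, loss))

samples.sort()
mean = sum(samples) / len(samples)
p05 = samples[int(0.05 * len(samples))]  # 5th-percentile capability
print(round(mean), round(p05))
```

The 5th-percentile capability is the kind of statement a deterministic single-valued analysis cannot make.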

  13. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  15. Evaluation of Jumping and Creeping Regularization Approaches Applied to 3D Seismic Tomography

    Science.gov (United States)

    Liu, M.; Ramachandran, K.

    2011-12-01

    Regularization deals with the ill-posedness of the inverse problem. The under-determined part of the problem is controlled by providing a priori knowledge of the physical solution in the form of additional constraints that the solution must satisfy. The final model is constrained to fit the data and also to satisfy some additional property. In seismic tomography, this property is selected such that the final model is as smooth as possible. This concept is physically meaningful, as smooth models are sought that include only the structure required to fit the data according to its uncertainty. The motivation for seeking a smooth model is that features present in the model should be essential to match the observations. Such a class of models is referred to as minimum-structure models. The amount of structure in the estimated model parameters is measured in terms of roughness. In seismic tomography, second spatial derivatives are generally employed to quantify the model roughness. In this kind of regularized inversion, an objective function is minimized which includes norms that measure model roughness and data misfit. A tradeoff parameter is selected that provides the model with the least structure for a given level of data misfit. The regularized inverse problem that solves for the model perturbation and constrains the flatness or smoothness of the perturbation during the inversion is known as the creeping approach. The disadvantage of the creeping approach is that the final model will have no special properties and will be just a sum of smooth deviations added to the starting model. The regularized inverse problem that solves for the model perturbation but constrains the properties of the model itself during the inversion is known as the jumping approach. In the jumping approach, the final model can be constructed to have properties such as flatness or smoothness, since the regularization imposes smoothing constraints on the model and not on the perturbation.
The jumping and creeping approaches
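A minimal sketch of the two regularization styles, assuming a toy problem in which the forward operator is the identity, roughness is the squared second difference, and a crude finite-difference gradient descent stands in for a proper solver (the tomography operators in the abstract are of course far larger):

```python
# Toy comparison of jumping vs. creeping regularized inversion:
# both fit the data, but jumping smooths the model itself while
# creeping smooths only the perturbation added to the starting model.

def second_diff(m):
    """Discrete second derivative of the model vector."""
    return [m[i - 1] - 2 * m[i] + m[i + 1] for i in range(1, len(m) - 1)]

def roughness(m):
    return sum(r ** 2 for r in second_diff(m))

def objective(m, m0, d, lam, jumping):
    """Data misfit + lam * roughness of m (jumping) or of m - m0 (creeping)."""
    misfit = sum((mi - di) ** 2 for mi, di in zip(m, d))
    target = m if jumping else [mi - m0i for mi, m0i in zip(m, m0)]
    return misfit + lam * roughness(target)

def minimize(m0, d, lam, jumping, steps=20000, lr=0.003, h=1e-6):
    """Naive finite-difference gradient descent; fine at this toy size."""
    m = list(m0)
    for _ in range(steps):
        base = objective(m, m0, d, lam, jumping)
        grad = []
        for i in range(len(m)):
            m[i] += h
            grad.append((objective(m, m0, d, lam, jumping) - base) / h)
            m[i] -= h
        m = [mi - lr * g for mi, g in zip(m, grad)]
    return m

d = [1.0, 3.0, 2.0, 4.0, 3.0]       # "observations"
m0 = [0.0, 5.0, 0.0, 5.0, 0.0]      # deliberately rough starting model
jump = minimize(m0, d, lam=10.0, jumping=True)
creep = minimize(m0, d, lam=10.0, jumping=False)
print(roughness(jump) < roughness(creep))  # jumping gives the smoother model
```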

  16. Multivariate curve resolution-alternating least squares and kinetic modeling applied to near-infrared data from curing reactions of epoxy resins: mechanistic approach and estimation of kinetic rate constants.

    Science.gov (United States)

    Garrido, M; Larrechi, M S; Rius, F X

    2006-02-01

    This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
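The final fitting step — recovering a rate constant from a concentration profile — can be illustrated with a deliberately simplified first-order model; the paper's epoxy-amine mechanism is autocatalytic and more complex, and the numbers below are synthetic.

```python
# Fit a rate constant to a synthetic "recovered concentration profile"
# generated from first-order decay with true k = 0.12 (illustrative only).
import math

k_true = 0.12
times = [t * 2.0 for t in range(15)]
conc = [math.exp(-k_true * t) for t in times]

# Linearize ln C = -k t and fit the slope by least squares.
ln_c = [math.log(c) for c in conc]
n = len(times)
mt = sum(times) / n
ml = sum(ln_c) / n
k_fit = -sum((t - mt) * (l - ml) for t, l in zip(times, ln_c)) / \
        sum((t - mt) ** 2 for t in times)
print(round(k_fit, 3))  # recovers 0.12
```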

  17. A multi-label, semi-supervised classification approach applied to personality prediction in social media.

    Science.gov (United States)

    Lima, Ana Carolina E S; de Castro, Leandro Nunes

    2014-10-01

    Social media allow web users to create and share content pertaining to different subjects, exposing their activities, opinions, feelings and thoughts. In this context, online social media has attracted the interest of data scientists seeking to understand behaviours and trends, whilst collecting statistics for social sites. One potential application for these data is personality prediction, which aims to understand a user's behaviour within social media. Traditional personality prediction relies on users' profiles, their status updates, the messages they post, etc. Here, a personality prediction system for social media data is introduced that differs from most approaches in the literature, in that it works with groups of texts, instead of single texts, and does not take users' profiles into account. Also, the proposed approach extracts meta-attributes from texts and does not work directly with the content of the messages. The set of possible personality traits is taken from the Big Five model and allows the problem to be characterised as a multi-label classification task. The problem is then transformed into a set of five binary classification problems and solved by means of a semi-supervised learning approach, due to the difficulty in annotating the massive amounts of data generated in social media. In our implementation, the proposed system was trained with three well-known machine-learning algorithms, namely a Naïve Bayes classifier, a Support Vector Machine, and a Multilayer Perceptron neural network. The system was applied to predict the personality of Tweets taken from three datasets available in the literature, and resulted in an approximately 83% accurate prediction, with some of the personality traits presenting better individual classification rates than others.
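The problem transformation described above — one multi-label task split into five binary ones, commonly called binary relevance — can be sketched as follows; the texts, labels, and trait assignments are invented stand-ins, not the study's data or classifiers.

```python
# Binary relevance: a multi-label task (five Big Five traits per user)
# becomes five independent binary classification datasets.

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

# Hypothetical training-style data: (text, set of traits labelled present)
dataset = [
    ("love exploring new ideas and art", {"openness"}),
    ("always plan ahead and meet deadlines", {"conscientiousness"}),
    ("great party tonight with everyone", {"extraversion"}),
    ("happy to help a friend move house", {"agreeableness"}),
    ("worried and stressed about everything", {"neuroticism"}),
]

def to_binary_problems(data, traits):
    """One (text, 0/1) dataset per trait -- the binary relevance split."""
    return {t: [(text, int(t in labels)) for text, labels in data]
            for t in traits}

problems = to_binary_problems(dataset, TRAITS)
print(len(problems), sum(y for _, y in problems["openness"]))
```

Each of the five resulting datasets can then be handed to any binary learner (Naïve Bayes, SVM, MLP, etc.), as in the abstract.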

  18. Apply a hydrological model to estimate local temperature trends

    Science.gov (United States)

    Igarashi, Masao; Shinozawa, Tatsuya

    2014-03-01

    Continuous time series {f(x)}, such as the depth of water, are written f(x) = T(x)+P(x)+S(x)+C(x) in hydrological science, where T(x), P(x), S(x) and C(x) are called the trend, periodic, stochastic and catastrophic components respectively. We simplify this model and apply it to local temperature data such as those given by E. Halley (1693) and records from the UK (1853-2010), Germany (1880-2010) and Japan (1876-2010). We also apply the model to CO2 data. The model coefficients are evaluated by symbolic computation on a standard personal computer. The accuracy of the obtained nonlinear curve is evaluated by the arithmetic mean of the relative errors between the data and the estimations. E. Halley estimated the temperature of Gresham College from 11/1692 to 11/1693. The simplified model shows that the temperature at that time was rather cold compared with recent London temperatures. The UK and Germany data sets show that the maximum and minimum temperatures increased slowly from the 1890s to the 1940s, increased rapidly from the 1940s to the 1980s and have been decreasing since the 1980s, with the exception of a few local stations. The trend for Japan is similar to these results.
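The decomposition f(x) = T(x) + P(x) + S(x) + C(x) can be illustrated by fitting a linear trend T(x) plus one periodic term P(x) to synthetic monthly data by least squares, leaving S(x) and C(x) in the residual; the series below is invented, not one of the paper's records.

```python
# Fit intercept + trend + annual harmonic to a synthetic monthly series
# via the 3x3 normal equations, solved by Gauss-Jordan elimination.
import math

def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Synthetic monthly "temperatures": trend 0.002 deg/month + annual cycle.
xs = list(range(240))
f = [10.0 + 0.002 * x + 3.0 * math.sin(2 * math.pi * x / 12) for x in xs]

basis = [[1.0, x, math.sin(2 * math.pi * x / 12)] for x in xs]
A = [[sum(bi[r] * bi[c] for bi in basis) for c in range(3)] for r in range(3)]
rhs = [sum(bi[r] * fi for bi, fi in zip(basis, f)) for r in range(3)]
intercept, trend, amp = solve3(A, rhs)
print(round(trend, 3), round(amp, 1))  # recovers the trend and amplitude
```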

  19. Distribution function approach to redshift space distortions. Part V: perturbation theory applied to dark matter halos

    CERN Document Server

    Vlah, Zvonimir; Okumura, Teppei; Desjacques, Vincent

    2013-01-01

    Numerical simulations show that redshift space distortions (RSD) introduce strong scale dependence in the power spectra of halos, with ten percent deviations relative to linear theory predictions even on relatively large scales (k<0.1h/Mpc) and even in the absence of satellites (which induce Fingers-of-God, FoG, effects). If unmodeled, these effects prevent one from extracting cosmological information from RSD surveys. In this paper we use perturbation theory (PT) with a halo biasing model and apply them to the distribution function approach to RSD, in which RSD is decomposed into several correlators of density-weighted velocity moments. We model each of these correlators using PT and compare the results to simulations over a wide range of halo masses and redshifts. We find that with the introduction of physically motivated halo biasing, and using dark matter power spectra from simulations, we can reproduce the simulation results at a percent level on scales up to k~0.15h/Mpc at z=0, without the need to have fr...

  20. Cellular systems biology profiling applied to cellular models of disease.

    Science.gov (United States)

    Giuliano, Kenneth A; Premkumar, Daniel R; Strock, Christopher J; Johnston, Patricia; Taylor, Lansing

    2009-11-01

    Building cellular models of disease based on the approach of Cellular Systems Biology (CSB) has the potential to improve the process of creating drugs as part of the continuum from early drug discovery through drug development and clinical trials and diagnostics. This paper focuses on the application of CSB to early drug discovery. We discuss the integration of protein-protein interaction biosensors with other multiplexed, functional biomarkers as an example in using CSB to optimize the identification of quality lead series compounds.

  1. Applying OWA operator to model group behaviors in uncertain QFD

    OpenAIRE

    2013-01-01

    It is a crucial step to derive the priority order of design requirements (DRs) from customer requirements (CRs) in quality function deployment (QFD). However, it is not straightforward to prioritize DRs due to two types of uncertainties: human subjective perception and user variability. This paper proposes an OWA based group decision-making approach to uncertain QFD with an application to a flexible manufacturing system design. The proposed model performs computations solely based on the orde...

  2. Simple predictive electron transport models applied to sawtoothing plasmas

    Science.gov (United States)

    Kim, D.; Merle, A.; Sauter, O.; Goodman, T. P.

    2016-05-01

    In this work, we introduce two simple transport models to evaluate the time evolution of electron temperature and density profiles during sawtooth cycles (i.e. over a sawtooth period time-scale). Since the aim of these simulations is to estimate reliable profiles within a short calculation time, two simplified ad-hoc models have been developed. The goal for these models is to rely on a few easy-to-check free parameters, such as the confinement time scaling factor and the profiles’ averaged scale-lengths. Due to the simplicity and short calculation time of the models, it is expected that these models can also be applied to real-time transport simulations. We show that it works well for Ohmic and EC heated L- and H-mode plasmas. The differences between these models are discussed and we show that their predictive capabilities are similar. Thus only one model is used to reproduce with simulations the results of sawtooth control experiments on the TCV tokamak. For the sawtooth pacing, the calculated time delays between the EC power off and sawtooth crash time agree well with the experimental results. The map of possible locking range is also well reproduced by the simulation.

  3. [Applying multilevel models in evaluation of bioequivalence (I)].

    Science.gov (United States)

    Liu, Qiao-lan; Shen, Zhuo-zhi; Chen, Feng; Li, Xiao-song; Yang, Min

    2009-12-01

    This study aims to explore the application value of multilevel models for bioequivalence evaluation. Using a real example of a 2 x 4 cross-over experimental design for evaluating the bioequivalence of an antihypertensive drug, this paper explores the complex variance components corresponding to the criterion statistics of existing methods recommended by the FDA, here obtained through multilevel model analysis. Results are compared with those from the FDA's standard Method of Moments, specifically on the feasibility and applicability of multilevel models in directly assessing the average bioequivalence (ABE), the population bioequivalence (PBE) and the individual bioequivalence (IBE). When measuring ln(AUC), results for all variance components of the test and reference groups, such as total variance (sigma(TT)(2) and sigma(TR)(2)), between-subject variance (sigma(BT)(2) and sigma(BR)(2)) and within-subject variance (sigma(WT)(2) and sigma(WR)(2)), estimated by simple 2-level models are very close to those obtained using the FDA Method of Moments. In practice, bioequivalence evaluation can be carried out directly by multilevel models, or by FDA criteria based on variance components estimated from multilevel models. Both approaches produce consistent results. Multilevel models can be used to evaluate bioequivalence in cross-over test designs. Compared to the FDA methods, this approach is more flexible in decomposing the total variance into subcomponents in order to evaluate the ABE, PBE and IBE. Multilevel models thus provide a new way to practice bioequivalence evaluation.

  4. Applying a Dynamic Resource Supply Model in a Smart Grid

    Directory of Open Access Journals (Sweden)

    Kaiyu Wan

    2014-09-01

    Full Text Available Dynamic resource supply is a complex issue to resolve in a cyber-physical system (CPS). In our previous work, a resource model called the dynamic resource supply model (DRSM) was proposed to handle resource specification, management and allocation in CPS. In this paper, we integrate the DRSM with service-oriented architecture and apply it to a smart grid (SG), one of the most complex CPS examples. We give the detailed design of the SG for electricity charging requests and electricity allocation between plug-in hybrid electric vehicles (PHEV) and the DRSM through the Android system. In the design, we explain a mechanism for electricity consumption with data collection and re-allocation through a ZigBee network, and we verify the correctness of this resource model for the expected electricity allocation.

  5. Dynamic Decision Making for Graphical Models Applied to Oil Exploration

    CERN Document Server

    Martinelli, Gabriele; Hauge, Ragnar

    2012-01-01

    We present a framework for sequential decision making in problems described by graphical models. The setting is given by dependent discrete random variables with associated costs or revenues. In our examples, the dependent variables are the potential outcomes (oil, gas or dry) when drilling a petroleum well. The goal is to develop an optimal selection strategy that incorporates a chosen utility function within an approximated dynamic programming scheme. We propose and compare different approximations, from simple heuristics to more complex iterative schemes, and we discuss their computational properties. We apply our strategies to oil exploration over multiple prospects modeled by a directed acyclic graph, and to a reservoir drilling decision problem modeled by a Markov random field. The results show that the suggested strategies clearly improve the simpler intuitive constructions, and this is useful when selecting exploration policies.

  6. Curve Fitting And Interpolation Model Applied In Nonel Dosage Detection

    Directory of Open Access Journals (Sweden)

    Jiuling Li

    2013-06-01

    Full Text Available The curve fitting and interpolation models are applied to Nonel dosage detection in this paper, and the gray level of the continuous explosive in the Nonel tube is forecast. Traditional infrared equipment establishes the relationship between explosive dosage and light intensity, but its forecast accuracy is very low. Therefore, gray prediction models based on curve fitting and interpolation are framed separately, and the deviations of the different models are compared. Drawing on the features of the sample library, the higher-precision cubic polynomial fitting curve is used to predict gray values, and 5 mg-28 mg Nonel gray values are calculated with MATLAB. Through the predicted values, dosage detection operations are simplified and the defect missing rate of the Nonel is reduced. Finally, the quality of the Nonel is improved.
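The cubic-fit idea can be sketched as follows, with invented calibration points (the paper's actual gray values are not reproduced here): fit a least-squares cubic through (dosage, gray) pairs, then predict the gray value for an unseen dosage.

```python
# Least-squares cubic through hypothetical (dosage mg, gray value)
# calibration points, via the 4x4 normal equations.

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def polyfit3(xs, ys):
    """Least-squares cubic: solve the 4x4 normal equations."""
    basis = [[x ** k for k in range(4)] for x in xs]
    A = [[sum(b[r] * b[c] for b in basis) for c in range(4)] for r in range(4)]
    rhs = [sum(b[r] * y for b, y in zip(basis, ys)) for r in range(4)]
    return solve(A, rhs)

dosage = [5, 10, 15, 20, 25, 28]                # mg (range from the abstract)
gray = [40.0, 72.0, 98.0, 118.0, 133.0, 140.0]  # invented gray values
c = polyfit3(dosage, gray)
predict = lambda x: sum(ck * x ** k for k, ck in enumerate(c))
print(round(predict(12), 1))  # interpolated gray value at 12 mg
```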

  7. Remote sensing applied to numerical modelling. [water resources pollution

    Science.gov (United States)

    Sengupta, S.; Lee, S. S.; Veziroglu, T. N.; Bland, R.

    1975-01-01

    Progress and remaining difficulties in the construction of predictive mathematical models of large bodies of water as ecosystems are reviewed. Surface temperature is at present the only variable that can be measured accurately and reliably by remote sensing techniques, but satellite infrared data are of sufficient resolution for macro-scale modeling of oceans and large lakes, and airborne radiometers are useful in meso-scale analysis (of lakes, bays, and thermal plumes). Finite-element and finite-difference techniques applied to the solution of relevant coupled time-dependent nonlinear partial differential equations are compared, and the specific problem of the Biscayne Bay and environs ecosystem is tackled in a finite-differences treatment using the rigid-lid model and a rigid-line grid system.

  8. Three-Dimensional Gravity Model Applied to Underwater Navigation

    Institute of Scientific and Technical Information of China (English)

    YAN Lei; FENG Hao; DENG Zhongliang; GAO Zhengbing

    2004-01-01

    At present, new integrated navigation, which uses the location function of a reference gravity anomaly map to control the errors of the inertial navigation system (INS), has been developed in marine navigation. It is named the gravity-aided INS. Both the INS and the real-time computation of gravity anomalies need a 3-D marine normal gravity model. Conventionally, a reduction method applied in geophysical surveying is directly introduced into observed-data processing. This reduction does not separate the anomaly from the normal gravity in the observed data, so errors cannot be avoided. The 3-D marine normal gravity model was derived from the J2 gravity model, and is suitable for regions whose depth is less than 1000 m.

  9. Systematic care management: a comprehensive approach to catastrophic injury management applied to a catastrophic burn injury population--clinical, utilization, economic, and outcome data in support of the model.

    Science.gov (United States)

    Kucan, John; Bryant, Ernest; Dimick, Alan; Sundance, Paula; Cope, Nathan; Richards, Reginald; Anderson, Chris

    2010-01-01

    The new standard for successful burn care encompasses both patient survival and the burn patient's long-term quality of life. To provide optimal long-term recovery from catastrophic injuries, including catastrophic burns, an outcome-based model using a new technology called systematic care management (SCM) has been developed. SCM provides a highly organized system of management throughout the spectrum of care that provides access to outcome data, consistent oversight, broader access to expert providers, appropriate allocation of resources, and greater understanding of total costs. Data from a population of 209 workers' compensation catastrophic burn cases with a mean TBSA of 27.9% who were managed under the SCM model of care were analyzed. The data include treatment type, cost, return to work, and outcomes achieved. Mean duration of management to achieve all guaranteed outcomes was 20 months. Of the 209 injured workers, 152 (72.7%) achieved sufficient recovery to be released to return to work, of which 97 (46.8%) were both released and competitively employed. Assessment of 10 domains of functional independence indicated that 47.2% of injured workers required total assistance at initiation of SCM. However, at termination of SCM, 84% of those injured workers were fully independent in the 10 functional activities. When compared with other burn research outcome data, the results support the value of the SCM model of care.

  10. A variable age of onset segregation model for linkage analysis, with correction for ascertainment, applied to glioma

    DEFF Research Database (Denmark)

    Sun, Xiangqing; Vengoechea, Jaime; Elston, Robert

    2012-01-01

    We propose a 2-step model-based approach, with correction for ascertainment, to linkage analysis of a binary trait with variable age of onset and apply it to a set of multiplex pedigrees segregating for adult glioma.

  11. Active lubrication applied to radial gas journal bearings. Part 2: Modelling improvement and experimental validation

    DEFF Research Database (Denmark)

    Pierart, Fabián G.; Santos, Ilmar F.

    2016-01-01

    Actively-controlled lubrication techniques are applied to radial gas bearings aiming at overcoming one of their most critical drawbacks, their lack of damping. A model-based control design approach is presented using simple feedback control laws, i.e. proportional controllers. The design approach...... by finite element method and the global model is used as control design tool. Active lubrication allows for a significant increase in the damping factor of the rotor-bearing system. Very good agreement between theory and experiment is obtained, supporting the multi-physics design tool developed....

  12. Chemical, spectroscopic, and ab initio modelling approach to interfacial reactivity applied to anion retention by siderite; Approche couplee chimique, spectroscopique et de modelisation ab initio a la reactivite de surface: application a la retention des anions par la siderite

    Energy Technology Data Exchange (ETDEWEB)

    Badaut, V.

    2010-07-15

    Among the many radionuclides contained in high-level nuclear waste, {sup 79}Se was identified as a potential threat to the safety of long-term underground storage. However, siderite (FeCO{sub 3}) is known to form upon corrosion of the waste container, and the impact of this mineral on the fate of selenium was not accounted for. In this work, the interactions between selenium oxyanions - selenate and selenite - and siderite were investigated. To this end, both experimental characterizations (solution chemistry, X-ray Absorption Spectroscopy - XAS) and theoretical studies (ab initio modelling using Density Functional Theory - DFT) were performed. Selenite and selenate ({<=} 10{sup -3} M) retention experiments with siderite suspensions (75 g/L) at neutral pH in a reducing glovebox (5 % H{sub 2}) showed that selenite is quantitatively immobilized by siderite after 48 h of reaction time, whereas selenate is only partly immobilized after 10 days. In the selenite case, XAS showed that immobilized selenium is initially present as Se(IV), probably sorbed on the siderite surface. After 10 days of reaction, selenite ions are quantitatively reduced and form poorly crystalline elementary selenium. Selenite retention and reduction kinetics are therefore distinct. On the other hand, the fraction of immobilized selenate retained in the solid fraction does not appear to be significantly reduced over the probed timescale (10 days). For a better understanding of the reduction mechanism of selenite ions by siderite, the properties of the bulk and perfect surfaces of siderite were modelled using DFT. We suggest that the properties of the valence electrons can be correctly described only if the symmetry of the fundamental-state electronic density is lower than the experimental crystallographic symmetry. 
We then show that the retention of simple molecules such as O{sub 2} or H{sub 2}O on siderite and magnesite (10-14) perfect surfaces (the perfect cleavage plane, whose surface energy is the lowest according

  13. Applying Mechanistic Dam Breach Models to Historic Levee Breaches

    Directory of Open Access Journals (Sweden)

    Risher Paul

    2016-01-01

    Full Text Available Hurricane Katrina elevated levee risk in the US national consciousness, motivating agencies to assess and improve their levee risk assessment methodology. Accurate computation of the flood flow magnitude and timing associated with a levee breach remains one of the most difficult and uncertain components of levee risk analysis. Contemporary methods are largely empirical and approximate, introducing substantial uncertainty into the damage and life-loss models. Levee breach progressions are often extrapolated to the final width and breach formation time based on limited experience with past breaches or using regression equations developed from a limited database of dam failures. Physically based embankment erosion models could improve levee breach modeling. However, while several mechanistic embankment breach models are available, they were developed for dams. Several aspects of the levee breach problem are distinct, departing from dam breach assumptions. This study applies three embankment models developed for dam breach analysis (DL Breach, HR BREACH, and WinDAM C) to historic levee breaches with observed (or inferred) breach rates, assessing the limitations and applicability of each model to the levee breach problem.

  14. Model-free kinetics applied to sugarcane bagasse combustion

    Energy Technology Data Exchange (ETDEWEB)

    Ramajo-Escalera, B.; Espina, A.; Garcia, J.R. [Department of Organic and Inorganic Chemistry, University of Oviedo, 33006 Oviedo (Spain); Sosa-Arnao, J.H. [Mechanical Engineering Faculty, State University of Campinas (UNICAMP), P.O. Box 6122, 13083-970 Campinas, SP (Brazil); Nebra, S.A. [Interdisciplinary Center of Energy Planning, State University of Campinas (UNICAMP), R. Shigeo Mori 2013, 13083-770 Campinas, SP (Brazil)

    2006-09-15

    Vyazovkin's model-free kinetic algorithms were applied to determine the conversion, isoconversion and apparent activation energy for both the dehydration and the combustion of sugarcane bagasse. Three different steps were detected, with apparent activation energies of 76.1+/-1.7, 333.3+/-15.0 and 220.1+/-4.0 kJ/mol in the conversion ranges of 2-5%, 15-60% and 70-90%, respectively. The first step is associated with the endothermic process of drying and release of water. The others correspond to the combustion (and carbonization) of organic matter (mainly cellulose, hemicellulose and lignin) and the combustion of the products of pyrolysis. (author)
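The isoconversional ("model-free") principle behind Vyazovkin-type algorithms can be illustrated on synthetic data: at a fixed conversion, the slope of ln(rate) versus 1/T across runs at different temperatures yields the apparent activation energy without assuming a reaction model. The kinetic parameters below are invented.

```python
# Isoconversional sketch: recover Ea from ln(rate) vs 1/T at fixed
# conversion, using synthetic first-order Arrhenius data.
import math

R = 8.314          # J/(mol K)
Ea_true = 150e3    # J/mol, invented
A = 1e12           # 1/s, invented

def rate_at_conversion(alpha, T):
    """First-order model: da/dt = A exp(-Ea/RT) (1 - alpha)."""
    return A * math.exp(-Ea_true / (R * T)) * (1 - alpha)

alpha = 0.5
temps = [480.0, 500.0, 520.0]  # K, isothermal runs
x = [1.0 / T for T in temps]
y = [math.log(rate_at_conversion(alpha, T)) for T in temps]

# Slope of the Arrhenius plot gives -Ea/R, whatever f(alpha) is.
mx, my = sum(x) / 3, sum(y) / 3
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
Ea_est = -slope * R
print(round(Ea_est / 1000))  # kJ/mol; recovers 150
```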

  15. Schwinger boson approach to the fully screened Kondo model.

    Science.gov (United States)

    Rech, J; Coleman, P; Zarand, G; Parcollet, O

    2006-01-13

    We apply the Schwinger boson scheme to the fully screened Kondo model and generalize the method to include antiferromagnetic interactions between ions. Our approach captures the Kondo crossover from local moment behavior to a Fermi liquid with a nontrivial Wilson ratio. When applied to the two-impurity model, the mean-field theory describes the "Varma-Jones" quantum phase transition between a valence bond state and a heavy Fermi liquid.

  16. Mathematical modeling applied to the left ventricle of heart

    CERN Document Server

    Ranjbar, Saeed

    2014-01-01

    Background: How can mathematics help us to understand the mechanism of cardiac motion? The best known approach is to take a mathematical model of the fibered structure, insert it into a more-or-less complex model of cardiac architecture, and then study the resulting fibers of activation that propagate through the myocardium. In our paper, we have attempted to create novel software capable of demonstrating a left ventricular (LV) model in normal hearts. Method: Echocardiography was performed on 70 healthy volunteers. Data evaluated included: velocity (radial, longitudinal, rotational and vector point), displacement (longitudinal and rotational), strain rate (longitudinal and circumferential) and strain (radial, longitudinal and circumferential) of all 16 LV myocardial segments. Using these data, force vectors of myocardial samples were estimated by MATLAB software, interfaced with the echocardiography system. Dynamic orientation contraction (through the cardiac cycle) of every individual myocardial fiber could ...

  17. A forward modeling approach for interpreting impeller flow logs.

    Science.gov (United States)

    Parker, Alison H; West, L Jared; Odling, Noelle E; Bown, Richard T

    2010-01-01

    A rigorous and practical approach for the interpretation of impeller flow log data to determine vertical variations in hydraulic conductivity is presented and applied to two well logs from a Chalk aquifer in England. Impeller flow logging involves measuring vertical flow speed in a pumped well and using changes in flow with depth to infer the locations and magnitudes of inflows into the well. However, the measured flow logs are typically noisy, which leads to spurious hydraulic conductivity values when simplistic interpretation approaches are applied. In this study, a new method for interpretation is presented, which first defines a series of physical models for hydraulic conductivity variation with depth and then fits the models to the data using a regression technique. Some of the models will be rejected as physically unrealistic. The best model is then selected from the remaining models using a maximum likelihood approach. This balances model complexity against fit, for example, using Akaike's Information Criterion.
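
The selection step can be sketched as follows: candidate piecewise-constant conductivity profiles of increasing complexity are fitted by least squares and ranked with Akaike's Information Criterion, which trades goodness of fit against the number of parameters. The layer values, breakpoints and noise level are invented for illustration; the paper's physical models and regression are more involved.

```python
import math
import random

def aic(rss, n, k):
    """Akaike's Information Criterion for a least-squares fit with k parameters."""
    return n * math.log(rss / n) + 2 * k

def fit_piecewise(data, breaks):
    """Fit segment means between fixed breakpoints; return RSS and parameter count."""
    rss, k, start = 0.0, 0, 0
    for end in breaks + [len(data)]:
        seg = data[start:end]
        mean = sum(seg) / len(seg)
        rss += sum((v - mean) ** 2 for v in seg)
        k, start = k + 1, end
    return rss, k

random.seed(1)
# synthetic conductivity profile inferred from a flow log: two layers + noise
data = ([1.0 + random.gauss(0, 0.05) for _ in range(50)]
        + [3.0 + random.gauss(0, 0.05) for _ in range(50)])

candidates = {"uniform": [], "two-layer": [50], "three-layer": [33, 66]}
scores = {}
for name, breaks in candidates.items():
    rss, k = fit_piecewise(data, breaks)
    scores[name] = aic(rss, len(data), k)

best = min(scores, key=scores.get)
print(best)  # → two-layer (best fit for the fewest parameters)
```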

  18. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

    The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration exercise, thereby bypassing the challenging task of model structure determination and identification. Parameter identification problems can thus lead to ill-calibrated models with low predictive power and large model uncertainty. Every calibration exercise should therefore be preceded by a proper model ... and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring ...

  19. New approach for validating the segmentation of 3D data applied to individual fibre extraction

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2017-01-01

    We present two approaches for validating the segmentation of 3D data. The first approach consists of comparing the amount of estimated material to a value provided by the manufacturer. The second approach consists of comparing the segmented results to those obtained from imaging modalities that provide a better resolution and therefore a more accurate segmentation. The imaging modalities used for comparison are scanning electron microscopy, optical microscopy and synchrotron CT. The validation methods are applied to assess the segmentation of individual fibres from X-ray microtomograms.
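
The first validation approach, comparing the amount of segmented material against a manufacturer-provided value, reduces to computing a volume fraction over the binary segmentation. A minimal sketch with a toy 4x4x4 volume and a hypothetical quoted fraction:

```python
def material_fraction(volume):
    """Fraction of voxels labelled 1 (fibre) in a binary 3D volume."""
    voxels = [v for plane in volume for row in plane for v in row]
    return sum(voxels) / len(voxels)

# toy 4x4x4 segmentation: a 2x2 fibre bundle running through all 4 slices
vol = [[[1 if (y < 2 and x < 2) else 0 for x in range(4)]
        for y in range(4)] for z in range(4)]

frac = material_fraction(vol)
manufacturer_fraction = 0.25  # hypothetical quoted fibre volume fraction
print(abs(frac - manufacturer_fraction) < 0.05)  # → True
```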

  20. A Geometric Approach to Diagnosis Applied to A Ship Propulsion Problem

    DEFF Research Database (Denmark)

    Lootsma, T.F.; Izadi-Zamanabadi, Roozbeh; Nijmeijer, H.

    A geometric approach to FDI diagnosis for input-affine nonlinear systems is briefly described and applied to a ship propulsion benchmark. The analysis method is used to examine the possibility of detecting and isolating predefined faults in the system. The considered faults cover sensor, actuator...

  1. A semiparametric approach to physiological flow models.

    Science.gov (United States)

    Verotta, D; Sheiner, L B; Ebling, W F; Stanski, D R

    1989-08-01

    By regarding sampled tissues in a physiological model as linear subsystems, the usual advantages of flow models are preserved while mitigating two of their disadvantages, (i) the need for assumptions regarding intratissue kinetics, and (ii) the need to simultaneously fit data from several tissues. To apply the linear systems approach, both arterial blood and (interesting) tissue drug concentrations must be measured. The body is modeled as having an arterial compartment (A) distributing drug to different linear subsystems (tissues), connected in a specific way by blood flow. The response (CA, with dimensions of concentration) of A is measured. Tissues receive input from A (and optionally from other tissues), and send output to the outside or to other parts of the body. The response (CT, total amount of drug in the tissue (T) divided by the volume of T) from the T-th one, for example, of such tissues is also observed. From linear systems theory, CT can be expressed as the convolution of CA with a disposition function, F(t) (with dimensions 1/time). The function F(t) depends on the (unknown) structure of T, but has certain other constant properties: the integral ∫₀^∞ F(t) dt is the steady-state ratio of CT to CA, and the point F(0) is the clearance rate of drug from A to T divided by the volume of T. A formula for the clearance rate of drug from T to outside T can be derived. To estimate F(t) empirically, and thus mitigate disadvantage (i), we suggest that, first, a nonparametric (or parametric) function be fitted to CA data yielding predicted values, CA, and, second, the convolution integral of CA with F(t) be fitted to CT data using a deconvolution method. By so doing, each tissue's data are analyzed separately, thus mitigating disadvantage (ii). A method for system simulation is also proposed. The results of applying the approach to simulated data and to real thiopental data are reported.
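
The central relation, CT equal to the convolution of CA with the disposition function F(t), can be sketched numerically. The exponential arterial input and the one-compartment F(t) below are assumptions for illustration, not the thiopental data:

```python
import math

def convolve(ca, f, dt):
    """Causal discrete convolution: CT(t_i) ~ dt * sum_j CA(t_j) * F(t_(i-j))."""
    return [dt * sum(ca[j] * f[i - j] for j in range(i + 1))
            for i in range(len(ca))]

dt = 0.1
t = [i * dt for i in range(200)]
ca = [math.exp(-0.5 * ti) for ti in t]   # assumed arterial concentration CA(t)
k = 0.8
f = [k * math.exp(-k * ti) for ti in t]  # assumed disposition function F(t)
ct = convolve(ca, f, dt)                 # predicted tissue concentration CT(t)

# the integral of F(t) approximates the steady-state CT/CA ratio (1.0 for this
# normalized F), and F(0) = k plays the role of clearance per tissue volume
area_f = dt * sum(f)
print(round(area_f, 2))  # ~1.0 (the left-Riemann sum slightly overestimates)
```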

  2. Applying a realistic evaluation model to occupational safety interventions

    DEFF Research Database (Denmark)

    Pedersen, Louise Møller

    2017-01-01

    Background: Recent literature characterizes occupational safety interventions as complex social activities, applied in complex and dynamic social systems. Hence, the actual outcomes of an intervention will vary, depending on the intervention, the implementation process, context, personal characteristics of key actors (defined mechanisms), and the interplay between them, and can be categorized as expected or unexpected. However, little is known about 'how' to include context and mechanisms in evaluations of intervention effectiveness. A revised realistic evaluation model has been introduced ... of occupational safety interventions. Conclusion: The revised realistic evaluation model can help safety science forward in identifying key factors for the success of occupational safety interventions. However, future research should strengthen the link between the immediate intervention results and outcome.

  3. Nature preservation acceptance model applied to tanker oil spill simulations

    DEFF Research Database (Denmark)

    Friis-Hansen, Peter; Ditlevsen, Ove Dalager

    2003-01-01

    ... is exemplified by a study of oil spills due to simulated tanker collisions in the Danish straits. It is found that the distribution of the oil spill volume per spill is well represented by an exponential distribution both in Oeresund and in Great Belt. When applied in the Poisson model, a risk profile reasonably close to the standard lognormal profile is obtained. Moreover, based on data pairs (volume, cost) for worldwide oil spills it is inferred that the conditional distribution of the costs given the spill volume is well modeled by a lognormal distribution. By unconditioning by the exponential distribution of the single oil spill, a risk profile for the costs is obtained that is indistinguishable from the standard lognormal risk profile. Finally, the question of formulating a public risk acceptance criterion is addressed following Ditlevsen, and it is argued that a Nature Preservation Willingness Index can ...

  4. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches; Modelisation de la rupture sismique, prediction du mouvement fort, et evaluation de l'alea sismique: approches fondamentale et appliquee

    Energy Technology Data Exchange (ETDEWEB)

    Berge-Thierry, C

    2007-05-15

    The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my Ph.D. thesis in 1997. This synthesis covers the two years as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of the seismic risk topic, and particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economical consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or the structural specificity (conventional structure or high-risk construction), seismic hazard assessment needs: to identify and locate the seismic sources (zones or faults), to characterize their activity, and to evaluate the seismic motion to which the structure has to resist (including the site effects). I specialized in the field of numerical strong-motion prediction using high-frequency seismic source modelling, and forming part of the IRSN allowed me to work rapidly on the different tasks of seismic hazard assessment. Thanks to this expert practice and to participation in the evolution of the regulations (nuclear power plants, conventional and chemical structures), I have been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of the input ground motion in designing or verifying the stability of structures. (author)

  5. Optimal control applied to a thoraco-abdominal CPR model.

    Science.gov (United States)

    Jung, Eunok; Lenhart, Suzanne; Protopopescu, Vladimir; Babbs, Charles

    2008-06-01

    The techniques of optimal control are applied to a validated blood circulation model of cardiopulmonary resuscitation (CPR), consisting of a system of seven difference equations. In this system, the non-homogeneous forcing terms are chest and abdominal pressures acting as the 'controls'. We seek to maximize the blood flow, as measured by the pressure difference between the thoracic aorta and the right atrium. By applying optimal control methods, we characterize the optimal waveforms for external chest and abdominal compression during cardiac arrest and CPR in terms of the solutions of the circulation model and of the corresponding adjoint system. Numerical results are given for various scenarios. The optimal waveforms confirm the previously discovered positive effects of active decompression and interposed abdominal compression. These waveforms can be implemented with manual (Lifestick-like) and mechanical (vest-like) devices to achieve levels of blood flow substantially higher than those provided by standard CPR, a technique which, despite its long history, is far from optimal.

  6. Modeling for fairness: A Rawlsian approach.

    Science.gov (United States)

    Diekmann, Sven; Zwart, Sjoerd D

    2014-06-01

    In this paper we introduce the overlapping design consensus for the construction of models in design and the related value judgments. The overlapping design consensus is inspired by Rawls' overlapping consensus. The overlapping design consensus is a well-informed, mutual agreement among all stakeholders based on fairness. Fairness is respected if all stakeholders' interests are given due and equal attention. For reaching such fair agreement, we apply Rawls' original position and reflective equilibrium to modeling. We argue that by striving for the original position, stakeholders expel invalid arguments, hierarchies, unwarranted beliefs, and bargaining effects from influencing the consensus. The reflective equilibrium requires that stakeholders' beliefs cohere with the final agreement and its justification. Therefore, the overlapping design consensus is not only an agreement to decisions, as in most other stakeholder approaches; it is also an agreement to their justification, and this justification is consistent with each stakeholder's beliefs. In support of fairness, we argue that fairness qualifies as a maxim in modeling. We furthermore distinguish values embedded in a model from values that are implied by its context of application. Finally, we conclude that reaching an overlapping design consensus requires communication about the properties of and values related to a model.

  7. A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling.

    Science.gov (United States)

    Kuprat, A P; Kabilan, S; Carson, J P; Corley, R A; Einstein, D R

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton's Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. 
Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple sets

  8. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Science.gov (United States)

    Kuprat, A. P.; Kabilan, S.; Carson, J. P.; Corley, R. A.; Einstein, D. R.

    2013-07-01

    pressure applied to the multiple sets of ODEs. In both the simplified geometry and in the imaging-based geometry, the performance of the method was comparable to that of monolithic schemes, in most cases requiring only a single CFD evaluation per time step. Thus, this new accelerator allows us to begin combining pulmonary CFD models with lower-dimensional models of pulmonary mechanics with little computational overhead. Moreover, because the CFD and lower-dimensional models are totally separate, this framework affords great flexibility in terms of the type and breadth of the adopted lower-dimensional model, allowing the biomedical researcher to appropriately focus on model design. Research funded by National Heart, Lung, and Blood Institute Award 1RO1HL073598.

  9. A soil-plant model applied to phytoremediation of metals.

    Science.gov (United States)

    Lugli, Francesco; Mahler, Claudio Fernando

    2016-01-01

    This study reports a phytoremediation pot experiment using an open-source program. Unsaturated water flow was described by the Richards' equation and solute transport by the advection-dispersion equation. Sink terms in the governing flow and transport equations accounted for root water and solute uptake, respectively. Experimental data were related to the application of Vetiver grass to soil contaminated by metal ions. Sensitivity analysis revealed that due to the specific experimental set-up (bottom flux not allowed), hydraulic model parameters did not influence root water (and contaminant) uptake. In contrast, the results were highly correlated with plant solar radiation interception efficiency (leaf area index). The amounts of metals accumulated in the plant tissue were compared to numerical values of cumulative uptake. Pb(2+) and Zn(2+) uptake was satisfactorily described using a passive model. However, for Ni(2+) and Cd(2+), a specific calibration of the active uptake model was necessary. Calibrated MM parameters for Ni(2+), Cd(2+), and Pb(2+) were compared to values in the literature, generally suggesting lower rates and saturation advance. A parameter (saturation ratio) was introduced to assess the efficiency of contaminant uptake. Numerical analysis, applying actual field conditions, showed the limitation of the active model, which is independent of the transpiration rate.

  10. Structure-selection techniques applied to continuous-time nonlinear models

    Science.gov (United States)

    Aguirre, Luis A.; Freitas, Ubiratan S.; Letellier, Christophe; Maquet, Jean

    2001-10-01

    This paper addresses the problem of choosing the multinomials that should compose a polynomial mathematical model starting from data. The mathematical representation used is a nonlinear differential equation of the polynomial type. Some approaches that have been used in the context of discrete-time models are adapted and applied to continuous-time models. Two examples are included to illustrate the main ideas. Models obtained with and without structure selection are compared using topological analysis. The main differences between structure-selected models and complete structure models are: (i) the former are more parsimonious than the latter, (ii) a predefined fixed-point configuration can be guaranteed for the former, and (iii) the former set of models produce attractors that are topologically closer to the original attractor than those produced by the complete structure models.

  11. Consideration of an applied model of public health program infrastructure.

    Science.gov (United States)

    Lavinghouze, René; Snyder, Kimberly; Rieker, Patricia; Ottoson, Judith

    2013-01-01

    Systemic infrastructure is key to public health achievements. Individual public health program infrastructure feeds into this larger system. Although program infrastructure is rarely defined, it needs to be operationalized for effective implementation and evaluation. The Ecological Model of Infrastructure (EMI) is one approach to defining program infrastructure. The EMI consists of 5 core (Leadership, Partnerships, State Plans, Engaged Data, and Managed Resources) and 2 supporting (Strategic Understanding and Tactical Action) elements that are enveloped in a program's context. We conducted a literature search across public health programs to determine support for the EMI. Four of the core elements were consistently addressed, and the other EMI elements were intermittently addressed. The EMI provides an initial and partial model for understanding program infrastructure, but additional work is needed to identify evidence-based indicators of infrastructure elements that can be used to measure success and link infrastructure to public health outcomes, capacity, and sustainability.

  12. Applying the model of excellence in dental healthcare

    Directory of Open Access Journals (Sweden)

    Tekić Jasmina

    2015-01-01

    Introduction. Models of excellence are considered a practical tool in the field of management that should help a variety of organizations, including dental, to carry out the measurement of the quality of provided services, and so define their position in relation to excellence. The quality of healthcare implies the degree within which the system of healthcare and health services increases the likelihood of positive treatment outcome. Objective. The aim of the present study was to define a model of excellence in the field of dental healthcare (DHC) in the Republic of Serbia and suggest the model of DHC whose services will have the characteristics of outstanding service in the dental practice. Methods. In this study a specially designed questionnaire was used for the assessment of the maturity level of applied management regarding quality in healthcare organizations of the Republic of Serbia. The questionnaire consists of 13 units and a total of 240 questions. Results. The results of the study were discussed involving four areas: (1) defining the main criteria and sub-criteria, (2) the elements of excellence of DHC in the Republic of Serbia, (3) the quality of DHC in the Republic of Serbia, and (4) defining the framework of the model of excellence for the DHC in the Republic of Serbia. The main criteria which defined the framework and implementation model of excellence in the field of DHC in Serbia were: leadership, management, human resources, policy and strategy, other resources, processes, patients' satisfaction, employees' satisfaction, impact on society and business results. The model had two main parts: the possibilities for the first five criteria and options for the other four criteria. Conclusion. Excellence in DHC business as well as the excellence of provided dental services are increasingly becoming the norm and good practice, and progressively less the exception.

  13. Approaches and models of intercultural education

    Directory of Open Access Journals (Sweden)

    Iván Manuel Sánchez Fontalvo

    2013-10-01

    Aware of the need to build an intercultural society, awareness must be assumed in all social spheres, among which education plays a leading role. A transcendental role, since it must promote educational spaces that form people with the virtues and capacities that allow them to live together in multicultural and socially diverse (sometimes unequal) contexts in an increasingly globalized and interconnected world, and foster the development of feelings of shared civic belonging to neighbourhood, city, region and country, giving them concern and critical judgement towards marginalization, poverty, misery and the inequitable distribution of wealth, the causes of structural violence, but at the same time a will to work for the welfare and transformation of these scenarios. Given these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they are applied.

  14. Applying the health action process approach to bicycle helmet use and evaluating a social marketing campaign.

    Science.gov (United States)

    Karl, Florian M; Smith, Jennifer; Piedt, Shannon; Turcotte, Kate; Pike, Ian

    2017-08-05

    Bicycle injuries are of concern in Canada. Since helmet use was mandated in 1996 in the province of British Columbia, Canada, use has increased and head injuries have decreased. Despite the law, many cyclists do not wear a helmet. The health action process approach (HAPA) model explains intention and behaviour with self-efficacy, risk perception, outcome expectancies and planning constructs. The present study examines the impact of a social marketing campaign on HAPA constructs in the context of bicycle helmet use. A questionnaire was administered to identify factors determining helmet use. Intention to obey the law, and perceived risk of being caught if not obeying the law, were included as additional constructs. Path analysis was used to extract the strongest influences on intention and behaviour. The social marketing campaign was evaluated through t-test comparisons after propensity score matching and generalised linear modelling (GLM) were applied to adjust for the same covariates. 400 cyclists aged 25-54 years completed the questionnaire. Self-efficacy and intention to obey the law were most predictive of intention to wear a helmet, which, moderated by planning, strongly predicted behaviour. Perceived risk and outcome expectancies had no significant impact on intention. GLM showed that exposure to the campaign was significantly associated with higher values in self-efficacy, intention and bicycle helmet use. Self-efficacy and planning are important points of action for promoting helmet use. Social marketing campaigns that remind people of appropriate preventive action have an impact on behaviour.

  15. Lessons learned for applying a paired-catchment approach in drought analysis

    Science.gov (United States)

    Van Loon, Anne; Rangecroft, Sally; Coxon, Gemma; Agustín Breña Naranjo, José; Van Ogtrop, Floris; Croghan, Danny; Van Lanen, Henny

    2017-04-01

    Ongoing research is looking to quantify the human impact on hydrological drought using observed data. One potentially suitable method is the paired-catchment approach. Paired catchments have been successfully used for quantifying the impact of human actions (e.g. forest treatment and wildfires) on various components of a catchment's water balance. However, it is unclear whether this method could successfully be applied to drought. In this study, we used a paired-catchment approach to quantify the effects of reservoirs, groundwater abstraction and urbanisation on hydrological drought in the UK, Mexico, and Australia. Following recommendations in the literature, we undertook a thorough catchment selection and identified catchments of similar size, climate, geology, and topography. One catchment of the pair was affected by either reservoirs, groundwater abstraction or urbanisation. For the selected catchment pairs, we standardised streamflow time series to catchment area, calculated a drought threshold from the natural catchment and applied it to the human-influenced catchment. The underlying assumption is that the differences in drought severity between catchments can then be attributed to the anthropogenic activity. In some catchments we had local knowledge about human influences, and therefore we could compare our paired-catchment results with hydrological model scenarios. However, we experienced that detailed data on human influences usually are not well recorded. The results showed us that it is important to account for variation in average annual precipitation between the paired catchments to be able to transfer the drought threshold of the natural catchment to the human-influenced catchment. This can be achieved by scaling the discharge by the difference in annual average precipitation.
We also found that the temporal distribution of precipitation is important, because if meteorological droughts differ between the paired catchments, this may mask changes caused
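
The threshold-transfer step described above can be sketched as follows: a low-flow threshold derived from the natural catchment's record is scaled by the ratio of mean annual precipitation before being applied to the human-influenced record, and the flow deficit below it is then attributed to the human influence. The daily flows, precipitation totals and the constant abstraction are synthetic.

```python
def drought_deficit(natural_q, influenced_q, p_nat, p_inf, quantile=0.2):
    """Transfer a flow-percentile threshold from the natural to the
    human-influenced catchment, scaled by the ratio of mean annual
    precipitation, then sum the deficit below it (illustrative sketch)."""
    q = sorted(natural_q)
    threshold = q[int(quantile * len(q))] * (p_inf / p_nat)
    deficit = sum(threshold - x for x in influenced_q if x < threshold)
    days = sum(1 for x in influenced_q if x < threshold)
    return threshold, deficit, days

# synthetic daily flows (m3/s): the influenced catchment loses a constant
# 0.27 m3/s to abstraction; both catchments get the same precipitation here
natural = [1.0 + 0.5 * ((i % 30) / 30.0) for i in range(360)]
influenced = [max(x - 0.27, 0.0) for x in natural]

thr, deficit, days = drought_deficit(natural, influenced, p_nat=800.0, p_inf=800.0)
print(days, round(deficit, 2))  # → 276 51.52
```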

  16. Improving Credit Scorecard Modeling Through Applying Text Analysis

    Directory of Open Access Journals (Sweden)

    Omar Ghailan

    2016-04-01

    In credit card scoring and loans management, the prediction of the applicant's future behavior is an important decision support tool and a key factor in reducing the risk of loan default. A lot of data mining and classification approaches have been developed for the credit scoring purpose. To the best of our knowledge, building a credit scorecard by analyzing the textual data in the application form has not been explored so far. This paper proposes a comprehensive credit scorecard model technique that improves credit scorecard modeling through employing textual data analysis. This study uses a sample of loan application forms of a financial institution providing loan services in Yemen, which represents a real-world situation of credit scoring and loan management. The sample contains a set of Arabic textual data attributes defining the applicants. A credit scoring model based on text mining pre-processing and logistic regression techniques is proposed and evaluated through a comparison with a group of credit scorecard modeling techniques that use only the numeric attributes in the application form. The results show that adding the textual attributes analysis achieves higher classification effectiveness and outperforms the other traditional numerical data analysis techniques.
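
A minimal sketch of the scorecard idea: logistic regression over a numeric application attribute plus a binary flag mined from the form's free text. The keyword flag, the toy applicants and the plain stochastic-gradient trainer are illustrative assumptions, not the paper's Arabic text-mining pipeline.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Per-sample (stochastic) gradient descent for plain logistic regression."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# toy applicants: [income_scaled, negative_keyword_flag]; the flag stands in
# for a feature mined from the free-text purpose field (hypothetical)
X = [[0.9, 0], [0.8, 0], [0.7, 1], [0.3, 0], [0.2, 1], [0.1, 1]]
y = [1, 1, 0, 1, 0, 0]  # 1 = repaid, 0 = default

w, b = train_logistic(X, y)
preds = [int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0.5) for xi in X]
print(preds == y)  # → True: the text flag separates the toy data
```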

  17. Asteroid modeling for testing spacecraft approach and landing.

    Science.gov (United States)

    Martin, Iain; Parkes, Steve; Dunstan, Martin; Rowell, Nick

    2014-01-01

    Spacecraft exploration of asteroids presents autonomous-navigation challenges that can be aided by virtual models to test and develop guidance and hazard-avoidance systems. Researchers have extended and applied graphics techniques to create high-resolution asteroid models to simulate cameras and other spacecraft sensors approaching and descending toward asteroids. A scalable model structure with evenly spaced vertices simplifies terrain modeling, avoids distortion at the poles, and enables triangle-strip definition for efficient rendering. To create the base asteroid models, this approach uses two-phase Poisson faulting and Perlin noise. It creates realistic asteroid surfaces by adding both crater models adapted from lunar terrain simulation and multiresolution boulders. The researchers evaluated the virtual asteroids by comparing them with real asteroid images, examining the slope distributions, and applying a surface-relative feature-tracking algorithm to the models.
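
The multi-octave terrain synthesis can be approximated with a simpler scheme: summing smoothly interpolated random lattices at doubling frequency and halving amplitude. This sketch substitutes value noise for true Perlin gradient noise and omits the two-phase Poisson faulting, crater and boulder models described in the paper.

```python
import random

def value_noise_grid(n, octaves=3, seed=7):
    """Value-noise heightmap: sum smoothstep-interpolated random lattices,
    doubling frequency and halving amplitude per octave."""
    rng = random.Random(seed)
    height = [[0.0] * n for _ in range(n)]
    amp, freq = 1.0, 4
    for _ in range(octaves):
        lattice = [[rng.random() for _ in range(freq + 1)] for _ in range(freq + 1)]
        for i in range(n):
            for j in range(n):
                x, y = i * freq / n, j * freq / n
                x0, y0 = int(x), int(y)
                fx, fy = x - x0, y - y0
                # smoothstep weights between the four lattice corners
                sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
                top = lattice[y0][x0] * (1 - sx) + lattice[y0][x0 + 1] * sx
                bot = lattice[y0 + 1][x0] * (1 - sx) + lattice[y0 + 1][x0 + 1] * sx
                height[i][j] += amp * (top * (1 - sy) + bot * sy)
        amp *= 0.5
        freq *= 2
    return height

h = value_noise_grid(64)
flat = [v for row in h for v in row]
# amplitudes 1 + 0.5 + 0.25 bound the height field to [0, 1.75]
print(min(flat) >= 0.0 and max(flat) <= 1.75)  # → True
```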

  18. Finite element modeling and analysis of piezo-integrated composite structures under large applied electric fields

    Science.gov (United States)

    Rao, M. N.; Tarun, S.; Schmidt, R.; Schröder, K.-U.

    2016-05-01

    In this article, we focus on static finite element (FE) simulation of piezoelectric laminated composite plates and shells, considering the nonlinear constitutive behavior of piezoelectric materials under large applied electric fields. Under the assumptions of small strains and large electric fields, the second-order nonlinear constitutive equations are used in the variational principle approach, to develop a nonlinear FE model. Numerical simulations are performed to study the effect of material nonlinearity for piezoelectric bimorph and laminated composite plates as well as cylindrical shells. In comparison to the experimental investigations existing in the literature, the results predicted by the present model agree very well. The importance of the present nonlinear model is highlighted especially in large applied electric fields, and it is shown that the difference between the results simulated by linear and nonlinear constitutive FE models cannot be omitted.

  19. The bi-potential method applied to the modeling of dynamic problems with friction

    Science.gov (United States)

    Feng, Z.-Q.; Joli, P.; Cros, J.-M.; Magnain, B.

    2005-10-01

    The bi-potential method has been successfully applied to the modeling of frictional contact problems in static cases. This paper presents an extension of this method for the dynamic analysis of impact problems with deformable bodies. A first-order algorithm is applied to the numerical integration of the time-discretized equation of motion. Using Object-Oriented Programming (OOP) techniques in C++ and OpenGL graphical support, a finite element code including the pre/postprocessor FER/Impact is developed. The numerical results show that, at the present stage of development, this approach is robust and efficient in terms of numerical stability and precision compared with the penalty method.

  20. Validation of Modeling Flow Approaching Navigation Locks

    Science.gov (United States)

    2013-08-01

    [Figure-list residue from the report front matter; recoverable captions: Figure 9, "Tools and instrumentation, bracket attached to rail"; Figure 10, "Tools and instrumentation, direction vernier"; Figure 11, "Plan A lock approach, upstream approach". The record also references a numerical model.]

  1. Dynamical behavior of the Niedermayer algorithm applied to Potts models

    Science.gov (United States)

    Girardi, D.; Penna, T. J. P.; Branco, N. S.

    2012-08-01

    In this work, we make a numerical study of the dynamic universality class of the Niedermayer algorithm applied to the two-dimensional Potts model with 2, 3, and 4 states. This algorithm updates clusters of spins and has a free parameter, E0, which controls the size of these clusters, such that E0=1 is the Metropolis algorithm and E0=0 regains the Wolff algorithm, for the Potts model. For -1 ≤ E0 < 0, only clusters of equal spins can be formed: we show that the mean size of the clusters of (possibly) turned spins initially grows with the linear size of the lattice, L, but eventually saturates at a given lattice size L˜, which depends on E0. For L ≥ L˜, the Niedermayer algorithm is in the same dynamic universality class as the Metropolis one, i.e., they have the same dynamic exponent. For E0 > 0, spins in different states may be added to the cluster, but the dynamic behavior is less efficient than for the Wolff algorithm (E0 = 0). Therefore, our results show that the Wolff algorithm is the best choice for Potts models, compared to Niedermayer's generalization.
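    The cluster update described above can be illustrated in its E0 = 0 limit, where the algorithm reduces to the Wolff update for the q-state Potts model. A minimal sketch (lattice layout, coupling J = 1 in the bond probability, and all function names are our own, not taken from the record):

```python
import math
import random

def wolff_update(spins, L, q, beta, rng=random):
    """One Wolff cluster update (the E0 = 0 limit of the Niedermayer
    algorithm) for the q-state Potts model on an L x L periodic lattice.
    spins is a flat list of length L*L with values in 0..q-1."""
    p_add = 1.0 - math.exp(-beta)          # bond activation probability (J = 1)
    seed = rng.randrange(L * L)
    old = spins[seed]
    new = (old + rng.randrange(1, q)) % q  # any state different from old
    cluster = {seed}
    stack = [seed]
    while stack:                            # grow the cluster over equal spins
        i = stack.pop()
        x, y = i % L, i // L
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            j = (nx % L) + (ny % L) * L
            if j not in cluster and spins[j] == old and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:                       # flip the whole cluster at once
        spins[i] = new
    return len(cluster)
```

    In the general algorithm the bond probability additionally depends on E0, which is what interpolates between this cluster move and single-spin Metropolis updates.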

  2. Spectral Aging Model Applied to Meteosat First Generation Visible Band

    Directory of Open Access Journals (Sweden)

    Ilse Decoster

    2014-03-01

    Full Text Available The Meteosat satellites have been operational since the early eighties, creating so far a continuous time period of observations of more than 30 years. In order to use these data for climate data records, a consistent calibration is necessary between the consecutive instruments. Studies have shown that the Meteosat First Generation (MFG) satellites (1982–2006) suffer from in-flight degradation which is spectral in nature and is not corrected by the official EUMETSAT calibration. Continuing previously published work by the same authors, this paper applies the spectral aging model to a set of clear-sky and cloudy targets, and derives the model parameters for all six MFG satellites (Meteosat-2 to -7). Several problems have been encountered, both instrumental and geophysical in origin, and these are discussed and illustrated here in detail. The paper shows how the spectral aging model is an improvement over the EUMETSAT calibration method, with a stability of 1%–2% for Meteosat-4 to -7, which increases up to 6% for ocean sites using the full MFG time period.

  3. Multidisciplinary Management: Model of Excellence in the Management Applied to Products and Services

    OpenAIRE

    Guerreiro, Evandro; Costa Neto, Pedro; Moreira Filho, Ulysses

    2014-01-01

    Part 1: Knowledge-Based Performance Improvement; International audience; Multidisciplinary management is the guiding vision of modern organizations and of systems thinking, which requires new approaches to organizational excellence and the quality management process. The objective of this article is to present a model for multidisciplinary management of quality applied to products and services, based on the American, Japanese, and Brazilian National Quality Awards. The methodology used to build th...

  4. Linear model applied to the evaluation of pharmaceutical stability data

    Directory of Open Access Journals (Sweden)

    Renato Cesar Souza

    2013-09-01

    Full Text Available The expiry date on the packaging of a product gives the consumer confidence that the product will retain its identity, content, quality and purity throughout its period of validity. In the pharmaceutical industry, the definition of this term is based on stability data obtained during product registration. Accordingly, this work aims to apply linear regression, according to guideline ICH Q1E (2003), to evaluate some aspects of a product undergoing registration in Brazil. For this purpose, the evaluation was carried out with the development center of a multinational company in Brazil, on samples of three different batches composed of two active pharmaceutical ingredients in two different packages. Based on the preliminary results obtained, it was possible to observe the different degradation tendencies of the product in the two packages and the relationship between the variables studied, adding knowledge so that new linear models can be applied and developed for other products.
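    The ICH Q1E regression idea the record applies can be sketched as follows: fit assay versus time by ordinary least squares, then take the shelf life as the earliest time at which the one-sided 95% confidence bound on the mean crosses the specification limit. This is a generic illustration, not the paper's implementation; the function name, the coarse time scan, and passing the Student-t critical value in as a parameter (to keep the sketch dependency-free) are our simplifications:

```python
import math

def ols_shelf_life(times, assays, spec_limit, t_crit):
    """Shelf-life estimate: earliest time where the lower confidence bound
    on the fitted mean assay falls below spec_limit. t_crit is the
    one-sided 95% Student-t critical value for n-2 degrees of freedom."""
    n = len(times)
    xbar = sum(times) / n
    ybar = sum(assays) / n
    sxx = sum((x - xbar) ** 2 for x in times)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(times, assays))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    resid = [y - (intercept + slope * x) for x, y in zip(times, assays)]
    s = math.sqrt(sum(r * r for r in resid) / (n - 2))  # residual std. error
    t = 0.0
    while t < 120:                      # scan months; coarse grid suffices here
        mean = intercept + slope * t
        half = t_crit * s * math.sqrt(1.0 / n + (t - xbar) ** 2 / sxx)
        if mean - half < spec_limit:    # lower bound crosses the spec limit
            return t
        t += 0.1
    return None
```

    In practice one would also test poolability of batches before fitting a single line, as Q1E prescribes.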

  5. Process Approach to Teaching Writing Applied in Different Teaching Models

    Science.gov (United States)

    Sun, Chunling; Feng, Guoping

    2009-01-01

    English writing, as a basic language skill for second language learners, is being paid close attention to. How to achieve better results in English teaching and how to develop students' writing competence remain an arduous task for English teachers. Based on a review of the relevant literature from other researchers as well as a summary of the…

  6. Applying a Bayesian Approach to Identification of Orthotropic Elastic Constants from Full Field Displacement Measurements

    Science.gov (United States)

    Gogu, C.; Yin, W.; Haftka, R.; Ifju, P.; Molimard, J.; Le Riche, R.; Vautrin, A.

    2010-06-01

    A major challenge in the identification of material properties is handling the different sources of uncertainty in the experiment, and in the modelling of the experiment, when estimating the resulting uncertainty in the identified properties. Numerous improvements in identification methods have provided increasingly accurate estimates of various material properties. However, characterizing the uncertainty in the identified properties is still relatively crude. Different material properties obtained from a single test are not obtained with the same confidence: typically the highest uncertainty is associated with the properties to which the experiment is least sensitive. In addition, the uncertainties in different properties can be strongly correlated, so that obtaining only variance estimates may be misleading. A possible approach for handling the different sources of uncertainty and estimating the uncertainty in the identified properties is the Bayesian method. This method was introduced in the late 1970s in the context of identification [1] and has been applied since to different problems, notably identification of elastic constants from plate vibration experiments [2]-[4]. The applications of the method to these classical pointwise tests involved only a small number of measurements (typically ten natural frequencies in the previously cited vibration tests), which facilitated the application of the Bayesian approach. For identifying elastic constants, full field strain or displacement measurements provide a high number of measured quantities (one measurement per image pixel) and hence a promise of smaller uncertainties in the properties. However, the high number of measurements also represents a major computational challenge in applying the Bayesian approach to full field measurements. To address this challenge we propose an approach based on the proper orthogonal decomposition (POD) of the full fields in order to drastically reduce their dimensionality. POD is

  7. Applying a Bayesian Approach to Identification of Orthotropic Elastic Constants from Full Field Displacement Measurements

    Directory of Open Access Journals (Sweden)

    Le Riche R.

    2010-06-01

    Full Text Available A major challenge in the identification of material properties is handling the different sources of uncertainty in the experiment, and in the modelling of the experiment, when estimating the resulting uncertainty in the identified properties. Numerous improvements in identification methods have provided increasingly accurate estimates of various material properties. However, characterizing the uncertainty in the identified properties is still relatively crude. Different material properties obtained from a single test are not obtained with the same confidence: typically the highest uncertainty is associated with the properties to which the experiment is least sensitive. In addition, the uncertainties in different properties can be strongly correlated, so that obtaining only variance estimates may be misleading. A possible approach for handling the different sources of uncertainty and estimating the uncertainty in the identified properties is the Bayesian method. This method was introduced in the late 1970s in the context of identification [1] and has been applied since to different problems, notably identification of elastic constants from plate vibration experiments [2]-[4]. The applications of the method to these classical pointwise tests involved only a small number of measurements (typically ten natural frequencies in the previously cited vibration tests), which facilitated the application of the Bayesian approach. For identifying elastic constants, full field strain or displacement measurements provide a high number of measured quantities (one measurement per image pixel) and hence a promise of smaller uncertainties in the properties. However, the high number of measurements also represents a major computational challenge in applying the Bayesian approach to full field measurements. To address this challenge we propose an approach based on the proper orthogonal decomposition (POD) of the full fields in order to drastically reduce their dimensionality.
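    The POD-based dimensionality reduction both of these records describe can be sketched with a snapshot-matrix SVD. This is a generic NumPy illustration; the variable names and the fixed rank choice k are ours, not details from the papers:

```python
import numpy as np

def pod_reduce(snapshots, k):
    """Proper orthogonal decomposition of full-field measurements.
    snapshots: (n_pixels, n_snapshots) matrix, one measured field per
    column. Returns the mean field, the k leading spatial modes, and the
    coefficients that approximate each field with just k numbers."""
    mean = snapshots.mean(axis=1, keepdims=True)
    centered = snapshots - mean
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    modes = U[:, :k]                # leading spatial POD modes
    coeffs = modes.T @ centered     # k coefficients per snapshot
    return mean, modes, coeffs

# Reconstruction: mean + modes @ coeffs approximates the original fields,
# so the Bayesian identification can work on k coefficients per image
# instead of one value per pixel.
```
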

  8. Transient heat conduction in a pebble fuel applying fractional model

    Energy Technology Data Exchange (ETDEWEB)

    Gomez A, R.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, Area de Ingenieria en Recursos Energeticos, Av. San Rafael Atlixco 186, Col. Vicentina, 09340 Mexico D. F. (Mexico)], e-mail: gepe@xanum.uam.mx

    2009-10-15

    In this paper we present the time-fractional thermal diffusion equation in one-dimensional space in spherical coordinates, with the objective of analyzing the heat transfer between the fuel and coolant in a fuel element of a Pebble Bed Modular Reactor. The pebble fuel is a heterogeneous system made of microspheres constituted of UO2, pyrolytic carbon and silicon carbide, mixed with graphite. To describe the heat transfer phenomena in the pebble fuel we applied a fractional (non-Fourier) constitutive law, and a numerical model is developed in order to analyze the transient behaviour of the temperature distribution in the pebble fuel with anomalous thermal diffusion effects. (Author)
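    The record does not reproduce the governing equation. A time-fractional heat conduction equation of the kind described, written here with a Caputo time derivative of order α (the operator choice and symbols are our assumption, not taken from the paper), typically reads:

```latex
\frac{\partial^{\alpha} T(r,t)}{\partial t^{\alpha}}
  = \frac{a}{r^{2}}\,\frac{\partial}{\partial r}
    \!\left( r^{2}\,\frac{\partial T(r,t)}{\partial r} \right),
\qquad 0 < \alpha \le 1,
```

    where a is the thermal diffusivity; α = 1 recovers the classical (Fourier) heat conduction equation, while α < 1 models the anomalous, sub-diffusive transient behaviour the abstract refers to.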

  9. Generating and Analysing Data for Applied Research on Emerging Technologies: A Grounded Action Learning Approach

    Directory of Open Access Journals (Sweden)

    Pak Yoong

    2004-01-01

    Full Text Available One of the difficulties of conducting applied qualitative research on the applications of emerging technologies is finding available sources of relevant data for analysis. Because the adoption of emerging technologies is, by definition, new in many organizations, there is often a lack of experienced practitioners who have relevant background and are willing to provide useful information for the study. Therefore, it is necessary to design research approaches that can generate accessible and relevant data. This paper describes two case studies in which the researchers used a grounded action learning approach to study the nature of e-facilitation for face-to-face and for distributed electronic meetings. The grounded action learning approach combines two research methodologies, grounded theory and action learning, to produce a rigorous and flexible method for studying e-facilitation. The implications of this grounded action learning approach for practice and research will be discussed.

  10. Uncharted territory: A complex systems approach as an emerging paradigm in applied linguistics

    Directory of Open Access Journals (Sweden)

    Albert J Weideman

    2011-08-01

    Full Text Available Developing a theory of applied linguistics is a top priority for the discipline today. The emergence of a new paradigm - a complex systems approach - in applied linguistics presents us with a unique opportunity to give prominence to the development of a foundational framework for this design discipline. Far from being a mere philosophical exercise, such a framework will find application in the training and induction of new entrants into the discipline within the developing context of South Africa, as well as internationally.

  11. Correction of approximation errors with Random Forests applied to modelling of aerosol first indirect effect

    Directory of Open Access Journals (Sweden)

    A. Lipponen

    2013-04-01

    Full Text Available In atmospheric models, due to their computational time or resource limitations, physical processes have to be simulated using reduced models. The use of a reduced model, however, induces errors to the simulation results. These errors are referred to as approximation errors. In this paper, we propose a novel approach to correct these approximation errors. We model the approximation error as an additive noise process in the simulation model and employ the Random Forest (RF regression algorithm for constructing a computationally low cost predictor for the approximation error. In this way, the overall simulation problem is decomposed into two separate and computationally efficient simulation problems: solution of the reduced model and prediction of the approximation error realization. The approach is tested for handling approximation errors due to a reduced coarse sectional representation of aerosol size distribution in a cloud droplet activation calculation. The results show a significant improvement in the accuracy of the simulation compared to the conventional simulation with a reduced model. The proposed approach is rather general and extension of it to different parameterizations or reduced process models that are coupled to geoscientific models is a straightforward task. Another major benefit of this method is that it can be applied to physical processes that are dependent on a large number of variables making them difficult to be parameterized by traditional methods.
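    The decomposition proposed above — run the cheap reduced model, then add a learned prediction of its approximation error — can be sketched as below. The paper uses Random Forest regression; to keep this sketch dependency-free beyond NumPy, a trivial nearest-neighbour regressor stands in for the forest (in practice one would fit, e.g., scikit-learn's RandomForestRegressor on the same error samples). All names are ours:

```python
import numpy as np

def train_error_predictor(X_train, full_out, reduced_out):
    """Store training inputs and the approximation-error realisations
    e = full_out - reduced_out. A Random Forest would be fitted here;
    this sketch memorises the samples for 1-NN lookup instead."""
    return X_train, full_out - reduced_out

def corrected_simulation(x, reduced_model, predictor):
    """Cheap reduced-model output plus the predicted approximation error."""
    X_train, errors = predictor
    nearest = np.argmin(np.linalg.norm(X_train - x, axis=1))
    return reduced_model(x) + errors[nearest]
```

    The point is the split into two cheap problems: solving the reduced model and predicting the error realisation, exactly as in the abstract.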

  12. Models, validation, and applied geochemistry: Issues in science, communication, and philosophy

    Science.gov (United States)

    Nordstrom, D. Kirk

    2012-01-01

    Models have become so fashionable that many scientists and engineers cannot imagine working without them. The predominant use of computer codes to execute model calculations has blurred the distinction between code and model. The recent controversy regarding model validation has brought into question what we mean by a ‘model’ and by ‘validation.’ It has become apparent that the usual meaning of validation may be common in engineering practice and seems useful in legal practice but it is contrary to scientific practice and brings into question our understanding of science and how it can best be applied to such problems as hazardous waste characterization, remediation, and aqueous geochemistry in general. This review summarizes arguments against using the phrase model validation and examines efforts to validate models for high-level radioactive waste management and for permitting and monitoring open-pit mines. Part of the controversy comes from a misunderstanding of ‘prediction’ and the need to distinguish logical from temporal prediction. Another problem stems from the difference in the engineering approach contrasted with the scientific approach. The reductionist influence on the way we approach environmental investigations also limits our ability to model the interconnected nature of reality. Guidelines are proposed to improve our perceptions and proper utilization of models. Use of the word ‘validation’ is strongly discouraged when discussing model reliability.

  13. Model Mapping Approach Based on Ontology Semantics

    Directory of Open Access Journals (Sweden)

    Jinkui Hou

    2013-09-01

    Full Text Available The mapping relations between different models are the foundation for model transformation in model-driven software development. On the basis of ontology semantics, model mappings between different levels are classified by using the structural semantics of modeling languages. The general process for defining mapping relations is explored, and the principles of structure mapping are proposed subsequently. The approach is further illustrated by the mapping relations from the class model of an object-oriented modeling language to C programming code. The application research shows that the approach provides theoretical guidance for the realization of model mapping, and thus can effectively support model-driven software development.

  14. A Multiple Model Approach to Modeling Based on Fuzzy Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 张艳珠; 宋春林; 邵惠鹤

    2003-01-01

    A new multiple model (MM) approach was proposed to model complex industrial processes by using Fuzzy Support Vector Machines (F-SVMs). By applying the proposed approach to a pH neutralization titration experiment, the F-SVMs MM not only provides satisfactory approximation and generalization properties, but also achieves superior performance to the USOCPN multiple modeling method and to single modeling based on standard SVMs.

  15. Simple Queueing Model Applied to the City of Portland

    Science.gov (United States)

    Simon, Patrice M.; Esser, Jörg; Nagel, Kai

    We use a simple traffic micro-simulation model based on queueing dynamics, as introduced by Gawron [IJMPC, 9(3):393, 1998], in order to simulate traffic in Portland, Oregon. Links have a flow capacity, that is, they do not release more vehicles per second than is possible according to their capacity. This leads to queue build-up if demand exceeds capacity. Links also have a storage capacity, which means that once a link is full, vehicles that want to enter the link need to wait. This leads to queue spill-back through the network. The model is compatible with route-plan-based approaches such as TRANSIMS, where each vehicle attempts to follow its pre-computed path. Yet, both the data requirements and the computational requirements are considerably lower than for the full TRANSIMS micro-simulation. Indeed, the model uses standard emme/2 network data, and runs about eight times faster than real time with more than 100 000 vehicles simultaneously in the simulation on a single Pentium-type CPU. We derive and explain the model's fundamental diagrams. The simulation is used to simulate traffic on the emme/2 network of the Portland (Oregon) metropolitan region (20 000 links). Demand is generated by a simplified home-to-work destination assignment which generates about half a million trips for the morning peak. Route assignment is done by iterative feedback between micro-simulation and router. An iterative solution of the route assignment for the above problem can be achieved within about half a day of computing time on a desktop workstation. We compare results with field data and with results of traditional assignment runs by the Portland Metropolitan Planning Organization. Thus, with a model such as this one, it is possible to use a dynamic, activities-based approach to transportation simulation (such as in TRANSIMS) with affordable data and hardware. This should enable systematic research about the coupling of demand generation, route assignment, and micro-simulation.
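    The two link properties described — a flow capacity limiting outflow per step, and a storage capacity that causes spill-back when a downstream link is full — can be sketched as follows. This is a minimal illustration in the spirit of the queue model; class and attribute names are ours, and link travel time is omitted:

```python
from collections import deque

class Link:
    """A road link in a queue model: vehicles wait in a FIFO queue; at most
    flow_cap of them may leave per time step, and at most storage_cap
    vehicles fit on the link at once."""
    def __init__(self, flow_cap, storage_cap):
        self.flow_cap = flow_cap
        self.storage_cap = storage_cap
        self.queue = deque()

    def has_space(self):
        return len(self.queue) < self.storage_cap

    def step(self, downstream):
        """Move up to flow_cap vehicles to the downstream link, stopping
        early if downstream is full (this is the queue spill-back)."""
        moved = 0
        while self.queue and moved < self.flow_cap and downstream.has_space():
            downstream.queue.append(self.queue.popleft())
            moved += 1
        return moved
```

    Because a full downstream link blocks the transfer entirely, congestion propagates backwards through the network exactly as the abstract describes.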

  16. Effects produced by oscillations applied to nonlinear dynamic systems: a general approach and examples

    DEFF Research Database (Denmark)

    Blekhman, I. I.; Sorokin, V. S.

    2016-01-01

    A general approach to study effects produced by oscillations applied to nonlinear dynamic systems is developed. It implies a transition from the initial governing equations of motion to much simpler equations describing only the main slow component of motion (the vibro-transformed dynamics equations). The approach is named oscillatory strobodynamics, since motions are perceived as if under a stroboscopic light. The vibro-transformed dynamics equations comprise terms that capture the averaged effect of oscillations. The method of direct separation of motions appears to be an efficient…

  17. A theoretical intellectual capital model applied to cities

    Directory of Open Access Journals (Sweden)

    José Luis Alfaro Navarro

    2013-06-01

    Full Text Available New Management Information Systems (MIS) are necessary at the local level as the main source of wealth creation. Therefore, tools and approaches that provide a full future vision of any organization should be a strategic priority for economic development. Along this line, cities are "centers of knowledge and sources of growth and innovation", and integrated urban development policies are necessary. These policies support communication networks and optimize location structures, as strategies that provide opportunities for social and democratic participation for citizens. This paper proposes a theoretical model to measure and evaluate cities' intellectual capital, which allows us to determine what we must take into account to make cities a source of wealth, prosperity, welfare and future growth. Furthermore, local intellectual capital provides a long-run vision. Thus, in this paper we develop and explain how to implement a model to estimate intellectual capital in cities. In this sense, our proposal is to provide a model for measuring and managing intellectual capital using socio-economic indicators for cities. These indicators offer a long-term picture supported by a comprehensive strategy for those who occupy the local space, the infrastructure for its implementation, and the management of the environment for its development.

  18. Challenges and Limitations of Applying an Emotion-driven Design Approach on Elderly Users

    DEFF Research Database (Denmark)

    Andersen, Casper L.; Gudmundsson, Hjalte P.; Achiche, Sofiane

    2011-01-01

    a competitive advantage for companies. In this paper, challenges of applying an emotion-driven design approach to elderly people, in order to identify their user needs towards walking frames, are discussed. The discussion is based on the experiences and results obtained from the case study. To measure the emotional responses of the elderly, a questionnaire was designed and adapted from P.M.A. Desmet's product-emotion measurement instrument: PrEmo. During the case study it was observed that there were several challenges when carrying out the user survey, and that those challenges particularly related to the participants' age and cognitive abilities. The challenges encountered are discussed, and guidelines on what should be taken into account to facilitate an emotion-driven design approach for elderly people are proposed.

  19. Challenges and Limitations of Applying an Emotion-driven Design Approach on Elderly Users

    DEFF Research Database (Denmark)

    Andersen, Casper L.; Gudmundsson, Hjalte P.; Achiche, Sofiane

    2011-01-01

    Population ageing is without parallel in human history and the twenty-first century will witness even more rapid ageing than did the century just past. Understanding the user needs of the elderly and how to design better products for this segment of the population is crucial, as it can offer a competitive advantage for companies. In this paper, challenges of applying an emotion-driven design approach to elderly people, in order to identify their user needs towards walking frames, are discussed. The discussion is based on the experiences and results obtained from the case study … related to the participants' age and cognitive abilities. The challenges encountered are discussed, and guidelines on what should be taken into account to facilitate an emotion-driven design approach for elderly people are proposed.

  20. Social media marketing analytics: a multicultural approach applied to the beauty & cosmetic sector

    OpenAIRE

    Kefi, Hajer; Indra, Sitesh; Abdessalem, Talel

    2017-01-01

    We present in this paper a multicultural approach to social media marketing analytics, applied to two Facebook brand pages: French (individualistic culture, the home country of the brand) versus Saudi Arabian (collectivistic culture, one of its host countries), both published by an international beauty & cosmetics firm. Using social network analysis and content analysis, we identify the most popular posts and the most influential users within these two brand pages and highlight the differ...

  1. Applying the welfare model to at-own-risk discharges.

    Science.gov (United States)

    Krishna, Lalit Kumar Radha; Menon, Sumytra; Kanesvaran, Ravindran

    2017-08-01

    "At-own-risk discharges" or "self-discharges" evidence an irretrievable breakdown in the patient-clinician relationship, occurring when patients leave care facilities before completion of medical treatment and against medical advice. Dissolution of the therapeutic relationship terminates the physician's duty of care and professional liability with respect to care of the patient. Acquiescence to an at-own-risk discharge by the clinician is seen as respecting patient autonomy. The validity of such requests pivots on the assumptions that the patient is fully informed and competent to invoke an at-own-risk discharge, and that care up to the point of the at-own-risk discharge meets prevailing clinical standards. Palliative care's use of a multidisciplinary team approach challenges both these assumptions: first, by establishing multiple independent therapeutic relations between professionals in the multidisciplinary team and the patient, which persist despite an at-own-risk discharge. These enduring therapeutic relationships negate the suggestion that no duty of care is owed to the patient. Second, the continued employment of collusion, familial determinations, and the circumnavigation of direct patient involvement in family-centric societies compromises the patient's decision-making capacity and raises questions as to their ability to assume responsibility for the repercussions of invoking an at-own-risk discharge. With the validity of the at-own-risk discharge request in question and the patient's welfare and interests at stake, an alternative approach to assessing at-own-risk discharge requests is called for. The welfare model circumnavigates these concerns and preserves the patient's welfare through a multidisciplinary-team-guided holistic appraisal of the patient's specific situation, informed by clinical and institutional standards and evidence-based practice. The welfare model provides a robust decision-making framework for

  2. Learning Action Models: Qualitative Approach

    NARCIS (Netherlands)

    Bolander, T.; Gierasimczuk, N.; van der Hoek, W.; Holliday, W.H.; Wang, W.-F.

    2015-01-01

    In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite

  3. Essays on Applied Resource Economics Using Bioeconomic Optimization Models

    Science.gov (United States)

    Affuso, Ermanno

    With rising demographic growth, there is increasing interest in analytical studies that assess alternative policies to provide an optimal allocation of scarce natural resources while ensuring environmental sustainability. This dissertation consists of three essays in applied resource economics that are interconnected methodologically within the agricultural production sector of Economics. The first chapter examines the sustainability of biofuels by simulating and evaluating an agricultural voluntary program that aims to increase the land use efficiency in the production of biofuels of first generation in the state of Alabama. The results show that participatory decisions may increase the net energy value of biofuels by 208% and reduce emissions by 26%; significantly contributing to the state energy goals. The second chapter tests the hypothesis of overuse of fertilizers and pesticides in U.S. peanut farming with respect to other inputs and address genetic research to reduce the use of the most overused chemical input. The findings suggest that peanut producers overuse fungicide with respect to any other input and that fungi resistant genetically engineered peanuts may increase the producer welfare up to 36.2%. The third chapter implements a bioeconomic model, which consists of a biophysical model and a stochastic dynamic recursive model that is used to measure potential economic and environmental welfare of cotton farmers derived from a rotation scheme that uses peanut as a complementary crop. The results show that the rotation scenario would lower farming costs by 14% due to nitrogen credits from prior peanut land use and reduce non-point source pollution from nitrogen runoff by 6.13% compared to continuous cotton farming.

  4. An approach for evaluating the integrity of fuel applied in Innovative Nuclear Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Nakae, Nobuo, E-mail: nakae-nobuo@jnes.go.jp [Center for Research into Innovative Nuclear Energy System, Tokyo Institute of Technology, 2-12-1-N1-19, Ookayama, Meguro-ku, Tokyo 152-8550 (Japan); Ozawa, Takayuki [Advanced Nuclear System Research and Development Directorate, Japan Atomic Energy Agency, 4-33, Muramatsu, Tokai-mura, Ibaraki-ken 319-1194 (Japan); Ohta, Hirokazu; Ogata, Takanari [Nuclear Technology Research Laboratory, Central Research Institute of Electric Power Industry, 2-11-1, Iwado Kita, Komae-shi, Tokyo 201-8511 (Japan); Sekimoto, Hiroshi [Center for Research into Innovative Nuclear Energy System, Tokyo Institute of Technology, 2-12-1-N1-19, Ookayama, Meguro-ku, Tokyo 152-8550 (Japan)

    2014-03-15

    One of the important issues in the study of Innovative Nuclear Energy Systems is evaluating the integrity of the fuel applied in such systems. An approach for evaluating the integrity of the fuel is discussed here, based on the procedure currently used in the integrity evaluation of fast reactor fuel. The fuel failure modes determining fuel lifetime were reviewed, and fuel integrity was analyzed and compared with the failure criteria. Metal and nitride fuels with austenitic and ferritic stainless steel (SS) cladding tubes were examined in this study. For the purpose of representative irradiation behavior analyses of the fuel for Innovative Nuclear Energy Systems, the correlations of the cladding characteristics were modeled based on the well-known characteristics of austenitic modified 316 SS (PNC316), ferritic–martensitic steel (PNC–FMS) and oxide dispersion strengthened steel (PNC–ODS). The analysis showed that in the case of austenitic steel cladding the fuel lifetime is limited by channel fracture, a nonductile (brittle) failure mode associated with a high level of irradiation-induced swelling. In the case of ferritic steel, on the other hand, the fuel lifetime is controlled by cladding creep rupture. The lifetime evaluated here is limited to 200 GW d/t, which is lower than the target burnup value of 500 GW d/t. Possible measures to extend the lifetime include reducing the fuel smeared density and venting fission gas into the plenum for metal fuel, and reducing the maximum cladding temperature from 650 to 600 °C for both metal and nitride fuel.

  5. An approach of optimal sensitivity applied in the tertiary loop of the automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Belati, Edmarcio A. [CIMATEC - SENAI, Salvador, BA (Brazil); Alves, Dilson A. [Electrical Engineering Department, FEIS, UNESP - Sao Paulo State University (Brazil); da Costa, Geraldo R.M. [Electrical Engineering Department, EESC, USP - Sao Paulo University (Brazil)

    2008-09-15

This paper proposes an optimal-sensitivity approach applied in the tertiary loop of automatic generation control. The approach is based on the theorem of non-linear perturbation. From an optimal operating point obtained by an optimal power flow, a new optimal operating point is determined directly after a perturbation, i.e., without the need for an iterative process. This new operating point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the generators' automatic voltage regulators (AVR) are determined by the optimal sensitivity technique, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, referred to as the power sensitivity mode. Test results are presented to show the good performance of this approach. (author)
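The core idea of the record above (a new optimum obtained directly from sensitivities, without re-iterating) can be illustrated on a toy quadratic dispatch problem. All numbers and the problem itself are hypothetical, not from the paper; they only show the mechanism. Because the cost here is quadratic, the first-order sensitivity of the KKT conditions reproduces the post-perturbation optimum exactly.

```python
import numpy as np

# Hypothetical quadratic dispatch: minimize sum(a_i * P_i^2) s.t. sum(P_i) = D.
# The KKT conditions give P_i = lam / (2 a_i) with lam = 2 D / sum(1/a_i).
def solve_dispatch(a, D):
    lam = 2.0 * D / np.sum(1.0 / a)
    return lam / (2.0 * a)

def sensitivity_update(a, P_star, dD):
    # dP_i/dD from differentiating the KKT system: (1/a_i) / sum(1/a_j)
    s = (1.0 / a) / np.sum(1.0 / a)
    return P_star + s * dD

a = np.array([1.0, 2.0, 4.0])
D = 10.0
P_star = solve_dispatch(a, D)
P_pred = sensitivity_update(a, P_star, 0.5)   # post-perturbation optimum, no iteration
P_true = solve_dispatch(a, D + 0.5)           # full re-solve, for comparison
print(np.allclose(P_pred, P_true))            # exact here because the cost is quadratic
```

For non-quadratic costs the sensitivity update is only first-order accurate, which is why the paper restricts it to small load perturbations.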

  6. Responses of mink to auditory stimuli: Prerequisites for applying the ‘cognitive bias’ approach

    DEFF Research Database (Denmark)

    Svendsen, Pernille Maj; Malmkvist, Jens; Halekoh, Ulrich

    2012-01-01

The aim of the study was to determine and validate prerequisites for applying a cognitive (judgement) bias approach to assessing welfare in farmed mink (Neovison vison). We investigated discrimination ability and associative learning ability using auditory cues. The mink (n = 15 females) were… mink only showed habituation in experiment 2. Regardless of the frequency used (2 and 18 kHz), cues predicting the danger situation initially elicited slower responses than those predicting the safe situation, but responses quickly became faster. Using auditory cues as discrimination stimuli for female… farmed mink in a judgement bias approach would thus appear to be feasible. However, several specific issues must be considered in order to successfully adapt a cognitive bias approach to mink, and these are discussed…

  7. Learning Actions Models: Qualitative Approach

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2015-01-01

In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power: they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning…

  8. Model based feature fusion approach

    NARCIS (Netherlands)

    Schwering, P.B.W.

    2001-01-01

    In recent years different sensor data fusion approaches have been analyzed and evaluated in the field of mine detection. In various studies comparisons have been made between different techniques. Although claims can be made for advantages for using certain techniques, until now there has been no si

  9. A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen

    2007-01-01

    This paper presents a decision support system for assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertainties...

  10. APPLYING A JUST-IN-TIME INTEGRATED SUPPLY CHAIN MODEL WITH INVENTORY AND WASTE REDUCTION CONSIDERATIONS

    Directory of Open Access Journals (Sweden)

    Li-Hsing Ho

    2013-01-01

Just-In-Time (JIT) has been playing an important role in supply chain environments. Countless firms have applied JIT in production to gain and maintain a competitive advantage. This study introduces an innovative model which integrates inventory and quality assurance in a JIT supply chain. The approach assumes that manufacturing will produce some defective items and that those products will not influence the buyer's purchase policy. The vendor absorbs all inspection costs. A function for the expected annual total cost is used to minimize the total cost and the nonconforming fraction. Finally, a numerical example further confirms the model.

  11. Global energy modeling - A biophysical approach

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
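The dynamic EROI function with technological learning outlined in this abstract can be sketched with a standard experience-curve assumption: energy input per unit of output falls by a fixed fraction (the learning rate) with each doubling of cumulative production. The functional form and all parameter values below are illustrative, not taken from the paper.

```python
import math

# Illustrative dynamic EROI: input energy per unit output declines along an
# experience curve as cumulative production grows (parameters hypothetical).
def eroi(cumulative, c0=1.0, input0=0.5, learning_rate=0.2, output=1.0):
    b = -math.log2(1.0 - learning_rate)              # learning exponent
    energy_input = input0 * (cumulative / c0) ** (-b)
    return output / energy_input

print(round(eroi(1.0), 2))    # baseline: 1.0 / 0.5 = 2.0
print(round(eroi(2.0), 2))    # one doubling: input falls to 0.4, EROI = 2.5
```

A model of this kind lets EROI rise with deployment of renewables, the kind of effect the paper argues standard economic energy models omit.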

  12. BCS-Hubbard model applied to anisotropic superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Millan, J.S., E-mail: smillan@pampano.unacar.mx [Facultad de Ingenieria, Universidad Autonoma del Carmen, Cd. del Carmen, 24180 Campeche (Mexico); Perez, L.A. [Instituto de Fisica, Universidad Nacional Autonoma de Mexico, A.P. 20-364, 01000, Mexico D.F. (Mexico); Wang, C. [Instituto de Investigaciones en Materiales, Universidad Nacional Autonoma de Mexico, A.P. 70-360, 04510, Mexico D.F. (Mexico)

    2011-11-15

The BCS formalism applied to a Hubbard model, including correlated hoppings, is used to study d-wave superconductors. The theoretical T{sub c} vs. n relationship is compared with experimental data from BiSr{sub 2-x}La{sub x}CuO{sub 6+{delta}} and La{sub 2-x}Sr{sub x}CuO{sub 4}. The results suggest a nontrivial correlation between the hole and doping concentrations. Based on the BCS formalism, we study the critical temperature (T{sub c}) as a function of electron density (n) in a square lattice by means of a generalized Hubbard model, in which first-neighbor ({Delta}t) and second-neighbor ({Delta}t{sub 3}) correlated-hopping interactions are included in addition to the repulsive Coulomb ones. We compare the theoretical T{sub c} vs. n relationship with experimental data for the cuprate superconductors BiSr{sub 2-x}La{sub x}CuO{sub 6+{delta}} (BSCO) and La{sub 2-x}Sr{sub x}CuO{sub 4} (LSCO). The theory agrees very well with the BSCO data despite the complicated association between Sr concentration (x) and hole doping (p). For the LSCO system, it is observed that in the underdoped regime the T{sub c} vs. n behavior can be associated with different systems with small variations of t'. For the overdoped regime, the more complicated dependence n = 1 - p/2 fits better than n = 1 - p. On the other hand, it is proposed that the second-neighbor hopping ratio (t'/t) should be replaced by the effective mean-field hopping ratio t{sub MF}{sup '}/t{sub MF}, which can be very sensitive to small changes of t' due to the doping.

  13. Applied genre analysis: a multi-perspective model

    Directory of Open Access Journals (Sweden)

    Vijay K Bhatia

    2002-04-01

Genre analysis can be viewed from two different perspectives: it may be seen as a reflection of the complex realities of the world of institutionalised communication, or it may be seen as a pedagogically effective and convenient tool for the design of language teaching programmes, often situated within simulated contexts of classroom activities. This paper attempts to understand and resolve the tension between these two seemingly contentious perspectives to answer the question: "Is generic description a reflection of reality, or a convenient fiction invented by applied linguists?". The paper also discusses issues related to the nature and use of linguistic description in a genre-based educational enterprise, claiming that instead of using generic descriptions as models for linguistic reproduction of conventional forms to respond to recurring social contexts, as is often the case in many communication-based curriculum contexts, they can be used as an analytical resource to understand and manipulate complex inter-generic and multicultural realisations of professional discourse. This will enable learners to use generic knowledge to respond to novel social contexts and also to create new forms of discourse to achieve pragmatic success as well as other powerful human agendas.

  14. A POMDP approach to Affective Dialogue Modeling

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.; Poel, Mannes; Nijholt, Antinus; Zwiers, Jakob; Keller, E.; Marinaro, M.; Bratanic, M.

    2007-01-01

    We propose a novel approach to developing a dialogue model that is able to take into account some aspects of the user's affective state and to act appropriately. Our dialogue model uses a Partially Observable Markov Decision Process approach with observations composed of the observed user's
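The POMDP machinery behind this record rests on the standard belief update: the dialogue manager never observes the user's affective state directly, so after acting and receiving a noisy observation it computes b'(s') ∝ O(o | s', a) · Σ_s T(s' | s, a) b(s). A minimal sketch with hypothetical states and probabilities (not the paper's model):

```python
import numpy as np

# Hypothetical affective states {calm, frustrated} and cues {positive, negative}.
T = np.array([[0.9, 0.1],     # P(next state | calm)
              [0.3, 0.7]])    # P(next state | frustrated)
O = np.array([[0.8, 0.2],     # P(cue | calm)
              [0.1, 0.9]])    # P(cue | frustrated)

def belief_update(b, obs):
    predicted = b @ T                  # sum_s T(s'|s) b(s)
    updated = O[:, obs] * predicted    # weight by observation likelihood
    return updated / updated.sum()

b = np.array([0.5, 0.5])
b = belief_update(b, obs=1)            # a negative cue is observed
print(b)                               # belief mass shifts toward "frustrated"
```

In a full POMDP dialogue manager this belief vector, rather than a single guessed state, is what the policy conditions its next action on.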

  15. The chronic diseases modelling approach

    NARCIS (Netherlands)

    Hoogenveen RT; Hollander AEM de; Genugten MLL van; CCM

    1998-01-01

    A mathematical model structure is described that can be used to simulate the changes of the Dutch public health state over time. The model is based on the concept of demographic and epidemiologic processes (events) and is mathematically based on the lifetable method. The population is divided over s

  16. An integrated approach to permeability modeling using micro-models

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)

    2008-10-15

An important factor in predicting the performance of steam assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship impossible include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modelling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modelling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; extended power law formalism (EPLF); and the application of micro-modelling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results. There was close agreement between the model and simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of McMurray oil sands, and the model was in good agreement with the experimental data. 8 refs., 17 figs.

  17. Reynolds stress turbulence model applied to two-phase pressurized thermal shocks in nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Mérigoux, Nicolas, E-mail: nicolas.merigoux@edf.fr; Laviéville, Jérôme; Mimouni, Stéphane; Guingo, Mathieu; Baudry, Cyril

    2016-04-01

Highlights: • NEPTUNE-CFD is used to model two-phase PTS. • The k-ε model produced some satisfactory results but also highlighted some weaknesses. • A more advanced turbulence model has been developed, validated and applied to PTS. • Coupled with LIM, the first results confirmed the increased accuracy of the approach. - Abstract: Nuclear power plants are subjected to a variety of ageing mechanisms and, at the same time, are exposed to potential pressurized thermal shock (PTS), characterized by a rapid cooling of the internal Reactor Pressure Vessel (RPV) surface. In this context, NEPTUNE-CFD is used to model two-phase PTS and assess the structural integrity of the RPV. The first available choice was to use a standard first-order turbulence model (k-ε) to model the high-Reynolds-number flows encountered in Pressurized Water Reactor (PWR) primary circuits. In a first attempt, the k-ε model produced some satisfactory results in terms of condensation rate and temperature field distribution on integral experiments, but it also highlighted some weaknesses in modelling highly anisotropic turbulence. One way to improve the turbulence prediction, and consequently the temperature field distribution, is to opt for a more advanced Reynolds stress turbulence model. After various verification and validation steps on separate-effects cases (co-current air/steam-water stratified flows in rectangular channels, water jet impingements on water pool free surfaces), this Reynolds stress turbulence model (R{sub ij}-ε SSG) has been applied for the first time to thermal free-surface flows under industrial conditions in the COSI and TOPFLOW-PTS experiments. Coupled with the Large Interface Model, the first results confirmed the adequacy and increased accuracy of the approach in an industrial context.

  18. International Conference on Applied Mathematics, Modeling and Computational Science & Annual meeting of the Canadian Applied and Industrial Mathematics

    CERN Document Server

    Bélair, Jacques; Kunze, Herb; Makarov, Roman; Melnik, Roderick; Spiteri, Raymond J

    2016-01-01

    Focusing on five main groups of interdisciplinary problems, this book covers a wide range of topics in mathematical modeling, computational science and applied mathematics. It presents a wealth of new results in the development of modeling theories and methods, advancing diverse areas of applications and promoting interdisciplinary interactions between mathematicians, scientists, engineers and representatives from other disciplines. The book offers a valuable source of methods, ideas, and tools developed for a variety of disciplines, including the natural and social sciences, medicine, engineering, and technology. Original results are presented on both the fundamental and applied level, accompanied by an ample number of real-world problems and examples emphasizing the interdisciplinary nature and universality of mathematical modeling, and providing an excellent outline of today’s challenges. Mathematical modeling, with applied and computational methods and tools, plays a fundamental role in modern science a...

  19. The possibilities of applying a risk-oriented approach to the NPP reliability and safety enhancement problem

    Science.gov (United States)

    Komarov, Yu. A.

    2014-10-01

    An analysis and some generalizations of approaches to risk assessments are presented. Interconnection between different interpretations of the "risk" notion is shown, and the possibility of applying the fuzzy set theory to risk assessments is demonstrated. A generalized formulation of the risk assessment notion is proposed in applying risk-oriented approaches to the problem of enhancing reliability and safety in nuclear power engineering. The solution of problems using the developed risk-oriented approaches aimed at achieving more reliable and safe operation of NPPs is described. The results of studies aimed at determining the need (advisability) to modernize/replace NPP elements and systems are presented together with the results obtained from elaborating the methodical principles of introducing the repair concept based on the equipment technical state. The possibility of reducing the scope of tests and altering the NPP systems maintenance strategy is substantiated using the risk-oriented approach. A probabilistic model for estimating the validity of boric acid concentration measurements is developed.

  1. Dynamic model reduction using data-driven Loewner-framework applied to thermally morphing structures

    Science.gov (United States)

    Phoenix, Austin A.; Tarazaga, Pablo A.

    2017-05-01

The work herein proposes the use of the data-driven Loewner framework for reduced-order modeling as applied to dynamic Finite Element Models (FEM) of thermally morphing structures. The Loewner-based modeling approach is computationally efficient and accurately constructs reduced models using analytical output data from a FEM. This paper details the two-step process proposed in the Loewner approach. First, a random-vibration FEM simulation is used as the input for the development of a Single Input Single Output (SISO) data-based dynamic Loewner state space model. Second, an SVD-based truncation is used on the Loewner state space model, such that the minimal, dynamically representative state space model is achieved. For this second part, varying levels of reduction are generated and compared. The work herein can be extended to model generation using experimental measurements by replacing the FEM output data in the first step and following the same procedure. The method is demonstrated on two thermally morphing structures: a rigidly fixed hexapod in multiple geometric configurations and a low-mass anisotropic morphing boom. This paper details the method and identifies the benefits of the reduced-model methodology.
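The first step of the Loewner framework can be sketched on a hypothetical transfer function (not the paper's FEM data): sample frequency-response values, split them into "left" and "right" interpolation sets, and form the Loewner matrix L[i, j] = (v_i − w_j)/(μ_i − λ_j). Its numerical rank reveals the order of the minimal underlying model, which the SVD-based truncation in the second step exploits.

```python
import numpy as np

# Toy data from a first-order transfer function H(s) = 1/(s+1); the Loewner
# matrix built from such data has rank equal to the minimal model order (1).
H = lambda s: 1.0 / (s + 1.0)
mu = np.array([0.5, 1.5, 2.5, 3.5])     # "left" interpolation points
lam = np.array([1.0, 2.0, 3.0, 4.0])    # "right" interpolation points
v, w = H(mu), H(lam)
L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
sv = np.linalg.svd(L, compute_uv=False)
rank = int(np.sum(sv > 1e-10 * sv[0]))  # numerical rank via SVD threshold
print(rank)                             # 1: a first-order model reproduces the data
```

In the paper's setting the samples come from FEM output rather than a known H(s), and the singular-value decay guides how aggressively the state space model can be truncated.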

  2. The promises and pitfalls of applying computational models to neurological and psychiatric disorders.

    Science.gov (United States)

    Teufel, Christoph; Fletcher, Paul C

    2016-10-01

    Computational models have become an integral part of basic neuroscience and have facilitated some of the major advances in the field. More recently, such models have also been applied to the understanding of disruptions in brain function. In this review, using examples and a simple analogy, we discuss the potential for computational models to inform our understanding of brain function and dysfunction. We argue that they may provide, in unprecedented detail, an understanding of the neurobiological and mental basis of brain disorders and that such insights will be key to progress in diagnosis and treatment. However, there are also potential problems attending this approach. We highlight these and identify simple principles that should always govern the use of computational models in clinical neuroscience, noting especially the importance of a clear specification of a model's purpose and of the mapping between mathematical concepts and reality.

  3. Mixture experiment techniques for reducing the number of components applied for modeling waste glass sodium release

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.; Redgate, T. [Pacific Northwest National Lab., Richland, WA (United States). Statistics Group

    1997-12-01

    Statistical mixture experiment techniques were applied to a waste glass data set to investigate the effects of the glass components on Product Consistency Test (PCT) sodium release (NR) and to develop a model for PCT NR as a function of the component proportions. The mixture experiment techniques indicate that the waste glass system can be reduced from nine to four components for purposes of modeling PCT NR. Empirical mixture models containing four first-order terms and one or two second-order terms fit the data quite well, and can be used to predict the NR of any glass composition in the model domain. The mixture experiment techniques produce a better model in less time than required by another approach.
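A mixture model of the kind described (first-order terms plus a second-order cross term, no intercept because the proportions sum to one) can be sketched on synthetic data; the data, components, and coefficients below are hypothetical, not the PCT sodium-release data.

```python
import numpy as np

# Scheffé-type mixture model on synthetic data: four component proportions
# summing to 1, four first-order terms plus one x1*x2 cross term, no intercept.
rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(4), size=30)           # 30 "glasses", proportions sum to 1
b_true = np.array([2.0, -1.0, 0.5, 3.0, 4.0])    # last entry: x1*x2 cross term
design = np.column_stack([X, X[:, 0] * X[:, 1]])
y = design @ b_true                              # noiseless synthetic response
b_hat, *_ = np.linalg.lstsq(design, y, rcond=None)
print(np.allclose(b_hat, b_true))                # exact recovery on noiseless data
```

With real PCT data the fit is of course not exact; the paper's point is that a handful of such terms over four components predicts sodium release across the model domain.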

  4. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we…

  5. Bayesian GGE biplot models applied to maize multi-environments trials.

    Science.gov (United States)

    de Oliveira, L A; da Silva, C P; Nuvunga, J J; da Silva, A Q; Balestre, M

    2016-06-17

    The additive main effects and multiplicative interaction (AMMI) and the genotype main effects and genotype x environment interaction (GGE) models stand out among the linear-bilinear models used in genotype x environment interaction studies. Despite the advantages of their use to describe genotype x environment (AMMI) or genotype and genotype x environment (GGE) interactions, these methods have known limitations that are inherent to fixed effects models, including difficulty in treating variance heterogeneity and missing data. Traditional biplots include no measure of uncertainty regarding the principal components. The present study aimed to apply the Bayesian approach to GGE biplot models and assess the implications for selecting stable and adapted genotypes. Our results demonstrated that the Bayesian approach applied to GGE models with non-informative priors was consistent with the traditional GGE biplot analysis, although the credible region incorporated into the biplot enabled distinguishing, based on probability, the performance of genotypes, and their relationships with the environments in the biplot. Those regions also enabled the identification of groups of genotypes and environments with similar effects in terms of adaptability and stability. The relative position of genotypes and environments in biplots is highly affected by the experimental accuracy. Thus, incorporation of uncertainty in biplots is a key tool for breeders to make decisions regarding stability selection and adaptability and the definition of mega-environments.
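The deterministic core that the Bayesian GGE approach builds on can be sketched in a few lines: column-centre a genotype × environment yield table (removing environment main effects) and take the SVD of the residual, whose leading components give the biplot scores. The table below is a hypothetical toy example, not the maize trial data.

```python
import numpy as np

# GGE sketch on a hypothetical 4-genotype x 3-environment yield table. This
# toy table has rank 2 after centring, so two components capture everything.
Y = np.array([[5.0, 6.0, 5.0],
              [4.0, 7.0, 6.0],
              [6.0, 5.0, 7.0],
              [3.0, 8.0, 4.0]])
G = Y - Y.mean(axis=0)                 # genotype + genotype x environment effects
U, s, Vt = np.linalg.svd(G, full_matrices=False)
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
gen_scores = U[:, :2] * s[:2]          # genotype biplot coordinates
env_scores = Vt[:2].T                  # environment biplot coordinates
print(round(explained, 4))             # 1.0 for this rank-2 toy table
```

The Bayesian version in the paper replaces this single decomposition with a posterior over the scores, which is what yields the credible regions drawn around each point in the biplot.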

  6. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    Energy Technology Data Exchange (ETDEWEB)

    Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica

    2015-07-01

In this work, we present a solution of the neutron point kinetics equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, delayed neutron precursors and temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the next intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness problem of the equations. In this way, the time step size of the Polynomial Approach Method is varied and the precision and computational time are analyzed. Moreover, we compare different orders of the power-series approximation (linear, quadratic and cubic). The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
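The power-series stepping idea can be sketched for the simplest case: one delayed group, constant reactivity, and no temperature feedback (so the system y' = Ay is linear and the Taylor coefficients obey y_{k+1} = A y_k / (k+1)). All parameter values are illustrative, not the paper's; the point is that short series steps with analytic continuation handle the stiff prompt/delayed timescale split.

```python
import numpy as np

# Point kinetics, one delayed group, constant positive reactivity (illustrative
# parameters): y = [n, c], dn/dt = ((rho-beta)/Lam) n + lam c,
#                           dc/dt = (beta/Lam) n - lam c.
rho, beta, Lam, lam = 0.001, 0.0065, 1e-4, 0.08
A = np.array([[(rho - beta) / Lam, lam],
              [beta / Lam,        -lam]])

def taylor_step(y, h, order=20):
    # Taylor coefficients of the linear system: y_{k+1} = A y_k / (k+1)
    coeff, out = y.copy(), y.copy()
    for k in range(order):
        coeff = A @ coeff / (k + 1)
        out = out + coeff * h ** (k + 1)
    return out

y = np.array([1.0, beta / (Lam * lam)])   # start at precursor equilibrium
h, T = 1e-3, 0.1
for _ in range(int(T / h)):               # analytic continuation interval by interval
    y = taylor_step(y, h)
print(y[0] > 1.0)                         # positive reactivity: neutron density rises
```

The feedback case in the paper couples a temperature equation into the expansion, making the coefficients nonlinear recurrences rather than a plain matrix product.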

  7. An optimization approach to kinetic model reduction for combustion chemistry

    CERN Document Server

    Lebiedz, Dirk

    2013-01-01

Model reduction methods are relevant when the computation time of a full convection-diffusion-reaction simulation based on detailed chemical reaction mechanisms is too large. In this article, we review a model reduction approach based on optimization of trajectories and show its applicability to realistic combustion models. Like most model reduction methods, it identifies points on a slow invariant manifold based on time-scale separation in the dynamics of the reaction system. The numerical approximation of points on the manifold is achieved by solving a semi-infinite optimization problem, where the dynamics enter the problem as constraints. The proof of existence of a solution for an arbitrarily chosen dimension of the reduced model (slow manifold) is extended to the case of realistic combustion models including thermochemistry by considering the properties of proper maps. The model reduction approach is finally applied to three models based on realistic reaction mechanisms: 1. ozone decomposition as a small t…

  8. A multicriteria decision making approach applied to improving maintenance policies in healthcare organizations.

    Science.gov (United States)

    Carnero, María Carmen; Gómez, Andrés

    2016-04-23

Healthcare organizations have far greater maintenance needs for their medical equipment than other organizations, as much of it is used directly with patients. However, the literature on asset management in healthcare organizations is very limited. The aim of this research is to provide a more rational application of maintenance policies, leading to an increase in quality of care. This article describes a multicriteria decision-making approach which integrates Markov chains with the multicriteria Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH) to facilitate the best choice of combination of maintenance policies using the judgements of a multi-disciplinary decision group. The proposed approach takes into account the level of acceptance that a given alternative would have among professionals. It also takes into account criteria related to cost, quality of care and impact of care cover. This multicriteria approach is applied to four dialysis subsystems: patients infected with hepatitis C, infected with hepatitis B, acute and chronic; in all cases, the maintenance strategy obtained consists of applying corrective and preventive maintenance plus two reserve machines. The added value in decision-making practices from this research comes from: (i) integrating the use of Markov chains to obtain the alternatives to be assessed by a multicriteria methodology; (ii) proposing the use of MACBETH to make rational decisions on asset management in healthcare organizations; (iii) applying the multicriteria approach to select a set or combination of maintenance policies in four dialysis subsystems of a healthcare organization. In the multicriteria decision-making approach proposed, economic criteria have been used, related to the quality of care desired for patients (availability), and the acceptance that each alternative would have considering the maintenance and healthcare resources which exist in the organization, with the inclusion of a
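The Markov-chain ingredient of such an approach can be sketched with a hypothetical two-state machine (working/failed): the stationary distribution of the chain gives the long-run availability that then enters the multicriteria scoring as one criterion. The transition probabilities below are invented for illustration.

```python
import numpy as np

# Discrete-time Markov chain over {working, failed}; availability is the
# stationary probability of the "working" state (left eigenvector of P at 1).
def availability(p_fail, p_repair):
    P = np.array([[1 - p_fail, p_fail],
                  [p_repair,   1 - p_repair]])
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()
    return pi[0]

corrective = availability(0.10, 0.50)   # repair only after failure
preventive = availability(0.04, 0.50)   # preventive maintenance lowers p_fail
print(preventive > corrective)          # the policy comparison fed into MACBETH
```

For this two-state chain the closed form is p_repair / (p_fail + p_repair), so the eigenvector computation can be checked by hand; real equipment models use more states (e.g. degraded, under preventive maintenance).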

  9. Quantitative Systems Pharmacology Approaches Applied to Microphysiological Systems (MPS): Data Interpretation and Multi-MPS Integration.

    Science.gov (United States)

    Yu, J; Cilfone, N A; Large, E M; Sarkar, U; Wishnok, J S; Tannenbaum, S R; Hughes, D J; Lauffenburger, D A; Griffith, L G; Stokes, C L; Cirit, M

    2015-10-01

Our goal in developing Microphysiological Systems (MPS) technology is to provide an improved approach for more predictive preclinical drug discovery via a highly integrated experimental/computational paradigm. Success will require quantitative characterization of MPSs and mechanistic analysis of experimental findings sufficient to translate resulting insights from in vitro to in vivo. We describe herein a systems pharmacology approach to MPS development and utilization that incorporates more mechanistic detail than traditional pharmacokinetic/pharmacodynamic (PK/PD) models. A series of studies illustrates diverse facets of our approach. First, we demonstrate two case studies focused on a single MPS, the liver/immune MPS: a PK data analysis and an inflammation response. Building on the single-MPS modeling, a theoretical investigation of a four-MPS interactome then provides a quantitative way to consider several pharmacological concepts such as absorption, distribution, metabolism, and excretion in the design of multi-MPS interactome operation and experiments.

  10. Creating patient value in glaucoma care : applying quality costing and care delivery value chain approaches

    NARCIS (Netherlands)

    D.F. de Korne (Dirk); J.C.A. Sol (Kees); T. Custers (Thomas); E. van Sprundel (Esther); B.M. van Ineveld (Martin); H.G. Lemij (Hans); N.S. Klazinga (Niek)

    2009-01-01

    textabstractPurpose: The purpose of this paper is to explore in a specific hospital care process the applicability in practice of the theories of quality costing and value chains. Design/methodology/approach: In a retrospective case study an in-depth evaluation of the use of a quality cost model (QC

  11. An Approach Based on Social Network Analysis Applied to a Collaborative Learning Experience

    Science.gov (United States)

    Claros, Iván; Cobos, Ruth; Collazos, César A.

    2016-01-01

    The Social Network Analysis (SNA) techniques allow modelling and analysing the interaction among individuals based on their attributes and relationships. This approach has been used by several researchers in order to measure the social processes in collaborative learning experiences. But oftentimes such measures were calculated at the final state…
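One of the simplest SNA measures mentioned in this line of work is degree centrality over the learners' interaction network. The interaction log and names below are hypothetical, purely to show the computation.

```python
import numpy as np

# Build an adjacency matrix from a hypothetical message log among four learners
# and compute normalised degree centrality (distinct partners / (n - 1)).
names = ["ana", "ben", "carla", "dan"]
messages = [("ana", "ben"), ("ana", "carla"), ("ben", "carla"),
            ("ana", "ben"), ("dan", "ana")]
idx = {n: i for i, n in enumerate(names)}
A = np.zeros((4, 4))
for src, dst in messages:
    A[idx[src], idx[dst]] += 1
ties = ((A + A.T) > 0).astype(float)          # undirected: any exchange counts
centrality = ties.sum(axis=1) / (len(names) - 1)
print(centrality[idx["ana"]])                  # 1.0: ana interacted with everyone
```

Richer measures (betweenness, density, cohesion over time) follow the same pattern: derive a graph from the collaboration log, then compute graph statistics per participant or per group.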

  13. Bayesian flux balance analysis applied to a skeletal muscle metabolic model.

    Science.gov (United States)

    Heino, Jenni; Tunyan, Knarik; Calvetti, Daniela; Somersalo, Erkki

    2007-09-01

    In this article, the steady state condition for the multi-compartment models for cellular metabolism is considered. The problem is to estimate the reaction and transport fluxes, as well as the concentrations in venous blood when the stoichiometry and bound constraints for the fluxes and the concentrations are given. The problem has been addressed previously by a number of authors, and optimization-based approaches as well as extreme pathway analysis have been proposed. These approaches are briefly discussed here. The main emphasis of this work is a Bayesian statistical approach to the flux balance analysis (FBA). We show how the bound constraints and optimality conditions such as maximizing the oxidative phosphorylation flux can be incorporated into the model in the Bayesian framework by proper construction of the prior densities. We propose an effective Markov chain Monte Carlo (MCMC) scheme to explore the posterior densities, and compare the results with those obtained via the previously studied linear programming (LP) approach. The proposed methodology, which is applied here to a two-compartment model for skeletal muscle metabolism, can be extended to more complex models.
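
    A toy sketch of the Bayesian idea described above, not the authors' model: a single free flux is sampled by Metropolis MCMC under bound constraints, with an exponential prior standing in for an optimality condition such as maximizing a flux. All numbers are assumptions for illustration.

```python
import math
import random

def sample_flux_posterior(beta=0.8, lo=0.0, hi=10.0, n=20000, seed=1):
    """Metropolis sampling of one free flux v with prior p(v) ~ exp(beta * v),
    truncated to the bound constraints [lo, hi]. The exponential prior plays
    the role of an optimality condition favoring a large flux."""
    rng = random.Random(seed)
    v = 0.5 * (lo + hi)
    samples = []
    for _ in range(n):
        prop = v + rng.gauss(0.0, 1.0)
        # Reject proposals outside the bounds; otherwise standard Metropolis.
        if lo <= prop <= hi and rng.random() < math.exp(min(0.0, beta * (prop - v))):
            v = prop
        samples.append(v)
    return samples

draws = sample_flux_posterior()
mean_v = sum(draws) / len(draws)  # pushed toward the upper bound by the prior
```

    A real FBA posterior would be sampled over the full flux vector subject to the stoichiometric equalities, but the mechanics are the same.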

  14. Challenges and opportunities for integrating lake ecosystem modelling approaches

    Science.gov (United States)

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  15. Learning About Dying and Living: An Applied Approach to End-of-Life Communication.

    Science.gov (United States)

    Pagano, Michael P

    2016-08-01

    The purpose of this article is to expand on prior research in end-of-life communication and death and dying communication apprehension, by developing a unique course that utilizes a hospice setting and an applied, service-learning approach. Therefore, this essay describes and discusses both students' and my experiences over a 7-year period from 2008 through 2014. The courses taught during this time frame provided an opportunity to analyze students' responses, experiences, and discoveries across semesters/years and cocultures. This unique, 3-credit, 14-week, service-learning, end-of-life communication course was developed to provide an opportunity for students to learn the theories related to this field of study and to apply that knowledge through volunteer experiences via interactions with dying patients and their families. The 7 years of author's notes, plus the 91 students' electronically submitted three reflection essays each (273 total documents) across four courses/years, served as the data for this study. According to the students, verbally in class discussions and in numerous writing assignments, this course helped lower their death and dying communication apprehension and increased their willingness to interact with hospice patients and their families. Furthermore, the students' final research papers clearly demonstrated how utilizing a service-learning approach allowed them to apply classroom learnings and interactions with dying patients and their families at the hospice, to their analyses of end-of-life communication theories and behaviors. The results of these classes suggest that other, difficult topic courses (e.g., domestic violence, addiction, etc.) might benefit from a similar pedagogical approach.

  16. Bayesian model selection applied to artificial neural networks used for water resources modeling

    Science.gov (United States)

    Kingston, Greer B.; Maier, Holger R.; Lambert, Martin F.

    2008-04-01

    Artificial neural networks (ANNs) have proven to be extremely valuable tools in the field of water resources engineering. However, one of the most difficult tasks in developing an ANN is determining the optimum level of complexity required to model a given problem, as there is no formal systematic model selection method. This paper presents a Bayesian model selection (BMS) method for ANNs that provides an objective approach for comparing models of varying complexity in order to select the most appropriate ANN structure. The approach uses Markov Chain Monte Carlo posterior simulations to estimate the evidence in favor of competing models and, in this study, three known methods for doing this are compared in terms of their suitability for being incorporated into the proposed BMS framework for ANNs. However, it is acknowledged that it can be particularly difficult to accurately estimate the evidence of ANN models. Therefore, the proposed BMS approach for ANNs incorporates a further check of the evidence results by inspecting the marginal posterior distributions of the hidden-to-output layer weights, which unambiguously indicate any redundancies in the hidden layer nodes. The fact that this check is available is one of the greatest advantages of the proposed approach over conventional model selection methods, which do not provide such a test and instead rely on the modeler's subjective choice of selection criterion. The advantages of a total Bayesian approach to ANN development, including training and model selection, are demonstrated on two synthetic and one real world water resources case study.
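
    The evidence-estimation step can be approximated cheaply. The sketch below uses BIC as a rough asymptotic stand-in for the MCMC-based evidence the paper computes, comparing a constant-mean model against a straight line on invented data.

```python
import math

def fit_line(xs, ys):
    """Closed-form least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def bic(rss, n, k):
    """Bayesian Information Criterion: n*ln(RSS/n) + k*ln(n). Lower is better."""
    return n * math.log(rss / n) + k * math.log(n)

# Synthetic data from a linear trend with small fixed "noise".
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0.1, 2.0, 3.9, 6.1, 8.0, 9.9, 12.1, 14.0]

mean_y = sum(ys) / len(ys)
rss_const = sum((y - mean_y) ** 2 for y in ys)                  # k = 1 parameter
a, b = fit_line(xs, ys)
rss_line = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))  # k = 2 parameters
prefer_line = bic(rss_line, len(ys), 2) < bic(rss_const, len(ys), 1)
```

    For ANNs the model-complexity penalty concerns hidden nodes rather than polynomial terms, which is why the paper adds the check on hidden-to-output weight posteriors.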

  17. Applying a learning design methodology in the flipped classroom approach – empowering teachers to reflect

    DEFF Research Database (Denmark)

    Triantafyllou, Evangelia; Kofoed, Lise; Purwins, Hendrik

    2016-01-01

    One of the recent developments in teaching that heavily relies on current technology is the “flipped classroom” approach. In a flipped classroom the traditional lecture and homework sessions are inverted. Students are provided with online material in order to gain necessary knowledge before class, while class time is devoted to clarifications and application of this knowledge. The hypothesis is that there could be deep and creative discussions when teacher and students physically meet. This paper discusses how the learning design methodology can be applied to represent, share and guide educators through flipped classroom designs. In order to discuss the opportunities arising by this approach, the different components of the Learning Design – Conceptual Map (LD-CM) are presented and examined in the context of the flipped classroom. It is shown that viewing the flipped classroom through the lens…

  18. Applying nonlinear MODM model to supply chain management with quantity discount policy under complex fuzzy environment

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2014-06-01

    Full Text Available Purpose: The aim of this paper is to deal with supply chain management (SCM) with a quantity discount policy under a complex fuzzy environment, characterized by bi-fuzzy variables. Taking into account the strategy and the process of decision making, a bi-fuzzy nonlinear multiple objective decision making (MODM) model is presented to solve the proposed problem. Design/methodology/approach: The bi-fuzzy variables in the MODM model are transformed into trapezoidal fuzzy variables according to the DMs' degrees of optimism ?1 and ?2, and subsequently de-fuzzified by the expected value index. To solve the complex nonlinear model, a multi-objective adaptive particle swarm optimization algorithm (MO-APSO) is designed as the solution method. Findings: The proposed model and algorithm are applied to a typical SCM example to illustrate their effectiveness. Based on a sensitivity analysis of the results, the bi-fuzzy nonlinear MODM SCM model is shown to be sensitive to the possibility level ?1. Practical implications: The study focuses on SCM under a complex fuzzy environment, which has great practical significance; the bi-fuzzy MODM model and MO-APSO can therefore be further applied to SCM problems with a quantity discount policy. Originality/value: The bi-fuzzy variable is employed in the nonlinear MODM model of SCM to characterize the hybrid uncertain environment, and this work is original. In addition, a hybrid crisp approach is proposed to transform the model into an equivalent crisp one by means of the DMs' degrees of optimism and the expected value index. Since the MODM model considers both the bi-fuzzy environment and the quantity discount policy, the paper has great practical significance.
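
    The de-fuzzification step can be sketched with the standard credibility-based expected-value operator for a trapezoidal fuzzy variable; whether this is the exact index used in the paper is an assumption, and the cost figures below are invented.

```python
def trapezoidal_expected_value(a, b, c, d):
    """Expected value of a trapezoidal fuzzy variable (a, b, c, d) with
    a <= b <= c <= d, under the credibility-based expected value operator:
    E = (a + b + c + d) / 4."""
    assert a <= b <= c <= d, "parameters must be ordered"
    return (a + b + c + d) / 4.0

# Invented fuzzy unit cost (in dollars) after a quantity discount is applied.
crisp_cost = trapezoidal_expected_value(8.0, 9.0, 10.0, 11.0)  # -> 9.5
```

    Replacing each fuzzy coefficient by such a crisp expected value is what turns the bi-fuzzy MODM model into a tractable deterministic program.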

  19. Changes in speed distribution: Applying aggregated safety effect models to individual vehicle speeds.

    Science.gov (United States)

    Vadeby, Anna; Forsman, Åsa

    2017-06-01

    This study investigated the effect of applying two aggregated models (the Power model and the Exponential model) to individual vehicle speeds instead of mean speeds. This is of particular interest when the measure introduced affects different parts of the speed distribution differently. The aim was to examine how the estimated overall risk was affected when assuming the models are valid on an individual vehicle level. Speed data from two applications of speed measurements were used in the study: an evaluation of movable speed cameras and a national evaluation of new speed limits in Sweden. The results showed that when applied on individual vehicle speed level compared with aggregated level, there was essentially no difference between these for the Power model in the case of injury accidents. However, for fatalities the difference was greater, especially for roads with new cameras where those driving fastest reduced their speed the most. For the case with new speed limits, the individual approach estimated a somewhat smaller effect, reflecting that changes in the 15th percentile (P15) were somewhat larger than changes in P85 in this case. For the Exponential model there was also a clear, although small, difference between applying the model to mean speed changes and individual vehicle speed changes when speed cameras were used. This applied both for injury accidents and fatalities. There were also larger effects for the Exponential model than for the Power model, especially for injury accidents. In conclusion, applying the Power or Exponential model to individual vehicle speeds is an alternative that provides reasonable results in relation to the original Power and Exponential models, but more research is needed to clarify the shape of the individual risk curve. It is not surprising that the impact on severe traffic crashes was larger in situations where those driving fastest reduced their speed the most. Further investigations on use of the Power and/or the
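
    The contrast the study draws can be reproduced with the classic Power model; the exponent 4 for fatal accidents is the commonly cited value, and the speeds below are invented so that the fastest drivers reduce speed the most.

```python
def power_model_ratio(v_before, v_after, exponent):
    """Nilsson-style Power model: relative change in accident count when
    speed changes from v_before to v_after."""
    return (v_after / v_before) ** exponent

# Invented speeds (km/h): the fastest drivers reduce their speed the most.
before = [70, 80, 90, 100, 110]
after = [69, 78, 86, 92, 98]
EXP_FATAL = 4  # classic Power model exponent for fatal accidents (assumption)

# Aggregated: apply the model once, to the change in mean speed.
agg = power_model_ratio(sum(before) / 5, sum(after) / 5, EXP_FATAL)
# Individual: average the per-vehicle risk ratios.
ind = sum(power_model_ratio(b, a, EXP_FATAL) for b, a in zip(before, after)) / 5
```

    With heterogeneous speed changes the two application levels disagree noticeably, which is exactly the situation the study highlights for roads with new cameras.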

  20. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...

  1. A multidating approach applied to historical slackwater flood deposits of the Gardon River, SE France

    Science.gov (United States)

    Dezileau, L.; Terrier, B.; Berger, J. F.; Blanchemanche, P.; Latapie, A.; Freydier, R.; Bremond, L.; Paquier, A.; Lang, M.; Delgado, J. L.

    2014-06-01

    charcoal and seeds is explained by the nature of the dated material (permanent wood vs. annual production and resistance to degradation process). Finally, we showed in this study that although the most common dating technique used in paleoflood hydrology is radiocarbon dating, usually on charcoal preserved within slackwater flood sediments, this method did not permit us to define a coherent age model. Only the combined use of lead-210, caesium-137, and geochemical analysis of mining-contaminated sediments with the instrumental flood record can be applied to discriminate and date the recent slackwater deposits of the terrace GE and cave GG.

  2. A new glacier model resolving ice dynamics applied to the Alps

    Science.gov (United States)

    Maussion, Fabien; Marzeion, Ben

    2016-04-01

    Most regional and global glacier models rely on empirical scaling laws to account for glacier area and volume change with time. These scaling methods are computationally cheap and statistically robust when applied to many glaciers, but their accuracy decreases considerably at the glacier or catchment scale. The nearest alternative in terms of complexity, glacier flowline modelling, requires significantly more information about the glacier geometry. Here we present a new open source glacier model applicable at regional to global scale implementing i) the determination of glacier centerlines, ii) the inversion of glacier bed topography, and iii) a multi-branch flowline model handling glacier tributaries. Using the HISTALP dataset as climatological input we apply the model to the Alps from 1800 to the present and provide new estimates of present-day and past glacier volume. The relatively large number of independent data available for validation in this region allows a critical discussion of the added value of our new approach. In particular, we will focus our discussion on two contradictory aspects inherent to any geoscientific model development: while our model clearly opens wide-ranging possibilities to better resolve glacier processes, this new playground is associated with an increase in complexity, the number of calibration parameters, and…uncertainty?
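
    The empirical scaling laws this model aims to improve upon can be sketched as volume-area scaling. The constants below are commonly cited values for mountain glaciers (after Bahr et al.), not values from this study.

```python
def glacier_volume_km3(area_km2, c=0.034, gamma=1.375):
    """Volume-area scaling V = c * A**gamma, the kind of empirical scaling law
    that flowline models aim to replace. c (km^(3 - 2*gamma)) and gamma are
    commonly cited values for mountain glaciers, assumed here for illustration."""
    return c * area_km2 ** gamma

v_small = glacier_volume_km3(1.0)    # ~0.034 km^3 for a 1 km^2 glacier
v_large = glacier_volume_km3(100.0)  # 100x the area gives far more than 100x the volume
```

    Because a single (A, V) pair carries no information about bed topography or tributaries, such scaling is robust over many glaciers but weak for any individual one, which motivates the centerline and bed-inversion steps above.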

  3. Pilot-testing an applied competency-based approach to health human resources planning.

    Science.gov (United States)

    Tomblin Murphy, Gail; MacKenzie, Adrian; Alder, Rob; Langley, Joanne; Hickey, Marjorie; Cook, Amanda

    2013-10-01

    A competency-based approach to health human resources (HHR) planning is one that explicitly considers the spectrum of knowledge, skills and judgement (competencies) required for the health workforce based on the health needs of the relevant population in some specific circumstances. Such an approach is of particular benefit to planners challenged to make optimal use of limited HHR as it allows them to move beyond simply estimating numbers of certain professionals required and plan instead according to the unique mix of competencies available from the existing health workforce. This kind of flexibility is particularly valuable in contexts where healthcare providers are in short supply generally (e.g. in many developing countries) or temporarily due to a surge in need (e.g. a pandemic or other disease outbreak). A pilot application of this approach using the context of an influenza pandemic in one health district of Nova Scotia, Canada, is described, and key competency gaps identified. The approach is also being applied using other conditions in other Canadian jurisdictions and in Zambia.

  4. Szekeres models: a covariant approach

    CERN Document Server

    Apostolopoulos, Pantelis S

    2016-01-01

    We exploit the 1+1+2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an \emph{average scale length} can be defined \emph{covariantly} which satisfies a 2d equation of motion driven by the \emph{effective gravitational mass} (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field $E_{ab}$. In addition, the notions of the Apparent and Absolute Apparent Horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used to express the Sachs optical equations in a covariant form and to analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.

  5. Matrix Model Approach to Cosmology

    CERN Document Server

    Chaney, A; Stern, A

    2015-01-01

    We perform a systematic search for rotationally invariant cosmological solutions to matrix models, or more specifically the bosonic sector of Lorentzian IKKT-type matrix models, in dimensions $d$ less than ten, specifically $d=3$ and $d=5$. After taking a continuum (or commutative) limit they yield $d-1$ dimensional space-time surfaces, with an attached Poisson structure, which can be associated with closed, open or static cosmologies. For $d=3$, we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a matrix resolution of cosmological singularities. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the $d=3$ soluti...

  6. Cosmological Model A New Approach

    Directory of Open Access Journals (Sweden)

    Francisco Martínez Flores

    2015-08-01

    Full Text Available It is shown, making use of Special Relativity and applying the Doppler Effect, that the motion of galaxies is not radial but transversal. Linking relativistic energy with the Doppler Effect, we may explain that the Cosmic Background Radiation is produced by a sufficiently large number of distant galaxies located in accordance with the requirement of homogeneity and isotropy of the Universe. The existence of dark matter can be understood by distinguishing between a real or inertial mass, responsible for Newtonian Mechanics and Gravitation, and a virtual, electromagnetic relativistic mass, which is acceptable to Quantum Theory. The so-called black holes and the cosmic scale factor do not follow from a correct interpretation of the Schwarzschild and Robertson-Walker metrics respectively, which, together with the inability to quantize Gravitation, introduces more than reasonable doubts about the reliability of the General Theory. The Universe does not expand but is in a steady state, which can only be explained in the context of Quantum Theory.

  7. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Directory of Open Access Journals (Sweden)

    Nadia Said

    Full Text Available Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
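
    A minimal sketch of derivative-free parameter optimization of the kind applied to the cognitive model above; the objective function and its optimum are invented stand-ins for the mismatch between simulated and observed task performance.

```python
def simulated_error(decay):
    """Toy stand-in for a cognitive-model objective: mismatch between simulated
    and observed performance as a function of a memory-decay parameter.
    The quadratic shape and the optimum at 0.5 are invented for illustration."""
    return (decay - 0.5) ** 2 + 0.1

def golden_section_min(f, lo, hi, tol=1e-6):
    """Derivative-free golden-section search, usable when the objective comes
    from running a simulation rather than a closed-form expression."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

best_decay = golden_section_min(simulated_error, 0.0, 1.0)  # converges near 0.5
```

    For stochastic, expensive simulations like an ACT-R run, each objective evaluation would average over repeated simulations, which is where the paper's mathematical reformulations pay off.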

  8. SP12 Biological Pathway-Centric Approach to Integrative Analysis of Array Data as Applied to Mefloquine Neurotoxicity

    Science.gov (United States)

    Jenkins, J.

    2007-01-01

    Expression profiling of whole genomes, and modern high-throughput proteomics, has created a revolution in the study of disease states. Approaches for gene expression analysis (time series analysis and clustering) have been applied to functional genomics related to cancer research, and have yielded major successes in the pursuit of gene expression signatures. However, these analysis methods are primarily designed to identify correlative or causal relationships between entities, but do not consider the data in the proper biological context of a “biological pathway” model. Pathway models form a cornerstone of systems biology. They provide a framework for (1) systematic interrogation of biochemical interactions, (2) management of the collective knowledge pertaining to cellular components, and (3) discovery of emergent properties of different pathway configurations. CFD Research Corporation has developed advanced techniques to interpret microarray data in the context of known biological pathways. We have applied this integrative biological pathway-centered approach to the specific problem of identifying a genetic cause for individuals predisposed to mefloquine neurotoxicity. Mefloquine (Lariam) is highly effective against drug-resistant malaria. However, adverse neurological effects (ataxia, mood changes) have been observed in human sub-populations. Microarray experiments were used to quantify the transcriptional response of cells exposed to mefloquine. Canonical pathway models containing the differentially expressed genes were automatically retrieved from the KEGG database, using recently developed software. The canonical pathway models were automatically concatenated together to form the final pathway model. The resultant pathway model was interrogated using a novel signaling control flux (SCF) algorithm that combines Boolean pseudodynamics (BPD) to relax the cumbersome steady-state assumptions of SCF. The SCF-BPD algorithm was used to identify and prioritize

  9. Applying Discourse Analysis in ELT: a Five Cs Model

    Institute of Scientific and Technical Information of China (English)

    肖巧慧

    2009-01-01

    Based on a discussion of definitions of discourse analysis, discourse is regarded as layers consisting of five elements: cohesion, coherence, culture, critique and context. Moreover, we focus on applying DA in ELT.

  10. A new approach to adaptive data models

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2016-12-01

    Full Text Available Over the last decade, there has been a substantial increase in the volume and complexity of data we collect, store and process. We are now aware of the increasing demand for real time data processing in every continuous business process that evolves within the organization. We witness a shift from a traditional static data approach to a more adaptive model approach. This article aims to extend understanding in the field of data models used in information systems by examining how an adaptive data model approach for managing business processes can help organizations accommodate on the fly and build dynamic capabilities to react in a dynamic environment.

  11. Formation Feedback Applied to Behavior-Based Approach to Formation Keeping

    Institute of Scientific and Technical Information of China (English)

    苏治宝; 陆际联

    2004-01-01

    Approaches to the study of formation keeping for multiple mobile robots are analyzed and a behavior-based robot model is built in this paper. A coordination architecture, similar to an infantry squad organization, is presented and used to enable multiple mobile robots to keep formations. Simulations verify the validity of the approach to formation keeping, which combines the behavior-based method with formation feedback. The effects of formation feedback on the performance of the system are analyzed.

  12. Evaluation model applied to TRANSPETRO's Marine Terminals Standardization Program

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Maria Fatima Ludovico de; Mueller, Gabriela [Pontificia Universidade Catolica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, RJ (Brazil). Instituto Tecnologico; Garcia, Luciano Maldonado [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper describes an innovative evaluation model applied to TRANSPETRO's 'Marine Terminals Standardization Program', based on current approaches to program evaluation and organizational learning. Since the program was launched in 2004, the need has become evident for an evaluation model able to track its implementation progress, to measure the degree of standards compliance, and to gauge its potential economic, social and environmental impacts. Within a vision of safe and environmentally responsible operations of marine terminals, this evaluation model was jointly designed by TRANSPETRO and PUC-Rio to promote continuous improvement and learning in operational practices and in the standardization process itself. TRANSPETRO believes that standardization supports its services and management innovation capability by creating objective and internationally recognized parameters, targets and metrology for its business activities. The conceptual model and application guidelines for this important tool are presented in this paper, as well as the next steps towards its implementation. (author)

  13. Modeling software behavior a craftsman's approach

    CERN Document Server

    Jorgensen, Paul C

    2009-01-01

    A common problem with most texts on requirements specifications is that they emphasize structural models to the near exclusion of behavioral models-focusing on what the software is, rather than what it does. If they do cover behavioral models, the coverage is brief and usually focused on a single model. Modeling Software Behavior: A Craftsman's Approach provides detailed treatment of various models of software behavior that support early analysis, comprehension, and model-based testing. Based on the popular and continually evolving course on requirements specification models taught by the auth

  14. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Full Text Available Abstract Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
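
    The discrete network dynamics mentioned above can be illustrated with a synchronous Boolean network, one of the simplest dynamic model classes in the taxonomy. The three-gene circuit below is invented.

```python
def step(state, rules):
    """One synchronous update of a Boolean gene network: every gene's next
    value is a Boolean function of the current state."""
    return {gene: rule(state) for gene, rule in rules.items()}

# Invented 3-gene toy circuit: A activates B, B activates C, C represses A.
rules = {
    "A": lambda s: not s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: s["B"],
}

state = {"A": True, "B": False, "C": False}
trajectory = [state]
for _ in range(6):
    state = step(state, rules)
    trajectory.append(state)
# This repressilator-like circuit cycles with period 6 under synchronous update.
```

    Continuous (ODE-based) and hybrid models such as the Finite State Linear Model refine this picture by replacing the Boolean update with graded dynamics.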

  15. Applying Boundary Conditions Using a Time-Dependent Lagrangian for Modeling Laser-Plasma Interactions

    Science.gov (United States)

    Reyes, J. Paxon; Shadwick, B. A.

    2015-11-01

    Describing a cold-Maxwell fluid system with a spatially-discrete, unbounded Lagrangian is problematic for numerical modeling since boundary conditions must be applied after the variational step. Accurate solutions may still be attained, but do not technically satisfy the derived energy conservation law. The size of the numerical domain, the order accuracy of the discrete approximations used, and the type of boundary conditions applied influence the behavior of the artificially-bounded system. To encode the desired boundary conditions of the equations of motion, we include time-dependent terms into the discrete Lagrangian. Although some foresight is needed to choose these time-dependent terms, this approach provides a mechanism for energy to exit the closed system while allowing the conservation law to account for the energy loss. Results of a spatially-discrete, time-dependent Lagrangian system (with approximations of second-order accuracy in space and fourth order in time) will be presented. The fields and total energy will be compared with models of the same accuracy using a time-independent variational approach as well as a non-variational approach. This work was supported by the U. S. Department of Energy under Contract No. DE-SC0008382 and by the National Science Foundation under Contract No. PHY- 1104683.
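
    The appeal of a variational discretization can be illustrated on a far simpler system than the cold-Maxwell fluid: the leapfrog scheme below is symplectic, so its discrete energy oscillates within a bounded band instead of drifting secularly. Parameters are invented and this is not the authors' scheme.

```python
def leapfrog(q, p, dt, steps, omega=1.0):
    """Leapfrog (kick-drift-kick), a variational/symplectic integrator for the
    harmonic oscillator q'' = -omega**2 * q. Returns the (q, p) trajectory."""
    traj = [(q, p)]
    for _ in range(steps):
        p -= 0.5 * dt * omega ** 2 * q  # half kick
        q += dt * p                     # drift
        p -= 0.5 * dt * omega ** 2 * q  # half kick
        traj.append((q, p))
    return traj

def energy(q, p, omega=1.0):
    return 0.5 * p * p + 0.5 * omega ** 2 * q * q

traj = leapfrog(1.0, 0.0, 0.05, 2000)
energies = [energy(q, p) for q, p in traj]
drift = max(energies) - min(energies)  # bounded oscillation, no secular drift
```

    Adding time-dependent boundary terms to the discrete Lagrangian, as the abstract describes, lets energy leave the domain while the conservation law still accounts for the loss.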

  16. A Hybrid 3D Learning-and-Interaction-based Segmentation Approach Applied on CT Liver Volumes

    Directory of Open Access Journals (Sweden)

    M. Danciu

    2013-04-01

    Full Text Available Medical volume segmentation in various imaging modalities using real 3D approaches (in contrast to slice-by-slice segmentation) represents an actual trend. The increase in acquisition resolution leads to large amounts of data, requiring solutions that reduce the dimensionality of the segmentation problem. In this context, real-time interaction with the large medical data volume represents another milestone. This paper addresses the twofold problem of 3D segmentation applied to large data sets and also describes an intuitive neuro-fuzzy trained interaction method. We present a new hybrid semi-supervised 3D segmentation for liver volumes obtained from computer tomography scans. This is a challenging medical volume segmentation task, due to the acquisition and inter-patient variability of the liver parenchyma. The proposed solution combines a learning-based segmentation stage (employing the 3D discrete cosine transform and a probabilistic support vector machine classifier) with a post-processing stage (automatic and manual segmentation refinement). Optionally, an optimization of the segmentation can be achieved by level sets, using as initialization the segmentation provided by the learning-based solution. The supervised segmentation is applied on elementary cubes into which the CT volume is decomposed by tiling, thus ensuring a significant reduction of the data to be classified by the support vector machine into liver/not liver. On real volumes, the proposed approach provides good segmentation accuracy, with a significant reduction in computational complexity.
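
    A one-dimensional sketch of the cube-wise DCT-feature classification described above: a plain intensity threshold stands in for the probabilistic SVM, the DCT is computed directly from its definition, and all intensities are invented.

```python
import math

def dct2(block):
    """Direct (unnormalized) DCT-II of a 1D block, a 1D stand-in for the
    3D DCT features computed per cube in the segmentation pipeline."""
    n = len(block)
    return [sum(x * math.cos(math.pi * (i + 0.5) * k / n) for i, x in enumerate(block))
            for k in range(n)]

def classify_cubes(profile, cube=4, threshold=2.0):
    """Tile a 1D intensity profile into 'cubes' and label each tissue/background
    by its DC (k = 0) DCT coefficient. A plain threshold replaces the
    probabilistic SVM of the paper; all numbers here are invented."""
    labels = []
    for start in range(0, len(profile), cube):
        coeffs = dct2(profile[start:start + cube])
        labels.append(coeffs[0] > threshold)  # DC term = sum of intensities
    return labels

profile = [0.1, 0.2, 0.1, 0.2,   # background-like cube
           0.9, 1.0, 0.8, 0.9]   # liver-like cube
labels = classify_cubes(profile)
```

    Classifying whole cubes by a handful of low-frequency coefficients, rather than every voxel, is what delivers the dimensionality reduction the abstract emphasizes.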

  17. Augmented design and analysis of computer experiments: a novel tolerance embedded global optimization approach applied to SWIR hyperspectral illumination design.

    Science.gov (United States)

    Keresztes, Janos C; John Koshel, R; D'huys, Karlien; De Ketelaere, Bart; Audenaert, Jan; Goos, Peter; Saeys, Wouter

    2016-12-26

    A novel meta-heuristic approach for minimizing nonlinear constrained problems is proposed, which offers tolerance information during the search for the global optimum. The method is based on the concept of design and analysis of computer experiments combined with a novel two-phase design augmentation (DACEDA), which models the entire merit space using a Gaussian process, with iteratively increased resolution around the optimum. The algorithm is introduced through a series of case studies with increasing complexity for optimizing uniformity of a short-wave infrared (SWIR) hyperspectral imaging (HSI) illumination system (IS). The method is first demonstrated for a two-dimensional problem consisting of the positioning of analytical isotropic point sources. The method is further applied to two-dimensional (2D) and five-dimensional (5D) SWIR HSI IS versions using close- and far-field measured source models applied within the non-sequential ray-tracing software FRED, including inherent stochastic noise. The proposed method is compared to other heuristic approaches such as simplex and simulated annealing (SA). It is shown that DACEDA converges towards a minimum with a 1% improvement compared to simplex and SA, and more importantly requires only half the number of simulations. Finally, a concurrent tolerance analysis is done within DACEDA for the five-dimensional case such that further simulations are not required.
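
    The core loop of a surrogate-based optimizer of this kind — fit a Gaussian process to all evaluations, then sample where the surrogate predicts the minimum — can be sketched in a few lines. This is a generic minimal sketch, not the DACEDA algorithm itself: the 1D quadratic merit function, kernel length scale, and grid resolution are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, length=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_query, jitter=1e-6):
    """Gaussian-process posterior mean at x_query (zero-mean prior, RBF kernel)."""
    K = rbf_kernel(x_train, x_train) + jitter * np.eye(len(x_train))
    k_star = rbf_kernel(x_query, x_train)
    return k_star @ np.linalg.solve(K, y_train)

def merit(x):                      # toy merit function standing in for the
    return (x - 0.62) ** 2         # ray-traced illumination non-uniformity

# Iteratively refine: evaluate, model the whole merit space, sample at the
# surrogate minimum (increasing resolution around the optimum).
x_train = np.linspace(0.0, 1.0, 5)
for _ in range(6):
    y_train = merit(x_train)
    grid = np.linspace(0.0, 1.0, 401)
    mu = gp_predict(x_train, y_train, grid)
    x_next = grid[np.argmin(mu)]
    x_train = np.append(x_train, x_next)

print("final sample:", round(float(x_train[-1]), 3))
```

    Each iteration spends one expensive merit evaluation where the cheap surrogate is most promising, which is why such methods need far fewer simulations than simplex or simulated annealing.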

  18. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  19. SIGMA: A Knowledge-Based Simulation Tool Applied to Ecosystem Modeling

    Science.gov (United States)

    Dungan, Jennifer L.; Keller, Richard; Lawless, James G. (Technical Monitor)

    1994-01-01

    The need for better technology to facilitate building, sharing and reusing models is generally recognized within the ecosystem modeling community. The Scientists' Intelligent Graphical Modelling Assistant (SIGMA) creates an environment for model building, sharing and reuse which provides an alternative to more conventional approaches which too often yield poorly documented, awkwardly structured model code. The SIGMA interface presents the user a list of model quantities which can be selected for computation. Equations to calculate the model quantities may be chosen from an existing library of ecosystem modeling equations, or built using a specialized equation editor. Inputs for the equations may be supplied by data or by calculation from other equations. Each variable and equation is expressed using ecological terminology and scientific units, and is documented with explanatory descriptions and optional literature citations. Automatic scientific unit conversion is supported and only physically-consistent equations are accepted by the system. The system uses knowledge-based semantic conditions to decide which equations in its library make sense to apply in a given situation, and supplies these to the user for selection. The equations and variables are graphically represented as a flow diagram which provides a complete summary of the model. Forest-BGC, a stand-level model that simulates photosynthesis and evapotranspiration for conifer canopies, was originally implemented in Fortran and subsequently re-implemented using SIGMA. The SIGMA version reproduces daily results and also provides a knowledge base which greatly facilitates inspection, modification and extension of Forest-BGC.

  20. Empirical modeling and data analysis for engineers and applied scientists

    CERN Document Server

    Pardo, Scott A

    2016-01-01

    This textbook teaches advanced undergraduate and first-year graduate students in Engineering and Applied Sciences to gather and analyze empirical observations (data) in order to aid in making design decisions. While science is about discovery, the primary paradigm of engineering and "applied science" is design. Scientists are in the discovery business and want, in general, to understand the natural world rather than to alter it. In contrast, engineers and applied scientists design products, processes, and solutions to problems. That said, statistics, as a discipline, is mostly oriented toward the discovery paradigm. Young engineers come out of their degree programs having taken courses such as "Statistics for Engineers and Scientists" without any clear idea as to how they can use statistical methods to help them design products or processes. Many seem to think that statistics is only useful for demonstrating that a device or process actually does what it was designed to do. Statistics courses emphasize creati...

  1. FAILURES AND DEFECTS IN THE BUILDING PROCESS – APPLYING THE BOW-TIE APPROACH

    DEFF Research Database (Denmark)

    Jørgensen, Kirsten

    2009-01-01

    Function failures, defects, mistakes and poor communication are major problems for the construction sector. A Danish research project focusing on failures and defects in building processes has been carried out over the last 2 years. As the empirical element in the research, a large construction site was observed from the very start to the very end, and all failures and defects of a certain size were recorded and analysed. The methodological approach used in this analysis was the bow-tie model from the area of safety research. It combines critical-event analysis for both causes and effects with event-tree analysis. The paper describes this analytical approach as an introduction to a new concept for understanding failures and defects in construction. Analysing the many critical events in the building process with the bow-tie model visualises the complexity of causes. This visualisation offers...

  2. Robust Decision-making Applied to Model Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Laboratory

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  3. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    Science.gov (United States)

    Biggs, Matthew B; Papin, Jason A

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.

  4. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    Directory of Open Access Journals (Sweden)

    Matthew B Biggs

    Full Text Available Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.

  5. A Bayesian Model for Plan Recognition in RTS Games applied to StarCraft

    CERN Document Server

    Synnaeve, Gabriel

    2011-01-01

    The task of keyhole (unobtrusive) plan recognition is central to adaptive game AI. "Tech trees" or "build trees" are the core of real-time strategy (RTS) game strategic (long term) planning. This paper presents a generic and simple Bayesian model for RTS build tree prediction from noisy observations, whose parameters are learned from replays (game logs). This unsupervised machine learning approach involves minimal work for the game developers as it leverages players' data (common in RTS). We applied it to StarCraft and showed that it yields high-quality and robust predictions that can feed an adaptive AI.

  6. Metamodelling Approach and Software Tools for Physical Modelling and Simulation

    Directory of Open Access Journals (Sweden)

    Vitaliy Mezhuyev

    2015-02-01

    Full Text Available In computer science, the metamodelling approach is becoming increasingly popular for software systems development. In this paper, we discuss the applicability of the metamodelling approach to the development of software tools for physical modelling and simulation. To define a metamodel for physical modelling, we analyse physical models. The analysis reveals invariant physical structures, which we propose to use as the basic abstractions of the physical metamodel. It is a system of geometrical objects, allowing a spatial structure of physical models to be built and a distribution of physical properties to be set. To such a geometry of distributed physical properties, different mathematical methods can be applied. To validate the proposed metamodelling approach, we consider developed prototypes of software tools.

  7. The Intensive Dysphagia Rehabilitation Approach Applied to Patients With Neurogenic Dysphagia: A Case Series Design Study.

    Science.gov (United States)

    Malandraki, Georgia A; Rajappa, Akila; Kantarcigil, Cagla; Wagner, Elise; Ivey, Chandra; Youse, Kathleen

    2016-04-01

    To examine the effects of the Intensive Dysphagia Rehabilitation approach on physiological and functional swallowing outcomes in adults with neurogenic dysphagia. Intervention study; before-after trial with 4-week follow-up through an online survey. Outpatient university clinics. A consecutive sample of subjects (N=10) recruited from outpatient university clinics. All subjects were diagnosed with adult-onset neurologic injury or disease. Dysphagia diagnosis was confirmed through clinical and endoscopic swallowing evaluations. No subjects withdrew from the study. Participants completed the 4-week Intensive Dysphagia Rehabilitation protocol, including 2 oropharyngeal exercise regimens, a targeted swallowing routine using salient stimuli, and caregiver participation. Treatment included hourly sessions twice per week and home practice for approximately 45 min/d. Outcome measures assessed pre- and posttreatment included airway safety using an 8-point Penetration Aspiration Scale, lingual isometric pressures, self-reported swallowing-related quality of life (QOL), and level of oral intake. Also, patients were monitored for adverse dysphagia-related effects. QOL and adverse effects were also assessed at the 4-week follow-up (online survey). The Intensive Dysphagia Rehabilitation approach was effective in improving maximum and mean Penetration Aspiration Scale scores (P…). The approach was safe and improved physiological and some functional swallowing outcomes in our sample; however, further investigation is needed before it can be widely applied. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  8. Unified viscoelasticity: Applying discrete element models to soft tissues with two characteristic times.

    Science.gov (United States)

    Anssari-Benam, Afshin; Bucchi, Andrea; Bader, Dan L

    2015-09-18

    Discrete element models have often been the primary tool in investigating and characterising the viscoelastic behaviour of soft tissues. However, studies have employed varied configurations of these models, based on the choice of the number of elements and the utilised formulation, for different subject tissues. This approach has yielded a diverse array of viscoelastic models in the literature, each seemingly resulting in different descriptions of viscoelastic constitutive behaviour and/or stress-relaxation and creep functions. Moreover, most studies do not apply a single discrete element model to characterise both stress-relaxation and creep behaviours of tissues. The underlying assumption for this disparity is the implicit perception that the viscoelasticity of soft tissues cannot be described by a universal behaviour or law, resulting in the lack of a unified approach in the literature based on discrete element representations. This paper derives the constitutive equation for different viscoelastic models applicable to soft tissues with two characteristic times. It demonstrates that all possible configurations exhibit a unified and universal behaviour, captured by a single constitutive relationship between stress, strain and time as: σ + Aσ̇ + Bσ̈ = Pε̇ + Qε̈. The ensuing stress-relaxation G(t) and creep J(t) functions are also unified and universal, derived as [Formula: see text] and J(t) = c₂ + (ε₀ − c₂)e^(−(P/Q)t) + (σ₀/P)t, respectively. Application of these relationships to experimental data is illustrated for various tissues including the aortic valve, ligament and cerebral artery. The unified model presented in this paper may be applied to all tissues with two characteristic times, obviating the need for employing varied configurations of discrete element models in preliminary investigation of the viscoelastic behaviour of soft tissues. Copyright © 2015 Elsevier Ltd. All rights reserved.
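
    The creep function follows directly from the constitutive law: under constant stress σ₀, it reduces to Pε̇ + Qε̈ = σ₀, whose solution is ε(t) = c₂ + (ε₀ − c₂)e^(−(P/Q)t) + (σ₀/P)t. A minimal numerical check, with purely illustrative parameter values:

```python
import numpy as np

# Creep response of the unified two-characteristic-time model:
# under constant stress s0, P*de/dt + Q*d2e/dt2 = s0, giving
# e(t) = c2 + (e0 - c2)*exp(-(P/Q)*t) + (s0/P)*t
P, Q, s0, e0, c2 = 2.0, 0.5, 1.0, 0.1, 0.3   # illustrative values

def creep(t):
    return c2 + (e0 - c2) * np.exp(-(P / Q) * t) + (s0 / P) * t

t = np.linspace(0.0, 10.0, 2001)
e = creep(t)

print(round(float(e[0]), 3))                  # initial strain e0 = 0.1
late_slope = (e[-1] - e[-2]) / (t[-1] - t[-2])
print(round(float(late_slope), 3))            # long-time flow rate s0/P = 0.5
```

    The two signatures of the unified form are visible directly: the instantaneous strain ε₀ at t = 0, and the fluid-like steady creep rate σ₀/P once the exponential transient (time constant Q/P) has decayed.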

  9. Optimization strategies in the modelling of SG-SMB applied to separation of phenylalanine and tryptophan

    Science.gov (United States)

    Diógenes Tavares Câmara, Leôncio

    2014-03-01

    The solvent-gradient simulated moving bed (SG-SMB) process is the new trend for performance improvement over traditional isocratic solvent conditions. In the SG-SMB process, modulation of the solvent strength leads to a significant increase in purity and productivity, together with a reduction in solvent consumption. A stepwise modelling approach was utilized in the representation of the interconnected chromatographic columns of the system, combined with a lumped mass transfer model between the solid and liquid phases. The influence of the solvent modifier was considered by applying the Abel model, which takes into account the effect of the modifier volume fraction on the partition coefficient. Correlation models for the mass transfer parameters were obtained from the retention times of the solutes as functions of the modifier volume fraction. The modelling and simulations were carried out and compared to an experimental SG-SMB unit separating the amino acids phenylalanine and tryptophan. The simulation results showed the great potential of the proposed modelling approach in the representation of such complex systems, fitting the experimental amino acid concentrations well in both the extract and the raffinate. A new optimization strategy, based on the phi-plot concept, was proposed for determining the best operating conditions.

  10. Approaches of Health Risk Assessment for Heavy Metals Applied in China and Advances in Exposure Assessment Models: A Review

    Institute of Scientific and Technical Information of China (English)

    刘蕊; 张辉; 勾昕; 罗绪强; 杨鸿雁

    2014-01-01

    Rapid economic development has led to the deterioration of environmental quality in China. With growing health awareness, increasing attention is being paid to health risk assessment for populations exposed to pollutants. Compared with other pollutants, heavy metal contamination covers wide areas and exposes large, concentrated populations. To study the health risks of heavy metal exposure, Chinese researchers have applied USEPA models, statistical models, geographic information systems, and bioavailability methods. As a key step in health risk assessment, exposure assessment models have matured abroad, but related research in China is still at an early stage. This paper reviews the health risk assessment approaches applied in China in recent years to heavy metals in urban surface soil (dust), mining-area soil, diet, groundwater and drinking water, and atmospheric particulate matter, and introduces the exposure assessment models commonly used in Europe and the United States: environmental exposure models and dietary exposure models. Health risk assessment in China started late and shows major deficiencies at every step. With the development of new technologies and a deeper public understanding of environmental health risks, health risk assessment will become one of the active research fields in China; the environmental behaviour of pollutants, dose-effect relationships, models, and risk information will be the focus of future research.

  11. Adequateness of applying the Zmijewski model on Serbian companies

    Directory of Open Access Journals (Sweden)

    Pavlović Vladan

    2012-12-01

    Full Text Available The aim of the paper is to determine the predictive accuracy of the Zmijewski model in Serbia on an eligible sample. At the same time, the paper identifies the model's strengths, weaknesses and limitations of its possible application. Bearing in mind that the economic environment in Serbia is not similar to that of the United States at the time the model was developed, the Zmijewski model is surprisingly accurate in the case of Serbian companies. The accuracy was slightly weaker than the model's original results in the U.S., but much better than the results the model gave in the U.S. in the periods 1988-1991 and 1992-1999. The model also gave better results in Serbia than in Croatia, even though it had been adjusted for Croatia.
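
    For reference, the Zmijewski score is a probit on three financial ratios. The sketch below uses the commonly cited coefficient estimates from Zmijewski (1984); treat them as an assumption and verify against the original paper before any real use, and note that the firm figures are invented.

```python
from math import erf, sqrt

def zmijewski_score(net_income, total_assets, total_liabilities,
                    current_assets, current_liabilities):
    """Zmijewski financial-distress probit score.

    Coefficients are the commonly cited estimates (-4.336, -4.513,
    +5.679, +0.004); verify against the original paper before use.
    """
    roa = net_income / total_assets              # return on assets
    leverage = total_liabilities / total_assets  # debt ratio
    liquidity = current_assets / current_liabilities
    x = -4.336 - 4.513 * roa + 5.679 * leverage + 0.004 * liquidity
    prob = 0.5 * (1.0 + erf(x / sqrt(2.0)))      # standard normal CDF
    return x, prob

# Hypothetical healthy firm: positive earnings, low leverage
x, p = zmijewski_score(120.0, 1000.0, 300.0, 400.0, 200.0)
print(p < 0.5)  # True: low estimated distress probability
```

    Applying the model in a different economy, as the paper does for Serbia, amounts to asking how well these U.S.-estimated coefficients transfer without re-estimation.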

  12. Atomistic Method Applied to Computational Modeling of Surface Alloys

    Science.gov (United States)

    Bozzolo, Guillermo H.; Abel, Phillip B.

    2000-01-01

    The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents' properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally. The computational tool, which is based on...

  13. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Full Text Available The popularity of cloud computing is increasing steadily. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. From the findings of the research, it was observed that security in the cloud could be enhanced with the single cache system. In future work, the Apriori algorithm can be applied to the single cache system; this can be done by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.
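
    The Apriori algorithm mentioned here mines frequent itemsets by growing candidates level by level and pruning those below a minimum support. A minimal, self-contained sketch on invented toy "cache access" transactions (the data objects A-D are hypothetical):

```python
from itertools import chain

def apriori(transactions, min_support):
    """Minimal Apriori: frequent itemsets via breadth-first candidate growth."""
    transactions = [frozenset(t) for t in transactions]
    items = sorted(set(chain.from_iterable(transactions)))

    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    frequent = {}
    k_sets = [frozenset([i]) for i in items]
    while k_sets:
        k_sets = [s for s in k_sets if support(s) >= min_support]
        for s in k_sets:
            frequent[s] = support(s)
        # join step: combine frequent k-sets into (k+1)-set candidates
        k_sets = list({a | b for a in k_sets for b in k_sets
                       if len(a | b) == len(a) + 1})
    return frequent

# Toy transactions over hypothetical cached data objects A..D
logs = [["A", "B", "C"], ["A", "B"], ["A", "D"], ["B", "C"]]
freq = apriori(logs, min_support=0.5)
print(freq[frozenset({"A", "B"})])  # 0.5: {A, B} appears in 2 of 4 logs
```

    Infrequent items (here D, support 0.25) are pruned early, so no candidate containing them is ever counted — the property that keeps Apriori tractable on large logs.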

  14. Applying the Job Characteristics Model to the College Education Experience

    Science.gov (United States)

    Kass, Steven J.; Vodanovich, Stephen J.; Khosravi, Jasmine Y.

    2011-01-01

    Boredom is one of the most common complaints among university students, with studies suggesting its link to poor grades, drop out, and behavioral problems. Principles borrowed from industrial-organizational psychology may help prevent boredom and enrich the classroom experience. In the current study, we applied the core dimensions of the job…

  15. Applying forces to elastic network models of large biomolecules using a haptic feedback device.

    Science.gov (United States)

    Stocks, M B; Laycock, S D; Hayward, S

    2011-03-01

    Elastic network models of biomolecules have proved to be relatively good at predicting global conformational changes particularly in large systems. Software that facilitates rapid and intuitive exploration of conformational change in elastic network models of large biomolecules in response to externally applied forces would therefore be of considerable use, particularly if the forces mimic those that arise in the interaction with a functional ligand. We have developed software that enables a user to apply forces to individual atoms of an elastic network model of a biomolecule through a haptic feedback device or a mouse. With a haptic feedback device the user feels the response to the applied force whilst seeing the biomolecule deform on the screen. Prior to the interactive session normal mode analysis is performed, or pre-calculated normal mode eigenvalues and eigenvectors are loaded. For large molecules this allows the memory and number of calculations to be reduced by employing the idea of the important subspace, a relatively small space of the first M lowest frequency normal mode eigenvectors within which a large proportion of the total fluctuation occurs. Using this approach it was possible to study GroEL on a standard PC as even though only 2.3% of the total number of eigenvectors could be used, they accounted for 50% of the total fluctuation. User testing has shown that the haptic version allows for much more rapid and intuitive exploration of the molecule than the mouse version.
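
    The "important subspace" idea described here — expanding the response to an applied force in the M lowest-frequency normal modes — can be sketched on a toy elastic network. The 1D spring chain, weak tether, and sizes below are illustrative stand-ins for a real biomolecular network such as GroEL.

```python
import numpy as np

# Toy elastic network: N beads in a chain joined by unit springs.
# Static response to an applied force f is expanded in the M
# lowest-frequency normal modes: u = sum_i (f . v_i / l_i) v_i.
N, M = 20, 5
H = np.zeros((N, N))                      # Hessian of the spring chain
for i in range(N - 1):
    H[i, i] += 1.0
    H[i + 1, i + 1] += 1.0
    H[i, i + 1] -= 1.0
    H[i + 1, i] -= 1.0
H += 0.01 * np.eye(N)                     # weak tether removes the zero mode

evals, evecs = np.linalg.eigh(H)          # eigenvalues in ascending order

f = np.zeros(N)
f[-1] = 1.0                               # pull on the last bead

def response(num_modes):
    u = np.zeros(N)
    for i in range(num_modes):
        u += (f @ evecs[:, i]) / evals[i] * evecs[:, i]
    return u

u_exact = np.linalg.solve(H, f)           # full-space response
u_sub = response(M)                       # important-subspace response
overlap = u_sub @ u_exact / (u_exact @ u_exact)
print(overlap > 0.9)  # True: low modes capture most of the soft response
```

    Because each mode's contribution is weighted by 1/λᵢ, the soft (low-frequency) modes dominate, which is why 2.3% of GroEL's eigenvectors could account for 50% of its fluctuation in the paper.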

  16. Applying XML for designing and interchanging information for multidimensional model

    Institute of Scientific and Technical Information of China (English)

    Lu Changhui; Deng Su; Zhang Weiming

    2005-01-01

    In order to exchange and share information among the conceptual models of data warehouse, and to build a solid base for the integration and share of metadata, a new multidimensional concept model is presented based on XML and its DTD is defined, which can perfectly describe various semantic characteristics of multidimensional conceptual model. According to the multidimensional conceptual modeling technique which is based on UML, the mapping algorithm between the multidimensional conceptual model is described based on XML and UML class diagram, and an application base for the wide use of this technique is given.

  17. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

    Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equations based Bayesian approach and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODEs). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity and that the specific choice of this algorithm shows only minor performance differences.
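
    The "naive" approach the abstract compares against can be sketched on the immigration-death model it mentions: simulate stochastic realizations with Gillespie's algorithm, then least-squares-fit the deterministic ODE mean. All rates, run counts, and the coarse grid search below are illustrative assumptions, not the paper's setup.

```python
import math
import random

def gillespie_immigration_death(k, gamma, t_end, seed):
    """One realization of X -> X+1 (rate k), X -> X-1 (rate gamma*X),
    sampled on a unit time grid."""
    rng = random.Random(seed)
    t, x, out, grid = 0.0, 0, [], 0.0
    while grid <= t_end:
        total = k + gamma * x
        dt = rng.expovariate(total)
        while grid <= min(t + dt, t_end):   # record state at grid times
            out.append(x)
            grid += 1.0
        t += dt
        x += 1 if rng.random() < k / total else -1
    return out

# Naive least squares: fit the deterministic mean k/gamma*(1-exp(-gamma*t))
# to the average of many stochastic realizations (grid search over gamma).
k_true, gamma_true = 10.0, 0.5
runs = [gillespie_immigration_death(k_true, gamma_true, 20.0, s)
        for s in range(200)]
mean = [sum(r[i] for r in runs) / len(runs) for i in range(21)]

best = None
for g in [0.1 * j for j in range(1, 21)]:
    kk = mean[-1] * g            # k implied by the near-stationary level
    sse = sum((mean[i] - kk / g * (1 - math.exp(-g * i))) ** 2
              for i in range(21))
    if best is None or sse < best[0]:
        best = (sse, g)
print("gamma estimate:", round(best[1], 1))
```

    For this linear model the ODE mean is exact, so the naive fit works well; the paper's point is that for nonlinear systems like the toggle switch, methods that model the intrinsic stochasticity can matter more.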

  18. Nonlinear models applied to seed germination of Rhipsalis cereuscula Haw (Cactaceae

    Directory of Open Access Journals (Sweden)

    Terezinha Aparecida Guedes

    2014-09-01

    Full Text Available The objective of this analysis was to fit germination data of Rhipsalis cereuscula Haw seeds to the three-parameter Weibull model using Frequentist and Bayesian methods. Five parameterizations were compared in the Bayesian analysis to fit a prior distribution. The parameter estimates from the Frequentist method were similar to the Bayesian responses considering the following non-informative a priori distributions for the parameter vectors: gamma(10³, 10³) in model M1, normal(0, 10⁶) in model M2, uniform(0, Lsup) in model M3, exp(μ) in model M4 and lognormal(μ, 10⁶) in model M5. However, to achieve convergence in models M4 and M5, we applied the μ from the estimates of the Frequentist approach. The best models fitted by the Bayesian method were M1 and M3. The adequacy of these models was based on their advantages over the Frequentist method, such as reduced computational effort and the possibility of model comparison.
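
    A three-parameter Weibull curve for cumulative germination is commonly written G(t) = A(1 − exp(−(t/d)^g)), with A the final germination fraction, d the scale, and g the shape. The frequentist fit can be sketched as a least-squares search; the parameter values, synthetic data, and coarse grid below are purely illustrative (the actual parameterizations in the paper differ).

```python
import math

def weibull(t, A, d, g):
    """Cumulative germination fraction at time t (three-parameter Weibull)."""
    return A * (1.0 - math.exp(-((t / d) ** g)))

# Synthetic noise-free observations from known parameters, then a coarse
# frequentist grid-search least-squares fit recovers them.
A0, d0, g0 = 0.8, 5.0, 2.0
days = list(range(1, 15))
obs = [weibull(t, A0, d0, g0) for t in days]

best = None
for A in [0.7, 0.8, 0.9]:
    for d in [4.0, 5.0, 6.0]:
        for g in [1.5, 2.0, 2.5]:
            sse = sum((weibull(t, A, d, g) - y) ** 2
                      for t, y in zip(days, obs))
            if best is None or sse < best[0]:
                best = (sse, A, d, g)
print(best[1:])  # (0.8, 5.0, 2.0): exact recovery on noise-free data
```

    A Bayesian fit replaces this point search with a posterior over (A, d, g) under the priors listed in the abstract, which is where the model-comparison advantage comes from.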

  19. Applying Functional Modeling for Accident Management of Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Lind, Morten; Zhang Xinxin [Harbin Engineering University, Harbin (China)

    2014-08-15

    The paper investigates applications of functional modeling for accident management in complex industrial plants, with special reference to nuclear power production. Main applications for information sharing among decision makers and decision support are identified. An overview of Multilevel Flow Modeling is given, a detailed presentation of the foundational means-end concepts is presented, and the conditions for their proper use in modelling accidents are identified. It is shown that Multilevel Flow Modeling can be used for modelling and reasoning about design basis accidents. Its possible role for information sharing and decision support in accidents beyond design basis is also indicated. A modelling example demonstrating the application of Multilevel Flow Modelling and reasoning for a PWR LOCA is presented.

  20. A decision model applied to alcohol effects on driver signal light behavior

    Science.gov (United States)

    Schwartz, S. H.; Allen, R. W.

    1978-01-01

    A decision model including perceptual noise or inconsistency is developed from expected value theory to explain driver stop and go decisions at signaled intersections. The model is applied to behavior in a car simulation and instrumented vehicle. Objective and subjective changes in driver decision making were measured with changes in blood alcohol concentration (BAC). Treatment levels averaged 0.00, 0.10 and 0.14 BAC for a total of 26 male subjects. Data were taken for drivers approaching signal lights at three timing configurations. The correlation between model predictions and behavior was highly significant. In contrast to previous research, analysis indicates that increased BAC results in increased perceptual inconsistency, which is the primary cause of increased risk taking at low probability of success signal lights.
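
    The decision model's key mechanism — a go/stop choice driven by a noisy percept, where more perceptual noise produces more risky "go" decisions at low-probability-of-success lights — can be sketched with a simple Gaussian-noise rule. The decision rule, time window, and noise levels below are illustrative assumptions, not the paper's fitted model.

```python
from math import erf, sqrt

def p_go(time_to_light, go_window, noise_sd):
    """Probability of a 'go' decision: the perceived time to the
    intersection (Gaussian noise around the true value) falls inside
    the window in which crossing would succeed. Illustrative rule."""
    z = (go_window - time_to_light) / noise_sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF

# Low-probability-of-success light: true time exceeds the go window,
# so an accurate perceiver should usually stop.
sober = p_go(time_to_light=3.0, go_window=2.5, noise_sd=0.3)
drunk = p_go(time_to_light=3.0, go_window=2.5, noise_sd=0.9)
print(drunk > sober)  # True: more perceptual noise -> more risky "go" calls
```

    This reproduces the paper's qualitative finding: raising BAC need not change the expected-value rule itself, only the perceptual inconsistency, to increase risk taking.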

  1. Verification of short lead time forecast models: applied to Kp and Dst forecasting

    Science.gov (United States)

    Wintoft, Peter; Wik, Magnus

    2016-04-01

    In the ongoing EU/H2020 project PROGRESS, models that predict Kp, Dst, and AE from L1 solar wind data will be used as inputs to radiation belt models. The possible lead times from L1 measurements are shorter (tens of minutes to hours) than the typical duration of the physical phenomena to be forecast. Under these circumstances several metrics fail to single out trivial cases, such as persistence. In this work we explore metrics and approaches for short lead time forecasts and apply them to current Kp and Dst forecast models. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 637302.
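
    The "trivial persistence" problem can be made concrete: on a slowly varying index, forecasting "tomorrow equals today" already scores well on naive metrics, so a model must be verified against that baseline. The AR(1) stand-in series, coefficient, and seed below are illustrative, not real Dst data.

```python
import random

# AR(1) stand-in for a slowly varying geomagnetic index
rng = random.Random(7)
x = [0.0]
for _ in range(2000):
    x.append(0.95 * x[-1] + rng.gauss(0.0, 1.0))

lead = 1
obs = x[lead:]
persist = x[:-lead]                  # persistence forecast: last observation

mse_persist = sum((o - f) ** 2 for o, f in zip(obs, persist)) / len(obs)
mse_clim = sum(o ** 2 for o in obs) / len(obs)   # climatology (mean ~ 0)
skill = 1.0 - mse_persist / mse_clim             # skill vs. climatology
print(skill > 0.5)  # persistence alone already beats climatology easily
```

    Because persistence scores so highly here, a meaningful verification of an L1-driven model would report skill relative to persistence (1 − MSE_model/MSE_persist) rather than relative to climatology.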

  2. Skill-Based Approach Applied to Gifted Students, its Potential in Latin America

    Directory of Open Access Journals (Sweden)

    Andrew Alexi Almazán-Anaya

    2015-09-01

    Full Text Available This paper presents, as a reflective essay, the current educational situation of gifted students (those with above-average intelligence) in Latin America and the possibility of using skill-based education within differentiated programs (intended for gifted individuals), a sector where scarce scientific studies have been done and no consensus on an ideal educational model has yet been reached. Currently these students generally lack specialized educational assistance intended to identify and develop their cognitive abilities, and it is estimated that a high percentage (95%) of this population goes undetected in the traditional education system. Although there are differentiated education models, they are rarely applied. A student-centered education program is proposed as a solution to apply this pedagogical model and cover this population. The characteristics of this program that support differentiated instruction for gifted individuals, compatible with experiences in the US, Europe, and Latin America, are analyzed. Finally, this paper concludes with an analysis of possible research areas that, if explored in the future, would help answer questions about the feasibility of, and relation between, skill-based programs and differentiated education for gifted students.

  3. Applying Model Checking to Industrial-Sized PLC Programs

    CERN Document Server

    AUTHOR|(CDS)2079190; Darvas, Daniel; Blanco Vinuela, Enrique; Tournier, Jean-Charles; Bliudze, Simon; Blech, Jan Olaf; Gonzalez Suarez, Victor M

    2015-01-01

    Programmable logic controllers (PLCs) are embedded computers widely used in industrial control systems. Ensuring that PLC software complies with its specification is a challenging task. Formal verification has become a recommended practice to ensure the correctness of safety-critical software but is still underused in industry due to the complexity of building and managing formal models of real applications. In this paper, we propose a general methodology to perform automated model checking of complex properties expressed in temporal logics (e.g. CTL, LTL) on PLC programs. This methodology is based on an intermediate model (IM), meant to transform PLC programs written in various standard languages (ST, SFC, etc.) to different modeling languages of verification tools. We present the syntax and semantics of the IM and the transformation rules of the ST and SFC languages to the nuXmv model checker passing through the intermediate model. Finally, two real case studies of CERN PLC programs, written mainly in th...

  4. Applying Functional Modeling for Accident Management of Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper investigates applications of functional modeling for accident management in complex industrial plant with special reference to nuclear power production. Main applications for information sharing among decision makers and decision support are identified. An overview of Multilevel Flow...... for information sharing and decision support in accidents beyond design basis is also indicated. A modelling example demonstrating the application of Multilevel Flow Modelling and reasoning for a PWR LOCA is presented....

  5. A human rights-consistent approach to multidimensional welfare measurement applied to sub-Saharan Africa

    DEFF Research Database (Denmark)

    Arndt, Channing; Mahrt, Kristi; Hussain, Azhar

    The rights-based approach to development targets progress towards the realization of 30 articles set forth in the Universal Declaration of Human Rights. Progress is frequently measured using the multidimensional poverty index. While elegant and useful, the multidimensional poverty index...... is in reality inconsistent with the Universal Declaration of Human Rights principles of indivisibility, inalienability, and equality. We show that a first-order dominance methodology maintains consistency with basic principles, discuss the properties of the multidimensional poverty index and first......-order dominance, and apply the measures to 26 African countries. We conclude that the multidimensional poverty index and first-order dominance are useful complements that should be employed in tandem....

  6. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models ch...

  7. Exploiting neurovascular coupling: a Bayesian sequential Monte Carlo approach applied to simulated EEG fNIRS data

    Science.gov (United States)

    Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria

    2017-08-01

    Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS can provide measurements of two hemodynamic variables, namely oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies, and, by taking advantage of a physical model of the coupling between electrical and hemodynamic response, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.
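
The record above rests on a Bayesian sequential Monte Carlo estimator. A minimal bootstrap particle filter can be sketched on a toy 1-D random-walk state observed in Gaussian noise; this only illustrates the PF mechanics (propagate, re-weight, resample), not the authors' EEG/fNIRS state-space model, and all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500,
                    process_std=0.1, obs_std=0.2):
    """Bootstrap particle filter for a 1-D random-walk state
    observed with Gaussian noise."""
    particles = rng.normal(0.0, 1.0, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for y in observations:
        # propagate each particle through the process model
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # re-weight by the Gaussian observation likelihood
        weights = weights * np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights = weights / weights.sum()
        estimates.append(np.sum(weights * particles))
        # systematic resampling to avoid weight degeneracy
        cumulative = np.cumsum(weights)
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        particles = particles[np.searchsorted(cumulative, positions)]
        weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

# track a slowly drifting signal from noisy samples
true_state = np.cumsum(rng.normal(0.0, 0.05, 200))
obs = true_state + rng.normal(0.0, 0.2, 200)
est = particle_filter(obs)
```

Systematic resampling keeps the particle set concentrated in high-likelihood regions, which is what lets the filtered estimate track the hidden state more closely than the raw observations do.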

  8. Applying Hybrid Heuristic Approach to Identify Contaminant Source Information in Transient Groundwater Flow Systems

    Directory of Open Access Journals (Sweden)

    Hund-Der Yeh

    2014-01-01

    Full Text Available Simultaneous identification of the source location and release history in aquifers is complicated and time-consuming if the release of a groundwater contaminant source varies in time. This paper presents an approach called SATSO-GWT to solve complicated source release problems which contain the unknowns of three location coordinates and several irregular release periods and concentrations. The SATSO-GWT combines an ordinal optimization algorithm (OOA), a roulette wheel approach, and a source identification algorithm called SATS-GWT. The SATS-GWT was developed based on simulated annealing, tabu search, and the three-dimensional groundwater flow and solute transport model MD2K-GWT. The OOA and roulette wheel method are utilized mainly to reduce the size of the feasible solution domain and accelerate the identification of the source information. A hypothetical site with one contaminant source location and two release periods is designed to assess the applicability of the present approach. The results indicate that the performance of SATSO-GWT is superior to that of SATS-GWT. In addition, the present approach works very effectively in dealing with cases which have different initial guesses of source location and measurement errors at the monitoring points, as well as problems with large suspicious areas and several source release periods and concentrations.

  9. LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL

    Directory of Open Access Journals (Sweden)

    Eser ÖRDEM

    2013-06-01

    Full Text Available Abstract This study intends to propose the Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language so that this model can be used in classroom settings. The model was created by the researcher as a result of studies carried out in applied linguistics (Hill, 2000) and memory (Murphy, 2004). Since one of the main problems of foreign language learners is retrieving what they have learnt, Lewis (1998) and Wray (2008) propose the lexical approach as an alternative way to address this problem. Unlike the grammar translation method, this approach supports the idea that language is composed not of general grammar but of strings of words and word combinations. In addition, the lexical approach posits that each word has its own grammatical properties, and that each dictionary is therefore a potential grammar book. Foreign language learners can learn to use collocations, a basic principle of the Lexical Approach, and thus increase their level of retention. The concept of the retrieval clue (Murphy, 2004) is considered the main element in this collocational study model, because the main purpose of the model is to boost fluency and help learners gain native-like accuracy while producing the target language. Keywords: Foreign language teaching, lexical approach, collocations, retrieval clue

  10. Operative terminology and post-operative management approaches applied to hepatic surgery: Trainee perspectives.

    Science.gov (United States)

    Farid, Shahid G; Prasad, K Rajendra; Morris-Stiff, Gareth

    2013-05-27

    Outcomes in hepatic resectional surgery (HRS) have improved as a result of advances in the understanding of hepatic anatomy, improved surgical techniques, and enhanced peri-operative management. Patients are generally cared for in specialist higher-level ward settings with multidisciplinary input during the initial post-operative period; however, greater acceptance and understanding of HRS has meant that care is transferred, usually after 24-48 h, to a standard ward environment. Surgical trainees will be presented with such patients either electively as part of a hepatobiliary firm or whilst covering the service on-call, and it is therefore important to acknowledge the key points in managing HRS patients. Understanding the applied anatomy of the liver is the key to determining the extent of resection to be undertaken. Increasingly, enhanced patient pathways exist in the post-operative setting requiring focus on the delivery of high quality analgesia, careful fluid balance, nutrition and thromboprophylaxis. Complications can occur including liver, renal and respiratory failure, hemorrhage, and sepsis, all of which require prompt recognition and management. We provide an overview of the relevant terminology applied to hepatic surgery, an approach to the post-operative management, and an aid to developing an awareness of complications so as to facilitate better confidence in this complex subgroup of general surgical patients.

  11. Geographically Weighted Logistic Regression Applied to Credit Scoring Models

    Directory of Open Access Journals (Sweden)

    Pedro Henrique Melo Albuquerque

    Full Text Available Abstract This study used real data from a Brazilian financial institution on transactions involving Consumer Direct Credit (CDC, granted to clients residing in the Distrito Federal (DF, to construct credit scoring models via Logistic Regression and Geographically Weighted Logistic Regression (GWLR techniques. The aims were: to verify whether the factors that influence credit risk differ according to the borrower’s geographic location; to compare the set of models estimated via GWLR with the global model estimated via Logistic Regression, in terms of predictive power and financial losses for the institution; and to verify the viability of using the GWLR technique to develop credit scoring models. The metrics used to compare the models developed via the two techniques were the AICc informational criterion, the accuracy of the models, the percentage of false positives, the sum of the value of false positive debt, and the expected monetary value of portfolio default compared with the monetary value of defaults observed. The models estimated for each region in the DF were distinct in their variables and coefficients (parameters, with it being concluded that credit risk was influenced differently in each region in the study. The Logistic Regression and GWLR methodologies presented very close results, in terms of predictive power and financial losses for the institution, and the study demonstrated viability in using the GWLR technique to develop credit scoring models for the target population in the study.
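
The core of a geographically weighted regression is a distance-decay kernel that turns one global fit into many local fits. The sketch below is illustrative only (synthetic data, Gaussian kernel, plain gradient ascent in place of the IRLS that GWR software uses) and shows how the coefficient of a local logistic model can change sign across a region:

```python
import numpy as np

rng = np.random.default_rng(1)

def kernel_weights(coords, focal, bandwidth):
    """Gaussian distance-decay weights: observations near the focal
    location dominate the local fit."""
    d = np.linalg.norm(coords - focal, axis=1)
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def local_logistic(X, y, w, steps=2000, lr=0.1):
    """Weighted logistic regression fitted by gradient ascent on the
    weighted log-likelihood (a simple stand-in for IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += lr * X.T @ (w * (y - p)) / w.sum()
    return beta

# synthetic region: the effect of x on default flips sign between
# the western (x0 < 5) and eastern (x0 >= 5) half of the map
coords = rng.uniform(0.0, 10.0, size=(400, 2))
x = rng.normal(size=400)
slope = np.where(coords[:, 0] < 5.0, 2.0, -2.0)
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-slope * x))).astype(float)
X = np.column_stack([np.ones(400), x])

west = local_logistic(X, y, kernel_weights(coords, np.array([1.0, 5.0]), 2.0))
east = local_logistic(X, y, kernel_weights(coords, np.array([9.0, 5.0]), 2.0))
```

The two local fits recover opposite slope signs, which a single global logistic regression would average away.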

  12. Applying reliability models to the maintenance of Space Shuttle software

    Science.gov (United States)

    Schneidewind, Norman F.

    1992-01-01

    Software reliability models provide the software manager with a powerful tool for predicting, controlling, and assessing the reliability of software during maintenance. We show how a reliability model can be effectively employed for reliability prediction and the development of maintenance strategies using the Space Shuttle Primary Avionics Software Subsystem as an example.

  13. Trailing edge noise model applied to wind turbine airfoils

    DEFF Research Database (Denmark)

    Bertagnolio, Franck

    The aim of this work is firstly to provide a quick introduction to the theories of noise generation that are relevant to wind turbine technology, with a focus on trailing edge noise. Secondly, the so-called TNO trailing edge noise model developed by Parchen [1] is described in more detail. The model...

  14. Hydrologic and water quality terminology as applied to modeling

    Science.gov (United States)

    A survey of literature and examination in particular of terminology use in a previous special collection of modeling calibration and validation papers has been conducted to arrive at a list of consistent terminology recommended for writing about hydrologic and water quality model calibration and val...

  15. Applying the General Linear Model to Repeated Measures Problems.

    Science.gov (United States)

    Pohlmann, John T.; McShane, Michael G.

    The purpose of this paper is to demonstrate the use of the general linear model (GLM) in problems with repeated measures on a dependent variable. Such problems include pretest-posttest designs, multitrial designs, and groups by trials designs. For each of these designs, a GLM analysis is demonstrated wherein full models are formed and restrictions…
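
The full-versus-restricted comparison underlying the GLM approach can be illustrated with ordinary least squares: fit both models, then test the restriction with an F statistic. The pretest-posttest design below is synthetic and the helper names are ours:

```python
import numpy as np

def f_test(X_full, X_restricted, y):
    """GLM model-comparison F statistic: fit the full and the
    restricted linear model, then compare residual sums of squares."""
    def fit(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid, X.shape[1]
    sse_f, p_f = fit(X_full)
    sse_r, p_r = fit(X_restricted)
    df1, df2 = p_f - p_r, len(y) - p_f
    return ((sse_r - sse_f) / df1) / (sse_f / df2)

# synthetic pretest-posttest design with a genuine group effect
rng = np.random.default_rng(2)
n = 60
pre = rng.normal(50.0, 10.0, n)
group = np.repeat([0.0, 1.0], n // 2)
post = 5.0 + 0.8 * pre + 6.0 * group + rng.normal(0.0, 3.0, n)
X_full = np.column_stack([np.ones(n), pre, group])    # with group term
X_restricted = np.column_stack([np.ones(n), pre])     # group term removed
F = f_test(X_full, X_restricted, post)
```

A large F rejects the restriction, i.e. the group effect survives after adjusting for the pretest.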

  16. Community Mobilization Model Applied to Support Grandparents Raising Grandchildren

    Science.gov (United States)

    Miller, Jacque; Bruce, Ann; Bundy-Fazioli, Kimberly; Fruhauf, Christine A.

    2010-01-01

    This article discusses the application of a community mobilization model through a case study of one community's response to address the needs of grandparents raising grandchildren. The community mobilization model presented is one that is replicable in addressing diverse community identified issues. Discussed is the building of the partnerships,…

  17. Modeling diffuse pollution with a distributed approach.

    Science.gov (United States)

    León, L F; Soulis, E D; Kouwen, N; Farquhar, G J

    2002-01-01

    The transferability of parameters for non-point source pollution models to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse pollution modeling. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. The distributed approach for the water quality model for diffuse pollution in agricultural watersheds is described in this paper. Integrating the model with data extracted using GIS technology (Geographical Information Systems) for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve non-point source modeling at the watershed scale level.
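
The group response unit idea summarized above computes runoff and load per land-cover class, weights each class by its area fraction, and sums the results before routing downstream. A schematic for one grid cell (hypothetical class names and numbers, not WATFLOOD code):

```python
def grid_cell_response(areas, runoff_mm, conc_mg_l):
    """Group response unit aggregation for one grid cell: per-class
    runoff and nutrient load, weighted by class area fraction."""
    total = sum(areas.values())
    runoff = 0.0
    load = 0.0
    for cls, area in areas.items():
        frac = area / total
        q = runoff_mm[cls] * frac      # area-weighted runoff depth (mm)
        runoff += q
        load += q * conc_mg_l[cls]     # relative nutrient load
    return runoff, load

# illustrative land-cover classes for one cell
areas = {"forest": 6.0, "cropland": 3.0, "urban": 1.0}        # km^2
runoff_mm = {"forest": 10.0, "cropland": 25.0, "urban": 40.0}
conc_mg_l = {"forest": 0.5, "cropland": 3.0, "urban": 1.5}
runoff, load = grid_cell_response(areas, runoff_mm, conc_mg_l)
```

Because each class responds with its own parameters, the same class parameters can be reused in an uncalibrated watershed, which is the portability argument made in the abstract.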

  18. Mathematic simulation of soil-vegetation condition and land use structure applying basin approach

    Science.gov (United States)

    Mishchenko, Natalia; Shirkin, Leonid; Krasnoshchekov, Alexey

    2016-04-01

    Anthropogenic transformation of ecosystems is essentially connected to changes in land use structure and to human impact on soil fertility. The research objective is to simulate the stationary state of river basin ecosystems. Materials and Methods. A basin approach has been applied in the research. Basins of small rivers of the Klyazma river system, situated in the central part of the Russian Plain, have been chosen as research objects. The analysis is carried out using integrated characteristics of ecosystem functioning and mathematical simulation methods. The simulator was designed using functional simulation methods and principles based on regression, correlation and factor analysis. Results. Mathematical simulation defined the possible stationary states of the "phytocenosis-soil" system in coordinates of phytomass, phytoproductivity and soil humus content. Ecosystem productivity is determined not only by vegetation photosynthetic activity but also by the area ratio of forest and meadow phytocenoses. Local maxima associated with certain phytomass values and soil humus contents were identified on the basin phytoproductivity distribution diagram. We explain these local maxima by a synergetic effect that appears at a definite ratio of forest to meadow phytocenoses: in this case the extreme phytomass values for the whole area are higher than the sum of the extreme phytomass values for the forest and meadow phytocenoses taken separately. An efficient ratio of natural forest and meadow phytocenoses has been defined for the Klyazma river basin. Conclusion. Mathematical simulation methods assist in forecasting ecosystem conditions under various changes of land use structure. Overgrowing of abandoned agricultural lands is currently a pressing issue in the Russian Federation. Simulation results demonstrate that the natural ratio of forest and meadow phytocenoses will be restored as abandoned agricultural land becomes overgrown.

  19. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    Full Text Available We extend the nonstandard finite difference method of solution to the study of pharmacokinetic–pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating the complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and a validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
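
The flavor of an NSFD scheme can be shown on the simplest pharmacokinetic building block, first-order elimination dC/dt = -kC. Replacing the step size h by the denominator function phi(h) = (1 - exp(-k*h))/k yields a scheme that stays positive and stable for any step size, unlike forward Euler (this illustrates the NSFD idea only, not the paper's full PK-PD system):

```python
import math

def nsfd_decay(c0, k, h, steps):
    """NSFD scheme for dC/dt = -k*C: the step h in the standard
    scheme is replaced by the denominator function
    phi(h) = (1 - exp(-k*h))/k, keeping the discrete model
    positive and stable for any h."""
    phi = (1.0 - math.exp(-k * h)) / k
    c, out = c0, [c0]
    for _ in range(steps):
        c = c - k * phi * c
        out.append(c)
    return out

def euler_decay(c0, k, h, steps):
    """Standard forward Euler scheme, for comparison."""
    c, out = c0, [c0]
    for _ in range(steps):
        c = c - k * h * c
        out.append(c)
    return out

# at a large step (k*h = 2.5) Euler oscillates into negative
# concentrations while the NSFD iterate equals exp(-k*h) per step
ns = nsfd_decay(100.0, k=1.0, h=2.5, steps=4)
eu = euler_decay(100.0, k=1.0, h=2.5, steps=4)
```

For this linear test equation the NSFD update factor is 1 - k*phi(h) = exp(-k*h), so the scheme reproduces the exact solution at the grid points; this is the dynamic consistency the abstract refers to.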

  20. On applying the extended intrinsic mean spin tensor to modelling the turbulence in non-inertial frames of reference

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Modelling the turbulent flows in non-inertial frames of reference has long been a challenging task. Recently we introduced the notion of the "extended intrinsic mean spin tensor" for turbulence modelling and pointed out that, when applying the Reynolds stress models developed in the inertial frame of reference to modelling the turbulence in a non-inertial frame of reference, the mean spin tensor should be replaced by the extended intrinsic mean spin tensor to correctly account for the rotation effects induced by the non-inertial frame of reference, to conform in physics with the Reynolds stress transport equation. To exemplify the approach, we conducted numerical simulations of the fully developed turbulent channel flow in a rotating frame of reference by employing four non-linear K-ε models. Our numerical results based on this approach at a wide range of Reynolds and Rossby numbers evince that, among the models tested, the non-linear K-ε model of Huang and Ma and the non-linear K-ε model of Craft, Launder and Suga can better capture the rotation effects and the resulting influence on the structures of turbulence, and therefore are satisfactorily applied to dealing with the turbulent flows of practical interest in engineering. The general approach worked out in this paper is also applied to the second-moment closure and the large-eddy simulation of turbulence.

  1. Applying a learning design methodology in the flipped classroom approach – empowering teachers to reflect

    DEFF Research Database (Denmark)

    Triantafyllou, Evangelia; Kofoed, Lise; Purwins, Hendrik

    2016-01-01

    One of the recent developments in teaching that heavily relies on current technology is the “flipped classroom” approach. In a flipped classroom the traditional lecture and homework sessions are inverted. Students are provided with online material in order to gain necessary knowledge before class...... through flipped classroom designs. In order to discuss the opportunities arising by this approach, the different components of the Learning Design – Conceptual Map (LD-CM) are presented and examined in the context of the flipped classroom. It is shown that viewing the flipped classroom through the lens......, tools and resources used in specific flipped classroom models, and it can make educators more aware of the decisions that have to be taken and people who have to be involved when designing a flipped classroom. By using the LD-CM, this paper also draws attention to the importance of characteristics...

  2. A systematic approach for fine-tuning of fuzzy controllers applied to WWTPs

    DEFF Research Database (Denmark)

    Ruano, M.V.; Ribes, J.; Sin, Gürkan;

    2010-01-01

    A systematic approach for fine-tuning fuzzy controllers has been developed and evaluated for an aeration control system implemented in a WWTP. The challenge with the application of fuzzy controllers to WWTPs is simply that they contain many parameters, which need to be adjusted for different WWTP...... applications. To this end, a methodology based on model simulations is used that employs three statistical methods: (i) Monte-Carlo procedure: to find proper initial conditions, (ii) Identifiability analysis: to find an identifiable parameter subset of the fuzzy controller and (iii) minimization algorithm......: to fine-tune the identifiable parameter subset of the controller. Indeed, the initial location found by Monte-Carlo simulations provided better results than using a trial and error approach when identifying parameters of the fuzzy controller. The identifiable subset was reduced to 4 parameters from a total...

  3. MODULAR APPROACH WITH ROUGH DECISION MODELS

    Directory of Open Access Journals (Sweden)

    Ahmed T. Shawky

    2012-09-01

    Full Text Available Decision models which adopt rough set theory have been used effectively in many real world applications. However, rough decision models suffer from high computational complexity when dealing with datasets of huge size. In this research we propose a new rough decision model that allows making decisions based on a modularity mechanism. According to the proposed approach, large-size datasets can be divided into arbitrary moderate-size datasets, and a group of rough decision models can then be built as separate decision modules. The overall model decision is computed as the consensus decision of all decision modules through some aggregation technique. This approach provides a flexible and quick way of extracting decision rules from large-size information tables using rough decision models.
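
The modular mechanism described above (split a large decision table into moderate-size chunks, build one decision module per chunk, aggregate by consensus) can be sketched as follows; the nearest-row "module" is a deliberately crude stand-in for a rough-set rule base:

```python
from collections import Counter

def split_into_modules(rows, module_size):
    """Divide a large decision table into moderate-size chunks,
    one chunk per decision module."""
    return [rows[i:i + module_size] for i in range(0, len(rows), module_size)]

def module_decision(rows, sample):
    """Toy decision module: answer with the label of the most
    similar row in this module's chunk (a crude stand-in for
    rough-set decision rules)."""
    best = max(rows, key=lambda r: sum(a == b for a, b in zip(r[0], sample)))
    return best[1]

def consensus_decision(modules, sample):
    """Overall model decision: majority vote over the modules."""
    votes = Counter(module_decision(m, sample) for m in modules)
    return votes.most_common(1)[0][0]

# rows are (attribute tuple, decision) pairs
rows = [((1, 0), "yes"), ((0, 1), "no"), ((1, 1), "yes"), ((0, 0), "no")]
modules = split_into_modules(rows, module_size=2)
decision = consensus_decision(modules, (1, 0))
```

The point of the design is that each module only ever touches its own moderate-size chunk, so the expensive rule-extraction step never runs on the full table.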

  5. Applying the Fraud Triangle Model to the Global Credit Crisis

    Directory of Open Access Journals (Sweden)

    Ranulph Day

    2010-03-01

    Full Text Available The premise of this paper is that the unwarranted corporate collapses and failures which occurred during the, currently ongoing, ‘credit crisis’ arise from failures in the decision making processes of the organisation. This paper is written primarily from a legal corporate governance perspective and looks at how the law could allow, what in hindsight appears to be, staggering follies. As such this paper is focussed on the microeconomics of the debacle rather than upon the macroeconomic triggers. The rationale for this approach is that the law cannot regulate behaviour on a mass scale, law acts against the individual rather than the group. There are various reasons for this assumption ranging from the necessity of justice and fairness to practical logistics. However it is the working assumption of this paper that for law to be effective it has to act against individuals and so can only pursue a microeconomic approach.

  6. Automatic spline-smoothing approach applied to denoise Moroccan resistivity data phosphate deposit “disturbances” map

    Directory of Open Access Journals (Sweden)

    Saad Bakkali

    2010-04-01

    Full Text Available This paper focuses on presenting a method which is able to filter out noise and suppress outliers of sampled real functions under fairly general conditions. The automatic optimal spline-smoothing approach automatically determines how a cubic spline should be adjusted in a least-squares optimal sense from an a priori selection of the number of points defining an adjusting spline, but not their location on that curve. The method is fast and easily allows for selecting several knots, thereby adding desirable flexibility to the procedure. As an illustration, we apply the AOSSA method to a Moroccan resistivity data phosphate deposit "disturbances" map. The AOSSA smoothing method is an efficient tool in interpreting geophysical potential field data and is particularly suitable for denoising, filtering and analysing resistivity data singularities. The AOSSA smoothing and filtering approach was found to be consistently useful when applied to modeling surface phosphate "disturbances".
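
A least-squares smoothing spline of the kind described, where the user fixes only the number of knots, can be written compactly. In this sketch the knot placement is simply uniform (a simplifying assumption; AOSSA optimizes it) and the basis is the truncated-power cubic basis:

```python
import numpy as np

def smoothing_spline(x, y, n_knots):
    """Least-squares cubic spline with n_knots evenly spaced interior
    knots: the spline coefficients are fitted in a least-squares
    sense, which filters noise and suppresses outliers."""
    knots = np.linspace(x.min(), x.max(), n_knots + 2)[1:-1]
    def basis(t):
        cols = [np.ones_like(t), t, t ** 2, t ** 3]
        # truncated-power terms switch on past each knot
        cols += [np.clip(t - k, 0.0, None) ** 3 for k in knots]
        return np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(basis(x), y, rcond=None)
    return lambda t: basis(np.asarray(t, dtype=float)) @ coef

# noisy samples of a smooth function
rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 200)
clean = np.sin(x)
y = clean + rng.normal(0.0, 0.3, x.size)
fit = smoothing_spline(x, y, n_knots=6)
smooth = fit(x)
```

With far fewer basis functions than data points, the least-squares fit cannot chase individual noisy samples, so the recovered curve is much closer to the underlying signal than the raw data.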

  7. A new HBV-model applied to an arctic watershed

    Energy Technology Data Exchange (ETDEWEB)

    Bruland, O.

    1995-12-31

    This paper describes the HBV-model, which was developed in the Nordic joint venture project "Climate change and energy production". The HBV-model is a precipitation-runoff model made mainly to create runoff forecasts for hydroelectric power plants. The model has been tested in an arctic watershed, the Bayelva drainage basin at Svalbard. The model was calibrated by means of data for the period 1989-1993 and tested on data for the period 1974-1978. For both periods, snow melt, rainfall and glacier melt events are well predicted. The largest disagreement between observed and simulated runoff occurred on warm days with heavy rain. This may be due to the precipitation measurements, which may not be representative for such events. Measurements show a larger negative glacier mass balance than the simulated one, although the parameters controlling the glacier melt in the model are set high. Glacier mass balance simulations in which the temperature index depends on albedo and radiation are more correct and improve model efficiency. 5 refs., 4 figs., 1 table

  8. Modeling approach suitable for energy system

    Energy Technology Data Exchange (ETDEWEB)

    Goetschel, D. V.

    1979-01-01

    Recently increased attention has been placed on optimization problems related to the determination and analysis of operating strategies for energy systems. Presented in this paper is a nonlinear model that can be used in the formulation of certain energy-conversion systems-modeling problems. The model lends itself nicely to solution approaches based on nonlinear-programming algorithms and, in particular, to those methods falling into the class of variable metric algorithms for nonlinearly constrained optimization.

  9. Blue sky catastrophe as applied to modeling of cardiac rhythms

    Science.gov (United States)

    Glyzin, S. D.; Kolesov, A. Yu.; Rozov, N. Kh.

    2015-07-01

    A new mathematical model for the electrical activity of the heart is proposed. The model represents a special singularly perturbed three-dimensional system of ordinary differential equations with one fast and two slow variables. A characteristic feature of the system is that its solution performs nonclassical relaxation oscillations and simultaneously undergoes a blue sky catastrophe bifurcation. Both these factors make it possible to achieve a phenomenological proximity between the time dependence of the fast component in the model and an ECG of the human heart.

  10. Simple queueing model applied to the city of Portland

    Energy Technology Data Exchange (ETDEWEB)

    Simon, P.M.; Nagel, K. [Los Alamos National Lab., NM (United States)]|[Santa Fe Inst., NM (United States)

    1998-07-31

    The authors present a simple traffic micro-simulation model that models the effects of capacity cut-off, i.e. the effect of queue build-up when demand exceeds capacity, and queue spillback, i.e. the effect that queues can spill back across intersections when a congested link is filled up. They derive the model's fundamental diagrams and explain them. The simulation is used to simulate traffic on the emme/2 network of the Portland (Oregon) metropolitan region (20,000 links). Demand is generated by a simplified home-to-work assignment which generates about half a million trips for the AM peak. Route assignment is done by iterative feedback between micro-simulation and router. Relaxation of the route assignment for the above problem can be achieved within about half a day of computing time on a desktop workstation.
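
The two effects named in the abstract, capacity cut-off and queue spillback, take only a few lines in a queueing link model. This sketch is our reconstruction of the general idea, not the authors' simulator: each link releases at most capacity*dt vehicles per step, and never more than the downstream link can store:

```python
def step(links, dt=1.0):
    """One update of a queueing link model. Each link discharges at
    most capacity*dt vehicles per step (capacity cut-off), and never
    more than the free storage left on its downstream link, so a
    full link blocks upstream traffic (queue spillback)."""
    for link in links:                      # ordered downstream-first
        down = link["to"]
        flow = min(link["queue"], link["capacity"] * dt)
        if down is not None:
            free = max(down["space"] - down["queue"], 0.0)
            flow = min(flow, free)          # spillback constraint
            down["queue"] += flow
        link["queue"] -= flow               # exit links simply drain

# a slow exit link with little storage throttles the link feeding it
a = {"queue": 0.0, "capacity": 2.0, "space": 5.0, "to": None}
b = {"queue": 30.0, "capacity": 8.0, "space": 40.0, "to": a}
links = [a, b]
for _ in range(2):
    step(links)
```

After two steps, link b has discharged only 7 of its 30 vehicles even though its own capacity would allow 16: the bottleneck propagates upstream through the storage constraint, which is exactly the spillback effect.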

  11. Lithospheric structure models applied for locating the Romanian seismic events

    Directory of Open Access Journals (Sweden)

    V. Oancea

    1994-06-01

    Full Text Available The paper presents our attempts to improve the locations obtained for local seismic events, using refined lithospheric structure models. The location program (based on the Geiger method) supposes a known model. The program is run for seismic sequences which occurred in different regions of the Romanian territory, using for each of the sequences three velocity models: (1) 7 layers with constant seismic wave velocities, as an average structure of the lithosphere for the whole territory; (2) site-dependent structure (below each station), based on geophysical and geological information on the crust; (3) curves describing the dependence of propagation velocities on depth in the lithosphere, characterizing the 7 structural units delineated on the Romanian territory. The results obtained using the different velocity models are compared. Station corrections are computed for each data set. Finally, the locations determined for some quarry blasts are compared with the real ones.

  12. Pressure Sensitive Paint Applied to Flexible Models Project

    Science.gov (United States)

    Schairer, Edward T.; Kushner, Laura Kathryn

    2014-01-01

    One gap in current pressure-measurement technology is a high-spatial-resolution method for accurately measuring pressures on spatially and temporally varying wind-tunnel models such as Inflatable Aerodynamic Decelerators (IADs), parachutes, and sails. Conventional pressure taps only provide sparse measurements at discrete points and are difficult to integrate with the model structure without altering structural properties. Pressure Sensitive Paint (PSP) provides pressure measurements with high spatial resolution, but its use has been limited to rigid or semi-rigid models. Extending the use of PSP from rigid surfaces to flexible surfaces would allow direct, high-spatial-resolution measurements of the unsteady surface pressure distribution. Once developed, this new capability will be combined with existing stereo photogrammetry methods to simultaneously measure the shape of a dynamically deforming model in a wind tunnel. Presented here are the results and methodology for using PSP on flexible surfaces.
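
    Although the paper's contribution is extending PSP to flexible surfaces, the underlying intensity-to-pressure conversion is commonly the Stern-Volmer relation; a sketch with placeholder calibration coefficients (not a real paint calibration):

```python
def psp_pressure(i_ratio, a=0.15, b=0.85, p_ref=101325.0):
    """Stern-Volmer conversion for pressure-sensitive paint:
    I_ref / I = A + B * (P / P_ref), solved here for P.
    The coefficients a and b are illustrative placeholders; a real paint
    is calibrated so that a + b = 1 at the reference condition."""
    return p_ref * (i_ratio - a) / b

# A wind-off/wind-on intensity ratio of 1 recovers the reference pressure.
p = psp_pressure(1.0)
```

    Higher intensity ratios (more luminescence quenching) map to higher pressures, which is why the monotone calibration can be inverted pixel by pixel.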

  13. Joint regression analysis and AMMI model applied to oat improvement

    Science.gov (United States)

    Oliveira, A.; Oliveira, T. A.; Mejza, S.

    2012-09-01

    In our work we present an application of some biometrical methods useful in genotype stability evaluation, namely the AMMI model, Joint Regression Analysis (JRA), and multiple comparison tests. A genotype stability analysis of oat (Avena sativa L.) grain yield was carried out using data from the Portuguese Plant Breeding Board: a sample of 22 different genotypes grown during the years 2002, 2003, and 2004 at six locations. In Ferreira et al. (2006) the authors state the relevance of regression models and of the Additive Main Effects and Multiplicative Interactions (AMMI) model for studying and estimating phenotypic stability effects. As computational techniques we use the Zigzag algorithm to estimate the regression coefficients and the agricolae package available in R for the AMMI model analysis.
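
    The AMMI decomposition mentioned above fits additive genotype and environment main effects, then applies a singular value decomposition to the interaction residuals. A minimal numpy sketch on random synthetic yields (22 genotypes by 6 locations mirrors the study's layout; the numbers themselves are invented):

```python
import numpy as np

def ammi(Y, n_components=2):
    """AMMI decomposition of a genotype-by-environment yield table:
    grand mean + main effects + SVD of the interaction residuals."""
    grand = Y.mean()
    g_eff = Y.mean(axis=1) - grand          # genotype main effects
    e_eff = Y.mean(axis=0) - grand          # environment main effects
    resid = Y - grand - g_eff[:, None] - e_eff[None, :]
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)
    # genotype and environment scores of the leading multiplicative terms
    g_scores = U[:, :n_components] * np.sqrt(s[:n_components])
    e_scores = Vt[:n_components].T * np.sqrt(s[:n_components])
    return grand, g_eff, e_eff, g_scores, e_scores, s

rng = np.random.default_rng(0)
Y = rng.normal(5.0, 1.0, size=(22, 6))      # 22 genotypes x 6 locations
grand, g_eff, e_eff, gs, es, s = ammi(Y)
# low-rank reconstruction of the table from the two leading interaction terms
recon = grand + g_eff[:, None] + e_eff[None, :] + gs @ es.T
```

    Genotypes with small interaction scores (rows of `gs` near zero) are the stable ones in the AMMI sense.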

  14. Applying Functional Modeling for Accident Management of Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper investigates applications of functional modeling for accident management in complex industrial plants, with special reference to nuclear power production. Main applications for information sharing among decision makers and decision support are identified. An overview of Multilevel Flow...

  15. Tensegrity applied to modelling the motion of viruses

    Institute of Scientific and Technical Information of China (English)

    Cretu Simona-Mariana; Brinzan Gabriela-Catalina

    2011-01-01

    A considerable number of virus structures have been discovered, and more are expected to be identified. Different virus symmetries can be observed at the nanoscale. The mechanical models of some viruses realised by scientists are described in this paper, none of which has taken into consideration the internal deformation of subsystems. The authors' models for some virus elements are introduced, with rigid and flexible links, which reproduce the movements of viruses, including internal deformations of the subunits.

  16. Availability modeling methodology applied to solar power systems

    Science.gov (United States)

    Unione, A.; Burns, E.; Husseiny, A.

    1981-01-01

    Availability is discussed as a measure for estimating the expected performance for solar- and wind-powered generation systems and for identifying causes of performance loss. Applicable analysis techniques, ranging from simple system models to probabilistic fault tree analysis, are reviewed. A methodology incorporating typical availability models is developed for estimating reliable plant capacity. Examples illustrating the impact of design and configurational differences on the expected capacity of a solar-thermal power plant with a fossil-fired backup unit are given.
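
    In the simplest case, the availability bookkeeping behind such models reduces to steady-state availability and redundancy formulas. A sketch with made-up MTBF/MTTR figures (hours), illustrating a solar unit backed by a fossil-fired unit:

```python
def availability(mtbf, mttr):
    """Steady-state availability of a repairable unit."""
    return mtbf / (mtbf + mttr)

def parallel(*a):
    """Availability of redundant units: the system fails only if all fail."""
    q = 1.0
    for ai in a:
        q *= (1.0 - ai)
    return 1.0 - q

# Illustrative (invented) figures for a solar-thermal unit with fossil backup.
a_solar = availability(mtbf=400.0, mttr=100.0)    # 0.8
a_backup = availability(mtbf=900.0, mttr=100.0)   # 0.9
a_system = parallel(a_solar, a_backup)            # 1 - 0.2 * 0.1 = 0.98
```

    The paper's fault-tree analyses refine this by attributing the unavailability to specific component failure modes rather than a single MTBF per unit.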

  17. A model of provenance applied to biodiversity datasets

    OpenAIRE

    Amanqui, Flor K; De Nies, Tom; Dimou, Anastasia; Verborgh, Ruben; Mannens, Erik; Van De Walle, Rik; Moreira, Dilvan

    2016-01-01

    Nowadays, the Web has become one of the main sources of biodiversity information. An increasing number of biodiversity research institutions add new specimens and their related information to their biological collections and make this information available on the Web. However, mechanisms which are currently available provide insufficient provenance of biodiversity information. In this paper, we propose a new biodiversity provenance model extending the W3C PROV Data Model. Biodiversity data is...

  18. Fleet Replacement Squadron consolidation : a cost model applied.

    OpenAIRE

    Maholchic, Robert M.

    1991-01-01

    The consolidation of Fleet Replacement Squadrons (FRS) represents one method of achieving planned force reductions. This thesis utilizes the Cost of Base Realignment Actions (COBRA) cost model to develop cost estimates for determination of the cost effective site location. The A-6 FRS consolidation is used as a case study. Data were compiled using completed Functional Wing studies as well as local information sources. A comparison between the cost estimates provided by the COBRA cost model fo...

  19. The J3 SCR model applied to resonant converter simulation

    Science.gov (United States)

    Avant, R. L.; Lee, F. C. Y.

    1985-01-01

    The J3 SCR model is a continuous topology computer model for the SCR. Its circuit analog and parameter estimation procedure are uniformly applicable to popular computer-aided design and analysis programs such as SPICE2 and SCEPTRE. The circuit analog is based on the intrinsic three pn junction structure of the SCR. The parameter estimation procedure requires only manufacturer's specification sheet quantities as a data base.

  20. Similarity transformation approach to identifiability analysis of nonlinear compartmental models.

    Science.gov (United States)

    Vajda, S; Godfrey, K R; Rabitz, H

    1989-04-01

    Through use of the local state isomorphism theorem instead of the algebraic equivalence theorem of linear systems theory, the similarity transformation approach is extended to nonlinear models, resulting in finitely verifiable sufficient and necessary conditions for global and local identifiability. The approach requires testing of certain controllability and observability conditions, but in many practical examples these conditions prove very easy to verify. In principle the method also involves nonlinear state variable transformations, but in all of the examples presented in the paper the transformations turn out to be linear. The method is applied to an unidentifiable nonlinear model and a locally identifiable nonlinear model, and these are the first nonlinear models other than bilinear models where the reason for lack of global identifiability is nontrivial. The method is also applied to two models with Michaelis-Menten elimination kinetics, both of considerable importance in pharmacokinetics, and for both of which the complicated nature of the algebraic equations arising from the Taylor series approach has hitherto defeated attempts to establish identifiability results for specific input functions.
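
    A crude numerical stand-in for such an identifiability test (not the similarity transformation method itself) is to check the rank of the output sensitivity matrix for a one-compartment model with Michaelis-Menten elimination; all parameter values below are invented:

```python
import numpy as np

def simulate(vmax, km, x0=10.0, dt=0.01, n=2000):
    """Forward-Euler simulation of one-compartment Michaelis-Menten
    elimination, dx/dt = -vmax * x / (km + x), observing y = x."""
    x, ys = x0, []
    for _ in range(n):
        x += dt * (-vmax * x / (km + x))
        ys.append(x)
    return np.array(ys)

def local_identifiability_rank(theta, eps=1e-4):
    """Rank of the finite-difference output sensitivity matrix dy/dtheta;
    full rank suggests (but does not prove) local identifiability."""
    base = simulate(*theta)
    S = []
    for i in range(len(theta)):
        pert = list(theta)
        pert[i] += eps
        S.append((simulate(*pert) - base) / eps)
    S = np.array(S).T                 # time points x parameters
    return np.linalg.matrix_rank(S, tol=1e-8)

rank = local_identifiability_rank([2.0, 5.0])   # (vmax, km)
```

    Unlike the similarity transformation approach, a rank test of this kind is local and numerical; it cannot distinguish local from global identifiability.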

  1. Applying OGC Standards to Develop a Land Surveying Measurement Model

    Directory of Open Access Journals (Sweden)

    Ioannis Sofos

    2017-02-01

    Full Text Available The Open Geospatial Consortium (OGC) is committed to developing quality open standards for the global geospatial community, thus enhancing the interoperability of geographic information. In the domain of sensor networks, the Sensor Web Enablement (SWE) initiative has been developed to define the necessary context by introducing modeling standards, like ‘Observation & Measurement’ (O&M), and services providing interaction, like the ‘Sensor Observation Service’ (SOS). Land surveying measurements, on the other hand, comprise a domain where observation information structures and services have not been aligned to the OGC observation model. In this paper, an OGC-compatible model for land surveying observations, aligned to the ‘Observation and Measurements’ standard, has been developed and discussed. Furthermore, a case study instantiates the above model, and an SOS implementation has been developed based on the 52°North SOS platform. Finally, a visualization schema is used to produce ‘Web Map Service’ (WMS) observation maps. Even though there are elements that differentiate this work from classic ‘O&M’ modeling cases, the proposed model and flows are developed in order to bring the benefits of standardizing land surveying measurement data (cost reduction through reusability, higher precision, data fusion of multiple sources, access to a raw-observation spatiotemporal repository, development of Measurement-Based GIS (MBGIS)) to the geoinformation community.

  2. Stormwater infiltration trenches: a conceptual modelling approach.

    Science.gov (United States)

    Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare

    2009-01-01

    In recent years, limitations linked to traditional urban drainage schemes have been pointed out, and new approaches are developing that introduce more natural methods for retaining and/or disposing of stormwater. These mitigation measures are generally called Best Management Practices or Sustainable Urban Drainage Systems, and they include practices such as infiltration and storage tanks intended to reduce the peak flow and retain part of the polluting components. The introduction of such practices in urban drainage systems entails an upgrade of existing modelling frameworks in order to evaluate their efficiency in mitigating the impact of urban drainage systems on receiving water bodies. While storage tank modelling approaches are quite well documented in the literature, gaps remain for infiltration facilities, mainly because of the complexity of the physical processes involved. In this study, a simplified conceptual modelling approach for the simulation of infiltration trenches is presented. The model enables assessment of the performance of infiltration trenches. The main goal is to develop a model that can be employed for assessing the mitigation efficiency of infiltration trenches in an integrated urban drainage context. Particular care was given to the simulation of infiltration structures, considering the performance reduction due to clogging phenomena. The proposed model has been compared with other simplified modelling approaches and with a physically based model adopted as a benchmark. The model performed better than the other approaches, both for unclogged facilities and when the effect of clogging is considered. On the basis of a long-term simulation with six years of rain data, the performance and effectiveness of an infiltration trench measure are assessed. The study confirmed the important role played by the clogging phenomenon in such infiltration structures.
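
    A conceptual infiltration-trench model of this kind can be sketched as a single storage reservoir whose infiltration rate decays with cumulative infiltrated volume to mimic clogging. All parameters below are illustrative, not the paper's calibration:

```python
def simulate_trench(inflow, k_sat=1e-5, area=20.0, s_max=10.0,
                    clog_rate=1e-6, dt=60.0):
    """Conceptual trench model: storage (m3) filled by inflow (m3/s per step)
    and drained by infiltration; the infiltration capacity k_sat*area is
    reduced by a clogging factor that decays with cumulative volume."""
    storage, cum_inf, overflow = 0.0, 0.0, 0.0
    for q_in in inflow:
        clog = 1.0 / (1.0 + clog_rate * cum_inf)       # clogging reduction
        q_inf = min(k_sat * clog * area, storage / dt + q_in)
        storage += (q_in - q_inf) * dt
        cum_inf += q_inf * dt
        if storage > s_max:                            # excess bypasses trench
            overflow += storage - s_max
            storage = s_max
    return storage, cum_inf, overflow

# A constant storm inflow that exceeds the infiltration capacity.
inflow = [0.01] * 100
storage, cum_inf, overflow = simulate_trench(inflow)
```

    Running this over a long rainfall series, as the paper does for six years of data, lets the clogging factor accumulate and the mitigation efficiency degrade realistically.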

  3. Challenges in structural approaches to cell modeling.

    Science.gov (United States)

    Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A

    2016-07-31

    Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Applying artificial vision models to human scene understanding

    Directory of Open Access Journals (Sweden)

    Elissa Michele Aminoff

    2015-02-01

    Full Text Available How do we understand the complex patterns of neural responses that underlie scene understanding? Studies of the network of brain regions held to be scene-selective – the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area (TOS) – have typically focused on single visual dimensions (e.g., size), rather than the high-dimensional feature space in which scenes are likely to be neurally represented. Here we leverage well-specified artificial vision systems to explicate a more complex understanding of how scenes are encoded in this functional network. We correlated similarity matrices within three different scene-spaces arising from: (1) BOLD activity in scene-selective brain regions; (2) behaviorally measured judgments of visually perceived scene similarity; and (3) several different computer vision models. These correlations revealed: (1) models that relied on mid- and high-level scene attributes showed the highest correlations with the patterns of neural activity within the scene-selective network; (2) NEIL and SUN – the models that best accounted for the patterns obtained from PPA and TOS – were different from the GIST model that best accounted for the pattern obtained from RSC; (3) the best performing models outperformed behaviorally measured judgments of scene similarity in accounting for neural data. One computer vision method – NEIL (Never-Ending-Image-Learner), which incorporates visual features learned as statistical regularities across web-scale numbers of scenes – showed significant correlations with neural activity in all three scene-selective regions and was one of the two models best able to account for variance in the PPA and TOS. We suggest that these results are a promising first step in explicating more fine-grained models of neural scene understanding, including developing a clearer picture of the division of labor among the components of the functional scene-selective brain network.
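
    The core operation here, correlating similarity matrices from different scene-spaces, is representational similarity analysis. A minimal sketch on synthetic feature matrices (standing in for model features and a noisier "neural" copy; the data are invented):

```python
import numpy as np

def rsa_correlation(rdm_a, rdm_b):
    """Representational similarity analysis sketch: correlate the upper
    triangles of two dissimilarity matrices, e.g. one derived from brain
    activity and one from a computer-vision model."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

rng = np.random.default_rng(2)
feat_model = rng.normal(size=(30, 50))            # 30 scenes x 50 features
feat_noisy = feat_model + 0.1 * rng.normal(size=(30, 50))
rdm_m = 1 - np.corrcoef(feat_model)               # scene-by-scene dissimilarity
rdm_n = 1 - np.corrcoef(feat_noisy)
r = rsa_correlation(rdm_m, rdm_n)
```

    In the study, each model's RDM is correlated against RDMs built from BOLD patterns in PPA, RSC, and TOS, and the models are ranked by these correlations.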

  5. On applying the extended intrinsic mean spin tensor to modelling the turbulence in non-inertial frames of reference

    Institute of Scientific and Technical Information of China (English)

    HUANG YuNing; MA HuiYang; XU JingLei

    2008-01-01

    Modelling the turbulent flows in non-inertial frames of reference has long been a challenging task. Recently we introduced the notion of the "extended intrinsic mean spin tensor" for turbulence modelling and pointed out that, when applying the Reynolds stress models developed in the inertial frame of reference to modelling the turbulence in a non-inertial frame of reference, the mean spin tensor should be replaced by the extended intrinsic mean spin tensor to correctly account for the rotation effects induced by the non-inertial frame of reference, to conform in physics with the Reynolds stress transport equation. To exemplify the approach, we conducted numerical simulations of the fully developed turbulent channel flow in a rotating frame of reference by employing four non-linear K-ε models. Our numerical results based on this approach at a wide range of Reynolds and Rossby numbers evince that, among the models tested, the non-linear K-ε model of Huang and Ma and the non-linear K-ε model of Craft, Launder and Suga can better capture the rotation effects and the resulting influence on the structures of turbulence, and therefore are satisfactorily applied to dealing with the turbulent flows of practical interest in engineering. The general approach worked out in this paper is also applied to the second-moment closure and the large-eddy simulation of turbulence.

  6. Building Water Models, A Different Approach

    CERN Document Server

    Izadi, Saeed; Onufriev, Alexey V

    2014-01-01

    Simplified, classical models of water are an integral part of atomistic molecular simulations, especially in biology and chemistry where hydration effects are critical. Yet, despite several decades of effort, these models are still far from perfect. Presented here is an alternative approach to constructing point charge water models - currently, the most commonly used type. In contrast to the conventional approach, we do not impose any geometry constraints on the model other than symmetry. Instead, we optimize the distribution of point charges to best describe the "electrostatics" of the water molecule, which is key to many unusual properties of liquid water. The search for the optimal charge distribution is performed in 2D parameter space of key lowest multipole moments of the model, to find best fit to a small set of bulk water properties at room temperature. A virtually exhaustive search is enabled via analytical equations that relate the charge distribution to the multipole moments. The resulting "optimal"...

  7. Shrinking core models applied to the sodium silicate production process

    Directory of Open Access Journals (Sweden)

    Stanković Mirjana S.

    2007-01-01

    Full Text Available The sodium silicate production process, with the molar ratio SiO2/Na2O = 2, for detergent zeolite 4A production, is based on quartz sand dissolving in NaOH aqueous solution, with a specific molality. It is a complex process performed at high temperature and pressure. It is of vital importance to develop adequate mathematical models, which are able to predict the dynamical response of the process parameters. A few kinetic models were developed within this study, which were adjusted and later compared to experimental results. It was assumed that SiO2 particles are smooth spheres, with uniform diameter. This diameter decreases during dissolving. The influence of particle diameter, working temperature and hydroxide ion molality on the dissolution kinetics was investigated. It was concluded that the developed models are sufficiently correct, in the engineering sense, and can be used for the dynamical prediction of process parameters.
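
    In the surface-reaction-controlled limit of the shrinking core model, the particle radius shrinks linearly in time and conversion follows a cubic law. A sketch with invented rate numbers (not the paper's fitted kinetics):

```python
def shrinking_sphere_conversion(t, r0, k):
    """Surface-reaction-controlled shrinking sphere: the radius shrinks
    linearly, r(t) = r0 - k*t, so conversion follows
    X = 1 - (1 - t/tau)**3 with tau = r0/k (complete dissolution time)."""
    tau = r0 / k
    if t >= tau:
        return 1.0
    return 1.0 - (1.0 - t / tau) ** 3

# Illustrative numbers: a 100 um quartz grain with a 1 um/min rate constant
# dissolves completely in tau = 100 min; at half that time X = 1 - 0.5**3.
x_half = shrinking_sphere_conversion(50.0, r0=100.0, k=1.0)
```

    The temperature and hydroxide-molality dependence studied in the paper would enter through the rate constant k (e.g. via an Arrhenius term), shifting tau but not the cubic shape.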

  8. Jellium-with-gap model applied to semilocal kinetic functionals

    Science.gov (United States)

    Constantin, Lucian A.; Fabiano, Eduardo; Śmiga, Szymon; Della Sala, Fabio

    2017-03-01

    We investigate a highly nonlocal generalization of the Lindhard function, given by the jellium-with-gap model. We find a band-gap-dependent gradient expansion of the kinetic energy, which performs noticeably well for large atoms. Using the static linear response theory and the simplest semilocal model for the local band gap, we derive a nonempirical generalized gradient approximation (GGA) of the kinetic energy. This GGA kinetic-energy functional is remarkably accurate for the description of weakly interacting molecular systems within the subsystem formulation of density functional theory.

  9. Combustion and flow modelling applied to the OMV VTE

    Science.gov (United States)

    Larosiliere, Louis M.; Jeng, San-Mou

    1990-01-01

    A predictive tool for hypergolic bipropellant spray combustion and flow evolution in the OMV VTE (orbital maneuvering vehicle variable thrust engine) is described. It encompasses a computational technique for the gas phase governing equations, a discrete particle method for liquid bipropellant sprays, and constitutive models for combustion chemistry, interphase exchanges, and unlike impinging liquid hypergolic stream interactions. Emphasis is placed on the phenomenological modelling of the hypergolic liquid bipropellant gasification processes. An application to the OMV VTE combustion chamber is given in order to show some of the capabilities and inadequacies of this tool.

  10. Mercury's geochronology revised by applying Model Production Functions to Mariner 10 data: geological implications

    CERN Document Server

    Massironi, M; Marchi, S; Martellato, M; Mottola, M; Wagner, R J

    2009-01-01

    Model Production Function chronology uses dynamic models of the Main Belt Asteroids (MBAs) and Near Earth Objects (NEOs) to derive the impactor flux to a target body. This is converted into the crater size-frequency distribution for a specific planetary surface, and calibrated using the radiometric ages of different regions of the Moon's surface. This new approach has been applied to the crater counts on Mariner 10 images of the highlands and of several large impact basins on Mercury. MPF estimates for the plains show younger ages than those of previous chronologies. Assuming a variable uppermost layering of the Hermean crust, the age of the Caloris interior plains may be as young as 3.59 Ga, in agreement with MESSENGER results implying that long-term volcanism overcame contractional tectonics. The MPF chronology also suggests a variable projectile flux through time, consistent with the MBAs for ancient periods and gradually becoming comparable to the NEOs more recently.

  11. A Decision-Making Model Applied to Career Counseling.

    Science.gov (United States)

    Olson, Christine; And Others

    1990-01-01

    A four-component model for career decision-making counseling relates each component to assessment questions and appropriate intervention strategies. The components are (1) conceptualization (definition of the problem); (2) enlargement of response repertoire (generation of alternatives); (3) identification of discriminative stimuli (consequences of…

  12. Modeling of diffuse molecular gas applied to HD 102065 observations

    CERN Document Server

    Nehme, Cyrine; Boulanger, Francois; Forets, Guillaume Pineau des; Gry, Cecile

    2008-01-01

    Aims. We model a diffuse molecular cloud present along the line of sight to the star HD 102065. We compare our modeling with observations to test our understanding of physical conditions and chemistry in diffuse molecular clouds. Methods. We analyze an extensive set of spectroscopic observations which characterize the diffuse molecular cloud observed toward HD 102065. Absorption observations provide the extinction curve, H2, C I, CO, CH, and CH+ column densities and excitation. These data are complemented by observations of CII, CO and dust emission. Physical conditions are determined using the Meudon PDR model of UV illuminated gas. Results. We find that all observational results, except column densities of CH, CH+ and H2 in its excited (J > 2) levels, are consistent with a cloud model implying a Galactic radiation field (G~0.4 in Draine's unit), a density of 80 cm-3 and a temperature (60-80 K) set by the equilibrium between heating and cooling processes. To account for excited (J >2) H2 levels column densit...

  13. Applied Bounded Model Checking for Interlocking System Designs

    DEFF Research Database (Denmark)

    Haxthausen, Anne Elisabeth; Peleska, Jan; Pinger, Ralf

    2014-01-01

    of behavioural (operational) semantics. The former checks that the plant model – that is, the software components reflecting the physical components of the interlocking system – has been set up in an adequate way. The latter investigates trains moving through the network, with the objective to uncover potential...

  14. The method of characteristics applied to analyse 2DH models

    NARCIS (Netherlands)

    Sloff, C.J.

    1992-01-01

    To gain insight into the physical behaviour of 2D hydraulic models (mathematically formulated as a system of partial differential equations), the method of characteristics is used to analyse the propagation of physical meaningful disturbances. These disturbances propagate as wave fronts along bichar

  15. Polarimetric SAR interferometry applied to land ice: modeling

    DEFF Research Database (Denmark)

    Dall, Jørgen; Papathanassiou, Konstantinos; Skriver, Henning

    2004-01-01

    This paper introduces a few simple scattering models intended for the application of polarimetric SAR interferometry to land ice. The principal aim is to eliminate the penetration bias hampering ice sheet elevation maps generated with single-channel SAR interferometry. The polarimetric coherent...

  16. Robust model identification applied to type 1 diabetes

    DEFF Research Database (Denmark)

    Finan, Daniel Aaron; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad;

    2010-01-01

    In many realistic applications, process noise is known to be neither white nor normally distributed. When identifying models in these cases, it may be more effective to minimize a different penalty function than the standard sum of squared errors (as in a least-squares identification method...

  17. Applying an Employee-Motivation Model to Prevent Student Plagiarism.

    Science.gov (United States)

    Malouff, John M.; Sims, Randi L.

    1996-01-01

    A model based on Vroom's expectancy theory of employee motivation posits that instructors can prevent plagiarism by ensuring that students understand the rules of ethical writing, expect assignments to be manageable and have personal benefits, and expect plagiarism to be difficult and have important personal costs. (SK)

  19. Dynamics Model Applied to Pricing Options with Uncertain Volatility

    Directory of Open Access Journals (Sweden)

    Lorella Fatone

    2012-01-01

    model is proposed. The data used to test the calibration problem included observations of asset prices over a finite set of (known) equispaced discrete time values. Statistical tests were used to estimate the statistical significance of the two parameters of the Black-Scholes model: the volatility and the drift. The effects of these estimates on the option pricing problem were investigated. In particular, the pricing of an option with uncertain volatility in the Black-Scholes framework was revisited, and a statistical significance was associated with the price intervals determined using the Black-Scholes-Barenblatt equations. Numerical experiments involving synthetic and real data were presented. The real data considered were the daily closing values of the S&P500 index and the associated European call and put option prices in the year 2005. The method proposed here for calibrating the Black-Scholes dynamics model could be extended to other science and engineering models that may be expressed in terms of stochastic dynamical systems.
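
    For a vanilla call the Black-Scholes price is monotone in volatility, so an uncertainty interval on sigma maps to a price interval bounded by the two extreme volatilities (in this convex-payoff case the Black-Scholes-Barenblatt bounds reduce to exactly that). A sketch with illustrative parameters, not the paper's S&P500 calibration:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Price interval for an uncertain volatility sigma in [0.15, 0.25].
lo = bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.15)
hi = bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.25)
```

    The paper's contribution is to attach a statistical significance to such intervals via estimates of the volatility and drift; for non-convex payoffs the full Black-Scholes-Barenblatt equations are needed instead of the two extreme volatilities.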

  20. A Spatial Lattice Model Applied for Meteorological Visualization and Analysis

    Directory of Open Access Journals (Sweden)

    Mingyue Lu

    2017-03-01

    Full Text Available Meteorological information has obvious spatial-temporal characteristics. Although it is meaningful to employ a geographic information system (GIS) to visualize and analyze meteorological information for better identification and forecasting of meteorological weather, so as to reduce meteorological disaster losses, modeling meteorological information based on a GIS is still difficult because meteorological elements generally have no stable shape or clear boundary. To date, there are still few GIS models that can satisfy the requirements of both meteorological visualization and analysis. In this article, a spatial lattice model based on sampling particles is proposed to support both the representation and analysis of meteorological information. In this model, a spatial sampling particle is regarded as the basic element: it contains the meteorological information and the location where the particle is placed, with a time mark. The location information is generally represented using a point. As these points can be extended to a surface in two dimensions and a voxel in three dimensions, if these surfaces and voxels can occupy a certain space, then this space can be represented using these spatial sampling particles with their point locations and meteorological information. In this case, the full meteorological space can then be represented by arranging numerous particles with their point locations in a certain structure and resolution, i.e., the spatial lattice model, and extended at a higher resolution when necessary. For practical use, the meteorological space is logically classified into three types of spaces, namely the projection surface space, curved surface space, and stereoscopic space, and application-oriented spatial lattice models with different organization forms of spatial sampling particles are designed to support the representation, inquiry, and analysis of meteorological information within the three types of surfaces. Cases
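
    The sampling-particle idea can be sketched as a lattice keyed by discretized location, level, and time. This minimal class and its attribute names are invented for illustration, not the paper's data structure:

```python
class SpatialLattice:
    """Toy sampling-particle lattice: each particle is a grid-indexed point
    carrying meteorological attributes together with a time mark."""

    def __init__(self, resolution):
        self.res = resolution          # grid spacing in degrees
        self.cells = {}

    def _key(self, lon, lat, level, time):
        return (round(lon / self.res), round(lat / self.res), level, time)

    def add(self, lon, lat, level, time, **attrs):
        self.cells[self._key(lon, lat, level, time)] = attrs

    def query(self, lon, lat, level, time):
        return self.cells.get(self._key(lon, lat, level, time))

grid = SpatialLattice(resolution=0.25)
grid.add(118.78, 32.04, 850, "2017-03-01T00Z", t_air=12.5, rh=0.81)
obs = grid.query(118.8, 32.0, 850, "2017-03-01T00Z")   # snaps to same cell
```

    Refining the lattice is just re-sampling with a smaller `resolution`, matching the paper's point that the particle arrangement can be extended to higher resolution when necessary.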

  1. Software sensors based on the grey-box modelling approach

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.; Strube, Rune

    1996-01-01

    In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box models may contain terms which can be estimated on-line by use of the models and measurements. In this paper, it is demonstrated that many storage basins in sewer systems can be used as an on-line flow measurement provided that the basin is monitored on-line with a level transmitter and that a grey-box
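
    The basin-as-flow-meter idea rests on the mass balance Q_in = A * dh/dt + Q_out. The sketch below uses a simple exponential filter as a stand-in for the model-based filtering of a full grey-box formulation, with invented basin dimensions:

```python
def flow_from_level(levels, area, q_out, dt=60.0, alpha=0.3):
    """Software-sensor sketch: infer basin inflow (m3/s) from level
    measurements (m) via the mass balance Q_in = A*dh/dt + Q_out, then
    smooth the raw estimate with an exponential filter (a crude stand-in
    for grey-box model-based filtering)."""
    est, q_filt = [], 0.0
    for h_prev, h_next in zip(levels, levels[1:]):
        q_raw = area * (h_next - h_prev) / dt + q_out
        q_filt = alpha * q_raw + (1.0 - alpha) * q_filt
        est.append(q_filt)
    return est

# Synthetic check: a constant 0.5 m3/s inflow into a 1000 m2 basin with a
# fixed 0.2 m3/s outflow produces a linear level rise.
dt, area, q_out, q_true = 60.0, 1000.0, 0.2, 0.5
levels = [(q_true - q_out) * dt * i / area for i in range(20)]
est = flow_from_level(levels, area, q_out, dt)
```

    A genuine grey-box sensor would replace the fixed filter with one derived from the model's stochastic terms, so the noise rejection is matched to the identified measurement noise.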

  2. Applying Transtheoretical Model to Promote Physical Activities Among Women

    Science.gov (United States)

    Pirzadeh, Asiyeh; Mostafavi, Firoozeh; Ghofranipour, Fazllolah; Feizi, Awat

    2015-01-01

    Background: Physical activity is one of the most important indicators of health in communities, but studies conducted in the provinces of Iran have shown that inactivity is prevalent, especially among women. Objectives: Inadequate regular physical activity among women, the importance of education in promoting physical activity, and the lack of studies on women using the transtheoretical model persuaded us to conduct this study, with the aim of determining the application of the transtheoretical model in promoting physical activity among the women of Isfahan. Materials and Methods: This research was a quasi-experimental study conducted on 141 women residing in Isfahan, Iran. They were randomly divided into case and control groups. In addition to the demographic information, their physical activities and the constructs of the transtheoretical model (stages of change, processes of change, decisional balance, and self-efficacy) were measured at 3 time points: preintervention, and 3 and 6 months after the intervention. Finally, the obtained data were analyzed with the t test and repeated-measures ANOVA using SPSS version 16. Results: Education based on the transtheoretical model significantly increased physical activity in the case group over time, in two respects: intensive physical activity and walking. Also, a high percentage of participants progressed through the stages of change, as did the means of the processes-of-change constructs and the decisional pros and cons. On the whole, a significant difference was observed over time in the case group (P < 0.01). Conclusions: This study showed that interventions based on the transtheoretical model can promote physical activity behavior among women. PMID:26834796

  3. Drifting model approach to modeling based on weighted support vector machines

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 宋春林; 邵惠鹤

    2004-01-01

    This paper proposes a novel drifting modeling (DM) method. Briefly, we first employ an improved SVM algorithm named weighted support vector machines (W_SVMs), which is suitable for local learning, and then propose the DM method based on this algorithm. By applying the proposed modeling method to a Fluidized Catalytic Cracking Unit (FCCU), simulation results show that the performance of the proposed approach is superior to that of a global modeling method based on standard SVMs.
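
    The local-learning idea above (give historical samples more weight the closer they lie to the current working point) can be sketched in plain Python. As a simple stand-in for the W_SVMs of the abstract, the sketch below fits a weighted least-squares line; the Gaussian kernel, bandwidth, and data are illustrative assumptions, not the authors' setup.

```python
import math

def local_weighted_fit(X, y, x0, bandwidth=1.0):
    """Fit a locally weighted line around the operating point x0.

    Samples are weighted by a Gaussian kernel of their distance to x0,
    so points near the current working point dominate the local model.
    """
    w = [math.exp(-((x - x0) ** 2) / (2 * bandwidth ** 2)) for x in X]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, X)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, X))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, X, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Historical data from a nonlinear process (y = x^2): one global line fits
# poorly, but a local model around x0 = 3 tracks the local slope (about 6).
X = [i * 0.5 for i in range(13)]          # 0.0 .. 6.0
y = [x ** 2 for x in X]
slope, intercept = local_weighted_fit(X, y, x0=3.0, bandwidth=0.5)
prediction = slope * 3.0 + intercept
```

    As the working point drifts, the weights are recomputed and a fresh local model is obtained, which is the essence of the drifting-model idea.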

  4. Synthesis of semantic modelling and risk analysis methodology applied to animal welfare.

    Science.gov (United States)

    Bracke, M B M; Edwards, S A; Metz, J H M; Noordhuizen, J P T M; Algers, B

    2008-07-01

    Decision-making on animal welfare issues requires a synthesis of information. For the assessment of farm animal welfare based on scientific information collected in a database, a methodology called 'semantic modelling' has been developed. To date, however, this methodology has not been generally applied. Recently, a qualitative Risk Assessment approach has been published by the European Food Safety Authority (EFSA) for the first time, concerning the welfare of intensively reared calves. This paper reports on a critical analysis of this Risk Assessment (RA) approach from a semantic-modelling (SM) perspective, emphasizing the importance of several seemingly self-evident principles, including the definition of concepts, application of explicit methodological procedures and specification of how underlying values and scientific information lead to the RA output. In addition, the need to include positive aspects of welfare and overall welfare assessments are emphasized. The analysis shows that the RA approach for animal welfare could benefit from SM methodology to support transparent and science-based decision-making.

  5. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
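
    The AIC mentioned above, AIC = 2k - 2 ln L-hat, reduces for least-squares fits with Gaussian errors to n ln(RSS/n) + 2k (up to an additive constant). A minimal sketch comparing two candidate models; the data and candidates are invented for illustration:

```python
import math

def aic_gaussian(rss, n, k):
    """AIC for a least-squares model with Gaussian errors:
    AIC = n * ln(RSS / n) + 2k, additive constants dropped."""
    return n * math.log(rss / n) + 2 * k

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

xs = list(range(10))
# Roughly linear data with a small fixed disturbance standing in for noise.
ys = [2 * x + 1 + 0.3 * (-1) ** x for x in xs]

# Candidate 1: constant-mean model (k = 1 parameter).
mean_y = sum(ys) / len(ys)
rss1 = sum((y - mean_y) ** 2 for y in ys)

# Candidate 2: straight line (k = 2 parameters).
a, b = fit_line(xs, ys)
rss2 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

aic1 = aic_gaussian(rss1, len(ys), 1)
aic2 = aic_gaussian(rss2, len(ys), 2)
best = "line" if aic2 < aic1 else "constant"
```

    The 2k term penalizes the extra parameter, so the line wins only because it reduces the residual sum of squares by far more than the penalty.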

  6. Bayesian network modeling applied to coastal geomorphology: lessons learned from a decade of experimentation and application

    Science.gov (United States)

    Plant, N. G.; Thieler, E. R.; Gutierrez, B.; Lentz, E. E.; Zeigler, S. L.; Van Dongeren, A.; Fienen, M. N.

    2016-12-01

    We evaluate the strengths and weaknesses of Bayesian networks that have been used to address scientific and decision-support questions related to coastal geomorphology. We will provide an overview of coastal geomorphology research that has used Bayesian networks and describe what this approach can do and when it works (or fails to work). Over the past decade, Bayesian networks have been formulated to analyze the multi-variate structure and evolution of coastal morphology and associated human and ecological impacts. The approach relates observable system variables to each other by estimating discrete correlations. The resulting Bayesian networks make predictions that propagate errors, conduct inference via Bayes rule, or both. In scientific applications, the model results are useful for hypothesis testing, using confidence estimates to gauge the strength of tests, while applications to coastal resource management are aimed at decision support, where the probabilities of desired ecosystem outcomes are evaluated. The range of Bayesian-network applications to coastal morphology includes emulation of high-resolution wave transformation models to make oceanographic predictions, morphologic response to storms and/or sea-level rise, groundwater response to sea-level rise and morphologic variability, habitat suitability for endangered species, and assessment of monetary or human-life risk associated with storms. All of these examples are based on vast observational data sets, numerical model output, or both. We will discuss the progression of our experiments, which has included testing whether the Bayesian-network approach can be implemented and is appropriate for addressing basic and applied scientific problems, and evaluating the hindcast and forecast skill of these implementations. We will present and discuss calibration/validation tests that are used to assess the robustness of Bayesian-network models, and we will compare these results to tests of other models.
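
    The core inference step of such a network, propagating an observation through a discrete conditional-probability table via Bayes rule, can be sketched as follows. The node names and probabilities are hypothetical, not taken from the studies above.

```python
def posterior(prior, likelihood, observation):
    """Bayes-rule update over discrete states:
    P(state | obs) is proportional to P(obs | state) * P(state)."""
    unnorm = {s: prior[s] * likelihood[s][observation] for s in prior}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

# Hypothetical two-state node: will a dune erode in the next storm?
prior = {"erodes": 0.3, "stable": 0.7}
# Conditional probabilities of the surge observation given each state
# (illustrative numbers only).
likelihood = {
    "erodes": {"high_surge": 0.8, "low_surge": 0.2},
    "stable": {"high_surge": 0.3, "low_surge": 0.7},
}
post = posterior(prior, likelihood, "high_surge")
```

    A full Bayesian network chains many such tables together, so evidence entered at one node updates the probabilities of every connected variable.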

  7. Methodology to characterize a residential building stock using a bottom-up approach: a case study applied to Belgium

    Directory of Open Access Journals (Sweden)

    Samuel Gendebien

    2014-06-01

    Full Text Available In the last ten years, the development and implementation of measures to mitigate climate change have become of major importance. In Europe, the residential sector accounts for 27% of the final energy consumption [1], and therefore contributes significantly to CO2 emissions. Roadmaps towards energy-efficient buildings have been proposed [2]. In such a context, the detailed characterization of residential building stocks in terms of age, type of construction, insulation level, energy vector, and of evolution prospects appears to be a useful contribution to the assessment of the impact of implementation of energy policies. In this work, a methodology to develop a tree-structure characterizing a residential building stock is presented in the frame of a bottom-up approach that aims to model and simulate domestic energy use. The methodology is applied to the Belgian case for the current situation and up to 2030 horizon. The potential applications of the developed tool are outlined.

  8. The Langmuir isotherm: a commonly applied but misleading approach for the analysis of protein adsorption behavior.

    Science.gov (United States)

    Latour, Robert A

    2015-03-01

    The Langmuir adsorption isotherm provides one of the simplest and most direct methods to quantify an adsorption process. Because isotherm data from protein adsorption studies often appear to be fit well by the Langmuir isotherm model, estimates of protein binding affinity have often been made from its use despite the fact that none of the conditions required for a Langmuir adsorption process may be satisfied for this type of application. The physical events that cause protein adsorption isotherms to often provide a Langmuir-shaped isotherm can be explained as being due to changes in adsorption-induced spreading, reorientation, clustering, and aggregation of the protein on a surface as a function of solution concentration in contrast to being due to a dynamic equilibrium adsorption process, which is required for Langmuir adsorption. Unless the requirements of the Langmuir adsorption process can be confirmed, fitting of the Langmuir model to protein adsorption isotherm data to obtain thermodynamic properties, such as the equilibrium constant for adsorption and adsorption free energy, may provide erroneous values that have little to do with the actual protein adsorption process, and should be avoided. In this article, a detailed analysis of the Langmuir isotherm model is presented along with a quantitative analysis of the level of error that can arise in derived parameters when the Langmuir isotherm is inappropriately applied to characterize a protein adsorption process.
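
    A sketch of the fit the abstract warns about: the classic linearized Langmuir regression, C/q = C/q_max + 1/(K*q_max), recovers (q_max, K) from isotherm data, but, as argued above, a good fit of the shape alone does not validate the Langmuir mechanism. The data here are synthetic.

```python
def fit_langmuir_linearized(C, q):
    """Estimate (q_max, K) from the linearized Langmuir form
    C/q = C/q_max + 1/(K*q_max) via ordinary least squares.
    Caution: a good shape fit does not by itself confirm that the
    Langmuir equilibrium assumptions hold for the system."""
    ys = [c / qi for c, qi in zip(C, q)]
    n = len(C)
    mx, my = sum(C) / n, sum(ys) / n
    num = sum((x - mx) * (yv - my) for x, yv in zip(C, ys))
    den = sum((x - mx) ** 2 for x in C)
    slope = num / den              # = 1 / q_max
    intercept = my - slope * mx    # = 1 / (K * q_max)
    return 1 / slope, slope / intercept

# Synthetic isotherm generated with q_max = 5.0, K = 2.0; since the data
# are exactly Langmuir, the estimates recover the true parameters.
true_qmax, true_K = 5.0, 2.0
C = [0.1, 0.25, 0.5, 1.0, 2.0, 4.0]
q = [true_qmax * true_K * c / (1 + true_K * c) for c in C]
q_max, K = fit_langmuir_linearized(C, q)
```

    Real protein data can follow this shape for entirely different physical reasons (spreading, clustering), which is exactly why the fitted K should not be read as an equilibrium constant without further evidence.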

  9. Uncertainty Reduced Novelty Detection Approach Applied to Rotating Machinery for Condition Monitoring

    Directory of Open Access Journals (Sweden)

    S. Ma

    2015-01-01

    Full Text Available Novelty detection has developed into a state-of-the-art technique for detecting abnormal behavior and triggering alarms for in-field machine maintenance. With built-up models of normality, it has been widely applied to situations where a normal supervising dataset, such as shaft rotating speed and component temperature, is available in the absence of fault information. However, research on vibration-transmission-based novelty detection has remained largely unnoticed until recently. In this paper, vibration transmission measurement on a rotor is performed; based on extreme value distributions, thresholds for novelty detection are calculated. In order to further decrease the false alarm rate, both measurement and segmentation uncertainty are considered, as they may heavily affect the threshold value and detection correctness. Feasible reduction strategies are proposed and discussed. It is found that the associated multifractal coefficient and Kullback-Leibler divergence operate well in the uncertainty reduction process. As shown by in situ applications to an abnormal rotor with pedestal looseness, the abnormal states are detected. The higher specificity value demonstrates the effectiveness of the proposed uncertainty reduction method. This paper shows novel achievements of uncertainty-reduced novelty detection applied to vibration signals in dynamical systems and also sheds light on its utilization in the field of health monitoring of rotating machinery.
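
    The extreme-value thresholding idea can be sketched with a Gumbel distribution fitted by the method of moments; the feature values and exceedance probability below are illustrative assumptions, not the authors' procedure.

```python
import math
import statistics

EULER_GAMMA = 0.5772156649015329

def gumbel_threshold(maxima, exceed_prob=0.01):
    """Novelty threshold from block maxima of a normal-condition feature.

    Fits a Gumbel distribution by the method of moments
    (beta = s * sqrt(6) / pi, mu = mean - gamma * beta) and returns the
    level exceeded with probability `exceed_prob` under normal behaviour;
    a new feature value above it is flagged as novel.
    """
    m = statistics.mean(maxima)
    s = statistics.stdev(maxima)
    beta = s * math.sqrt(6) / math.pi
    mu = m - EULER_GAMMA * beta
    # Invert the Gumbel CDF F(x) = exp(-exp(-(x - mu) / beta)).
    return mu - beta * math.log(-math.log(1 - exceed_prob))

# Block maxima of a healthy-machine vibration feature (illustrative values).
normal_maxima = [1.02, 1.10, 0.95, 1.20, 1.05, 1.15, 0.98, 1.08]
threshold = gumbel_threshold(normal_maxima, exceed_prob=0.01)
```

    Tightening `exceed_prob` raises the threshold and trades missed detections for fewer false alarms, which is the balance the uncertainty-reduction strategies above aim to improve.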

  10. Applying the Extended Parallel Process Model to workplace safety messages.

    Science.gov (United States)

    Basil, Michael; Basil, Debra; Deshpande, Sameer; Lavack, Anne M

    2013-01-01

    The extended parallel process model (EPPM) proposes that fear appeals are most effective when they combine threat and efficacy. Three studies conducted in the workplace safety context examine the use of various EPPM factors and their effects, especially multiplicative effects. Study 1 was a content analysis examining the use of EPPM factors in actual workplace safety messages. Study 2 experimentally tested these messages with 212 construction trainees. Study 3 replicated this experiment with 1,802 men across four English-speaking countries: Australia, Canada, the United Kingdom, and the United States. The results of these three studies (1) demonstrate the inconsistent use of EPPM components in real-world work safety communications, (2) support the necessity of self-efficacy for the effective use of threat, (3) show a multiplicative effect where communication effectiveness is maximized when all model components are present (severity, susceptibility, and efficacy), and (4) validate these findings with gory appeals across four English-speaking countries.

  11. Applying learning theories and instructional design models for effective instruction.

    Science.gov (United States)

    Khalil, Mohammed K; Elkhider, Ihsan A

    2016-06-01

    Faculty members in higher education are involved in many instructional design activities without formal training in learning theories and the science of instruction. Learning theories provide the foundation for the selection of instructional strategies and allow for reliable prediction of their effectiveness. To achieve effective learning outcomes, the science of instruction and instructional design models are used to guide the development of instructional design strategies that elicit appropriate cognitive processes. Here, the major learning theories are discussed and selected examples of instructional design models are explained. The main objective of this article is to present the science of learning and instruction as theoretical evidence for the design and delivery of instructional materials. In addition, this article provides a practical framework for implementing those theories in the classroom and laboratory. Copyright © 2016 The American Physiological Society.

  12. Applying Transtheoretical Model to Promote Physical Activities Among Women

    OpenAIRE

    2015-01-01

    Background: Physical activity is one of the most important indicators of health in communities but different studies conducted in the provinces of Iran showed that inactivity is prevalent, especially among women. Objectives: Inadequate regular physical activities among women, the importance of education in promoting the physical activities, and lack of studies on the women using transtheoretical model, persuaded us to conduct this study with the aim of determining the application of transtheo...

  13. APPLYING LOGISTIC REGRESSION MODEL TO THE EXAMINATION RESULTS DATA

    Directory of Open Access Journals (Sweden)

    Goutam Saha

    2011-01-01

    Full Text Available The binary logistic regression model is used to analyze the school examination results (scores) of 1002 students. The analysis is performed on the basis of the independent variables, viz. gender, medium of instruction, type of schools, category of schools, board of examinations and location of schools, where scores or marks are assumed to be dependent variables. The odds ratio analysis compares the scores obtained in two examinations, viz. matriculation and higher secondary.
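
    For a single binary predictor, the odds ratio behind such an analysis can be computed directly from a 2x2 table, and it equals exp(beta) for the corresponding binary logistic-regression coefficient. A sketch with invented counts (not the paper's data):

```python
import math

def odds_ratio(pass_a, fail_a, pass_b, fail_b):
    """Odds ratio of passing for group A relative to group B.
    For a lone binary predictor this equals exp(beta), where beta is the
    coefficient a binary logistic regression would fit."""
    return (pass_a / fail_a) / (pass_b / fail_b)

# Hypothetical pass/fail counts by school location.
urban_pass, urban_fail = 320, 80
rural_pass, rural_fail = 250, 150
or_urban = odds_ratio(urban_pass, urban_fail, rural_pass, rural_fail)
log_odds = math.log(or_urban)   # the logistic-regression coefficient
```

    An odds ratio above 1 means the first group has higher odds of passing; with several predictors, a fitted logistic regression adjusts each such ratio for the others.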

  14. Structure Modeling and Validation applied to Source Physics Experiments (SPEs)

    Science.gov (United States)

    Larmat, C. S.; Rowe, C. A.; Patton, H. J.

    2012-12-01

    The U. S. Department of Energy's Source Physics Experiments (SPEs) comprise a series of small chemical explosions used to develop a better understanding of seismic energy generation and wave propagation for low-yield explosions. In particular, we anticipate improved understanding of the processes through which shear waves are generated by the explosion source. Three tests, 100, 1000 and 1000 kg yields respectively, were detonated in the same emplacement hole and recorded on the same networks of ground motion sensors in the granites of Climax Stock at the Nevada National Security Site. We present results for the analysis and modeling of seismic waveforms recorded close-in on five linear geophone lines extending radially from ground zero, having offsets from 100 to 2000 m and station spacing of 100 m. These records exhibit azimuthal variations of P-wave arrival times, and phase velocity, spreading and attenuation properties of high-frequency Rg waves. We construct a 1D seismic body-wave model starting from a refraction analysis of P-waves and adjusting to address time-domain and frequency-domain dispersion measurements of Rg waves between 2 and 9 Hz. The shallowest part of the structure we address using the arrival times recorded by near-field accelerometers residing within 200 m of the shot hole. We additionally perform a 2D modeling study with the Spectral Element Method (SEM) to investigate which structural features are most responsible for the observed variations, in particular anomalously weak amplitude decay in some directions of this topographically complicated locality. We find that a near-surface, thin, weathered layer of varying thickness and low wave speeds plays a major role on the observed waveforms. We anticipate performing full 3D modeling of the seismic near-field through analysis and validation of waveforms on the 5 radial receiver arrays.

  15. Modeling a Thermoelectric Generator Applied to Diesel Automotive Heat Recovery

    Science.gov (United States)

    Espinosa, N.; Lazard, M.; Aixala, L.; Scherrer, H.

    2010-09-01

    Thermoelectric generators (TEGs) are outstanding devices for automotive waste heat recovery. Their packaging, lack of moving parts, and direct heat to electrical conversion are the main benefits. Usually, TEGs are modeled with a constant hot-source temperature. However, energy in exhaust gases is limited, thus leading to a temperature decrease as heat is recovered. Therefore thermoelectric properties change along the TEG, affecting performance. A thermoelectric generator composed of Mg2Si/Zn4Sb3 for high temperatures followed by Bi2Te3 for low temperatures has been modeled using engineering equation solver (EES) software. The model uses the finite-difference method with a strip-fins convective heat transfer coefficient. It has been validated on a commercial module with well-known properties. The thermoelectric connection and the number of thermoelements have been addressed as well as the optimum proportion of high-temperature material for a given thermoelectric heat exchanger. TEG output power has been estimated for a typical commercial vehicle at 90°C coolant temperature.
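
    A constant-property sketch of the basic TEG electrical model is shown below; the paper's finite-difference model goes further by letting the properties vary along the exchanger as the gas cools. The numbers are illustrative, not taken from the paper.

```python
def teg_power(alpha, delta_T, r_internal, r_load):
    """Electrical power delivered by a thermoelectric couple.
    Open-circuit voltage V = alpha * delta_T (Seebeck effect);
    power into the load: P = V**2 * R_load / (R_int + R_load)**2.
    """
    v = alpha * delta_T
    return v ** 2 * r_load / (r_internal + r_load) ** 2

# Illustrative values: a 200 uV/K couple with 150 K across it.
alpha, dT, r_int = 200e-6, 150.0, 0.05
matched = teg_power(alpha, dT, r_int, r_int)        # R_load = R_int
mismatched = teg_power(alpha, dT, r_int, 5 * r_int)
```

    The matched-load case (R_load = R_int) maximizes the delivered power, which is why TEG models sweep the load resistance when estimating output for a vehicle duty cycle.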

  16. Applying fuzzy analytic network process in quality function deployment model

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Afsharkazemi

    2012-08-01

    Full Text Available In this paper, we propose an empirical study of QFD implementation in which fuzzy numbers are used to handle the uncertainty associated with different components of the proposed model. We implement a fuzzy analytic network process to find the relative importance of various criteria, and using fuzzy numbers we calculate the relative importance of these factors. The proposed model uses a fuzzy matrix and the house of quality to study product development in QFD, as well as the second phase, i.e. part deployment. In most studies, the primary focus is only on customer requirements (CRs) when implementing quality function deployment, and other criteria such as production and manufacturing costs are disregarded. The results of applying the fuzzy analytic network process based on the QFD model in the Daroupat packaging company to develop PVDC show that the most important indexes are waterproofing, resistant pill packages, and production cost. In addition, the PVDC coating is the most important index from the company experts’ point of view.
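
    One common building block of such fuzzy ANP studies, aggregating triangular fuzzy judgements and defuzzifying them into crisp weights, can be sketched as follows; the ratings are hypothetical:

```python
def tfn_mean(tfns):
    """Componentwise average of triangular fuzzy numbers (l, m, u),
    a common way to aggregate expert judgements in fuzzy ANP."""
    n = len(tfns)
    return tuple(sum(t[i] for t in tfns) / n for i in range(3))

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    l, m, u = tfn
    return (l + m + u) / 3

# Three hypothetical expert ratings of one criterion's importance.
ratings = [(2, 3, 4), (3, 4, 5), (2, 4, 6)]
crisp = defuzzify(tfn_mean(ratings))
```

    The crisp values obtained this way feed the pairwise-comparison matrices of the ANP, from which the final criterion weights for the house of quality are derived.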

  17. Experiences & Tools from Modeling Instruction Applied to Earth Sciences

    Science.gov (United States)

    Cervenec, J.; Landis, C. E.

    2012-12-01

    The Framework for K-12 Science Education calls for stronger curricular connections within the sciences, greater depth in understanding, and tasks higher on Bloom's Taxonomy. Understanding atmospheric sciences draws on core knowledge traditionally taught in physics, chemistry, and in some cases, biology. If this core knowledge is not conceptually sound, well retained, and transferable to new settings, understanding the causes and consequences of climate changes become a task in memorizing seemingly disparate facts to a student. Fortunately, experiences and conceptual tools have been developed and refined in the nationwide network of Physics Modeling and Chemistry Modeling teachers to build necessary understanding of conservation of mass, conservation of energy, particulate nature of matter, kinetic molecular theory, and particle model of light. Context-rich experiences are first introduced for students to construct an understanding of these principles and then conceptual tools are deployed for students to resolve misconceptions and deepen their understanding. Using these experiences and conceptual tools takes an investment of instructional time, teacher training, and in some cases, re-envisioning the format of a science classroom. There are few financial barriers to implementation and students gain a greater understanding of the nature of science by going through successive cycles of investigation and refinement of their thinking. This presentation shows how these experiences and tools could be used in an Earth Science course to support students developing conceptually rich understanding of the atmosphere and connections happening within.

  18. Anisotropic micro-sphere-based finite elasticity applied to blood vessel modelling

    Science.gov (United States)

    Alastrué, V.; Martínez, M. A.; Doblaré, M.; Menzel, A.

    2009-01-01

    A fully three-dimensional anisotropic elastic model for vascular tissue modelling is presented here. The underlying strain energy density function is assumed to additively decouple into volumetric and deviatoric contributions. A straightforward isotropic neo-Hooke-type law is used to model the deviatoric response of the ground substance, whereas a micro-structurally or rather micro-sphere-based approach will be employed to model the contribution and distribution of fibres within the biological tissue of interest. Anisotropy was introduced by means of the use of von Mises orientation distribution functions. Two different micro-mechanical approaches—a, say phenomenological, exponential ansatz, and a worm-like-chain-based formulation—are applied to the micro-fibres and illustratively compared. The passage from micro-structural contributions to the macroscopic response is obtained by a computational homogenisation scheme, namely numerical integration over the surface of the individual micro-spheres. The algorithmic treatment of this integration is discussed in detail for the anisotropic problem at hand, so that several cubatures of the micro-sphere are tested in order to optimise the accuracy at reasonable computational cost. Moreover, the introduced material parameters are identified from simple tension tests on human coronary arterial tissue for the two micro-mechanical models investigated. Both approaches are able to recapture the experimental data. Based on the identified sets of parameters, we first discuss a homogeneous deformation in simple shear to evaluate the models' response at the micro-structural level. Later on, an artery-like two-layered tube subjected to internal pressure is simulated by making use of a non-linear finite element setting. This enables one to obtain the micro- and macroscopic responses in an inhomogeneous deformation problem, namely a blood vessel representative boundary value problem. The effect of residual stresses is additionally
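
    In the spirit of the formulation described above (notation assumed, following common micro-sphere models, not quoted from the paper), the strain energy splits additively and the fibre part is averaged over the unit sphere by numerical cubature:

```latex
\Psi = \Psi_{\mathrm{vol}}(J) + \bar{\Psi}_{\mathrm{iso}}(\bar{\mathbf{C}}) + \Psi_{\mathrm{f}},
\qquad
\Psi_{\mathrm{f}}
  = \frac{1}{|\mathbb{S}|}\int_{\mathbb{S}} \rho(\mathbf{r})\,
    \psi_{\mathrm{f}}(\lambda_{\mathbf{r}})\,\mathrm{d}A
  \approx \sum_{i=1}^{m} w_i\,\rho(\mathbf{r}_i)\,\psi_{\mathrm{f}}(\lambda_i)
```

    Here rho is the von Mises orientation distribution function, lambda_r the fibre stretch in direction r, and (r_i, w_i) the integration points and weights of the chosen cubature; the cubature order controls the accuracy/cost trade-off discussed in the abstract.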

  19. Applying different spatial distribution and modelling concepts in three nested mesoscale catchments of Germany

    Science.gov (United States)

    Bongartz, K.

    Distributed, physically based river basin models are receiving increasing importance in integrated water resources management (IWRM) in Germany and in Europe, especially after the release of the new European Water Framework Directive (WFD). Applications in mesoscale catchments require an appropriate approach to represent the spatial distribution of related catchment properties such as land use, soil physics and topography by utilizing techniques of remote sensing and GIS analyses. The challenge is to delineate scale-independent homogeneous modelling entities which, on the one hand, may represent the dynamics of the dominant hydrological processes and, on the other hand, can be derived from spatially distributed physiographical catchment properties. This scaling problem is tackled in this regional modelling study by applying the concept of hydrological response units (HRUs). In a nested catchment approach, three different modelling conceptualisations are used to describe the runoff processes: (i) the topographic stream-segment-based HRU delineation proposed by Leavesley et al. [Precipitation-Runoff-Modelling-System, User’s Manual, Water Resource Investigations Report 83-4238, US Geological Survey, 1983]; (ii) the process-based physiographic HRU concept introduced by Flügel [Hydrol. Process. 9 (1995) 423]; and (iii) an advanced HRU concept adapted from (ii), which includes the topographic topology of HRU areas and the river network, developed by Staudenraush [Eco Regio 8 (2000) 121]. The influence of different boundary conditions associated with changing the land use classes, the temporal data resolution and the land use scenarios was investigated. The mesoscale catchment of the river Ilm (A ≈ 895 km²) in Thuringia, Germany, and the Precipitation-Runoff-Modelling-System (PRMS) were selected for this study. Simulations show that the physiographic based concept is a reliable method for modelling basin dynamics in catchments up to 200 km², whereas in larger catchments

  20. Applying the transtheoretical model to health care proxy completion.

    Science.gov (United States)

    Finnell, Deborah S; Wu, Yow-Wu Bill; Jezewski, Mary Ann; Meeker, Mary Ann; Sessanna, Loralee; Lee, Jongwon

    2011-01-01

    For many, an important health decision is whether or not to document end-of-life wishes using an advance directive (e.g., health care proxy). To date, interventions targeting this health behavior have had little effect on increasing advance directive completion rates. Health behavior models, such as the transtheoretical model (TTM), could be useful for understanding the health decision-making processes used along a continuum, from no intention to complete an advance directive to completing one and discussing it with an appointed advocate. To explore the applicability of the TTM for a previously understudied health behavior: completing a health care proxy (HCP). Four established TTM measures for completing a HCP (stages of change, processes of change, decisional balance, and self-efficacy) were administered to 566 adults with coverage from 1 of 2 health insurance companies. Separate analyses of variance were used to test the relationships between the independent variable (stages of change) and dependent variables (processes of change, decisional balance, self-efficacy scores). Consistent with other TTM research, both the experiential and the behavioral processes of change revealed the lowest scores in the precontemplation stage, peaking in the preparation stage. The pattern of pros and cons was replicated from previous TTM studies, with the 2 scores crossing over just prior to the preparation stage. Self-efficacy scores incrementally increased across the stages of change, with the largest effect evident from the precontemplation to preparation stage. The models developed from this study can be used to guide the development of stage-based interventions for promoting health care proxy completion.

  1. Modelling Coagulation Systems: A Stochastic Approach

    CERN Document Server

    Ryazanov, V V

    2011-01-01

    A general stochastic approach to the description of coagulating aerosol systems is developed. Arbitrary mesoscopic quantities (the number of aerosol clusters, their sizes, etc.) can be taken as the object of description. The birth-and-death formalism for the number of clusters can be regarded as a special case of the generalized storage model. An application of the storage model to the number of monomers in a cluster is discussed.
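
    The birth-and-death formalism for the cluster number n can be written as the standard master equation (notation assumed, not quoted from the paper), with lambda_n and mu_n the birth and death rates of a system holding n clusters:

```latex
\frac{\mathrm{d}P_n(t)}{\mathrm{d}t}
  = \lambda_{n-1}\,P_{n-1}(t) + \mu_{n+1}\,P_{n+1}(t)
  - \left(\lambda_n + \mu_n\right) P_n(t)
```

    The storage-model generalization mentioned above replaces these unit steps with bulk inputs and outputs, which is what makes it applicable to quantities such as the number of monomers in a cluster.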

  2. A Multiple Model Approach to Modeling Based on LPF Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Input-output data fitting methods are often used for modeling nonlinear systems of unknown structure. Based on model-on-demand tactics, a multiple-model approach to modeling nonlinear systems is presented. The basic idea is to find, among vast historical input-output data sets, the data matching the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. As the working point changes, multiple local models are built, which together model the global system. Compared with other methods, simulation results show good performance: the approach is simple, effective and gives reliable estimation.
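
    The model-on-demand step described above (retrieve the historical samples closest to the current working point, then fit a local polynomial) can be sketched in plain Python; the data and neighbourhood size are illustrative assumptions:

```python
def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 system, enough for a quadratic fit."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * c for a, c in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def local_quadratic(history, x0, k=7):
    """Model-on-demand sketch: pick the k historical samples closest to
    the current working point x0 and fit a local quadratic to them."""
    near = sorted(history, key=lambda p: abs(p[0] - x0))[:k]
    # Normal equations for y ~ c0 + c1*x + c2*x^2 on the selected points.
    A = [[sum(x ** (i + j) for x, _ in near) for j in range(3)]
         for i in range(3)]
    b = [sum(y * x ** i for x, y in near) for i in range(3)]
    c0, c1, c2 = solve3(A, b)
    return c0 + c1 * x0 + c2 * x0 ** 2

# History from a nonlinear map; y = x^3 gives an easily checkable case.
history = [(x * 0.25, (x * 0.25) ** 3) for x in range(-12, 13)]
y_hat = local_quadratic(history, x0=1.0)
```

    Each new working point triggers a fresh neighbourhood query and fit, so the collection of local quadratics tracks the global nonlinear map without ever fixing a single global structure.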

  3. Neighborhood Design, Physical Activity, and Wellbeing: Applying the Walkability Model.

    Science.gov (United States)

    Zuniga-Teran, Adriana A; Orr, Barron J; Gimblett, Randy H; Chalfoun, Nader V; Guertin, David P; Marsh, Stuart E

    2017-01-13

    Neighborhood design affects lifestyle physical activity, and ultimately human wellbeing. There are, however, a limited number of studies that examine neighborhood design types. In this research, we examine four types of neighborhood designs: traditional development, suburban development, enclosed community, and cluster housing development, and assess their level of walkability and their effects on physical activity and wellbeing. We examine significant associations through a questionnaire (n = 486) distributed in Tucson, Arizona using the Walkability Model. Among the tested neighborhood design types, traditional development showed significant associations and the highest value for walkability, as well as for each of the two types of walking (recreation and transportation) representing physical activity. Suburban development showed significant associations and the highest mean values for mental health and wellbeing. Cluster housing showed significant associations and the highest mean value for social interactions with neighbors and for perceived safety from crime. Enclosed community did not obtain the highest means for any wellbeing benefit. The Walkability Model proved useful in identifying the walkability categories associated with physical activity and perceived crime. For example, the experience category was strongly and inversely associated with perceived crime. This study provides empirical evidence of the importance of including vegetation, particularly trees, throughout neighborhoods in order to increase physical activity and wellbeing. Likewise, the results suggest that regular maintenance is an important strategy to improve mental health and overall wellbeing in cities.

  4. Neighborhood Design, Physical Activity, and Wellbeing: Applying the Walkability Model

    Directory of Open Access Journals (Sweden)

    Adriana A. Zuniga-Teran

    2017-01-01

    Full Text Available Neighborhood design affects lifestyle physical activity, and ultimately human wellbeing. There are, however, a limited number of studies that examine neighborhood design types. In this research, we examine four types of neighborhood designs: traditional development, suburban development, enclosed community, and cluster housing development, and assess their level of walkability and their effects on physical activity and wellbeing. We examine significant associations through a questionnaire (n = 486) distributed in Tucson, Arizona using the Walkability Model. Among the tested neighborhood design types, traditional development showed significant associations and the highest value for walkability, as well as for each of the two types of walking (recreation and transportation) representing physical activity. Suburban development showed significant associations and the highest mean values for mental health and wellbeing. Cluster housing showed significant associations and the highest mean value for social interactions with neighbors and for perceived safety from crime. Enclosed community did not obtain the highest means for any wellbeing benefit. The Walkability Model proved useful in identifying the walkability categories associated with physical activity and perceived crime. For example, the experience category was strongly and inversely associated with perceived crime. This study provides empirical evidence of the importance of including vegetation, particularly trees, throughout neighborhoods in order to increase physical activity and wellbeing. Likewise, the results suggest that regular maintenance is an important strategy to improve mental health and overall wellbeing in cities.

  5. ORGANIZING SCENARIO VARIABLES BY APPLYING THE INTERPRETATIVE STRUCTURAL MODELING (ISM

    Directory of Open Access Journals (Sweden)

    Daniel Estima de Carvalho

    2009-10-01

    Full Text Available The scenario building method is a mode of thought, applied in an optimized, strategic manner, based on trends and uncertain events concerning a large variety of potential results that may impact the future of an organization. In this study, the objective is to contribute towards a possible improvement in Godet and Schoemaker's scenario preparation methods by employing Interpretative Structural Modeling (ISM) as a tool for the analysis of variables. Given this is an exploratory theme, bibliographical research with tool definition and analysis, extraction of examples from the literature, and a comparison exercise of the referred methods were undertaken. It was verified that ISM may substitute or complement the original tools for the analysis of scenario variables in Godet and Schoemaker's methods, given that it enables an in-depth analysis of the relations between variables in a shorter period of time, facilitating both the structuring and the construction of possible scenarios. Key-words: Strategy. Future studies. Interpretative Structural Modeling.

  6. Towards a Multiscale Approach to Cybersecurity Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.

    2013-11-12

    We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of "multiscale" graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
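
As a concrete reference point for the single-scale problem this record generalizes, the classic all-pairs shortest-path computation can be sketched as follows. This is a minimal Floyd–Warshall implementation; the node labels and attacker/target framing are illustrative only, and the multiscale analog described in the record is not reproduced here.

```python
# Classic single-scale all-pairs shortest paths (Floyd-Warshall).
# The paper's multiscale analog generalizes this; here we show only
# the well-known base algorithm on a small directed graph.
def floyd_warshall(n, edges):
    """n: node count; edges: dict {(u, v): weight} of directed edges."""
    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), w in edges.items():
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):                      # allow k as an intermediate node
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# e.g. distance from an "attacker" node 0 to a "sensitive" node 3
d = floyd_warshall(4, {(0, 1): 1, (1, 2): 2, (2, 3): 1, (0, 3): 5})
print(d[0][3])  # → 4 (path 0→1→2→3 beats the direct edge of weight 5)
```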

  7. Applied tagmemics: A heuristic approach to the use of graphic aids in technical writing

    Science.gov (United States)

    Brownlee, P. P.; Kirtz, M. K.

    1981-01-01

    In technical report writing, two needs which must be met if reports are to be usable by an audience are the language needs and the technical needs of that particular audience. A heuristic analysis helps to decide the most suitable format for information; that is, whether the information should be presented verbally or visually. The report writing process should be seen as an organic whole which can be divided and subdivided according to the writer's purpose, but which always functions as a totality. The tagmemic heuristic, because it itself follows a process of deconstructing and reconstructing information, lends itself to being a useful approach to the teaching of technical writing. By applying the abstract questions this heuristic asks to specific parts of the report, the language and technical needs of the audience are analyzed by examining the viability of the solution within the givens of the corporate structure, and by deciding which graphic or verbal format will best suit the writer's purpose. By following such a method, answers which are both specific and thorough in their range of application are found.

  8. Structured assessment approach: Version I. Applied demonstration of output results. Volume 3

    Energy Technology Data Exchange (ETDEWEB)

    Parziale, A.A.; Sacks, I.J.

    1979-10-01

    A methodology, the Structured Assessment Approach, has been developed for the assessment of the effectiveness of material control and accounting (MC and A) safeguards systems at nuclear fuel cycle facilities. This methodology has been refined into a computational tool, the SAA Version 1 computational package, that was used first to analyze a hypothetical fuel cycle facility (HFCF) and used more recently to assess operational nuclear plants. The Version 1 analysis package is designed to analyze safeguards systems that prevent the diversion of special nuclear material (SNM) from nuclear fuel cycle facilities and to provide assurance that diversion has not occurred. This report is the third volume, Applied Demonstration of Output Results, of a four-volume document. It presents the outputs for each of the four levels of the SAA Version 1 computational package. Two types of outputs are discussed: detailed output findings and summary output tables. The summary output tables are used to aggregate the detailed output findings in a condensed form for NRC analyst consumption. Specific output results are presented for an HFCF, which is described in Volume II.

  9. An interdisciplinary and experimental approach applied to an analysis of the communication of influence

    Directory of Open Access Journals (Sweden)

    Brigitte JUANALS

    2013-07-01

    Full Text Available This paper describes the added value of an interdisciplinary and experimental approach applied to an analysis of the inter-organizational communication of influence. The field analyzed is the international industrial standardization of societal security. A communicational problem has been investigated with an experimental method based on natural language processing and knowledge management tools. The purpose of the methodological framework is to clarify the way international standards are designed and the policies that are supported by these standards. Furthermore, strategies of influence of public and private stakeholders involved in the NGOs which produce these texts have also been studied. The means of inter-organizational communication between organizations (companies or governmental authorities and NGOs can be compared to the lobbying developed in the context of the construction of Europe and globalization. Understanding the prescriptive process has become a crucial issue for States, organizations and citizens. This research contributes to the critical assessment of the new industrial policies currently being developed from the point of view of their characteristics and the way they have been designed.

  11. Applying patient centered approach in management of pulmonary tuberculosis: A case report from Malaysia.

    Science.gov (United States)

    Atif, M; Sulaiman, Sas; Shafi, Aa; Muttalif, Ar; Ali, I; Saleem, F

    2011-06-01

    A 24-year-old university student with a history of productive cough was registered as a sputum smear confirmed case of pulmonary tuberculosis. During treatment, the patient suffered from itchiness associated with anti-tuberculosis drugs and was treated with chlorpheniramine (4 mg) tablets. The patient missed twenty-eight doses of anti-tuberculosis drugs in the continuation phase, claiming that he was very busy with his studies and assignments. Upon questioning he further explained that he felt quite healthy after five months and was unable to concentrate on his studies after taking the prescribed medicines. His treatment was stopped based on clinical improvement, although he did not complete six months of therapy. Two major reasons, a false perception of being completely cured and side effects associated with anti-TB drugs, might be responsible for the non-adherence. Non-sedative antihistamines such as fexofenadine, cetirizine or loratadine should be preferred over first-generation antihistamines (chlorpheniramine) in patients with such a lifestyle. The patient had not completed the full course of chemotherapy, which is a preliminary requirement for a case to be classified as "cure" or "treatment completed". Moreover, the patient had not defaulted for two consecutive months. Therefore, according to WHO treatment outcome categories, this patient can be classified neither as "cure" or "treatment completed" nor as "defaulter". Further elaboration of the WHO treatment outcome categories is required for adequate classification of patients with similar characteristics. The likelihood of non-adherence can be significantly reduced by applying the WHO-recommended "Patient Centered Approach" strategy. A close friend, classmate or family member can be selected as a treatment supporter to ensure adherence to treatment.

  12. Extraction of thermal Green's function using diffuse fields: a passive approach applied to thermography

    Science.gov (United States)

    Capriotti, Margherita; Sternini, Simone; Lanza di Scalea, Francesco; Mariani, Stefano

    2016-04-01

    In the field of non-destructive evaluation, defect detection and visualization can be performed exploiting different techniques relying either on an active or a passive approach. In the following paper the passive technique is investigated due to its numerous advantages and its application to thermography is explored. In previous works, it has been shown that it is possible to reconstruct the Green's function between any pair of points of a sensing grid by using noise originated from diffuse fields in acoustic environments. The extraction of the Green's function can be achieved by cross-correlating these random recorded waves. Averaging, filtering and length of the measured signals play an important role in this process. This concept is here applied in an NDE perspective utilizing thermal fluctuations present on structural materials. Temperature variations interacting with thermal properties of the specimen allow for the characterization of the material and its health condition. The exploitation of the thermographic image resolution as a dense grid of sensors constitutes the basic idea underlying passive thermography. Particular attention will be placed on the creation of a proper diffuse thermal field, studying the number, placement and excitation signal of heat sources. Results from numerical simulations will be presented to assess the capabilities and performances of the passive thermal technique devoted to defect detection and imaging of structural components.
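
The Green's-function-from-noise principle this record builds on can be illustrated with a toy 1-D example: cross-correlating a diffuse noise record with a delayed copy of itself recovers the propagation delay between the two "sensing points". This is a hedged sketch of the general passive principle only, not of the thermal implementation described in the record.

```python
# Toy illustration of passive Green's function extraction: the peak of the
# cross-correlation between two noise recordings reveals the travel delay
# between them, as if one had performed an active pulse experiment.
import random

def ccf(a, b, max_lag):
    """Cross-correlation of a with b at non-negative lags 0..max_lag."""
    n = len(a)
    return [sum(a[i] * b[i + lag] for i in range(n - lag))
            for lag in range(max_lag + 1)]

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(2000)]     # diffuse field at point A
delayed = [0.0] * 3 + noise[:-3]                      # point B: 3-sample delay
corr = ccf(noise, delayed, 10)
peak = max(range(11), key=lambda lag: corr[lag])
print(peak)  # → 3, the hidden propagation delay
```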

  13. Old concepts, new molecules and current approaches applied to the bacterial nucleotide signalling field

    Science.gov (United States)

    2016-01-01

    Signalling nucleotides are key molecules that help bacteria to rapidly coordinate cellular pathways and adapt to changes in their environment. During the past 10 years, the nucleotide signalling field has seen much excitement, as several new signalling nucleotides have been discovered in both eukaryotic and bacterial cells. The field has since advanced quickly, aided by the development of important tools such as the synthesis of modified nucleotides, which, combined with sensitive mass spectrometry methods, allowed for the rapid identification of specific receptor proteins, along with other novel genome-wide screening methods. In this review, we describe the principal concepts of nucleotide signalling networks and summarize the recent work that led to the discovery of the novel signalling nucleotides. We also highlight current approaches applied to research in the field, as well as resources and methodological advances aiding in the rapid identification of nucleotide-specific receptor proteins. This article is part of the themed issue ‘The new bacteriology’. PMID:27672152

  14. Post-16 Biology--Some Model Approaches?

    Science.gov (United States)

    Lock, Roger

    1997-01-01

    Outlines alternative approaches to the teaching of difficult concepts in A-level biology which may help student learning by making abstract ideas more concrete and accessible. Examples include models, posters, and poems for illustrating meiosis, mitosis, genetic mutations, and protein synthesis. (DDR)

  15. Sensorless position estimator applied to nonlinear IPMC model

    Science.gov (United States)

    Bernat, Jakub; Kolota, Jakub

    2016-11-01

    This paper addresses the issue of estimating position for an ionic polymer metal composite (IPMC), a type of electroactive polymer (EAP). The key step is the construction of a sensorless model that relies only on current feedback. This work takes into account nonlinearities caused by electrochemical effects in the material. Using recent observer design techniques, the authors obtained both a Lyapunov-function-based estimation law and a sliding mode observer. To accomplish the observer design, the IPMC model was identified through a series of experiments. The research comprises time domain measurements. The identification process was completed by means of geometric scaling of three test samples. In the proposed design, the estimated position accurately tracks the polymer position, which is illustrated by the experiments.

  16. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, the analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects for reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our model can be used not only to select the best project, but also to analyze the gaps between existing performance values and aspiration levels in each dimension and criterion, based on the influential network relation map.
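
The final VIKOR ranking step mentioned in this record can be sketched in isolation. The DEMATEL and ANP stages that would normally supply the criterion weights are omitted, so the weights and project scores below are hypothetical; all criteria are assumed to be benefit criteria.

```python
# Sketch of the VIKOR compromise-ranking step: compute group utility S,
# individual regret R, and the compromise index Q (lower Q = better).
def vikor(scores, weights, v=0.5):
    """scores: one list of benefit-criterion values per alternative."""
    m = len(scores[0])
    best = [max(a[j] for a in scores) for j in range(m)]
    worst = [min(a[j] for a in scores) for j in range(m)]
    S, R = [], []
    for a in scores:
        terms = [w * (b - x) / (b - wv) if b != wv else 0.0
                 for x, w, b, wv in zip(a, weights, best, worst)]
        S.append(sum(terms))            # group utility
        R.append(max(terms))            # individual regret
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    s_rng = (s_minus - s_star) or 1.0   # guard against identical S values
    r_rng = (r_minus - r_star) or 1.0
    return [v * (s - s_star) / s_rng + (1 - v) * (r - r_star) / r_rng
            for s, r in zip(S, R)]

# three hypothetical Six Sigma candidate projects, three weighted criteria
projects = [[7, 9, 6], [8, 7, 8], [6, 8, 9]]
Q = vikor(projects, [0.5, 0.3, 0.2])
print(min(range(3), key=Q.__getitem__))  # → 1: project 1 is the compromise
```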

  17. Applying direct observation to model workflow and assess adoption.

    Science.gov (United States)

    Unertl, Kim M; Weinger, Matthew B; Johnson, Kevin B

    2006-01-01

    Lack of understanding about workflow can impair health IT system adoption. Observational techniques can provide valuable information about clinical workflow. A pilot study using direct observation was conducted in an outpatient chronic disease clinic. The goals of the study were to assess workflow and information flow and to develop a general model of workflow and information behavior. Over 55 hours of direct observation showed that the pilot site utilized many of the features of the informatics systems available to them, but also employed multiple non-electronic artifacts and workarounds. Gaps existed between clinic workflow and informatics tool workflow, as well as between institutional expectations of informatics tool use and actual use. Concurrent use of both paper-based and electronic systems resulted in duplication of effort and inefficiencies. A relatively short period of direct observation revealed important information about workflow and informatics tool adoption.

  18. "Let's Move" campaign: applying the extended parallel process model.

    Science.gov (United States)

    Batchelder, Alicia; Matusitz, Jonathan

    2014-01-01

    This article examines Michelle Obama's health campaign, "Let's Move," through the lens of the extended parallel process model (EPPM). "Let's Move" aims to reduce the childhood obesity epidemic in the United States. Developed by Kim Witte, EPPM rests on the premise that people's attitudes can be changed when fear is exploited as a factor of persuasion. Fear appeals work best (a) when a person feels concern about the issue or situation, and (b) when he or she believes in his or her capability to deal with that issue or situation. Overall, the analysis found that "Let's Move" is based on past health campaigns that have been successful. An important element of the campaign is the use of fear appeals (as postulated by EPPM). For example, part of the campaign's strategy is to explain the severity of the diseases associated with obesity. By looking at the steps of EPPM, readers can also understand the strengths and weaknesses of "Let's Move."

  19. On the combined gravity gradient modeling for applied geophysics

    CERN Document Server

    Veryaskin, Alexey

    2007-01-01

    Gravity gradiometry research and development has intensified in recent years, to the extent that technologies providing a resolution of about 1 Eotvos per 1 s average will likely soon be available for multiple critical applications such as natural resources exploration, oil reservoir monitoring and defence establishment. Much of the content of this paper was composed a decade ago, and only minor modifications were required for the conclusions to be just as applicable today. In this paper we demonstrate how gravity gradient data can be modeled, and show some examples of how gravity gradient data can be combined in order to extract valuable information. In particular, this study demonstrates the importance of two gravity gradient components, Txz and Tyz, which, when processed together, can provide more information on subsurface density contrasts than that derived solely from the vertical gravity gradient (Tzz).
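
As an illustration of the kind of combination this record discusses, the gradient tensor components of a simple buried point mass can be computed and the horizontal components Txz and Tyz combined into a single magnitude. This is a hedged sketch under a point-mass assumption; sign conventions and the actual combination used in the paper may differ.

```python
# Gravity gradient tensor components of a point mass at (x, y, z), observed
# at the origin, using the standard form T_ij = GM(3 x_i x_j - r^2 d_ij)/r^5.
# Values are returned in Eotvos (1 E = 1e-9 s^-2).
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def point_mass_gradients(m, x, y, z):
    """Txz, Tyz, Tzz (in Eotvos) due to a point mass m [kg] at (x, y, z) [m]."""
    r = math.sqrt(x * x + y * y + z * z)
    k = G * m / r**5
    txz = 3 * k * x * z
    tyz = 3 * k * y * z
    tzz = k * (3 * z * z - r * r)
    return tuple(c * 1e9 for c in (txz, tyz, tzz))

# hypothetical 1e9 kg anomaly buried 100 m down, offset 50 m east, 30 m north
txz, tyz, tzz = point_mass_gradients(1e9, 50.0, 30.0, 100.0)
horiz = math.hypot(txz, tyz)   # combined Txz/Tyz signal discussed in the paper
```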

  20. Electrostatic Model Applied to ISS Charged Water Droplet Experiment

    Science.gov (United States)

    Stevenson, Daan; Schaub, Hanspeter; Pettit, Donald R.

    2015-01-01

    The electrostatic force can be used to create novel relative motion between charged bodies if it can be isolated from the stronger gravitational and dissipative forces. Recently, Coulomb orbital motion was demonstrated on the International Space Station by releasing charged water droplets in the vicinity of a charged knitting needle. In this investigation, the Multi-Sphere Method, an electrostatic model developed to study active spacecraft position control by Coulomb charging, is used to simulate the complex orbital motion of the droplets. When atmospheric drag is introduced, the simulated motion closely mimics that seen in the video footage of the experiment. The electrostatic force's inverse dependency on separation distance near the center of the needle lends itself to analytic predictions of the radial motion.

  1. [The bioethical principlism model applied in pain management].

    Science.gov (United States)

    Souza, Layz Alves Ferreira; Pessoa, Ana Paula da Costa; Barbosa, Maria Alves; Pereira, Lilian Varanda

    2013-03-01

    An integrative literature review was developed with the purpose to analyze the scientific production regarding the relationships between pain and the principles of bioethics (autonomy, beneficence, nonmaleficence and justice). Controlled descriptors were used in three international data sources (LILACS, SciELO, MEDLINE), in April of 2012, totaling 14 publications categorized by pain and autonomy, pain and beneficence, pain and nonmaleficence, pain and justice. The adequate relief of pain is a human right and a moral issue directly related with the bioethical principlism standard model (beneficence, non-maleficence, autonomy and justice). However, many professionals overlook the pain of their patients, ignoring their ethical role when facing suffering. It was concluded that principlism has been neglected in the care of patients in pain, showing the need for new practices to change this setting.

  2. Virtual building environments (VBE) - Applying information modeling to buildings

    Energy Technology Data Exchange (ETDEWEB)

    Bazjanac, Vladimir

    2004-06-21

    A Virtual Building Environment (VBE) is a "place" where building industry project staffs can get help in creating Building Information Models (BIM) and in the use of virtual buildings. It consists of a group of industry software that is operated by industry experts who are also experts in the use of that software. The purpose of a VBE is to facilitate expert use of appropriate software applications in conjunction with each other to efficiently support multidisciplinary work. This paper defines BIM and virtual buildings, and describes VBE objectives, set-up and characteristics of operation. It informs about the VBE Initiative and the benefits from a couple of early VBE projects.

  3. Nonspherical Radiation Driven Wind Models Applied to Be Stars

    Science.gov (United States)

    Arauxo, F. X.

    1990-11-01

    ABSTRACT. In this work we present a model for the structure of a radiatively driven wind in the meridional plane of a hot star. Rotation effects and a simulation of viscous forces were included in the equations of motion. The line radiation force is considered with the inclusion of the finite disk correction in self-consistent computations, which also contain gravity darkening as well as distortion of the star by rotation. An application to a typical B1V star leads to mass-flux ratios between equator and pole of the order of 10 and mass loss rates in the range 5.10 to 10 Mo/yr. Our envelope models are flattened towards the equator and the wind terminal velocities in that region are rather high (1000 km/s). However, in the region near the star the equatorial velocity field is dominated by rotation. Key words: STARS-BE -- STARS-WINDS

  4. Decomposition approach to model smart suspension struts

    Science.gov (United States)

    Song, Xubin

    2008-10-01

    Model and simulation studies are the starting point for engineering design and development, especially for developing vehicle control systems. This paper presents a methodology to build models for the application of smart struts in vehicle suspension control development. The modeling approach is based on decomposition of the testing data. According to the strut functions, the data is dissected along both control and physical variables. The data sets are then characterized to represent different aspects of the strut working behaviors. Next, different mathematical equations can be built and optimized to best fit the corresponding data sets, respectively. In this way, model optimization is facilitated in comparison with a traditional approach that seeks a globally optimal set of model parameters for a complicated nonlinear model from a series of testing data. Finally, two struts are introduced as examples for this modeling study: magneto-rheological (MR) dampers and compressible fluid (CF) based struts. The model validation shows that this methodology can truly capture the macro-behaviors of these struts.

  5. Kinetics-based phase change approach for VOF method applied to boiling flow

    Science.gov (United States)

    Cifani, Paolo; Geurts, Bernard; Kuerten, Hans

    2014-11-01

    Direct numerical simulations of boiling flows are performed to better understand the interaction of boiling phenomena with turbulence. The multiphase flow is simulated by solving a single set of equations for the whole flow field according to the one-fluid formulation, using a VOF interface capturing method. Interface terms, related to surface tension, interphase mass transfer and latent heat, are added at the phase boundary. The mass transfer rate across the interface is derived from kinetic theory and subsequently coupled with the continuum representation of the flow field. The numerical model was implemented in OpenFOAM and validated against 3 cases: evaporation of a spherical uniformly heated droplet, growth of a spherical bubble in a superheated liquid and two dimensional film boiling. The computational model will be used to investigate the change in turbulence intensity in a fully developed channel flow due to interaction with boiling heat and mass transfer. In particular, we will focus on the influence of the vapor bubble volume fraction on enhancing heat and mass transfer. Furthermore, we will investigate kinetic energy spectra in order to identify the dynamics associated with the wakes of vapor bubbles. Department of Applied Mathematics, 7500 AE Enschede, NL.
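
The kinetic-theory mass-transfer closure this record refers to is typically of the Hertz–Knudsen–Schrage type; a hedged sketch of such a flux expression is below. The accommodation coefficient, the input values, and the exact form used by the authors are assumptions for illustration only.

```python
# Sketch of a Hertz-Knudsen-Schrage evaporation flux: the kind of
# kinetic-theory interface closure coupled to a VOF solver for boiling.
import math

R = 8.314462618   # universal gas constant, J mol^-1 K^-1

def hertz_knudsen_flux(sigma_e, M, p_sat, T_liq, p_vap, T_vap):
    """Net evaporative mass flux [kg m^-2 s^-1]; positive = evaporation.
    sigma_e: accommodation coefficient (assumed), M: molar mass [kg/mol]."""
    pref = 2.0 * sigma_e / (2.0 - sigma_e) * math.sqrt(M / (2.0 * math.pi * R))
    return pref * (p_sat / math.sqrt(T_liq) - p_vap / math.sqrt(T_vap))

# slightly superheated water interface: saturation pressure at the interface
# exceeds the ambient vapor pressure, so net evaporation occurs
j = hertz_knudsen_flux(0.05, 0.018, 106000.0, 374.0, 101325.0, 373.15)
```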

  6. Applying the chicken embryo chorioallantoic membrane assay to study treatment approaches in urothelial carcinoma.

    Science.gov (United States)

    Skowron, Margaretha A; Sathe, Anuja; Romano, Andrea; Hoffmann, Michèle J; Schulz, Wolfgang A; van Koeveringe, Gommert A; Albers, Peter; Nawroth, Roman; Niegisch, Günter

    2017-09-01

    Rapid development of novel treatment options demands valid preclinical screening models for urothelial carcinoma (UC). The translational value of high-throughput drug testing using 2-dimensional (2D) cultures is limited, while for xenograft models handling efforts and costs often become prohibitive for larger-scale drug testing. Therefore, we investigated to which extent the chicken chorioallantoic membrane (CAM) assay might provide an alternative model to study antineoplastic treatment approaches for UC. The ability of 8 human UC cell lines (UCCs) to form tumors after implantation on CAMs was investigated. Epithelial-like RT-112 and mesenchymal-like T-24 UCCs in cell culture or as CAM tumors were treated with cisplatin alone or combined with the histone deacetylase inhibitors (HDACi) romidepsin and suberanilohydroxamic acid. Tumor weight, size, and bioluminescence activity were monitored; tumor specimens were analyzed by histology and immunohistochemistry. Western blotting and quantitative real-time polymerase chain reaction were used to measure protein and mRNA expression. UCCs were reliably implantable on the CAM, but tumor development varied among cell lines. Expression of differentiation markers (E-cadherin, vimentin, CK5, CK18, and CK20) was similar in CAM tumors and 2D cultures. Cellular phenotypes also remained stable after recultivation of CAM tumors in 2D cultures. Bioluminescence images correlated with tumor weight. Cisplatin and HDACi decreased the weight and growth of CAM tumors in a dose-dependent manner, but HDACi treatment acted less efficiently than in 2D cultures, especially on its typically associated molecular markers. Synergistic effects of HDACi and subsequent cisplatin treatment on UCCs were detected neither in 2D cultures nor in CAM tumors. Our results demonstrate that the CAM assay is a useful tool for studying tumor growth and response to conventional anticancer drugs under 3D conditions, especially cytotoxic drugs such as cisplatin. With some

  7. Applying a learning design methodology in the flipped classroom approach – empowering teachers to reflect and design for learning

    Directory of Open Access Journals (Sweden)

    Evangelia Triantafyllou

    2016-05-01

    Full Text Available One of the recent developments in teaching that heavily relies on current technology is the “flipped classroom” approach. In a flipped classroom the traditional lecture and homework sessions are inverted. Students are provided with online material in order to gain necessary knowledge before class, while class time is devoted to clarifications and application of this knowledge. The hypothesis is that there could be deep and creative discussions when teacher and students physically meet. This paper discusses how the learning design methodology can be applied to represent, share and guide educators through flipped classroom designs. In order to discuss the opportunities arising from this approach, the different components of the Learning Design – Conceptual Map (LD-CM) are presented and examined in the context of the flipped classroom. It is shown that viewing the flipped classroom through the lens of learning design can promote the use of theories and methods to evaluate its effect on the achievement of learning objectives, and that it may draw attention to the employment of methods to gather learner responses. Moreover, a learning design approach can enforce the detailed description of activities, tools and resources used in specific flipped classroom models, and it can make educators more aware of the decisions that have to be taken and the people who have to be involved when designing a flipped classroom. By using the LD-CM, this paper also draws attention to the importance of the characteristics and values of different stakeholders (i.e. institutions, educators, learners, and external agents) which influence the design and success of flipped classrooms. Moreover, it looks at the teaching cycle from a flipped instruction model perspective and adjusts it to cater for the reflection loops educators are involved in when designing, implementing and re-designing a flipped classroom. Finally, it highlights the effect of learning design on the guidance

  9. Applying revised gap analysis model in measuring hotel service quality.

    Science.gov (United States)

    Lee, Yu-Cheng; Wang, Yu-Che; Chien, Chih-Hung; Wu, Chia-Huei; Lu, Shu-Chiung; Tsai, Sang-Bing; Dong, Weiwei

    2016-01-01

    With the number of tourists coming to Taiwan growing by 10-20 % since 2010, the increase has been driven largely by foreign tourists, particularly after deregulation allowed the admission of tourist groups, and later of individual tourists, from mainland China. The purpose of this study is to propose a revised gap model to evaluate and improve service quality in the Taiwanese hotel industry. Service quality can thus be clearly measured through gap analysis, which is more effective for offering direction in developing and improving service quality. The HOLSERV instrument was used to identify and analyze service gaps from the perceptions of internal and external customers. The sample for this study included three main categories of respondents: tourists, employees, and managers. The results show that five gaps influenced tourists' evaluations of service quality. In particular, the study revealed that Gap 1 (management perceptions vs. customer expectations) and Gap 9 (service provider perceptions of management perceptions vs. service delivery) were more critical than the others in affecting perceived service quality, making service delivery the main area for improvement. This study contributes an evaluation of the service quality of the Taiwanese hotel industry from the perspectives of customers, service providers, and managers, which is considerably valuable for hotel managers. It was the aim of this study to explore all of these together in order to better understand the possible gaps in the hotel industry in Taiwan.
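The core computation behind gap analysis can be sketched in a few lines: a gap score is the perception score minus the expectation score for each service item. The dimension names and scores below are hypothetical illustrations, not the HOLSERV items or data from this study.

```python
# Hedged sketch of gap-score computation: gap = perception - expectation
# per service dimension, plus the mean gap. Dimensions and scores here
# are made-up examples, not the study's HOLSERV data.

def gap_scores(expectations, perceptions):
    """Return per-item gaps (perception - expectation) and the mean gap."""
    gaps = {item: perceptions[item] - expectations[item] for item in expectations}
    mean_gap = sum(gaps.values()) / len(gaps)
    return gaps, mean_gap

expectations = {"reliability": 6.2, "responsiveness": 5.8, "tangibles": 5.5}
perceptions  = {"reliability": 5.4, "responsiveness": 5.9, "tangibles": 5.6}

gaps, mean_gap = gap_scores(expectations, perceptions)
# A negative gap means perceived service fell short of expectations.
```

A negative mean gap flags the dimensions where improvement effort should be directed first.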

  10. Applying dispersive changes to Lagrangian particles in groundwater transport models

    Science.gov (United States)

    Konikow, Leonard F.

    2010-01-01

    Method-of-characteristics groundwater transport models require that changes in concentrations computed within an Eulerian framework to account for dispersion be transferred to moving particles used to simulate advective transport. A new algorithm was developed to accomplish this transfer between nodal values and advecting particles more precisely and realistically compared to currently used methods. The new method scales the changes and adjustments of particle concentrations relative to limiting bounds of concentration values determined from the population of adjacent nodal values. The method precludes unrealistic undershoot or overshoot for concentrations of individual particles. In the new method, if dispersion causes cell concentrations to decrease during a time step, those particles in the cell having the highest concentration will decrease the most, and those with the lowest concentration will decrease the least. The converse is true if dispersion is causing concentrations to increase. Furthermore, if the initial concentration on a particle is outside the range of the adjacent nodal values, it will automatically be adjusted in the direction of the acceptable range of values. The new method is inherently mass conservative.
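The scaling rule described above can be paraphrased in code. This is a simplified reading of the idea, not Konikow's exact algorithm: the nodal change is distributed over particles in proportion to each particle's distance from the limiting bound, with weights normalized so the average particle change equals the nodal change (keeping the transfer mass conservative). All values are illustrative.

```python
# Hedged sketch, not the published algorithm: distribute a nodal
# concentration change `delta` over particles in a cell so that the
# highest-concentration particles take the largest share of a decrease
# (and vice versa for an increase), within bounds [c_min, c_max]
# derived from adjacent nodal values.

def adjust_particle_concentrations(c_particles, delta, c_min, c_max):
    """Weights are normalized to mean 1 so the mean particle change
    equals `delta`; assumes |delta| is small relative to the spread."""
    # Pull any particle outside the nodal range back toward it first.
    clamped = [min(max(c, c_min), c_max) for c in c_particles]
    if delta < 0:
        dist = [c - c_min for c in clamped]   # room to decrease
    else:
        dist = [c_max - c for c in clamped]   # room to increase
    mean_dist = sum(dist) / len(dist)
    if mean_dist == 0:
        return [c + delta for c in clamped]   # all particles at the bound
    return [c + delta * d / mean_dist for c, d in zip(clamped, dist)]

new = adjust_particle_concentrations([2.0, 5.0, 8.0], -1.5, 1.0, 9.0)
# Highest-concentration particle decreases the most; mean change is -1.5.
```

Note how the particle at 2.0 (near the lower bound) barely moves, which is what prevents undershoot.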

  11. A spectrophotometric model applied to cluster galaxies: the WINGS dataset

    CERN Document Server

    Fritz, J; Bettoni, D; Cava, A; Couch, W J; D'Onofrio, M; Dressler, A; Fasano, G; Kjaergaard, P; Moles, M; Varela, J

    2007-01-01

    [Abridged] The WIde-field Nearby Galaxy-cluster Survey (WINGS) is a project aiming at the study of the galaxy populations in clusters in the local universe (0.04model is the possibility of treating dust extinction as a function of age, allowing younger stars to be more obscured than older ones. Our technique, for the first time, takes into account this feature in a spectral fitting code. A set of template spectra spanning a wide range of star formation histories is built, with features closely resembling those of typical spectra in our sample in terms of spectral resolution, noise and wavelength coverage. Our method of analyzing these spectra allows us to test the reliability and the uncertainties related to each physical parameter we are inferring. The well-known degeneracy problem, i.e. the non-uniqu...
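The spectral-fitting idea described here can be illustrated with a toy example: model an observed spectrum as a combination of a young and an old stellar template, each with its own dust obscuration, and search for the extinction of the young population by minimizing chi-square. The templates, fluxes, and extinction treatment below are stand-ins, not the WINGS code or data.

```python
# Hedged toy sketch of age-dependent dust extinction in template fitting:
# younger stars get their own obscuration factor. Templates and "observed"
# fluxes are made-up stand-ins, not WINGS spectra.

def model_flux(young, old, a_young, a_old, dust_young, dust_old):
    """Combine a young and an old template, each dimmed by its own
    dust factor (0 = fully obscured, 1 = transparent)."""
    return [a_young * dust_young * y + a_old * dust_old * o
            for y, o in zip(young, old)]

def chi_square(observed, modeled, sigma=1.0):
    return sum((o - m) ** 2 / sigma ** 2 for o, m in zip(observed, modeled))

young = [1.0, 2.0, 3.0]   # toy young-population template
old = [3.0, 2.0, 1.0]     # toy old-population template
observed = model_flux(young, old, 1.0, 1.0, 0.5, 1.0)  # "truth": young half-obscured

# Grid search over the young population's dust factor only.
best_dust = min((i / 10 for i in range(1, 11)),
                key=lambda d: chi_square(observed, model_flux(young, old, 1.0, 1.0, d, 1.0)))
```

The fit recovers the input obscuration, which is the kind of reliability test the abstract describes running on template spectra with realistic noise and resolution.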

  12. Method for evaluating prediction models that apply the results of randomized trials to individual patients

    Directory of Open Access Journals (Sweden)

    Kattan Michael W

    2007-06-01

    Full Text Available Abstract Introduction The clinical significance of a treatment effect demonstrated in a randomized trial is typically assessed by reference to differences in event rates at the group level. An alternative is to make individualized predictions for each patient based on a prediction model. This approach is growing in popularity, particularly for cancer. Despite its intuitive advantages, it remains plausible that some prediction models may do more harm than good. Here we present a novel method for determining whether predictions from a model should be used to apply the results of a randomized trial to individual patients, as opposed to using group level results. Methods We propose applying the prediction model to a data set from a randomized trial and examining the results of patients for whom the treatment arm recommended by a prediction model is congruent with allocation. These results are compared with the strategy of treating all patients through use of a net benefit function that incorporates both the number of patients treated and the outcome. We examined models developed using data sets regarding adjuvant chemotherapy for colorectal cancer and Dutasteride for benign prostatic hypertrophy. Results For adjuvant chemotherapy, we found that patients who would opt for chemotherapy even for small risk reductions, and, conversely, those who would require a very large risk reduction, would on average be harmed by using a prediction model; those with intermediate preferences would on average benefit by allowing such information to help their decision making. Use of prediction could, at worst, lead to the equivalent of an additional death or recurrence per 143 patients; at best it could lead to the equivalent of a reduction in the number of treatments of 25% without an increase in event rates. 
In the Dutasteride case, where the average benefit of treatment is more modest, there is a small benefit of prediction modelling, equivalent to a reduction of
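The comparison described above can be sketched numerically: apply a prediction model to trial-like data and compare "treat everyone" against "treat only when the model recommends it" via a net benefit that weighs both outcomes and the number of patients treated. The patient data, threshold, and weighting below are illustrative assumptions, not the paper's exact net benefit function.

```python
# Hedged sketch of a net-benefit comparison. Each patient is
# (baseline_risk, risk_reduction_if_treated); values are made up.
# The weight w says how many events' worth of burden one treatment costs.

patients = [(0.30, 0.10), (0.20, 0.02), (0.10, 0.00), (0.40, 0.15)]

def strategy_net_benefit(treat, w):
    """Expected net benefit per patient: minus the expected event rate,
    minus w per treatment given (higher is better)."""
    n = len(patients)
    events = sum(risk - (reduction if treat(risk, reduction) else 0.0)
                 for risk, reduction in patients)
    treated = sum(1 for risk, reduction in patients if treat(risk, reduction))
    return -(events / n) - w * (treated / n)

treat_all = lambda risk, reduction: True
model_guided = lambda risk, reduction: reduction > 0.05  # treat if predicted benefit is large

nb_all = strategy_net_benefit(treat_all, w=0.03)
nb_model = strategy_net_benefit(model_guided, w=0.03)
# Here the model-guided strategy wins by withholding treatment from
# patients who barely benefit.
```

Varying `w` reproduces the paper's qualitative finding: at very low or very high treatment thresholds, model-guided allocation can do worse than treating everyone (or no one), while intermediate preferences benefit.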

  13. Heat transfer modeling an inductive approach

    CERN Document Server

    Sidebotham, George

    2015-01-01

    This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering discipl...
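A "1-node lumped model" of the kind the text advocates has a simple closed-form solution: the whole object is one node at uniform temperature, cooling as T(t) = T_inf + (T0 - T_inf)·exp(-t/τ) with time constant τ = mc/(hA). The parameter values below are illustrative, not an example from the book.

```python
import math

# Standard 1-node lumped-capacitance model: uniform object temperature,
# convective exchange with surroundings. Numbers below are made up.

def lumped_temperature(t, T0, T_inf, m, c, h, A):
    """Temperature at time t [s] of a body with mass m [kg], specific
    heat c [J/(kg K)], convection coefficient h [W/(m^2 K)], area A [m^2],
    initial temperature T0 and ambient T_inf [deg C]."""
    tau = m * c / (h * A)  # thermal time constant [s]
    return T_inf + (T0 - T_inf) * math.exp(-t / tau)

# A small object cooling in air for 10 minutes (illustrative values):
T_after = lumped_temperature(t=600.0, T0=95.0, T_inf=25.0,
                             m=0.1, c=500.0, h=15.0, A=0.01)
```

When this simple model's assumptions fail (large Biot number, internal gradients), one escalates to the numerical multi-node techniques the book develops next.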

  14. Modeling software with finite state machines a practical approach

    CERN Document Server

    Wagner, Ferdinand; Wagner, Thomas; Wolstenholme, Peter

    2006-01-01

    Modeling Software with Finite State Machines: A Practical Approach explains how to apply finite state machines to software development. It provides a critical analysis of using finite state machines as a foundation for executable specifications to reduce software development effort and improve quality. This book discusses the design of a state machine and of a system of state machines. It also presents a detailed analysis of development issues relating to behavior modeling with design examples and design rules for using finite state machines. This volume describes a coherent and well-tested fr
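The table-driven style the book describes can be shown in a few lines: behavior is a transition table keyed by (state, event) rather than nested conditionals. The turnstile states and events are a generic illustration, not one of the book's examples.

```python
# Minimal sketch of a table-driven finite state machine: a hypothetical
# turnstile with states "locked"/"unlocked" and events "coin"/"push".

TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(events, state="locked"):
    """Feed a sequence of events through the table; return the final state."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

final = run(["coin", "push", "push"])  # unlock, pass through, push again
```

Because all behavior lives in the table, the executable specification can be reviewed, extended, and checked for completeness independently of the driver code, which is the book's central argument.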

  15. A transformation approach to modelling multi-modal diffusions

    DEFF Research Database (Denmark)

    Forman, Julie Lyng; Sørensen, Michael

    2014-01-01

    when the diffusion is observed with additional measurement error. The new approach is applied to molecular dynamics data in the form of a reaction coordinate of the small Trp-zipper protein, from which the folding and unfolding rates of the protein are estimated. Because the diffusion coefficient...... is state-dependent, the new models provide a better fit to this type of protein folding data than the previous models with a constant diffusion coefficient, particularly when the effect of errors with a short time-scale is taken into account....
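The transformation idea behind this class of models can be sketched as follows: simulate a simple base diffusion (here an Ornstein-Uhlenbeck process) and observe it through a nonlinear monotone map, which yields a diffusion with a state-dependent diffusion coefficient and mass pushed toward multiple modes. The transformation and parameters are illustrative assumptions, not the paper's fitted model.

```python
import math
import random

# Hedged sketch of the transformation approach: X is a plain
# Ornstein-Uhlenbeck process; Y = h(X) with a nonlinear monotone h is a
# diffusion whose coefficient depends on state. Parameters are made up.

def simulate_ou(n, dt=0.01, theta=1.0, sigma=1.0, x0=0.0, seed=1):
    """Euler-Maruyama simulation of dX = -theta*X dt + sigma dW."""
    random.seed(seed)
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x - theta * x * dt + sigma * math.sqrt(dt) * random.gauss(0, 1))
    return xs

def h(x):
    # Monotone map, steep near 0 and flat in the tails: pushes mass of
    # Y = h(X) toward the two ends of (-1, 1).
    return math.tanh(2.0 * x)

ys = [h(x) for x in simulate_ou(5000)]
```

Estimation then runs in the opposite direction: transform the observed reaction coordinate back toward a tractable base diffusion, where likelihood methods (and measurement-error corrections) are available.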

  16. Laser modeling a numerical approach with algebra and calculus

    CERN Document Server

    Csele, Mark Steven

    2014-01-01

    Offering a fresh take on laser engineering, Laser Modeling: A Numerical Approach with Algebra and Calculus presents algebraic models and traditional calculus-based methods in tandem to make concepts easier to digest and apply in the real world. Each technique is introduced alongside a practical, solved example based on a commercial laser. Assuming some knowledge of the nature of light, emission of radiation, and basic atomic physics, the text: explains how to formulate an accurate gain threshold equation as well as determine small-signal gain; discusses gain saturation and introduces a novel pass
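The gain threshold equation mentioned above takes a standard form for a two-mirror cavity: threshold gain balances internal loss plus the distributed mirror loss, g_th = α + (1/2L)·ln(1/(R1·R2)). The numbers below are illustrative, not one of the book's commercial-laser examples.

```python
import math

# Standard two-mirror gain threshold condition; parameter values are
# illustrative, not taken from the book.

def gain_threshold(alpha, L, R1, R2):
    """Threshold gain coefficient [1/m] for a cavity of length L [m],
    internal loss alpha [1/m], and mirror reflectivities R1, R2."""
    return alpha + (1.0 / (2.0 * L)) * math.log(1.0 / (R1 * R2))

# A 30 cm cavity with a high reflector and a 95% output coupler:
g_th = gain_threshold(alpha=0.5, L=0.3, R1=0.999, R2=0.95)
```

With lossless, perfectly reflecting mirrors the threshold reduces to the internal loss alone, a useful sanity check when formulating the equation.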

  17. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
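The workflow described can be sketched end-to-end on a toy problem: (1) run the "expensive" model at a few sample points and build a cheap surrogate of the response, then (2) run Markov Chain Monte Carlo on the surrogate to update the uncertain parameter from an observation. The model, noise level, and sampler settings below are stand-ins, not the ocean-model setup from the talk.

```python
import math
import random

# Hedged sketch: surrogate-based Bayesian parameter inference.
# The "expensive model" is a toy stand-in for an ocean-model response.

def expensive_model(k):
    return 2.0 * k + 0.5 * k * k

# (1) Build a surrogate from three model runs (exact quadratic
# interpolation in Lagrange form; spectral projection in spirit).
samples = [0.0, 1.0, 2.0]
runs = [expensive_model(k) for k in samples]

def surrogate(k):
    (x0, x1, x2), (y0, y1, y2) = samples, runs
    return (y0 * (k - x1) * (k - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (k - x0) * (k - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (k - x0) * (k - x1) / ((x2 - x0) * (x2 - x1)))

# (2) Metropolis sampling of k given one noisy observation (flat prior).
def log_post(k, obs, noise=0.1):
    return -0.5 * ((surrogate(k) - obs) / noise) ** 2

random.seed(0)
obs = expensive_model(1.3)        # "measurement" generated at k = 1.3
k, chain = 0.5, []
for _ in range(4000):
    prop = k + random.gauss(0, 0.2)
    if math.log(random.random()) < log_post(prop, obs) - log_post(k, obs):
        k = prop
    chain.append(k)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
```

All posterior evaluations hit the cheap surrogate, never the expensive model, which is the point of the approach; the adjoint-based alternative mentioned in the talk would replace the MCMC loop with a gradient-driven optimization.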

  18. Self-Efficacy, Planning, and Drink Driving: Applying the Health Action Process Approach.

    Science.gov (United States)

    Wilson, Hollie; Sheehan, Mary; Palk, Gavan; Watson, Angela

    2016-05-19

    This study examines the constructs from the health action process approach (HAPA) theoretical model (Schwarzer, 1992) on future drink driving avoidance by first time drink driving offenders. This research presents an advance in health related theory by the novel application of the health model to predict risk avoidance. Baseline interviews were conducted with 198 first time drink driving offenders at the time of court appearance, and offenders were followed up 6-8 months following the offense date. The key outcome variables used in 3 stages were behavioral expectation, planning, and self-reported avoidance of drink driving at follow-up. Bivariate and multivariate analyses were conducted for each stage. High task self-efficacy and female gender were significantly related to having no behavioral expectation of future drink driving. High maintenance self-efficacy was significantly related to high levels of planning to avoid future drink driving. Those with higher planning scores at baseline had significantly higher odds of reporting that they had avoided drink driving at follow-up. Planning plays an important role in drink driving rehabilitation and should be a focus of early intervention programs aimed at reducing drink driving recidivism following a first offense. Self-efficacy is an important construct to consider for the behavior and could strengthen a planning focused intervention. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Maturity Models 101: A Primer for Applying Maturity Models to Smart Grid Security, Resilience, and Interoperability

    Science.gov (United States)

    2012-11-01

    needed to meet challenge problems. These models have been sponsored by governments, individual organizations, and consortia (including industry-specific...in technology and its application in the electric power industry, leading to the introduction of many new systems, business processes, markets, and enterprise integration approaches. How do you manage the...

  20. A Bayesian Shrinkage Approach for AMMI Models.

    Science.gov (United States)

    da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio

    2015-01-01

    Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criterion for determining the number of multiplicative terms required to describe the GEI pattern. Shrinkage estimators have been proposed as selection criteria for the GEI components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible intervals for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, with more of the GEI pattern retained in the first two components; the selected models were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and by cross-validation based on leave-one-out. The model chosen by the posterior distribution of the singular values was also similar to that produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible intervals for the AMMI biplot plus the choice of AMMI model based on direct posterior
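The linear-bilinear decomposition at the heart of AMMI can be illustrated directly: double-center a genotype-by-environment table to isolate the interaction matrix, then approximate it with a small number of multiplicative terms (here just the first, obtained by power iteration). The yield table is made-up data, not the maize trial's.

```python
import math

# Hedged sketch of the AMMI decomposition on a toy 3x3 yield table
# (genotypes x environments). Values are invented for illustration.

Y = [[5.0, 6.0, 7.0],
     [6.0, 5.0, 7.5],
     [7.0, 6.5, 6.0]]

def interaction_matrix(Y):
    """Double-centering: remove genotype and environment main effects."""
    g, e = len(Y), len(Y[0])
    row = [sum(r) / e for r in Y]
    col = [sum(Y[i][j] for i in range(g)) / g for j in range(e)]
    grand = sum(row) / g
    return [[Y[i][j] - row[i] - col[j] + grand for j in range(e)]
            for i in range(g)]

def first_term(M, iters=200):
    """Leading multiplicative term lambda * u * v' via power iteration."""
    g, e = len(M), len(M[0])
    v = [float(j + 1) for j in range(e)]  # start away from the all-ones null direction
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(e)) for i in range(g)]
        nu = math.sqrt(sum(x * x for x in u))
        u = [x / nu for x in u]
        v = [sum(M[i][j] * u[i] for i in range(g)) for j in range(e)]
        nv = math.sqrt(sum(x * x for x in v))
        v = [x / nv for x in v]
    lam = sum(u[i] * M[i][j] * v[j] for i in range(g) for j in range(e))
    return lam, u, v

GE = interaction_matrix(Y)
lam, u, v = first_term(GE)
# lam**2 over the total squared interaction is the share of GEI
# explained by the first multiplicative term.
```

The debate the abstract addresses is how many such terms to keep; the shrinkage priors act on the singular values (`lam` and its successors) rather than truncating them outright.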