WorldWideScience

Sample records for modeling approach based

  1. Model Construct Based Enterprise Model Architecture and Its Modeling Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In order to support enterprise integration, a model-construct-based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of a reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. The MC-based modeling approach is illustrated by a case study with the background of one-kind-product machinery manufacturing enterprises. It is shown that the proposed model-construct-based enterprise model architecture and modeling approach are practical and efficient.

  2. Model based feature fusion approach

    NARCIS (Netherlands)

    Schwering, P.B.W.

    2001-01-01

    In recent years different sensor data fusion approaches have been analyzed and evaluated in the field of mine detection. In various studies comparisons have been made between different techniques. Although claims of advantages can be made for certain techniques, until now there has been no si

  3. Model Mapping Approach Based on Ontology Semantics

    Directory of Open Access Journals (Sweden)

    Jinkui Hou

    2013-09-01

    Full Text Available The mapping relations between different models are the foundation for model transformation in model-driven software development. On the basis of ontology semantics, model mappings between different levels are classified by using the structural semantics of modeling languages. The general definition process for mapping relations is explored, and the principles of structure mapping are proposed subsequently. The approach is further illustrated by the mapping relations from the class model of an object-oriented modeling language to C programming codes. The application research shows that the approach provides theoretical guidance for the realization of model mapping, and thus can effectively support model-driven software development.

  4. A Multiple Model Approach to Modeling Based on LPF Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Input-output data fitting methods are often used for unknown-structure nonlinear system modeling. Based on model-on-demand tactics, a multiple model approach to modeling for nonlinear systems is presented. The basic idea is to find out, from vast historical system input-output data sets, some data sets matching the current working point, then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. As the working point changes, multiple local models are built, which together realize exact modeling of the global system. Compared with other methods, the simulation results show good performance: the estimation is simple, effective, and reliable.
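
    As a rough illustration of the model-on-demand idea (not the paper's implementation), the sketch below fits a weighted local polynomial over the k historical samples nearest the current working point; all names and parameter values are invented for the example:

    ```python
    import numpy as np

    def lpf_predict(x_hist, y_hist, x_query, k=50, degree=2):
        """Fit a local polynomial model around the current working point
        using its k nearest historical samples (a model-on-demand sketch)."""
        x_hist = np.asarray(x_hist, dtype=float)
        y_hist = np.asarray(y_hist, dtype=float)
        d = np.abs(x_hist - x_query)             # distance to the working point
        idx = np.argsort(d)[:k]                  # k closest historical samples
        u = d[idx] / max(d[idx].max(), 1e-12)
        w = (1.0 - u**3) ** 3                    # tricube weights: near points dominate
        coeffs = np.polyfit(x_hist[idx], y_hist[idx], degree, w=w)
        return np.polyval(coeffs, x_query)       # local model evaluated at the query
    ```

    Calling such a routine at each new working point yields the succession of local models that together cover the global system.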

  5. Evaluating face trustworthiness: a model based approach.

    Science.gov (United States)

    Todorov, Alexander; Baron, Sean G; Oosterhof, Nikolaas N

    2008-06-01

    Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response: as the untrustworthiness of faces increased, so did the amygdala response. Areas in the left and right putamen, the latter area extending into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic: strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.

  6. A model-based multisensor data fusion knowledge management approach

    Science.gov (United States)

    Straub, Jeremy

    2014-06-01

    A variety of approaches exist for combining data from multiple sensors. The model-based approach combines data based on its support for or refutation of elements of the model, which in turn can be used to evaluate an experimental thesis. This paper presents a collection of algorithms for mapping various types of sensor data onto a thesis-based model and evaluating the truth or falsity of the thesis, based on the model. The use of this approach for autonomously arriving at findings and for prioritizing data is considered. Techniques for updating the model (instead of arriving at a true/false assertion) are also discussed.

  7. An approach for activity-based DEVS model specification

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2016-01-01

    …activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...

  8. Pattern-based approach for logical traffic isolation forensic modelling

    CSIR Research Space (South Africa)

    Dlamini, I

    2009-08-01

    Full Text Available The use of design patterns usually changes the approach to software design and makes software development relatively easy. This paper extends work on a forensic model for Logical Traffic Isolation (LTI) based on Differentiated Services (Diff...

  9. A New Detection Approach Based on the Maximum Entropy Model

    Institute of Scientific and Technical Information of China (English)

    DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua

    2006-01-01

    The maximum entropy model was introduced and a new intrusion detection approach based on the maximum entropy model was proposed. The vector space model was adopted for data representation. The minimal entropy partitioning method was utilized for attribute discretization. Experiments were designed on the KDD CUP 1999 standard data set and the experimental results are shown. The receiver operating characteristic (ROC) curve analysis approach was utilized to analyze the experimental results. The analysis shows that the proposed approach is comparable to those based on support vector machines (SVM) and outperforms those based on C4.5 and naive Bayes classifiers. According to the overall evaluation result, the proposed approach is a little better than those based on SVM.
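
    A maximum entropy classifier over discretized attributes is equivalent to multinomial logistic regression on indicator features, so the approach can be sketched as below. Synthetic data stands in for KDD CUP 1999, and quantile binning stands in for the paper's minimal-entropy partitioning:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.preprocessing import KBinsDiscretizer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))                        # connection features (synthetic)
    y = (X[:, 0] + rng.normal(size=1000) > 1).astype(int)  # 1 = intrusion, 0 = normal

    # Attribute discretization, then one-hot indicators for the log-linear model.
    X_disc = KBinsDiscretizer(n_bins=5, encode="onehot-dense",
                              strategy="quantile").fit_transform(X)

    # Logistic regression maximizes the same likelihood as the maximum entropy model.
    clf = LogisticRegression(max_iter=1000).fit(X_disc, y)
    print("ROC AUC:", roc_auc_score(y, clf.predict_proba(X_disc)[:, 1]))
    ```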

  10. MDA based-approach for UML Models Complete Comparison

    CERN Document Server

    Chaouni, Samia Benabdellah; Mouline, Salma

    2011-01-01

    If a modeling task is distributed, it will frequently be necessary to integrate models developed by different team members. Problems occur in the model integration step, particularly in the comparison phase of the integration. This issue has been discussed in several domains and for various models. However, previous approaches have not correctly handled the semantic comparison. In the current paper, we provide an MDA-based approach for model comparison which aims at comparing UML models. We develop a hybrid approach which takes into account syntactic, semantic, and structural comparison aspects. For this purpose, we use the domain ontology as well as other resources such as dictionaries. We propose a decision support system which permits the user to validate (or not) correspondences extracted in the comparison phase. For implementation, we propose an extension of the generic correspondence metamodel AMW in order to transform UML models to the correspondence model.

  11. A Comparison of Filter-based Approaches for Model-based Prognostics

    Data.gov (United States)

    National Aeronautics and Space Administration — Model-based prognostics approaches use domain knowledge about a system and its failure modes through the use of physics-based models. Model-based prognosis is...

  12. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricanes Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  13. A discrete Lagrangian based direct approach to macroscopic modelling

    Science.gov (United States)

    Sarkar, Saikat; Nowruzpour, Mohsen; Reddy, J. N.; Srinivasa, A. R.

    2017-01-01

    A direct discrete-Lagrangian-based approach, designed at a length scale of interest, is proposed to characterize the response of a body. The main idea is to understand the dynamics of a deformable body via a Lagrangian corresponding to a coupled interaction of rigid particles in the reduced dimension. We argue that the usual practice of describing the laws of a deformable body in the continuum limit is redundant, because for most practical problems analytical solutions are not available. Since the continuum limit is not taken, the framework automatically relaxes the requirement of differentiability of field variables. The discrete-Lagrangian-based approach is illustrated by deriving an equivalent of the Euler-Bernoulli beam model. A few test examples are solved, which demonstrate that the derived non-local model predicts lower deflections in comparison to classical Euler-Bernoulli beam solutions. We have also included crack propagation in thin structures for isotropic and anisotropic cases using the Lagrangian-based approach.

  14. Building enterprise reuse program: A model-based approach

    Institute of Scientific and Technical Information of China (English)

    梅宏; 杨芙清

    2002-01-01

    Reuse is viewed as a realistically effective approach to solving the software crisis. For an organization that wants to build a reuse program, technical and non-technical issues must be considered in parallel. In this paper, a model-based approach to building a systematic reuse program is presented. Component-based reuse is currently a dominant approach to software reuse. In this approach, building the right reusable component model is the first important step. In order to achieve systematic reuse, a set of component models should be built from different perspectives. Each of these models gives a specific view of the components so as to satisfy the different needs of the different persons involved in the enterprise reuse program. There already exist some component models for reuse from technical perspectives. But less attention has been paid to reusable components from a non-technical view, especially from the view of process and management. In our approach, a reusable component model, the FLP model for reusable components, is introduced. This model describes components along three dimensions (Form, Level, and Presentation) and views components and their relationships from the perspective of process and management. It determines the sphere of reusable components, the time points for reusing components in the development process, and the means needed to present components in terms of abstraction level, logic granularity and presentation media. Being the basis on which management and technical decisions are made, our model is used as the kernel model to initialize and normalize a systematic enterprise reuse program.

  15. Non-frontal model based approach to forensic face recognition

    NARCIS (Netherlands)

    Dutta, Abhishek; Veldhuis, Raymond; Spreeuwers, Luuk

    2012-01-01

    In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having a similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having a pose similar to the surveillance view…

  16. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer the wind drag, bottom drag, and internal mixing coefficients.
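
    The two-stage pattern described above (cheap surrogate first, then Bayesian update) can be sketched as follows; the forward model, prior, and observation are invented stand-ins for the expensive ocean model and real measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def forward(theta):                  # stand-in for an expensive ocean-model run
        return np.sin(3 * theta) + theta**2

    # Stage 1: polynomial surrogate built from a handful of forward-model runs.
    train = np.linspace(0.0, 1.0, 8)
    surrogate = np.poly1d(np.polyfit(train, forward(train), deg=4))

    # Stage 2: Metropolis sampling of the posterior against one noisy observation.
    obs, sigma = forward(0.6) + 0.01, 0.05
    def log_post(t):
        return -0.5 * ((obs - surrogate(t)) / sigma) ** 2 if 0 <= t <= 1 else -np.inf

    chain, t = [], 0.5
    for _ in range(5000):
        prop = t + 0.05 * rng.normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(t):
            t = prop
        chain.append(t)
    print("posterior mean:", np.mean(chain[1000:]))
    ```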

  17. Model-based approach for elevator performance estimation

    Science.gov (United States)

    Esteban, E.; Salgado, O.; Iturrospe, A.; Isasa, I.

    2016-02-01

    In this paper, a dynamic model for an elevator installation is presented in the state space domain. The model comprises both the mechanical and the electrical subsystems, including the electrical machine and a closed-loop field-oriented control. The proposed model is employed for monitoring the condition of the elevator installation. The adopted model-based approach for monitoring employs a Kalman filter as an observer. The Kalman observer estimates the elevator car acceleration, which determines the elevator ride quality, based solely on the machine control signature and the encoder signal. Five elevator key performance indicators are then calculated based on the estimated car acceleration. The proposed procedure is experimentally evaluated by comparing the key performance indicators calculated from the estimated car acceleration with the values obtained from actual acceleration measurements in a test bench. Finally, the proposed procedure is compared with a sliding mode observer.
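
    A minimal Kalman-observer sketch in the spirit of the abstract: a kinematic position/velocity/acceleration state is estimated from an encoder-like position signal. The paper's actual state-space model also covers the electrical machine and its field-oriented control loop; all matrices below are illustrative tuning choices:

    ```python
    import numpy as np

    dt = 0.01                                      # encoder sample time (assumed)
    F = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])                # position / velocity / acceleration
    H = np.array([[1.0, 0.0, 0.0]])                # only position is measured
    Q = 1e-4 * np.eye(3)                           # process noise (tuning)
    R = np.array([[1e-3]])                         # measurement noise (tuning)

    def kalman_step(x, P, z):
        """One predict/update cycle; x[2, 0] is the estimated car acceleration."""
        x, P = F @ x, F @ P @ F.T + Q                        # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)                # update with reading z
        P = (np.eye(3) - K @ H) @ P
        return x, P
    ```

    Ride-quality indicators can then be computed from the stream of acceleration estimates.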

  18. A Model-based Prognostics Approach Applied to Pneumatic Valves

    Directory of Open Access Journals (Sweden)

    Matthew J. Daigle

    2011-01-01

    Full Text Available Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction; therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
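
    The particle-filter core of such a prognostics loop can be sketched as follows, with a hypothetical scalar damage parameter in place of the paper's physics-based valve state:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 500
    particles = rng.normal(1.0, 0.05, size=N)    # belief over a damage parameter

    def pf_step(z, growth=0.01, proc_sd=0.005, meas_sd=0.02):
        """Propagate through the degradation model, weight by the measurement
        likelihood, then resample (one filtering cycle)."""
        global particles
        particles = particles + growth + proc_sd * rng.normal(size=N)
        w = np.exp(-0.5 * ((z - particles) / meas_sd) ** 2)
        particles = particles[rng.choice(N, size=N, p=w / w.sum())]

    def eol_estimate(threshold=2.0, growth=0.01):
        """Remaining steps until the failure threshold, as 5/50/95 percentiles,
        expressing the prediction together with its uncertainty."""
        return np.percentile((threshold - particles) / growth, [5, 50, 95])
    ```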

  19. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    Science.gov (United States)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction; therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.

  20. A model-based approach to human identification using ECG

    Science.gov (United States)

    Homer, Mark; Irvine, John M.; Wendelken, Suzanne

    2009-05-01

    Biometrics, such as fingerprint, iris scan, and face recognition, offer methods for identifying individuals based on a unique physiological measurement. Recent studies indicate that a person's electrocardiogram (ECG) may also provide a unique biometric signature. Current techniques for identification using ECG rely on empirical methods for extracting features from the ECG signal. This paper presents an alternative approach based on a time-domain model of the ECG trace. Because Auto-Regressive Integrated Moving Average (ARIMA) models form a rich class of descriptors for representing the structure of periodic time series data, they are well-suited to characterizing the ECG signal. We present a method for modeling the ECG, extracting features from the model representation, and identifying individuals using these features.
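
    The pipeline in the abstract (fit a time-series model per trace, use its coefficients as the signature, match by distance) might look like the following sketch; the model order and the distance metric are placeholders, not the authors' choices:

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def ecg_signature(trace, order=(4, 0, 2)):
        """Fit an ARIMA-family model to one ECG trace and return its
        coefficient vector as a biometric feature."""
        return ARIMA(np.asarray(trace, dtype=float), order=order).fit().params

    def identify(query_trace, enrolled):
        """enrolled: dict mapping person -> stored signature; returns the
        nearest enrolled identity in coefficient space."""
        q = ecg_signature(query_trace)
        return min(enrolled, key=lambda name: np.linalg.norm(q - enrolled[name]))
    ```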

  1. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful addition to the metabolic engineering toolkit, and that they can yield actionable insights. Key concepts are developed, and deliverable publications and results are presented.

  2. Modeling the crop transpiration using an optimality-based approach

    Institute of Scientific and Technical Information of China (English)

    Stanislaus J. Schymanski; Murugesu Sivapalan

    2008-01-01

    Evapotranspiration constitutes more than 80% of the long-term water balance in Northern China. In this area, crop transpiration due to large areas of agriculture and irrigation is responsible for the majority of evapotranspiration. A model for crop transpiration is therefore essential for estimating agricultural water consumption and understanding its feedback to the environment. However, most existing hydrological models usually calculate transpiration by relying on parameter calibration against local observations, and do not take into account crop feedback to the ambient environment. This study presents an optimality-based ecohydrology model that couples an ecological hypothesis, the photosynthetic process, stomatal movement, water balance, root water uptake and crop senescence, with the aim of predicting crop characteristics, CO2 assimilation and water balance based only on given meteorological data. Field experiments were conducted in the Weishan Irrigation District of Northern China to evaluate the performance of the model. Agreement between simulation and measurement was achieved for CO2 assimilation, evapotranspiration and soil moisture content. The vegetation optimality was proven valid for crops and the model was applicable to both C3 and C4 plants. Due to the simple scheme of the optimality-based approach as well as its capability for modeling dynamic interactions between crops and the water cycle without prior vegetation information, this methodology is potentially useful to couple with a distributed hydrological model for application at the watershed scale.

  3. A Nonhydrostatic Model Based On A New Approach

    Science.gov (United States)

    Janjic, Z. I.

    Considerable experience with nonhydrostatic models has been accumulated on the scales of convective clouds and storms. However, numerical weather prediction (NWP) deals with motions on a much wider range of temporal and spatial scales. Thus, difficulties that may not be significant on the small scales may become important in NWP applications. Having in mind these considerations, a new approach has been proposed and applied in developing nonhydrostatic models intended for NWP applications. Namely, instead of extending the cloud models to synoptic scales, the hydrostatic approximation is relaxed in a hydrostatic NWP model. In this way the model validity is extended to nonhydrostatic motions, and at the same time favorable features of the hydrostatic formulation are preserved. In order to apply this approach, the system of nonhydrostatic equations is split into two parts: (a) the part that corresponds to the hydrostatic system, except for corrections due to vertical acceleration, and (b) the system of equations that allows computation of the corrections appearing in the first system. This procedure does not require any additional approximation. In the model, "isotropic" horizontal finite differencing is employed that conserves a number of basic and derived dynamical and quadratic quantities. The hybrid pressure-sigma vertical coordinate has been chosen as the primary option. The forward-backward scheme is used for horizontally propagating fast waves, and an implicit scheme is used for vertically propagating sound waves. The Adams-Bashforth scheme is applied for the advection of the basic dynamical variables and for the Coriolis terms. In real data runs, the nonhydrostatic dynamics does not require extra computational boundary conditions at the top. The philosophy of the physical package and possible future developments of physical parameterizations are also reviewed. A two-dimensional model based on the described approach successfully reproduced classical

  4. Conceptual modelling approach of mechanical products based on functional surface

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A modelling framework based on functional surfaces is presented to support the conceptual design of mechanical products. The framework organizes product information in an abstract and multilevel manner. It consists of two mapping processes: the function decomposition process and the form reconstitution process. A steady mapping relationship from function to form (function-functional surface-form) is realized by taking the functional surface as the middle layer. This greatly reduces the possibility of combinatorial explosion during function decomposition and form reconstitution. Finally, CAD tools are developed and an auto-bender machine is used to demonstrate the proposed approach.

  5. MODEL-BASED PERFORMANCE EVALUATION APPROACH FOR MOBILE AGENT SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Li Xin; Mi Zhengkun; Meng Xudong

    2004-01-01

    Claimed as the next-generation programming paradigm, mobile agent technology has attracted extensive interest in recent years. However, up to now, limited research effort has been devoted to the performance study of mobile agent systems, and most of this research focuses on agent behavior analysis, with the result that the models are hard to apply to mobile agent systems. To bridge the gap, a new performance evaluation model derived from the operation mechanisms of mobile agent platforms is proposed. Details are discussed for the design of companion simulation software, which can provide system performance measures such as the response time of the platform to a mobile agent. The determination of model parameters is then investigated. Finally, a comparison is made between the model-based simulation results and the measurement-based real performance of mobile agent systems. The results show that the proposed model and the designed software are effective in evaluating the performance characteristics of mobile agent systems. The proposed approach can also be considered as the basis of performance analysis for large systems composed of multiple mobile agent platforms.

  6. A model-based approach to selection of tag SNPs

    Directory of Open Access Journals (Sweden)

    Sun Fengzhu

    2006-06-01

    Full Text Available Abstract Background Single Nucleotide Polymorphisms (SNPs) are the most common type of polymorphism found in the human genome. Effective genetic association studies require the identification of sets of tag SNPs that capture as much haplotype information as possible. Tag SNP selection is analogous to the problem of data compression in information theory. According to Shannon's framework, the optimal tag set maximizes the entropy of the tag SNPs subject to constraints on the number of SNPs. This approach requires an appropriate probabilistic model. Compared to simple measures of Linkage Disequilibrium (LD), a good model of haplotype sequences can more accurately account for LD structure. It also provides machinery for the prediction of tagged SNPs and thereby for assessing the performance of tag sets through their ability to predict larger SNP sets. Results Here, we compute the description code-lengths of SNP data for an array of models and we develop tag SNP selection methods based on these models and the strategy of entropy maximization. Using data sets from the HapMap and ENCODE projects, we show that the hidden Markov model introduced by Li and Stephens outperforms the other models in several aspects: description code-length of SNP data, information content of tag sets, and prediction of tagged SNPs. This is the first use of this model in the context of tag SNP selection. Conclusion Our study provides strong evidence that the tag sets selected by our best method, based on the Li and Stephens model, outperform those chosen by several existing methods. The results also suggest that information content evaluated with a good model is more sensitive for assessing the quality of a tagging set than the correct prediction rate of tagged SNPs. Besides, we show that haplotype phase uncertainty has an almost negligible impact on the ability of good tag sets to predict tagged SNPs. This justifies the selection of tag SNPs on the basis of haplotype
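
    The entropy-maximization strategy can be illustrated with a greedy selection sketch. Note the paper scores candidate sets under an explicit haplotype model (notably the Li and Stephens hidden Markov model); plain empirical entropy is used here only as a simple stand-in:

    ```python
    import numpy as np
    from collections import Counter

    def joint_entropy(columns):
        """Empirical entropy (bits) of the joint pattern across the given columns."""
        counts = Counter(map(tuple, columns))
        p = np.array(list(counts.values()), dtype=float)
        p /= p.sum()
        return float(-(p * np.log2(p)).sum())

    def greedy_tag_snps(H, k):
        """H: (n_haplotypes, n_snps) 0/1 matrix; greedily pick k tag SNPs that
        maximize the joint entropy of the selected columns."""
        chosen = []
        for _ in range(k):
            best = max((j for j in range(H.shape[1]) if j not in chosen),
                       key=lambda j: joint_entropy(H[:, chosen + [j]]))
            chosen.append(best)
        return chosen
    ```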

  7. Lithium battery aging model based on Dakin's degradation approach

    Science.gov (United States)

    Baghdadi, Issam; Briat, Olivier; Delétage, Jean-Yves; Gyan, Philippe; Vinassa, Jean-Michel

    2016-09-01

    This paper proposes and validates a calendar and power cycling aging model for two different lithium battery technologies. The model development is based on data from the previous SIMCAL and SIMSTOCK projects, in which the effect of the battery state of charge, temperature and current magnitude on aging was studied on a large panel of different battery chemistries. In this work, the data are analyzed using Dakin's degradation approach: the logarithms of the battery capacity fade and of the resistance increase evolve linearly over aging. The slopes identified from these straight lines correspond to battery aging rates. Thus, an expression for the battery aging rate as a function of the aging factors was deduced and found to be governed by Eyring's law. The proposed model simulates the capacity fade and resistance increase as functions of the influencing aging factors. Its expansion using a Taylor series is consistent with semi-empirical models based on the square root of time, which are widely studied in the literature. Finally, the influence of the current magnitude and temperature on aging was simulated. Interestingly, the aging rate increases sharply with decreasing temperature in the range -5 °C to 25 °C and with increasing temperature in the range 25 °C to 60 °C.
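
    The Dakin reading of aging data reduces to two straight-line fits, sketched below with invented data shapes: the slope of log-degradation versus time gives the aging rate at one condition, and regressing log-rate on inverse temperature recovers an Arrhenius/Eyring-type law (the paper's full rate law also involves state of charge and current magnitude):

    ```python
    import numpy as np

    def aging_rate(t, degradation):
        """Dakin's approach: the log of the degradation measure (capacity fade
        or resistance increase) evolves linearly in time; the slope is the rate."""
        slope, _intercept = np.polyfit(t, np.log(degradation), 1)
        return slope

    def fit_rate_law(T_kelvin, rates):
        """Eyring/Arrhenius-style fit: ln k = a + b / T (so b is ~ -Ea/R)."""
        b, a = np.polyfit(1.0 / np.asarray(T_kelvin, dtype=float), np.log(rates), 1)
        return a, b
    ```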

  8. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    In recent years process intensification (PI) has attracted much interest as a potential means of process improvement to meet demands such as sustainable production. A variety of intensified equipment is being developed that potentially creates options to meet these demands… for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model… Here, established procedures for computer-aided molecular design are adopted, since combining phenomena to form unit operations with desired objectives is, in principle, similar to combining atoms to form molecules with desired properties. The concept of the phenomena-based synthesis/design method…

  9. A Multiple Model Approach to Modeling Based on Fuzzy Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 张艳珠; 宋春林; 邵惠鹤

    2003-01-01

    A new multiple-model (MM) approach was proposed to model complex industrial processes by using Fuzzy Support Vector Machines (F_SVMs). By applying the proposed approach to a pH neutralization titration experiment, the F_SVMs MM not only provides satisfactory approximation and generalization properties, but also achieves performance superior to the USOCPN multiple modeling method and the single modeling method based on standard SVMs.

  10. An approach to model based testing of multiagent systems.

    Science.gov (United States)

    Ur Rehman, Shafiq; Nadeem, Aamer

    2015-01-01

    Autonomous agents perform on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence into the working of a multiagent system. The Prometheus methodology is a commonly used approach to design multiagent systems. Systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agent and actors are considered to test the multiagent system. These interactions include percepts and actions along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion.
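
    The protocol-graph step can be sketched as follows for an all-edges coverage criterion; the graph, node names, and greedy walk are invented for illustration (a real tool would use a proper graph search):

    ```python
    def edge_coverage_paths(graph, start, max_len=50):
        """Generate test paths from `start` until every edge is covered."""
        uncovered = {(u, v) for u, vs in graph.items() for v in vs}
        paths = []
        while uncovered:
            remaining = len(uncovered)
            node, path = start, [start]
            while graph[node] and len(path) < max_len:
                nxt = next((v for v in graph[node] if (node, v) in uncovered),
                           graph[node][0])      # prefer an uncovered edge
                uncovered.discard((node, nxt))
                path.append(nxt)
                node = nxt
            paths.append(path)
            if len(uncovered) == remaining:     # unreachable edges: stop
                break
        return paths

    # Toy protocol graph: percepts/messages/actions as edges between nodes.
    protocol = {"start": ["percept"], "percept": ["agentA"],
                "agentA": ["agentB", "action"], "agentB": ["agentA", "action"],
                "action": []}
    print(edge_coverage_paths(protocol, "start"))
    ```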

  11. An Approach to Model Based Testing of Multiagent Systems

    Directory of Open Access Journals (Sweden)

    Shafiq Ur Rehman

    2015-01-01

    Full Text Available Autonomous agents perform on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence into the working of a multiagent system. The Prometheus methodology is a commonly used approach to design multiagent systems. Systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agent and actors are considered to test the multiagent system. These interactions include percepts and actions along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion.

  12. Structure-based molecular modeling approaches to GPCR oligomerization.

    Science.gov (United States)

    Kaczor, Agnieszka A; Selent, Jana; Poso, Antti

    2013-01-01

    Classical structure-based drug design techniques using G-protein-coupled receptors (GPCRs) as targets focus nearly exclusively on binding at the orthosteric site of a single receptor. Dimerization and oligomerization of GPCRs, proposed almost 30 years ago, have, however, crucial relevance for drug design. Targeting these complexes selectively or designing small molecules that affect receptor-receptor interactions might provide new opportunities for novel drug discovery. In order to study the mechanisms and dynamics that rule GPCR oligomerization, it is essential to understand the dynamic process of receptor-receptor association and to identify regions that are suitable for selective drug binding; these may be determined with experimental methods such as Förster resonance energy transfer (FRET) or bioluminescence resonance energy transfer (BRET) and with computational sequence- and structure-based approaches. The aim of this chapter is to provide a comprehensive description of the structure-based molecular modeling methods for studying GPCR dimerization, that is, protein-protein docking, molecular dynamics, normal mode analysis, and electrostatics studies.

  13. A mechanism-based approach to modeling ductile fracture.

    Energy Technology Data Exchange (ETDEWEB)

    Bammann, Douglas J.; Hammi, Youssef; Antoun, Bonnie R.; Klein, Patrick A.; Foulk, James W., III; McFadden, Sam X.

    2004-01-01

    Ductile fracture in metals has been observed to result from the nucleation, growth, and coalescence of voids. The evolution of this damage is inherently history dependent, affected by how time-varying stresses drive the formation of defect structures in the material. At some critically damaged state, the softening response of the material leads to strain localization across a surface that, under continued loading, becomes the faces of a crack in the material. Modeling localization of strain requires introduction of a length scale to make the energy dissipated in the localized zone well-defined. In this work, a cohesive zone approach is used to describe the post-bifurcation evolution of material within the localized zone. The relations are developed within a thermodynamically consistent framework that incorporates temperature- and rate-dependent evolution relationships motivated by dislocation mechanics. As such, we do not prescribe the evolution of tractions with opening displacements across the localized zone a priori. The evolution of tractions is itself an outcome of the solution of particular initial boundary value problems. The stress and internal state of the material at the point of bifurcation provides the initial conditions for the subsequent evolution of the cohesive zone. The models we develop are motivated by in situ scanning electron microscopy of three-point bending experiments using 6061-T6 aluminum and 304L stainless steel. The in situ observations of the initiation and evolution of fracture zones reveal the scale over which the failure mechanisms act. In addition, these observations are essential for motivating the micromechanically-based models of the decohesion process that incorporate the effects of loading mode mixity, temperature, and loading rate. The response of these new cohesive zone relations is demonstrated by modeling the three-point bending configuration used for the experiments. In addition, we survey other methods with the potential

  14. A cost minimisation analysis in teledermatology: model-based approach

    Directory of Open Access Journals (Sweden)

    Eminović Nina

    2010-08-01

    Full Text Available Abstract Background Although store-and-forward teledermatology is increasingly becoming popular, evidence on its effects on efficiency and costs is lacking. The aim of this study, performed in addition to a clustered randomised trial, was to investigate to what extent and under which conditions store-and-forward teledermatology can reduce costs from a societal perspective. Methods A cost minimisation study design (a model-based approach) was applied to compare teledermatology and conventional process costs per dermatology patient care episode. Regarding the societal perspective, total mean costs of investment, general practitioner, dermatologists, out-of-pocket expenses and employer costs were calculated. Uncertainty analysis was performed using Monte Carlo simulation with 31 distributions in the cost model. Scenario analysis was performed using one-way and two-way sensitivity analyses with the following variables: the patient travel distance to physician and dermatologist, the duration of teleconsultation activities, and the proportion of preventable consultations. Results Total mean costs of the teledermatology process were €387 (95% CI, 281 to 502.5), while the total mean costs of the conventional process were €354.0 (95% CI, 228.0 to 484.0). The total mean difference between the processes was €32.5 (95% CI, -29.0 to 74.7). Savings by teledermatology can be achieved if the distance to a dermatologist is larger (>= 75 km) or when more consultations (>= 37%) can be prevented due to teledermatology. Conclusions Teledermatology, when applied to all dermatology referrals, has a probability of 0.11 of being cost saving to society. In order to achieve cost savings by teledermatology, teledermatology should be applied in only those cases with a reasonable probability that a live consultation can be prevented. Trial Registration This study is performed partially based on the PERFECT D Trial (Current Controlled Trials No. ISRCTN57478950).
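
    The Monte Carlo uncertainty analysis can be sketched as below. The two aggregate cost distributions are invented stand-ins for the study's 31 input distributions (their means echo the reported values, but the sketch will not reproduce the reported probability of 0.11):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000

    tele = rng.normal(387.0, 57.0, n)           # cost per episode, teledermatology
    conventional = rng.normal(354.0, 65.0, n)   # cost per episode, conventional care

    diff = tele - conventional
    print(f"mean difference (EUR): {diff.mean():.1f}")
    print(f"P(teledermatology is cost saving): {(diff < 0).mean():.2f}")
    ```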

  15. A transformation approach for collaboration based requirement models

    CERN Document Server

    Harbouche, Ahmed; Mokhtari, Aicha

    2012-01-01

    Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with this transformation process and suggests an approach to derive the behavior of given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagram notation. The suggested approach is summarized in three steps: the definition of the appropriate source meta-model (requirements meta-model), the definition of the target design meta-model, and the definition of the rules that govern the transformation during the derivation process. The derivation process transforms the global system requirements, described as UML activity diagrams (extended with collaborations), into system role behaviors represented as UML finite state machines. The approach is implemented using the Atlas Transformation Language (ATL).

  16. A TRANSFORMATION APPROACH FOR COLLABORATION BASED REQUIREMENT MODELS

    Directory of Open Access Journals (Sweden)

    Ahmed Harbouche

    2012-02-01

    Full Text Available Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with this transformation process and suggests an approach to derive the behavior of given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML activity diagrams (extended with collaborations) to system role behaviors represented as UML finite state machines. The approach is implemented using Atlas Transformation Language (ATL).

  17. Software sensors based on the grey-box modelling approach

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.; Strube, Rune

    1996-01-01

    In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. With respect to the development of software sensors, grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box models may contain terms which can be estimated on-line by use of the models and measurements. In this paper, it is demonstrated that many storage basins in sewer systems can be used as an on-line flow measurement provided that the basin is monitored on-line with a level transmitter and that a grey-box…

  18. CM5: A pre-Swarm magnetic field model based upon the comprehensive modeling approach

    DEFF Research Database (Denmark)

    Sabaka, T.; Olsen, Nils; Tyler, Robert

    2014-01-01

    We have developed a model based upon the very successful Comprehensive Modeling (CM) approach using recent CHAMP, Ørsted, SAC-C and observatory hourly-means data from September 2000 to the end of 2013. This CM, called CM5, was derived from the algorithm that will provide a consistent line of Level...

  19. Teaching EFL Writing: An Approach Based on the Learner's Context Model

    Science.gov (United States)

    Lin, Zheng

    2017-01-01

    This study aims to examine qualitatively a new approach to teaching English as a foreign language (EFL) writing based on the learner's context model. It investigates the context model-based approach in class and identifies key characteristics of the approach delivered through a four-phase teaching and learning cycle. The model collects research…

  20. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Full Text Available Abstract Background In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age
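
    The kind of side-by-side comparison reported above can be mimicked on a toy scale: below, a chain-binomial stochastic SIR stands in for the agent-based model and a deterministic SIR for the aggregated one, and their epidemic peak timings are compared (all parameters are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N, beta, gamma, days, I0 = 10_000, 0.3, 0.1, 300, 10

    # Stochastic chain-binomial SIR (a crude stand-in for an agent-based model).
    S, I = N - I0, I0
    stoch = []
    for _ in range(days):
        new_inf = rng.binomial(S, 1 - np.exp(-beta * I / N))
        new_rec = rng.binomial(I, 1 - np.exp(-gamma))
        S, I = S - new_inf, I + new_inf - new_rec
        stoch.append(I)

    # Deterministic SIR with the same parameterization (aggregate counterpart).
    s, i = (N - I0) / N, I0 / N
    det = []
    for _ in range(days):
        s, i = s - beta * s * i, i + beta * s * i - gamma * i
        det.append(i * N)

    print("peak day, stochastic:", int(np.argmax(stoch)),
          "| peak day, deterministic:", int(np.argmax(det)))
    ```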

  1. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  2. An Agent-Based Approach to Modeling Online Social Influence

    NARCIS (Netherlands)

    Maanen, P.P. van; Vecht, B. van der

    2013-01-01

    The aim of this study is to better understand social influence in online social media. Therefore, we propose a method in which we implement, validate and improve an individual behavior model. The behavior model is based on three fundamental behavioral principles of social influence from the literature.

  3. Crosscumulants Based Approaches for the Structure Identification of Volterra Models

    Institute of Scientific and Technical Information of China (English)

    Houda Mathlouthi; Kamel Abederrahim; Faouzi Msahli; Gerard Favier

    2009-01-01

    In this paper, we address the problem of structure identification of Volterra models. It consists in estimating the model order and the memory length of each kernel. Two methods based on input-output crosscumulants are developed. The first one uses a zero-mean independent and identically distributed Gaussian input, and the second one concerns a symmetric input sequence. Simulations are performed on six models having different orders and kernel memory lengths to demonstrate the advantages of the proposed methods.

  4. An Extended Enterprise Modeling Approach to Enterprise-based Integration

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The extended enterprise paradigm is focused on core competencies. An extended enterprise expands its scope from the boundary of a single enterprise to include additional processes performed by other enterprises. The integration of processes is enterprise-based. This paper proposes a recursive interconnected-enterprise chain model for the extended enterprise, and presents an enterprise-based integration framework for the extended enterprise. The case study is based on a motorcycle group corporation.

  5. Drifting model approach to modeling based on weighted support vector machines

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 宋春林; 邵惠鹤

    2004-01-01

    This paper proposes a novel drifting modeling (DM) method. Briefly, we first employ an improved SVM algorithm named weighted support vector machines (W_SVMs), which is suitable for local learning, and then the DM method using this algorithm is proposed. By applying the proposed modeling method to a Fluidized Catalytic Cracking Unit (FCCU), the simulation results show that the proposed approach is superior to the global modeling method based on standard SVMs.
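
    One common way to realize sample-weighted SVM learning (an illustrative stand-in for W_SVMs, not the paper's algorithm) is to weight training samples by recency so the model tracks slow process drift:

    ```python
    import numpy as np
    from sklearn.svm import SVR

    def drifting_model(X_hist, y_hist, half_life=200):
        """Fit an SVM regressor where recent samples carry more weight.
        X_hist: (n_samples, n_features) process inputs; y_hist: outputs.
        The exponential-forgetting scheme and half_life are assumptions."""
        n = len(y_hist)
        age = np.arange(n)[::-1]              # 0 = newest sample
        weights = 0.5 ** (age / half_life)    # exponential forgetting
        return SVR(kernel="rbf", C=10.0).fit(X_hist, y_hist, sample_weight=weights)
    ```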

  6. A relaxation-based approach to damage modeling

    Science.gov (United States)

    Junker, Philipp; Schwarz, Stephan; Makowski, Jerzy; Hackl, Klaus

    2017-01-01

    Material models, including softening effects due to, for example, damage and localizations, share the problem of ill-posed boundary value problems that yield mesh-dependent finite element results. It is thus necessary to apply regularization techniques that couple local behavior described, for example, by internal variables, at a spatial level. This can take account of the gradient of the internal variable to yield mesh-independent finite element results. In this paper, we present a new approach to damage modeling that does not use common field functions, inclusion of gradients or complex integration techniques: Appropriate modifications of the relaxed (condensed) energy hold the same advantage as other methods, but with much less numerical effort. We start with the theoretical derivation and then discuss the numerical treatment. Finally, we present finite element results that prove empirically how the new approach works.

  7. New Approaches for Channel Prediction Based on Sinusoidal Modeling

    Directory of Open Access Journals (Sweden)

    Ekman Torbjörn

    2007-01-01

    Full Text Available Long-range channel prediction is considered to be one of the most important enabling technologies to future wireless communication systems. The prediction of Rayleigh fading channels is studied in the frame of sinusoidal modeling in this paper. A stochastic sinusoidal model to represent a Rayleigh fading channel is proposed. Three different predictors based on the statistical sinusoidal model are proposed. These methods outperform the standard linear predictor (LP in Monte Carlo simulations, but underperform with real measurement data, probably due to nonstationary model parameters. To mitigate these modeling errors, a joint moving average and sinusoidal (JMAS prediction model and the associated joint least-squares (LS predictor are proposed. It combines the sinusoidal model with an LP to handle unmodeled dynamics in the signal. The joint LS predictor outperforms all the other sinusoidal LMMSE predictors in suburban environments, but still performs slightly worse than the standard LP in urban environments.
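
    The standard linear predictor (LP) used as the baseline above can be sketched in a few lines: coefficients are fitted by least squares on the observed fading history and then applied to the most recent samples (the order and fitting window are arbitrary here):

    ```python
    import numpy as np

    def fit_linear_predictor(h, order=8):
        """Least-squares fit of an order-p linear predictor on a (possibly
        complex) fading history h: h[k+p] ~ sum_j c[j] * h[k+j]."""
        h = np.asarray(h)
        A = np.array([h[k:k + order] for k in range(len(h) - order)])
        y = h[order:]
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coeffs

    def predict_next(h, coeffs):
        """One-step prediction from the most recent `order` samples."""
        return np.dot(np.asarray(h)[-len(coeffs):], coeffs)
    ```

    The paper's sinusoidal-model predictors replace this generic baseline with parameter estimates of the fading sinusoids.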

  8. Modeling of Agile Manufacturing Execution Systems with an Agent-based Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Agile manufacturing execution systems (AMES) are used to help manufacturers optimize shop floor production in an agile way. The modeling of AMES is the key issue in realizing AMES. This paper presents an agent-based approach to AMES modeling. Firstly, the characteristics of AMES and its requirements on modeling are discussed. Secondly, a comparative analysis of modeling methods is carried out, and AMES modeling using an agent-based approach is put forward. The agent-based modeling method not only inherits ...

  9. Problem-Oriented Corporate Knowledge Base Models on the Case-Based Reasoning Approach Basis

    Science.gov (United States)

    Gluhih, I. N.; Akhmadulin, R. K.

    2017-07-01

    One of the most pressing directions for enhancing the efficiency of production processes and enterprise activity management is the creation and use of corporate knowledge bases. The article suggests a concept of problem-oriented corporate knowledge bases (PO CKB), in which knowledge is arranged around possible problem situations and represents a tool for making and implementing decisions in such situations. For knowledge representation in PO CKB, a case-based reasoning approach is proposed. Under this approach, the content of a case as a knowledge base component has been defined; based on the situation tree, a PO CKB knowledge model has been developed, in which knowledge about typical situations as well as specific examples of situations and solutions is represented. A generalized problem-oriented corporate knowledge base structural chart and possible modes of its operation have been suggested. The obtained models allow creating and using corporate knowledge bases to support decision making and implementation, training, staff skill upgrading and analysis of the decisions taken. The universal interpretation of the terms “situation” and “solution” adopted in the work allows using the suggested models to develop problem-oriented corporate knowledge bases in different subject domains. It has been suggested to use the developed models to build corporate knowledge bases for enterprises that operate engineering systems and networks at large production facilities.

  10. Professional Learning: A Fuzzy Logic-Based Modelling Approach

    Science.gov (United States)

    Gravani, M. N.; Hadjileontiadou, S. J.; Nikolaidou, G. N.; Hadjileontiadis, L. J.

    2007-01-01

    Studies have suggested that professional learning is influenced by two key parameters, i.e., climate and planning, and their associated variables (mutual respect, collaboration, mutual trust, supportiveness, openness). In this paper, we applied analysis of the relationships between the proposed quantitative, fuzzy logic-based model and a series of…

  11. Modelling Based Approach for Reconstructing Evidence of VOIP Malicious Attacks

    Directory of Open Access Journals (Sweden)

    Mohammed Ibrahim

    2015-05-01

    Full Text Available Voice over Internet Protocol (VoIP) is a new communication technology that uses the internet protocol in providing phone services. VoIP provides various benefits, such as low monthly fees and cheaper rates for long distance and international calls. However, VoIP is accompanied by novel security threats. Criminals often take advantage of such security threats and commit illicit activities. These activities require digital forensic experts to acquire, analyse, reconstruct and provide digital evidence. Meanwhile, various methodologies and models have been proposed for detecting, analysing and providing digital evidence in VoIP forensics. However, at the time of writing this paper, there is no model formalized for the reconstruction of VoIP malicious attacks. Reconstruction of the attack scenario is an important technique in exposing unknown criminal acts. Hence, this paper strives to address that gap. We propose a model for reconstructing VoIP malicious attacks. To achieve this, a formal logic approach called Secure Temporal Logic of Actions (S-TLA+) was adopted in rebuilding the attack scenario. The expected result of this model is the generation of additional related evidence, whose consistency with the existing evidence can be determined by means of the S-TLA+ model checker.

  14. Physics-based statistical learning approach to mesoscopic model selection

    Science.gov (United States)

    Taverniers, Søren; Haut, Terry S.; Barros, Kipton; Alexander, Francis J.; Lookman, Turab

    2015-11-01

    In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various model complexities is tested on GD "test" data independent of the data used to train the model. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.
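
    The record gives no implementation details of the sGLE machinery. As a generic illustration of the central device—selecting model complexity by cross-validated predictive error rather than training fit—a small scikit-learn sketch on surrogate data:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, size=(200, 1))
        y = np.sin(3 * x[:, 0]) + 0.1 * rng.standard_normal(200)   # surrogate "training" data

        # Score increasingly complex models on held-out folds; keep the most predictive one.
        scores = {}
        for degree in range(1, 10):
            model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
            scores[degree] = cross_val_score(model, x, y, cv=5,
                                             scoring="neg_mean_squared_error").mean()
        best = max(scores, key=scores.get)
        print(f"optimal complexity: degree {best}")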

  15. An Approach to Enforcing Clark-Wilson Model in Role-based Access Control Model

    Institute of Scientific and Technical Information of China (English)

    LIANGBin; SHIWenchang; SUNYufang; SUNBo

    2004-01-01

    Using one security model to enforce another is a prospective solution to multi-policy support. In this paper, an approach to enforcing the Clark-Wilson data integrity model in the Role-based access control (RBAC) model is proposed. An enforcement construction with great feasibility is presented. In this construction, a direct way to enforce the Clark-Wilson model is provided, the corresponding relations among users, transformation procedures, and constrained data items are strengthened, and the concepts of task and subtask are introduced to enhance the support for least privilege. The proposed approach widens the applicability of RBAC. The theoretical foundation for adopting the Clark-Wilson model in an RBAC system at small cost is offered to meet the requirements of multi-policy support and policy flexibility.
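
    The paper's enforcement construction is richer than the record suggests, but its core idea—users may touch constrained data items only through certified transformation procedures bound to their roles—can be sketched in a few lines of Python (all policy tables are hypothetical):

        # Hypothetical policy tables.
        ROLE_OF_USER = {"alice": "teller"}
        TPS_OF_ROLE  = {"teller": {"post_deposit"}}          # transformation procedures per role
        CDIS_OF_TP   = {"post_deposit": {"account_ledger"}}  # constrained data items per certified TP

        def authorized(user: str, tp: str, cdi: str) -> bool:
            """Clark-Wilson access triple (user, TP, CDI) checked through RBAC role assignment."""
            role = ROLE_OF_USER.get(user)
            return (role is not None
                    and tp in TPS_OF_ROLE.get(role, set())
                    and cdi in CDIS_OF_TP.get(tp, set()))

        print(authorized("alice", "post_deposit", "account_ledger"))  # True
        print(authorized("alice", "post_deposit", "audit_log"))       # False: TP not certified for this CDI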

  16. A mechanism-based approach for absorption modeling: the Gastro-Intestinal Transit Time (GITT) model.

    Science.gov (United States)

    Hénin, Emilie; Bergstrand, Martin; Standing, Joseph F; Karlsson, Mats O

    2012-06-01

    Absorption models used in the estimation of pharmacokinetic drug characteristics from plasma concentration data are generally empirical and simple, utilizing no prior information on gastro-intestinal (GI) transit patterns. Our aim was to develop and evaluate an estimation strategy based on a mechanism-based model for drug absorption, which takes into account the tablet movement during GI transit. This work is an extension of a previous model utilizing tablet movement characteristics derived from magnetic marker monitoring (MMM) and pharmacokinetic data. The new approach, which replaces MMM data with a GI transit model, was evaluated in data sets where MMM data were available (felodipine) or not available (diclofenac). Pharmacokinetic profiles in both datasets were well described by the model according to goodness-of-fit plots. Visual predictive checks showed the model to give superior simulation properties compared with a standard empirical approach (first-order absorption rate + lag-time). This model represents a step towards an integrated mechanism-based NLME model, where the use of physiological knowledge and in vitro–in vivo correlation helps fully characterize PK and generate hypotheses for new formulations or specific populations.
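
    The GITT model itself is a nonlinear mixed-effects construction estimated from data; purely to illustrate the structural contrast with the empirical first-order + lag-time alternative, the following sketch integrates a chain of GI transit compartments feeding first-order absorption (all rate constants are made up):

        import numpy as np
        from scipy.integrate import solve_ivp

        ka, ke, ktr, n = 1.2, 0.3, 2.0, 4     # absorption, elimination, transit rates (1/h); n transit compartments

        def transit_chain(t, y):
            """Dose moves through n GI transit compartments before first-order absorption."""
            dy = np.empty_like(y)
            dy[0] = -ktr * y[0]
            for i in range(1, n):
                dy[i] = ktr * (y[i - 1] - y[i])
            dy[n] = ktr * y[n - 1] - ka * y[n]          # absorption compartment
            dy[n + 1] = ka * y[n] - ke * y[n + 1]       # central (plasma) compartment
            return dy

        y0 = np.zeros(n + 2); y0[0] = 100.0             # dose starts in the first transit compartment
        sol = solve_ivp(transit_chain, (0, 24), y0, dense_output=True)
        t = np.linspace(0, 24, 7)
        print(np.round(sol.sol(t)[n + 1], 2))           # plasma amount over time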

  17. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  18. A novel micromechanics based approach in modeling pavement response

    Science.gov (United States)

    Bhattacharya, Arun

    For maintaining a smooth flow of traffic in the nation's highway system, sections of pavement that are damaged need to be serviced frequently. Among the various types of damage, those caused by heavy trucks are a major concern. Based on a detailed and broad literature survey, it is apparent that no analytical model exists which can closely predict dynamic pavement response and progressive damage due to truck loading, even qualitatively. It is such a model that is developed in this work. In order to predict pavement response and damage analytically, a model has to be based on a theory that captures the essential features of the pavement material. The state-of-the-art Microplane Theory, which has never before been applied to pavement, is chosen to model the material behavior in this research. The theory is implemented in a finite element code to predict tri-axial pavement response. The pavement material damage due to traffic loading is also presented qualitatively. Furthermore, using Taguchi Methods, the critical parameters in a pavement design are determined. Finally, the response of pavement to various joint design parameters is evaluated.

  19. A Component-Based Debugging Approach for Detecting Structural Inconsistencies in Declarative Equation Based Models

    Institute of Scientific and Technical Information of China (English)

    Jian-Wan Ding; Li-Ping Chen; Fan-Li Zhou

    2006-01-01

    Object-oriented modeling with declarative equation based languages often inadvertently leads to structural inconsistencies. Component-based debugging is a new structural analysis approach that addresses this problem by analyzing the structure of each component in a model to separately locate faulty components. The analysis procedure is performed recursively based on the depth-first rule. It first generates fictitious equations for a component to establish a debugging environment, and then detects structural defects by using graph-theoretical approaches to analyze the structure of the system of equations resulting from the component. The proposed method can automatically locate components that cause structural inconsistencies and show the user detailed error messages. This information can be a great help in finding and localizing structural inconsistencies, and in some cases pinpoints them immediately.
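
    The paper's procedure is recursive and component-wise; its basic structural check—whether the equations of a component can be matched one-to-one with its unknowns—can be illustrated with a maximum bipartite matching (a sketch assuming the networkx library; the example system is hypothetical):

        import networkx as nx

        # Incidence structure of a flattened component model: equation -> variables it mentions.
        equations = {
            "e1": {"x", "y"},
            "e2": {"x", "y"},
            "e3": {"x", "y"},   # three equations over two unknowns: structurally singular
        }

        G = nx.Graph()
        G.add_nodes_from(equations, bipartite=0)
        for eq, vars_ in equations.items():
            G.add_edges_from((eq, v) for v in vars_)

        matching = nx.bipartite.maximum_matching(G, top_nodes=equations.keys())
        unmatched = set(equations) - set(matching)
        if unmatched:
            print(f"structural inconsistency: equations {sorted(unmatched)} cannot be "
                  f"assigned distinct unknowns")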

  20. MDA-Based 3G Service Creation Approach and Telecom Service Domain Meta-Model

    Institute of Scientific and Technical Information of China (English)

    QIAO Xiu-quan; LI Xiao-feng; LI Yan

    2006-01-01

    This paper presents a model-driven 3G service creation approach based on model driven architecture technology. The focus of the paper is the methodology of designing a telecommunication service-related meta-model and its profile implementation mechanism. This approach enhances the reusability of applications through separation of service logic models from concrete open application programming interface technologies and implementation technologies.

  1. A Unified Approach to Model-Based Planning and Execution

    Science.gov (United States)

    Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)

    2000-01-01

    Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe an approach to planning and execution that we believe provides a unified representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implications of the structure of the architecture, mainly in the area of reactivity and interaction between reactive and deliberative decision making. We conclude with related work and current status.

  2. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    In recent years process intensification (PI) has attracted much interest as a potential means of process improvement to meet the demands, such as, for sustainable production. A variety of intensified equipment are being developed that potentially creates options to meet these demands...... are preserved for focused validation of only the promising candidates in the second-stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is...... that would be obtained. Here, established procedures for computer aided molecular design is adopted since combination of phenomena to form unit operations with desired objectives is, in principle, similar to combinng atoms to form molecules with desired properties. The concept of the phenomena...

  3. Modelling of robotic work cells using agent-based approach

    Science.gov (United States)

    Sękala, A.; Banaś, W.; Gwiazda, A.; Monica, Z.; Kost, G.; Hryniewicz, P.

    2016-08-01

    In modern manufacturing systems the requirements, regarding both the scope and the characteristics of technical procedures, change dynamically. As a result, the organization of the production system is unable to keep up with changes in market demand. Accordingly, there is a need for new design methods characterized, on the one hand, by high efficiency and, on the other, by an adequate quality of the generated organizational solutions. One of the tools that could be used for this purpose is the concept of agent systems. These systems are tools of artificial intelligence. They allow proper domains of procedures and knowledge to be assigned to agents, so that the agents represent components of a real system within a self-organizing agent environment. An agent-based system for modelling a robotic work cell should be designed taking into consideration the many constraints associated with the characteristics of this production unit. It is possible to distinguish groups of structural components that constitute such a system, which confirms the structural complexity of a work cell as a specific production system. It is therefore necessary to develop agents depicting various aspects of the work cell structure. The main groups of agents used to model a robotic work cell should at least include the following representatives: machine tool agents, auxiliary equipment agents, robot agents, transport equipment agents, organizational agents, as well as data and knowledge base agents. In this way it is possible to create the holarchy of the agent-based system.

  4. Analysis of Massive Emigration from Poland: The Model-Based Clustering Approach

    Science.gov (United States)

    Witek, Ewa

    The model-based approach assumes that data is generated by a finite mixture of probability distributions such as multivariate normal distributions. In finite mixture models, each component probability distribution corresponds to a cluster. The problem of determining the number of clusters and choosing an appropriate clustering method becomes the problem of statistical model choice. Hence, the model-based approach provides a key advantage over heuristic clustering algorithms, because it selects both the correct model and the number of clusters.
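
    As a concrete illustration of this record's point—cluster analysis recast as statistical model choice—the following sketch fits Gaussian mixtures of increasing order and selects the number of clusters by BIC (scikit-learn, surrogate data):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        # Surrogate data: two overlapping multivariate normal clusters.
        X = np.vstack([rng.normal(0, 1.0, (150, 2)), rng.normal(4, 1.5, (100, 2))])

        # Treat "how many clusters?" as statistical model choice: fit mixtures of
        # increasing order and keep the one with the lowest BIC.
        models = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 7)]
        best = min(models, key=lambda m: m.bic(X))
        print("selected number of clusters:", best.n_components)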

  5. Glass viscosity calculation based on a global statistical modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    Fluegel, Alex

    2007-02-01

    A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published "High temperature glass melt property database for process modeling" by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights into the mixed-alkali effect are provided.

  6. A Model-Based Approach to Constructing Music Similarity Functions

    Directory of Open Access Journals (Sweden)

    Lamere Paul

    2007-01-01

    Full Text Available Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  7. Embedded Control System Design: A Model Based Approach

    CERN Document Server

    Forrai, Alexandru

    2013-01-01

    Control system design is a challenging task for practicing engineers. It requires knowledge of different engineering fields, a good understanding of technical specifications and good communication skills. The current book introduces the reader to practical control system design, bridging the gap between theory and practice. The control design techniques presented in the book are all model based, considering the needs and possibilities of practicing engineers. Classical control design techniques are reviewed, and methods are presented for verifying the robustness of the design. It is shown how the designed control algorithm can be implemented in real-time and tested, fulfilling different safety requirements. Good design practices and the systematic software development process are emphasized in the book according to the generic standard IEC61508. The book is mainly addressed to practicing control and embedded software engineers - working in research and development – as well as graduate students who are face...

  8. Hybrid Modelling Approach to Prairie hydrology: Fusing Data-driven and Process-based Hydrological Models

    Science.gov (United States)

    Mekonnen, B.; Nazemi, A.; Elshorbagy, A.; Mazurek, K.; Putz, G.

    2012-04-01

    Modeling the hydrological response in prairie regions, characterized by flat and undulating terrain, and thus, large non-contributing areas, is a known challenge. The hydrological response (runoff) is the combination of the traditional runoff from the hydrologically contributing area and the occasional overflow from the non-contributing area. This study provides a unique opportunity to analyze the issue of fusing the Soil and Water Assessment Tool (SWAT) and Artificial Neural Networks (ANNs) in a hybrid structure to model the hydrological response in prairie regions. A hybrid SWAT-ANN model is proposed, where the SWAT component and the ANN module deal with the effective (contributing) area and the non-contributing area, respectively. The hybrid model is applied to the case study of the Moose Jaw watershed, located in southern Saskatchewan, Canada. As an initial exploration, a comparison between the ANN and SWAT models is established by addressing daily runoff (streamflow) prediction accuracy using multiple error measures. This is done to identify the merits and drawbacks of each modeling approach. It was found that the SWAT model performs better during low flow periods but with degraded efficiency during periods of high flows. The case is different for the ANN model, as ANNs exhibit improved simulation during high flow periods but biased estimates during low flow periods. The modelling results show that the new hybrid SWAT-ANN model is capable of exploiting the strengths of both SWAT and ANN models in an integrated framework. The new hybrid SWAT-ANN model simulates daily runoff quite satisfactorily, with NSE measures of 0.80 and 0.83 during the calibration and validation periods, respectively. Furthermore, an experimental assessment was performed to identify the effects of the ANN training method on the performance of the hybrid model as well as on parametric identifiability. Overall, the results obtained in this study suggest that the fusion
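
    The coupling details of SWAT-ANN are not given in the record. The general pattern—a process-based model for the contributing area plus a data-driven component learning what the process model misses—can be sketched as follows (the "process model" and data are stand-ins, with scikit-learn's MLPRegressor standing in for the ANN):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)
        rain = rng.gamma(2.0, 5.0, 500)                 # daily rainfall forcing

        def process_model(p):
            """Stand-in for the process-based (SWAT-like) simulation of the contributing area."""
            return 0.4 * p

        # Synthetic "observed" runoff with a nonlinear component the process model misses
        # (e.g. occasional spill from non-contributing areas).
        observed = 0.4 * rain + 3.0 * np.tanh(0.1 * (rain - 20)) + rng.normal(0, 0.5, rain.size)

        # The ANN learns only the residual part of the response.
        residual = observed - process_model(rain)
        ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
        ann.fit(rain.reshape(-1, 1), residual)

        hybrid = process_model(rain) + ann.predict(rain.reshape(-1, 1))
        nse = 1 - np.sum((observed - hybrid) ** 2) / np.sum((observed - observed.mean()) ** 2)
        print(f"NSE of hybrid model: {nse:.2f}")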

  9. Raster-Based Approach to Solar Pressure Modeling

    Science.gov (United States)

    Wright, Theodore W. II

    2013-01-01

    The rendered image of the spacecraft shown on the computer screen is composed of up to millions of pixels. Each of those pixels is associated with a small illuminated area of the spacecraft. For each pixel, it is possible to compute its position, angle (surface normal) from the view direction, and the spacecraft material (and therefore, optical coefficients) associated with that area. With this information, the area associated with each pixel can be modeled as a simple flat plate for calculating solar pressure. The vector sum of these individual flat plate models is a high-fidelity approximation of the solar pressure forces and torques on the whole vehicle. In addition to using optical coefficients associated with each spacecraft material to calculate solar pressure, a power generation coefficient is added for computing solar array power generation from the sum of the illuminated areas. Similarly, other area-based calculations, such as free molecular flow drag, are also enabled. Because the model rendering is separated from other calculations, it is relatively easy to add a new model to explore a new vehicle or mission configuration. Adding a new model is performed by adding OpenGL code, but a future version might read a mesh file exported from a computer-aided design (CAD) system to enable very rapid turnaround for new designs.
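
    No code accompanies the record; the per-pixel flat-plate accumulation it describes reduces to something like the following sketch, using a common simplified absorbed/specular/diffuse decomposition (areas, normals and optical coefficients are illustrative):

        import numpy as np

        SOLAR_FLUX = 1361.0      # W/m^2 at 1 AU
        C = 299792458.0          # speed of light, m/s

        def flat_plate_force(area, normal, sun_dir, rho_s, rho_d):
            """Solar pressure force on one flat plate (one rendered pixel).
            sun_dir: unit vector from the surface toward the sun; normal: outward unit normal;
            rho_s, rho_d: specular and diffuse reflectivity coefficients."""
            cos_t = float(np.dot(normal, sun_dir))
            if cos_t <= 0.0:
                return np.zeros(3)          # pixel faces away from the sun: not illuminated
            P = SOLAR_FLUX / C              # radiation pressure, N/m^2
            return -P * area * cos_t * ((1.0 - rho_s) * sun_dir
                                        + (2.0 * rho_s * cos_t + 2.0 / 3.0 * rho_d) * normal)

        # Each rendered pixel contributes (area m^2, surface normal, rho_s, rho_d).
        sun_dir = np.array([0.0, 0.0, 1.0])
        pixels = [
            (1e-4, np.array([0.0, 0.0, 1.0]), 0.3, 0.1),
            (1e-4, np.array([0.0, 0.7071, 0.7071]), 0.3, 0.1),
            (1e-4, np.array([0.0, 0.0, -1.0]), 0.3, 0.1),   # back-facing: contributes nothing
        ]
        total = sum(flat_plate_force(a, n, sun_dir, s, d) for a, n, s, d in pixels)
        print(total)    # net solar pressure force vector, N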

  10. Model-Based Approaches to Active Perception and Control

    Directory of Open Access Journals (Sweden)

    Giovanni Pezzulo

    2017-06-01

    Full Text Available There is an on-going debate in cognitive (neuro)science and philosophy between classical cognitive theory and embodied, embedded, extended, and enactive (“4-Es”) views of cognition—a family of theories that emphasize the role of the body in cognition and the importance of brain-body-environment interaction over and above internal representation. This debate touches foundational issues, such as whether the brain internally represents the external environment, and “infers” or “computes” something. Here we focus on two 4-Es-based criticisms of traditional cognitive theories—of the notions of passive perception and of serial information processing—and discuss alternative ways to address them, by appealing to frameworks that use, or do not use, notions of internal modelling and inference. Our analysis illustrates that: an explicitly inferential framework can capture some key aspects of embodied and enactive theories of cognition; some claims of computational and dynamical theories can be reconciled rather than seen as alternative explanations of cognitive phenomena; and some aspects of cognitive processing (e.g., detached cognitive operations, such as planning and imagination) that are sometimes puzzling to explain from enactive and non-representational perspectives can, instead, be captured nicely from the perspective that internal generative models and predictive processing mediate adaptive control loops.

  11. A model based wireless monitoring approach for traffic noise

    NARCIS (Netherlands)

    Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der

    2011-01-01

    In order to have a good understanding of the environmental acoustic effects of traffic it is important to perform long term monitoring within large areas. With traditional monitoring approaches this is quite unfeasible and the costs are relatively high. Within TNO a new wireless monitoring approach

  12. A hybrid design-based and model-based sampling approach to estimate the temporal trend of spatial means

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    2012-01-01

    This paper launches a hybrid sampling approach, entailing a design-based approach in space followed by a model-based approach in time, for estimating temporal trends of spatial means or totals. The underlying space–time process that generated the soil data is only partly described, viz. by a linear

  13. Battery Performance Modelling and Simulation: a Neural Network Based Approach

    Science.gov (United States)

    Ottavianelli, Giuseppe; Donati, Alessandro

    2002-01-01

    This project has developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network (ANN) tool able to model, simulate and predict the Cluster II battery system's performance degradation. (The Cluster II mission consists of four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the sun and earth by passing in and out of our planet's magnetic field.) This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short and medium term mission planning in order to improve and maximise the batteries' lifetime, determining the future best charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five Silver-Cadmium batteries onboard Tango, the fourth Cluster II satellite, but time constraints have so far allowed an assessment only of the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise historical data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, and this is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections given new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg-Marquardt algorithm.

  15. Physiology-based modelling approaches to characterize fish habitat suitability

    NARCIS (Netherlands)

    Teal, L.R.; Marras, Stefano; Peck, M.A.; Domenici, Paolo

    2015-01-01

    Models are useful tools for predicting the impact of global change on species distribution and abundance. As ectotherms, fish are being challenged to adapt or track changes in their environment, either in time through a phenological shift or in space by a biogeographic shift. Past modelling efforts

  16. Behavior-based network management: a unique model-based approach to implementing cyber superiority

    Science.gov (United States)

    Seng, Jocelyn M.

    2016-05-01

    Behavior-Based Network Management (BBNM) is a technological and strategic approach to mastering the identification and assessment of network behavior, whether human-driven or machine-generated. Recognizing that all five U.S. Air Force (USAF) mission areas rely on the cyber domain to support, enhance and execute their tasks, BBNM is designed to elevate awareness and improve the ability to better understand the degree of reliance placed upon a digital capability and the operational risk. Thus, the objective of BBNM is to provide a holistic view of the digital battle space to better assess the effects of security, monitoring, provisioning, utilization management, allocation to support mission sustainment and change control. Leveraging advances in conceptual modeling made possible by a novel advancement in software design and implementation known as Vector Relational Data Modeling (VRDM™), the BBNM approach entails creating a network simulation in which meaning can be inferred and used to manage network behavior according to policy, such as quickly detecting and countering malicious behavior. Initial research configurations have yielded executable BBNM models as combinations of conceptualized behavior within a network management simulation that includes only concepts of threats and definitions of "good" behavior. A proof of concept assessment called "Lab Rat" was designed to demonstrate the simplicity of network modeling and the ability to perform adaptation. The model was tested on real world threat data and demonstrated adaptive and inferential learning behavior. Preliminary results indicate this is a viable approach towards achieving cyber superiority in today's volatile, uncertain, complex and ambiguous (VUCA) environment.

  17. Excellent approach to modeling urban expansion by fuzzy cellular automata: agent base model

    Science.gov (United States)

    Khajavigodellou, Yousef; Alesheikh, Ali A.; Mohammed, Abdulrazak A. S.; Chapi, Kamran

    2014-09-01

    Recently, the interaction between humans and their environment has become one of the important challenges in the world. Land use/cover change (LUCC) is a complex process that involves actors and factors at different social and spatial levels. The complexity and dynamics of urban systems make the practical application of urban modeling very difficult. With increased computational power and the greater availability of spatial data, micro-simulation methods such as agent-based and cellular automata simulation have been developed by geographers, planners, and scholars, and have shown great potential for representing and simulating the complexity of the dynamic processes involved in urban growth and land use change. This paper presents a fuzzy cellular automata model in a geospatial information system and remote sensing context to simulate and predict urban expansion patterns. These FCA-based dynamic spatial urban models provide an improved ability to forecast and assess future urban growth and to create planning scenarios, allowing us to explore the potential impacts of simulations that correspond to urban planning and management policies. Semantic or linguistic knowledge about land use change is expressed as fuzzy rules, on the basis of which fuzzy inference is applied to determine the urban development potential for each pixel. The model integrates an ABM (agent-based model) and FCA (fuzzy cellular automata) to investigate a complex decision-making process and future urban dynamics. Based on this model, rapid development and green land protection under the influence of the behaviors and decision modes of regional authority agents, real estate developer agents, resident agents and non-resident agents and their interactions have been applied to predict the future development patterns of the Erbil metropolitan region.
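
    Only the outline of the method appears in the record. Its core step—fuzzy rules turning neighbourhood density and land suitability into a development potential per cell, iterated as a cellular automaton—might be sketched as follows (triangular memberships; all rules and parameters are hypothetical):

        import numpy as np

        def tri(x, a, b, c):
            """Triangular fuzzy membership function."""
            return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

        def development_potential(density, suitability):
            """Two toy rules: IF density high AND suitability high THEN potential high;
            IF density low OR suitability low THEN potential low. Defuzzify by weighted average."""
            high = np.minimum(tri(density, 0.4, 1.0, 1.6), tri(suitability, 0.4, 1.0, 1.6))
            low = np.maximum(tri(density, -0.6, 0.0, 0.6), tri(suitability, -0.6, 0.0, 0.6))
            return (high * 1.0 + low * 0.0) / (high + low + 1e-9)

        rng = np.random.default_rng(3)
        urban = rng.random((50, 50)) < 0.1              # initial urban cells
        suit = rng.random((50, 50))                     # e.g. slope/roads-based suitability layer

        for _ in range(10):                             # CA iterations
            u = urban.astype(float)
            density = sum(np.roll(np.roll(u, i, 0), j, 1)
                          for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
            potential = development_potential(density, suit)
            urban = urban | (rng.random(urban.shape) < 0.2 * potential)

        print("urban fraction after 10 steps:", urban.mean().round(3))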

  18. Monitoring the Ocean Acoustic Environment: A Model-Based Detection Approach

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J.V.; Sullivan, E.J.

    2000-03-13

    A model-based approach is applied in the development of a processor designed to passively monitor an ocean acoustic environment along with its associated variations. The technique employs an adaptive, model-based processor embedded in a sequential likelihood detection scheme. The trade-off between state-based and innovations-based monitor designs is discussed, conceptually. The underlying theory for the innovations-based design is briefly developed and applied to a simulated data set.
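
    The report's processor is adaptive and ocean-specific; the generic innovations-based idea—monitor whether a model's prediction errors stay zero-mean and accumulate a sequential log-likelihood score—can be illustrated as follows (a CUSUM-style variant on synthetic innovations; all parameters are made up):

        import numpy as np

        rng = np.random.default_rng(4)

        # Innovations from a (hypothetical) model-based filter tracking the acoustic field:
        # zero-mean white noise while the model matches the environment, biased afterwards.
        sigma, change_at, bias = 1.0, 300, 0.8
        innov = rng.normal(0, sigma, 600)
        innov[change_at:] += bias

        # Sequential log-likelihood ratio: H0 innovations ~ N(0, sigma^2), H1 ~ N(bias, sigma^2).
        llr = (bias / sigma**2) * (innov - bias / 2.0)
        score, detect = 0.0, None
        for k, l in enumerate(llr):
            score = max(0.0, score + l)      # CUSUM-style reset keeps the monitor adaptive
            if score > 10.0:                 # threshold sets the false-alarm rate
                detect = k
                break

        print(f"environmental change flagged at sample {detect} (true change at {change_at})")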

  19. An information theory-based approach to modeling the information processing of NPP operators

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Hyun; Seong, Poong Hyun [Korea Advanced Institute, Taejon (Korea, Republic of)

    2002-08-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task. The focus is on i) developing a model of the information processing of NPP operators and ii) quantifying the model. To resolve the problems of previous approaches based on information theory, i.e. the problems of single-channel approaches, we first develop an information processing model with multiple stages, which contains information flows. The uncertainty of the information is then quantified using Conant's model, a kind of information theory.

  20. A crop model-based approach for sunflower yields

    Directory of Open Access Journals (Sweden)

    João Guilherme Dal Belo Leite

    2014-10-01

    Full Text Available Pushed by the Brazilian biodiesel policy, sunflower (Helianthus annuus L.) production is becoming increasingly regarded as an option to boost farmers' income, particularly under semi-arid conditions. Biodiesel-related opportunities increase the demand for decision-making information at different levels, which could be met by simulation models. This study aimed to evaluate the performance of the crop model OILCROP-SUN to simulate sunflower development and growth under Brazilian conditions and to explore sunflower water- and nitrogen-limited, water-limited and potential yield and yield variability over an array of sowing dates in the northern region of the state of Minas Gerais, Brazil. For model calibration, an experiment was conducted in which two sunflower genotypes (H358 and E122) were cultivated in a clayey soil. Growth components (leaf area index, above-ground biomass, grain yield) and development stages (crop phenology) were measured. A database composed of 27 sunflower experiments from five Brazilian regions was used for model evaluation. The spatial yield distribution of sunflower was mapped using ordinary kriging in ArcGIS. The model simulated sunflower grain productivity satisfactorily (Root Mean Square Error ≈ 13 %). Simulated yields were relatively high (1,750 to 4,250 kg ha-1) and the sowing window was fairly wide (Oct to Feb) for northwestern locations, where sunflower could be cultivated as a second crop (double cropping) at the end of the rainy season. The hybrid H358 had higher yields for all simulated sowing dates, growth conditions and selected locations.

  1. Segmentation and Dimension Reduction: Exploratory and Model-Based Approaches

    NARCIS (Netherlands)

    J.M. van Rosmalen (Joost)

    2009-01-01

    textabstractRepresenting the information in a data set in a concise way is an important part of data analysis. A variety of multivariate statistical techniques have been developed for this purpose, such as k-means clustering and principal components analysis. These techniques are often based on the

  2. Modeling pedestrian's conformity violation behavior: a complex network based approach.

    Science.gov (United States)

    Zhou, Zhuping; Hu, Qizhou; Wang, Wei

    2014-01-01

    Pedestrian injuries and fatalities present a problem all over the world. Pedestrian conformity violation behaviors, which lead to many pedestrian crashes, are a common phenomenon at signalized intersections in China. The concepts and metrics of complex networks are applied to analyze the structural characteristics and evolution rules of the pedestrian network of conformity violation crossings. First, a network of pedestrians crossing the street is established, and the network's degree distributions are analyzed. Then, using the basic idea of the SI model, a spreading model of pedestrian illegal crossing behavior is proposed. Finally, through simulation analysis, pedestrian illegal crossing behavior trends are obtained for different network structures and different spreading rates. Some conclusions are drawn: as the waiting time increases, more pedestrians will join in the violation crossing once a pedestrian first crosses on red. And pedestrian conformity violation behavior will increase as the spreading rate increases.
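
    The abstract sketches an SI-type diffusion of violation behaviour through a pedestrian contact network; a minimal version of such a spreading simulation (networkx, with a hypothetical network topology and spreading rates) could read:

        import random
        import networkx as nx

        random.seed(5)
        G = nx.barabasi_albert_graph(200, 2)    # scale-free stand-in for the pedestrian network

        def si_spread(G, beta, steps, seed=0):
            """SI dynamics: a 'violator' convinces each waiting neighbour with probability beta."""
            infected = {seed}                   # the first pedestrian to cross on red
            history = [len(infected)]
            for _ in range(steps):
                new = {v for u in infected for v in G[u]
                       if v not in infected and random.random() < beta}
                infected |= new
                history.append(len(infected))
            return history

        for beta in (0.05, 0.2):
            h = si_spread(G, beta, steps=15)
            print(f"beta={beta}: violators over time {h}")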

  3. Model-Based approaches to Human-Automation Systems Design

    DEFF Research Database (Denmark)

    Jamieson, Greg A.; Andersson, Jonas; Bisantz, Ann

    2012-01-01

    Human-automation interaction in complex systems is common, yet design for this interaction is often conducted without explicit consideration of the role of the human operator. Fortunately, there are a number of modeling frameworks proposed for supporting this design activity. However, the frameworks are often adapted from other purposes, usually applied to a limited range of problems, sometimes not fully described in the open literature, and rarely critically reviewed in a manner acceptable to proponents and critics alike. The present paper introduces a panel session wherein these proponents...

  4. Logistics-based Competition : A Business Model Approach

    OpenAIRE

    Kihlén, Tobias

    2007-01-01

    Logistics is increasingly becoming recognised as a source of competitive advantage, both in practice and in academia. The possible strategic impact of logistics makes it important to gain deeper insight into the role of logistics in the strategy of the firm. There is however a considerable research gap between the quite abstract strategy theory and logistics research. A possible tool to use in bridging this gap is identified in business model research. Therefore, the purpose of this dissertat...

  5. Bank networks and firm credit: an agent based model approach

    OpenAIRE

    Teixeira, Henrique Oliveira

    2016-01-01

    Starting from the idea that economic systems fall within complexity theory, where many agents interact with each other without central control and these interactions are able to change the future behavior of the agents and the entire system, similarly to a chaotic system, we extend the model of Russo et al. (2014) to carry out three experiments focusing on the interaction between banks and firms in an artificial economy. The first experiment relates to Relationship Banking, wher...

  6. Bulk cavitation extent modeling: An energy-based approach

    Science.gov (United States)

    Esplin, J. James

    Bulk cavitation is a phenomenon that occurs when a negative-pressure or tension wave causes a liquid to rupture, or cavitate, over space. It is a process which causes resident microbubbles to grow to many times their original size, forming a bubble cloud. Such bubble clouds are observed in shallow underwater explosions, where negative-pressure waves are formed after shock waves reflect off the water surface; they are also observed in shock wave lithotripsy, shock wave histotripsy, ultrasonic cleaning, and other applications. Models have been developed for predicting the size and shape of such bulk cavitation regions. This work introduces a model that accounts for the energy "lost" to bulk cavitation, which in turn influences the cavitation extent, depending on the rate at which the passing negative-pressure wave dissipates. In-laboratory underwater experiments utilizing a spark source for high-amplitude pressure pulse generation, hydrophones and high-speed videography validate the energy transfer from tension wave to bubble cloud formation. These experiments are supplemented by computational fluid dynamics simulations. A cavitation absorption coefficient is introduced and parameterized for accurate prediction of cloud extent.

  7. Oscillator-based assistance of cyclical movements: model-based and model-free approaches

    NARCIS (Netherlands)

    Ronsse, Renaud; Lenzi, Tommaso; Vitiello, Nicola; Koopman, Bram; van Asseldonk, Edwin H.F.; de Rossi, Stefano Marco Maria; van den Kieboom, Jesse; van der Kooij, Herman; Carozza, Maria Chiara; IJspeert, Auke Jan

    2011-01-01

    In this article, we propose a new method for providing assistance during cyclical movements. This method is trajectory-free, in the sense that it provides user assistance irrespective of the performed movement, and requires no other sensing than the assisting robot’s own encoders. The approach is

  8. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in the building sector is one of the key issues from an environmental point of view, as it is in the industrial, transportation and residential sectors. HVAC (Heating, Ventilating and Air Conditioning) systems account for half of the total energy consumption in a building. In order to realize energy conservation of an HVAC system, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach with physical and Just-in-Time (JIT) models for building thermal load prediction. The proposed method has features and benefits such as: (1) it is applicable to cases in which past operation data for load prediction model learning are scarce; (2) it has a self-checking function, which constantly supervises whether the data-driven load prediction and the physics-based one are consistent, so it can detect when something is wrong in the load prediction procedure; (3) it has the ability to adjust the load prediction in real time against sudden changes of model parameters and environmental conditions. The proposed method is evaluated with real operation data of an existing building, and the improvement of load prediction performance is illustrated.

  9. Towards a CPN-Based Modelling Approach for Reconciling Verification and Implementation of Protocol Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2013-01-01

    Formal modelling of protocols is often aimed at one specific purpose such as verification or automatically generating an implementation. This leads to models that are useful for one purpose, but not for others. Being able to derive models for verification and implementation from a single model...... Our approach has been developed in the context of the Coloured Petri Nets (CPNs) modelling language. We illustrate our approach by presenting a descriptive specification model of the Websocket protocol which is currently under development by the Internet Engineering Task Force (IETF), and we show......

  10. Transferring Multi-Scale Approaches from 3D City Modeling to IFC-Based Tunnel Modeling

    Science.gov (United States)

    Borrmann, A.; Kolbe, T. H.; Donaubauer, A.; Steuer, H.; Jubierre, J. R.

    2013-09-01

    A multi-scale representation of the built environment is required to provide information with the adequate level of detail (LoD) for different use cases and objectives. This applies not only to the visualization of city and building models, but in particular to their use in the context of planning and analysis tasks. While in the field of Geographic Information Systems, the handling of multi-scale representations is well established and understood, no formal approaches for incorporating multi-scale methods exist in the field of Building Information Modeling (BIM) so far. However, these concepts are much needed to better support highly dynamic planning processes that make use of very rough information about the facility under design in the early stages and provide increasingly detailed and fine-grained information in later stages. To meet these demands, this paper presents a comprehensive concept for incorporating multi-scale representations with infrastructural building information models, with a particular focus on the representation of shield tunnels. Based on a detailed analysis of the data modeling methods used in CityGML for capturing multiscale representations and the requirements present in the context of infrastructure planning projects, we discuss potential extensions to the BIM data model Industry Foundation Classes (IFC). Particular emphasis is put on providing means for preserving the consistency of the representation across the different Levels-of-Detail (LoD). To this end we make use of a procedural geometry description which makes it possible to define explicit dependencies between geometric entities on different LoDs. The modification of an object on a coarse level consequently results in an automated update of all dependent objects on the finer levels. Finally we discuss the transformation of the IFC-based multi-scale tunnel model into a CityGML compliant tunnel representation.

  11. A model-based systems approach to pharmaceutical product-process design and analysis

    DEFF Research Database (Denmark)

    Gernaey, Krist; Gani, Rafiqul

    2010-01-01

    This is a perspective paper highlighting the need for systematic model-based design and analysis in pharmaceutical product-process development. A model-based framework is presented and the role, development and use of models of various types are discussed together with the structure of the models...... for the product and the process. The need for a systematic modelling framework is highlighted together with modelling issues related to model identification, adaptation and extension. In the area of product design and analysis, predictive models are needed with a wide application range. In the area of process...... synthesis and design, the use of generic process models from which specific process models can be generated, is highlighted. The use of a multi-scale modelling approach to extend the application range of the property models is highlighted as well. Examples of different types of process models, model...

  12. A rule-based approach to model checking of UML state machines

    Science.gov (United States)

    Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz

    2016-12-01

    In the paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases the assurance that the implemented system meets the user-defined requirements.

  13. Selection bias in species distribution models: An econometric approach on forest trees based on structural modeling

    Science.gov (United States)

    Martin-StPaul, N. K.; Ay, J. S.; Guillemot, J.; Doyen, L.; Leadley, P.

    2014-12-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global changes on species. In human dominated ecosystems the presence of a given species is the result of both its ecological suitability and human footprint on nature such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forest trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the outputs of the SSDM with outputs of a classical SDM (i.e. Biomod ensemble modelling) in terms of bioclimatic response curves and potential distributions under current climate and climate change scenarios. The shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs, with contrasting patterns according to species and spatial resolutions. The magnitude and directions of these differences depended on the correlations between the errors from both equations and were highest at higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong mis-estimation of the modelled actual and future probability of presence. Beyond this selection bias, the SSDM we propose represents

  14. Model Based System Engineering Approach of a Lightweight Embedded TCP/IP

    Directory of Open Access Journals (Sweden)

    M. Z. Rashed

    2011-04-01

    Full Text Available The use of embedded software is growing very rapidly. Accessing the internet is a necessary service with a large range of applications in many fields. The Internet is based on TCP/IP, which is a very important stack. Although TCP/IP is very important, there is no software engineering model describing it. The common method of modeling and describing TCP/IP is RFCs, which are not sufficient for software engineers and developers. Therefore there is a need for a software engineering approach to help engineers and developers customize their own web-based applications for embedded systems. This research presents a model-based system engineering approach to a lightweight TCP/IP. The model contains the necessary phases for developing a lightweight TCP/IP for embedded systems. The proposed model is based on SysML as a model-based system engineering language.

  15. On-line and Model-based Approaches to the Visual Control of Action

    OpenAIRE

    Zhao, Huaiyong; Warren, William H.

    2014-01-01

    Two general approaches to the visual control of action have emerged in the last few decades, known as the on-line and model-based approaches. The key difference between them is whether action is controlled by current visual information or on the basis of an internal world model. In this paper, we evaluate three hypotheses: strong on-line control, strong model-based control, and a hybrid solution that combines on-line control with weak off-line strategies. We review experimental research on the co...

  16. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and related objects such as buildings, trees, vegetation, and some man-made features belonging to an urban area. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: in the first approach, researchers use sketch-based modeling; the second is procedural grammar-based modeling; and the third approach is close range photogrammetry-based modeling. A literature study shows that, to date, there is no complete solution available to create a complete 3D city model from images, and these image-based methods have limitations. This paper gives a new approach towards image-based virtual 3D city modeling using close range photogrammetry. The approach is divided into three sections: first, the data acquisition process; second, 3D data processing; and third, the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required set of suitable frames was selected for 3D processing. In the second section, a 3D model of the area is created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model is exported for adding and merging with other pieces of the larger area. Scaling and alignment of the 3D model were carried out. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model can be transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost-effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the department of civil engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city. Aerial photography is restricted in many countries

  17. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems

    Directory of Open Access Journals (Sweden)

    Lenardo C. Silva

    2015-10-01

    Full Text Available Medical Cyber-Physical Systems (MCPS are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.

  19. A model-based combinatorial optimisation approach for energy-efficient processing of microalgae

    NARCIS (Netherlands)

    Slegers, P.M.; Koetzier, B.J.; Fasaei, F.; Wijffels, R.H.; Straten, van G.; Boxtel, van A.J.B.

    2014-01-01

    The analyses of algae biorefinery performance are commonly based on fixed performance data for each processing step. In this work, we demonstrate a model-based combinatorial approach to derive the design-specific upstream energy consumption and biodiesel yield in the production of biodiesel from microalgae.

  1. Consequence Based Design. An approach for integrating computational collaborative models (Integrated Dynamic Models) in the building design phase

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer

    ...affect the design process and collaboration between building designers and simulationists. Within the limits of applying the approach of Consequence based design to five case studies, followed by documentation based on interviews, surveys and project-related documentation derived from internal reports... that secures validity and quality assurance with a simulationist while sustaining autonomous control of building design with the building designer. Consequence based design is defined by the specific use of integrated dynamic models. These models include the parametric capabilities of a visual programming tool... relies on various advancements in the area of integrated dynamic models. It also relies on the application and test of the approach in practice to evaluate Consequence based design and the use of integrated dynamic models. As a result, the Consequence based design approach has been applied in five...

  2. An Approach to Improve the Representation of the User Model in the Web-Based Systems

    Directory of Open Access Journals (Sweden)

    Yasser A. Nada

    2011-12-01

    Full Text Available A major shortcoming of content-based approaches lies in the representation of the user model. Content-based approaches often employ term vectors to represent each user's interest. In doing so, they ignore the semantic relations between terms of the vector space model, in which indexed terms are not orthogonal and often have semantic relatedness to one another. In this paper, we improve the representation of the user model in content-based approaches by performing these steps. First is domain concept filtering, in which concepts and items of interest are compared to the domain ontology to check which items are relevant to our domain, using ontology-based semantic similarity. Second is incorporating semantic content into the term vectors. We use word definitions and relations provided by WordNet to perform word sense disambiguation and employ domain-specific concepts as category labels for the semantically enhanced user models. The implicit information pertaining to user behavior is extracted from clickstream data or web usage sessions captured within the web server logs. To update the user model, our approach also analyses the user's history of query keywords: for a given keyword, we extract the words that have semantic relationships with it and add them to the user interest model as nodes, according to the semantic relationships in WordNet.
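
    The abstract does not give implementation details, but the WordNet enrichment step it describes can be sketched as follows. This is a minimal illustration (not the authors' code) using the NLTK WordNet interface; the weighting of related terms is a hypothetical heuristic.

        # Sketch: enriching a term-vector user model with WordNet relations.
        # Requires: pip install nltk; then nltk.download('wordnet')
        from nltk.corpus import wordnet as wn

        def related_terms(term, max_related=5):
            """Collect synonyms and hypernym lemmas for a keyword."""
            related = set()
            for synset in wn.synsets(term):
                related.update(l.name() for l in synset.lemmas())   # synonyms
                for hyper in synset.hypernyms():                    # broader concepts
                    related.update(l.name() for l in hyper.lemmas())
            related.discard(term)
            return sorted(related)[:max_related]

        def update_user_model(user_model, query_keyword):
            """Add the keyword and its related terms as weighted nodes."""
            user_model[query_keyword] = 1.0
            for word in related_terms(query_keyword):
                # related terms enter with a reduced weight (heuristic choice)
                user_model[word] = max(user_model.get(word, 0.0), 0.5)
            return user_model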

  3. Multirule Based Diagnostic Approach for the Fog Predictions Using WRF Modelling Tool

    Directory of Open Access Journals (Sweden)

    Swagata Payra

    2014-01-01

    Full Text Available The prediction of fog onset remains difficult despite progress in numerical weather prediction. Fog formation is a complex process and requires adequate representation of local perturbations in weather prediction models; it mainly depends upon microphysical and mesoscale processes that act within the boundary layer. This study utilizes a multirule based diagnostic (MRD) approach that postprocesses the model simulations for fog prediction. The empiricism involved in this approach mainly bridges the gap between the mesoscale and microscale variables that are related to the mechanism of fog formation. Fog is a common phenomenon during the winter season over Delhi, India, with the passage of western disturbances across the northwestern part of the country accompanied by significant amounts of moisture. This study implements the above approach for predicting the occurrence of fog and its onset time over Delhi. For this purpose, a high-resolution Weather Research and Forecasting (WRF) model is used for fog simulations. The study involves model validation and postprocessing of the model simulations for the MRD approach and its subsequent application to fog prediction. Through this approach, the model identified foggy and non-foggy days correctly 94% of the time. Further, the onset of fog events is well captured within an accuracy of 30–90 minutes. This study demonstrates that the multirule based postprocessing approach is a useful and highly promising tool for improving fog predictions.
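
    The actual rule set of the MRD approach is not given in the abstract; the sketch below only illustrates the general idea of combining postprocessed surface variables from a model like WRF with threshold rules. All thresholds and variable names are illustrative assumptions, not those of the cited study.

        # Sketch of a multirule fog diagnostic on postprocessed model output.
        from dataclasses import dataclass

        @dataclass
        class SurfacePoint:
            rh2m: float      # 2 m relative humidity (%)
            wind10m: float   # 10 m wind speed (m/s)
            t2m: float       # 2 m temperature (deg C)
            td2m: float      # 2 m dew point (deg C)

        def fog_flag(p: SurfacePoint) -> bool:
            """All rules must fire before a grid point is flagged as foggy."""
            rules = [
                p.rh2m >= 95.0,            # near-saturated surface layer
                p.wind10m <= 3.0,          # weak winds favour radiation fog
                (p.t2m - p.td2m) <= 1.0,   # small dew-point depression
            ]
            return all(rules)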

  4. A security modeling approach for web-service-based business processes

    DEFF Research Database (Denmark)

    Jensen, Meiko; Feja, Sven

    2009-01-01

    The rising need for security in SOA applications requires better support for management of non-functional properties in web-based business processes. Here, the model-driven approach may provide valuable benefits in terms of maintainability and deployment. Apart from modeling the pure functionality of a process, the consideration of security properties at the level of a process model is a promising approach. In this work-in-progress paper we present an extension to the ARIS SOA Architect that is capable of modeling security requirements as a separate security model view. Further we provide a transformation that automatically derives WS-SecurityPolicy-conformant security policies from the process model, which in conjunction with the generated WS-BPEL processes and WSDL documents provides the ability to deploy and run the complete security-enhanced process based on Web Service technology. © 2009 IEEE.

  6. Hierarchical Agent-Based Integrated Modelling Approach for Microgrids with Adoption of EVs and HRES

    Directory of Open Access Journals (Sweden)

    Peng Han

    2014-01-01

    Full Text Available The large-scale adoption of electric vehicles (EVs) and hybrid renewable energy systems (HRESs), together with increasing loads, brings significant challenges to the microgrid. A methodology for modelling microgrids with high EV and HRES penetration is key to assessing EV adoption and optimizing HRES deployment. However, considering the complex interactions of a microgrid containing massive numbers of EVs and HRESs, no single previous modelling approach is sufficient. Therefore, this paper proposes the Hierarchical Agent-based Integrated Modelling Approach (HAIMA). By effectively integrating agent-based modelling with other advanced modelling approaches, the proposed approach theoretically contributes a new microgrid model hierarchically constituted by a microgrid management layer, a component layer, and an event layer. HAIMA then links the key parameters and interconnects them to achieve the interactions of the whole model. Furthermore, HAIMA practically contributes a comprehensive microgrid operation system, through which the proposed model is assessed and the impact of EV adoption is evaluated. Simulations show that the proposed HAIMA methodology is beneficial for microgrid studies and EV operation assessment and can be further utilized for energy management, electricity consumption prediction, EV scheduling control, and HRES deployment optimization.

  7. Flexibility on storage-release based distributed hydrologic modeling with object-oriented approach

    Science.gov (United States)

    Kang, Kwangmin; Merwade, Venkatesh; Chun, Jong Ahn; Timlin, Dennis

    2016-09-01

    With the availability of advanced hydrologic data in the public domain, such as remotely sensed and climate change scenario data, there is a need for a modeling framework that is capable of using these data to simulate and extend hydrologic processes with multidisciplinary approaches for sustainable water resources management. To address this need, a storage-release based distributed hydrologic model (STORE DHM) is developed based on an object-oriented approach. The model is tested to demonstrate its flexibility and extensibility, i.e., how well the object-oriented approach accommodates further hydrologic research issues, in this study the reconstruction of missing precipitation, without changing the main framework. Moreover, the STORE DHM is applied to simulate hydrological processes with multiple classes in the Nanticoke watershed. This study also describes a conceptual and structural framework of object-oriented inheritance and aggregation characteristics within the STORE DHM. In addition, NearestMP (missing value estimation based on nearest-neighborhood regression) and KernelMP (missing value estimation based on a kernel function) are proposed for evaluating STORE DHM flexibility, and STORE DHM runoff hydrographs are compared with NearestMP and KernelMP runoff hydrographs. Overall, these comparisons show promising hydrograph outputs generated by the proposed two classes. Consequently, this study suggests that STORE DHM with an object-oriented approach can develop into a comprehensive water resources modeling tool through its flexibility and extensibility, by adding further classes.
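
    As a rough illustration of the NearestMP idea (missing precipitation estimated from neighbouring gauges), the sketch below picks the nearest station with a valid observation; the real STORE DHM class uses nearest-neighborhood regression and may differ in detail.

        import math

        def nearest_mp(target_xy, stations):
            """Return the value of the nearest station that has data.
            stations: list of ((x, y), value) with value None for gaps."""
            best, best_d = None, float("inf")
            for (x, y), value in stations:
                if value is None:          # this station also has a gap
                    continue
                d = math.hypot(x - target_xy[0], y - target_xy[1])
                if d < best_d:
                    best, best_d = value, d
            return best

        gauges = [((0.0, 1.0), 4.2), ((3.0, 4.0), None), ((1.0, 1.0), 3.8)]
        print(nearest_mp((0.9, 0.9), gauges))  # -> 3.8, the nearest valid gauge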

  8. Modelling the creep behaviour of tempered martensitic steel based on a hybrid approach

    Energy Technology Data Exchange (ETDEWEB)

    Yadav, Surya Deo, E-mail: surya.yadav@tugraz.at [Institute of Materials Science and Welding, Graz University of Technology, Kopernikusgasse 24, A-8010 Graz (Austria); Sonderegger, Bernhard, E-mail: bernhard.sonderegger@tugraz.at [Institute of Materials Science and Welding, Graz University of Technology, Kopernikusgasse 24, A-8010 Graz (Austria); Stracey, Muhammad, E-mail: strmuh001@myuct.ac.za [Centre for Materials Engineering, Department of Mechanical Engineering, University of Cape Town, Cape Town (South Africa); Poletti, Cecilia, E-mail: cecilia.poletti@tugraz.at [Institute of Materials Science and Welding, Graz University of Technology, Kopernikusgasse 24, A-8010 Graz (Austria)

    2016-04-26

    In this work, we present a novel hybrid approach to describe and model the creep behaviour of tempered martensitic steels. The hybrid approach couples a physically based model with a continuum damage mechanics (CDM) model. The creep strain is modelled describing the motions of three categories of dislocations: mobile, dipole and boundary. The initial precipitate state is simulated using the thermodynamic software tool MatCalc. The particle radii and number densities are incorporated into the creep model in terms of Zener drag pressure. Orowan's equation for creep strain rate is modified to account for tertiary creep using softening parameters related to precipitate coarsening and cavitation. For the first time the evolution of internal variables such as dislocation densities, glide velocities, effective stresses on dislocations, internal stress from the microstructure, subgrain size, pressure on subgrain boundaries and softening parameters is discussed in detail. The model is validated with experimental data of P92 steel reported in the literature.
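
    The abstract does not reproduce the modified equation, but the classical Orowan kinetic law it builds on relates the creep strain rate to the mobile dislocation density and glide velocity (the notation here is ours, not necessarily the paper's):

        \dot{\varepsilon} = \frac{b}{M}\,\rho_m\,v_m

    with b the Burgers vector, M the Taylor factor, \rho_m the mobile dislocation density and v_m the glide velocity; in the tertiary regime the paper scales this rate with softening parameters for precipitate coarsening and cavitation, whose exact form is not given in the abstract.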

  9. An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation

    Science.gov (United States)

    2012-09-01

    always choose the endpoints to determine the RUL bounds; however, in this case the UT does this automatically with the added benefit of being able to... approaches for model-based prognostics. In Proceedings of the 2012 IEEE Aerospace Conference. Edwards, D., Orchard, M. E., Tang, L., Goebel, K., & Vacht...

  10. A Hybrid Approach to Combine Physically Based and Data-Driven Models in Simulating Sediment Transportation

    NARCIS (Netherlands)

    Sewagudde, S.

    2008-01-01

    The objective of this study is to develop a methodology for hybrid modelling of sedimentation in a coastal basin or large shallow lake where physically based and data driven approaches are combined. This research was broken down into three blocks. The first block explores the possibility of approxim

  11. An agent-based approach to model land-use change at a regional scale

    NARCIS (Netherlands)

    Valbuena, D.; Verburg, P.H.; Bregt, A.; Ligtenberg, A.

    2010-01-01

    Land-use/cover change (LUCC) is a complex process that includes actors and factors at different social and spatial levels. A common approach to analyse and simulate LUCC as the result of individual decisions is agent-based modelling (ABM). However, ABM is often applied to simulate processes at local

  12. FAULT DIAGNOSIS APPROACH BASED ON HIDDEN MARKOV MODEL AND SUPPORT VECTOR MACHINE

    Institute of Scientific and Technical Information of China (English)

    LIU Guanjun; LIU Xinmin; QIU Jing; HU Niaoqing

    2007-01-01

    Aiming at solving the problems of machine learning in fault diagnosis, a diagnosis approach is proposed based on the hidden Markov model (HMM) and the support vector machine (SVM). An HMM usually describes the intra-class measure well and is good at dealing with continuous dynamic signals; an SVM expresses inter-class differences effectively and has excellent classification ability. This approach builds on the merits of both. Experiments are then performed on the transmission system of a helicopter: with features extracted from vibration signals in the gearbox, the HMM-SVM based diagnostic approach is trained and used to monitor and diagnose the gearbox's faults. The results show that this method achieves higher diagnostic accuracy with small training samples than either HMM-based or SVM-based diagnosis alone.
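
    One way such a hybrid can be wired together (a sketch under our own assumptions, not necessarily the authors' architecture) is to train one HMM per fault class on vibration feature sequences and feed the per-class log-likelihoods to an SVM, which then draws the inter-class boundaries. The sketch assumes the hmmlearn and scikit-learn packages.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM
        from sklearn.svm import SVC

        def train_hmms(seqs_by_class, n_states=4):
            """Fit one Gaussian HMM per fault class on feature sequences."""
            hmms = {}
            for label, seqs in seqs_by_class.items():
                X = np.vstack(seqs)                # (total_samples, n_features)
                lengths = [len(s) for s in seqs]
                hmms[label] = GaussianHMM(n_components=n_states).fit(X, lengths)
            return hmms

        def loglik_features(hmms, seq):
            """Per-class log-likelihoods as a compact SVM feature vector."""
            return [hmms[label].score(seq) for label in sorted(hmms)]

        def train_classifier(hmms, labelled_seqs):
            # The SVM sharpens the inter-class boundaries that the
            # per-class HMM likelihoods describe only weakly.
            X = [loglik_features(hmms, s) for s, _ in labelled_seqs]
            y = [label for _, label in labelled_seqs]
            return SVC(kernel="rbf").fit(X, y)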

  13. Model-based approaches to deal with detectability: a comment on Hutto (2016)

    Science.gov (United States)

    Marques, Tiago A.; Thomas, Len; Kéry, Marc; Buckland, Steve T.; Borchers, David L.; Rexstad, Eric; Fewster, Rachel M.; MacKenzie, Darryl I.; Royle, Andy; Guillera-Arroita, Gurutzeta; Handel, Colleen M.; Pavlacky, David C.; Camp, Richard J.

    2017-01-01

    In a recent paper, Hutto (2016a) challenges the need to account for detectability when interpreting data from point counts. A number of issues with model-based approaches to deal with detectability are presented, and an alternative suggested: surveying an area around each point over which detectability is assumed certain. The article contains a number of false claims and errors of logic, and we address these here. We provide suggestions about appropriate uses of distance sampling and occupancy modeling, arising from an intersection of design- and model-based inference.

  14. A Novel Soft Sensor Modeling Approach Based on Least Squares Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    Feng Rui(冯瑞); Song Chunlin; Zhang Yanzhu; Shao Huihe

    2004-01-01

    Artificial Neural Networks (ANNs) such as radial basis function neural networks (RBFNNs) have been successfully used in soft sensor modeling. However, the generalization ability of conventional ANNs is not very good. For this reason, we present a novel soft sensor modeling approach based on Support Vector Machines (SVMs). Since standard SVMs have limitations of speed and size when training large data sets, we propose Least Squares Support Vector Machines (LS-SVMs) and apply them to soft sensor modeling. Systematic analysis is performed and the results indicate that the proposed method provides satisfactory performance with excellent approximation and generalization properties. Monte Carlo simulations show that our soft sensor modeling approach achieves performance superior to the conventional method based on RBFNNs.
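
    The computational advantage of LS-SVMs is that training reduces to a single linear system instead of a quadratic program. Below is a minimal numpy sketch of the standard LS-SVM regression formulation (our notation; gamma is the regularization constant and sigma the RBF kernel width), not the paper's implementation.

        import numpy as np

        def rbf(X1, X2, sigma=1.0):
            d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
            """Solve the standard LS-SVM linear system
               [0  1^T        ] [b]       [0]
               [1  K + I/gamma] [alpha] = [y]   for bias b and alpha."""
            n = len(y)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
            return sol[0], sol[1:]

        def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
            return rbf(X_new, X_train, sigma) @ alpha + b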

  15. THE EFFECTIVENESS OF RHETORIC-BASED ESSAY WRITING TEACHING MODEL WITH CONTEXTUAL APPROACH

    Directory of Open Access Journals (Sweden)

    - Akbar

    2015-06-01

    Full Text Available This study aims to develop a rhetoric-based essay writing teaching model with a contextual approach in order to improve the essay writing skills of students in the English Department of the Education and Teaching Faculty of Lakidende University of Konawe. The instructional model was developed using research and development methodology. The results show that the model can improve students' essay writing skills effectively: in the experimental class at the Education and Teaching Faculty of Lakidende University of Konawe, Southeast Sulawesi province of Indonesia, students achieved a score of 69.80. Thus, it can be concluded that the rhetoric-based essay writing teaching model with a contextual approach that has been developed can improve the essay writing skills of students of the English Department and was found appropriate.

  16. Embedded System Construction: Evaluation of a Model-Driven and Component-Based Development Approach

    OpenAIRE

    Bunse, C.; Gross, H.G.; Peper, C. (Claudia)

    2008-01-01

    Preprint of paper published in: Models in Software Engineering, Lecture Notes in Computer Science 5421, 2009; doi:10.1007/978-3-642-01648-6_8 Model-driven development has become an important engineering paradigm. It is said to have many advantages over traditional approaches, such as reuse or quality improvement, also for embedded systems. Along a similar line of argumentation, component-based software engineering is advocated. In order to investigate these claims, the MARMOT method was appli...

  17. Combining Model-Based and Feature-Driven Diagnosis Approaches - A Case Study on Electromechanical Actuators

    Science.gov (United States)

    Narasimhan, Sriram; Roychoudhury, Indranil; Balaban, Edward; Saxena, Abhinav

    2010-01-01

    Model-based diagnosis typically uses analytical redundancy to compare predictions from a model against observations from the system being diagnosed. However, this approach does not work very well when it is not feasible to create analytic relations describing all the observed data, e.g., for vibration data, which is usually sampled at very high rates and requires very detailed finite element models to describe its behavior. In such cases, features (in time and frequency domains) that contain diagnostic information are extracted from the data. Since this is a computationally intensive process, it is not efficient to extract all the features all the time. In this paper we present an approach that combines the analytic model-based and feature-driven diagnosis approaches. The analytic approach is used to reduce the set of possible faults, and then features are chosen to best distinguish among the remaining faults. We describe an implementation of this approach on the Flyable Electro-mechanical Actuator (FLEA) test bed.

  18. Expectation-driven interaction: a model based on Luhmann's contingency approach

    CERN Document Server

    Barber, M J; Buchinger, E; Cessac, B; Streit, L; Blanchard, Ph.

    2006-01-01

    We introduce an agent-based model of interaction, drawing on the contingency approach from Luhmann's theory of social systems. The agent interactions are defined by the exchange of distinct messages. Message selection is based on the history of the interaction and developed within the confines of the problem of double contingency. We examine interaction strategies in the light of the message-exchange description using analytical and computational methods.

  19. On Mechanism, Process and Polity: An Agent-Based Modeling and Simulation Approach

    Directory of Open Access Journals (Sweden)

    Camelia Florela Voinea

    2014-07-01

    Full Text Available The present approach provides a theoretical account of political culture-based modeling of political change phenomena. Our approach is an agent-based simulation model inspired by a social-psychological account of the relation between the individual agents (citizens) and the polity. It includes political culture as a fundamental modeling dimension. On this background, we reconsider the operational definitions of agent, mechanism, process, and polity so as to specify the role they play in the modeling of political change phenomena. We evaluate our previous experimental simulation experience in corruption emergence and political attitude change. The paper approaches the artificial polity as a political culture-based model of a body politic. It involves political culture concepts to account for the complexity of domestic political phenomena, going from political attitude change at the individual level up to major political change at the societal level. Architecture, structure, unit of interaction, generative mechanisms and processes are described. Both conceptual and experimental issues are described so as to highlight the differences between the simulation models of society and polity.

  1. A tensorial approach to the inversion of group-based phylogenetic models.

    Science.gov (United States)

    Sumner, Jeremy G; Jarvis, Peter D; Holland, Barbara R

    2014-12-04

    Hadamard conjugation is part of the standard mathematical armoury in the analysis of molecular phylogenetic methods. For group-based models, the approach provides a one-to-one correspondence between the so-called "edge length" and "sequence" spectrum on a phylogenetic tree. The Hadamard conjugation has been used in diverse phylogenetic applications not only for inference but also as an important conceptual tool for thinking about molecular data, leading to generalizations beyond strictly tree-like evolutionary modelling. For general group-based models of phylogenetic branching processes, we reformulate the problem of constructing a one-to-one correspondence between pattern probabilities and edge parameters. This takes a classic result previously shown through use of Fourier analysis and presents it in the language of tensors and group representation theory. This derivation makes it clear why the inversion is possible, because, under their usual definition, group-based models are defined for abelian groups only. We provide an inversion of group-based phylogenetic models that can be implemented using matrix multiplication between rectangular matrices indexed by ordered partitions of varying sizes. Our approach provides additional context for the construction of phylogenetic probability distributions on network structures, and highlights the potential limitations of restricting to group-based models in this setting.
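
    For the simplest (two-state, symmetric) group-based model, the Hadamard conjugation referred to above takes its well-known form (our summary, not the paper's tensorial notation; exp and ln act componentwise, and H is the Hadamard matrix):

        s = H^{-1} \exp(H q), \qquad q = H^{-1} \ln(H s)

    where s is the sequence (site-pattern) spectrum and q the edge-length spectrum; the paper reformulates and generalizes this inversion to arbitrary abelian group-based models.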

  2. A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.

    Science.gov (United States)

    Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin

    2017-02-01

    The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed.
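
    In the standard MNL the severity propensity is linear in the covariates; the generalized nonlinear formulation keeps the logit choice form but replaces the linear index with a nonlinear predictor, schematically (our notation, with some coefficients random across observations to give the mixed model):

        P_i(k) = \frac{\exp\big(f_k(\mathbf{x}_i;\,\boldsymbol{\beta}_k)\big)}{\sum_{j} \exp\big(f_j(\mathbf{x}_i;\,\boldsymbol{\beta}_j)\big)}

    where f_k is the nonlinear predictor for severity level k, against the linear special case f_k(\mathbf{x}_i) = \boldsymbol{\beta}_k^{\top}\mathbf{x}_i.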

  3. Fuzzy-Rule-Based Approach for Modeling Sensory Acceptability of Food Products

    Directory of Open Access Journals (Sweden)

    Olusegun Folorunso

    2009-04-01

    Full Text Available The prediction of product acceptability is often an additive effect of individual fuzzy impressions developed by a consumer of certain underlying attributes characteristic of the product. In this paper, we present the development of a data-driven fuzzy-rule-based approach for predicting the overall sensory acceptability of food products, in this case composite cassava-wheat bread. The model was formulated using the Takagi-Sugeno-Kang (TSK) fuzzy modeling approach. Experiments with the model derived from sampled data were simulated on Windows 2000/XP running in an Intel 2 GHz environment. The fuzzy membership functions for the sensory scores were implemented in MATLAB 6.0 using the fuzzy logic toolkit, and the weights of each linguistic attribute were obtained using a correlation coefficient formula. The results obtained are compared to those of human judgments. Overall assessments suggest that, if implemented, this approach will facilitate better acceptability of cassava bread as well as nutritionally improved food.
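
    A TSK model of the kind described expresses each rule's consequent as a (usually linear) function of the inputs and combines the rules by their firing strengths; in standard form (our notation, not the paper's exact rule base):

        y_i = a_{i0} + \sum_j a_{ij} x_j, \qquad
        \hat{y} = \frac{\sum_i w_i\, y_i}{\sum_i w_i}, \qquad
        w_i = \prod_j \mu_{A_{ij}}(x_j)

    where \mu_{A_{ij}} are the fuzzy membership functions of the sensory attributes and \hat{y} is the predicted overall acceptability.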

  4. MODEL-BASED SOFTWARE ENGINEERING (MBSE) AND ITS VARIOUS APPROACHES AND CHALLENGES

    Directory of Open Access Journals (Sweden)

    Reema Sandhu

    2015-10-01

    Full Text Available One of the goals of software design is to model a system in such a way that it is easily understandable. The use of model-based software development is increasingly popular due to recent advancements in modeling technology. Nowadays the tendency in software development is changing from manual coding to automatic code generation, thus relieving humans from detailed coding. This is a response to the software crisis, in which the cost of hardware has decreased while the cost of software development has increased sharply. This paper presents the drastic changes related to modeling, the different approaches, and the important challenging issues that recur in MBSE. New perspectives are provided on some fundamental issues, such as the distinctions between model-driven development and architecture-centric development, code generation, and meta-modeling. Achieving a positive future will require, however, specific advances in software modeling, code generation, and model-code consistency management.

  5. A Model-Based Approach to Object-Oriented Software Metrics

    Institute of Scientific and Technical Information of China (English)

    梅宏; 谢涛; 杨芙清

    2002-01-01

    The need to improve software productivity and software quality has put forward research on software metrics technology and the development of software metrics tools to support related activities. To support object-oriented software metrics practice effectively, a model-based approach to object-oriented software metrics is proposed in this paper. This approach guides metrics users to adopt a quality metrics model to measure object-oriented software products. The development of the model can be achieved using a top-down approach. The approach explicitly proposes the conception of absolute normalization computation and relative normalization computation for a metrics model. Moreover, a generic software metrics tool - Jade Bird Object-Oriented Metrics Tool (JBOOMT) - is designed to implement this approach. The parser-based approach adopted by the tool makes the information of the source program accurate and complete for measurement. It supports various customizable hierarchical metrics models and provides a flexible user interface for users to manipulate the models. It also supports absolute and relative normalization mechanisms in different situations.

  6. Sequence-Based Pronunciation Variation Modeling for Spontaneous ASR Using a Noisy Channel Approach

    Science.gov (United States)

    Hofmann, Hansjörg; Sakti, Sakriani; Hori, Chiori; Kashioka, Hideki; Nakamura, Satoshi; Minker, Wolfgang

    The performance of English automatic speech recognition systems decreases when recognizing spontaneous speech, mainly due to multiple pronunciation variants in the utterances. Previous approaches address this problem by modeling the alteration of the pronunciation on a phoneme-to-phoneme level. However, the phonetic transformation effects induced by the pronunciation of the whole sentence have not yet been considered. In this article, the sequence-based pronunciation variation is modeled using a noisy channel approach where the spontaneous phoneme sequence is considered as a “noisy” string and the goal is to recover the “clean” string of the word sequence. Hereby, the whole word sequence and its effect on the alteration of the phonemes will be taken into consideration. Moreover, the system not only learns the phoneme transformation but also the mapping from the phoneme to the word directly. In this study, first the phonemes will be recognized with the present recognition system and afterwards the pronunciation variation model based on the noisy channel approach will map from the phoneme to the word level. Two well-known natural language processing approaches are adopted and derived from the noisy channel model theory: joint-sequence models and statistical machine translation. Both of them are applied and various experiments are conducted using microphone and telephone spontaneous speech.
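
    The noisy channel decomposition described here is the classical one: if \Phi is the recognized ("noisy") phoneme sequence, the most likely word sequence is recovered as

        \hat{W} = \arg\max_{W} P(W \mid \Phi) = \arg\max_{W} P(\Phi \mid W)\, P(W)

    where P(\Phi \mid W) is the channel (pronunciation variation) model learned on whole sequences and P(W) is the language model; joint-sequence models and statistical machine translation provide two concrete ways of estimating the channel term.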

  7. An Adaptive Agent-Based Model of Homing Pigeons: A Genetic Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Francis Oloo

    2017-01-01

    Full Text Available Conventionally, agent-based modelling approaches start from a conceptual model capturing the theoretical understanding of the systems of interest, and simulation outcomes are used “at the end” to validate that conceptual understanding. In today's data-rich era, there are suggestions that models should be data-driven. Data-driven workflows are common in mathematical models; however, their application to agent-based models is still in its infancy. Integration of real-time sensor data into modelling workflows opens up the possibility of comparing simulations against real data during the model run, so that calibration and validation become automated processes that are iteratively executed during the simulation. We hypothesize that incorporation of real-time sensor data into agent-based models improves the predictive ability of such models, and in particular that such integration results in increasingly well calibrated model parameters and rule sets. In this contribution, we explore this question by implementing a flocking model that evolves in real time. Specifically, we use a genetic algorithm approach to simulate representative parameters describing the flight routes of homing pigeons. The navigation parameters of the pigeons are simulated and dynamically evaluated against emulated GPS sensor data streams and optimised based on the fitness of candidate parameters. As a result, the model was able to accurately simulate the relative turn angles and step distance of homing pigeons. Further, the optimised parameters could replicate loops, which are common patterns in the flight tracks of homing pigeons. Finally, the use of genetic algorithms in this study allowed for simultaneous data-driven optimization and sensitivity analysis.
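
    The genetic optimization loop can be sketched generically as below; the two navigation parameters (turn-angle spread and step distance), their ranges, and the fitness function that scores simulated tracks against the emulated GPS stream are all illustrative assumptions, not the study's actual configuration.

        import random

        def evolve(fitness, pop_size=30, generations=50, p_mut=0.1):
            """Generic GA over (turn_sd_deg, step_m) parameter pairs;
            'fitness' compares simulated tracks to the sensor stream."""
            pop = [(random.uniform(0, 90), random.uniform(1, 50))
                   for _ in range(pop_size)]
            for _ in range(generations):
                ranked = sorted(pop, key=fitness, reverse=True)
                parents = ranked[: pop_size // 2]          # truncation selection
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    child = (a[0], b[1])                   # one-point crossover
                    if random.random() < p_mut:            # Gaussian mutation
                        child = (child[0] + random.gauss(0, 5),
                                 child[1] + random.gauss(0, 2))
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)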

  8. Multidisciplinary Approach to Flood Forecasting on the Base of Earth Observation Data and Hydrological Modelling

    Science.gov (United States)

    Zelentsov, Viacheslav; Potryasaev, Semen; Sokolov, Boris

    2016-08-01

    In this paper a new approach to the creation of short-term forecasting systems for river flooding is further developed. It provides highly accurate forecasting results due to operative acquisition and integrated processing of remote sensing and ground-based water flow data in real time. Forecasting of flood areas and depths is performed on a time interval of 12 to 48 hours, to allow the necessary steps to alert and evacuate the population. Forecast results are available as web services. The proposed system extends traditional separate methods based on satellite monitoring or modeling of a river's physical processes by using an interdisciplinary approach, integrating different models and technologies, and intelligently choosing the most suitable models for flood forecasting.

  9. A Cluster-based Approach Towards Detecting and Modeling Network Dictionary Attacks

    Directory of Open Access Journals (Sweden)

    A. Tajari Siahmarzkooh

    2016-12-01

    Full Text Available In this paper, we provide an approach to detect network dictionary attacks using a data set collected as flows, from which a clustered graph results. These flows provide an aggregated view of the network traffic, in which the packets exchanged in the network are considered so that more internally connected nodes are clustered together. We show that dictionary attacks can be detected through parameters such as the number and weight of clusters in time series and their evolution over time. Additionally, a Markov model based on the average weight of clusters is also created. Finally, by means of our suggested model, we demonstrate that artificial clusters of flows are created for normal and malicious traffic. The results of the proposed approach on the CAIDA 2007 data set suggest a high accuracy for the model; therefore, it provides a proper method for detecting dictionary attacks.

  10. In silico model-based inference: a contemporary approach for hypothesis testing in network biology.

    Science.gov (United States)

    Klinke, David J

    2014-01-01

    Inductive inference plays a central role in the study of biological systems where one aims to increase their understanding of the system by reasoning backwards from uncertain observations to identify causal relationships among components of the system. These causal relationships are postulated from prior knowledge as a hypothesis or simply a model. Experiments are designed to test the model. Inferential statistics are used to establish a level of confidence in how well our postulated model explains the acquired data. This iterative process, commonly referred to as the scientific method, either improves our confidence in a model or suggests that we revisit our prior knowledge to develop a new model. Advances in technology impact how we use prior knowledge and data to formulate models of biological networks and how we observe cellular behavior. However, the approach for model-based inference has remained largely unchanged since Fisher, Neyman and Pearson developed the ideas in the early 1900s that gave rise to what is now known as classical statistical hypothesis (model) testing. Here, I will summarize conventional methods for model-based inference and suggest a contemporary approach to aid in our quest to discover how cells dynamically interpret and transmit information for therapeutic aims that integrates ideas drawn from high performance computing, Bayesian statistics, and chemical kinetics.

  11. Mobile Agent-Based Software Systems Modeling Approaches: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Aissam Belghiat

    2016-06-01

    Full Text Available Mobile agent-based applications are a special type of software system that takes advantage of mobile agents in order to provide a new, beneficial paradigm for solving multiple complex problems in several fields and areas such as network management, e-commerce, and e-learning. At the same time, we notice a lack of real applications based on this paradigm and a lack of serious evaluations of their modeling approaches. Hence, this paper provides a comparative study of modeling approaches for mobile agent-based software systems. The objective is to give the reader an overview and a thorough understanding of the work that has been done and where the gaps in the research are.

  12. Formalising and acquiring model-based hypertext in medicine: an integrative approach.

    Science.gov (United States)

    Spreckelsen, C; Spitzer, K

    1998-09-01

    Combining a knowledge acquisition methodology with a powerful data model, we present an approach to the acquisition, maintenance and browsing of scientific medical hypertext. The hypergraph-based data model supports the consistent treatment of cyclic data structures and the nesting of complex objects, and provides an elegant way of declaring paths to represent time-dependent medical processes or large hypertext tours. It encourages a stepwise schema design and therefore supports a spiral-shaped acquisition process. We formally define view mechanisms on the basis of a rule-based query and modification language. The views enable a context-sensitive presentation of medical knowledge according to the informational needs of the physician. Our approach has been applied to the implementation of an authoring and tutoring environment for a computer-based hypermedia reference book for cerebrovascular diseases (NeuroN). During the acquisition process, the expressive power and flexibility of the representational formats have been evaluated.

  13. Environmental stress level evaluation approach based on physical model and interval grey association degree

    Institute of Scientific and Technical Information of China (English)

    Deng Guanqian; Qiu Jing; Liu Guanjun; Lv Kehong

    2013-01-01

    Associating environmental stresses (ESs) with built-in test (BIT) output is an important means to help diagnose intermittent faults (IFs). Aiming at the low association efficiency of traditional time stress measurement devices (TSMDs), an association model is built, and a novel approach is given to evaluate the integrated environmental stress (IES) level. Firstly, the selection principle and approach for main environmental stresses (MESs) and key characteristic parameters (KCPs) are presented based on fault mode, mechanism, and ES analysis (FMMEA). Secondly, reference stress events (RSEs) are constructed by dividing the IES into three stress levels according to its impact on faults, and the association model between integrated environmental stress events (IESEs) and BIT output is built. Thirdly, an interval grey association approach to evaluate the IES level is proposed, since the IES value is an interval number; the association output can then be obtained as well. Finally, a case study is presented to demonstrate the proposed approach. Results show that the proposed model and approach are effective and feasible. This approach can be used to guide ES measurement, recording, and association, and it is well suited for on-line assistant diagnosis of faults, especially IFs.

  14. Modelling and simulation of electrical energy systems through a complex systems approach using agent-based models

    Energy Technology Data Exchange (ETDEWEB)

    Kremers, Enrique

    2013-10-01

    Complexity science aims to better understand the processes of both natural and man-made systems which are composed of many interacting entities at different scales. A disaggregated approach is proposed for simulating electricity systems, by using agent-based models coupled to continuous ones. The approach can help in acquiring a better understanding of the operation of the system itself, e.g. on emergent phenomena or scale effects; as well as in the improvement and design of future smart grids.

  15. Forecasting Helicoverpa populations in Australia: A comparison of regression based models and a bioclimatic based modelling approach

    Institute of Scientific and Technical Information of China (English)

    MYRON P. ZALUCKI; MICHAEL J. FURLONG

    2005-01-01

    Long-term forecasts of pest pressure are central to the effective management of many agricultural insect pests. In the eastern cropping regions of Australia, serious infestations of Helicoverpa punctigera (Wallengren) and H. armigera (Hübner) (Lepidoptera: Noctuidae) are experienced annually. Regression analyses of a long series of light-trap catches of adult moths were used to describe the seasonal dynamics of both species. The size of the spring generation in eastern cropping zones could be related to rainfall in putative source areas in inland Australia. Subsequent generations could be related to the abundance of various crops in agricultural areas, rainfall and the magnitude of the spring population peak. As rainfall figured prominently as a predictor variable, and can itself be predicted using the Southern Oscillation Index (SOI), trap catches were also related to this variable. The geographic distribution of each species was modelled in relation to climate, and CLIMEX was used to predict temporal variation in abundance at given putative source sites in inland Australia using historical meteorological data. These predictions were then correlated with subsequent pest abundance data in a major cropping region. The regression-based and bioclimatic-based approaches to predicting pest abundance are compared and their utility in predicting and interpreting pest dynamics is discussed.

  16. A Hydrological Model To Bridge The Gap Between Conceptual and Physically Based Approaches

    Science.gov (United States)

    Lempert, M.; Ostrowski, M.; Blöschl, G.

    In the last decade it has become evident that models need to account for more realistic physical assumptions and for improved data availability and computational facilities. In general, the dominant objectives are to better account for nonlinearity and to achieve less uncertain parameter identification, which also allows application to ungaged catchments. To meet these objectives under improved computational boundary conditions, a new model has been developed, tested and validated at Darmstadt University of Technology. The model is quasi non-linear, uses GIS-provided data and includes physically based (not physical) model parameters that are readily available from digitally stored information. Surface runoff, determined after physically based non-linear soil moisture modelling, is routed with the kinematic cascade approach according to digital elevation grid models, while sub-surface flow is routed through linear conceptual modules. The model uses generally accepted parameters for soil moisture modelling, including vegetation canopy, such as total porosity, field capacity, wilting point, hydraulic conductivities, leaf area index and canopy coverage. The model has been successfully applied to several test sites and catchments at local, micro and lower macro scales. The objectives of the paper are to explain the background of the model development, briefly explain the algorithms, discuss model parameter identification, and present case study results.

  17. Combining FDI and AI approaches within causal-model-based diagnosis.

    Science.gov (United States)

    Gentil, Sylviane; Montmain, Jacky; Combastel, Christophe

    2004-10-01

    This paper presents a model-based diagnostic method designed in the context of process supervision. It has been inspired by both artificial intelligence and control theory. AI contributes tools for qualitative modeling, including causal modeling, whose aim is to split a complex process into elementary submodels. Control theory, within the framework of fault detection and isolation (FDI), provides numerical models for generating and testing residuals, and for taking into account inaccuracies in the model, unknown disturbances and noise. Consistency-based reasoning provides a logical foundation for diagnostic reasoning and clarifies fundamental assumptions, such as single fault and exoneration. The diagnostic method presented in the paper benefits from the advantages of all these approaches. Causal modeling enables the method to focus on sufficient relations for fault isolation, which avoids combinatorial explosion. Moreover, it allows the model to be modified easily without changing any aspect of the diagnostic algorithm. The numerical submodels that are used to detect inconsistency benefit from the precise quantitative analysis of the FDI approach. The FDI models are studied in order to link this method with DX component-oriented reasoning. The recursive on-line use of this algorithm is explained and the concept of local exoneration is introduced.

  18. Hysteresis Nonlinearity Identification Using New Preisach Model-Based Artificial Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Zakerzadeh

    2011-01-01

    Full Text Available The Preisach model is a well-known hysteresis identification method in which the hysteresis is modeled by a linear combination of hysteresis operators. Although the Preisach model describes the main features of systems with hysteresis behavior, its rigorous numerical nature makes it inconvenient to use in real-time control applications. Here a novel neural network approach based on the Preisach model is addressed, which provides accurate hysteresis nonlinearity modeling in comparison with the classical Preisach model and can be used for many applications, such as hysteresis nonlinearity control and identification in SMA and piezo actuators and performance evaluation in physical systems such as magnetic materials. To evaluate the proposed approach, an experimental apparatus consisting of a one-dimensional flexible aluminum beam actuated by an SMA wire is used. It is shown that the proposed ANN-based Preisach model can identify hysteresis nonlinearity more accurately than the classical one. It also has a powerful ability to precisely predict higher-order hysteresis minor loop behavior even though only first-order reversal data are used. It is also shown that to get equally precise results with the classical Preisach model, many more data points are needed, which directly increases the experimental cost.
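
    The classical Preisach model underlying the network is the double integral over elementary relay operators \gamma_{\alpha\beta} (switching up at u = \alpha and down at u = \beta):

        y(t) = \iint_{\alpha \ge \beta} \mu(\alpha, \beta)\, \gamma_{\alpha\beta}[u](t)\, d\alpha\, d\beta

    where \mu(\alpha, \beta) is the Preisach density identified from first-order reversal data; in the proposed approach the neural network learns this input-output map instead of evaluating the integral numerically.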

  19. Agent-based Ecological Model Calibration - on the Edge of a New Approach

    CERN Document Server

    Pereira, Antonio; Reis, Luis Paulo

    2008-01-01

    The purpose of this paper is to present a new approach to ecological model calibration -- an agent-based software. This agent works on three stages: 1- It builds a matrix that synthesizes the inter-variable relationships; 2- It analyses the steady-state sensitivity of different variables to different parameters; 3- It runs the model iteratively and measures model lack of fit, adequacy and reliability. Stage 3 continues until some convergence criteria are attained. At each iteration, the agent knows from stages 1 and 2, which parameters are most likely to produce the desired shift on predicted results.

  20. A model based message passing approach for flexible and scalable home automation controllers

    Energy Technology Data Exchange (ETDEWEB)

    Bienhaus, D. [INNIAS GmbH und Co. KG, Frankenberg (Germany); David, K.; Klein, N.; Kroll, D. [ComTec Kassel Univ., SE Kassel Univ. (Germany); Heerdegen, F.; Jubeh, R.; Zuendorf, A. [Kassel Univ. (Germany). FG Software Engineering; Hofmann, J. [BSC Computer GmbH, Allendorf (Germany)

    2012-07-01

    There is a large variety of home automation systems that are largely proprietary systems from different vendors. In addition, the configuration and administration of home automation systems is frequently a very complex task especially, if more complex functionality shall be achieved. Therefore, an open model for home automation was developed that is especially designed for easy integration of various home automation systems. This solution also provides a simple modeling approach that is inspired by typical home automation components like switches, timers, etc. In addition, a model based technology to achieve rich functionality and usability was implemented. (orig.)

  1. Comprehensive Stability Evaluation of Rock Slope Using the Cloud Model-Based Approach

    Science.gov (United States)

    Liu, Zaobao; Shao, Jianfu; Xu, Weiya; Xu, Fei

    2014-11-01

    This article presents a cloud model-based approach for comprehensive stability evaluation of complicated rock slopes of hydroelectric stations in mountainous areas. The approach is based on membership cloud models, which can account for randomness and fuzziness in slope stability evaluation. Slope stability is affected by various factors, each of which is ranked into five grades; the ranking factors are sorted into four categories. The ranking system of slope stability is introduced, and the membership cloud models are then applied to analyze each ranking factor and generate cloud memberships. Afterwards, the obtained cloud memberships are synthesized with the factor weights given by experts for comprehensive stability evaluation of rock slopes. The proposed approach is used for the stability evaluation of the left abutment slope of the Jinping 1 Hydropower Station. It is shown that the cloud model-based strategy can well consider the effects of each ranking factor and is therefore feasible and reliable for comprehensive stability evaluation of rock slopes.
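
    The membership clouds mentioned here are generated by the standard forward normal cloud generator from three numerical characteristics (expectation Ex, entropy En, hyper-entropy He); a minimal sketch follows, with the mapping of ranking factors and grades onto (Ex, En, He) left out since the abstract does not give it.

        import math
        import random

        def normal_cloud(Ex, En, He, n_drops=1000):
            """Forward normal cloud generator: returns (drop, membership) pairs."""
            drops = []
            for _ in range(n_drops):
                En_i = random.gauss(En, He)        # randomized entropy
                x = random.gauss(Ex, abs(En_i))    # one cloud drop
                mu = math.exp(-(x - Ex) ** 2 / (2.0 * En_i ** 2 + 1e-12))
                drops.append((x, mu))
            return drops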

  2. A survey on model based approaches for 2D and 3D visual human pose recovery.

    Science.gov (United States)

    Perez-Sala, Xavier; Escalera, Sergio; Angulo, Cecilio; Gonzàlez, Jordi

    2014-03-03

    Human Pose Recovery has been studied in the field of Computer Vision for the last 40 years. Several approaches have been reported, and significant improvements have been obtained in both data representation and model design. However, the problem of Human Pose Recovery in uncontrolled environments is far from being solved. In this paper, we define a general taxonomy to group model based approaches for Human Pose Recovery, which is composed of five main modules: appearance, viewpoint, spatial relations, temporal consistence, and behavior. Subsequently, a methodological comparison is performed following the proposed taxonomy, evaluating current SoA approaches in the aforementioned five group categories. As a result of this comparison, we discuss the main advantages and drawbacks of the reviewed literature.

  3. Use of an ecologically relevant modelling approach to improve remote sensing-based schistosomiasis risk profiling.

    Science.gov (United States)

    Walz, Yvonne; Wegmann, Martin; Leutner, Benjamin; Dech, Stefan; Vounatsou, Penelope; N'Goran, Eliézer K; Raso, Giovanna; Utzinger, Jürg

    2015-01-01

    Schistosomiasis is a widespread water-based disease that puts close to 800 million people at risk of infection with more than 250 million infected, mainly in sub-Saharan Africa. Transmission is governed by the spatial distribution of specific freshwater snails that act as intermediate hosts and the frequency, duration and extent of human bodies exposed to infested water sources during human water contact. Remote sensing data have been utilized for spatially explicit risk profiling of schistosomiasis. Since schistosomiasis risk profiling based on remote sensing data inherits a conceptual drawback if school-based disease prevalence data are directly related to the remote sensing measurements extracted at the location of the school, because the disease transmission usually does not exactly occur at the school, we took the local environment around the schools into account by explicitly linking ecologically relevant environmental information of potential disease transmission sites to survey measurements of disease prevalence. Our models were validated at two sites with different landscapes in Côte d'Ivoire using high- and moderate-resolution remote sensing data based on random forest and partial least squares regression. We found that the ecologically relevant modelling approach explained up to 70% of the variation in Schistosoma infection prevalence and performed better compared to a purely pixel-based modelling approach. Furthermore, our study showed that model performance increased as a function of enlarging the school catchment area, confirming the hypothesis that suitable environments for schistosomiasis transmission rarely occur at the location of survey measurements.

  4. Variance analysis for model updating with a finite element based subspace fitting approach

    Science.gov (United States)

    Gautier, Guillaume; Mevel, Laurent; Mencik, Jean-Mathieu; Serra, Roger; Döhler, Michael

    2017-07-01

    Recently, a subspace fitting approach has been proposed for vibration-based finite element model updating. The approach makes use of subspace-based system identification, where the extended observability matrix is estimated from vibration measurements. Finite element model updating is performed by correlating the model-based observability matrix with the estimated one, by using a single set of experimental data. Hence, the updated finite element model only reflects this single test case. However, estimates from vibration measurements are inherently exposed to uncertainty due to unknown excitation, measurement noise and finite data length. In this paper, a covariance estimation procedure for the updated model parameters is proposed, which propagates the data-related covariance to the updated model parameters by considering a first-order sensitivity analysis. In particular, this propagation is performed through each iteration step of the updating minimization problem, by taking into account the covariance between the updated parameters and the data-related quantities. Simulated vibration signals are used to demonstrate the accuracy and practicability of the derived expressions. Furthermore, an application is shown on experimental data of a beam.
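
    The covariance propagation described here follows the standard first-order (delta-method) pattern: if the updated parameters depend on data-related quantities through a sensitivity matrix J, then cov(theta) is approximately J cov(data) J^T. The sketch below is a generic illustration with placeholder numbers, not the paper's subspace-fitting Jacobians.

        import numpy as np

        # First-order covariance propagation: theta = f(d), J = df/dd, so
        # cov(theta) ~= J @ cov(d) @ J.T.  All matrices are placeholders.
        rng = np.random.default_rng(1)
        n_data, n_params = 6, 2

        J = rng.normal(size=(n_params, n_data))   # sensitivity of updated parameters
                                                  # to the data-related quantities
        cov_d = 0.01 * np.eye(n_data)             # data-related covariance estimate

        cov_theta = J @ cov_d @ J.T               # propagated parameter covariance
        std_theta = np.sqrt(np.diag(cov_theta))   # standard deviations of the updates
        print(std_theta)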

  5. A compressive sensing-based approach for Preisach hysteresis model identification

    Science.gov (United States)

    Zhang, Jun; Torres, David; Sepúlveda, Nelson; Tan, Xiaobo

    2016-07-01

    The Preisach hysteresis model has been adopted extensively in magnetic and smart material-based systems. Fidelity of the model hinges on accurate identification of the Preisach density function. Existing work on the identification of the density function usually involves applying an input that provides sufficient excitation and measuring a large set of output data. In this paper, we propose a novel compressive sensing-based approach for Preisach model identification that requires fewer measurements. The proposed approach adopts the discrete cosine transform of the output data to obtain a sparse vector, where the order of all the output data is assumed to be known. The model parameters can be efficiently reconstructed using the proposed scheme. For comparison purposes, a constrained least-squares scheme using the same number of measurements is also considered. The root-mean-square error is adopted to examine the model identification performance. The proposed identification approach is shown to have better performance than the least-squares scheme through both simulation and experiments involving a vanadium dioxide (VO2)-integrated microactuator. This work was supported by the National Science Foundation (CMMI 1301243).
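
    The record describes recovering a quantity that is sparse after a discrete cosine transform from a reduced number of measurements. A generic sketch of that idea (not the authors' Preisach density identification) using an l1-regularized solver from scikit-learn; the signal, sparsity pattern and regularization weight are invented:

        import numpy as np
        from scipy.fft import idct
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(2)
        n, m = 128, 40                   # full output length, number of measurements

        # Synthetic signal that is sparse in the DCT domain (a stand-in for the
        # actual model output; the real identification problem is not reproduced).
        s = np.zeros(n); s[[3, 17, 40]] = [5.0, -2.0, 1.0]
        Psi = idct(np.eye(n), norm="ortho", axis=0)   # inverse-DCT synthesis basis
        y_full = Psi @ s

        # Random subsampling plays the role of taking fewer measurements.
        idx = rng.choice(n, size=m, replace=False)
        A = Psi[idx, :]                               # sensing matrix
        y = y_full[idx]

        # l1-regularized recovery of the sparse coefficient vector.
        lasso = Lasso(alpha=1e-3, max_iter=50000, fit_intercept=False)
        lasso.fit(A, y)
        x_rec = Psi @ lasso.coef_                     # reconstructed full output
        print(np.max(np.abs(x_rec - y_full)))         # reconstruction error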

  6. A Game-Based Approach for PCTL* Stochastic Model Checking with Evidence

    Institute of Scientific and Technical Information of China (English)

    Yang Liu; Xuan-Dong Li; Yan Ma

    2016-01-01

    Stochastic model checking is a recent extension and generalization of classical model checking, which focuses on quantitatively checking the temporal properties of a system model. PCTL* is one of the important quantitative property specification languages; it is strictly more expressive than either PCTL (probabilistic computation tree logic) or LTL (linear temporal logic) with probability bounds. At present, the PCTL* stochastic model checking algorithm is very complicated and cannot provide any relevant explanation of why a formula does or does not hold in a given model. To deal with this problem, an intuitive and succinct approach for PCTL* stochastic model checking with evidence is put forward in this paper, which includes: presenting the game semantics for PCTL* in release-PNF (release-positive normal form), defining the PCTL* stochastic model checking game, using strategy solving in the game to achieve PCTL* stochastic model checking, and refining the winning strategy as the evidence to certify the stochastic model checking result. The soundness and completeness of game-based PCTL* stochastic model checking are proved, and its complexity matches the known lower and upper bounds. The game-based PCTL* stochastic model checking algorithm is implemented in a visual prototype tool, and its feasibility is demonstrated by an illustrative example.

  7. A theoretical approach to room acoustic simulations based on a radiative transfer model

    DEFF Research Database (Denmark)

    Ruiz-Navarro, Juan-Miguel; Jacobsen, Finn; Escolano, José

    2010-01-01

    A theoretical approach to room acoustic simulations based on a radiative transfer model is developed by adapting the classical radiative transfer theory from optics to acoustics. The proposed acoustic radiative transfer model expands classical geometrical room acoustic modeling algorithms by incorporating a propagation medium that absorbs and scatters radiation, handling both diffuse and non-diffuse reflections on boundaries and objects in the room. The main scope of this model is to provide a proper foundation for a wide number of room acoustic simulation models, in order to establish and unify their principles. It is shown that this room acoustic modeling technique establishes the basis of two recently proposed algorithms, the acoustic diffusion equation and the room acoustic rendering equation. Both methods are derived in detail using an analytical approximation and a simplified integral equation...

  8. Analysis of factors affecting satisfaction level on problem based learning approach using structural equation modeling

    Science.gov (United States)

    Hussain, Nur Farahin Mee; Zahid, Zalina

    2014-12-01

    Nowadays, in the job market, graduates are expected not only to perform well academically but also to excel in soft skills. Problem-Based Learning (PBL) has a number of distinct advantages as a learning method, as it can deliver graduates who will be highly prized by industry. This study attempts to determine the satisfaction level of engineering students with the PBL approach and to evaluate its determinant factors. Structural Equation Modeling (SEM) was used to investigate how the factors Good Teaching Scale, Clear Goals, Student Assessment and Levels of Workload affected student satisfaction with the PBL approach.

  9. Orion Flight Test 1 Architecture: Observed Benefits of a Model Based Engineering Approach

    Science.gov (United States)

    Simpson, Kimberly A.; Sindiy, Oleg V.; McVittie, Thomas I.

    2012-01-01

    This paper details how a NASA-led team is using a model-based systems engineering approach to capture, analyze and communicate the end-to-end information system architecture supporting the first unmanned orbital flight of the Orion Multi-Purpose Crew Exploration Vehicle. Along with a brief overview of the approach and its products, the paper focuses on the observed program-level benefits, challenges, and lessons learned, all of which may be applied to improve systems engineering tasks facing characteristically similar challenges.

  10. Radiation induced dissolution of UO2 based nuclear fuel - A critical review of predictive modelling approaches

    Science.gov (United States)

    Eriksen, Trygve E.; Shoesmith, David W.; Jonsson, Mats

    2012-01-01

    Radiation induced dissolution of uranium dioxide (UO2) nuclear fuel and the consequent release of radionuclides to intruding groundwater are key processes in the safety analysis of future deep geological repositories for spent nuclear fuel. For several decades, these processes have been studied experimentally using both spent fuel and various types of simulated spent fuel. The latter have been employed since it is difficult to draw mechanistic conclusions from experiments on real spent nuclear fuel. Several predictive modelling approaches have been developed over the last two decades, largely based on experimental observations. In this work we have performed a critical review of the modelling approaches developed from the large body of chemical and electrochemical experimental data. The main conclusions are: (1) the use of measured interfacial rate constants gives results in generally good agreement with experiments, compared to simulations where homogeneous rate constants are used; (2) the use of spatial dose rate distributions is particularly important when simulating the behaviour over short time periods; and (3) the steady-state approach (the rate of oxidant consumption is equal to the rate of oxidant production) provides a simple but fairly accurate alternative, although errors in the reaction mechanism and in the kinetic parameters used may not be revealed by simple benchmarking. It is essential to use experimentally determined rate constants and verified reaction mechanisms, irrespective of whether the approach is chemical or electrochemical.

  11. Process-based distributed modeling approach for analysis of sediment dynamics in a river basin

    Directory of Open Access Journals (Sweden)

    M. A. Kabir

    2010-08-01

    Full Text Available Modeling of sediment dynamics for developing best management practices for reducing soil erosion and for sediment control has become essential for sustainable management of watersheds. Precise estimation of sediment dynamics is very important, since soils are a major component of enormous environmental processes and sediment transport extensively controls lake and river pollution. Different hydrological processes govern sediment dynamics in a river basin and are highly variable in spatial and temporal scales. This paper presents a process-based distributed modeling approach for analysis of sediment dynamics at river basin scale by integrating sediment processes (soil erosion, sediment transport and deposition) with an existing process-based distributed hydrological model. In this modeling approach, the watershed is divided into an array of homogeneous grids to capture the catchment spatial heterogeneity. Hillslope and river sediment dynamic processes have been modeled separately and linked to each other consistently. Water flow and sediment transport at different surface grids and river nodes are modeled using a one-dimensional kinematic wave approximation of the Saint-Venant equations. The mechanics of sediment dynamics are integrated into the model using representative physical equations after a comprehensive review. The model has been tested on river basins in two different hydro-climatic areas, the Abukuma River Basin, Japan, and the Latrobe River Basin, Australia. Sediment transport and deposition are modeled using the Govers transport capacity equation. All spatial datasets, such as the Digital Elevation Model (DEM), land use and soil classification data, have been prepared using raster Geographic Information System (GIS) tools. The results of relevant statistical checks (Nash-Sutcliffe efficiency and R-squared value) indicate that the model simulates basin hydrology and its associated sediment dynamics reasonably well.
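
    Water routing here relies on a one-dimensional kinematic wave approximation of the Saint-Venant equations. A minimal sketch of that scheme, with a simple upwind discretization, an invented rating Q = alpha * A**m and illustrative channel and inflow parameters rather than the paper's full grid network:

        import numpy as np

        # 1-D kinematic wave routing: dA/dt + dQ/dx = q, with Q = alpha * A**m,
        # discretized with a first-order upwind scheme.  Parameters are
        # illustrative, not taken from the paper.
        nx, nt = 100, 2000
        dx, dt = 50.0, 1.0              # m, s  (dt chosen small for stability)
        alpha, m_exp = 1.0, 5.0 / 3.0   # Manning-type rating parameters
        q_lat = 1e-4                    # lateral inflow per unit length (m^2/s)

        A = np.full(nx, 1e-3)           # wetted cross-section area (m^2)
        for _ in range(nt):
            Q = alpha * A ** m_exp
            dQdx = np.empty_like(Q)
            dQdx[1:] = (Q[1:] - Q[:-1]) / dx   # upwind (flow to the right)
            dQdx[0] = Q[0] / dx                # upstream boundary: no inflow
            A = np.maximum(A + dt * (q_lat - dQdx), 0.0)

        print(alpha * A[-1] ** m_exp)   # outlet discharge after nt steps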

  12. Process-based distributed modeling approach for analysis of sediment dynamics in a river basin

    Directory of Open Access Journals (Sweden)

    M. A. Kabir

    2011-04-01

    Full Text Available Modeling of sediment dynamics for developing best management practices for reducing soil erosion and for sediment control has become essential for sustainable management of watersheds. Precise estimation of sediment dynamics is very important, since soils are a major component of enormous environmental processes and sediment transport extensively controls lake and river pollution. Different hydrological processes govern sediment dynamics in a river basin and are highly variable in spatial and temporal scales. This paper presents a process-based distributed modeling approach for analysis of sediment dynamics at river basin scale by integrating sediment processes (soil erosion, sediment transport and deposition) with an existing process-based distributed hydrological model. In this modeling approach, the watershed is divided into an array of homogeneous grids to capture the catchment spatial heterogeneity. Hillslope and river sediment dynamic processes have been modeled separately and linked to each other consistently. Water flow and sediment transport at different land grids and river nodes are modeled using a one-dimensional kinematic wave approximation of the Saint-Venant equations. The mechanics of sediment dynamics are integrated into the model using representative physical equations after a comprehensive review. The model has been tested on river basins in two different hydro-climatic areas, the Abukuma River Basin, Japan, and the Latrobe River Basin, Australia. Sediment transport and deposition are modeled using the Govers transport capacity equation. All spatial datasets, such as the Digital Elevation Model (DEM), land use and soil classification data, have been prepared using raster Geographic Information System (GIS) tools. The results of relevant statistical checks (Nash-Sutcliffe efficiency and R-squared value) indicate that the model simulates basin hydrology and its associated sediment dynamics reasonably well.

  13. Modelling and simulation of complex systems: an approach based on multi-level agents

    CERN Document Server

    Fougères, Alain-Jérôme

    2012-01-01

    A complex system is made up of many components with many interactions. The design of systems such as simulation systems, cooperative systems or assistance systems therefore requires very accurate modelling of the interactional and communicational levels. The agent-based approach provides an adapted abstraction level for this problem. After studying the organizational context and communicative capacities of agent-based systems (used to simulate the reorganization of a flexible manufacturing system, to regulate an urban transport system, and to simulate an epidemic detection system), our treatment of the interactional level was inspired by human-machine interface models, especially those in "cognitive engineering". To provide a general framework for agent-based complex systems modelling, we then proposed a scale of four behaviours that agents may adopt in their complex systems (reactive, routine, cognitive, and collective). To complete the description of multi-level agent models, which is the focus of this paper, we illustrate ...

  14. Application of Transfer Matrix Approach to Modeling and Decentralized Control of Lattice-Based Structures

    Science.gov (United States)

    Cramer, Nick; Swei, Sean Shan-Min; Cheung, Kenny; Teodorescu, Mircea

    2015-01-01

    This paper presents the modeling and control of an aerostructure developed from lattice-based cellular materials/components. The proposed aerostructure concept leverages a building-block strategy for lattice-based components, which provides great adaptability to varying flight scenarios, essential for in-flight wing shaping control. A decentralized structural control design is proposed that utilizes the discrete-time lumped mass transfer matrix method (DT-LM-TMM). The objective is to develop an effective reduced order model through DT-LM-TMM that can be used to design a decentralized controller for the structural control of a wing. The approach developed in this paper shows that, as far as the performance of the overall structural system is concerned, the reduced order model can be as effective as the full order model in designing an optimal stabilizing controller.
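
    The discrete-time variant used in the paper is not reproduced here, but the underlying transfer matrix idea can be illustrated with the classical continuous-time lumped-mass version for a fixed-free spring-mass chain; stiffness, mass and cell count below are arbitrary.

        import numpy as np

        # Classical lumped-mass transfer matrix method: the state vector is
        # [displacement, internal force]; natural frequencies are the roots of
        # the boundary-condition residual for the assembled transfer matrix.
        k, m, n = 1e4, 1.0, 5            # spring stiffness, mass, number of cells

        def boundary_residual(omega):
            U = np.eye(2)
            for _ in range(n):
                field = np.array([[1.0, 1.0 / k], [0.0, 1.0]])        # spring
                point = np.array([[1.0, 0.0], [-m * omega**2, 1.0]])  # mass
                U = point @ field @ U
            # Fixed-free chain: x = 0 at the wall, force = 0 at the free end,
            # which requires U[1, 1] = 0.
            return U[1, 1]

        omegas = np.linspace(1.0, 300.0, 30000)
        res = np.array([boundary_residual(w) for w in omegas])
        roots = omegas[:-1][np.sign(res[:-1]) != np.sign(res[1:])]   # sign changes
        print(roots)                     # approximate natural frequencies (rad/s)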

  15. A model for strong interactions at high energy based on the CGC/saturation approach

    Energy Technology Data Exchange (ETDEWEB)

    Gotsman, E.; Maor, U. [Tel Aviv University, Department of Particle Physics, School of Physics and Astronomy, Raymond and Beverly Sackler Faculty of Exact Science, Tel Aviv (Israel); Levin, E. [Tel Aviv University, Department of Particle Physics, School of Physics and Astronomy, Raymond and Beverly Sackler Faculty of Exact Science, Tel Aviv (Israel); Universidad Tecnica Federico Santa Maria and Centro Cientifico-Tecnologico de Valparaiso, Departamento de Fisica, Valparaiso (Chile)

    2015-01-01

    We present our first attempt to develop a model for soft interactions at high energy, based on the BFKL Pomeron and the CGC/saturation approach. We construct an eikonal-type model, whose opacity is determined by the exchange of the dressed BFKL Pomeron. The Green function of the Pomeron is calculated in the framework of the CGC/saturation approach. Using five parameters we achieve a reasonable description of the experimental data at high energies (W ≥ 0.546 TeV) with overall χ²/d.o.f. ∼ 2. The model results in different behavior for the single- and double-diffraction cross sections at high energies. The single-diffraction cross section reaches a saturated value (about 10 mb) at high energies, while the double-diffraction cross section continues growing slowly. (orig.)

  16. A model for strong interactions at high energy based on the CGC/saturation approach

    CERN Document Server

    Gotsman, E; Maor, U

    2014-01-01

    We present our first attempt to develop a model for soft interactions at high energy, based on the BFKL Pomeron and the CGC/saturation approach. We construct an eikonal-type model, whose opacity is determined by the exchange of the dressed BFKL Pomeron. The Green's function of the Pomeron is calculated in the framework of the CGC/saturation approach. Using five parameters we achieve a good description of the experimental data at high energies (W ≥ 0.546 TeV). The model results in different behaviour for the single- and double-diffraction cross sections at high energies. The single-diffraction cross section reaches a saturated value (about 10 mb) at high energies, while the double-diffraction cross section continues growing slowly.

  17. Comparison of modeling approaches to prioritize chemicals based on estimates of exposure and exposure potential

    Science.gov (United States)

    Mitchell, Jade; Arnot, Jon A.; Jolliet, Olivier; Georgopoulos, Panos G.; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A.; Vallero, Daniel A.

    2014-01-01

    While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use-scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances such as high throughput data gathering, high performance computational capabilities, and predictive chemical inherency methodology make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA's need to develop novel approaches and tools for rapidly prioritizing chemicals, a "Challenge" was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA's effort to develop an approach comparable to other international efforts. A common set of chemicals was prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e. through air, water or sediment), appears to be an important determinant of the level of agreement between modeling approaches. PMID:23707726

  18. An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Deepak Bhatt

    2012-07-01

    Full Text Available Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of civilian land vehicle navigation systems by offering a low-cost solution. However, accurate modeling of the MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors like biases, drift, and noise, which are negligible for higher grade units. Conventional techniques utilizing the Gauss-Markov model and neural network methods have previously been used to model the errors. However, the Gauss-Markov model works unsatisfactorily in the case of MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift utilizing a Neural Network (NN) is time consuming, thereby affecting its real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM)-based error model. Unlike NN, SVMs do not suffer from local minima or over-fitting problems and deliver a reliable global solution. Experimental results proved that the proposed SVM approach reduced the noise standard deviation by 10-35% for gyroscopes and 61-76% for accelerometers. Further, positional error drifts under static conditions improved by 41% and 80% in comparison to the NN and GM approaches.
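
    A rough sketch of the kind of support vector regression error model the record describes, using scikit-learn's NuSVR on a synthetic drift sequence; the window length, kernel and hyper-parameters are illustrative, not the paper's tuned values.

        import numpy as np
        from sklearn.svm import NuSVR
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        # Fit Nu-SVR to a synthetic MEMS gyro drift sequence: previous drift
        # samples form the features, the next sample is the target.
        rng = np.random.default_rng(3)
        t = np.arange(2000) * 0.01
        drift = 0.05 * t + 0.2 * np.sin(0.5 * t) + 0.02 * rng.standard_normal(t.size)

        win = 10
        X = np.lib.stride_tricks.sliding_window_view(drift[:-1], win)
        y = drift[win:]

        model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
        model.fit(X[:1500], y[:1500])
        pred = model.predict(X[1500:])
        print(np.std(pred - y[1500:]))   # residual noise after error modelling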

  19. MATLAB/Simulink Based Study of Different Approaches Using Mathematical Model of Differential Equations

    National Research Council Canada - National Science Library

    Vijay Nehra

    2014-01-01

    ... The present paper addresses different approaches used to derive mathematical models of first- and second-order systems, developing MATLAB script implementations and building corresponding Simulink models...
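
    The record works in MATLAB/Simulink; an equivalent step-response study of a standard second-order model can be sketched in Python with scipy.signal, where the natural frequency and damping ratio below are arbitrary.

        import numpy as np
        from scipy import signal

        # Step response of a standard second-order system
        #   G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2),
        # mirroring the kind of first/second-order models built in Simulink.
        wn, zeta = 2.0, 0.3                 # natural frequency, damping ratio
        sys = signal.TransferFunction([wn**2], [1.0, 2.0 * zeta * wn, wn**2])

        t, y = signal.step(sys, T=np.linspace(0, 10, 500))
        overshoot = (y.max() - 1.0) * 100.0  # percent overshoot of the unit step
        print(f"peak overshoot ~ {overshoot:.1f} %")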

  20. Thermal modelling of the high temperature treatment of wood based on Luikov's approach

    Energy Technology Data Exchange (ETDEWEB)

    Younsi, R.; Kocaefe, D.; Poncsak, S.; Kocaefe, Y. [University of Quebec, Chicoutimi (Canada). Dept. of Applied Sciences

    2005-07-01

    A 3D, unsteady-state mathematical model was used to simulate the behaviour of wood during high temperature treatment. The model is based on Luikov's approach and solves a set of coupled heat and mass transfer equations. Using the model, the temperature and moisture content profiles of wood were predicted as a function of time for different heating rates. Parallel to the modelling study, an experimental study was carried out using small birch samples. The samples were subjected to high temperature treatment in a thermogravimetric system under different operating conditions. The experimental results and the model predictions were found to be in good agreement. The results show that the distributions of temperature and moisture content are influenced appreciably by the heating rate and the initial moisture content. (author)
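
    Luikov's formulation couples heat and moisture transport through cross-diffusion terms. As a rough illustration of that structure (not the paper's 3D model or its wood properties), the following sketch integrates a 1-D pair of coupled diffusion equations with an explicit finite-difference scheme; all coefficients and boundary values are invented.

        import numpy as np

        # Explicit 1-D finite differences for Luikov-type coupled transfer:
        #   dT/dt = a11*Txx + a12*Mxx,   dM/dt = a21*Txx + a22*Mxx.
        nx, nt = 51, 20000
        dx, dt = 1e-3, 1e-3                       # m, s (chosen for stability)
        a11, a12, a21, a22 = 1e-7, 2e-9, 5e-9, 5e-8

        T = np.full(nx, 20.0); T[0] = T[-1] = 200.0   # hot surfaces (deg C)
        M = np.full(nx, 0.5)                          # initial moisture content

        def lap(u):
            out = np.zeros_like(u)
            out[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
            return out

        for _ in range(nt):
            lT, lM = lap(T), lap(M)
            T[1:-1] += dt * (a11 * lT + a12 * lM)[1:-1]
            M[1:-1] += dt * (a21 * lT + a22 * lM)[1:-1]
            M[0] = M[-1] = 0.1    # surface in equilibrium with hot, dry air

        print(T[nx // 2], M[nx // 2])   # centre temperature and moisture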

  1. Application of meandering centreline migration modelling and object-based approach of Long Nab member

    Science.gov (United States)

    Saadi, Saad

    2017-04-01

    Characterizing the complexity and heterogeneity of the geometries and deposits in meandering river systems is an important concern for the reservoir modelling of fluvial environments. Re-examination of the Long Nab member in the Scalby formation of the Ravenscar Group (Yorkshire, UK), integrating digital outcrop data and forward modelling approaches, will lead to a geologically realistic numerical model of the meandering river geometry. The methodology is based on extracting geostatistics from modern analogues, i.e. meandering rivers that exemplify both the confined and non-confined meandering point-bar deposits and the morphodynamics of the Long Nab member. The parameters derived from the modern systems (i.e. channel width, amplitude, radius of curvature, sinuosity, wavelength, channel length and migration rate) are used as a statistical control for the forward simulation and the resulting object-oriented channel models. The statistical data derived from the modern analogues are multi-dimensional in nature, making analysis difficult. We apply data mining techniques such as parallel coordinates to investigate and identify the important relationships within the modern analogue data, which can then be used to drive the development of, and serve as input to, the forward model. This work will increase our understanding of meandering river morphodynamics, planform architecture and the stratigraphic signature of various fluvial deposits and features. We will then use these forward-modelled channel objects to build reservoir models, and compare the behaviour of the forward-modelled channels with traditional object modelling in hydrocarbon flow simulations.

  2. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    Science.gov (United States)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy for coping with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it has been suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Yet adaptive behaviour towards flood risk reduction, and the interaction between governments, insurers, and individuals, has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model, allowing a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.

  3. On the impact of information delay on location-based relaying: a markov modeling approach

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Olsen, Rasmus Løvenstein; Madsen, Tatiana Kozlova;

    2012-01-01

    For centralized selection of communication relays, the necessary decision information needs to be collected from the mobile nodes by the access point (the centralized decision point). In mobile scenarios, the required information collection and forwarding delays will affect the reliability of the collected information and hence will influence the performance of the relay selection method. This paper analyzes this influence in the decision process for the example of a mobile location-based relay selection approach using a continuous time Markov chain model. The model is used to obtain optimal relay...
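
    A toy continuous-time Markov chain makes the role of information delay concrete: the older the collected state information, the less it says about the current state. The generator matrix below is invented, not the paper's relay model.

        import numpy as np
        from scipy.linalg import expm

        # Toy CTMC for a node's link quality (good / medium / bad).  Row i of Q
        # holds the transition rates out of state i; P(t) = expm(Q t) tells how
        # trustworthy t-seconds-old state information still is.
        Q = np.array([[-0.2,  0.2,  0.0],
                      [ 0.1, -0.3,  0.2],
                      [ 0.0,  0.4, -0.4]])

        for delay in (0.1, 1.0, 10.0):
            P = expm(Q * delay)
            # Probability that a node reported "good" is still good after `delay`.
            print(delay, P[0, 0])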

  4. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    Science.gov (United States)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
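
    The core residual-monitoring idea can be sketched in a few lines; the signals and the k-sigma threshold below are synthetic stand-ins for the engine measurements and the piecewise linear engine model.

        import numpy as np

        # Residual-based anomaly detection: compare sensed outputs against
        # model-predicted outputs and flag samples whose residual exceeds a
        # k-sigma band learned from nominal data.
        rng = np.random.default_rng(4)
        n = 1000
        model_pred = np.sin(np.linspace(0, 20, n))           # model-predicted output
        sensed = model_pred + 0.05 * rng.standard_normal(n)  # sensed output
        sensed[700:720] += 0.5                               # injected fault

        resid = sensed - model_pred
        sigma = resid[:500].std()                            # nominal residual spread
        k = 4.0
        anomalies = np.flatnonzero(np.abs(resid) > k * sigma)
        print(anomalies[:5], anomalies.size)                 # fault region is flagged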

  5. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    Full Text Available Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, therefore regulating steady-state and feedback behaviour.

  6. An Array-based Approach to Modelling Production Management System Architectures

    DEFF Research Database (Denmark)

    Falster, Peter

    2000-01-01

    Several proposals for a conceptual framework for production management architecture are briefly reviewed. It is suggested that an array-based approach and a classic engineering-economic model are used as tools for a conceptualisation of ideas. Traditional architectural design is usually based on geometrical thinking. Accordingly, elements from measurement and array theory are introduced, but in a more abstract way than traditionally connected with 3D geometry. The paper concludes that a small set of concepts, like products, resources, activities, events, stages, etc., can be synthesized and analogies...

  7. A model-based approach to associate complexity and robustness in engineering systems

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; D. Frey, Daniel; Howard, Thomas J.

    2017-01-01

    Ever increasing functionality and complexity of products and systems challenge development companies in achieving high and consistent quality. A model-based approach is used to investigate the relationship between system complexity and system robustness. The measure for complexity is based ... -is-best requirements, the robustness is most affected by the level of contradiction between coupled functional requirements (p = 1.4e−36). In practice, the results imply that if the main influencing factors for each function in a system are known in the concept phase, an evaluation of the contradiction level can...

  8. Toward a Model-Based Approach to Flight System Fault Protection

    Science.gov (United States)

    Day, John; Murray, Alex; Meakin, Peter

    2012-01-01

    Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system systems engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP-related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated for these analyses effectively creates separate models that are only loosely related to the system being designed. Developing approaches that enable modeling of FP concerns in the same model as the system hardware and software design enables the establishment of formal relationships, with great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in flight software (FSW) engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.

  9. Evaluation of Creep-Fatigue Damage Based on Simplified Model Test Approach

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yanli [ORNL; Li, Tianlei [ORNL; Sham, Sam [ORNL; Jetter, Robert I [Consultant

    2013-01-01

    Current methods used in the ASME Code, Subsection NH for the evaluation of creep-fatigue damage are based on separating elevated-temperature cyclic damage into two parts, creep damage and fatigue damage. This presents difficulties both in the evaluation of test data and in the determination of cyclic damage in design. To avoid these difficulties, an alternative approach was identified, called the Simplified Model Test (SMT) approach, based on the use of creep-fatigue hold-time test data from test specimens with elastic follow-up conservatively designed to bound the response of general structural components of interest. A key feature of the methodology is the use of the results of elastic analysis directly in design evaluation, similar to current methods in the ASME Code, Subsection NB. Although originally developed for materials currently included in Subsection NH, recent interest in the application of Alloy 617 for components operating at very high temperatures has renewed interest in the SMT approach, because it provides an alternative to the proposed restriction on the use of current Subsection NH simplified methods at very high temperatures. A comprehensive review and assessment of five representative simplified methods for creep-fatigue damage evaluation is presented by Asayama [1]. In this review the SMT methodology was identified as the best long-term approach, but the need for test data precluded its near-term implementation. Asayama and Jetter [2] summarize the more comprehensive report by Asayama [1], with a summary of the SMT approach presented by Jetter [3].

  10. A Petri net-based approach for supporting aspect-oriented modeling

    Institute of Scientific and Technical Information of China (English)

    Lianwei GUAN; Xingyu LI; Hao HU; Jian LU

    2008-01-01

    The concept of aspect-orientation allows for modularizing crosscutting concerns as aspect modules. Aspect-orientation originally emerged at the programming level, and has now stretched over other development phases. Among them, aspect-oriented modeling (AOM) is a hot topic, and there are many approaches supporting it. The Petri net is a good formalism which can provide the foundations for modeling software and simulating its execution, but it fails to resolve the problem of crosscutting concerns to support AOM. So, this paper presents an approach which extends the Petri net so as to support AOM. In this paper, the basic functions of the system are modeled as a base net using Petri nets, and the crosscutting concerns are modeled as aspect nets. In order to analyze the whole system, a weaving mechanism is proposed to compose the aspect nets and the base net together. Aspect-aspect conflicts and conflict relations may exist among the aspect nets matching a shared join point; this paper proposes solutions to resolve them. The Object Petri net, which is an extension of the traditional Petri net, is also extended here so as to support aspect-oriented modeling.
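
    For readers unfamiliar with the base formalism, a minimal place/transition Petri net with marking-based firing might look as follows; the net structure and names are invented, and the aspect-weaving mechanism itself is not reproduced.

        # Minimal place/transition Petri net with marking-based firing.
        class PetriNet:
            def __init__(self, marking):
                self.marking = dict(marking)       # place -> token count
                self.transitions = {}              # name -> (inputs, outputs)

            def add_transition(self, name, inputs, outputs):
                self.transitions[name] = (inputs, outputs)

            def enabled(self, name):
                inputs, _ = self.transitions[name]
                return all(self.marking.get(p, 0) >= 1 for p in inputs)

            def fire(self, name):
                if not self.enabled(name):
                    raise RuntimeError(f"{name} is not enabled")
                inputs, outputs = self.transitions[name]
                for p in inputs:
                    self.marking[p] -= 1
                for p in outputs:
                    self.marking[p] = self.marking.get(p, 0) + 1

        net = PetriNet({"request": 1, "idle": 1})
        net.add_transition("handle", inputs=["request", "idle"], outputs=["busy"])
        net.add_transition("finish", inputs=["busy"], outputs=["idle"])
        net.fire("handle"); net.fire("finish")
        print(net.marking)   # request consumed, server back to idle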

  11. Bounded Rational Managers Struggle with Talent Management - An Agent-based Modelling Approach

    DEFF Research Database (Denmark)

    Adamsen, Billy; Thomsen, Svend Erik

    This study applies an agent-based modeling approach to explore some aspects of an important managerial task: finding and cultivating talented individuals capable of creating value for their organization at some future state. Given that the term talent in talent management is an empty signifier ... The considered variables were: (a) decision makers' attributes (capabilities and degree of bounded rationality), (b) characteristics of the sample from which individuals are selected (the level of capabilities and the dispersion thereof), (c) path-dependency of the organization's success, and (d) the decision ... of past success will provide failure rather than success in the future (Capelli, 2008). Finally, we model the talent selection process either as a collective decision-making process made by a group of managers or as a decision process made by a single manager. It is argued that agent-based modeling is a useful ...

  12. Analyzing energy consumption of wireless networks. A model-based approach

    Energy Technology Data Exchange (ETDEWEB)

    Yue, Haidi

    2013-03-04

    During the last decades, wireless networking has continuously been a hot topic both in academia and in industry. Many different wireless networks have been introduced, like wireless local area networks, wireless personal networks, wireless ad hoc networks, and wireless sensor networks. For these networks to have long-term usability, the power consumed by the wireless devices in each of them needs to be managed efficiently. Hence, a lot of effort has been devoted to the analysis and improvement of energy efficiency, either for specific network layers (protocols) or for new cross-layer designs. In this thesis, we apply a model-based approach to the analysis of the energy consumption of different wireless protocols. The protocols under consideration are: one leader election protocol, one routing protocol, and two medium access control protocols. By model-based approach we mean that all four protocols are formalized as formal models, more precisely, as discrete-time Markov chains (DTMCs), Markov decision processes (MDPs), or stochastic timed automata (STA). The first two kinds of models, DTMCs and MDPs, are modelled in PRISM, a prominent model checker for probabilistic model checking, and analyzed with model checking techniques. Model checking belongs to the family of formal methods. It exhaustively discovers all possible (reachable) states of a model and checks whether the model meets a given specification. Specifications are system properties that we want to study, usually expressed in some logic, for instance probabilistic computation tree logic (PCTL). However, while model checking relies on rigorous mathematical foundations and automatically explores the entire state space of a model, its applicability is also limited by the so-called state space explosion problem -- even systems of moderate size often yield models with an exponentially larger state space that thwarts their analysis. Hence for the STA models in this thesis, since there...
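
    As a small taste of the kind of quantitative query such models support, the sketch below computes a reachability probability on a toy DTMC by solving a linear system; it is not the thesis' PRISM models or its PCTL machinery, and the chain itself is invented.

        import numpy as np

        # Probability of eventually reaching a target state in a DTMC: for the
        # transient states it satisfies p = Q @ p + b, solved here directly.
        # States: 0, 1 transient; 2 target (absorbing); 3 failure (absorbing).
        P = np.array([[0.0, 0.7, 0.2, 0.1],
                      [0.5, 0.0, 0.4, 0.1],
                      [0.0, 0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])

        transient = [0, 1]
        Q = P[np.ix_(transient, transient)]     # transient-to-transient block
        b = P[np.ix_(transient, [2])].ravel()   # one-step jumps into the target

        p = np.linalg.solve(np.eye(len(transient)) - Q, b)
        print(p)   # p[i] = Pr(eventually reach state 2 | start in transient i)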

  13. A novel approach of testability modeling and analysis for PHM systems based on failure evolution mechanism

    Institute of Scientific and Technical Information of China (English)

    Tan Xiaodong; Qiu Jing; Liu Guanjun; Lv Kehong; Yang Shuming; Wang Chao

    2013-01-01

    Prognostics and health management (PHM) significantly improves system availability and reliability, and reduces the cost of system operations. Design for testability (DFT), developed concurrently with system design, is an important way to improve PHM capability. Testability modeling and analysis are the foundation of DFT. This paper proposes a novel approach to testability modeling and analysis based on failure evolution mechanisms. At the component level, the fault progression-related information of each unit under test (UUT) in a system is obtained by means of failure modes, evolution mechanisms, effects and criticality analysis (FMEMECA), and then the failure-symptom dependency can be generated. At the system level, the dynamic attributes of UUTs are assigned by using the bond graph methodology, and then the symptom-test dependency can be obtained by means of the functional flow method. Based on the failure-symptom and symptom-test dependencies, testability analysis for PHM systems can be realized. A shunt motor is used to verify the application of the approach proposed in this paper. Experimental results show that this approach can be applied to testability modeling and analysis for PHM systems very well, and the analysis results can provide a guide for engineers to design for testability in order to improve PHM performance.

  14. Microscopic and probabilistic approach to thermal steady state based on a dice and coin toy model

    Science.gov (United States)

    Onorato, Pasquale; Malgieri, Massimiliano; Moggio, Lorenzo; Oss, Stefano

    2017-07-01

    In this article we present an educational approach to thermal equilibrium which was tested on a group of 13 undergraduate students at the University of Trento. The approach is based on a stochastic toy model, in which bodies in thermal contact are represented by rows of squares on a cardboard table, which exchange coins placed on the squares based on the roll of two dice. The discussion of several physical principles, such as the exponential approach to equilibrium, the determination of the equilibrium temperature, and the interpretation of the equilibrium state as the most probable macrostate, proceeds through a continual comparison between the outcomes obtained with the toy model and the results of a real experiment on the thermal contact of two masses of water at different temperatures. At the end of the sequence, a re-analysis of the experimental results in view of both the Boltzmann and Clausius definitions of entropy reveals some limits of the toy model, but also allows for a critical discussion of the concepts of temperature and entropy. In order to provide the reader with a feeling of how the sequence was received by students, and how it helped them understand the topics introduced, we discuss some excerpts from their answers to a conceptual item given at the end of the sequence.
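
    The toy model is easy to reproduce in code. The following sketch simulates the coin exchanges and tracks the mean coin count of one body as it relaxes toward equipartition; the body sizes and coin counts are illustrative, not the classroom values.

        import numpy as np

        # Dice-and-coin toy model: two "bodies" are rows of squares holding
        # coins; each step a randomly chosen square gives one coin to another
        # randomly chosen square, so coins diffuse toward equipartition.
        rng = np.random.default_rng(5)
        n_a, n_b = 12, 24                     # squares in body A and body B
        coins = np.zeros(n_a + n_b, dtype=int)
        coins[:n_a] = 10                      # body A starts "hot"

        history = []
        for step in range(20000):
            src = rng.integers(coins.size)
            if coins[src] > 0:                # only occupied squares can give
                coins[src] -= 1
                coins[rng.integers(coins.size)] += 1
            if step % 500 == 0:
                history.append(coins[:n_a].mean())  # "temperature" of body A

        print(history[0], history[-1])        # decays toward the common mean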

  15. A self-consistent first-principle based approach to model carrier mobility in organic materials

    Energy Technology Data Exchange (ETDEWEB)

    Meded, Velimir; Friederich, Pascal; Symalla, Franz; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)

    2015-12-31

    Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the material's morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material-specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory, varying over several orders of magnitude in the mobility, without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow and explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight binding approaches. This is followed by a discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC.

  16. Agent-Based Approach for Modelling the Labour Migration from China to Russia

    Directory of Open Access Journals (Sweden)

    Valeriy Leonidovich Makarov

    2017-06-01

    Full Text Available The article describes the process of labour migration from China to Russia and shows its modelling using the agent-based approach. This approach allows us to simulate an artificial society in a computer program, taking into account the diversity of the individuals under consideration, as well as to model the set of laws and rules of conduct that make up the institutional environment in which the members of this society live. A brief review and analysis of the agent-based migration models presented in the foreign literature is given. The agent-based model of labour migration from China to Russia developed by the Central Economic Mathematical Institute of the Russian Academy of Sciences simulates human behaviour close to reality, based on the internal purposes that determine an agent's choice of territory as a place of residence. Therefore, in developing the agents of the model and their behaviour algorithms, as well as in organizing the environment in which they exist and interact, the main characteristics of the populations of the two neighbouring countries and their demographic processes have been considered. Using the model, two experiments have been conducted. The purpose of the first was to assess the effect of the depreciation of the rouble against the yuan on the overall indexes of labour migration, as well as on its structure. In the second experiment, the procedure by which agents search for information for migratory decision-making was changed: all generalizing information on the average salary by type of activity and skill level of employees, in both China and Russia, became available to all agents irrespective of their qualification level.

  17. A Markov Random Field Model-Based Approach to Natural Image Matting

    Institute of Scientific and Technical Information of China (English)

    Sheng-You Lin; Jiao-Ying Shi

    2007-01-01

    This paper proposes a Markov Random Field (MRF) model-based approach to natural image matting for complex scenes. After the trimap for matting is given manually, the unknown region is roughly segmented into several joint sub-regions. In each sub-region, we partition the colors of neighboring background or foreground pixels into several clusters in RGB color space and assign a matting label to each unknown pixel. All the labels are modelled as an MRF, and the matting problem is then formulated as a maximum a posteriori (MAP) estimation problem. Simulated annealing is used to find the optimal MAP estimate. Better results can be obtained under the same user interactions when images are complex. Results of natural image matting experiments performed on complex images using this approach are shown and compared in this paper.
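
    The MAP-by-simulated-annealing step can be illustrated on a generic binary MRF with a Potts smoothness prior; this shows the optimization pattern only, not the paper's matting energy or its color clustering, and all parameters are invented.

        import numpy as np

        # Simulated annealing for MAP labelling of a binary MRF on a grid:
        # energy = unary data terms + Potts smoothness between 4-neighbours.
        rng = np.random.default_rng(6)
        H, W, beta = 20, 20, 1.0
        unary = rng.normal(size=(H, W, 2))   # cost of label 0 / label 1 per pixel
        labels = rng.integers(0, 2, size=(H, W))

        def local_energy(lab, i, j, l):
            e = unary[i, j, l]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    e += beta * (l != lab[ni, nj])   # Potts smoothness term
            return e

        T = 3.0
        for sweep in range(200):
            for _ in range(H * W):
                i, j = rng.integers(H), rng.integers(W)
                old, new = labels[i, j], 1 - labels[i, j]
                dE = local_energy(labels, i, j, new) - local_energy(labels, i, j, old)
                if dE < 0 or rng.random() < np.exp(-dE / T):
                    labels[i, j] = new               # accept the flip
            T *= 0.98                                # cooling schedule

        print(labels.mean())                         # fraction of label-1 pixels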

  18. An evaluation of the hemiplegic subject based on the Bobath approach. Part I: The model.

    Science.gov (United States)

    Guarna, F; Corriveau, H; Chamberland, J; Arsenault, A B; Dutil, E; Drouin, G

    1988-01-01

    An evaluation based on the Bobath approach to treatment has been developed. A model substantiating this evaluation is presented. In this model, the three stages of motor recovery presented by Bobath have been extended to six, to better follow the progression of the patient. Six parameters have also been identified; these are the elements to be quantified so that the progress of the patient through the stages of motor recovery can be followed. Four of these parameters are borrowed from the Bobath approach: postural reaction, muscle tone, reflex activity and active movement. Two have been added: sensorium and pain. An accompanying paper presents the evaluation protocol along with the operational definition of each of these parameters.

  19. Thermomechanical Modeling of Sintered Silver - A Fracture Mechanics-based Approach: Extended Abstract: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Paret, Paul P [National Renewable Energy Laboratory (NREL), Golden, CO (United States); DeVoto, Douglas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Narumanchi, Sreekant V [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-01

    Sintered silver has proven to be a promising candidate for use as a die-attach and substrate-attach material in automotive power electronics components. It holds promise of greater reliability than lead-based and lead-free solders, especially at higher temperatures (less than 200 degrees Celsius). Accurate predictive lifetime models of sintered silver need to be developed, and its failure mechanisms thoroughly characterized, before it can be deployed as a die-attach or substrate-attach material in wide-bandgap device-based packages. We present a finite element method (FEM) modeling methodology that can offer greater accuracy in predicting the failure of sintered silver under accelerated thermal cycling. A fracture mechanics-based approach is adopted in the FEM model, and J-integral/thermal cycle values are computed. In this paper, we outline the procedures for obtaining the J-integral/thermal cycle values in a computational model and report on the possible advantage of using these values as modeling parameters in a predictive lifetime model.

  20. Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data

    Science.gov (United States)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng

    2017-03-01

    Turbulence modeling is a critical component in numerical simulations of industrial flows based on Reynolds-averaged Navier-Stokes (RANS) equations. However, after decades of effort in the turbulence modeling community, universally applicable RANS models with predictive capabilities are still lacking. Large discrepancies in the RANS-modeled Reynolds stresses are the main source limiting the predictive accuracy of RANS models. Identifying these discrepancies is significant for possible improvements to RANS modeling. In this work, we propose a data-driven, physics-informed machine learning approach for reconstructing discrepancies in RANS-modeled Reynolds stresses. The discrepancies are formulated as functions of the mean flow features. Using a modern machine learning technique based on random forests, the discrepancy functions are trained on existing direct numerical simulation (DNS) databases and then used to predict Reynolds stress discrepancies in different flows where data are not available. The proposed method is evaluated on two classes of flows: (1) fully developed turbulent flows in a square duct at various Reynolds numbers and (2) flows with massive separations. For the separated flows, two training scenarios of increasing difficulty are considered: (1) the flow in the same periodic hills geometry yet at a lower Reynolds number and (2) the flow in a different hill geometry with a similar recirculation zone. Excellent predictive performance was observed in both scenarios, demonstrating the merits of the proposed method.
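
    A schematic version of the training/prediction loop, with synthetic stand-ins for the mean-flow features and the DNS-derived discrepancy, using scikit-learn's random forest (the actual feature set and discrepancy parameterization of the paper are not reproduced):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Train a random forest to map mean-flow features to a Reynolds-stress
        # discrepancy, then predict on a "new flow".
        rng = np.random.default_rng(7)
        n_train, n_test, n_feat = 5000, 1000, 6

        X_train = rng.normal(size=(n_train, n_feat))     # mean-flow features
        delta = np.tanh(X_train[:, 0]) + 0.3 * X_train[:, 1] ** 2  # "discrepancy"
        delta += 0.05 * rng.standard_normal(n_train)

        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        rf.fit(X_train, delta)

        X_test = rng.normal(size=(n_test, n_feat))       # features of the new flow
        pred = rf.predict(X_test)
        truth = np.tanh(X_test[:, 0]) + 0.3 * X_test[:, 1] ** 2
        print(np.sqrt(np.mean((pred - truth) ** 2)))     # held-out RMSE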

  1. Model-free prediction and regression a transformation-based approach to inference

    CERN Document Server

    Politis, Dimitris N

    2015-01-01

    The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...

  2. Are individual based models a suitable approach to estimate population vulnerability? - a case study

    Directory of Open Access Journals (Sweden)

    Eva Maria Griebeler

    2011-04-01

    Full Text Available European populations of the Large Blue Butterfly Maculinea arion have experienced severe declines in recent decades, especially in the northern part of the species' range. This endangered lycaenid butterfly needs two resources for development: flower buds of specific plants (Thymus spp., Origanum vulgare), on which young caterpillars briefly feed, and red ants of the genus Myrmica, whose nests support caterpillars during a prolonged final instar. I present an analytically solvable deterministic model to estimate the vulnerability of populations of M. arion. Results obtained from the sensitivity analysis of this mathematical model (MM) are contrasted with the respective results that had been derived from a spatially explicit individual-based model (IBM) for this butterfly. I demonstrate that details of landscape configuration, which are neglected by the MM but easily taken into consideration by the IBM, result in a different degree of intraspecific competition of caterpillars on flower buds and within host ant nests. The resulting differences in caterpillar mortalities lead to erroneous estimates of the extinction risk of a butterfly population living in habitat with low food plant coverage and low abundance of host ant nests. This observation favors the use of an individual-based modeling approach over the deterministic approach, at least for the management of this threatened butterfly.

  3. Addressing dependability by applying an approach for model-based risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjorn Axel [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: bjorn.axel.gran@hrp.no; Fredriksen, Rune [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: rune.fredriksen@hrp.no; Thunem, Atoosa P.-J. [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: atoosa.p-j.thunem@hrp.no

    2007-11-15

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied to address different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects such as system functions, component behaviours and intercommunications must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast to this, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and through a series of seven trials provided a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of a trust case development.

  4. Optimal control based on adaptive model reduction approach to control transfer phenomena

    Science.gov (United States)

    Oulghelou, Mourad; Allery, Cyrille

    2017-01-01

    The purpose of optimal control is to act on a set of parameters characterizing a dynamical system in order to achieve a target dynamics. To reduce the CPU time and memory storage needed to perform control on evolution systems, it is possible to use reduced order models (ROMs). The most widely used is the Proper Orthogonal Decomposition (POD). However, the bases constructed in this way are sensitive to the configuration of the dynamical system; the need for a full simulation to build a basis for each configuration is time consuming and keeps that approach relatively expensive. In this paper, to overcome this difficulty, we suggest using a suitable basis interpolation method. It consists in computing the bases associated with a distribution of control parameters; these bases are afterwards called in the control algorithm to build a reduced basis adapted to a given control parameter. The interpolation method builds on the calculus of geodesics on the Grassmann manifold.
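
    As a concrete illustration of the basis-interpolation idea, the NumPy sketch below interpolates orthonormal reduced bases along a one-dimensional parameter by mapping them into a common tangent space of the Grassmann manifold and back (the log/exp construction popularized by Amsallem and Farhat). It is a minimal reading of the approach, not the authors' code; the parameter grid is assumed increasing and the r x r alignment matrices invertible.

      import numpy as np

      def grassmann_interpolate(bases, params, p_new, i_ref=0):
          """Interpolate orthonormal reduced bases on the Grassmann manifold.

          bases  : list of (n x r) orthonormal matrices, one per sampled parameter
          params : increasing 1-D array of the sampled parameter values
          p_new  : parameter value at which a new basis is required
          """
          Phi0 = bases[i_ref]
          gammas = []
          for Phi in bases:
              # Log map: tangent vector at span(Phi0) pointing towards span(Phi)
              D = Phi @ np.linalg.inv(Phi0.T @ Phi) - Phi0
              U, S, Vt = np.linalg.svd(D, full_matrices=False)
              gammas.append(U @ np.diag(np.arctan(S)) @ Vt)
          # Entry-wise linear interpolation in the common tangent space
          w = np.interp(p_new, params, np.arange(len(params)))
          lo, hi, t = int(np.floor(w)), int(np.ceil(w)), w - int(np.floor(w))
          G = (1.0 - t) * gammas[lo] + t * gammas[hi]
          # Exp map: return from the tangent space to the manifold
          U, S, Vt = np.linalg.svd(G, full_matrices=False)
          return Phi0 @ Vt.T @ np.diag(np.cos(S)) @ Vt + U @ np.diag(np.sin(S)) @ Vt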

  5. Formulation of a hybrid calibration approach for a physically based distributed model with NEXRAD data input

    Science.gov (United States)

    Di Luzio, Mauro; Arnold, Jeffrey G.

    2004-10-01

    This paper describes the background, formulation and results of an hourly input-output calibration approach proposed for the Soil and Water Assessment Tool (SWAT) watershed model, presented for 24 representative storm events occurring during the period between 1994 and 2000 in the Blue River watershed (1233 km2, located in Oklahoma). This effort is the first follow-up to the participation in the National Weather Service-Distributed Modeling Intercomparison Project (DMIP), an opportunity to apply, for the first time within the SWAT modeling framework, routines for hourly stream flow prediction based on gridded precipitation (NEXRAD) data input. Previous SWAT model simulations, uncalibrated and with moderate manual calibration (only the water balance over the calibration period), were provided for the entire set of watersheds and associated outlets for the comparison designed in the DMIP project. The extended goal of this follow-up was to verify the model's efficiency in simulating hourly hydrographs, calibrating each storm event using the formulated approach. This included a combination of a manual and an automatic calibration approach (Shuffled Complex Evolution Method) and the use of input parameter values allowed to vary only within their physical extent. While the model provided reasonable water budget results with minimal calibration, event simulations with the revised calibration were significantly improved. The combination of NEXRAD precipitation data input, the soil water balance and runoff equations, along with the calibration strategy described in the paper, appears to adequately describe the storm events. The presented application and the formulated calibration method are initial steps toward improving the hourly simulation of the SWAT model loading variables associated with storm flow, such as sediment and pollutants, and the success of Total Maximum Daily Load (TMDL) projects.
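
    The event-by-event objective in such a calibration is typically a goodness-of-fit statistic on the hourly hydrograph. The sketch below is a loose stand-in for the paper's setup: it scores a simulated event with the Nash-Sutcliffe efficiency and hands it to SciPy's differential evolution optimizer (used here in place of the Shuffled Complex Evolution method, which is not in SciPy). run_swat_event, the parameter choices and the bounds are hypothetical placeholders.

      import numpy as np
      from scipy.optimize import differential_evolution

      def nse(observed, simulated):
          """Nash-Sutcliffe efficiency; 1 is a perfect fit, <0 is worse than the mean."""
          observed, simulated = np.asarray(observed), np.asarray(simulated)
          return 1.0 - np.sum((observed - simulated) ** 2) \
                     / np.sum((observed - observed.mean()) ** 2)

      def objective(theta, rainfall, observed_flow):
          # run_swat_event: hypothetical wrapper launching one hourly model run
          simulated = run_swat_event(theta, rainfall)
          return -nse(observed_flow, simulated)      # minimise the negative NSE

      # Parameters vary only within physically meaningful ranges, e.g.
      # (CN2, SOL_AWC, CH_N2); the bounds below are illustrative only.
      # rainfall and observed_flow are the event's hourly series (arrays).
      bounds = [(35.0, 98.0), (0.05, 0.40), (0.01, 0.15)]
      best = differential_evolution(objective, bounds, args=(rainfall, observed_flow))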

  6. Model-based elastography: a survey of approaches to the inverse elasticity problem

    Science.gov (United States)

    Doyley, M M

    2012-01-01

    Elastography is emerging as an imaging modality that can distinguish normal from diseased tissues via their biomechanical properties. This article reviews current approaches to elastography in three areas (quasi-static, harmonic, and transient) and describes inversion schemes for each elastographic imaging approach. Approaches include first-order approximation methods; direct and iterative inversion schemes for linear elastic, isotropic materials; and advanced reconstruction methods for recovering parameters that characterize complex mechanical behavior. The paper's objective is to document efforts to develop elastography within the framework of solving an inverse problem, so that elastography may provide reliable estimates of shear modulus and other mechanical parameters. We discuss issues that must be addressed if model-based elastography is to become the prevailing approach to quasi-static, harmonic, and transient elastography: (1) developing practical techniques to transform the ill-posed problem into a well-posed one; (2) devising better forward models to capture the transient behavior of soft tissue; and (3) developing better test procedures to evaluate the performance of modulus elastograms. PMID:22222839
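
    For the quasi-static case, the simplest of the surveyed schemes (the first-order approximation) reduces to a strain-ratio computation: under a roughly uniform applied stress, modulus is inversely proportional to the measured axial strain. A minimal sketch, with the array names assumed rather than taken from the survey:

      import numpy as np

      def relative_modulus(axial_strain, background_mask, eps=1e-6):
          """First-order quasi-static reconstruction: under a near-uniform
          applied stress, Young's modulus is inversely proportional to the
          measured axial strain, so E / E_background ~ strain_background / strain."""
          e_bg = axial_strain[background_mask].mean()
          return e_bg / np.clip(axial_strain, eps, None)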

  7. A general model-based design of experiments approach to achieve practical identifiability of pharmacokinetic and pharmacodynamic models.

    Science.gov (United States)

    Galvanin, Federico; Ballan, Carlo C; Barolo, Massimiliano; Bezzo, Fabrizio

    2013-08-01

    The use of pharmacokinetic (PK) and pharmacodynamic (PD) models is a common and widespread practice in the preliminary stages of drug development. However, PK-PD models may be affected by structural identifiability issues intrinsically related to their mathematical formulation. A preliminary structural identifiability analysis is usually carried out to check if the set of model parameters can be uniquely determined from experimental observations under the ideal assumptions of noise-free data and no model uncertainty. However, even for structurally identifiable models, real-life experimental conditions and model uncertainty may strongly affect the practical possibility of estimating the model parameters in a statistically sound way. A systematic procedure coupling the numerical assessment of structural identifiability with advanced model-based design of experiments formulations is presented in this paper. The objective is to propose a general approach to design experiments in an optimal way, detecting a proper set of experimental settings that ensures the practical identifiability of PK-PD models. Two simulated case studies based on in vitro bacterial growth and killing models are presented to demonstrate the applicability and generality of the methodology to tackle model identifiability issues effectively, through the design of feasible and highly informative experiments.
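
    The numerical core of such an assessment is the Fisher information matrix (FIM) built from parameter sensitivities of the model output: a design is informative where det(FIM) is large and the FIM is well-conditioned. A minimal sketch for a one-compartment PK model; the model, parameter values and noise level are illustrative, not taken from the paper.

      import numpy as np

      def conc(t, theta, dose=100.0):
          """One-compartment oral PK model (illustrative)."""
          ka, ke, V = theta
          return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

      def fisher_information(t, theta, sigma=0.1, h=1e-6):
          """FIM from finite-difference sensitivities, constant measurement noise."""
          S = np.zeros((len(t), len(theta)))
          for j in range(len(theta)):
              up, dn = np.array(theta, float), np.array(theta, float)
              up[j] += h; dn[j] -= h
              S[:, j] = (conc(t, up) - conc(t, dn)) / (2 * h)
          return S.T @ S / sigma**2

      # D-optimality: pick sampling times that maximise det(FIM); the model is
      # practically identifiable only if the FIM is well-conditioned there.
      theta0 = (1.2, 0.25, 15.0)
      t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
      F = fisher_information(t, theta0)
      print(np.linalg.det(F), np.linalg.cond(F))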

  8. Aircraft wing structural design optimization based on automated finite element modelling and ground structure approach

    Science.gov (United States)

    Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan

    2016-01-01

    An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.

  9. A model-based approach to identify binding sites in CLIP-Seq data.

    Directory of Open Access Journals (Sweden)

    Tao Wang

    Full Text Available Cross-linking immunoprecipitation coupled with high-throughput sequencing (CLIP-Seq) has made it possible to identify the targeting sites of RNA-binding proteins in various cell culture systems and tissue types on a genome-wide scale. Here we present a novel model-based approach (MiClip) to identify high-confidence protein-RNA binding sites from CLIP-seq datasets. This approach assigns a probability score for each potential binding site to help prioritize subsequent validation experiments. The MiClip algorithm has been tested in both HITS-CLIP and PAR-CLIP datasets. In the HITS-CLIP dataset, the signal/noise ratios of miRNA seed motif enrichment produced by the MiClip approach are between 17% and 301% higher than those by the ad hoc method for the top 10 most enriched miRNAs. In the PAR-CLIP dataset, the MiClip approach can identify ∼50% more validated binding targets than the original ad hoc method and two recently published methods. To facilitate the application of the algorithm, we have released an R package, MiClip (http://cran.r-project.org/web/packages/MiClip/index.html), and a public web-based graphical user interface software (http://galaxy.qbrc.org/tool_runner?tool_id=mi_clip) for customized analysis.

  10. Modelling and simulation of complex systems: an approach based on multi-level agents

    Directory of Open Access Journals (Sweden)

    Alain-Jerome Fougeres

    2011-11-01

    Full Text Available A complex system is made up of many components with many interactions. So the design of systems such as simulation systems, cooperative systems or assistance systems requires very accurate modelling of the interactional and communicational levels. The agent-based approach provides an adapted abstraction level for this problem. After having studied the organizational context and communicative capacities of agent-based systems, to simulate the reorganization of a flexible manufacturing system, to regulate an urban transport system, and to simulate an epidemic detection system, our thoughts on the interactional level were inspired by human-machine interface models, especially those in cognitive engineering. To provide a general framework for agent-based complex systems modelling, we then proposed a scale of four behaviours that agents may adopt in their complex systems (reactive, routine, cognitive, and collective). To complete the description of multi-level agent models, which is the focus of this paper, we illustrate our modelling and discuss our ongoing work on each level.

  11. Mask optimization approaches in optical lithography based on a vector imaging model.

    Science.gov (United States)

    Ma, Xu; Li, Yanqiu; Dong, Lisong

    2012-07-01

    Recently, a set of gradient-based optical proximity correction (OPC) and phase-shifting mask (PSM) optimization methods has been developed to solve for the inverse lithography problem under scalar imaging models, which are only accurate for numerical apertures (NAs) of less than approximately 0.4. However, as lithography technology enters the 45 nm realm, immersion lithography systems with hyper-NA (NA>1) are now extensively used in the semiconductor industry. For the hyper-NA lithography systems, the vector nature of the electromagnetic field must be taken into account, leading to the vector imaging models. Thus, the OPC and PSM optimization approaches developed under the scalar imaging models are inadequate to enhance the resolution in immersion lithography systems. This paper focuses on developing pixelated gradient-based OPC and PSM optimization algorithms under a vector imaging model. We first formulate the mask optimization framework, in which the imaging process of the optical lithography system is represented by an integrative and analytic vector imaging model. A gradient-based algorithm is then used to optimize the mask iteratively. Subsequently, a generalized wavelet penalty is proposed to keep a balance between the mask complexity and convergence errors. Finally, a set of methods is exploited to speed up the proposed algorithms.

  12. An approach based on Hierarchical Bayesian Graphical Models for measurement interpretation under uncertainty

    Science.gov (United States)

    Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter

    2017-02-01

    It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite and sometimes due to the richness of data, significant challenges arise in the interpretation, manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real-time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements, which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of the posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
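
    A stripped-down analogue of the final classification step can be written with off-the-shelf Gaussian mixtures: one mixture per state of nature is trained offline on forward-model outputs, and Bayes' rule turns the mixture likelihoods into a posterior over states for a new measurement. This flattens the hierarchical network into a plain mixture classifier and uses synthetic data, so it is only a sketch of the idea, not the HBGM itself; all names and numbers are hypothetical.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Forward-model outputs for two states of nature (synthetic training data)
      rng = np.random.default_rng(0)
      good_bond = rng.normal([0.0, 1.0], 0.3, size=(500, 2))
      poor_bond = rng.normal([1.5, 0.2], 0.5, size=(500, 2))

      # One mixture per state plays the role of p(measurement | state)
      models = {s: GaussianMixture(n_components=2).fit(d)
                for s, d in [("good", good_bond), ("poor", poor_bond)]}

      def classify(x, prior={"good": 0.5, "poor": 0.5}):
          """Posterior over states for a new measurement x (Bayes' rule)."""
          logp = {s: m.score_samples(x.reshape(1, -1))[0] + np.log(prior[s])
                  for s, m in models.items()}
          z = max(logp.values())
          w = {s: np.exp(v - z) for s, v in logp.items()}
          tot = sum(w.values())
          return {s: v / tot for s, v in w.items()}

      print(classify(np.array([0.2, 0.9])))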

  13. A feature-based approach to modeling protein-DNA interactions.

    Directory of Open Access Journals (Sweden)

    Eilon Sharon

    Full Text Available Transcription factor (TF) binding to its DNA target site is a fundamental regulatory interaction. The most common model used to represent TF binding specificities is a position specific scoring matrix (PSSM), which assumes independence between binding positions. However, in many cases, this simplifying assumption does not hold. Here, we present feature motif models (FMMs), a novel probabilistic method for modeling TF-DNA interactions, based on log-linear models. Our approach uses sequence features to represent TF binding specificities, where each feature may span multiple positions. We develop the mathematical formulation of our model and devise an algorithm for learning its structural features from binding site data. We also developed a discriminative motif finder, which discovers de novo FMMs that are enriched in target sets of sequences compared to background sets. We evaluate our approach on synthetic data and on the widely used TF chromatin immunoprecipitation (ChIP) dataset of Harbison et al. We then apply our algorithm to high-throughput TF ChIP data from mouse and human, reveal sequence features that are present in the binding specificities of mouse and human TFs, and show that FMMs explain TF binding significantly better than PSSMs. Our FMM learning and motif finder software are available at http://genie.weizmann.ac.il/.
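
    The difference between the two representations is easy to state in code: a PSSM scores each position independently, while an FMM-style log-linear model adds weighted features that may span several positions. The sketch below uses a hypothetical dinucleotide feature; in practice the weights would be learned from binding site data as the paper describes.

      BASES = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

      def pssm_score(seq, pssm):
          """Independent-position model: sum of per-position log-odds."""
          return sum(pssm[i][BASES[b]] for i, b in enumerate(seq))

      def fmm_score(seq, pssm, pair_features):
          """Log-linear model: PSSM terms plus features spanning two positions,
          e.g. {(0, 1, 'GC'): 0.7} rewards a GC dinucleotide at positions 0-1."""
          score = pssm_score(seq, pssm)
          for (i, j, pair), weight in pair_features.items():
              if seq[i] + seq[j] == pair:
                  score += weight
          return score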

  14. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Full Text Available Background: A common challenge in systems biology is to infer mechanistic descriptions of a biological process given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an Adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results: As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion: In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
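
    The convergence criterion mentioned here is simple to compute. A minimal sketch of the Gelman-Rubin potential scale reduction factor for several equal-length chains of one scalar parameter (values near 1 suggest the chains have mixed); this is the textbook statistic, not code from the paper:

      import numpy as np

      def gelman_rubin(chains):
          """Potential scale reduction factor for m chains of length n
          (rows = chains); values close to 1 indicate convergence."""
          chains = np.asarray(chains, float)
          m, n = chains.shape
          B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
          W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
          var_hat = (n - 1) / n * W + B / n
          return np.sqrt(var_hat / W)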

  15. Automatically multi-paradigm requirements modeling and analyzing: An ontology-based approach

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    There are several purposes for modeling and analyzing the problem domain before starting the software requirements analysis. First, it focuses on the problem domain, so that the domain users can be involved easily. Secondly, a comprehensive description of the problem domain helps in obtaining a comprehensive software requirements model. This paper proposes an ontology-based approach for modeling the problem domain. It interacts with the domain users by using terminology that they can understand and guides them to provide the relevant information. A multiple-paradigm analysis approach, based on the description of the problem domain, has also been presented. Three criteria, i.e. the rationality of the organization structure, the achievability of the organization goals, and the feasibility of the organization process, have been proposed. The results of the analysis can be used as feedback for guiding the domain users to provide further information on the problem domain. The models of the problem domain can serve as documentation for the pre-requirements analysis phase; they will also be the basis for further software requirements modeling.

  16. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Directory of Open Access Journals (Sweden)

    S. Raia

    2014-03-01

    Full Text Available Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are based on deterministic laws. These models extend spatially the static stability models adopted in geotechnical engineering, and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the operation of the existing models lies in the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of rainfall-induced shallow landslides. For this purpose, we have modified the transient rainfall infiltration and grid-based regional slope-stability analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a probabilistic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. The outputs
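
    The cell-level computation being randomized is the infinite-slope factor of safety. A minimal Monte Carlo sketch for a single cell, with illustrative parameter distributions standing in for the TRIGRS-P inputs; the infiltration model that would supply the pressure head is replaced here by a crude uniform draw.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000                                     # Monte Carlo samples per cell

      # Illustrative distributions for one grid cell (units: kPa, kN/m3, deg, m)
      c    = rng.uniform(2.0, 8.0, n)                # effective cohesion
      phi  = np.radians(rng.uniform(28.0, 36.0, n))  # friction angle
      gam  = 19.0                                    # soil unit weight
      z    = 1.5                                     # depth of the failure surface
      beta = np.radians(32.0)                        # slope angle
      psi  = rng.uniform(0.0, z, n)                  # pressure head from infiltration

      # Infinite-slope factor of safety, as used in the TRIGRS family of codes
      FS = (np.tan(phi) / np.tan(beta)
            + (c - psi * 9.81 * np.tan(phi)) / (gam * z * np.sin(beta) * np.cos(beta)))
      print("P(FS < 1) =", (FS < 1.0).mean())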

  17. A Reaction-based Diagonalization Approach to Modeling Surface Water Quality

    Science.gov (United States)

    Yu, J.; Yeh, G.; Zhang, F.; Wu, T.; Hu, G.

    2005-12-01

    There are many water quality models (e.g., WASP, QUAL2E/QUAL2K, CE-QUAL-ICM, RCA, RMA11, etc.) that have been employed by practitioners in surface water quality modeling. All of these models are similar to one another; the major differences among them are the number of water quality parameters included and the number of biogeochemical processes considered. Because of the limitation on the number of biogeochemical processes considered and, to a lesser extent, on the number of water quality parameters included, these models often perform only fairly in validation and their predictions may be unreliable, even though they can be adequately calibrated on most occasions and excellently on some occasions. Obviously, there is a need to develop a model that would allow the inclusion of any number of water quality parameters and enable the hypothesis of any number of biogeochemical processes. This paper presents the development of a numerical water quality model using a general paradigm of reaction-based approaches. In a reaction-based approach, all conceptualized biogeochemical processes are transformed into a reaction network. Through the decomposition of species governing equations via Gauss-Jordan column reduction of the reaction network, (1) redundant fast reactions and irrelevant kinetic reactions are removed from the system, which alleviates the problem of unnecessary and erroneous formulation and parameterization of these reactions, and (2) fast reactions and slow reactions are decoupled, which enables robust numerical integration. The system of species governing equations is transformed into two sets: algebraic equations (either mass action equations or user-specified) of equilibrium variables and differential equations of kinetic variables. As a result, the model alleviates the need to use simple partitions for fast reactions and uses kinetic variables instead of biogeochemical species as primary dependent variables. With the diagonalization strategy, it
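
    The decomposition step can be illustrated on a toy network: a Gauss-Jordan style reduction of the reaction (stoichiometric) matrix separates linear combinations of species that carry no reaction terms at all from the kinetic variables that must be integrated. A SymPy sketch on a hypothetical four-species chain, not a network from the paper:

      import sympy as sp

      # Stoichiometric matrix: rows = species, columns = reactions.
      # Illustrative 4-species, 3-reaction chain u1 -> u2 -> u3 -> u4.
      A = sp.Matrix([[-1,  0,  0],
                     [ 1, -1,  0],
                     [ 0,  1, -1],
                     [ 0,  0,  1]])

      # The left null space of A gives linear combinations of species whose
      # governing equations contain no reaction terms (conserved variables);
      # the remaining independent rows define the kinetic variables.
      conserved = A.T.nullspace()      # vectors v with v^T * du/dt = 0
      print([v.T for v in conserved])  # here: total mass u1 + u2 + u3 + u4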

  18. A framework of vertebra segmentation using the active shape model-based approach.

    Science.gov (United States)

    Benjelloun, Mohammed; Mahmoudi, Saïd; Lecron, Fabian

    2011-01-01

    We propose a medical image segmentation approach based on the Active Shape Model theory. We apply this method for cervical vertebra detection. The main advantage of this approach is the application of a statistical model created after a training stage. Thus, the knowledge and interaction of the domain expert intervene in this approach. Our application allows the use of two different models, that is, a global one (with several vertebrae) and a local one (with a single vertebra). Two modes of segmentation are also proposed: manual and semiautomatic. For the manual mode, only two points are selected by the user on a given image. The first point needs to be close to the lower anterior corner of the last vertebra and the second near the upper anterior corner of the first vertebra. These two points are required to initialize the segmentation process. We propose to use the Harris corner detector combined with three successive filters to carry out the semiautomatic process. The results obtained on a large set of X-ray images are very promising.

  19. A Framework of Vertebra Segmentation Using the Active Shape Model-Based Approach

    Directory of Open Access Journals (Sweden)

    Mohammed Benjelloun

    2011-01-01

    Full Text Available We propose a medical image segmentation approach based on the Active Shape Model theory. We apply this method for cervical vertebra detection. The main advantage of this approach is the application of a statistical model created after a training stage. Thus, the knowledge and interaction of the domain expert intervene in this approach. Our application allows the use of two different models, that is, a global one (with several vertebrae) and a local one (with a single vertebra). Two modes of segmentation are also proposed: manual and semiautomatic. For the manual mode, only two points are selected by the user on a given image. The first point needs to be close to the lower anterior corner of the last vertebra and the second near the upper anterior corner of the first vertebra. These two points are required to initialize the segmentation process. We propose to use the Harris corner detector combined with three successive filters to carry out the semiautomatic process. The results obtained on a large set of X-ray images are very promising.

  20. IMC-PID design based on model matching approach and closed-loop shaping.

    Science.gov (United States)

    Jin, Qi B; Liu, Q

    2014-03-01

    Motivated by the limitations of the conventional internal model control (IMC), this communication addresses the design of IMC-based PID in terms of the robust performance of the control system. The IMC controller form is obtained by solving an H-infinity problem based on the model matching approach, and the parameters are determined by closed-loop shaping. The shaping of the closed-loop transfer function is considered both for the set-point tracking and for the load disturbance rejection. The design procedure is formulated as a multi-objective optimization problem which is solved by a specific optimization algorithm. A nice feature of this design method is that it permits a clear tradeoff between robustness and performance. Simulation examples show that the proposed method is effective and has a wide applicability.
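
    Although the paper derives its controller from an H-infinity model matching problem, the flavor of IMC-based PID design with a single closed-loop shaping knob is easy to convey with the textbook IMC rules for a first-order-plus-dead-time process, where lambda trades robustness against speed. This is a generic stand-in, not the paper's design; the numbers are illustrative.

      def imc_pid_fopdt(K, tau, theta, lam):
          """IMC-based PID settings for G(s) = K exp(-theta s) / (tau s + 1);
          lam is the closed-loop time constant (standard IMC rules with a
          first-order Pade approximation of the delay)."""
          Kc = (tau + theta / 2) / (K * (lam + theta / 2))
          Ti = tau + theta / 2
          Td = tau * theta / (2 * tau + theta)
          return Kc, Ti, Td

      # Smaller lam -> faster set-point tracking, smaller robustness margin.
      print(imc_pid_fopdt(K=2.0, tau=10.0, theta=2.0, lam=4.0))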

  1. A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning

    Science.gov (United States)

    Roth, John; Tummala, Murali; McEachen, John

    2016-09-01

    This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a comparable level of accuracy to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the channel conditions that are present, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to be able to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.

  2. Relativistic three-body quark model of light baryons based on hypercentral approach

    Science.gov (United States)

    Aslanzadeh, M.; Rajabi, A. A.

    2015-05-01

    In this paper, we have treated the light baryons as a relativistic three-body bound system. Inspired by lattice QCD calculations, we treated baryons as a spin-independent three-quark system within a relativistic three-quark model based on the three-particle Klein-Gordon equation. We presented the analytical solution of the three-body Klein-Gordon equation, employing the constituent quark model based on a hypercentral approach through which two- and three-body forces are taken into account. In this way the average energies of the multiplets containing up, down and strange quarks are reproduced. To describe the hyperfine structure of the baryons, the splittings within the SU(6)-multiplets are produced by the generalized Gürsey-Radicati mass formula. The considered SU(6)-invariant potential is the popular "Coulomb-plus-linear" potential, and the strange and non-strange baryon spectra are in general well reproduced.

  3. A new LPV modeling approach using PCA-based parameter set mapping to design a PSS.

    Science.gov (United States)

    Jabali, Mohammad B Abolhasani; Kazemi, Mohammad H

    2017-01-01

    This paper presents a new methodology for the modeling and control of power systems based on an uncertain polytopic linear parameter-varying (LPV) approach using parameter set mapping with principal component analysis (PCA). An LPV representation of the power system dynamics is generated by linearization of its differential-algebraic equations about the transient operating points for some given specific faults, capturing the system's nonlinear properties. The time response of the output signal in the transient state plays the role of the scheduling signal that is used to construct the LPV model. A set of sample points of the dynamic response is formed to generate an initial LPV model. PCA-based parameter set mapping is used to reduce the number of models and generate a reduced LPV model. This model is used to design a robust pole placement controller to assign the poles of the power system in a linear matrix inequality (LMI) region, such that the response of the power system has a proper damping ratio for all of the different oscillation modes. The proposed scheme is applied to controller synthesis of a power system stabilizer, and its performance is compared with a tuned standard conventional PSS using nonlinear simulation of a multi-machine power network. The results under various conditions show the robust performance of the proposed controller.
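
    The parameter set mapping step amounts to a PCA of the sampled scheduling trajectories. A minimal sketch, assuming trajectories are stored with time samples as rows and scheduling parameters as columns; the polytopic model reduction and LMI synthesis steps that follow in the paper are not shown.

      import numpy as np

      def reduce_scheduling(trajectories, n_keep=2):
          """PCA-based parameter set mapping: project sampled scheduling
          trajectories onto the dominant principal directions, yielding a
          smaller set of scheduling signals for a reduced LPV model."""
          X = trajectories - trajectories.mean(axis=0)
          U, S, Vt = np.linalg.svd(X, full_matrices=False)
          explained = S**2 / np.sum(S**2)        # variance captured per direction
          return X @ Vt[:n_keep].T, Vt[:n_keep], explained[:n_keep]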

  4. Accounting for detectability in fish distribution models: an approach based on time-to-first-detection

    Directory of Open Access Journals (Sweden)

    Mário Ferreira

    2015-12-01

    Full Text Available Imperfect detection (i.e., failure to detect a species when the species is present) is increasingly recognized as an important source of uncertainty and bias in species distribution modeling. Although methods have been developed to solve this problem by explicitly incorporating variation in detectability in the modeling procedure, their use in freshwater systems remains limited. This is probably because most methods imply repeated sampling (≥ 2) of each location within a short time frame, which may be impractical or too expensive in most studies. Here we explore a novel approach to control for detectability based on the time-to-first-detection, which requires only a single sampling occasion and so may find more general applicability in freshwaters. The approach uses a Bayesian framework to combine conventional occupancy modeling with techniques borrowed from parametric survival analysis, jointly modeling factors affecting the probability of occupancy and the time required to detect a species. To illustrate the method, we modeled large scale factors (elevation, stream order and precipitation) affecting the distribution of six fish species in a catchment located in north-eastern Portugal, while accounting for factors potentially affecting detectability at sampling points (stream depth and width). Species detectability was most influenced by depth and to a lesser extent by stream width and tended to increase over time for most species. Occupancy was consistently affected by stream order, elevation and annual precipitation. These species presented a widespread distribution with higher uncertainty in tributaries and upper stream reaches. This approach can be used to estimate sampling efficiency and provide a practical framework to incorporate variations in the detection rate in fish distribution models.
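
    The key modeling device is a joint likelihood in which occupancy and detection are entangled by right-censoring: a site with no detection is either unoccupied or occupied but missed. A minimal maximum-likelihood sketch with an exponential detection time; the paper itself works in a Bayesian framework and adds covariates through link functions, so this is only the likelihood skeleton.

      import numpy as np

      def neg_log_likelihood(params, times, detected, T_max):
          """Joint occupancy / time-to-detection likelihood (exponential model).

          params   : (psi, lam) occupancy probability and detection rate
          times    : time to first detection at each sampled site
          detected : 1 if the species was detected within the survey, else 0
          T_max    : survey duration (censoring time)
          """
          psi, lam = params
          ll = np.where(
              detected == 1,
              np.log(psi) + np.log(lam) - lam * times,         # detected at time t
              np.log(psi * np.exp(-lam * T_max) + (1 - psi)))  # never detected
          return -ll.sum()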

  5. Modeling for (physical) biologists: an introduction to the rule-based approach.

    Science.gov (United States)

    Chylek, Lily A; Harris, Leonard A; Faeder, James R; Hlavacek, William S

    2015-07-16

    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions.

  6. Modeling for (physical) biologists: an introduction to the rule-based approach

    Science.gov (United States)

    Chylek, Lily A.; Harris, Leonard A.; Faeder, James R.; Hlavacek, William S.

    2015-07-01

    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions.

  7. [An annotation approach for masto-calcifications based on semantic model].

    Science.gov (United States)

    Zhao, Kexin; Song, Lixin

    2012-02-01

    To realize medical semantic annotation of mammograms, a semantic modeling approach for micro-calcifications in mammograms based on a hierarchical Bayesian network (BN) was proposed. Firstly, support vector machines (SVM) were used to map low-level image features into feature semantics; then high-level semantics were captured by fusing the feature semantics using the BN. Finally, the semantic model was established. To validate the method, the model was applied to annotate the semantic information of mammograms. In this experiment, 142 images were chosen as the training set and 50 images as the testing set. The results showed that the accuracy for malignant samples was 81.48%, and that for benign samples was 73.91%.

  8. A Modeling Approach based on UML/MARTE for GPU Architecture

    CERN Document Server

    Rodrigues, Antonio Wendell De Oliveira; Dekeyser, Jean-Luc

    2011-01-01

    Nowadays, High Performance Computing is part of the embedded systems context, and Graphics Processing Units (GPUs) are increasingly used to accelerate a broad range of algorithms and applications. Over the past years, few efforts have been made to describe abstractions of applications in relation to their target architectures. Thus, when developers need to associate applications with GPUs, for example, they encounter difficulties and prefer to use the APIs for these architectures directly. This paper presents a metamodel extension for the MARTE profile and a model for GPU architectures. The main goal is to specify task and data allocation in the memory hierarchy of these architectures. The results show that this approach will help to generate code for GPUs based on model transformations using Model Driven Engineering (MDE).

  9. Modeling Microbial Biogeochemistry from Terrestrial to Aquatic Ecosystems Using Trait-Based Approaches

    Science.gov (United States)

    King, E.; Molins, S.; Karaoz, U.; Johnson, J. N.; Bouskill, N.; Hug, L. A.; Thomas, B. C.; Castelle, C. J.; Beller, H. R.; Banfield, J. F.; Steefel, C. I.; Brodie, E.

    2014-12-01

    Currently, there is uncertainty in how climate or land-use-induced changes in hydrology and vegetation will affect subsurface carbon flux, the spatial and temporal distribution of flow and transport, biogeochemical cycling, and microbial metabolic activity. Here we focus on the initial development of a Genome-Enabled Watershed Simulation Capability (GEWaSC), which provides a predictive framework for understanding how genomic information stored in a subsurface microbiome affects biogeochemical watershed functioning, how watershed-scale processes affect microbial function, and how these interactions co-evolve. This multiscale framework builds on a hierarchical approach to multiscale modeling, which considers coupling between defined microscale and macroscale components of a system (e.g., a catchment being defined as macroscale and biogeofacies as microscale). Here, we report our progress in the development of a trait-based modeling approach within a reactive transport framework that simulates coupled guilds of microbes. Guild selection is driven by traits extracted from, and physiological properties inferred from, large-scale assembly of metagenome data. Meta-genomic, -transcriptomic and -proteomic information is also used to complement our existing biogeochemical reaction networks, contributing key reactions where biogeochemical analyses are unequivocal. Our approach models the rate of nutrient uptake and the thermodynamics of coupled electron donors and acceptors for a range of microbial metabolisms including heterotrophs and chemolitho(auto)trophs. Metabolism of exogenous substrates fuels catabolic and anabolic processes, with the proportion of energy used for each based upon dynamic intracellular and environmental conditions. In addition to biomass development, anabolism includes the production of key enzymes, such as nitrogenase for nitrogen fixation or exo-enzymes for the hydrolysis of extracellular polymers. This internal resource partitioning represents a

  10. Iterative approach to modeling subsurface stormflow based on nonlinear, hillslope-scale physics

    Directory of Open Access Journals (Sweden)

    J. H. Spaaks

    2009-08-01

    Full Text Available Soil water transport in small, humid, upland catchments is often dominated by subsurface stormflow. Recent studies of this process suggest that at the plot scale, generation of transient saturation may be governed by threshold behavior, and that transient saturation is a prerequisite for lateral flow. The interaction between these plot scale processes yields complex behavior at the hillslope scale. We argue that this complexity should be incorporated into our models. We take an iterative approach to developing our model, starting with a very simple representation of hillslope rainfall-runoff. Next, we design new virtual experiments with which we test our model, while adding more structural complexity. In this study, we present results from three such development cycles, corresponding to three different hillslope-scale, lumped models. Model1 is a linear tank model, which assumes transient saturation to be homogeneously distributed over the hillslope. Model2 assumes transient saturation to be heterogeneously distributed over the hillslope, and that the spatial distribution of the saturated zone does not vary with time. Model3 assumes that transient saturation is heterogeneous both in space and in time. We found that the homogeneity assumption underlying Model1 resulted in hillslope discharge being too steep during the first part of the rising limb, but not steep enough on the second part. Also, peak height was underestimated. The additional complexity in Model2 improved the simulations in terms of the fit, but not in terms of the dynamics. The threshold-based Model3 captured most of the hydrograph dynamics (Nash-Sutcliffe efficiency of 0.98). After having assessed our models in a lumped setup, we then compared Model1 to Model3 in a spatially explicit setup, and evaluated what patterns of subsurface flow were possible with model elements of each type. We found
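
    The contrast between the model variants is easiest to see in code. Below is a minimal sketch of a Model1-style linear reservoir next to a Model3-style threshold element; the parameter values and the explicit time stepping are illustrative, not the authors' implementation.

      import numpy as np

      def linear_tank(rainfall, k=0.15, s0=0.0):
          """Model1 style linear reservoir: dS/dt = R - k*S, Q = k*S,
          integrated with a simple explicit step (dt = 1)."""
          storage, discharge = s0, []
          for r in rainfall:
              storage += r - k * storage
              discharge.append(k * storage)
          return np.array(discharge)

      def threshold_tank(rainfall, k=0.15, deficit=20.0, s0=0.0):
          """Model3 style threshold element: no lateral flow until the local
          storage deficit is filled, which produces the observed nonlinearity."""
          storage, discharge = s0, []
          for r in rainfall:
              storage += r
              q = k * max(storage - deficit, 0.0)
              storage -= q
              discharge.append(q)
          return np.array(discharge)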

  11. Hybrid Neural Network Approach Based Tool for the Modelling of Photovoltaic Panels

    Directory of Open Access Journals (Sweden)

    Antonino Laudani

    2015-01-01

    Full Text Available A hybrid neural network approach based tool for identifying the photovoltaic one-diode model is presented. The generalization capabilities of neural networks are used together with the robustness of the reduced form of the one-diode model. Indeed, from the studies performed by the authors and the works present in the literature, it was found that direct computation of the five parameters via a multiple-input multiple-output neural network is a very difficult task. The reduced form consists of a series of explicit formulae that support the neural network, which in our case is aimed at predicting just two of the five parameters identifying the model; the other three parameters are computed by the reduced form. The present hybrid approach is efficient from the computational cost point of view and accurate in the estimation of the five parameters. It constitutes a complete and extremely easy tool suitable for implementation in a microcontroller-based architecture. Validations are made on about 10000 PV panels belonging to the California Energy Commission database.
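
    For context, once the five parameters are known, the implicit one-diode equation can be evaluated at any terminal voltage by a scalar root find. A minimal sketch with illustrative (not fitted) parameter values for a 60-cell panel:

      import numpy as np
      from scipy.optimize import brentq

      def one_diode_current(V, Iph, I0, Rs, Rsh, n, Vt=0.0257, Ns=60):
          """Solve I = Iph - I0*(exp((V+I*Rs)/(Ns*n*Vt)) - 1) - (V+I*Rs)/Rsh
          for the implicit panel current at terminal voltage V (Vt is the
          per-cell thermal voltage at 25 C, Ns the number of cells)."""
          def f(I):
              return (Iph - I0 * (np.exp((V + I * Rs) / (Ns * n * Vt)) - 1.0)
                      - (V + I * Rs) / Rsh - I)
          return brentq(f, -1.0, Iph + 1.0)   # bracket is wide enough here

      print(one_diode_current(V=30.0, Iph=8.5, I0=1e-9, Rs=0.3, Rsh=300.0, n=1.1))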

  12. Modeling the Ductile Brittle Fracture Transition in Reactor Pressure Vessel Steels using a Cohesive Zone Model based approach

    Energy Technology Data Exchange (ETDEWEB)

    Pritam Chakraborty; S. Bulent Biner

    2013-10-01

    Fracture properties of Reactor Pressure Vessel (RPV) steels show large variations with changes in temperature and irradiation levels. Brittle behavior is observed at lower temperatures and/or higher irradiation levels whereas ductile mode of failure is predominant at higher temperatures and/or lower irradiation levels. In addition to such temperature and radiation dependent fracture behavior, significant scatter in fracture toughness has also been observed. As a consequence of such variability in fracture behavior, accurate estimates of fracture properties of RPV steels are of utmost importance for safe and reliable operation of reactor pressure vessels. A cohesive zone based approach is being pursued in the present study where an attempt is made to obtain a unified law capturing both stable crack growth (ductile fracture) and unstable failure (cleavage fracture). The parameters of the constitutive model are dependent on both temperature and failure probability. The effect of irradiation has not been considered in the present study. The use of such a cohesive zone based approach would allow the modeling of explicit crack growth at both stable and unstable regimes of fracture. Also it would provide the possibility to incorporate more physical lower length scale models to predict DBT. Such a multi-scale approach would significantly improve the predictive capabilities of the model, which is still largely empirical.

  13. COMPETENCE-BASED APPROACH TO MODELLING STRUCTURES OF THE MAIN EDUCATIONAL PROGRAM

    Directory of Open Access Journals (Sweden)

    V. A. Gerasimova

    2015-01-01

    Full Text Available Based on an analysis of scientific works in the field of the competence-based approach in education, the authors establish the need for computer support at the planning and development stage of the main educational program. They develop a graph-based model for the automatic formation of the structure of the main educational program, propose an integrated criterion for discipline assessment, and develop a strategic map for the complex assessment of a discipline. The theoretical results form a basis for creating an automated system supporting the planning and development of the main educational program.

  14. The Effects of a Model-Based Physics Curriculum Program with a Physics First Approach: A Causal-Comparative Study

    Science.gov (United States)

    Liang, Ling L.; Fulmer, Gavin W.; Majerich, David M.; Clevenstine, Richard; Howanski, Raymond

    2012-01-01

    The purpose of this study is to examine the effects of a model-based introductory physics curriculum on conceptual learning in a Physics First (PF) Initiative. This is the first comparative study in physics education that applies the Rasch modeling approach to examine the effects of a model-based curriculum program combined with PF in the United…

  15. Estimating impacts of climate change policy on land use: an agent-based modelling approach.

    Science.gov (United States)

    Morgan, Fraser J; Daigneault, Adam J

    2015-01-01

    Agriculture is important to New Zealand's economy. Like other primary producers, New Zealand strives to increase agricultural output while maintaining environmental integrity. Utilising modelling to explore the economic, environmental and land use impacts of policy is critical to understand the likely effects on the sector. Key deficiencies within existing land use and land cover change models are the lack of heterogeneity in farmers and their behaviour, the role that social networks play in information transfer, and the abstraction of the global and regional economic aspects within local-scale approaches. To resolve these issues we developed the Agent-based Rural Land Use New Zealand model. The model utilises a partial equilibrium economic model and an agent-based decision-making framework to explore how the cumulative effects of individual farmer's decisions affect farm conversion and the resulting land use at a catchment scale. The model is intended to assist in the development of policy to shape agricultural land use intensification in New Zealand. We illustrate the model, by modelling the impact of a greenhouse gas price on farm-level land use, net revenue, and environmental indicators such as nutrient losses and soil erosion for key enterprises in the Hurunui and Waiau catchments of North Canterbury in New Zealand. Key results from the model show that farm net revenue is estimated to increase over time regardless of the greenhouse gas price. Net greenhouse gas emissions are estimated to decline over time, even under a no GHG price baseline, due to an expansion of forestry on low productivity land. Higher GHG prices provide a greater net reduction of emissions. While social and geographic network effects have minimal impact on net revenue and environmental outputs for the catchment, they do have an effect on the spatial arrangement of land use and in particular the clustering of enterprises.

  16. Estimating impacts of climate change policy on land use: an agent-based modelling approach.

    Directory of Open Access Journals (Sweden)

    Fraser J Morgan

    Full Text Available Agriculture is important to New Zealand's economy. Like other primary producers, New Zealand strives to increase agricultural output while maintaining environmental integrity. Utilising modelling to explore the economic, environmental and land use impacts of policy is critical to understand the likely effects on the sector. Key deficiencies within existing land use and land cover change models are the lack of heterogeneity in farmers and their behaviour, the role that social networks play in information transfer, and the abstraction of the global and regional economic aspects within local-scale approaches. To resolve these issues we developed the Agent-based Rural Land Use New Zealand model. The model utilises a partial equilibrium economic model and an agent-based decision-making framework to explore how the cumulative effects of individual farmer's decisions affect farm conversion and the resulting land use at a catchment scale. The model is intended to assist in the development of policy to shape agricultural land use intensification in New Zealand. We illustrate the model, by modelling the impact of a greenhouse gas price on farm-level land use, net revenue, and environmental indicators such as nutrient losses and soil erosion for key enterprises in the Hurunui and Waiau catchments of North Canterbury in New Zealand. Key results from the model show that farm net revenue is estimated to increase over time regardless of the greenhouse gas price. Net greenhouse gas emissions are estimated to decline over time, even under a no GHG price baseline, due to an expansion of forestry on low productivity land. Higher GHG prices provide a greater net reduction of emissions. While social and geographic network effects have minimal impact on net revenue and environmental outputs for the catchment, they do have an effect on the spatial arrangement of land use and in particular the clustering of enterprises.

  17. Modeling of Fatigue Crack Propagation in Aluminum Alloys Using an Energy Based Approach

    Directory of Open Access Journals (Sweden)

    F. Khelil

    2013-08-01

    Full Text Available Material fatigue is a particularly serious and dangerous kind of failure. Investigating the fatigue crack growth rate and fatigue life constitutes an important and complex problem in mechanics, and understanding the cracking mechanisms, taking into account various factors such as the load pattern, the strain rate and the stress ratio, is of primary importance. In this work an energy approach to Fatigue Crack Growth (FCG) is proposed. This approach is based on the numerical determination of the plastic zone, introducing a novel form of the plastic radius. Experimental results obtained on two aluminum alloys, 2024-T351 and 7075-T7351, were used to validate the developed numerical model. A good agreement was found between the two types of results.
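
    The paper's energy-based formulation is not reproduced here, but the kind of fatigue-life calculation it feeds can be conveyed by a generic Paris-law integration of the crack growth rate, a deliberately simpler stand-in; the constants are illustrative order-of-magnitude values for aluminium alloys, not fitted data.

      import numpy as np

      def paris_life(a0, a_crit, C, m, delta_sigma, Y=1.0, n=100_000):
          """Cycles to grow a crack from a0 to a_crit under the Paris law
          da/dN = C * (dK)^m, with dK = Y * delta_sigma * sqrt(pi * a)."""
          a = np.linspace(a0, a_crit, n)
          dK = Y * delta_sigma * np.sqrt(np.pi * a)
          dN_da = 1.0 / (C * dK ** m)
          # trapezoidal integration of dN/da over the crack length
          return float(np.sum(0.5 * (dN_da[1:] + dN_da[:-1]) * np.diff(a)))

      # a in m, delta_sigma in MPa, C in (m/cycle)/(MPa*sqrt(m))^m
      print(paris_life(a0=1e-3, a_crit=25e-3, C=3e-11, m=3.0, delta_sigma=100.0))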

  18. Comparing Simulation Output Accuracy of Discrete Event and Agent Based Models: A Quantitative Approach

    CERN Document Server

    Majid, Mazlina Abdul; Siebers, Peer-Olaf

    2010-01-01

    In our research we investigate the output accuracy of discrete event simulation models and agent based simulation models when studying human centric complex systems. In this paper we focus on human reactive behaviour, as it is possible in both modelling approaches to implement human reactive behaviour in the model by using standard methods. As a case study we have chosen the retail sector, and here in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation through modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step a multi-scenario experimen...

  19. Developing scenarios to assess future landslide risks: a model-based approach applied to mountainous regions

    Science.gov (United States)

    Vacquie, Laure; Houet, Thomas

    2016-04-01

    In the last century, European mountain landscapes have experienced significant transformations. Natural and anthropogenic changes, climate changes, touristic and industrial development, socio-economic interactions, and their implications in terms of LUCC (land use and land cover changes) have directly influenced the spatial organization and vulnerability of mountain landscapes. This study is conducted as part of the SAMCO project funded by the French National Science Agency (ANR). It aims at developing a methodological approach, combining various tools, modelling platforms and methods, to identify regions vulnerable to landslide hazards, accounting for future LUCC. It presents an integrated approach combining participative scenarios with LULC change simulation models to assess the combined effects of LUCC and climate change on landslide risks in the Cauterets valley (French Pyrenees Mountains) up to 2100. Through vulnerability and risk mapping, the objective is to gather information to support landscape planning and implement land use strategies with local stakeholders for risk management. Four scenarios are developed, exhibiting contrasting trajectories of socio-economic development. The prospective scenarios are based on national and international socio-economic contexts relying on existing assessment reports. The methodological approach integrates knowledge from local stakeholders to refine each scenario during its construction and to reinforce its plausibility and relevance by accounting for local specificities, e.g. logging and pastoral activities, touristic development, urban planning, etc. A process-based model, the Forecasting Scenarios for Mountains (ForeSceM) model, developed on the Dinamica Ego modelling platform, is used to spatially allocate future LUCC for each prospective scenario. Concurrently, a spatial decision support tool, i.e. the SYLVACCESS model, is used to identify accessible areas for forestry in scenarios projecting logging

  20. Alternative 3D Modeling Approaches Based on Complex Multi-Source Geological Data Interpretation

    Institute of Scientific and Technical Information of China (English)

    李明超; 韩彦青; 缪正建; 高伟

    2014-01-01

    Due to the complex nature of multi-source geological data, it is difficult to rebuild every geological structure through a single 3D modeling method. The multi-source data interpretation method put forward in this analysis is based on a database-driven pattern and focuses on the discrete and irregular features of geological data. The geological data from a variety of sources covering a range of accuracy, resolution, quantity and quality are classified and integrated according to their reliability and consistency for 3D modeling. The new interpolation-approximation fitting construction algorithm of geological surfaces with the non-uniform rational B-spline (NURBS) technique is then presented. The NURBS technique can retain the balance among the requirements for accuracy, surface continuity and data storage of geological structures. Finally, four alternative 3D modeling approaches are demonstrated with reference to some examples, which are selected according to the data quantity and accuracy specification. The proposed approaches offer flexible modeling patterns for different practical engineering demands.

  1. Availability modeling approach for future circular colliders based on the LHC operation experience

    Science.gov (United States)

    Niemi, Arto; Apollonio, Andrea; Gutleber, Johannes; Sollander, Peter; Penttinen, Jussi-Pekka; Virtanen, Seppo

    2016-12-01

    Reaching the challenging integrated luminosity production goals of a future circular hadron collider (FCC-hh) and the high luminosity LHC (HL-LHC) requires a thorough understanding of today's most powerful high energy physics research infrastructure, the LHC accelerator complex at CERN. FCC-hh, a collider ring four times larger, aims at delivering 10-20 ab-1 of integrated luminosity at 7 times higher collision energy. Since the identification of the key factors that impact availability and cost is far from obvious, a dedicated activity has been launched in the frame of the future circular collider study to develop models to study possible ways to optimize accelerator availability. This paper introduces the FCC reliability and availability study, which takes a fresh look at assessing and modeling the reliability and availability of particle accelerator infrastructures. The paper presents a probabilistic approach for Monte Carlo simulation of the machine operational cycle, schedule and availability for physics. The approach is based on best-practice, industrially applied reliability analysis methods. It relies on failure rate and repair time distributions to calculate impacts on availability. The main source of information for the study is CERN accelerator operation and maintenance data. Recent improvements in LHC failure tracking help improve the accuracy of modeling LHC performance. The model's accuracy and prediction capabilities are discussed by comparing the obtained results with past LHC operational data.
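
    The operational-cycle simulation reduces, in its simplest form, to alternating draws from a failure-time and a repair-time distribution. A minimal Monte Carlo sketch with exponential failures and lognormal repairs; all parameters are illustrative, not CERN operations data.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_availability(horizon_h=5000.0, mtbf_h=80.0, n_runs=1000):
          """Monte Carlo of an operate-fail-repair cycle over one run period."""
          fractions = []
          for _ in range(n_runs):
              t, uptime = 0.0, 0.0
              while t < horizon_h:
                  ttf = rng.exponential(mtbf_h)            # time to next failure
                  uptime += min(ttf, horizon_h - t)
                  t += ttf
                  t += rng.lognormal(mean=1.0, sigma=0.8)  # repair duration
              fractions.append(uptime / horizon_h)
          return np.mean(fractions)

      print("availability for physics ~", simulate_availability())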

  2. A linearization approach for the model-based analysis of combined aggregate and individual patient data.

    Science.gov (United States)

    Ravva, Patanjali; Karlsson, Mats O; French, Jonathan L

    2014-04-30

    The application of model-based meta-analysis in drug development has gained prominence recently, particularly for characterizing dose-response relationships and quantifying treatment effect sizes of competitor drugs. The models are typically nonlinear in nature and involve covariates to explain the heterogeneity in summary-level literature data (aggregate data, AD). Inferring individual patient-level relationships from these nonlinear meta-analysis models leads to aggregation bias. Individual patient-level data (IPD) are indeed required to characterize patient-level relationships, but too often this information is limited. Since combined analyses of AD and IPD can take advantage of the information the two sources share, the models developed for AD must be derived from the IPD models; for linear models the derivation has a closed form, while for nonlinear models closed-form solutions do not exist. Here, we propose a linearization method based on a second-order Taylor series approximation for fitting models to AD alone or to combined AD and IPD. The application of this method is illustrated by an analysis of a continuous landmark endpoint, i.e., change from baseline in HbA1c at week 12, from 18 clinical trials evaluating the effects of DPP-4 inhibitors on hyperglycemia in diabetic patients. The performance of this method is demonstrated by a simulation study in which the effects of varying the degree of nonlinearity and of heterogeneity in covariates (as assessed by the ratio of between-trial to within-trial variability) were studied. A dose-response relationship using an Emax model with linear and nonlinear effects of covariates on the Emax parameter was used to simulate data. The simulation results showed that when an IPD model is simply used for modeling AD, the bias in the Emax parameter estimate increased noticeably with an increasing degree of nonlinearity in the model with respect to covariates. When using an appropriately derived AD model, the linearization
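
    The aggregation bias discussed above, and the second-order Taylor fix, can be seen in a few lines. In this sketch the nonlinear covariate effect g, its parameters and the covariate distribution are all hypothetical; the point is only that E[g(X)] ≈ g(E[X]) + ½·g''(E[X])·Var(X) recovers most of the gap between the naive plug-in and the true aggregate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear patient-level covariate effect
# (illustrative only; not the paper's DPP-4 inhibitor model).
def g(x, theta=2.0):
    return np.exp(-theta * x)          # nonlinear in covariate x

mu, sd = 0.5, 0.3                      # within-trial covariate distribution
x = rng.normal(mu, sd, 100_000)        # individual patients in one trial

naive = g(mu)                          # plugging the trial mean into the IPD model
truth = g(x).mean()                    # true aggregate (what AD would report)

# Second-order Taylor correction: E[g(X)] ~ g(mu) + 0.5 * g''(mu) * Var(X)
g2 = 4.0 * np.exp(-2.0 * mu)           # g''(x) = theta**2 * exp(-theta * x)
linearized = naive + 0.5 * g2 * sd**2

print(f"aggregate mean      : {truth:.4f}")
print(f"naive IPD-model plug: {naive:.4f}  (aggregation bias)")
print(f"2nd-order Taylor    : {linearized:.4f}")
```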

  3. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Directory of Open Access Journals (Sweden)

    S. Raia

    2013-02-01

    Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are deterministic. These models extend spatially the static stability models adopted in geotechnical engineering and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the existing models is the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of shallow rainfall-induced landslides. For the purpose, we have modified the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a stochastic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. The outputs of several model runs obtained varying
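
    As a toy illustration of the probabilistic step, the sketch below samples cohesion and friction angle from assumed distributions and propagates them through a static infinite-slope factor-of-safety formula; TRIGRS-P additionally computes transient pore pressures from rainfall infiltration, which is omitted here. All distributions and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def factor_of_safety(c, phi_deg, beta_deg=35.0, z=2.0, gamma=19e3, u=5e3):
    """Static infinite-slope factor of safety (no transient infiltration).

    c: cohesion [Pa], phi_deg: friction angle [deg], beta_deg: slope angle,
    z: slip depth [m], gamma: unit weight [N/m3], u: pore pressure [Pa].
    """
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    resisting = c + (gamma * z * np.cos(beta)**2 - u) * np.tan(phi)
    driving = gamma * z * np.sin(beta) * np.cos(beta)
    return resisting / driving

# Sample uncertain soil properties cell by cell (illustrative distributions)
c = rng.normal(8e3, 2e3, 10_000).clip(min=0.0)    # cohesion [Pa]
phi = rng.uniform(28.0, 38.0, 10_000)             # friction angle [deg]

fs = factor_of_safety(c, phi)
print(f"P(FS < 1) = {np.mean(fs < 1.0):.3f}")     # probability of failure
```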

  4. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Science.gov (United States)

    Raia, S.; Alvioli, M.; Rossi, M.; Baum, R.L.; Godt, J.W.; Guzzetti, F.

    2013-01-01

    Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are deterministic. These models extend spatially the static stability models adopted in geotechnical engineering and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the existing models is the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of shallow rainfall-induced landslides. For the purpose, we have modified the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a stochastic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. The outputs of several model runs obtained varying the input parameters

  5. Event-Based Modeling of Driver Yielding Behavior to Pedestrians at Two-Lane Roundabout Approaches.

    Science.gov (United States)

    Salamati, Katayoun; Schroeder, Bastian J; Geruschat, Duane R; Rouphail, Nagui M

    2014-01-01

    Unlike at other types of controlled intersections, drivers do not always comply with the "yield to pedestrian" sign at roundabouts. This paper aims to identify the contributing factors affecting the likelihood of drivers yielding to pedestrians at two-lane roundabouts. It further models the likelihood of driver yielding based on these factors using logistic regression. The models were fitted to 1150 controlled pedestrian crossings at the entry and exit legs of two-lane approaches at six roundabouts across the country. The logistic regression models developed support prior research that the likelihood of driver yielding at the entry leg of roundabouts is higher than at the exit. Drivers tend to yield to pedestrians carrying a white cane more often than to sighted pedestrians. Drivers traveling in the far lane, relative to the pedestrian's location, have a lower probability of yielding to a pedestrian. As speed increases, the probability of driver yielding decreases. At the exit leg of the roundabout, drivers turning right from the adjacent lane have a lower propensity of yielding than drivers coming from other directions. The findings of this paper further suggest that although there has been much debate on pedestrian right-of-way laws and the distinction between pedestrian waiting positions (in the street versus at the curb), this factor does not have a significant impact on the driver yielding rate. The logistic regression models also quantify the effect of each of these factors on the propensity of driver yielding. The models include variables which are specific to each study location and explain the impact size of each study location on the probability of yielding. The models generated in this research will be useful to transportation professionals and researchers interested in understanding the factors that impact driver yielding at modern roundabouts. The results of the research can be used to isolate factors that may increase yielding (such as lower roundabout approach speeds
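
    A minimal sketch of the modelling step: a logistic regression of yield/no-yield outcomes on a few of the predictors named above. The data are synthetic, with effect signs chosen to mimic the reported findings; nothing here reproduces the study's coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1000

# Synthetic stand-ins for the predictors discussed above (not the study data):
speed = rng.uniform(10, 40, n)              # approach speed [km/h]
entry_leg = rng.integers(0, 2, n)           # 1 = entry leg, 0 = exit leg
white_cane = rng.integers(0, 2, n)          # 1 = pedestrian carries a white cane
far_lane = rng.integers(0, 2, n)            # 1 = driver in the far lane

# Assumed "true" effects, with signs matching the paper's findings
logit = 1.0 - 0.12 * speed + 0.8 * entry_leg + 0.6 * white_cane - 0.7 * far_lane
yielded = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([speed, entry_leg, white_cane, far_lane])
model = LogisticRegression().fit(X, yielded)

names = ["speed", "entry_leg", "white_cane", "far_lane"]
print("coefficients:", dict(zip(names, model.coef_[0].round(3))))
print("odds ratios :", dict(zip(names, np.exp(model.coef_[0]).round(3))))
```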

  6. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    Science.gov (United States)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
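
    The reoptimization idea can be sketched compactly: convolve the calculated profile with the same detector response that blurs the measurement, and fit the penumbra parameter so the two convolved profiles match. The Gaussian response, the erf-based profile shape and all dimensions below are illustrative assumptions, not the CC13 kernel or the TPS beam model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import minimize_scalar
from scipy.special import erf

x = np.linspace(-30, 30, 601)                 # off-axis distance [mm]
dx = x[1] - x[0]
sigma_chamber_mm = 3.0                        # assumed detector response width

def model_profile(penumbra_sigma):
    """TPS-like beam profile: erf edges of a 20 mm half-width field."""
    return 0.5 * (erf((x + 20) / penumbra_sigma) - erf((x - 20) / penumbra_sigma))

# "Measured" profile: a sharp real profile blurred by the chamber volume
real = model_profile(2.0)
measured = gaussian_filter1d(real, sigma_chamber_mm / dx)

def objective(penumbra_sigma):
    calc = model_profile(penumbra_sigma)
    convolved = gaussian_filter1d(calc, sigma_chamber_mm / dx)  # same response
    return np.sum((convolved - measured) ** 2)

fit = minimize_scalar(objective, bounds=(0.5, 10.0), method="bounded")
print(f"recovered penumbra parameter: {fit.x:.2f} mm (truth: 2.00 mm)")
```

    Because the calculated and measured profiles are subject to the identical convolution, the optimum recovers the sharp underlying penumbra without ever deconvolving the measurement.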

  7. A Tree-based Approach for Modelling Interception Loss From Evergreen Oak Mediterranean Savannas

    Science.gov (United States)

    Pereira, Fernando L.; Gash, John H. C.; David, Jorge S.; David, Teresa S.; Monteiro, Paulo R.; Valente, Fernanda

    2010-05-01

    woodlands in southern Portugal. For both sites, simulated interception loss agreed well with the observations, indicating the adequacy of this new methodology for modelling interception loss by isolated trees in savanna-type ecosystems. Furthermore, the proposed approach is physically based and requires only a limited amount of data. Interception loss for the entire forest can be estimated by scaling up the evaporation from individual trees, accounting for the number of trees per unit area.

  8. Loss Modeling with a Data-Driven Approach in Event-Based Rainfall-Runoff Analysis

    Science.gov (United States)

    Chua, L. H. C.

    2012-04-01

    Mathematical models require the estimation of rainfall abstractions for accurate predictions of runoff. Although loss models such as the constant loss and exponential loss models are commonly used, these methods are based on simplified assumptions of the physical process. A new approach based on the data-driven paradigm to estimate rainfall abstractions is proposed in this paper. The proposed data-driven model, based on the artificial neural network (ANN), does not make any assumptions on the loss behavior. The discharge estimated from a physically based model, the kinematic wave (KW) model run assuming zero losses, was used as the only input to the ANN. The output is the measured discharge. Thus, the ANN functions as a black-box loss model. Two sets of data were analyzed for this study. The first dataset consists of rainfall and runoff data measured from an artificial catchment (area = 25 m²) comprising two overland planes (slope = 11%), 25 m long, transversely inclined towards a rectangular channel (slope = 2%) which conveyed the flow, recorded using calibrated weigh tanks, to the outlet. Two rain gauges, each placed 6.25 m from either end of the channel, were used to record rainfall. Data for six storm events over the period between October 2002 and December 2002 were analyzed. The second dataset was obtained from the Upper Bukit Timah catchment (area = 6.4 km²), instrumented with two rain gauges and a flow measuring station. A total of six events recorded between November 1987 and July 1988 were selected for this study. The runoff predicted by the ANN was compared with the measured runoff. In addition, results from KW models developed for both catchments were used as a benchmark. The KW models were calibrated assuming the loss rate of an average event for each of the datasets. The results from both the ANN and KW models agreed well with the runoff measured from the artificial catchment. The KW model is expected to perform well since the catchment
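
    A minimal sketch of the black-box loss model: a small multilayer perceptron trained to map the zero-loss kinematic-wave discharge to the observed discharge. The hydrographs and the loss behaviour below are synthetic stand-ins, not the laboratory or Upper Bukit Timah data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Synthetic stand-in for the zero-loss KW discharge (the ANN's only input)
# and the corresponding "measured" runoff after abstractions.
t = np.linspace(0, 6, 300)                       # time [h]
q_kw = 4.0 * np.exp(-((t - 2.5) ** 2))           # zero-loss KW hydrograph [m3/s]
losses = 0.8 * np.tanh(q_kw)                     # assumed abstraction behaviour
q_obs = np.clip(q_kw - losses, 0, None) + rng.normal(0, 0.02, t.size)

# The ANN acts as a black-box loss model: q_obs = f(q_kw)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
ann.fit(q_kw.reshape(-1, 1), q_obs)

q_pred = ann.predict(q_kw.reshape(-1, 1))
rmse = np.sqrt(np.mean((q_pred - q_obs) ** 2))
print(f"training RMSE: {rmse:.3f} m3/s")
```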

  9. Using the Dynamic Model to develop an evidence-based and theory-driven approach to school improvement

    NARCIS (Netherlands)

    Creemers, B.P.M.; Kyriakides, L.

    2010-01-01

    This paper refers to a dynamic perspective of educational effectiveness and improvement stressing the importance of using an evidence-based and theory-driven approach. Specifically, an approach to school improvement based on the dynamic model of educational effectiveness is offered. The recommended

  10. Developing a physiologically based approach for modeling plutonium decorporation therapy with DTPA.

    Science.gov (United States)

    Kastl, Manuel; Giussani, Augusto; Blanchardon, Eric; Breustedt, Bastian; Fritsch, Paul; Hoeschen, Christoph; Lopez, Maria Antonia

    2014-11-01

    The aim is to develop a physiologically based compartmental approach for modeling plutonium decorporation therapy with the chelating agent diethylenetriaminepentaacetic acid (Ca-DTPA/Zn-DTPA). Model calculations were performed using the software package SAAM II (©The Epsilon Group, Charlottesville, Virginia, USA). The Luciani/Polig compartmental model, with an age-dependent description of the bone recycling processes, was used for the biokinetics of plutonium. The Luciani/Polig model was slightly modified in order to account for the speciation of plutonium in blood and for the different affinities of the present chemical species for DTPA. The introduction of two separate blood compartments, describing low-molecular-weight complexes of plutonium (Pu-LW) and transferrin-bound plutonium (Pu-Tf), respectively, and of one additional compartment describing plutonium in the interstitial fluids was performed successfully. The next step of the work is the modeling of the chelation process, coupling the physiologically modified structure with the biokinetic model for DTPA. Results of animal studies performed under controlled conditions will enable a better understanding of the principles of the mechanisms involved.

  11. A Fault Diagnosis Approach for Gears Based on IMF AR Model and SVM

    Directory of Open Access Journals (Sweden)

    Yu Yang

    2008-05-01

    An accurate autoregressive (AR) model can reflect the characteristics of a dynamic system, from which the fault features of a gear vibration signal can be extracted without constructing a mathematical model or studying the fault mechanism of the gear vibration system, difficulties encountered by time-frequency analysis methods. However, the AR model can only be applied to stationary signals, while gear fault vibration signals usually present nonstationary characteristics. Therefore, empirical mode decomposition (EMD), which can decompose a vibration signal into a finite number of intrinsic mode functions (IMFs), is introduced into the feature extraction of gear vibration signals as a preprocessor before AR models are generated. On the other hand, to address the difficulty of obtaining sufficient fault samples in practice, the support vector machine (SVM) is introduced into gear fault pattern recognition. In the method proposed in this paper, vibration signals are first decomposed into a finite number of intrinsic mode functions, then the AR model of each IMF component is established; finally, the corresponding autoregressive parameters and the variance of the residual are regarded as the fault characteristic vectors and used as input parameters of an SVM classifier to classify the working condition of gears. The experimental analysis results show that the proposed approach, in which the IMF AR model and SVM are combined, can identify the working condition of gears with a success rate of 100% even in the case of a smaller number of samples.
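
    The pipeline can be sketched end-to-end. The EMD implementation is assumed to come from the third-party PyEMD package (pip name EMD-signal); the toy fault signal, AR order and number of IMFs are arbitrary choices, not the paper's settings.

```python
import numpy as np
from PyEMD import EMD                        # third-party package (EMD-signal)
from statsmodels.tsa.ar_model import AutoReg
from sklearn.svm import SVC

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 512)

def make_signal(fault):
    """Toy gear vibration: a meshing tone plus a fault-dependent modulation."""
    base = np.sin(2 * np.pi * 60 * t)
    mod = 0.5 * np.sin(2 * np.pi * 8 * t) * base if fault else 0.0
    return base + mod + 0.1 * rng.standard_normal(t.size)

def feature_vector(signal, n_imfs=3, ar_order=4):
    """AR coefficients plus residual variance for each leading IMF."""
    imfs = EMD().emd(signal)
    feats = []
    for k in range(n_imfs):
        if k < len(imfs):
            res = AutoReg(imfs[k], lags=ar_order).fit()
            feats.extend(res.params)         # autoregressive parameters
            feats.append(res.resid.var())    # variance of the residual
        else:
            feats.extend([0.0] * (ar_order + 2))  # pad if fewer IMFs found
    return np.array(feats)

X = np.array([feature_vector(make_signal(i % 2 == 1)) for i in range(40)])
y = np.arange(40) % 2                        # 0 = normal, 1 = faulty gear
clf = SVC(kernel="rbf").fit(X[:30], y[:30])
print("hold-out accuracy:", (clf.predict(X[30:]) == y[30:]).mean())
```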

  12. Model-based system engineering approach for the Euclid mission to manage scientific and technical complexity

    Science.gov (United States)

    Lorenzo Alvarez, Jose; Metselaar, Harold; Amiaux, Jerome; Saavedra Criado, Gonzalo; Gaspar Venancio, Luis M.; Salvignol, Jean-Christophe; Laureijs, René J.; Vavrek, Roland

    2016-08-01

    In recent years, the systems engineering field has been coming to terms with a paradigm change in the approach to complexity management. Different strategies have been proposed to cope with highly interrelated systems and systems of systems, collaborative systems engineering has been proposed, and a significant effort is being invested into standardization and ontology definition. In particular, Model Based System Engineering (MBSE) intends to introduce methodologies for systematic system definition, development, validation, deployment, operation and decommissioning, based on logical and visual relationship mapping rather than traditional 'document based' information management. The practical implementation in real large-scale projects is not uniform across fields. In space science missions, the usage has been limited to subsystems or sample projects, with modeling being performed 'a posteriori' in many instances. The main hurdle for the introduction of MBSE practices in new projects is still the difficulty of demonstrating their added value to a project and whether their benefit is commensurate with the level of effort required to put them in place. In this paper we present the implemented Euclid system modeling activities, and an analysis of the benefits and limitations identified to support, in particular, requirement breakdown and allocation, and verification planning at mission level.

  13. An acoustic-convective splitting-based approach for the Kapila two-phase flow model

    Science.gov (United States)

    ten Eikelder, M. F. P.; Daude, F.; Koren, B.; Tijsseling, A. S.

    2017-02-01

    In this paper we propose a new acoustic-convective splitting-based numerical scheme for the Kapila five-equation two-phase flow model. The splitting operator decouples the acoustic waves and convective waves. The resulting two submodels are alternately numerically solved to approximate the solution of the entire model. The Lagrangian form of the acoustic submodel is numerically solved using an HLLC-type Riemann solver whereas the convective part is approximated with an upwind scheme. The result is a simple method which allows for a general equation of state. Numerical computations are performed for standard two-phase shock tube problems. A comparison is made with a non-splitting approach. The results are in good agreement with reference results and exact solutions.
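
    The alternating structure of the splitting can be shown on a much simpler linear analogue. Below, a 1-D linear acoustics system with a background convective speed is advanced by Lie splitting: an acoustic substep (exact characteristic decomposition, upwinded at ±c) followed by a convective substep (upwind advection at speed a). This is not the paper's HLLC-based scheme for the Kapila model; the grid, speeds and initial pulse are invented.

```python
import numpy as np

# Grid and physical parameters (illustrative)
nx, L = 400, 1.0
dx = L / nx
x = np.arange(nx) * dx
rho, c, a = 1.0, 5.0, 1.0            # density, sound speed, convective speed
dt = 0.4 * dx / (c + a)              # CFL-limited time step

p = np.exp(-200 * (x - 0.3) ** 2)    # initial pressure pulse
v = np.zeros(nx)                     # initial velocity

def upwind(q, speed):
    """First-order upwind advection step with periodic boundaries."""
    if speed >= 0:
        return q - speed * dt / dx * (q - np.roll(q, 1))
    return q - speed * dt / dx * (np.roll(q, -1) - q)

for _ in range(200):
    # 1) acoustic substep: advect characteristic variables w± = p ± rho*c*v
    wp, wm = p + rho * c * v, p - rho * c * v
    wp, wm = upwind(wp, c), upwind(wm, -c)
    p, v = 0.5 * (wp + wm), (wp - wm) / (2 * rho * c)
    # 2) convective substep: both fields carried by the background flow
    p, v = upwind(p, a), upwind(v, a)

print(f"pressure peak now near x = {x[np.argmax(p)]:.2f}")
```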

  14. An acoustic-convective splitting-based approach for the Kapila two-phase flow model

    Energy Technology Data Exchange (ETDEWEB)

    Eikelder, M.F.P. ten, E-mail: m.f.p.teneikelder@tudelft.nl [EDF R&D, AMA, 7 boulevard Gaspard Monge, 91120 Palaiseau (France); Eindhoven University of Technology, Department of Mathematics and Computer Science, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Daude, F. [EDF R&D, AMA, 7 boulevard Gaspard Monge, 91120 Palaiseau (France); IMSIA, UMR EDF-CNRS-CEA-ENSTA 9219, Université Paris Saclay, 828 Boulevard des Maréchaux, 91762 Palaiseau (France); Koren, B.; Tijsseling, A.S. [Eindhoven University of Technology, Department of Mathematics and Computer Science, P.O. Box 513, 5600 MB Eindhoven (Netherlands)

    2017-02-15

    In this paper we propose a new acoustic-convective splitting-based numerical scheme for the Kapila five-equation two-phase flow model. The splitting operator decouples the acoustic waves and convective waves. The resulting two submodels are alternately numerically solved to approximate the solution of the entire model. The Lagrangian form of the acoustic submodel is numerically solved using an HLLC-type Riemann solver whereas the convective part is approximated with an upwind scheme. The result is a simple method which allows for a general equation of state. Numerical computations are performed for standard two-phase shock tube problems. A comparison is made with a non-splitting approach. The results are in good agreement with reference results and exact solutions.

  15. A cell-based computational modeling approach for developing site-directed molecular probes.

    Directory of Open Access Journals (Sweden)

    Jing-Yu Yu

    Modeling the local absorption and retention patterns of membrane-permeant small molecules in a cellular context could facilitate the development of site-directed chemical agents for bioimaging or therapeutic applications. Here, we present an integrative approach to this problem, combining in silico computational models, in vitro cell-based assays and in vivo biodistribution studies. To target small molecule probes to the epithelial cells of the upper airways, a multiscale computational model of the lung was first used as a screening tool, in silico. Following virtual screening, cell monolayers differentiated on microfabricated pore arrays and multilayer cultures of primary human bronchial epithelial cells differentiated at an air-liquid interface were used to test the local absorption and intracellular retention patterns of selected probes, in vitro. Lastly, experiments involving visualization of bioimaging probe distribution in the lungs after local and systemic administration were used to test the relevance of the computational models and cell-based assays, in vivo. The results of the in vivo experiments were consistent with the results of the in silico simulations, indicating that mitochondrial accumulation of membrane-permeant, hydrophilic cations can be used to maximize local exposure and retention, specifically in the upper airways after intratracheal administration.

  16. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    Science.gov (United States)

    Mathe, Nathalie; Chen, James

    1994-01-01

    Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to this user. However, most of this knowledge about contextual relevance is not found within the contents of documents; it is rather established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and for incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it requires neither prior knowledge nor training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.
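
    A toy version of the relevance network fits in a few lines: feedback is recorded per (profile, query token, reference) and rankings generalize to new queries through shared tokens. The class, its scoring rule and the example documents are hypothetical simplifications of the model described above.

```python
from collections import defaultdict

class RelevanceNetwork:
    """Toy relevance network: records user feedback per (profile, query) and
    generalizes to similar queries by token overlap. Illustrative only."""

    def __init__(self):
        self.scores = defaultdict(float)   # (profile, token, ref) -> relevance

    def feedback(self, profile, query, ref, relevant=True):
        delta = 1.0 if relevant else -1.0
        for token in query.lower().split():
            self.scores[(profile, token, ref)] += delta

    def rank(self, profile, query, refs):
        def score(ref):
            return sum(self.scores[(profile, tok, ref)]
                       for tok in query.lower().split())
        return sorted(refs, key=score, reverse=True)

net = RelevanceNetwork()
net.feedback("pilot", "engine startup procedure", "doc-42")
net.feedback("pilot", "engine shutdown", "doc-17")
print(net.rank("pilot", "engine startup", ["doc-17", "doc-42", "doc-99"]))
# ['doc-42', 'doc-17', 'doc-99'] -- generalizes via the shared 'engine' token
```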

  17. Physiologically Based Pharmacokinetic (PBPK) Modeling and Simulation Approaches: A Systematic Review of Published Models, Applications, and Model Verification.

    Science.gov (United States)

    Sager, Jennifer E; Yu, Jingjing; Ragueneau-Majlessi, Isabelle; Isoherranen, Nina

    2015-11-01

    Modeling and simulation of drug disposition has emerged as an important tool in drug development, clinical study design and regulatory review, and the number of physiologically based pharmacokinetic (PBPK) modeling related publications and regulatory submissions has risen dramatically in recent years. However, the extent of use of PBPK modeling by researchers, and the public availability of models, has not been systematically evaluated. This review evaluates PBPK-related publications to 1) identify the common applications of PBPK modeling; 2) determine ways in which models are developed; 3) establish how model quality is assessed; and 4) provide a list of publicly available PBPK models for sensitive P450 and transporter substrates as well as selective inhibitors and inducers. PubMed searches were conducted using the terms "PBPK" and "physiologically based pharmacokinetic model" to collect published models. Only papers on PBPK modeling of pharmaceutical agents in humans published in English between 2008 and May 2015 were reviewed. A total of 366 PBPK-related articles met the search criteria, with the number of articles published per year rising steadily. Published models were most commonly used for drug-drug interaction predictions (28%), followed by interindividual variability and general clinical pharmacokinetic predictions (23%), formulation or absorption modeling (12%), and predicting age-related changes in pharmacokinetics and disposition (10%). In total, 106 models of sensitive substrates, inhibitors, and inducers were identified. An in-depth analysis of the model development and verification revealed a lack of consistency in model development and quality assessment practices, demonstrating a need for the development of best-practice guidelines.

  18. Physiologically corrected coupled motion during gait analysis using a model-based approach.

    Science.gov (United States)

    Bonnechère, Bruno; Sholukha, Victor; Salvia, Patrick; Rooze, Marcel; Van Sint Jan, Serge

    2015-01-01

    Gait analysis is used in daily clinics for patients' evaluation and follow-up. Stereophotogrammetric devices are the most commonly used tools to perform these analyses. Although these devices are accurate, results must be analyzed carefully due to relatively poor reproducibility. One of the major issues is related to skin displacement artifacts. Motion representation is recognized as reliable for the main plane of motion, but secondary or combined motions are less reliable because of the above artifacts. A model-based approach (MBA) combining accurate joint kinematics and motion data was previously developed based on a double-step registration method. This study presents an extensive validation of this MBA method by comparing results with a conventional motion representation model. Thirty-five healthy subjects participated in this study. Gait motion data were obtained from a stereophotogrammetric system. The Plug-in Gait model (PiG) and the MBA were applied to the raw data, and the results were then compared. Ranges of motion were computed for the pelvis, hip, knee and ankle joints. Differences between PiG and MBA were then computed. Paired-sample t-tests were used to compare both methods. Normalized root-mean-square errors were also computed. The shapes of the curves were compared using coefficients of multiple correlation. The MBA and PiG approaches show similar results for the main plane of motion, but statistically significant discrepancies appear for the combined motions. The MBA appears to be usable in applications (such as musculoskeletal modeling) requiring better approximations of the joints of interest, thanks to the integration of validated joint mechanisms.

  19. A model-based approach to associate complexity and robustness in engineering systems

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; D. Frey, Daniel; Howard, Thomas J.

    2017-01-01

    Ever increasing functionality and complexity of products and systems challenge development companies in achieving high and consistent quality. A model-based approach is used to investigate the relationship between system complexity and system robustness. The measure for complexity is based ... on the degree of functional coupling and the level of contradiction in the couplings. Whilst Suh's independence axiom states that functional independence (uncoupled designs) produces more robust designs, this study proves this not to be the case for max-/min-is-best requirements, and only to be true ...-is-best requirements, the robustness is most affected by the level of contradiction between coupled functional requirements (p = 1.4e−36). In practice, the results imply that if the main influencing factors for each function in a system are known in the concept phase, an evaluation of the contradiction level can ...

  20. A Model-free Approach to Fault Detection of Continuous-time Systems Based on Time Domain Data

    Institute of Scientific and Technical Information of China (English)

    Ping Zhang; Steven X. Ding

    2007-01-01

    In this paper, a model-free approach is presented to design an observer-based fault detection system for linear continuous-time systems, using input and output data in the time domain. The core of the approach is to directly identify the parameters of the observer-based residual generator from a numerically reliable data equation obtained by filtering and sampling the input and output signals.

  1. A Fuzzy Set-Based Approach for Model-Based Internet-Banking System Security Risk Assessment

    Institute of Scientific and Technical Information of China (English)

    LI Hetian; LIU Yun; HE Dequan

    2006-01-01

    A fuzzy set-based evaluation approach is demonstrated to assess the security risks of an Internet-banking system. The Internet-banking system is semi-formally described using the Unified Modeling Language (UML) to specify the behavior and state of the system, building on an analysis of existing qualitative risk assessment methods. A quantitative method based on fuzzy sets is then used to measure the security risks of the system. A case study was performed on the Web server of the Internet-banking system, using the fuzzy set-based assessment algorithm to quantitatively compute the security risk severity. The numeric result also provides a method to identify the most critical component, which should receive enough attention from the system administrator that appropriate security measures or controls are taken to alleviate the risk severity. The experiments show this method can be used to quantify the security properties of an Internet-banking system in practice.

  2. State-Space Modeling and Performance Analysis of Variable-Speed Wind Turbine Based on a Model Predictive Control Approach

    Directory of Open Access Journals (Sweden)

    H. Bassi

    2017-04-01

    Advancements in wind energy technologies have led wind turbines from fixed-speed to variable-speed operation. This paper introduces an innovative version of a variable-speed wind turbine based on a model predictive control (MPC) approach. The proposed approach provides maximum power point tracking (MPPT), whose main objective is to capture the maximum wind energy in spite of the variable nature of the wind speed. The proposed MPC approach also handles the constraints of the two main operating regions of the wind turbine: the full-load and partial-load segments. The pitch angle for full load and the torque for partial load are set concurrently in order to balance power generation as well as to reduce pitch-angle activity. A mathematical analysis of the proposed system using a state-space approach is introduced. The simulation results using MATLAB/SIMULINK show that the performance of the wind turbine with the MPC approach is improved compared to a traditional PID controller at both low and high wind speeds.

  3. An Ionospheric Index Model based on Linear Regression and Neural Network Approaches

    Science.gov (United States)

    Tshisaphungo, Mpho; McKinnell, Lee-Anne; Bosco Habarulema, John

    2017-04-01

    The ionosphere is well known to reflect radio wave signals in the high frequency (HF) band due to the presence of electrons and ions within the region. To optimise the use of long-distance HF communications, it is important to understand the drivers of ionospheric storms and accurately predict the propagation conditions, especially during disturbed days. This paper presents the development of an ionospheric storm-time index over the South African region for the application of HF communication users. The model will provide a valuable tool to measure the complex ionospheric behaviour in an operational space weather monitoring and forecasting environment. The development of the ionospheric storm-time index is based on data from a single ionosonde station at Grahamstown (33.3°S, 26.5°E), South Africa. Critical frequency of the F2 layer (foF2) measurements for the period 1996-2014 were considered for this study. The model was developed based on linear regression and neural network approaches. In this talk, validation results for low, medium and high solar activity periods will be discussed to demonstrate the model's performance.

  4. A Flexible Approach to Modelling Adaptive Course Sequencing based on Graphs implemented using XLink

    Directory of Open Access Journals (Sweden)

    Rachid ELOUAHBI

    2012-02-01

    A major challenge in developing distance learning systems is the ability to adapt learning to individual users. This adaptation requires a flexible scheme for sequencing the material to teach diverse learners. Our contribution is to model the personalized learning paths to be followed by a learner to achieve his/her educational objective. Our modelling approach to sequencing is based on a pedagogical graph called SMARTGraph. This graph expresses the totality of the pedagogic constraints to which the learner is subject in order to achieve his/her pedagogic objective. SMARTGraph is a graph in which the nodes are the learning units and the arcs are the pedagogic constraints between learning units. We show how it is possible to organize the learning units and the learning paths to meet expectations within the framework of individual courses, according to the learner profile, or within the framework of group courses. To implement our approach we exploit the strength of XLink (XML Linking Language) to define the sequencing graph.

  5. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    Science.gov (United States)

    Shapiro, B.; Jin, Q.

    2015-12-01

    Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and the thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration and applied in FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and the COBRA Toolbox. We applied the method to acetotrophic methanogenesis by Methanosarcina barkeri and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted the observations of previous experiments well. In comparison, traditional methods of dynamic FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
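
    A sketch of the kinetic-thermodynamic constraint is given below. It follows one common form of the revised Monod rate law (after Jin and Bethke), r = k·X·F_K·F_T with F_K = S/(K+S) and F_T = 1 − exp(−f/(χRT)); whether this matches the paper's exact formulation is an assumption, and all parameter values are placeholders rather than values calibrated for Methanosarcina barkeri.

```python
import numpy as np

R = 8.314  # gas constant [J/(mol K)]

def respiration_rate(S, K, k, X, dG_redox, dG_atp=45e3, m=1.0, chi=2.0, T=298.15):
    """Revised Monod rate coupling kinetics and thermodynamics (sketch).

    F_K = S / (K + S) is the kinetic factor; F_T = 1 - exp(-f / (chi*R*T))
    is the thermodynamic factor with driving force f = -(dG_redox + m*dG_atp).
    All parameter values here are illustrative placeholders.
    """
    FK = S / (K + S)
    f = -(dG_redox + m * dG_atp)          # thermodynamic driving force [J/mol]
    FT = max(0.0, 1.0 - np.exp(-f / (chi * R * T)))
    return k * X * FK * FT                # substrate turnover per unit time

# Example: the rate collapses as catabolism approaches equilibrium
for dG in (-70e3, -50e3):                 # catabolic energy yield [J/mol]
    r = respiration_rate(S=1e-3, K=5e-4, k=1e-5, X=1.0, dG_redox=dG)
    print(f"dG_redox = {dG/1e3:.0f} kJ/mol -> rate = {r:.2e}")
```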

  6. A Simple Model-Based Approach to Inferring and Visualizing Cancer Mutation Signatures.

    Science.gov (United States)

    Shiraishi, Yuichi; Tremmel, Georg; Miyano, Satoru; Stephens, Matthew

    2015-12-01

    Recent advances in sequencing technologies have enabled the production of massive amounts of data on somatic mutations from cancer genomes. These data have led to the detection of characteristic patterns of somatic mutations or "mutation signatures" at an unprecedented resolution, with the potential for new insights into the causes and mechanisms of tumorigenesis. Here we present new methods for modelling, identifying and visualizing such mutation signatures. Our methods greatly simplify mutation signature models compared with existing approaches, reducing the number of parameters by orders of magnitude even while increasing the contextual factors (e.g. the number of flanking bases) that are accounted for. This improves both sensitivity and robustness of inferred signatures. We also provide a new intuitive way to visualize the signatures, analogous to the use of sequence logos to visualize transcription factor binding sites. We illustrate our new method on somatic mutation data from urothelial carcinoma of the upper urinary tract, and a larger dataset from 30 diverse cancer types. The results illustrate several important features of our methods, including the ability of our new visualization tool to clearly highlight the key features of each signature, the improved robustness of signature inferences from small sample sizes, and more detailed inference of signature characteristics such as strand biases and sequence context effects at the base two positions 5' to the mutated site. The overall framework of our work is based on probabilistic models that are closely connected with "mixed-membership models" which are widely used in population genetic admixture analysis, and in machine learning for document clustering. We argue that recognizing these relationships should help improve understanding of mutation signature extraction problems, and suggests ways to further improve the statistical methods. Our methods are implemented in an R package pmsignature (https

  7. Network on Chip: a New Approach of QoS Metric Modeling Based on Calculus Theory

    Directory of Open Access Journals (Sweden)

    Salem NASRI

    2011-10-01

    According to the ITRS, in 2018 ICs will be able to integrate billions of transistors, with feature sizes around 18 nm and clock frequencies near 10 GHz. In this context, the Network on Chip (NoC) appears as an attractive solution to implement future high performance networks with more QoS management. A NoC is composed of IP (Intellectual Property) cores and switches connected among themselves by communication channels. End-to-end communication is accomplished by the exchange of data among IP cores, and its End-to-End Delay (EED) is a key quantity. Often, the structure of particular messages is not adequate for the communication purposes. This leads to the concept of packet switching. In the context of NoCs, packets are composed of a header, payload, and trailer, and are divided into small pieces called flits. To meet the required performance, NoC hardware resources should be specified in an early step of the system design. The main attention should be given to the choice of network parameters such as the physical buffer size in the node. The EED and packet loss are some of the critical QoS metrics. Some real-time and multimedia applications place bounds on these parameters and require specific hardware resources and particular management approaches in the NoC switch. A traffic contract (SLA, Service Level Agreement) specifies the ability of a network or protocol to give guaranteed performance, throughput or latency bounds based on mutually agreed measures, usually by prioritizing traffic. A defined Quality of Service (QoS) may be required for some types of network real-time traffic or multimedia applications. The main goal of this paper is, using the Network on Chip modeling architecture, to define a QoS metric. We focus on the network delay bound and packet losses. This approach is based on Network Calculus theory, a mathematical model representing the behavior of data flows between IPs interconnected over the NoC. We propose an approach of QoS-metric based on Qo
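
    For the delay bound itself, network calculus gives closed forms for simple curve shapes. The sketch below evaluates the standard bounds for a token-bucket arrival curve α(t) = b + rt served by a rate-latency service curve β(t) = R·max(t − T, 0); the NoC numbers in the example are hypothetical.

```python
def network_calculus_bounds(r, b, R, T):
    """Deterministic bounds from network calculus (standard results).

    For a flow with arrival curve alpha(t) = b + r*t served by a
    rate-latency service curve beta(t) = R * max(t - T, 0), with r <= R:
      delay   <= T + b / R
      backlog <= b + r * T
    Units are illustrative (flits, flits/cycle, cycles).
    """
    if r > R:
        raise ValueError("flow rate exceeds service rate: no finite bound")
    return {"delay_bound": T + b / R, "backlog_bound": b + r * T}

# Hypothetical NoC virtual channel: 8-flit bursts at 0.2 flit/cycle, with
# the switch guaranteeing 0.5 flit/cycle after a 4-cycle latency.
print(network_calculus_bounds(r=0.2, b=8, R=0.5, T=4))
# {'delay_bound': 20.0, 'backlog_bound': 8.8}
```

    The backlog bound is what sizes the physical buffer in the node; the delay bound is the EED guarantee a traffic contract can promise.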

  8. Urban Energy Simulation Based on 3d City Models: a Service-Oriented Approach

    Science.gov (United States)

    Wate, P.; Rodrigues, P.; Duminil, E.; Coors, V.

    2016-09-01

    Recent advancements in technology have led to the development of sophisticated software tools, revitalizing growth in different domains. Taking advantage of this trend, the urban energy domain has developed several compute-intensive physical and data-driven models. These models are used in various distinct simulation software packages to simulate the whole life-cycle of energy flow in cities, from supply, distribution, conversion and storage to consumption. Since each simulation package targets a specific energy system, it is necessary to integrate them to predict present and future urban energy needs. However, a key drawback is that these tools are not compatible with each other, as they use custom or proprietary formats. Furthermore, they are designed as desktop applications and cannot be easily integrated with third-party tools (open source or commercial), thereby missing out on potential model functionalities which are required for sustainable urban energy management. In this paper, we propose a solution based on a Service Oriented Architecture (SOA). Our approach relies on open interfaces to offer flexible integration of modelling and computational functionality as loosely coupled distributed services.

  9. APPROACH TO SYNTHESIS OF PASSIVE INFRARED DETECTORS BASED ON QUASI-POINT MODEL OF QUALIFIED INTRUDER

    Directory of Open Access Journals (Sweden)

    I. V. Bilizhenko

    2017-01-01

    Subject of Research. The paper deals with the synthesis of passive infrared (PIR) detectors with enhanced capability to detect a qualified intruder who uses different types of detection countermeasures: the choice of a specific movement direction and disguise in the infrared band. Methods. We propose an approach based on a quasi-point model of the qualified intruder. It includes: separation of the model's priority parameters, formation of partial detection patterns adapted to those parameters, and multi-channel signal processing. Main Results. A quasi-point model of the qualified intruder consisting of different fragments was suggested. The power density difference was used for estimating the model parameters. Criteria were formulated for the choice of detection pattern parameters on the basis of the model parameters. A pyroelectric sensor with nine sensitive elements was applied to increase the information content of the signal. Multi-channel processing with multiple partial detection patterns, optimized for detection of the intruder's specific movement direction, was proposed. Practical Relevance. The developed functional device diagram can be realized both in hardware and software and is applicable as one of the detection channels for dual-technology passive infrared and microwave detectors.

  10. Forward and Reverse Process Models for the Squeeze Casting Process Using Neural Network Based Approaches

    Directory of Open Access Journals (Sweden)

    Manjunath Patel Gowdru Chandrashekarappa

    2014-01-01

    The present research work is focused on developing an intelligent system to establish the input-output relationship utilizing forward and reverse mappings of artificial neural networks. Forward mapping aims at predicting the density and secondary dendrite arm spacing (SDAS) from a known set of squeeze casting process parameters, such as time delay, pressure duration, squeeze pressure, pouring temperature, and die temperature. An attempt is also made to meet the industrial requirement of developing a reverse model to predict the recommended squeeze casting parameters for a desired density and SDAS. Two different neural network based approaches have been proposed to carry out this task, namely, the back propagation neural network (BPNN) and the genetic algorithm neural network (GA-NN). The batch mode of training is employed for both supervised learning networks and requires a large amount of training data. This training data is generated artificially, at random, using regression equations derived from real experiments carried out earlier by the same authors. The performances of the BPNN and GA-NN models are compared with each other and with regression for ten test cases. The results show that both models are capable of making better predictions and that the models can be effectively used on the shop floor for the selection of the most influential parameters for the desired outputs.
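
    The forward/reverse idea can be sketched with two small neural networks: one maps process parameters to responses, the other maps desired responses back to recommended parameters. The synthetic relations, parameter ranges and network sizes below are invented, and the paper uses BPNN and GA-NN rather than the scikit-learn MLP shown here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(11)

# Synthetic stand-in for the regression-generated training data:
n = 500
pressure = rng.uniform(40, 120, n)            # squeeze pressure [MPa]
die_temp = rng.uniform(150, 300, n)           # die temperature [C]
density = 2.55 + 0.002 * pressure + 0.0004 * die_temp + rng.normal(0, 0.005, n)
sdas = 40 - 0.1 * pressure - 0.02 * die_temp + rng.normal(0, 0.5, n)

X = np.column_stack([pressure, die_temp])     # process parameters
Y = np.column_stack([density, sdas])          # responses

# Forward model: parameters -> (density, SDAS)
fwd = make_pipeline(StandardScaler(),
                    MLPRegressor((16,), max_iter=8000, random_state=0)).fit(X, Y)
# Reverse model: desired (density, SDAS) -> recommended parameters
rev = make_pipeline(StandardScaler(),
                    MLPRegressor((16,), max_iter=8000, random_state=0)).fit(Y, X)

target = np.array([[2.75, 25.0]])             # desired density and SDAS
recommended = rev.predict(target)
print("recommended [pressure MPa, die temp C]:", recommended.round(1))
print("forward check [density, SDAS]:", fwd.predict(recommended).round(3))
```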

  11. Computationally efficient and flexible modular modelling approach for river and urban drainage systems based on surrogate conceptual models

    Science.gov (United States)

    Wolfs, Vincent; Willems, Patrick

    2015-04-01

    Water managers rely increasingly on mathematical simulation models that represent individual parts of the water system, such as the river, the sewer system or the waste water treatment plant. The current evolution towards integral water management requires the integration of these distinct components, leading to an increased model scale and scope. Besides this growing model complexity, certain applications have gained interest and importance, such as uncertainty and sensitivity analyses, auto-calibration of models and real-time control. All these applications share the need for models with a very limited calculation time, either for performing a large number of simulations, or for a long-term simulation followed by statistical post-processing of the results. The use of the commonly applied detailed models that solve (part of) the de Saint-Venant equations is infeasible for these applications or such integrated modelling for several reasons, the main ones being a too long simulation time and the inability to couple submodels made in different software environments. Instead, practitioners must use simplified models for these purposes. These models are characterized by empirical relationships and sacrifice model detail and accuracy for increased computational efficiency. The presented research discusses the development of a flexible integral modelling platform that complies with the following three key requirements: (1) include a modelling approach for water quantity predictions for rivers, floodplains, sewer systems and rainfall-runoff routing that requires a minimal calculation time; (2) allow a fast and semi-automatic model configuration, thereby making maximum use of data from existing detailed models and measurements; (3) have a calculation scheme based on open source code to allow for future extensions or coupling with other models. First, a novel and flexible modular modelling approach based on the storage cell concept was developed. This approach divides each
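
    The storage cell concept reduces each river reach or sewer stretch to a water balance with a conceptual outflow relation. A minimal linear-reservoir sketch is shown below; the explicit-Euler update, the rate constants and the inflow pulse are illustrative, not the calibrated relations of the presented platform.

```python
import numpy as np

def storage_cell_routing(inflow, k=0.5, dt=1.0, s0=0.0):
    """Single conceptual storage cell (linear reservoir): dS/dt = I - O, O = k*S.

    Explicit-Euler sketch of the storage cell concept mentioned above;
    k [1/h], dt [h] and the inflow series are illustrative.
    """
    s, outflow = s0, []
    for i in inflow:
        o = k * s
        s += dt * (i - o)              # water balance of the cell
        outflow.append(o)
    return np.array(outflow)

# A storm pulse routed through two cells in series (e.g. two river reaches)
inflow = np.concatenate([np.zeros(5), np.full(10, 3.0), np.zeros(25)])  # m3/s
reach = storage_cell_routing(inflow, k=0.4)
downstream = storage_cell_routing(reach, k=0.3)
print(f"peak attenuated from {inflow.max():.1f} to {downstream.max():.2f} m3/s")
```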

  12. Modeling Personalized Email Prioritization: Classification-based and Regression-based Approaches

    Energy Technology Data Exchange (ETDEWEB)

    Yoo S.; Yang, Y.; Carbonell, J.

    2011-10-24

    Email overload, even after spam filtering, presents a serious productivity challenge for busy professionals and executives. One solution is automated prioritization of incoming emails to ensure the most important are read and processed quickly, while others are processed later as/if time permits, in declining priority levels. This paper presents a study of machine learning approaches to email prioritization into discrete levels, comparing ordinal regression versus classifier cascades. Given the ordinal nature of discrete email priority levels, SVM ordinal regression would be expected to perform well, but surprisingly a cascade of SVM classifiers significantly outperforms ordinal regression for email prioritization. In contrast, SVM regression performs well -- better than classifiers -- on selected UCI data sets. This unexpected performance inversion is analyzed and results are presented, providing core functionality for email prioritization systems.
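
    A generic cascade in the spirit described above reduces K ordinal levels to K-1 binary SVMs ("is the priority greater than k?") and predicts the level as the number of positive answers (the Frank-Hall reduction). This is an illustrative reconstruction, not the authors' exact cascade; the one-dimensional data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

class OrdinalSVMCascade:
    """Frank-Hall-style reduction of ordinal levels to binary SVMs."""

    def __init__(self, n_levels):
        self.clfs = [SVC() for _ in range(n_levels - 1)]

    def fit(self, X, y):
        for k, clf in enumerate(self.clfs):
            clf.fit(X, (y > k).astype(int))     # binary task per threshold
        return self

    def predict(self, X):
        votes = np.stack([clf.predict(X) for clf in self.clfs])
        return votes.sum(axis=0)                # level = count of "greater" votes

# Toy 1-D feature where higher values mean higher priority (levels 0..3)
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 1))
y = np.digitize(X[:, 0] + rng.normal(0, 0.3, 400), [-1.0, 0.0, 1.0])
model = OrdinalSVMCascade(n_levels=4).fit(X[:300], y[:300])
print("hold-out accuracy:", (model.predict(X[300:]) == y[300:]).mean())
```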

  13. A taxonomy-based approach to shed light on the babel of mathematical models for rice simulation

    NARCIS (Netherlands)

    Confalonieri, Roberto; Bregaglio, Simone; Adam, Myriam; Ruget, Françoise; Li, Tao; Hasegawa, Toshihiro; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Buis, Samuel; Fumoto, Tamon; Gaydon, Donald; Lafarge, Tanguy; Marcaida, Manuel; Nakagawa, Hiroshi; Ruane, Alex C.; Singh, Balwinder; Singh, Upendra; Tang, Liang; Tao, Fulu; Fugice, Job; Yoshida, Hiroe; Zhang, Zhao; Wilson, Lloyd T.; Baker, Jeff; Yang, Yubin; Masutomi, Yuji; Wallach, Daniel; Acutis, Marco; Bouman, Bas

    2016-01-01

    For most biophysical domains, differences in model structures are seldom quantified. Here, we used a taxonomy-based approach to characterise thirteen rice models. Classification keys and binary attributes for each key were identified, and models were categorised into five clusters using a binary

  14. Diffusion of a Sustainable Farming Technique in Sri Lanka: An Agent-Based Modeling Approach

    Science.gov (United States)

    Jacobi, J. H.; Gilligan, J. M.; Carrico, A. R.; Truelove, H. B.; Hornberger, G.

    2012-12-01

    We live in a changing world - anthropogenic climate change is disrupting historic climate patterns and social structures are shifting as large-scale population growth and massive migrations place unprecedented strain on natural and social resources. Agriculture in many countries is affected by these changes in the social and natural environments. In Sri Lanka, rice farmers in the Mahaweli River watershed have seen increases in temperature and decreases in precipitation. In addition, a government-led resettlement project has altered the demographics and social practices in villages throughout the watershed. These changes have the potential to impact rice yields in a country where self-sufficiency in rice production is a point of national pride. Studies of the climate can elucidate physical effects on rice production, while research on social behaviors can illuminate the influence of community dynamics on agricultural practices. Only an integrated approach, however, can capture the combined and interactive impacts of these global changes on Sri Lankan agriculture. As part of an interdisciplinary team, we present an agent-based modeling (ABM) approach to studying the effects of physical and social changes on farmers in Sri Lanka. In our research, the diffusion of a sustainable farming technique, the system of rice intensification (SRI), throughout a farming community is modeled to identify factors that either inhibit or promote the spread of a more sustainable approach to rice farming. Inputs into the ABM are both physical and social and include temperature, precipitation, the Palmer Drought Severity Index (PDSI), community trust, and social networks. Outputs from the ABM demonstrate the importance of meteorology and social structure to the diffusion of SRI throughout a farming community.
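
    A stripped-down version of such an ABM captures the mechanics: each farmer agent adopts SRI with a probability that grows with the share of adopting neighbors, the agent's trust level and a drought stressor. The network, coefficients and functional form below are invented for illustration, not the calibrated model.

```python
import numpy as np

rng = np.random.default_rng(9)
n_farmers, n_years = 200, 15

# Static random social network: each farmer observes six others
neighbors = [rng.choice(n_farmers, size=6, replace=False) for _ in range(n_farmers)]
adopted = np.zeros(n_farmers, dtype=bool)
adopted[rng.choice(n_farmers, size=5, replace=False)] = True   # early adopters
trust = rng.uniform(0.2, 1.0, n_farmers)                       # community trust

for year in range(1, n_years + 1):
    drought = rng.uniform(0.0, 1.0)          # stand-in for a PDSI-like stressor
    peer_share = np.array([adopted[nb].mean() for nb in neighbors])
    # Invented adoption rule: peer influence scaled by trust, plus drought push
    p_adopt = 0.05 + 0.4 * peer_share * trust + 0.1 * drought
    adopted |= rng.random(n_farmers) < p_adopt
    if year % 5 == 0:
        print(f"year {year:2d}: {adopted.mean():5.1%} of farmers use SRI")
```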

  15. GIS based site and structure selection model for groundwater recharge: a hydrogeomorphic approach.

    Science.gov (United States)

    Vijay, Ritesh; Sohony, R A

    2009-10-01

    Groundwater in India is facing a critical situation due to overexploitation, reduction in recharge potential through changes in land use and land cover, and improper planning and management. A groundwater development plan needs a large volume of multidisciplinary data from various sources. A geographic information system (GIS) based hydrogeomorphic approach can provide an appropriate platform for spatial analysis of diverse data sets for decision making in groundwater recharge. The paper presents the development of a GIS-based model to provide more accuracy in identifying suitable zones and locating suitable sites, with suggested structures, for artificial recharge. Satellite images were used to prepare the geomorphological and land use maps. For site selection, layers such as slope, surface infiltration, and drainage order were generated and integrated in GIS using weighted index overlay analysis and Boolean logic. Similarly, for the identification of suitable structures, a decision matrix was programmed based on local climatic, topographic, hydrogeologic and land use conditions, as per the artificial recharge manual of the Central Ground Water Board, India. The GIS-based algorithm is implemented in a user-friendly way using Arc Macro Language on the Arc/Info platform.
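
    Weighted index overlay reduces to an array operation once each thematic layer has been reclassified to a common suitability scale. The toy rasters, scores and weights below are invented; a real application would use the reclassification rules of the Central Ground Water Board manual cited above.

```python
import numpy as np

# Toy 4x4 rasters reclassified to suitability scores 1 (poor) .. 5 (good)
slope = np.array([[5, 4, 2, 1], [4, 4, 2, 1], [3, 3, 2, 2], [2, 2, 1, 1]])
infiltration = np.array([[4, 4, 3, 2], [5, 4, 3, 2], [3, 3, 2, 1], [2, 2, 2, 1]])
drainage = np.array([[3, 4, 4, 2], [3, 3, 4, 2], [2, 3, 3, 1], [1, 2, 2, 1]])

layers = {"slope": (slope, 0.4), "infiltration": (infiltration, 0.35),
          "drainage": (drainage, 0.25)}

# Weighted index overlay: suitability = sum_i weight_i * score_i
suitability = sum(w * grid for grid, w in layers.values())

# Boolean constraint: exclude cells with the worst slope class
mask = slope > 1
recharge_zones = np.where(mask & (suitability >= 3.5))

print(np.round(suitability, 2))
print("candidate recharge cells (row, col):", list(zip(*recharge_zones)))
```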

  16. Modeling interdependent socio-technical networks: The smart grid—an agent-based modeling approach

    NARCIS (Netherlands)

    Worm, D.; Langley, D.J.; Becker, J.

    2014-01-01

    The aim of this paper is to improve scientific modeling of interdependent socio-technical networks. In these networks the interplay between technical or infrastructural elements on the one hand and social and behavioral aspects on the other hand, plays an important role. Examples include electricity

  17. A gradient-descent-based approach for transparent linguistic interface generation in fuzzy models.

    Science.gov (United States)

    Chen, Long; Chen, C L Philip; Pedrycz, Witold

    2010-10-01

    A linguistic interface is a group of linguistic terms or fuzzy descriptions that describe the variables in a system using corresponding membership functions. Its transparency completely or partly decides the interpretability of fuzzy models. This paper proposes a GRadiEnt-descEnt-based Transparent lInguistic iNterface Generation (GREETING) approach to overcome the disadvantage of traditional linguistic interface generation methods, in which consideration of the interpretability aspects of the linguistic interface is limited. In GREETING, the widely used interpretability criteria of linguistic interfaces are considered and optimized. Numerical experiments on data sets from the University of California, Irvine (UCI) machine learning databases demonstrate the feasibility and superiority of the proposed GREETING method. The GREETING method is also applied to fuzzy decision tree generation. It is shown that GREETING generates more transparent fuzzy decision trees, with better classification rates and comparable tree sizes.

  18. Quantification of Uncertainties in Turbulence Modeling: A Comparison of Physics-Based and Random Matrix Theoretic Approaches

    CERN Document Server

    Wang, Jian-Xun; Xiao, Heng

    2016-01-01

    Numerical models based on the Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, RANS predictions carry large model-form uncertainties for many complex flows. Quantification of these large uncertainties originating from the modeled Reynolds stresses has attracted attention in the turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed, with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging for the current form of the physics-based approach. Another recently proposed method, based on random matrix theory, provides prior distributions with maximum entropy and is an alternative for model-form uncertainty quantification in RANS simulations. In this work, we utilize the random matrix theoretic approach to assess and possibly improve the specification of priors used in ...

  19. A Remote Sensing Based Approach For Modeling and Assessing Glacier Hazards

    Science.gov (United States)

    Huggel, C.; Kääb, A.; Salzmann, N.; Haeberli, W.; Paul, F.

    Glacier-related hazards such as ice avalanches and glacier lake outbursts can pose a significant threat to population and installations in high mountain regions. They are well documented in the Swiss Alps, and the high data density is used to build up systematic knowledge of glacier hazard locations and potentials. Experience from long research activities thereby forms an important basis for ongoing hazard monitoring and assessment. However, in the context of environmental changes in general, and the highly dynamic physical environment of glaciers in particular, historical experience may increasingly lose its significance with respect to the impact zones of hazardous processes. On the other hand, in large and remote high mountains such as the Himalayas, exact information on the location and potential of glacier hazards is often missing. Therefore, it is crucial to develop hazard monitoring and assessment concepts that include area-wide applications. Remote sensing techniques offer a powerful tool to narrow current information gaps. The present contribution proposes an approach structured in (1) detection, (2) evaluation and (3) modeling of glacier hazards. Remote sensing data are used as the main input to (1). Algorithms taking advantage of multispectral, high-resolution data are applied for detecting glaciers and glacier lakes. Digital terrain modeling, and classification and fusion of panchromatic and multispectral satellite imagery, are performed in (2) to evaluate the hazard potential of possible hazard sources detected in (1). The locations found in (1) and (2) are used as input to (3). The models developed in (3) simulate the processes of lake outbursts and ice avalanches based on hydrological flow modeling and empirical values for average trajectory slopes. A probability-related function allows the model to indicate areas at lower and higher risk of being affected by catastrophic events. Application of the models to recent ice avalanches and lake outbursts show
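
    The runout part of step (3) rests on empirical average trajectory slopes, which can be illustrated in a few lines; the threshold angle and drop height below are purely illustrative assumptions, not the paper's calibrated values.

```python
import math

# Hedged sketch of the empirical runout rule: the area potentially reached by
# an ice avalanche or outburst flood is bounded by an average trajectory slope
# (the angle from the source to the farthest runout point).

def max_runout_distance(drop_height_m, avg_slope_deg):
    """Horizontal runout limit implied by an average trajectory slope."""
    return drop_height_m / math.tan(math.radians(avg_slope_deg))

# e.g. an ice avalanche source 600 m above the valley floor, 17 deg threshold
print(f"{max_runout_distance(600.0, 17.0):.0f} m")   # ~1960 m horizontal reach
```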

  20. Query Large Scale Microarray Compendium Datasets Using a Model-Based Bayesian Approach with Variable Selection

    Science.gov (United States)

    Hu, Ming; Qin, Zhaohui S.

    2009-01-01

    In microarray gene expression data analysis, it is often of interest to identify genes that share similar expression profiles with a particular gene such as a key regulatory protein. Multiple studies have been conducted using various correlation measures to identify co-expressed genes. While these approaches work well for small datasets, the heterogeneity introduced by increased sample size inevitably reduces their sensitivity and specificity. This is because most co-expression relationships do not extend to all experimental conditions. With the rapid increase in the size of microarray datasets, identifying functionally related genes from large and diverse microarray gene expression datasets is a key challenge. We develop a model-based gene expression query algorithm built under the Bayesian model selection framework. It is capable of detecting co-expression profiles under a subset of samples/experimental conditions. In addition, it allows linearly transformed expression patterns to be recognized and is robust against sporadic outliers in the data. Both features are critically important for increasing the power of identifying co-expressed genes in large scale gene expression datasets. Our simulation studies suggest that this method outperforms existing correlation coefficients or mutual information-based query tools. When we apply this new method to the Escherichia coli microarray compendium data, it identifies a majority of known regulons as well as novel potential target genes of numerous key transcription factors. PMID:19214232

  1. A conceptual model of food quality management functions based on a techno-managerial approach

    NARCIS (Netherlands)

    Luning, P.A.; Marcelis, W.J.

    2007-01-01

    In agribusiness and food industry quality management problems are commonly approached by applying control systems and procedures. However, assuming predictable food systems and people following procedures seems too straightforward. This paper introduces a model of food quality management functions

  3. Chromium-based rings within the DFT and Falicov-Kimball model approach

    Science.gov (United States)

    Brzostowski, B.; Lemański, R.; Ślusarski, T.; Tomecka, D.; Kamieniarz, G.

    2013-04-01

    We present a comprehensive study of the electronic and magnetic properties of octometallic homo- and heteronuclear chromium-based molecular rings Cr7MF8(O2CH)16 (in short Cr7M, M = Cr, Cd and Ni) using first-principles density functional theory (DFT) and pseudopotential ideas. Their radii are around 1 nm. For each Cr7M, the antiferromagnetic configuration corresponds to the ground state and the ferromagnetic (high-spin, HS) configuration to the highest-energy state. Using the broken symmetry (BS) approach, the differences between the total energies of the HS configuration and all the nonequivalent low-spin configurations with s = ±3/2 are calculated and exploited to extract the coupling parameters J between the magnetic ions. Magnetic moments are found to be well localised on the Cr and Ni centres, although the localisation of spin density on Ni is weaker. Having calculated the excess energies for an unprecedented number of configurations, a family of Ising-like models with nearest- and next-nearest-neighbour interactions has been considered. For each Cr7M, the values of the interaction parameters found within the unprojected method are coherent, despite the overdetermination problem, and demonstrate that the next-nearest-neighbour couplings are negligible. The DFT estimates of the nearest-neighbour couplings are overestimated and, to capture the relation J Cr-Cr/J Cr-Ni, a description based on the Falicov-Kimball (FK) model is suggested. In our approach, the effective magnetic interactions between ions are generated by local (on-site) Hund couplings between the ions and itinerant electrons. We demonstrate that the BS state energies obtained within DFT for Cr7M can be successfully represented by the FK model with a unique set of parameters.
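
    The mapping from broken-symmetry energies to exchange couplings amounts to a linear least-squares problem, illustrated below for a nearest-neighbour ring Ising model with synthetic energies; the ring size, noise level, and parameter values are invented, not the paper's DFT data.

```python
import numpy as np

# Minimal sketch of mapping broken-symmetry energies onto a ring Ising model
# E = E0 + J * sum_i s_i s_{i+1}, s_i = +/-3/2, and extracting J by least
# squares over many spin configurations. Energies here are synthetic.

N, J_true, E0_true = 8, 1.0, 0.0     # ring length and "true" parameters (meV)

def ising_energy(spins, J, E0):
    return E0 + J * np.sum(spins * np.roll(spins, -1))

rng = np.random.default_rng(1)
configs = rng.choice([-1.5, 1.5], size=(40, N))             # BS configurations
energies = np.array([ising_energy(s, J_true, E0_true) for s in configs])
energies += rng.normal(0.0, 0.05, size=energies.size)       # stand-in "DFT noise"

# Linear design matrix: E = E0 * 1 + J * (sum of s_i s_{i+1})
A = np.column_stack([np.ones(len(configs)),
                     [np.sum(s * np.roll(s, -1)) for s in configs]])
E0_fit, J_fit = np.linalg.lstsq(A, energies, rcond=None)[0]
print(f"fitted E0={E0_fit:.3f}, J={J_fit:.3f}")
```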

  4. A Model-Based Approach for Planning and Developing a Family of Technology-Based Products

    OpenAIRE

    V. Krishnan; Rahul Singh; Devanath Tirupati

    1999-01-01

    In this paper, we address the product-family design problem of a firm in a market in which customers choose products based on some measure of product performance. By developing products as a family, the firm can reduce the cost of developing individual product variants due to the reuse of a common product platform. Such a platform, designed in an aggregate-planning phase that precedes the development of individual product variants, is itself expensive to develop. Hence, its costs must be weig...

  5. Comprehensive model of annual plankton succession based on the whole-plankton time series approach.

    Science.gov (United States)

    Romagnan, Jean-Baptiste; Legendre, Louis; Guidi, Lionel; Jamet, Jean-Louis; Jamet, Dominique; Mousseau, Laure; Pedrotti, Maria-Luiza; Picheral, Marc; Gorsky, Gabriel; Sardet, Christian; Stemmann, Lars

    2015-01-01

    Ecological succession provides a widely accepted description of seasonal changes in phytoplankton and mesozooplankton assemblages in the natural environment, but concurrent changes in smaller (i.e. microbes) and larger (i.e. macroplankton) organisms are not included in the model because plankton ranging from bacteria to jellies are seldom sampled and analyzed simultaneously. Here we studied, for the first time in the aquatic literature, the succession of marine plankton in the whole-plankton assemblage that spanned 5 orders of magnitude in size from microbes to macroplankton predators (not including fish or fish larvae, for which no consistent data were available). Samples were collected in the northwestern Mediterranean Sea (Bay of Villefranche) weekly during 10 months. Simultaneously collected samples were analyzed by flow cytometry, inverse microscopy, FlowCam, and ZooScan. The whole-plankton assemblage underwent sharp reorganizations that corresponded to bottom-up events of vertical mixing in the water-column, and its development was top-down controlled by large gelatinous filter feeders and predators. Based on the results provided by our novel whole-plankton assemblage approach, we propose a new comprehensive conceptual model of the annual plankton succession (i.e. whole plankton model) characterized by both stepwise stacking of four broad trophic communities from early spring through summer, which is a new concept, and progressive replacement of ecological plankton categories within the different trophic communities, as recognised traditionally.

  6. Comprehensive model of annual plankton succession based on the whole-plankton time series approach.

    Directory of Open Access Journals (Sweden)

    Jean-Baptiste Romagnan

    Full Text Available Ecological succession provides a widely accepted description of seasonal changes in phytoplankton and mesozooplankton assemblages in the natural environment, but concurrent changes in smaller (i.e. microbes) and larger (i.e. macroplankton) organisms are not included in the model because plankton ranging from bacteria to jellies are seldom sampled and analyzed simultaneously. Here we studied, for the first time in the aquatic literature, the succession of marine plankton in the whole-plankton assemblage that spanned 5 orders of magnitude in size from microbes to macroplankton predators (not including fish or fish larvae, for which no consistent data were available). Samples were collected in the northwestern Mediterranean Sea (Bay of Villefranche) weekly during 10 months. Simultaneously collected samples were analyzed by flow cytometry, inverse microscopy, FlowCam, and ZooScan. The whole-plankton assemblage underwent sharp reorganizations that corresponded to bottom-up events of vertical mixing in the water-column, and its development was top-down controlled by large gelatinous filter feeders and predators. Based on the results provided by our novel whole-plankton assemblage approach, we propose a new comprehensive conceptual model of the annual plankton succession (i.e. whole plankton model) characterized by both stepwise stacking of four broad trophic communities from early spring through summer, which is a new concept, and progressive replacement of ecological plankton categories within the different trophic communities, as recognised traditionally.

  7. An approach for model-based energy cost analysis of industrial automation systems

    Energy Technology Data Exchange (ETDEWEB)

    Beck, A.; Goehner, P. [Institute of Industrial Automation and Software Engineering, University of Stuttgart, Pfaffenwaldring 47, 70550 Stuttgart (Germany)]

    2012-08-15

    Current energy reports confirm the steadily widening gap between available conventional energy resources and future energy demand. This gap results in increasing energy costs and has become a determining factor in economies. Hence, politics, industry, and research focus either on regenerative energy resources or on energy-efficient concepts, methods, and technologies for energy-consuming devices. A remaining challenge is the energy optimization of complex systems during their operation time. In addition to optimization measures that can be applied in development and engineering, the generation of optimization measures that are customized to the specific dynamic operational situation promises high cost-saving potentials. During operation time, the systems are located in unique situations and environments and are operated according to the individual requirements of their users. Hence, in addition to the complexity of the systems, the individuality and dynamic variability of their surroundings during operation time complicate the identification of goal-oriented optimization measures. This contribution introduces a model-based approach for user-centric energy cost analysis of industrial automation systems. The approach allows the automated generation and application of individual optimization proposals. The focus of this paper is on a basic variant for a single industrial automation system and its operational parameters.

  8. Approximation of skewed interfaces with tensor-based model reduction procedures: Application to the reduced basis hierarchical model reduction approach

    Science.gov (United States)

    Ohlberger, Mario; Smetana, Kathrin

    2016-09-01

    In this article we introduce a procedure which allows one to recover the potentially very good approximation properties of tensor-based model reduction procedures for the solution of partial differential equations in the presence of interfaces or strong gradients in the solution which are skewed with respect to the coordinate axes. The two key ideas are the location of the interface, either by solving a lower-dimensional partial differential equation or by using data functions, and the subsequent removal of the interface from the solution by choosing the determined interface as the lifting function of the Dirichlet boundary conditions. We demonstrate in numerical experiments for linear elliptic equations and the reduced basis-hierarchical model reduction approach that the proposed procedure locates the interface well and yields a significantly improved convergence behavior, even in the case where we only consider an approximation of the interface.

  9. The localization and correction of errors in models: a constraint-based approach

    OpenAIRE

    Piechowiak, S.; Rodriguez, J

    2005-01-01

    Model-based diagnosis and constraint-based reasoning are well-known generic paradigms for which the most difficult task lies in the construction of the models used. We consider the problem of localizing and correcting the errors in a model. We present a method to debug a model. To help the debugging task, we propose to use the model-based diagnosis solver. This method has been used in a real application: the development of a model of a railway signalling system.

  10. A new methodology for building energy benchmarking: An approach based on clustering concept and statistical models

    Science.gov (United States)

    Gao, Xuefeng

    Though many building energy benchmarking programs have been developed during the past decades, they have certain limitations. The major concern is that they may produce misleading benchmarks by not fully considering the impacts of the multiple features of buildings on energy performance. The existing methods classify buildings according to only one of many features, the use type, which may result in a comparison between two buildings that are tremendously different in other features and therefore not properly comparable. This research aims to tackle this challenge by proposing a new methodology based on the clustering concept and statistical analysis. The clustering concept, which draws on machine learning algorithms, classifies buildings based on a multi-dimensional domain of building features, rather than the single dimension of use type. Buildings with the greatest similarity of features that influence energy performance are classified into the same cluster and benchmarked according to the centroid reference of the cluster. Statistical analysis is applied to find the most influential features impacting building energy performance, as well as to provide prediction models for new-design energy consumption. The proposed methodology, applicable to both existing-building benchmarking and new-design benchmarking, is discussed in this dissertation. The former contains four steps: feature selection, clustering algorithm adaptation, results validation, and interpretation. The latter consists of three parts: data observation, inverse modeling, and forward modeling. The experimentation and validation were carried out for both perspectives. It was shown that the proposed methodology could account for total building energy performance and provide a more comprehensive approach to benchmarking. In addition, the multi-dimensional clustering concept enables energy benchmarking among different types of buildings, and inspires a new
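
    The clustering idea can be sketched compactly: cluster buildings on a standardized multi-dimensional feature vector and benchmark each building against its cluster's centroid reference. The features and data below are invented assumptions; the dissertation's feature-selection and validation steps are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hedged sketch: group buildings by several features (not by use type alone)
# and benchmark each building against its cluster's mean energy-use intensity.

rng = np.random.default_rng(0)
n = 120
features = np.column_stack([
    rng.uniform(500, 50_000, n),     # floor area (m^2)
    rng.uniform(1, 40, n),           # number of floors
    rng.uniform(1000, 4000, n),      # heating degree days at the site
])
eui = rng.uniform(80, 400, n)        # measured EUI (kWh/m^2/yr)

# Standardize features so no single dimension dominates the distance metric.
Z = (features - features.mean(axis=0)) / features.std(axis=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)

for k in range(4):
    ref = eui[labels == k].mean()    # centroid reference of the cluster
    worst = (eui[labels == k] / ref).max()
    print(f"cluster {k}: reference EUI={ref:.0f}, worst ratio={worst:.2f}")
```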

  11. Recent developments in predictive uncertainty assessment based on the model conditional processor approach

    Directory of Open Access Journals (Sweden)

    G. Coccia

    2011-10-01

    As an example, the data set provided by the NOAA's National Weather Service, within the DMIP 2 Project, allowed two physically based models, the TOPKAPI model and the TETIS model, to be calibrated, and a data-driven model to be implemented using an Artificial Neural Network. The three model forecasts have been combined with the aim of reducing the PU and improving the probabilistic forecast, taking advantage of the different capabilities of each modelling approach.

  12. Embedded System Construction: Evaluation of a Model-Driven and Component-Based Development Approach

    NARCIS (Netherlands)

    Bunse, C.; Gross, H.G.; Peper, C.

    2008-01-01

    Preprint of paper published in: Models in Software Engineering, Lecture Notes in Computer Science 5421, 2009; doi:10.1007/978-3-642-01648-6_8 Model-driven development has become an important engineering paradigm. It is said to have many advantages over traditional approaches, such as reuse or quali

  13. An ontology-based approach for evaluating the domain appropriateness and comprehensibility appropriateness of modeling languages

    NARCIS (Netherlands)

    Guizzardi, G.; Ferreira Pires, Luis; van Sinderen, Marten J.; Briand, L.; Williams, C.

    2005-01-01

    In this paper we present a framework for the evaluation and (re)design of modeling languages. We focus here on evaluating the suitability of a language to model a set of real-world phenomena in a given domain. In our approach, this property can be systematically evaluated by comparing the

  15. Modeling river total bed material load discharge using artificial intelligence approaches (based on conceptual inputs)

    Science.gov (United States)

    Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal

    2014-06-01

    This study presents Artificial Intelligence (AI)-based modeling of total bed material load aimed at improving the accuracy of the predictions of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used for developing and validating the applied techniques. In order to assess the applied techniques in relation to traditional models, stream-power-based and shear-stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. Nonetheless, it was revealed that the k-fold test is a practical but high-cost technique for a complete scan of the applied data and for avoiding over-fitting.

  16. Transaction based approach

    Science.gov (United States)

    Hunka, Frantisek; Matula, Jiri

    2017-07-01

    The transaction-based approach is utilized in some business process modeling methodologies. Essential parts of these transactions are human beings, for whom the notion of agent or actor role is usually used. Using a particular example, the paper describes the possibilities of the Design Engineering Methodology for Organizations (DEMO) and the Resource-Event-Agent (REA) methodology. Whereas the DEMO methodology can be regarded as a generic methodology with its foundation in the theory of Enterprise Ontology, the REA methodology is a domain-specific methodology with its origin in accountancy systems. The result of these approaches is that the DEMO methodology captures everything that happens in reality with good empirical evidence, whereas the REA methodology captures only changes connected with economic events. Economic events represent either a change of property rights to an economic resource, or the consumption or production of economic resources. This follows from the essence of economic events and their connection to economic resources.

  17. Recognition of Emotions in Mexican Spanish Speech: An Approach Based on Acoustic Modelling of Emotion-Specific Vowels

    OpenAIRE

    Santiago-Omar Caballero-Morales

    2013-01-01

    An approach for the recognition of emotions in speech is presented. The target language is Mexican Spanish, and for this purpose a speech database was created. The approach consists in the phoneme acoustic modelling of emotion-specific vowels. For this, a standard phoneme-based Automatic Speech Recognition (ASR) system was built with Hidden Markov Models (HMMs), where different phoneme HMMs were built for the consonants and emotion-specific vowels associated with four emotional states (anger,...

  18. A Model-Based Approach to Engineering Behavior of Complex Aerospace Systems

    Science.gov (United States)

    Ingham, Michel; Day, John; Donahue, Kenneth; Kadesch, Alex; Kennedy, Andrew; Khan, Mohammed Omair; Post, Ethan; Standley, Shaun

    2012-01-01

    One of the most challenging yet poorly defined aspects of engineering a complex aerospace system is behavior engineering, including definition, specification, design, implementation, and verification and validation of the system's behaviors. This is especially true for behaviors of highly autonomous and intelligent systems. Behavior engineering is more of an art than a science. As a process it is generally ad-hoc, poorly specified, and inconsistently applied from one project to the next. It uses largely informal representations, and results in system behavior being documented in a wide variety of disparate documents. To address this problem, JPL has undertaken a pilot project to apply its institutional capabilities in Model-Based Systems Engineering to the challenge of specifying complex spacecraft system behavior. This paper describes the results of the work in progress on this project. In particular, we discuss our approach to modeling spacecraft behavior including 1) requirements and design flowdown from system-level to subsystem-level, 2) patterns for behavior decomposition, 3) allocation of behaviors to physical elements in the system, and 4) patterns for capturing V&V activities associated with behavioral requirements. We provide examples of interesting behavior specification patterns, and discuss findings from the pilot project.

  19. Cluster approach to forming innovative model of developing mineral resources base of Russia’s regions

    Directory of Open Access Journals (Sweden)

    Andrey Gennad'evich Shelomentsev

    2012-03-01

    Full Text Available In this paper, the necessity of applying an innovative model for developing the mineral resources base of Russia's regions, and the relevance of a cluster approach for forming this model, are substantiated. Components of the clustering process in the case of innovative development of the mineral raw-materials complex are proposed and analyzed: consolidation of the socioeconomic potential of a region, consolidation of the potential of different branches of activity, and consolidation of the processes of the primary (mining) sector into a single chain. In particular, the first component implies the concentration of population in certain centers of gravitation. The second component implies the consolidation of education, fundamental as well as applied science, and production; creating administrative networks is necessary for that. For the realization of the first and second components, a clustering organization must be available. The third component of the clustering process implies, in prospect, the addition of an increasing number of stages of product manufacturing. Finally, the multi-stage structure of the innovative process is analyzed.

  20. A NURBS-based finite element model applied to geometrically nonlinear elastodynamics using a corotational approach

    KAUST Repository

    Espath, L. F R

    2015-02-03

    A numerical model to deal with nonlinear elastodynamics involving large rotations within the framework of the finite element method based on a NURBS (Non-Uniform Rational B-Spline) basis is presented. A comprehensive kinematical description using a corotational approach and an orthogonal tensor given by the exact polar decomposition is adopted. The state equation is written in terms of corotational variables according to the hypoelastic theory, relating the Jaumann derivative of the Cauchy stress to the Eulerian strain rate. The generalized-α (Gα) method and the Generalized Energy-Momentum Method with an additional parameter (GEMM+ξ) are employed in order to obtain a stable and controllable dissipative time-stepping scheme with algorithmic conservative properties for nonlinear dynamic analyses. The main contribution is to show that the energy-momentum conservation properties and numerical stability may be improved once a NURBS-based FEM is used in the spatial discretization. It is also shown that high continuity can postpone numerical instability when GEMM+ξ with consistent mass is employed; likewise, increasing the continuity class yields a decrease in the numerical dissipation. A parametric study is carried out in order to show the stability and energy budget in terms of several properties such as continuity class, spectral radius, and lumped as well as consistent mass matrices.

  1. BClass: A Bayesian Approach Based on Mixture Models for Clustering and Classification of Heterogeneous Biological Data

    Directory of Open Access Journals (Sweden)

    Arturo Medrano-Soto

    2004-12-01

    Full Text Available Based on mixture models, we present a Bayesian method (called BClass) to classify biological entities (e.g. genes) when variables of quite heterogeneous nature are analyzed. Various statistical distributions are used to model the continuous/categorical data commonly produced by genetic experiments and large-scale genomic projects. We calculate the posterior probability of each entry to belong to each element (group) in the mixture. In this way, an original set of heterogeneous variables is transformed into a set of purely homogeneous characteristics represented by the probabilities of each entry to belong to the groups. The number of groups in the analysis is controlled dynamically by rendering the groups as 'alive' and 'dormant' depending upon the number of entities classified within them. Using standard Metropolis-Hastings and Gibbs sampling algorithms, we constructed a sampler to approximate posterior moments and grouping probabilities. Since this method does not require the definition of similarity measures, it is especially suitable for data mining and knowledge discovery in biological databases. We applied BClass to classify genes in RegulonDB, a database specialized in information about the transcriptional regulation of gene expression in the bacterium Escherichia coli. The classification obtained is consistent with current knowledge and allowed prediction of missing values for a number of genes. BClass is object-oriented and fully programmed in Lisp-Stat. The output grouping probabilities are analyzed and interpreted using graphical (dynamically linked) plots and query-based approaches. We discuss the advantages of using Lisp-Stat as a programming language as well as the problems we faced when the data volume increased exponentially due to the ever-growing number of genomic projects.
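
    The core classification step of a BClass-style mixture model is Bayes' rule over per-group likelihoods; the sketch below shows it for a single continuous variable with Gaussian components and assumed mixture parameters. The actual method mixes heterogeneous likelihoods and samples the parameters by MCMC rather than fixing them.

```python
import numpy as np

# Posterior grouping probabilities in a mixture model, computed by Bayes'
# rule from per-group likelihoods and mixture weights. Parameters are assumed
# known here for illustration; BClass samples them with MCMC.

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

weights = np.array([0.5, 0.3, 0.2])        # mixture proportions
mus     = np.array([-2.0, 0.0, 3.0])
sds     = np.array([1.0, 0.5, 1.5])

x = np.array([-1.8, 0.1, 2.5])             # expression values for three "genes"
lik = normal_pdf(x[:, None], mus[None, :], sds[None, :])
post = weights * lik
post /= post.sum(axis=1, keepdims=True)    # grouping probabilities per entry
print(np.round(post, 3))
```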

  2. Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Fillippi, Anthony [Texas A&M University]; Bhaduri, Budhendra L [ORNL]; Naughton, III, Thomas J [ORNL]; King, Amy L [ORNL]; Scott, Stephen L [ORNL]; Guneralp, Inci [Texas A&M University]

    2012-01-01

    For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes) - a 40x speed-up. Tools developed for this parallel execution are discussed.
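
    The parallelization pattern here is plain batch dispatch of independent forward runs; the sketch below shows it with Python's multiprocessing, where run_rt is a hypothetical stand-in for one forward RT simulation rather than any specific radiative transfer code.

```python
import math
import time
from multiprocessing import Pool

# Batch-parallel pattern: many independent radiative-transfer runs dispatched
# to worker processes instead of a serial loop. run_rt is a placeholder for
# the expensive forward solve, not an actual RT code.

def run_rt(params):
    depth, chlorophyll = params
    s = 0.0
    for i in range(1, 200_000):          # stand-in for the expensive computation
        s += math.exp(-depth * i / 1e5) * chlorophyll
    return s

if __name__ == "__main__":
    jobs = [(d, c) for d in range(1, 53) for c in (0.1, 1.0)]  # 104 "runs"
    t0 = time.time()
    with Pool(processes=8) as pool:      # one worker per available core
        results = pool.map(run_rt, jobs)
    print(f"{len(results)} simulations in {time.time() - t0:.1f}s")
```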

  3. Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing-Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Filippi, Anthony M [ORNL]; Bhaduri, Budhendra L [ORNL]; Naughton, III, Thomas J [ORNL]; King, Amy L [ORNL]; Scott, Stephen L [ORNL]; Guneralp, Inci [Texas A&M University]

    2012-01-01

    For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes), a 40x speed-up. Tools developed for this parallel execution are discussed.

  4. A musculo-mechanical model of esophageal transport based on an immersed boundary-finite element approach

    Science.gov (United States)

    Kou, Wenjun; Griffith, Boyce E.; Pandolfino, John E.; Kahrilas, Peter J.; Patankar, Neelesh A.

    2015-11-01

    This work extends a fiber-based immersed boundary (IB) model of esophageal transport by incorporating a continuum model of the deformable esophageal wall. The continuum-based esophagus model adopts a finite element approach that is capable of describing more complex and realistic material properties and geometries. Leakage from the mismatch between Lagrangian and Eulerian meshes resulting from large deformations of the esophageal wall is avoided by a careful choice of interaction points. The esophagus model, which is described as a multi-layered, fiber-reinforced nonlinear elastic material, is coupled to bolus and muscle-activation models using the IB approach to form the esophageal transport model. Cases of esophageal transport with different esophagus models are studied. Results on the transport characteristics, including the pressure field and esophageal wall kinematics and stress, are analyzed and compared. Support from NIH grants R01 DK56033 and R01 DK079902 is gratefully acknowledged. BEG is supported by NSF award ACI 1460334.

  5. On model checking the dynamics of object-based software : a foundational approach

    NARCIS (Netherlands)

    Distefano, Dino Salvo

    2003-01-01

    This dissertation is concerned with software verification, in particular automated techniques to assess the correct functioning of object-based programs. We focus on the dynamic aspects of these programs and consider model-checking-based verification techniques. The major obstacle to the design of model

  6. A fully traits-based approach to modeling global vegetation distribution.

    Science.gov (United States)

    van Bodegom, Peter M; Douma, Jacob C; Verheijen, Lieneke M

    2014-09-23

    Dynamic Global Vegetation Models (DGVMs) are indispensable for our understanding of climate change impacts. The application of traits in DGVMs is increasingly refined. However, a comprehensive analysis of the direct impacts of trait variation on global vegetation distribution does not yet exist. Here, we present such analysis as proof of principle. We run regressions of trait observations for leaf mass per area, stem-specific density, and seed mass from a global database against multiple environmental drivers, making use of findings of global trait convergence. This analysis explained up to 52% of the global variation of traits. Global trait maps, generated by coupling the regression equations to gridded soil and climate maps, showed up to orders of magnitude variation in trait values. Subsequently, nine vegetation types were characterized by the trait combinations that they possess using Gaussian mixture density functions. The trait maps were input to these functions to determine global occurrence probabilities for each vegetation type. We prepared vegetation maps, assuming that the most probable (and thus, most suited) vegetation type at each location will be realized. This fully traits-based vegetation map predicted 42% of the observed vegetation distribution correctly. Our results indicate that a major proportion of the predictive ability of DGVMs with respect to vegetation distribution can be attained by three traits alone if traits like stem-specific density and seed mass are included. We envision that our traits-based approach, our observation-driven trait maps, and our vegetation maps may inspire a new generation of powerful traits-based DGVMs.
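
    The final classification step can be sketched directly: each vegetation type is scored by a Gaussian density over the trait vector, and the most probable type is assigned. The type names, means, and covariances below are invented assumptions, not the paper's fitted mixture densities.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hedged sketch of the assignment step: a vegetation type is characterized by
# a Gaussian density over a trait vector (leaf mass per area, stem-specific
# density, seed mass; log-scaled here), and each grid cell gets the type with
# the highest density at the cell's predicted traits.

types = {
    "tropical forest": ([4.3, -0.4, 1.0],  np.diag([0.10, 0.05, 0.30])),
    "boreal forest":   ([5.0, -0.8, 0.2],  np.diag([0.08, 0.04, 0.25])),
    "grassland":       ([3.9, -1.2, -0.5], np.diag([0.12, 0.06, 0.40])),
}

def most_probable_type(trait_vector):
    scores = {name: multivariate_normal.pdf(trait_vector, mean=m, cov=c)
              for name, (m, c) in types.items()}
    return max(scores, key=scores.get)

cell_traits = np.array([4.8, -0.7, 0.3])   # traits mapped from climate/soil
print(most_probable_type(cell_traits))      # -> "boreal forest"
```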

  7. Lattice hydrodynamic model based traffic control: A transportation cyber-physical system approach

    Science.gov (United States)

    Liu, Hui; Sun, Dihua; Liu, Weining

    2016-11-01

    The lattice hydrodynamic model is a typical continuum traffic flow model, which properly describes the jamming transition of traffic flow. Previous studies of the lattice hydrodynamic model have shown that the use of control methods has the potential to improve traffic conditions. In this paper, a new control method is applied to the lattice hydrodynamic model from a transportation cyber-physical system approach, in which only one lattice site needs to be controlled. The simulation verifies the feasibility and validity of this method, which can ensure the efficient and smooth operation of the traffic flow.
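
    A minimal lattice hydrodynamic simulation conveys the setting; the sketch below iterates a Nagatani-type continuity/optimal-velocity scheme on a ring with illustrative parameters, and marks where a single-site feedback control term of the kind the paper studies would enter. The discretization and parameter values are assumptions.

```python
import numpy as np

# Nagatani-type lattice hydrodynamic model on a ring road: site densities
# rho_j evolve by continuity, site fluxes q_j follow the optimal-velocity
# flow of the downstream site. Illustrative sketch, not the paper's scheme.

M, tau, rho0, rho_c, vmax = 100, 0.1, 0.25, 0.25, 2.0

def V(rho):
    # optimal velocity as a function of density (a common tanh form)
    return 0.5 * vmax * (np.tanh(1.0 / rho - 1.0 / rho_c) + np.tanh(1.0 / rho_c))

rho = np.full(M, rho0)
rho[M // 2] += 0.01                       # local perturbation to probe stability
q = rho0 * V(rho)

for _ in range(2000):
    rho = rho - tau * rho0 * (q - np.roll(q, 1))   # continuity equation
    q = rho0 * V(np.roll(rho, -1))                 # flux from downstream density
    # a feedback control input would be added to q at the one controlled site

print(f"density spread after relaxation: {rho.min():.4f} .. {rho.max():.4f}")
```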

  8. Fuzzy-hybrid land vehicle driveline modelling based on a moving window subtractive clustering approach

    Science.gov (United States)

    Economou, J. T.; Knowles, K.; Tsourdos, A.; White, B. A.

    2011-02-01

    In this article, the fuzzy-hybrid modelling (FHM) approach is used and compared to the input-output system Takagi-Sugeno (TS) modelling approach, which correlates the drivetrain power-flow equations with the vehicle dynamics. The output power relations were related to the bounded drivetrain efficiencies and also to the wheel slips. The model also relates to the wheel-ground interactions via suitable friction-coefficient models relative to the wheel-slip profiles. The wheel slip made a significant contribution to the overall driveline system efficiency. The peak friction slip and peak coefficient of friction values are known a priori in the analysis. Lastly, the rigid-body dynamical power has been verified through both simulation and experimental results. The mathematical analysis is supported throughout the paper by experimental data for a specific electric robotic vehicle. The identification of the localised and input-output TS models for the fuzzy hybrid and the experimental data was carried out using the subtractive clustering (SC) methodology. These results were also compared to a real-time TS SC approach operating on periodic time windows. The article concludes with the benefits of the real-time FHM method for the vehicle electric driveline: both the analytical TS sub-model and the physical modelling of the remaining process can be clearly utilised for control purposes.

  9. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    CERN Document Server

    Raia, S; Rossi, M; Baum, R L; Godt, J W; Guzzetti, F

    2013-01-01

    Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are deterministic. These models extend spatially the static stability models adopted in geotechnical engineering and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the existing models is the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of shallow rainfall-induced landslides. For this purpose, we have modified the TRIG...
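
    The probabilistic treatment of the infinite-slope model is easy to sketch: sample the uncertain geotechnical parameters and report the probability that the factor of safety falls below one. The distributions below are illustrative assumptions, and the sketch omits the infiltration model; it is not the paper's modified code.

```python
import numpy as np

# Monte Carlo over an infinite-slope factor of safety FS: instead of one
# deterministic FS, uncertain cohesion and friction angle give P(FS < 1).

rng = np.random.default_rng(42)
n = 100_000
beta  = np.radians(35.0)                         # slope angle
z     = 2.0                                      # soil depth (m)
zw    = 1.0                                      # water table above slip surface (m)
gamma, gamma_w = 19e3, 9.81e3                    # unit weights (N/m^3)

c   = rng.normal(5e3, 1.5e3, n).clip(min=0.0)    # cohesion (Pa), uncertain
phi = np.radians(rng.normal(32.0, 3.0, n))       # friction angle, uncertain

# standard infinite-slope FS with slope-parallel seepage
num = c + (gamma * z - gamma_w * zw) * np.cos(beta) ** 2 * np.tan(phi)
den = gamma * z * np.sin(beta) * np.cos(beta)
fs = num / den

print(f"P(failure) = {np.mean(fs < 1.0):.3f}")   # probability that FS < 1
```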

  10. An agent-based modeling approach for determining corn stover removal rate and transboundary effects.

    Science.gov (United States)

    Gan, Jianbang; Langeveld, J W A; Smith, C T

    2014-02-01

    Bioenergy production involves different agents with potentially different objectives, and an agent's decision often has transboundary impacts on other agents along the bioenergy value chain. Understanding and estimating the transboundary impacts is essential to portraying the interactions among the different agents and in the search for the optimal configuration of the bioenergy value chain. We develop an agent-based model to mimic the decision making by feedstock producers and feedstock-to-biofuel conversion plant operators and propose multipliers (i.e., ratios of economic values accruing to different segments and associated agents in the value chain) for assessing the transboundary impacts. Our approach is generic and thus applicable to a variety of bioenergy production systems at different sites and geographic scales. We apply it to the case of producing ethanol using corn stover in Iowa, USA. The results from the case study indicate that stover removal rate is site specific and varies considerably with soil type, as well as other factors, such as stover price and harvesting cost. In addition, ethanol production using corn stover in the study region would have strong positive ripple effects, with the values of multipliers varying with greenhouse gas price and national energy security premium. The relatively high multiplier values suggest that a large portion of the value associated with corn stover ethanol production would accrue to the downstream end of the value chain instead of stover producers.

  11. Applying an Ontology-based Modeling Approach to Cultural Heritage Systems

    Directory of Open Access Journals (Sweden)

    POPOVICI, D.-M.

    2011-08-01

    Full Text Available Any virtual environment (VE) built in a classical way is dedicated to a very specific domain. Its modification, or even its adaptation to another domain, requires expensive human intervention measured in time and money; this way, the product, that is the VE, returns to the first phases of the development process. In a previous work we proposed an approach that combines domain ontologies and conceptual modeling to construct more accurate VEs. Our method is based on the description of the domain knowledge in a standard format and the assisted creation (using these pieces of knowledge) of the VE. This permits the explanation, within the virtual reality (VR) simulation, of the semantics of the whole context and of each object. This knowledge may then be transferred to the public users. In this paper we prove the effectiveness of our method on the construction process of a VE that simulates the organization of a Greek-Roman colony situated on the Black Sea coast and the economic and social activities of its people.

  13. Regulation of Neutrophil Degranulation and Cytokine Secretion: A Novel Model Approach Based on Linear Fitting

    Directory of Open Access Journals (Sweden)

    Isabelle Naegelen

    2015-01-01

    Full Text Available Neutrophils participate in the maintenance of host integrity by releasing various cytotoxic proteins during degranulation. Due to recent advances, a major role has been attributed to neutrophil-derived cytokine secretion in the initiation, exacerbation, and resolution of inflammatory responses. Because the release of neutrophil-derived products orchestrates the action of other immune cells at the infection site and, thus, can contribute to the development of chronic inflammatory diseases, we aimed to investigate in more detail the spatiotemporal regulation of neutrophil-mediated release mechanisms of proinflammatory mediators. Purified human neutrophils were stimulated for different time points with lipopolysaccharide. Cells and supernatants were analyzed by flow cytometry techniques and used to establish secretion profiles of granules and cytokines. To analyze the link between cytokine release and degranulation time series, we propose an original strategy based on linear fitting, which may be used as a guideline, to (i) define the relationship of granule proteins and cytokines secreted to the inflammatory site and (ii) investigate the spatial regulation of neutrophil cytokine release. The model approach presented here aims to predict the correlation between neutrophil-derived cytokine secretion and degranulation and may easily be extrapolated to investigate the relationship between other types of time series of functional processes.

  14. Regulation of Neutrophil Degranulation and Cytokine Secretion: A Novel Model Approach Based on Linear Fitting

    Science.gov (United States)

    Naegelen, Isabelle; Beaume, Nicolas; Plançon, Sébastien; Schenten, Véronique; Tschirhart, Eric J.; Bréchard, Sabrina

    2015-01-01

    Neutrophils participate in the maintenance of host integrity by releasing various cytotoxic proteins during degranulation. Due to recent advances, a major role has been attributed to neutrophil-derived cytokine secretion in the initiation, exacerbation, and resolution of inflammatory responses. Because the release of neutrophil-derived products orchestrates the action of other immune cells at the infection site and, thus, can contribute to the development of chronic inflammatory diseases, we aimed to investigate in more detail the spatiotemporal regulation of neutrophil-mediated release mechanisms of proinflammatory mediators. Purified human neutrophils were stimulated for different time points with lipopolysaccharide. Cells and supernatants were analyzed by flow cytometry techniques and used to establish secretion profiles of granules and cytokines. To analyze the link between cytokine release and degranulation time series, we propose an original strategy based on linear fitting, which may be used as a guideline, to (i) define the relationship of granule proteins and cytokines secreted to the inflammatory site and (ii) investigate the spatial regulation of neutrophil cytokine release. The model approach presented here aims to predict the correlation between neutrophil-derived cytokine secretion and degranulation and may easily be extrapolated to investigate the relationship between other types of time series of functional processes. PMID:26579547
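
    At its simplest, the linear-fitting strategy reduces to fitting lines to normalized time series and comparing their parameters; the sketch below does this for one invented granule-marker series and one invented cytokine series, which are stand-ins for the study's measured data.

```python
import numpy as np

# Fit each normalized time series with a line and compare slopes/intercepts
# to relate secretion kinetics. Data points are invented for illustration;
# the study uses LPS-stimulated neutrophil measurements at several time points.

t = np.array([0.0, 3.0, 6.0, 12.0, 24.0])          # hours after stimulation
granule  = np.array([0.05, 0.22, 0.41, 0.72, 0.98])  # granule marker (norm.)
cytokine = np.array([0.02, 0.10, 0.35, 0.70, 1.00])  # secreted cytokine (norm.)

slope_g, icept_g = np.polyfit(t, granule, 1)
slope_c, icept_c = np.polyfit(t, cytokine, 1)
r = np.corrcoef(granule, cytokine)[0, 1]

print(f"granule slope {slope_g:.3f}/h, cytokine slope {slope_c:.3f}/h, r={r:.2f}")
```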

  15. Towards a model-based development approach for wireless sensor-actuator network protocols

    DEFF Research Database (Denmark)

    Kumar S., A. Ajith; Simonsen, Kent Inge

    2014-01-01

    Model-Driven Software Engineering (MDSE) is a promising approach for the development of applications, and has been well adopted in the embedded applications domain in recent years. Wireless Sensor Actuator Networks, consisting of resource-constrained hardware and platform-specific operating system

  16. Wireless Positioning Based on a Segment-Wise Linear Approach for Modeling the Target Trajectory

    DEFF Research Database (Denmark)

    Figueiras, Joao; Pedersen, Troels; Schwefel, Hans-Peter

    2008-01-01

    measurements and the user mobility patterns. One class of typical human movement patterns is the segment-wise linear pattern, which is studied in this paper. Current tracking solutions, such as the Constant Velocity model, hardly handle such segment-wise linear patterns. In this paper we propose

  17. Flexibility on storage-release based distributed hydrologic modeling with object-oriented approach

    Science.gov (United States)

    With the availability of advanced hydrologic data in the public domain such as remotely sensed and climate change scenario data, there is a need for a modeling framework that is capable of using these data to simulate and extend hydrologic processes with multidisciplinary approaches for sustainable ...

  18. Recent approaches to quadrupole collectivity: models, solutions and applications based on the Bohr hamiltonian

    Science.gov (United States)

    Buganu, Petricǎ; Fortunato, Lorenzo

    2016-09-01

    We review and discuss several recent approaches to quadrupole collectivity and developments of collective models and their solutions, with many applications, examples and references. We focus in particular on analytic and approximate solutions of the Bohr hamiltonian of the last decade, because most of the previously published material has already been reviewed in other publications.

  19. An MDA-based approach for behaviour modelling of context-aware mobile applications

    NARCIS (Netherlands)

    Daniele, Laura M.; Ferreira Pires, Luis; Sinderen, van Marten

    2009-01-01

    Most reported MDA approaches give much attention to structural aspects in PSMs and in generated code, and less attention to the PIM level and the behaviour of the modelled applications. Consequently, application behaviour is generally not (well) defined at the PIM level. This paper presents an MDA-b

  20. A Role-Based Approach to Adult Development: The Triple-Helix Model.

    Science.gov (United States)

    Juhasz, Anne McCreary

    1989-01-01

    Presents triple-helix model of adult development which incorporates three major roles: family, work, and self, each powered by drive for self-esteem. Asserts that this approach accommodates wide range of possible patterns and varied timing of life events relative to career options, family and relationship choices, and emphasis on self-development.…

  1. Modelling of cooperating robotized systems with the use of object-based approach

    Science.gov (United States)

    Foit, K.; Gwiazda, A.; Banas, W.; Sekala, A.; Hryniewicz, P.

    2015-11-01

    Today's robotized manufacturing systems are characterized by high efficiency. The emphasis is placed mainly on the simultaneous work of machines. This can manifest in many ways, the most spectacular being the cooperation of several robots working on the same detail. Moreover, dual-arm robots that can mimic the manipulative skills of human hands have recently come into use. As a result, it is often hard to deal with situations in which it is necessary to maintain not only sufficient precision, but also the coordination and proper sequence of movements of the individual robots' arms. The successful completion of such a task depends not only on the individual robot control systems and their respective programs, but also on well-functioning communication between the robot controllers. A major problem in the case of cooperating robots is the possibility of collision between particular links of the robots' kinematic chains. This is not a simple case, because the manufacturers of robotic systems do not disclose the details of their control algorithms, so such situations are hard to detect. Another problem with the cooperation of robots is how to inform the other units about the start or completion of a part of the task, so that the other robots can take further actions. This paper focuses on communication between cooperating robotic units, assuming that every robot is represented by an object-based model. This problem requires developing a form of communication protocol that the objects can use for collecting information about their environment. The approach presented in the paper is not limited to robots and could be used in a wider scope, for example when modelling a complete workcell or production line.

  2. DC thermal modeling of CNTFETs based on a semi-empirical approach

    CERN Document Server

    Marani, Roberto

    2015-01-01

    A new DC thermal model of Carbon Nanotube Field Effect Transistors (CNTFETs) is proposed. The model is based on a number of fitting parameters whose dependence on bias conditions is described by third-order polynomials. The model includes three thermal parameters describing CNTFET behaviour in terms of saturation drain current, threshold voltage and the M exponent in the knee region versus temperature. To confirm the validity of the proposed thermal model, simulations were performed under very different thermal conditions, obtaining I-V characteristics perfectly coincident with those of other models. The very low CPU calculation time makes the proposed model particularly suitable for implementation in CAD applications.
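
    The fitting-parameter construction is a plain third-order polynomial fit; the sketch below fits a hypothetical saturation-current parameter against gate bias. The data values and the choice of parameter are invented for illustration.

```python
import numpy as np

# Semi-empirical fitting pattern: a model parameter measured at several bias
# points is described by a third-order polynomial in the gate voltage, as the
# abstract states for the model's fitting parameters. Data are invented.

vgs = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])        # gate bias (V)
i_sat = np.array([0.1, 0.9, 3.1, 7.4, 14.6, 25.0])    # "measured" values (uA)

coeffs = np.polyfit(vgs, i_sat, deg=3)                 # third-order polynomial
i_model = np.polyval(coeffs, vgs)

print(np.round(coeffs, 3))
print(f"max abs error: {np.max(np.abs(i_model - i_sat)):.3f} uA")
```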

  3. Dopamine selectively remediates 'model-based' reward learning: a computational approach.

    Science.gov (United States)

    Sharp, Madeleine E; Foerde, Karin; Daw, Nathaniel D; Shohamy, Daphna

    2016-02-01

    Patients with loss of dopamine due to Parkinson's disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from 'model-free' learning. The other, 'model-based' learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson's disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson's disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson's disease may be related to an inability to pursue reward based on complete representations of the environment.
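
    The two computational processes the task dissociates can be contrasted in a toy simulation (an illustration under simplified assumptions, not the study's fitted computational model): a model-free learner caches action values and must relearn them from experience, while a model-based learner recomputes first-stage values from a transition model and so tracks changes immediately.

```python
import numpy as np

# Toy contrast of model-free vs model-based evaluation on a two-step-like task.
# Model-free: cached values updated only by experienced rewards (TD rule).
# Model-based: first-stage values recomputed from the transition model.

rng = np.random.default_rng(0)
T = np.array([[0.7, 0.3],            # P(second-stage state | first-stage action)
              [0.3, 0.7]])
q_stage2 = np.array([0.5, 0.2])      # reward probabilities of second-stage states
q_mf = np.zeros(2)                   # cached model-free first-stage values
alpha = 0.1

for _ in range(2000):
    a = rng.integers(2)
    s2 = rng.choice(2, p=T[a])                       # common or rare transition
    r = float(rng.random() < q_stage2[s2])           # probabilistic reward
    q_mf[a] += alpha * (r - q_mf[a])                 # TD update on cached value

q_stage2 = np.array([0.1, 0.9])      # reward contingencies suddenly reverse
q_mb = T @ q_stage2                  # model-based values adapt at once
print("model-based:", np.round(q_mb, 2), " stale model-free:", np.round(q_mf, 2))
```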

  4. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    Science.gov (United States)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human-dominated ecosystems, the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications to forested trees over France, using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with the outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under the current climate. Depending on the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and direction of these differences depended on the correlations between the errors from both equations and were highest at higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong mis-estimation of the actual and future modelled probability of presence. Beyond this selection bias, the SSDM we propose represents a crucial step to account for economic constraints on tree
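
    The selection bias itself can be demonstrated in a few lines of simulation: when a land use choice and the species response share an omitted driver, a naive SDM calibrated only on surveyed (e.g. forested) cells recovers a distorted climate response. All coefficients below are invented, and the sketch is a bias demonstration rather than the SSDM estimator.

```python
import numpy as np

# Simulated selection bias: presence is observed only where land is kept
# forested, and the land use choice shares an omitted driver with the
# species' suitability. Coefficients are illustrative assumptions.

rng = np.random.default_rng(7)
n = 50_000
climate = rng.normal(size=n)                       # bioclimatic driver
omitted = rng.normal(size=n)                       # shared omitted variable

forested = (0.5 * climate + omitted + rng.normal(size=n)) > 0   # land use choice
suitable = (1.0 * climate + omitted + rng.normal(size=n)) > 0   # true suitability

# Naive "SDM": climate-presence association over surveyed (forested) cells only.
naive = np.corrcoef(climate[forested], suitable[forested])[0, 1]
truth = np.corrcoef(climate, suitable)[0, 1]
print(f"climate-presence corr, selected sample: {naive:.2f}")
print(f"climate-suitability corr, full truth:  {truth:.2f}")
```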

  5. Approach and development strategy for an agent-based model of economic confidence.

    Energy Technology Data Exchange (ETDEWEB)

    Sprigg, James A.; Pryor, Richard J.; Jorgensen, Craig Reed

    2004-08-01

    We are extending the existing features of Aspen, a powerful economic modeling tool, and introducing new features to simulate the role of confidence in economic activity. The new model is built from a collection of autonomous agents that represent households, firms, and other relevant entities like financial exchanges and governmental authorities. We simultaneously model several interrelated markets, including those for labor, products, stocks, and bonds. We also model economic tradeoffs, such as decisions of households and firms regarding spending, savings, and investment. In this paper, we review some of the basic principles and model components and describe our approach and development strategy for emulating consumer, investor, and business confidence. The model of confidence is explored within the context of economic disruptions, such as those resulting from disasters or terrorist events.

  6. Predicting the acute neurotoxicity of diverse organic solvents using probabilistic neural networks based QSTR modeling approaches.

    Science.gov (United States)

    Basant, Nikita; Gupta, Shikha; Singh, Kunwar P

    2016-03-01

    Organic solvents are widely used chemicals and the neurotoxic properties of some are well established. In this study, we established nonlinear qualitative and quantitative structure-toxicity relationship (STR) models for predicting neurotoxic classes and neurotoxicity of structurally diverse solvents in rodent test species, following OECD guideline principles for model development. Probabilistic neural network (PNN) based qualitative and generalized regression neural network (GRNN) based quantitative STR models were constructed using neurotoxicity data from rat and mouse studies. Further, interspecies correlation based quantitative activity-activity relationship (QAAR) and global QSTR models were also developed using the combined data set of both rodent species for predicting the neurotoxicity of solvents. The constructed models were validated by deriving several statistical coefficients for the test data, and the prediction and generalization abilities of these models were evaluated. The qualitative STR models (rat and mouse) yielded classification accuracies of 92.86% in the test data sets, whereas the quantitative STRs yielded correlations (R²) of >0.93 between the measured and model-predicted toxicity values in both test data sets (rat and mouse). The prediction accuracies of the QAAR (R² 0.859) and global STR (R² 0.945) models were comparable to those of the independent local STR models. The results suggest the ability of the developed QSTR models to reliably predict binary neurotoxicity classes and the endpoint neurotoxicities of structurally diverse organic solvents.
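
    A GRNN of the kind used for the quantitative STRs is, computationally, Nadaraya-Watson kernel regression with one Gaussian unit per training pattern and a single smoothing parameter. A minimal sketch, assuming random stand-ins for the molecular descriptors and toxicity endpoints:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Generalized regression neural network: kernel-weighted average of
    training targets, one Gaussian pattern unit per training example."""
    # Squared Euclidean distances between each query and each training pattern
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma**2))      # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)    # summation / output layers

# Toy usage with random "descriptors" and "toxicity" values
rng = np.random.default_rng(1)
X, y = rng.normal(size=(50, 4)), rng.normal(size=50)
print(grnn_predict(X, y, X[:3], sigma=0.8))
```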

  7. Model based approach to Study the Impact of Biofuels on the Sustainability of an Ecological System

    Science.gov (United States)

    The importance and complexity of sustainability has been well recognized, and a formal study of sustainability based on system theory approaches is imperative, as many of the relationships between various components of the ecosystem could be nonlinear, intertwined and non-intuitive…

  8. A model-based approach to studying changes in compositional heterogeneity

    NARCIS (Netherlands)

    Baeten, L.; Warton, D.; Calster, van H.; Frenne, De P.; Verstraeten, G.; Bonte, D.; Bernhardt-Romermann, M.; Cornelis, R.; Decocq, G.; Eriksson, O.; Hommel, P.W.F.M.

    2014-01-01

    1. Non-random species loss and gain in local communities change the compositional heterogeneity between communities over time, which is traditionally quantified with dissimilarity-based approaches. Yet, dissimilarities summarize the multivariate species data into a univariate index and obscure the s…

  9. Learning Outcomes in Vocational Education: A Business Plan Development by Production-Based Learning Model Approach

    Science.gov (United States)

    Kusumaningrum, Indrati; Hidayat, Hendra; Ganefri; Anori, Sartika; Dewy, Mega Silfia

    2016-01-01

    This article describes the development of a business plan by using a production-based learning approach. In addition, this development also aims to maximize learning outcomes in vocational education. Preliminary analysis of the curriculum and of the needs of the market and society becomes the basis for business plan development. To produce a…

  10. An efficient approach to bioconversion kinetic model generation based on automated microscale experimentation integrated with model driven experimental design

    DEFF Research Database (Denmark)

    Chen, B. H.; Micheletti, M.; Baganz, F.;

    2009-01-01

    Reliable models of enzyme kinetics are required for the effective design of bioconversion processes. Kinetic expressions of the enzyme-catalysed reaction rate, however, are frequently complex and establishing accurate values of kinetic parameters normally requires a large number of experiments… It incorporates a model driven approach to the experimental design that minimises the number of experiments to be performed, while still generating accurate values of kinetic parameters. The approach has been illustrated with the transketolase mediated asymmetric synthesis of L… In comparison with conventional methodology, the modelling approach enabled a nearly 4-fold decrease in the number of experiments while the microwell experimentation enabled a 45-fold decrease in material requirements and a significant increase in experimental throughput. The approach…

  11. Towards Robust Energy Systems Modeling: Examining Uncertainty in Fossil Fuel-Based Life Cycle Assessment Approaches

    Science.gov (United States)

    Venkatesh, Aranya

    Increasing concerns about the environmental impacts of fossil fuels used in the U.S. transportation and electricity sectors have spurred interest in alternate energy sources, such as natural gas and biofuels. Life cycle assessment (LCA) methods can be used to estimate the environmental impacts of incumbent energy sources and the potential impact reductions achievable through the use of alternate energy sources. Some recent U.S. climate policies have used the results of LCAs to encourage the use of low carbon fuels to meet future energy demands in the U.S. However, the LCA methods used to estimate potential reductions in environmental impact have some drawbacks. First, the LCAs are predominantly based on deterministic approaches that do not account for any uncertainty inherent in life cycle data and methods. Such methods overstate the accuracy of the point estimate results, which could in turn lead to incorrect and (consequently) expensive decision-making. Second, system boundaries considered by most LCA studies tend to be limited (considered a manifestation of uncertainty in LCA). Although LCAs can estimate the benefits of transitioning to energy systems of lower environmental impact, they may not be able to characterize real world systems perfectly. Improved modeling of energy system mechanisms can provide more accurate representations of reality and define more likely limits on potential environmental impact reductions. This dissertation quantitatively and qualitatively examines the limitations in LCA studies outlined previously. The first three research chapters address the uncertainty in life cycle greenhouse gas (GHG) emissions associated with petroleum-based fuels, natural gas and coal consumed in the U.S. The uncertainty in life cycle GHG emissions from fossil fuels was found to range between 13 and 18% of their respective mean values. For instance, the 90% confidence interval of the life cycle GHG emissions of average natural gas consumed in the U.S. was found to…
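
    The deterministic-versus-probabilistic point can be made concrete with a small Monte Carlo propagation. A sketch under assumed lognormal and normal stage distributions (the stage names and numbers are illustrative, not the dissertation's data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Illustrative life cycle stages for natural gas, in g CO2e/MJ (assumed
# medians and dispersions, not measured values)
production = rng.lognormal(mean=np.log(10.0), sigma=0.25, size=n)
transmission = rng.lognormal(mean=np.log(3.0), sigma=0.30, size=n)
combustion = rng.normal(loc=55.0, scale=1.0, size=n)

total = production + transmission + combustion
lo, hi = np.percentile(total, [5, 95])      # 90% confidence interval
print(f"mean {total.mean():.1f}, 90% CI [{lo:.1f}, {hi:.1f}] g CO2e/MJ")
```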

  12. Exponentiated Weibull distribution approach based inflection S-shaped software reliability growth model

    Directory of Open Access Journals (Sweden)

    B.B. Sagar

    2016-09-01

    The aim of this paper was to estimate the number of defects in software and remove them successfully. The paper incorporates a Weibull distribution approach along with the inflection S-shaped Software Reliability Growth Model (SRGM). In this combination, a two-parameter Weibull distribution methodology is used. The Relative Prediction Error (RPE) is calculated as a validity criterion for the developed model. Experimental results on actual data from five data sets are compared with two other existing models, showing that the proposed software reliability growth model yields better estimates for defect removal. The paper thus presents a software reliability growth model combining features of both the Weibull distribution and the inflection S-shaped SRGM to estimate the defects of a software system, offering help to researchers and software industries in developing highly reliable software products.
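
    The inflection S-shaped component has a standard closed-form mean value function, and the RPE compares the model's prediction with the actual cumulative defect count. A sketch, assuming the common parameterisation m(t) = a(1 - exp(-b t)) / (1 + beta exp(-b t)) and made-up parameter values:

```python
import numpy as np

def inflection_s_shaped(t, a, b, beta):
    """Mean value function m(t) of the inflection S-shaped SRGM:
    a = total expected defects, b = detection rate, beta = inflection factor."""
    return a * (1.0 - np.exp(-b * t)) / (1.0 + beta * np.exp(-b * t))

def relative_prediction_error(predicted, actual):
    """RPE = (predicted - actual) / actual; closer to zero is better."""
    return (predicted - actual) / actual

t = np.arange(1, 21)                     # e.g. 20 weeks of testing
m = inflection_s_shaped(t, a=120.0, b=0.25, beta=2.0)
print(relative_prediction_error(m[-1], actual=110.0))
```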

  13. A Novel Modelling Approach for Condensing Boilers Based on Hybrid Dynamical Systems

    Directory of Open Access Journals (Sweden)

    Harish Satyavada

    2016-04-01

    Condensing boilers use waste heat from flue gases to pre-heat the cold water entering the boiler. Flue gases are condensed into liquid form, thus recovering their latent heat of vaporization, which results in as much as a 10%–12% increase in efficiency. Modeling these heat transfer phenomena is crucial to controlling this equipment. Despite the many approaches to condensing boiler modeling, the following shortcomings are still not addressed: thermal dynamics are oversimplified with a nonlinear efficiency curve (which is calculated at steady state), and the dry/wet heat exchange is modeled in a fixed proportion. In this work we address these shortcomings by developing a novel hybrid dynamic model which avoids the static nonlinear efficiency curve and accounts for a time-varying proportion of dry/wet heat exchange. The procedure for deriving the model is described and the efficiency of the resulting condensing boiler is shown.

  14. A fuzzy-logic-based approach to accurate modeling of a double gate MOSFET for nanoelectronic circuit design

    Institute of Scientific and Technical Information of China (English)

    F. Djeffal; A. Ferdi; M. Chahdi

    2012-01-01

    The double gate (DG) silicon MOSFET with an extremely short channel length has the appropriate features to constitute the devices for nanoscale circuit design. To develop a physical model for extremely scaled DG MOSFETs, the drain current in the channel must be accurately determined under the application of drain and gate voltages. However, modeling the transport mechanism for nanoscale structures requires the use of overkill methods and models in terms of their complexity and computation time (self-consistent, quantum computations). Therefore, new methods and techniques are required to overcome these constraints. In this paper, a new approach based on fuzzy logic computation is proposed to investigate nanoscale DG MOSFETs. The proposed approach has been implemented in a device simulator to show its impact on nanoelectronic circuit design. The approach is general and thus suitable for any type of nanoscale structure investigation problem in the nanotechnology industry.

  15. Replacement Value - Representation of Fair Value in Accounting. Techniques and Modeling Suitable for the Income Based Approach

    OpenAIRE

    MANEA MARINELA – DANIELA

    2011-01-01

    The term fair value is spread within the sphere of international standards without reference to any detailed guidance on how to apply it. However, for specialized tangible assets, which are rarely sold, the rule IAS 16 makes it possible to estimate fair value using an income approach or a depreciated replacement cost approach. The following material is intended to identify potential modeling of fair value as an income-based approach, appealing to techniques used by professional evalu…

  16. Equilibrium and non-equilibrium concepts in forest genetic modelling: population- and individually-based approaches

    NARCIS (Netherlands)

    Kramer, K.; Werf, van der D.C.

    2010-01-01

    The environment is changing and so are forests, in their functioning, in species composition, and in the species’ genetic composition. Many empirical and process-based models exist to support forest management. However, most of these models do not consider the impact of environmental changes and for…

  17. A review of single-sample-based models and other approaches for radiocarbon dating of dissolved inorganic carbon in groundwater

    Science.gov (United States)

    Han, L. F.; Plummer, Niel

    2016-01-01

    Numerous methods have been proposed to estimate the pre-nuclear-detonation 14C content of dissolved inorganic carbon (DIC) recharged to groundwater that has been corrected/adjusted for geochemical processes in the absence of radioactive decay (14C0) - a quantity that is essential for estimation of the radiocarbon age of DIC in groundwater. The models/approaches most commonly used are grouped as follows: (1) single-sample-based models, (2) a statistical approach based on the observed (curved) relationship between 14C and δ13C data for the aquifer, and (3) the geochemical mass-balance approach that constructs adjustment models accounting for all the geochemical reactions known to occur along a groundwater flow path. This review first discusses the geochemical processes behind each of the single-sample-based models, followed by discussions of the statistical approach and the geochemical mass-balance approach. Finally, the applications, advantages and limitations of the three groups of models/approaches are discussed. The single-sample-based models constitute the prevailing use of 14C data in hydrogeology and hydrological studies. This is in part because the models are applied to an individual water sample to estimate the 14C age, so the measurement data are easily available. These models have been shown to provide realistic radiocarbon ages in many studies. However, they usually are limited to simple carbonate aquifers, and the selection of model may have significant effects on 14C0, often resulting in a wide range of estimates of 14C ages. Of the single-sample-based models, four are recommended for the estimation of 14C0 of DIC in groundwater: Pearson's model (Ingerson and Pearson, 1964; Pearson and White, 1967), Han & Plummer's model (Han and Plummer, 2013), the IAEA model (Gonfiantini, 1972; Salem et al., 1980), and Oeschger's model (Geyh, 2000). These four models include all processes considered in single-sample-based models, and can be used in different ranges of…
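
    Of the four recommended models, Pearson's is the simplest to state: the initial 14C activity is scaled by a δ13C mixing ratio between the soil-gas CO2 and carbonate end members. A sketch of that correction and the resulting age, assuming typical end-member values (and the 5730-year half-life; conventional ages use the Libby value):

```python
import math

def pearson_a0(a_soil, d13c_dic, d13c_carb=0.0, d13c_soil=-25.0):
    """Pearson's model: initial 14C activity of DIC from delta-13C mixing
    between soil CO2 (14C-active) and carbonate minerals (14C-dead).
    End-member values here are typical assumptions, not fixed constants."""
    q = (d13c_dic - d13c_carb) / (d13c_soil - d13c_carb)  # soil-derived fraction
    return q * a_soil

# Illustrative sample: measured DIC activity 40 pmC, delta-13C of -12 permil,
# soil-gas end member at 100 pmC
a0 = pearson_a0(a_soil=100.0, d13c_dic=-12.0)
t = (5730.0 / math.log(2.0)) * math.log(a0 / 40.0)        # adjusted age, years
print(f"14C0 = {a0:.1f} pmC, adjusted age = {t:.0f} yr")
```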

  18. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.

    Science.gov (United States)

    Duarte, Belmiro P M; Wong, Weng Kee

    2015-08-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal designs for nonlinear regression models. The setup here extends the formulation of the optimal design problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results with those in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.

  19. PERPEST model, a case-based reasoning approach to predict ecological risks of pesticides

    NARCIS (Netherlands)

    Brink, van den P.J.; Roelsma, J.; Nes, van E.H.; Scheffer, M.; Brock, T.C.M.

    2002-01-01

    The present paper discusses PERPEST, a model that uses case-based reasoning to predict the effects of a particular concentration of a pesticide on a defined aquatic ecosystem, based on published information about the effects of pesticides on the structure and function of aquatic ecosystems as observ…

  20. Stochastic Modeling based on Dictionary Approach for the Generation of Daily Precipitation Occurrences

    Science.gov (United States)

    Panu, U. S.; Ng, W.; Rasmussen, P. F.

    2009-12-01

    The modeling of weather states (i.e., precipitation occurrences) is critical when the historical data are not long enough for the desired analysis. Stochastic models (e.g., Markov Chain and Alternating Renewal Process (ARP)) of the precipitation occurrence processes generally assume the existence of short-term temporal dependency between the neighboring states while implying the existence of long-term independency (randomness) of states in precipitation records. Existing temporal-dependent models for the generation of precipitation occurrences are restricted either by the fixed-length memory (e.g., the order of a Markov chain model), or by the reigning states in segments (e.g., persistency of homogeneous states within dry/wet-spell lengths of an ARP). The modeling of variable segment lengths and states could be an arduous task and a flexible modeling approach is required for the preservation of the various segmented patterns of precipitation data series. An innovative Dictionary approach has been developed in the field of genome pattern recognition for the identification of frequently occurring genome segments in DNA sequences. The genome segments delineate the biologically meaningful "words" (i.e., segments with specific patterns in a series of discrete states) that can be jointly modeled with variable lengths and states. A meaningful "word", in hydrology, can refer to a segment of precipitation occurrence comprising wet or dry states. Such flexibility would provide a unique advantage over the traditional stochastic models for the generation of precipitation occurrences. Three stochastic models, namely, the alternating renewal process using a Geometric distribution, the second-order Markov chain model, and the Dictionary approach, have been assessed to evaluate their efficacy for the generation of daily precipitation sequences. Comparisons involved three guiding principles, namely (i) the ability of models to preserve the short-term temporal dependency in…
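
    The fixed-memory baseline against which the Dictionary approach is compared is easy to write down: a second-order Markov chain conditions today's wet/dry state on the previous two days. A sketch with made-up transition probabilities:

```python
import numpy as np

rng = np.random.default_rng(3)

# P(wet today | states of the two previous days); illustrative values only
p_wet = {(0, 0): 0.10, (0, 1): 0.35, (1, 0): 0.30, (1, 1): 0.60}

def generate_occurrences(n_days, p_wet, init=(0, 0)):
    """Generate a daily wet(1)/dry(0) sequence from a second-order chain."""
    seq = list(init)
    for _ in range(n_days):
        prev = (seq[-2], seq[-1])
        seq.append(int(rng.random() < p_wet[prev]))
    return np.array(seq[2:])

occ = generate_occurrences(365, p_wet)
print("wet-day fraction:", occ.mean())
```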

  1. An efficient approach for shadow detection based on Gaussian mixture model

    Institute of Scientific and Technical Information of China (English)

    韩延祥; 张志胜; 陈芳; 陈恺

    2014-01-01

    An efficient approach was proposed for discriminating shadows from moving objects. In the background subtraction stage, moving objects were extracted. Then, the initial classification for moving shadow pixels and foreground object pixels was performed by using color invariant features. In the shadow model learning stage, instead of a single Gaussian distribution, it was assumed that the density function computed on the values of chromaticity difference or brightness difference can be modeled as a mixture of Gaussians consisting of two density functions. Meanwhile, the Gaussian parameter estimation was performed by using the EM algorithm. The estimates were used to obtain the shadow mask according to two constraints. Finally, experiments were carried out. The visual experiment results confirm the effectiveness of the proposed method. Quantitative results in terms of the shadow detection rate and the shadow discrimination rate (the maximum values are 85.79% and 97.56%, respectively) show that the proposed approach achieves a satisfying result with a post-processing step.
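
    The two-component mixture on the difference values can be fitted with an off-the-shelf EM implementation. A minimal sketch, assuming synthetic stand-ins for the per-pixel brightness differences rather than real frames:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Stand-in for per-pixel brightness differences of candidate foreground pixels:
# shadows darken the background moderately, objects differ more strongly
diffs = np.concatenate([rng.normal(-0.3, 0.05, 500),    # shadow-like
                        rng.normal(-0.8, 0.15, 300)])   # object-like

gmm = GaussianMixture(n_components=2, random_state=0).fit(diffs.reshape(-1, 1))
labels = gmm.predict(diffs.reshape(-1, 1))
print("component means:", gmm.means_.ravel())
# Pixels in the component with mean closer to zero form the shadow mask
```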

  2. Fast simulation approaches for power fluctuation model of wind farm based on frequency domain

    DEFF Research Database (Denmark)

    Lin, Jin; Gao, Wen-zhong; Sun, Yuan-zhang

    2012-01-01

    This paper discusses a model developed by Risø DTU, which is capable of simulating the power fluctuation of large wind farms in the frequency domain. In the original design, the “frequency-time” transformations are time-consuming and might limit the computation speed for a wind farm of large size. … The overall speed-up is more than 300 times if all these approaches are adopted, in any low, medium and high wind speed test scenario.

  3. Agent-Based Models and Optimal Control in Biology: A Discrete Approach

    Science.gov (United States)

    2012-01-01

    … different parts of the human body to cure diseases such as hypertension, cancer, or heart disease. And we need to control microbes for the efficient … dynamics to remain the same, and how we can verify that this is indeed the case. Since we are using the model with a specific control objective in mind, … similar to the approach pioneered by Descartes and his introduction of a coordinate system. In the plane, for instance, a Cartesian coordinate system …

  4. Energy saving approaches for video streaming on smartphone based on QoE modeling

    DEFF Research Database (Denmark)

    Ballesteros, Luis Guillermo Martinez; Ickin, Selim; Fiedler, Markus

    2016-01-01

    In this paper, we study the influence of video stalling on QoE. We provide QoE models that are obtained in realistic scenarios on the smartphone, and provide energy-saving approaches for the smartphone by leveraging the proposed QoE models in relation to energy. Results show that approximately 5 J is saved in a 3-minute video clip with an acceptable Mean Opinion Score (MOS) level when video frames are skipped. If the video frames are not skipped, then it is suggested to avoid freezes during a video stream, as freezes highly increase the energy waste on smartphones.

  5. A Computational Agent-Based Modeling Approach for Competitive Wireless Service Market

    KAUST Repository

    Douglas, C C

    2011-04-01

    Using an agent-based modeling method, we study market dynamism with regard to wireless cellular services that are in competition for a greater market share and profit. In the proposed model, service providers and consumers are described as agents who interact with each other and actively participate in an economically well-defined marketplace. Parameters of the model are optimized using the Levenberg-Marquardt method. The quantitative prediction capabilities of the proposed model are examined through data reproducibility using past data from the U.S. and Korean wireless service markets. Finally, we investigate a disruptive market event, namely the introduction of the iPhone into the U.S. in 2007 and the resulting changes in the modeling parameters. We predict and analyze the impacts of the introduction of the iPhone into the Korean wireless service market assuming a release date of 2Q09 based on earlier data. © 2011 IEEE.
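
    Levenberg-Marquardt calibration of such a model amounts to least-squares fitting of simulated market shares against historical ones. A schematic sketch in which a logistic adoption curve stands in for the actual agent-based simulator:

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_share(params, t):
    """Toy stand-in for the agent-based simulator: logistic market share."""
    k, r, t0 = params
    return k / (1.0 + np.exp(-r * (t - t0)))

t = np.arange(2000, 2011, dtype=float)
observed = (simulate_share([0.6, 0.8, 2005.0], t)
            + 0.01 * np.random.default_rng(5).normal(size=t.size))

def residuals(params):
    return simulate_share(params, t) - observed

# Levenberg-Marquardt optimisation of the model parameters
fit = least_squares(residuals, x0=[0.5, 0.5, 2004.0], method="lm")
print("calibrated parameters:", fit.x)
```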

  6. Putting theory to the test: modeling a multidimensional, developmentally-based approach to preschool disruptive behavior.

    Science.gov (United States)

    Wakschlag, Lauren S; Henry, David B; Tolan, Patrick H; Carter, Alice S; Burns, James L; Briggs-Gowan, Margaret J

    2012-06-01

    There is increasing emphasis on dimensional conceptualizations of psychopathology, but empirical evidence of their utility is just emerging. In particular, although a range of multidimensional models have been proposed, the relative fit of competing models has rarely been tested. Furthermore, developmental considerations have received scant attention. In this study, we tested a developmentally based, four-dimensional model of disruptive behavior theorized to represent the defining features of disruptive behavior at preschool age: Temper Loss, Noncompliance, Aggression, and Low Concern for Others. Model testing was conducted in two independent samples of preschoolers: a Clinically Enriched Sample (n = 336) and an Epidemiologic Sample (n = 532). Tau-equivalent confirmatory factor analyses were used to test the fit of the Developmental Model relative to three leading competing models (a DSM oppositional defiant disorder (ODD)/conduct disorder (CD) Model, a "Callous" Model, and an "Irritable/Headstrong/Hurtful" Model). Reliability of the four dimensions was also tested. Validity of the dimensions was tested by predicting multi-informant, multi-method ratings of disruptive behavior and impairment, and incremental utility relative to DSM symptoms. In both samples, the Developmental Model demonstrated a superior fit compared with the competing models within the full sample and across key demographic subgroups. Validity was also demonstrated, including incremental utility relative to DSM-IV disruptive behavior symptoms. Critical next steps for achieving scientific consensus about the optimal dimensional model of disruptive behavior and its clinical application are discussed. Copyright © 2012 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  7. A Thermodynamic Approach to Predict Formation Enthalpies of Ternary Systems Based on Miedema's Model

    Science.gov (United States)

    Mousavi, Mahbubeh Sadat; Abbasi, Roozbeh; Kashani-Bozorg, Seyed Farshid

    2016-07-01

    A novel modification to Miedema's semi-empirical thermodynamic model has been made in order to provide more precise estimations of the formation enthalpy of ternary alloys. The original Miedema model was modified for ternary systems based on revisions of the surface concentration function. The results predicted by the present model were found to be in excellent agreement with the available experimental data for over 150 ternary intermetallic compounds. The proposed model is capable of predicting the formation enthalpies of ternary intermetallics with small discrepancies of ≤20 kJ/mol, as well as providing reliable enthalpy variations.

  8. Information theory-based approach for modeling the cognitive behavior of NPP operators

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Hyun; Seong, Poong Hyun [KAIST, Taejon (Korea, Republic of)]

    2001-10-01

    An NPP system consists of three important components: the machine system, operators, and the MMI. Through the MMI, operators monitor and control the plant system. The cognitive model of NPP operators has become a target of modeling by cognitive engineers due to their work environment: complex, uncertain, and safety critical. We suggested a contextual model for the cognitive behavior of NPP operators and the mathematical fundamentals, based on information theory, which can quantify the model. The demerit of a methodology using information theory is that it cannot evaluate the correctness and quality of information. Therefore, validation through experiment is needed.

  9. A process flood typology along an Alpine transect: analysis based on observations and modelling approaches

    Science.gov (United States)

    Zoccatelli, Davide; Parajka, Juraj; Gaál, Ladislav; Blöschl, Günter; Borga, Marco

    2014-05-01

    Understanding the effects of climate change on river floods requires a better understanding of the control of climate variability on flood regimes. The aim of this work is to identify the process types of the causative mechanisms of floods along a longitudinal Alpine transect spanning 200 km from Verona in Italy to lower Germany. The investigation focuses on the analysis of the statistical properties of the various flood typologies, their spatial organization and their relation with the topography of the transect. Along the transect, 34 basins were selected following criteria of basin size (between 50 and 500 km2), the amount of hydrometeorological data available and the impact of hydraulic structures on the runoff regime. Around 20 years of hourly data of discharge, precipitation and temperature were collected for each basin. The three most intense floods that occurred each year are considered in the work. Precipitation and temperature follow a sharp gradient across the transect, with both precipitation and temperature low around the main Alpine ridge. Four flood types are considered: long-rain floods, flash floods, rain-on-snow floods, and snowmelt floods. For the classification we use a combination of a number of process indicators, including the timing of the floods, storm duration, rainfall depths, snowmelt contribution to runoff, initial catchment state and runoff response dynamics, using a procedure similar to that described in Merz and Blöschl (2003). The indicators for flood classification are derived from either observed discharge data or model results. Comparison between the two derived flood classifications allows one to analyse the viability of using a model approach to build flood typologies in basins characterized by varying data availability. Finally, a sensitivity analysis is carried out by imposing step changes on the precipitation and temperature pattern. The resulting distribution of flood types gives an insight into the possible change in floods…

  10. A social marketing approach to implementing evidence-based practice in VHA QUERI: the TIDES depression collaborative care model

    Science.gov (United States)

    2009-01-01

    Collaborative care models for depression in primary care are effective and cost-effective, but difficult to spread to new sites. Translating Initiatives for Depression into Effective Solutions (TIDES) is an initiative to promote evidence-based collaborative care in the U.S. Veterans Health Administration (VHA). Social marketing applies marketing techniques to promote positive behavior change. As described in this paper, TIDES used a social marketing approach to foster national spread of collaborative care models. TIDES social marketing approach: The approach relied on a sequential model of behavior change and explicit attention to audience segmentation. Segments included VHA national leadership, Veterans Integrated Service Network (VISN) regional leadership, facility managers, frontline providers, and veterans. TIDES communications, materials and messages targeted each segment, guided by an overall marketing plan. Results: Depression collaborative care based on the TIDES model was adopted by VHA as part of the new Primary Care Mental Health Initiative and associated policies. It is currently in use in more than 50 primary care practices across the United States, and continues to spread, suggesting success for its social marketing-based dissemination strategy. Discussion and conclusion: Development, execution and evaluation of the TIDES marketing effort show that social marketing is a promising approach for promoting implementation of evidence-based interventions in integrated healthcare systems. PMID:19785754

  11. A social marketing approach to implementing evidence-based practice in VHA QUERI: the TIDES depression collaborative care model

    Directory of Open Access Journals (Sweden)

    Parker Louise E

    2009-09-01

    Collaborative care models for depression in primary care are effective and cost-effective, but difficult to spread to new sites. Translating Initiatives for Depression into Effective Solutions (TIDES) is an initiative to promote evidence-based collaborative care in the U.S. Veterans Health Administration (VHA). Social marketing applies marketing techniques to promote positive behavior change. As described in this paper, TIDES used a social marketing approach to foster national spread of collaborative care models. TIDES social marketing approach: The approach relied on a sequential model of behavior change and explicit attention to audience segmentation. Segments included VHA national leadership, Veterans Integrated Service Network (VISN) regional leadership, facility managers, frontline providers, and veterans. TIDES communications, materials and messages targeted each segment, guided by an overall marketing plan. Results: Depression collaborative care based on the TIDES model was adopted by VHA as part of the new Primary Care Mental Health Initiative and associated policies. It is currently in use in more than 50 primary care practices across the United States, and continues to spread, suggesting success for its social marketing-based dissemination strategy. Discussion and conclusion: Development, execution and evaluation of the TIDES marketing effort show that social marketing is a promising approach for promoting implementation of evidence-based interventions in integrated healthcare systems.

  12. A Multi-Objective Approach for Protein Structure Prediction Based on an Energy Model and Backbone Angle Preferences.

    Science.gov (United States)

    Tsay, Jyh-Jong; Su, Shih-Chieh; Yu, Chin-Sheng

    2015-07-03

    Protein structure prediction (PSP) is concerned with the prediction of protein tertiary structure from primary structure and is a challenging calculation problem. After decades of research effort, numerous solutions have been proposed for optimisation methods based on energy models. However, further investigation and improvement is still needed to increase the accuracy and similarity of structures. This study presents a novel backbone angle preference factor, which is one of the factors inducing protein folding. The proposed multiobjective optimisation approach simultaneously considers energy models and backbone angle preferences to solve the ab initio PSP. To prove the effectiveness of the multiobjective optimisation approach based on the energy models and backbone angle preferences, 75 amino acid sequences with lengths ranging from 22 to 88 amino acids were selected from the CB513 data set to be the benchmarks. The data sets were highly dissimilar, therefore indicating that they are meaningful. The experimental results showed that the root-mean-square deviation (RMSD) of the multiobjective optimization approach based on energy model and backbone angle preferences was superior to those of typical energy models, indicating that the proposed approach can facilitate the ab initio PSP.

  13. A Multi-Objective Approach for Protein Structure Prediction Based on an Energy Model and Backbone Angle Preferences

    Directory of Open Access Journals (Sweden)

    Jyh-Jong Tsay

    2015-07-01

    Protein structure prediction (PSP) is concerned with the prediction of protein tertiary structure from primary structure and is a challenging calculation problem. After decades of research effort, numerous solutions have been proposed for optimisation methods based on energy models. However, further investigation and improvement is still needed to increase the accuracy and similarity of structures. This study presents a novel backbone angle preference factor, which is one of the factors inducing protein folding. The proposed multiobjective optimisation approach simultaneously considers energy models and backbone angle preferences to solve the ab initio PSP. To prove the effectiveness of the multiobjective optimisation approach based on the energy models and backbone angle preferences, 75 amino acid sequences with lengths ranging from 22 to 88 amino acids were selected from the CB513 data set to be the benchmarks. The data sets were highly dissimilar, therefore indicating that they are meaningful. The experimental results showed that the root-mean-square deviation (RMSD) of the multiobjective optimization approach based on the energy model and backbone angle preferences was superior to those of typical energy models, indicating that the proposed approach can facilitate the ab initio PSP.

  14. Evaluating the mindfulness-based coping program: an effectiveness study using a mixed model approach

    Directory of Open Access Journals (Sweden)

    Edvin Bru

    2012-01-01

    Since more than 450 million people worldwide suffer from mental disorders, interventions that promote mental health have been called for. Mindfulness-based coping (MBC) is an intervention based on coping skills from cognitive behavioral therapy integrating mindfulness practices. The aim of this study was to examine the effectiveness of the MBC program for psychiatric outpatients. The study employed a mixed research method, with a qualitative approach using semi-structured patient interviews and clinical assessments from patients’ therapists, and a quantitative approach using instruments measuring mindful coping, mental ill health, and life satisfaction. The study sample included 38 psychiatric outpatients from a district psychiatric outpatient service in Norway. Results suggested that although use of the different skills varied, participants had a positive experience with the program and positive changes in psychological functioning were observed. Findings provide knowledge regarding the design of interventions integrating mindfulness to promote more adequate psychological coping.

  15. Nonlinear scaling analysis approach of agent-based Potts financial dynamical model.

    Science.gov (United States)

    Hong, Weijia; Wang, Jun

    2014-12-01

    A financial agent-based price model is developed and investigated using one of the dynamic systems of statistical physics, the Potts model. The Potts model, a generalization of the Ising model to more than two components, is a model of interacting spins on a crystalline lattice which describes the interaction strength among the agents. In this work, we investigate and analyze the correlation behavior of normalized returns of the proposed financial model by the power law classification scheme analysis and the empirical mode decomposition analysis. Moreover, the daily returns of the Shanghai Composite Index and the Shenzhen Component Index are considered, and a comparative nonlinear analysis of the statistical behavior of returns between the actual data and the simulation data is exhibited.
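
    The underlying dynamics are standard q-state Potts Monte Carlo. A minimal sketch in which the change in the dominant-spin fraction is mapped to a toy return series (lattice size, temperature and the return mapping are illustrative choices, not the paper's calibration):

```python
import numpy as np

rng = np.random.default_rng(6)
q, L, beta, J = 3, 32, 1.0, 1.0          # states, lattice size, 1/T, coupling
spins = rng.integers(q, size=(L, L))

def sweep(spins):
    """One Metropolis sweep of the q-state Potts model with energy
    E = -J * sum of delta(s_i, s_j) over nearest neighbours."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        new = rng.integers(q)
        nbrs = [spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                spins[i, (j + 1) % L], spins[i, (j - 1) % L]]
        dE = -J * (sum(n == new for n in nbrs) - sum(n == spins[i, j] for n in nbrs))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = new

def dominant_fraction(spins):
    return np.bincount(spins.ravel(), minlength=q).max() / spins.size

returns, prev = [], dominant_fraction(spins)
for _ in range(200):
    sweep(spins)
    cur = dominant_fraction(spins)
    returns.append(cur - prev)           # toy "return" series
    prev = cur
print("std of simulated returns:", np.std(returns))
```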

  16. A physically based approach to modelling radionuclide transport in the biosphere.

    Science.gov (United States)

    Parkin, G; Anderton, S P; Ewen, J; O'Donnell, G M; Thorne, M C; Crossland, I G

    1999-12-01

    Calculations of radiological risk are required to assess the safety of any potential future UK deep underground repository for intermediate-level and certain low-level solid radioactive wastes. In support of such calculations, contaminant movement and dilution in the terrestrial biosphere is investigated using the physically based modelling system SHETRAN. Two case studies are presented involving modelling of contaminants representing long-lived poorly sorbed radionuclides in the near-surface aquifers and surface waters of hypothetical catchments. The contaminants arise from diffuse sources at the base of the modelled aquifers. The catchments are characterised in terms of detailed spatial data for topography, the river network, soils and vegetation. Simulations are run for temperate and boreal climates representing possible future conditions at a repository site. Results are presented in terms of the concentration of contaminants in the aquifer, in soils and in surface waters; these are used to support the simpler models used in risk calculations.

  17. On the equivalence between traction- and stress-based approaches for the modeling of localized failure in solids

    Science.gov (United States)

    Wu, Jian-Ying; Cervera, Miguel

    2015-09-01

    This work investigates systematically traction- and stress-based approaches for the modeling of strong and regularized discontinuities induced by localized failure in solids. Two complementary methodologies, i.e., discontinuities localized in an elastic solid and strain localization of an inelastic softening solid, are addressed. In the former it is assumed a priori that the discontinuity forms with a continuous stress field and along the known orientation. A traction-based failure criterion is introduced to characterize the discontinuity and the orientation is determined from Mohr's maximization postulate. If the displacement jumps are retained as independent variables, the strong/regularized discontinuity approaches follow, requiring constitutive models for both the bulk and the discontinuity. Elimination of the displacement jumps at the material point level results in the embedded/smeared discontinuity approaches in which an overall inelastic constitutive model fulfilling the static constraint suffices. The second methodology is then adopted to check whether the assumed strain localization can occur and to identify its consequences on the resulting approaches. The kinematic constraint guaranteeing stress boundedness and continuity upon strain localization is established for general inelastic softening solids. Application to a unified stress-based elastoplastic damage model naturally yields all the ingredients of a localized model for the discontinuity (band), justifying the first methodology. Two dual but not necessarily equivalent approaches, i.e., the traction-based elastoplastic damage model and the stress-based projected discontinuity model, are identified. The former is equivalent to the embedded and smeared discontinuity approaches, whereas in the latter the discontinuity orientation and associated failure criterion are determined consistently from the kinematic constraint rather than given a priori. The bi-directional connections and equivalence conditions…

  18. Evaluation of conditional non-linear optimal perturbation obtained by an ensemble-based approach using the Lorenz-63 model

    Directory of Open Access Journals (Sweden)

    Xudong Yin

    2014-02-01

    The authors propose to implement conditional non-linear optimal perturbation related to model parameters (CNOP-P) through an ensemble-based approach. The approach was first used in our earlier study and is improved here to be suitable for calculating CNOP-P. Idealised experiments using the Lorenz-63 model are conducted to evaluate the performance of the improved ensemble-based approach. The results show that the maximum prediction error after optimisation has been multiplied manifold compared with the initial-guess prediction error, and is extremely close to, or greater than, the maximum value of the exhaustive attack method (a million random samples). The calculation of CNOP-P by the ensemble-based approach is capable of maintaining a high accuracy over a long prediction time under different constraints and initial conditions. Further, the CNOP-P obtained by the approach is applied to sensitivity analysis of the Lorenz-63 model. The sensitivity analysis indicates that when the prediction time is set to 0.2 time units, the Lorenz-63 model becomes extremely insensitive to one parameter, which leaves the other two parameters to affect the uncertainty of the model. Finally, a series of parameter estimation experiments are performed to verify the sensitivity analysis. It is found that when the three parameters are estimated simultaneously, the insensitive parameter is estimated much worse, but the Lorenz-63 model can still generate a very good simulation thanks to the relatively accurate values of the other two parameters. When only the two sensitive parameters are estimated simultaneously and the insensitive parameter is left non-optimised, the outcome is better than when the three parameters are estimated simultaneously. With the increase of prediction time and observation, however, the model sensitivity to the insensitive parameter increases accordingly and the insensitive parameter can also be estimated successfully.
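
    The essence of CNOP-P, i.e. the parameter perturbation within a given constraint that maximises prediction error, can be approximated by the random-sampling baseline mentioned in the record. A sketch on Lorenz-63, assuming a 10% constraint radius and a 0.2-time-unit forecast window (the paper's ensemble-based optimiser is far more efficient than this brute-force stand-in):

```python
import numpy as np

def lorenz63(x, p, dt=0.01, steps=20):
    """Integrate Lorenz-63 with forward Euler; p = (sigma, rho, beta)."""
    sigma, rho, beta = p
    x = np.array(x, dtype=float)
    for _ in range(steps):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
    return x

p_ref = np.array([10.0, 28.0, 8.0 / 3.0])
x0 = np.array([1.0, 3.0, 15.0])
ref = lorenz63(x0, p_ref)

rng = np.random.default_rng(7)
radius = 0.1 * p_ref                     # constraint on parameter perturbations
best_err, best_dp = -1.0, None
for _ in range(5000):                    # crude random-sampling optimiser
    dp = rng.uniform(-radius, radius)
    err = np.linalg.norm(lorenz63(x0, p_ref + dp) - ref)
    if err > best_err:
        best_err, best_dp = err, dp
print("approximate CNOP-P:", best_dp, "prediction error:", best_err)
```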

  19. An equation-free computational approach for extracting population-level behavior from individual-based models of biological dispersal

    CERN Document Server

    Erban, Radek; Kevrekidis, Ioannis G.; Othmer, Hans G.

    2005-01-01

    The movement of many organisms can be described as a random walk at either or both the individual and population level. The rules for this random walk are based on complex biological processes and it may be difficult to develop a tractable, quantitatively-accurate, individual-level model. However, important problems in areas ranging from ecology to medicine involve large collections of individuals, and a further intellectual challenge is to model population-level behavior based on a detailed individual-level model. Because of the large number of interacting individuals and because the individual-level model is complex, classical direct Monte Carlo simulations can be very slow, and often of little practical use. In this case, an equation-free approach may provide effective methods for the analysis and simulation of individual-based models. In this paper we analyze equation-free coarse projective integration. For analytical purposes, we start with known partial differential equations describing biological rando...

  20. A Graph-Based Approach for 3D Building Model Reconstruction from Airborne LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Bin Wu

    2017-01-01

    3D building model reconstruction is of great importance for environmental and urban applications. Airborne light detection and ranging (LiDAR) is a very useful data source for acquiring detailed geometric and topological information of building objects. In this study, we employed a graph-based method based on hierarchical structure analysis of building contours derived from LiDAR data to reconstruct urban building models. The proposed approach first uses a graph theory-based localized contour tree method to represent the topological structure of buildings, then separates the buildings into different parts by analyzing their topological relationships, and finally reconstructs the building model by integrating all the individual models established through the bipartite graph matching process. Our approach provides a more complete topological and geometrical description of building contours than existing approaches. We evaluated the proposed method by applying it to the Lujiazui region in Shanghai, China, a complex and large urban scene with various types of buildings. The results revealed that complex buildings could be reconstructed successfully with a mean modeling error of 0.32 m. Our proposed method offers a promising solution for 3D building model reconstruction from airborne LiDAR point clouds.

  1. MIRAGE: a functional genomics-based approach for metabolic network model reconstruction and its application to cyanobacteria networks.

    Science.gov (United States)

    Vitkin, Edward; Shlomi, Tomer

    2012-11-29

    Genome-scale metabolic network reconstructions are considered a key step in quantifying the genotype-phenotype relationship. We present a novel gap-filling approach, MetabolIc Reconstruction via functionAl GEnomics (MIRAGE), which identifies missing network reactions by integrating metabolic flux analysis and functional genomics data. MIRAGE's performance is demonstrated on the reconstruction of metabolic network models of E. coli and Synechocystis sp. and validated via existing networks for these species. Then, it is applied to reconstruct genome-scale metabolic network models for 36 sequenced cyanobacteria amenable for constraint-based modeling analysis and specifically for metabolic engineering. The reconstructed network models are supplied via standard SBML files.
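
    The constraint-based analysis that such reconstructions feed into is flux balance analysis: maximise a biomass flux subject to the steady-state balance S·v = 0 and flux bounds. A toy three-reaction sketch (the stoichiometry is invented for illustration, not taken from the MIRAGE networks):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A, A -> B, B -> biomass (one row per internal metabolite)
#              v_uptake  v_conv  v_biomass
S = np.array([[1.0,      -1.0,    0.0],    # metabolite A
              [0.0,       1.0,   -1.0]])   # metabolite B
bounds = [(0, 10), (0, 10), (0, 10)]       # flux bounds

c = np.array([0.0, 0.0, -1.0])             # maximise biomass = minimise -v_biomass
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)            # expect all fluxes at the uptake limit
```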

  2. Analysis of GARCH modeling in financial markets: an approach based on technical analysis strategies

    Directory of Open Access Journals (Sweden)

    Mircea Cristian Gherman

    2011-08-01

    In this paper we performed an analysis to assess the effect of GARCH modeling on the performance of trading rules applied to a stock market index. Our study relies on the overlap between econometric modeling, technical analysis and a simulation computing technique. The non-linear structures present in the daily returns of the analyzed index, and also in other financial series, together with the phenomenon of volatility clustering, are premises for applying a GARCH model. In our approach the standardized GARCH innovations are resampled using the bootstrap method. Technical analysis trading strategies are then applied to the simulated data. For all the simulated paths the “p-values” are computed in order to verify that the hypothesis concerning the goodness of fit of the GARCH model on the BET index is accepted. The data processed with trading rules show evidence that the GARCH model is a good choice for econometric modeling of financial time series, including the Romanian exchange trade index.
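
    The resampling procedure can be sketched in a few lines: posit GARCH(1,1) dynamics, standardise the innovations, resample them with replacement, rebuild return paths and apply a trading rule to each. A minimal sketch, assuming fixed GARCH parameters rather than values estimated from the BET index, and a simple moving-average rule as a stand-in for the technical strategies:

```python
import numpy as np

rng = np.random.default_rng(8)
omega, alpha, beta = 0.05, 0.10, 0.85    # assumed GARCH(1,1) parameters

def garch_path(z, omega, alpha, beta):
    """Rebuild a return path from (resampled) standardized innovations z."""
    r = np.empty_like(z)
    h = omega / (1.0 - alpha - beta)     # start at the unconditional variance
    for t in range(z.size):
        r[t] = np.sqrt(h) * z[t]
        h = omega + alpha * r[t] ** 2 + beta * h
    return r

z = rng.standard_normal(1000)            # stand-in for standardized residuals
original = garch_path(z, omega, alpha, beta)

def sma_rule_pnl(returns, window=20):
    """Long when price is above its trailing moving average, else flat."""
    prices = np.cumsum(returns)
    sma = np.convolve(prices, np.ones(window) / window, "valid")[:-1]
    signal = prices[window:] > sma
    return np.sum(returns[window + 1:] * np.where(signal[:-1], 1.0, 0.0))

# Bootstrap: resample innovations, rebuild paths, re-apply the rule
pnl = [sma_rule_pnl(garch_path(rng.choice(z, z.size), omega, alpha, beta))
       for _ in range(200)]
print("bootstrap p-value:", np.mean(np.array(pnl) >= sma_rule_pnl(original)))
```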

  3. Is equine colic seasonal? Novel application of a model based approach

    Directory of Open Access Journals (Sweden)

    Proudman Christopher J

    2006-08-01

    Background: Colic is an important cause of mortality and morbidity in domesticated horses, yet many questions about this condition remain to be answered. One such question is: does season have an effect on the occurrence of colic? Time-series analysis provides a rigorous statistical approach to this question but until now, to our knowledge, it has not been used in this context. Traditional time-series modelling approaches have limited applicability in the case of relatively rare diseases, such as specific types of equine colic. In this paper we present a modelling approach that respects the discrete nature of the count data and, using a regression model with a correlated latent variable and one with a linear trend, we explored the seasonality of specific types of colic occurring at a UK referral hospital between January 1995 and December 2004. Results: Six- and twelve-month cyclical patterns were identified for all colics, all medical colics, epiploic foramen entrapment (EFE), equine grass sickness (EGS), surgically treated and large colon displacement/torsion colic groups. A twelve-month cyclical pattern only was seen in the large colon impaction colic group. There was no evidence of any cyclical pattern in the pedunculated lipoma group. These results were consistent irrespective of whether we were using a model including latent correlation or trend. Problems were encountered in attempting to include both trend and latent serial dependence in models simultaneously; this is likely to be a consequence of a lack of power to separate these two effects in the presence of small counts, yet in reality the underlying physical effect is likely to be a combination of both. Conclusion: The use of a regression model with either an autocorrelated latent variable or a linear trend has allowed us to establish formally a seasonal component to certain types of colic presented to a UK referral hospital over a 10-year period. These patterns appeared to coincide…
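
    The fixed-trend variant of the count models used here can be written directly as a Poisson GLM with annual and semi-annual harmonics (the latent-variable variant needs more specialised tooling). A sketch with simulated monthly counts, assuming statsmodels is available:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
months = np.arange(120)                            # 10 years of monthly counts

# Simulated counts with 12- and 6-month cycles plus a weak linear trend
mu = np.exp(1.0 + 0.3 * np.sin(2 * np.pi * months / 12)
            + 0.2 * np.cos(2 * np.pi * months / 6) + 0.002 * months)
y = rng.poisson(mu)

X = sm.add_constant(np.column_stack([
    np.sin(2 * np.pi * months / 12), np.cos(2 * np.pi * months / 12),
    np.sin(2 * np.pi * months / 6), np.cos(2 * np.pi * months / 6),
    months,                                        # linear trend term
]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.summary())                               # harmonic terms capture seasonality
```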

  4. A Flexible Web-Based Approach to Modeling Tandem Photocatalytic Devices

    DEFF Research Database (Denmark)

    Seger, Brian; Hansen, Ole; Vesborg, Peter Christian Kjærgaard

    2017-01-01

    There have been several works modeling the optimal band gaps for tandem photocatalytic water splitting devices under different assumptions. Due to the many parameters involved, it is impossible for the authors to consider every conceivable situation. In this work, we have developed a web-based model … previous experimental photoelectrodes, and quantitatively relates their performance to what would typically be expected via modeling programs.

  5. Toward a Model-Based Approach for Flight System Fault Protection

    Science.gov (United States)

    Day, John; Meakin, Peter; Murray, Alex

    2012-01-01

    Use SysML/UML to describe the physical structure of the system; this part of the model would be shared with other teams (FS Systems Engineering, Planning & Execution, V&V, Operations, etc.) in an integrated model-based engineering environment. Use the UML Profile mechanism, defining Stereotypes to precisely express the concepts of the FP domain; this extends the UML/SysML languages to contain the FP concepts. Use UML/SysML, along with the profile, to capture FP concepts and relationships in the model. Generate typical FP engineering products (the FMECA, Fault Tree, MRD, V&V Matrices).

  6. A GIS-based approach for modeling the fate and transport of pollutants in Europe.

    Science.gov (United States)

    Pistocchi, A

    2008-05-15

    This paper presents an approach to estimate chemical concentrations in multiple environmental media (soil, water, and the atmosphere) with the sole use of basic geographical information system (GIS) operations and, particularly, map algebra. This allows solving mass balance equations in a different way from the traditional methods involving numerical or analytical solution of systems of equations, producing maps of chemical fluxes and concentrations only through combinations of maps of emissions and environmental removal or transfer rates. Benchmarking with the well-established EMEP MSCE-POP model shows that the method provides results consistent with this more detailed description. When available, experimental evidence equally supports the proposed method in relation to the more complex approaches.
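
    With rasters held as arrays, the steady-state mass balance reduces to elementwise map algebra: concentration is emission divided by the total removal rate, and inter-media transfer is a further multiplication. A sketch with invented rate maps:

```python
import numpy as np

rng = np.random.default_rng(10)
grid = (100, 100)                                # one cell per grid square

emission = rng.gamma(2.0, 1.0, grid)             # emission map, kg/(cell*yr)
k_deg = np.full(grid, 0.5)                       # degradation rate map, 1/yr
k_transfer = np.full(grid, 0.2)                  # soil-to-water transfer, 1/yr

# Steady state in soil: input = output  =>  M_soil = E / (k_deg + k_transfer)
m_soil = emission / (k_deg + k_transfer)
flux_to_water = k_transfer * m_soil              # input map for the water medium
print("mean soil mass per cell:", m_soil.mean())
```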

  7. CPS Modeling of CNC Machine Tool Work Processes Using an Instruction-Domain Based Approach

    Directory of Open Access Journals (Sweden)

    Jihong Chen

    2015-06-01

    Building cyber-physical system (CPS) models of machine tools is a key technology for intelligent manufacturing. The massive electronic data from a computer numerical control (CNC) system during the work processes of a CNC machine tool is the main source of the big data on which a CPS model is established. In this work-process model, a method based on the instruction domain is applied to analyze the electronic big data, and a quantitative description of the numerical control (NC) processes is built according to the G code of the processes. Utilizing the instruction domain, a work-process CPS model is established on the basis of the accurate, real-time mapping of the manufacturing tasks, resources, and status of the CNC machine tool. Using such models, case studies are conducted on intelligent-machining applications, such as the optimization of NC processing parameters and the health assurance of CNC machine tools.

  8. Wavelet-entropy data pre-processing approach for ANN-based groundwater level modeling

    Science.gov (United States)

    Nourani, Vahid; Alami, Mohammad Taghi; Vousoughi, Farnaz Daneshvar

    2015-05-01

    Accurate and reliable groundwater level forecasting models can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. In this paper, a Self-Organizing-Map (SOM)-based clustering technique was used to identify spatially homogeneous clusters of groundwater level (GWL) data for a feed-forward neural network (FFNN) to model one- and multi-step-ahead GWLs. The wavelet transform (WT) was also used to extract dynamic and multi-scale features of the non-stationary GWL, runoff and rainfall time series. The performance of the FFNN model was compared to the newly proposed combined WT-FFNN model and also to the conventional linear forecasting method of ARIMAX (Auto Regressive Integrated Moving Average with exogenous input). GWL predictions were investigated under three different scenarios. The results indicated that the proposed FFNN model coupled with the SOM-based clustering method decreased the dimensionality of the input variables and consequently the complexity of the FFNN models. On the other hand, the application of the wavelet transform to GWL data increased the performance of the FFNN model by up to 15.3% on average by revealing the dominant periods of the process.
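
    The WT-FFNN coupling amounts to decomposing the input series into multi-scale sub-series and feeding those, rather than the raw signal, to the network. A sketch using PyWavelets and scikit-learn, where the wavelet choice, toy series and network size are illustrative:

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
t = np.arange(600)
gwl = np.sin(2 * np.pi * t / 365) + 0.1 * rng.normal(size=t.size)  # toy GWL series

# Multi-scale features: reconstruct each approximation/detail band separately
coeffs = pywt.wavedec(gwl, "db4", level=3)
bands = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    bands.append(pywt.waverec(kept, "db4")[: t.size])
X = np.column_stack(bands)

# One-step-ahead prediction from the band values at time t
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:-1], gwl[1:])
print("in-sample R^2:", model.score(X[:-1], gwl[1:]))
```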

  10. Physics-Based Correction of Inhomogeneities in Temperature Series: Model Transferability Testing and Comparison to Statistical Approaches

    Science.gov (United States)

    Auchmann, Renate; Brönnimann, Stefan; Croci-Maspoli, Mischa

    2016-04-01

    For the correction of inhomogeneities in sub-daily temperature series, Auchmann and Brönnimann (2012) developed a physics-based model for one specific type of break, i.e. the transition from a Wild screen to a Stevenson screen, at one specific station in Basel, Switzerland. The model is based solely on physical considerations; no relationships of the covariates to the differences between the parallel measurements were investigated. The physics-based model requires detailed information on the screen geometry and the location, and includes a variety of covariates. The model is mainly based on correcting the radiation error, including a modification by ambient wind. In this study we test the application of the model to another station, Zurich, which experienced the same type of transition. Furthermore, we compare the performance of the physics-based correction to purely statistical correction approaches (a constant correction, and correcting for the annual cycle using splines). In Zurich the Wild screen was replaced in 1954 by the Stevenson screen; from 1954 to 1960 parallel temperature measurements in both screens were taken, which are used to assess the performance of the applied corrections. For Zurich the required model input is available (i.e. three times daily observations of wind, cloud cover, pressure and humidity measurements, and local times of sunset and sunrise). However, a large number of stations do not measure the additional input data required by the model, which hampers its transferability and applicability to other stations. Hence, we test possible simplifications and generalizations of the model to make it more easily applicable to stations with the same type of inhomogeneity. In a last step we test whether other types of transitions (e.g., from a Stevenson screen to an automated weather system) can be corrected using the principle of a physics-based approach.

  11. Development of a Subcell Based Modeling Approach for Modeling the Architecturally Dependent Impact Response of Triaxially Braided Polymer Matrix Composites

    Science.gov (United States)

    Sorini, Chris; Chattopadhyay, Aditi; Goldberg, Robert K.; Kohlman, Lee W.

    2016-01-01

    Understanding the high velocity impact response of polymer matrix composites with complex architectures is critical to many aerospace applications, including engine fan blade containment systems where the structure must be able to completely contain fan blades in the event of a blade-out. Despite the benefits offered by these materials, the complex nature of textile composites presents a significant challenge for the prediction of deformation and damage under both quasi-static and impact loading conditions. The relatively large mesoscale repeating unit cell (in comparison to the size of structural components) causes the material to behave like a structure rather than a homogeneous material. Impact experiments conducted at NASA Glenn Research Center have shown the damage patterns to be a function of the underlying material architecture. Traditional computational techniques that involve modeling these materials using smeared homogeneous, orthotropic material properties at the macroscale result in simulated damage patterns that are a function of the structural geometry, but not the material architecture. In order to preserve heterogeneity at the highest length scale in a robust yet computationally efficient manner, and capture the architecturally dependent damage patterns, a previously developed subcell modeling approach, where the braided composite unit cell is approximated as a series of four adjacent laminated composites, is utilized. This work discusses the implementation of the subcell methodology into the commercial transient dynamic finite element code LS-DYNA (Livermore Software Technology Corp.). Verification and validation studies are also presented, including simulation of the tensile response of straight-sided and notched quasi-static coupons composed of a T700/PR520 triaxially braided [0deg/60deg/-60deg] composite. Based on the results of the verification and validation studies, advantages and limitations of the methodology, as well as plans for future work, are discussed.

  12. Two-dimensional magnetic modeling of ferromagnetic materials by using a neural networks based hybrid approach

    Energy Technology Data Exchange (ETDEWEB)

    Cardelli, E.; Faba, A. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A.; Lozito, G.M.; Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)

    2016-04-01

    This paper presents a hybrid neural network approach to model magnetic hysteresis at the macro-magnetic scale. The approach is intended to be coupled with numerical treatments of magnetic hysteresis, such as FEM solvers of Maxwell's equations in the time domain (as in the non-linear dynamic analysis of electrical machines and similar devices), allowing a complete computer simulation with acceptable run times. The proposed Hybrid Neural System has four inputs representing the magnetic induction and magnetic field components at each time step, and it is trained by 2D and scalar measurements performed on the magnetic material to be modeled. The magnetic induction B is taken as the entry point, and the output of the Hybrid Neural System returns the predicted value of the field H at the same time step. Within the Hybrid Neural System, a suitably trained neural network is used for predicting the hysteretic behavior of the material to be modeled. Validations against experimental tests and simulations for symmetric, non-symmetric and minor loops are presented.
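
    A minimal stand-in for the neural part of such a system, assuming scikit-learn and synthetic data in place of the 2D measurements (the four inputs follow the abstract's description; the target mapping here is a placeholder, not an actual hysteresis model):

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Inputs: [Bx, By, Hx_prev, Hy_prev] at each time step; output: [Hx, Hy].
      rng = np.random.default_rng(1)
      X = rng.uniform(-1, 1, size=(2000, 4))
      y = 0.8 * X[:, :2] + 0.2 * X[:, 2:]   # placeholder map, not hysteresis

      net = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=2000,
                         random_state=0)
      net.fit(X, y)                          # train on measured loops instead
      print(net.predict(X[:3]))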

  13. First-principle and data-driven model-based approach in rotating machinery failure mode detection

    Directory of Open Access Journals (Sweden)

    G. Wszołek

    2010-12-01

    Full Text Available Purpose: A major concern of modern diagnostics is the use of vibration or acoustic signals generated by a machine to reveal its operating condition. This paper presents a method which allows estimates of model eigenvalues, represented by complex numbers, to be obtained periodically. The method is intended to diagnose rotating machinery under transient conditions. Design/methodology/approach: The method uses a parametric data-driven model, the parameters of which are estimated using operational data. Findings: Experimental results were obtained with a laboratory single-disc rotor system equipped with both sliding and hydrodynamic bearings. The test rig allows data to be collected under normal (reference) and malfunctioning operation, including oil instabilities, rub, looseness and unbalance. Research limitations/implications: Numerical and experimental studies performed to validate the method are presented in the paper. Moreover, literature and industrial case studies are analyzed to better understand the vibration modes of the rotor under abnormal operating conditions. Practical implications: A model of the test rig has been developed to verify the proposed method and to understand the results of the experiments. A hardware realization of the method was implemented as a standalone operating module based on the Texas Instruments TMS3200LF2407 Starter Kit. Originality/value: A parametric approach is proposed instead of a nonparametric one for diagnosing rotating machinery.
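
    The core computation, periodic estimates of model eigenvalues from operational data, can be sketched as a least-squares AR fit followed by an eigendecomposition of the companion matrix (numpy only; the signal and model order are illustrative):

      import numpy as np

      def ar_eigenvalues(x, order):
          """Least-squares AR(p) fit; the eigenvalues of the companion
          matrix are the model poles tracked as a diagnostic feature."""
          X = np.column_stack([x[order - i - 1 : len(x) - i - 1]
                               for i in range(order)])
          a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
          C = np.zeros((order, order))
          C[0, :] = a                       # AR coefficients in the top row
          C[1:, :-1] = np.eye(order - 1)    # shift structure below
          return np.linalg.eigvals(C)

      t = np.arange(0, 10, 0.01)            # synthetic vibration signal
      sig = np.exp(-0.05 * t) * np.sin(2 * np.pi * 3 * t)
      sig += 0.01 * np.random.default_rng(0).standard_normal(len(t))
      print(ar_eigenvalues(sig, order=4))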

  14. A Substructural Damage Identification Approach for Shear Structure Based on Changes in the First AR Model Coefficient Matrix

    Directory of Open Access Journals (Sweden)

    Liu Mei

    2015-01-01

    Full Text Available A substructural damage identification approach based on changes in the first AR model coefficient matrix is proposed in this paper to identify structural damage, including its location and severity. Firstly, a substructure approach is adopted to divide a complete structure into several substructures, which significantly reduces the number of unknown parameters for each substructure so that damage identification can be conducted on each substructure independently. To establish a relation between changes in AR model coefficients and structural damage for each substructure, a theoretical derivation is presented. The accelerations are then fed into ARMAX models to determine the AR model coefficients for each substructure under undamaged and various damaged conditions, based on which the change in the first AR model coefficient matrix (CFAR) is obtained and adopted as the damage indicator for the proposed substructure damage identification approach. To assess the performance of the proposed procedure, a numerical simulation and an experimental verification are carried out; the results show that the procedure can successfully locate and quantify damage in both the simulation and the laboratory experiment.
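
    A minimal numpy sketch of the CFAR indicator, assuming a plain least-squares vector AR fit in place of the paper's ARMAX identification, with synthetic acceleration records:

      import numpy as np

      def first_ar_matrix(acc, order=2):
          """Fit a vector AR model to substructure accelerations
          (channels x samples); return the first AR coefficient matrix."""
          n, N = acc.shape
          Y = acc[:, order:]
          Z = np.vstack([acc[:, order - k : N - k]
                         for k in range(1, order + 1)])
          A = Y @ np.linalg.pinv(Z)          # [A1 A2 ... Ap], each n x n
          return A[:, :n]                    # A1

      def cfar_indicator(acc_ref, acc_test, order=2):
          """Damage indicator: norm of the change in the first AR matrix."""
          return np.linalg.norm(first_ar_matrix(acc_test, order)
                                - first_ar_matrix(acc_ref, order), "fro")

      rng = np.random.default_rng(0)
      ref = rng.standard_normal((3, 5000))
      test = rng.standard_normal((3, 5000)) * 1.2  # stand-in "damaged" data
      print(cfar_indicator(ref, test))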

  15. Experimental evaluation of neural, statistical, and model-based approaches to FLIR ATR

    Science.gov (United States)

    Li, Baoxin; Zheng, Qinfen; Der, Sandor Z.; Chellappa, Rama; Nasrabadi, Nasser M.; Chan, Lipchen A.; Wang, LinCheng

    1998-09-01

    This paper presents an empirical evaluation of a number of recently developed Automatic Target Recognition algorithms for Forward-Looking InfraRed (FLIR) imagery using a large database of real second-generation FLIR images. The algorithms evaluated are based on convolutional neural networks (CNN), principal component analysis (PCA), linear discriminant analysis (LDA), learning vector quantization (LVQ), and modular neural networks (MNN). Two model-based algorithms, using Hausdorff metric based matching and geometric hashing, are also evaluated. A hierarchical pose estimation system using CNN plus either PCA or LDA, developed by the authors, is also evaluated using the same data set.

  16. Geospatial Modeling and Simulation Based Approach for Developing Commuting Patterns of School Children

    Energy Technology Data Exchange (ETDEWEB)

    Bhaduri, Budhendra L [ORNL; Liu, Cheng [ORNL; Nutaro, James J [ORNL; Patterson, Lauren A [ORNL

    2008-01-01

    Numerous socio-environmental studies, including those in public health, utilize population data as one of the essential elements of modeling and analysis. Typically population data are reported by administrative or accounting units. For example, in the US the Census Bureau reports population counts by census blocks, block groups, and tracts. At any resolution, a uniform population distribution is assumed, and the population figures and demographic characteristics are typically associated with block (polygon) centroids. In geographic analyses these points are considered representative of the population for census polygons. Traditional spatial modeling approaches commonly include intersection of census data with buffers of influence to quantify the target population, using either inclusion-exclusion (of the centroids) or area-weighted population estimation methods. However, it is well understood that uniform population distribution is the weakest assumption, and by considering census polygon centroids as representative of the population, all analytical approaches are very likely to overestimate or underestimate the analytical results. Given that population is spatially aggregated by Census accounting units (such as blocks), there often is great uncertainty about the spatial distribution of residents within those accounting units. This is particularly true in suburban and rural areas, where the population is dispersed to a greater degree than in urban areas. Because of this uncertainty, there is significant potential to misclassify people with respect to their location relative to pollution sources, and consequently it becomes challenging to determine if certain sub-populations are actually more likely than others to get differential environmental exposure. In this paper, we describe development and utilization of a high resolution demographic data driven approach for modeling and simulation at Oak Ridge National Laboratory.
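
    The contrast between centroid inclusion-exclusion and area-weighted estimation can be shown with a toy example (hypothetical blocks, with the geometric intersection reduced to precomputed fractions):

      # Each census block: total population, whether its centroid falls
      # inside the buffer of influence, and the fraction of its area that
      # the buffer intersects (all values hypothetical).
      blocks = [
          {"pop": 120, "centroid_in": True,  "area_frac": 0.35},
          {"pop": 300, "centroid_in": False, "area_frac": 0.60},
          {"pop": 80,  "centroid_in": True,  "area_frac": 1.00},
      ]

      centroid_estimate = sum(b["pop"] for b in blocks if b["centroid_in"])
      area_weighted = sum(b["pop"] * b["area_frac"] for b in blocks)
      print(centroid_estimate, area_weighted)   # 200 vs 302.0 people exposed

    The gap between the two estimates is exactly the uncertainty the high-resolution approach aims to reduce.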

  17. A probabilistic approach to investigate the effect of wave chronology on process-based morphological modelling

    NARCIS (Netherlands)

    Dastgheib, A.; Rajabalinejad, M.R.; Ranasinghe, R.; Roelvink, D.

    2012-01-01

    This paper demonstrates the sensitivity of morphological process-based models to the chronology of input wave conditions. In this research the effect of an emerged offshore breakwater on the morphology of the beach is investigated. A 30 day long morphological simulation with real time history of the

  18. A study of the diffusion of alternative fuel vehicles : An agent-based modeling approach

    NARCIS (Netherlands)

    Zhang, Ting; Gensler, Sonja; Garcia, Rosanna

    This paper demonstrates the use of an agent-based model (ABM) to investigate factors that can speed the diffusion of eco-innovations, namely alternative fuel vehicles (AFVs). The ABM provides the opportunity to consider the interdependencies inherent between key participants in the automotive

  19. Evidence-Based Approach to Treating Lateral Epicondylitis Using the Occupational Adaptation Model.

    Science.gov (United States)

    Bachman, Stephanie

    2016-01-01

    The occupational therapy Centennial Vision reinforces the importance of informing consumers about the benefit of occupational therapy and continuing to advocate for the unique client-centered role of occupational therapy. Occupational therapy practitioners working in hand therapy have traditionally found it difficult to combine the biomechanical foundations of hand therapy with the fundamental client-centered tenets of occupational therapy. Embracing our historical roots will become more important as health care evolves and third-party payers continue to scrutinize the need for the profession of occupational therapy. This article outlines a client-centered approach for hand therapists for the treatment of lateral epicondylitis using the Occupational Adaptation Model.

  20. An Efficient Approach in Analysis of DNA Base Calling Using Neural Fuzzy Model

    Science.gov (United States)

    2017-01-01

    This paper addresses the problem of a true representation and a reliable quality measure for DNA base calling. The implemented method deals with data set quality in DNA sequencing analysis, investigating the use of neuro-fuzzy techniques to predict a confidence value for each base in DNA base calling. The simulation model is an ANFIS design comprising three subsystems and a main system: the three features obtained from the subsystems are used by the main system to predict the confidence value for each base. The approach achieves effective results with high performance. PMID:28261268

  1. Important issues facing model-based approaches to tunneling transport in molecular junctions

    CERN Document Server

    Baldea, Ioan

    2015-01-01

    Extensive studies on thin films indicated a generic cubic current-voltage $I-V$ dependence as a salient feature of charge transport by tunneling. A quick glance at $I-V$ data for molecular junctions suggests a qualitatively similar behavior. This would render model-based studies almost irrelevant, since, whatever the model, its parameters can always be adjusted to fit symmetric (asymmetric) $I-V$ curves characterized by two (three) expansion coefficients. Here, we systematically examine popular models based on tunneling barrier or tight-binding pictures and demonstrate that, for a quantitative description at the biases of interest ($V$ slightly higher than the transition voltage $V_t$), cubic expansions do not suffice. A detailed collection of analytical formulae, as well as their conditions of applicability, is presented to help experimentalist colleagues process and interpret data obtained by measuring currents in molecular junctions. We discuss in detail the limits of applicabili...

  2. Agent-Based Model Approach to Complex Phenomena in Real Economy

    Science.gov (United States)

    Iyetomi, H.; Aoyama, H.; Fujiwara, Y.; Ikeda, Y.; Souma, W.

    An agent-based model for firms' dynamics is developed. The model consists of firm agents with identical characteristic parameters and a bank agent. The dynamics of those agents are described by their balance sheets. Each firm tries to maximize its expected profit under possible market risks. Infinite growth of a firm directed by the "profit maximization" principle is suppressed by the concept of a "going concern". The possibility of bankruptcy of firms is also introduced by incorporating a retardation effect of information on firms' decisions. The firms, mutually interacting through the monopolistic bank, become heterogeneous in the course of temporal evolution. Statistical properties of firms' dynamics obtained by simulations based on the model are discussed in light of observations in the real economy.
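
    A heavily simplified sketch of such a firm-bank ABM (all parameter values are invented, and the going-concern and information-retardation mechanisms of the paper are not modeled):

      import numpy as np

      rng = np.random.default_rng(42)
      n_firms, steps = 100, 50
      equity = np.ones(n_firms)
      alive = np.ones(n_firms, dtype=bool)

      for _ in range(steps):
          loans = 2.0 * equity[alive]              # leverage chosen by firms
          assets = equity[alive] + loans
          profit = (0.03 + 0.10 * rng.standard_normal(alive.sum())) * assets
          equity[alive] += profit - 0.05 * loans   # pay interest to the bank
          alive &= equity > 0                      # negative equity = bankrupt

      print("surviving firms:", int(alive.sum()))

    Even this caricature produces a heterogeneous firm-size distribution from identical initial conditions, which is the qualitative effect the paper studies.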

  3. A meteo-hydrological prediction system based on a multi-model approach for precipitation forecasting

    Directory of Open Access Journals (Sweden)

    S. Davolio

    2008-02-01

    Full Text Available The precipitation forecasted by a numerical weather prediction model, even at high resolution, suffers from errors which can be considerable at the scales of interest for hydrological purposes. In the present study, a fraction of the uncertainty related to meteorological prediction is taken into account by implementing a multi-model forecasting approach, aimed at providing multiple precipitation scenarios driving the same hydrological model. Therefore, the uncertainty associated with the quantitative precipitation forecast (QPF), conveyed by the multi-model ensemble, can be exploited by the hydrological model, propagating the error into the hydrological forecast.

    The proposed meteo-hydrological forecasting system is implemented and tested in a real-time configuration for several episodes of intense precipitation affecting the Reno river basin, a medium-sized basin located in northern Italy (Apennines. These episodes are associated with flood events of different intensity and are representative of different meteorological configurations responsible for severe weather affecting northern Apennines.

    The simulation results show that the coupled system is promising in the prediction of discharge peaks (both in terms of amount and timing) for warning purposes. The ensemble hydrological forecasts provide a range of possible flood scenarios that proved to be useful in supporting civil protection authorities in their decisions.
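
    The multi-model coupling can be caricatured in a few lines: each ensemble QPF member drives the same hydrological model (here a toy linear reservoir), and the spread of simulated peaks conveys the forecast uncertainty. All numbers are hypothetical:

      import numpy as np

      def bucket_discharge(rain, k=0.3, s0=5.0):
          """Toy linear-reservoir model: storage fills with rain,
          discharge is a fixed fraction of storage at each step."""
          s, q = s0, []
          for r in rain:
              s += r
              out = k * s
              s -= out
              q.append(out)
          return np.array(q)

      # Hypothetical multi-model QPF ensemble (mm per 6 h) for one event.
      qpf_ensemble = np.array([[0, 12, 30, 8, 2],
                               [0, 20, 45, 10, 1],
                               [0, 8, 22, 15, 4]])
      peaks = [bucket_discharge(m).max() for m in qpf_ensemble]
      print("discharge peak range:", min(peaks), "-", max(peaks))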

  4. Friendship Network and Dental Brushing Behavior among Middle School Students: An Agent Based Modeling Approach.

    Science.gov (United States)

    Sadeghipour, Maryam; Khoshnevisan, Mohammad Hossein; Jafari, Afshin; Shariatpanahi, Seyed Peyman

    2017-01-01

    Using a standard questionnaire, the level of dental brushing frequency was assessed among 201 adolescent female middle school students in Tehran. The assessment was repeated after 5 months in order to observe the dynamics in dental health behavior. A logistic regression model was used to evaluate the correlation among individuals' dental health behavior in their social network. A significant correlation in dental brushing habits was detected among groups of friends, and this correlation spread further over the network within the 5-month period. Moreover, the average brushing level improved within the 5-month period. Given the significant correlation between a node's in-degree in the social network and brushing level, it is suggested that the observed improvement was partially due to the greater popularity of individuals with better tooth brushing habits. Agent Based Modeling (ABM) was used to demonstrate the dynamics of dental brushing frequency within a sample friendship network. Two models, with static and dynamic assumptions for the network structure, were proposed. The model with dynamic network structure successfully described the dynamics of dental health behavior. Based on this model, on average, every 43 weeks a student changes her brushing habit due to learning from her friends. Finally, three training scenarios were tested with these models in order to evaluate their effectiveness. When training more popular students, considerable improvement in the students' overall brushing frequency was demonstrated by the simulation results.

  5. A social marketing approach to implementing evidence-based practice in VHA QUERI: the TIDES depression collaborative care model.

    Science.gov (United States)

    Luck, Jeff; Hagigi, Fred; Parker, Louise E; Yano, Elizabeth M; Rubenstein, Lisa V; Kirchner, JoAnn E

    2009-09-28

    Collaborative care models for depression in primary care are effective and cost-effective, but difficult to spread to new sites. Translating Initiatives for Depression into Effective Solutions (TIDES) is an initiative to promote evidence-based collaborative care in the U.S. Veterans Health Administration (VHA). Social marketing applies marketing techniques to promote positive behavior change. Described in this paper, TIDES used a social marketing approach to foster national spread of collaborative care models. The approach relied on a sequential model of behavior change and explicit attention to audience segmentation. Segments included VHA national leadership, Veterans Integrated Service Network (VISN) regional leadership, facility managers, frontline providers, and veterans. TIDES communications, materials and messages targeted each segment, guided by an overall marketing plan. Depression collaborative care based on the TIDES model was adopted by VHA as part of the new Primary Care Mental Health Initiative and associated policies. It is currently in use in more than 50 primary care practices across the United States, and continues to spread, suggesting success for its social marketing-based dissemination strategy. Development, execution and evaluation of the TIDES marketing effort shows that social marketing is a promising approach for promoting implementation of evidence-based interventions in integrated healthcare systems.

  6. Stimulating household flood risk mitigation investments through insurance and subsidies: an Agent-Based Modelling approach

    Science.gov (United States)

    Haer, Toon; Botzen, Wouter; de Moel, Hans; Aerts, Jeroen

    2015-04-01

    In the period 1998-2009, floods triggered roughly 52 billion euro in insured economic losses, making floods the most costly natural hazard in Europe. Climate change and socio-economic trends are expected to further aggravate flood losses in many regions. Research shows that flood risk can be significantly reduced if households install protective measures, and that the implementation of such measures can be stimulated through flood insurance schemes and subsidies. However, the effectiveness of such incentives to stimulate implementation of loss-reducing measures greatly depends on the decision process of individuals and is hardly studied. In our study, we developed an Agent-Based Model that integrates flood damage models, insurance mechanisms, subsidies, and household behaviour models to assess the effectiveness of different economic tools in stimulating households to invest in loss-reducing measures. Since the effectiveness depends on the decision-making process of individuals, the study compares different household decision models, ranging from standard economic models, to economic models for decision making under risk, to more complex decision models integrating economic models with risk perceptions, opinion dynamics, and the influence of flood experience. The results show the effectiveness of incentives to stimulate investment in loss-reducing measures for different household behavior types under climate change scenarios. They show how complex decision models can better reproduce observed real-world behaviour compared to traditional economic models. Furthermore, since flood events are included in the simulations, the results provide an analysis of the dynamics in insured and uninsured losses for households, the costs of reducing risk by implementing loss-reducing measures, the capacity of the insurance market, and the cost of government subsidies under different scenarios. The model has been applied to the City of Rotterdam in The Netherlands.
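
    The economic core of one household agent's yearly choice can be sketched as an expected-value comparison (all figures hypothetical; the paper's richer decision models add risk perception, opinion dynamics and flood experience on top of this):

      # Invest in a loss-reducing measure if the expected avoided damage,
      # plus the insurance premium discount, exceeds the subsidized cost.
      p_flood = 0.01            # annual flood probability
      damage = 50_000           # damage without protection (EUR)
      reduction = 0.6           # fraction of damage avoided by the measure
      measure_cost = 1_500      # annualized cost of the measure
      subsidy = 300             # government subsidy per year
      premium_discount = 250    # insurance discount for protected homes

      expected_benefit = p_flood * damage * reduction + premium_discount
      net_cost = measure_cost - subsidy
      invest = expected_benefit > net_cost
      print(expected_benefit, net_cost, invest)   # 550.0 1200 False

    A risk-perception multiplier on p_flood, raised after an experienced flood, is the kind of extension that lets such models reproduce observed behaviour better than the purely rational rule above.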

  7. Availability modeling approach for future circular colliders based on the LHC operation experience

    CERN Document Server

    Niemi, Arto; Gutleber, Johannes; Sollander, Peter; Penttinen, Jussi-Pekka; Virtanen, Seppo Johannes

    2016-01-01

    Reaching the challenging integrated luminosity production goals of a future circular hadron collider (FCC-hh) and the high luminosity LHC (HL-LHC) requires a thorough understanding of today’s most powerful high energy physics research infrastructure, the LHC accelerator complex at CERN. FCC-hh, a collider ring 4 times larger, aims at delivering 10–20 ab−1 of integrated luminosity at 7 times higher collision energy. Since the identification of the key factors that impact availability and cost is far from obvious, a dedicated activity has been launched in the frame of the future circular collider study to develop models to study possible ways to optimize accelerator availability. This paper introduces the FCC reliability and availability study, which takes a fresh new look at assessing and modeling reliability and availability of particle accelerator infrastructures. The paper presents a probabilistic approach for Monte Carlo simulation of the machine operational cycle, schedule and availability for physics. T...

  8. Availability modeling approach for future circular colliders based on the LHC operation experience

    CERN Document Server

    Niemi, Arto; Apollonio, Andrea; Gutleber, Johannes; Sollander, Peter; Penttinen, Jussi-Pekka; Virtanen, Seppo Johannes

    2016-01-01

    Reaching the challenging integrated luminosity production goals of a future circular hadron collider (FCC-hh) and the high luminosity LHC (HL-LHC) requires a thorough understanding of today’s most powerful high energy physics research infrastructure, the LHC accelerator complex at CERN. FCC-hh, a collider ring 4 times larger, aims at delivering 10–20 ab$^{-1}$ of integrated luminosity at 7 times higher collision energy. Since the identification of the key factors that impact availability and cost is far from obvious, a dedicated activity has been launched in the frame of the future circular collider study to develop models to study possible ways to optimize accelerator availability. This paper introduces the FCC reliability and availability study, which takes a fresh new look at assessing and modeling reliability and availability of particle accelerator infrastructures. The paper presents a probabilistic approach for Monte Carlo simulation of the machine operational cycle, schedule and availability for p...

  9. Concrete Failure Modeling Based on Micromechanical Approach Subjected to Static Loading

    Directory of Open Access Journals (Sweden)

    Endah Wahyuni

    2010-02-01

    Full Text Available In this paper, a micromechanical model based on the Mori-Tanaka method and the spring-layer model is developed to study the stress-strain behavior of concrete. The concrete is modeled as a two-phase composite, and the failure of concrete is categorized as mortar failure or interface failure. The research presents a method for estimating the modulus of concrete over its whole loading process. The proposed micromechanical model shows good capability for predicting the entire response of concrete under uniaxial compression. Tensile strain proves suitable as the criterion of concrete failure, and the predicted crack direction agrees with experimental observations.

  10. A Bio-Inspired Model-Based Approach for Context-Aware Post-WIMP Tele-Rehabilitation

    Directory of Open Access Journals (Sweden)

    Víctor López-Jaquero

    2016-10-01

    Full Text Available Tele-rehabilitation is one of the main domains where Information and Communication Technologies (ICT) have been proven useful to move healthcare from care centers to patients’ homes. Moreover, patients, especially those carrying out a physical therapy, cannot use a traditional Window, Icon, Menu, Pointer (WIMP) system; they need to interact in a natural way, that is, there is a need to move from WIMP systems to Post-WIMP ones. Moreover, tele-rehabilitation systems should be developed following the context-aware approach, so that they are able to adapt to the patients’ context to provide them with usable and effective therapies. In this work a model-based approach is presented to assist stakeholders in the development of context-aware Post-WIMP tele-rehabilitation systems. It entails three different models: (i) a task model for designing the rehabilitation tasks; (ii) a context model to facilitate the adaptation of these tasks to the context; and (iii) a bio-inspired presentation model to specify thoroughly how such tasks should be performed by the patients. Our proposal overcomes one of the limitations of the model-based approach for the development of context-aware systems by supporting the specification of non-functional requirements. Finally, a case study is used to illustrate how this proposal can be put into practice to design a real world rehabilitation task.

  11. A Bio-Inspired Model-Based Approach for Context-Aware Post-WIMP Tele-Rehabilitation †

    Science.gov (United States)

    López-Jaquero, Víctor; Rodríguez, Arturo C.; Teruel, Miguel A.; Montero, Francisco; Navarro, Elena; Gonzalez, Pascual

    2016-01-01

    Tele-rehabilitation is one of the main domains where Information and Communication Technologies (ICT) have been proven useful to move healthcare from care centers to patients’ home. Moreover, patients, especially those carrying out a physical therapy, cannot use a traditional Window, Icon, Menu, Pointer (WIMP) system, but they need to interact in a natural way, that is, there is a need to move from WIMP systems to Post-WIMP ones. Moreover, tele-rehabilitation systems should be developed following the context-aware approach, so that they are able to adapt to the patients’ context to provide them with usable and effective therapies. In this work a model-based approach is presented to assist stakeholders in the development of context-aware Post-WIMP tele-rehabilitation systems. It entails three different models: (i) a task model for designing the rehabilitation tasks; (ii) a context model to facilitate the adaptation of these tasks to the context; and (iii) a bio-inspired presentation model to specify thoroughly how such tasks should be performed by the patients. Our proposal overcomes one of the limitations of the model-based approach for the development of context-aware systems supporting the specification of non-functional requirements. Finally, a case study is used to illustrate how this proposal can be put into practice to design a real world rehabilitation task. PMID:27754371

  12. Assessing suitable area for Acacia dealbata Mill. in the Ceira River Basin (Central Portugal) based on maximum entropy modelling approach

    Directory of Open Access Journals (Sweden)

    Jorge Pereira

    2015-12-01

    Full Text Available Biological invasion by exotic organisms has become a key issue, a concern associated with the deep impacts on several domains described as resulting from such processes. A better understanding of the processes, the identification of more susceptible areas, and the definition of preventive or mitigation measures are identified as critical for the purpose of reducing the associated impacts. The use of species distribution modeling might help identify areas that are more susceptible to invasion. This paper presents preliminary results on assessing the susceptibility to invasion by the exotic species Acacia dealbata Mill. in the Ceira river basin. The results are based on the maximum entropy modeling approach, considered one of the correlative modeling techniques with better predictive performance. Models whose validation is based on independent data sets show better performance, as evaluated with the AUC of the ROC accuracy measure.

  13. A simple three-dimensional macroscopic root water uptake model based on the hydraulic architecture approach

    Directory of Open Access Journals (Sweden)

    V. Couvreur

    2012-08-01

    Full Text Available Many hydrological models including root water uptake (RWU) do not consider the dimension of root system hydraulic architecture (HA) because explicitly solving water flow in such a complex system is too time-consuming. However, they might lack process understanding when basing RWU and plant water stress predictions on functions of variables such as the root length density distribution. On the basis of analytical solutions of water flow in a simple HA, we developed an "implicit" model of the root system HA for simulation of the RWU distribution (the sink term of Richards' equation) and plant water stress in three-dimensional soil water flow models. The new model has three macroscopic parameters defined at the soil element scale, or at the plant scale, rather than for each segment of the root system architecture: the standard sink fraction distribution SSF, the root system equivalent conductance Krs and the compensatory RWU conductance Kcomp. It clearly decouples the process of water stress from compensatory RWU, and its structure is appropriate for hydraulic lift simulation. As compared to a model explicitly solving water flow in a realistic maize root system HA, the implicit model proved to be accurate for predicting RWU distribution and plant collar water potential, with one single set of parameters, in dissimilar water dynamics scenarios. For these scenarios, the computing time of the implicit model was a factor of 28 to 214 shorter than that of the explicit one. We also provide a new expression for the effective soil water potential sensed by plants in soils with a heterogeneous water potential distribution, which emerged from the implicit model equations. With the proposed implicit model of the root system HA, new concepts are brought which open avenues towards simple and mechanistic RWU models and water stress functions operational for field-scale water dynamics simulation.
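
    The three macroscopic parameters reduce RWU to a few lines of algebra. The sketch below follows the form in which this model is commonly summarized and should be checked against the paper; values and units are illustrative:

      import numpy as np

      ssf = np.array([0.5, 0.3, 0.2])            # standard sink fractions
      psi = np.array([-300.0, -800.0, -1500.0])  # soil water potential (cm)
      krs = 5e-5                                 # root system conductance
      kcomp = 5e-5                               # compensatory conductance
      psi_leaf = -4000.0                         # collar water potential

      psi_eff = np.sum(ssf * psi)                # effective soil potential
      t_act = krs * (psi_eff - psi_leaf)         # actual transpiration
      sink = ssf * t_act + kcomp * (psi - psi_eff) * ssf  # compensated RWU
      print(psi_eff, t_act, sink, sink.sum())

    Note that the compensatory terms sum to zero, so total uptake stays equal to the actual transpiration while uptake is shifted towards wetter zones.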

  14. A simple three-dimensional macroscopic root water uptake model based on the hydraulic architecture approach

    Directory of Open Access Journals (Sweden)

    V. Couvreur

    2012-04-01

    Full Text Available Many hydrological models including root water uptake (RWU) do not consider the dimension of root system hydraulic architecture (HA) because explicitly solving water flow in such a complex system is too time-consuming. However, they might lack process understanding when basing RWU and plant water stress predictions on functions of variables such as the root length density distribution. On the basis of analytical solutions of water flow in a simple HA, we developed an "implicit" model of the root system HA for simulation of the RWU distribution (the sink term of Richards' equation) and plant water stress in three-dimensional soil water flow models. The new model has three macroscopic parameters defined at the soil element scale or at the plant scale, rather than for each segment of the root architecture: the standard sink distribution SSD, the root system equivalent conductance Krs and the compensatory conductance Kcomp. It clearly decouples the process of water stress from compensatory RWU, and its structure is appropriate for hydraulic lift simulation. As compared to a model explicitly solving water flow in a realistic maize root system HA, the implicit model proved to be accurate for predicting RWU distribution and plant collar water potential, with one single set of parameters, in contrasting water dynamics scenarios. For these scenarios, the computing time of the implicit model was a factor of 28 to 214 shorter than that of the explicit one. We also provide a new expression for the effective soil water potential sensed by plants in soils with a heterogeneous water potential distribution, which emerged from the implicit model equations. With the proposed implicit model of the root system HA, new concepts are brought which open avenues towards simple and process-oriented RWU models and water stress functions operational for field-scale water dynamics simulation.

  15. Approaches for a 3D assessment of pavement evenness data based on 3D vehicle models

    Directory of Open Access Journals (Sweden)

    Andreas Ueckermann

    2015-04-01

    Full Text Available Pavements are 3D in shape. They can be captured in three dimensions by modern road mapping equipment, which allows pavement evenness to be assessed in a more holistic way than current practice, which divides evenness into longitudinal and transverse components. It makes sense to use 3D vehicle models to simulate the effects of 3D surface data on certain functional criteria such as pavement loading, cargo loading and driving comfort. In order to evaluate the three criteria mentioned, two vehicle models have been created: a passenger car used to assess driving comfort and a truck-semitrailer submodel used to assess pavement and cargo loading. The vehicle models and their application to 3D surface data are presented. The results are well in line with existing single-track (planar) models. Their advantage over existing 1D/2D models is demonstrated by the example of driving comfort evaluation. Existing "geometric" limit values for the assessment of longitudinal evenness in terms of the power spectral density could be used to establish corresponding limit values for the dynamic response, i.e. driving comfort, pavement loading and cargo loading. The limit values are well in line with existing limit values based on planar vehicle models. They can be used as guidelines for the proposal of future limit values. The investigations show that the use of 3D vehicle models is an appropriate and meaningful way of assessing 3D evenness data gathered by modern road mapping systems.

  16. A Taxonomy-Based Approach to Shed Light on the Babel of Mathematical Models for Rice Simulation

    Science.gov (United States)

    Confalonieri, Roberto; Bregaglio, Simone; Adam, Myriam; Ruget, Francoise; Li, Tao; Hasegawa, Toshihiro; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Buis, Samuel; Ruane, Alex C.

    2016-01-01

    For most biophysical domains, differences in model structures are seldom quantified. Here, we used a taxonomy-based approach to characterise thirteen rice models. Classification keys and binary attributes for each key were identified, and models were categorised into five clusters using a binary similarity measure and the unweighted pair-group method with arithmetic mean. Principal component analysis was performed on model outputs at four sites. Results indicated that (i) differences in structure often resulted in similar predictions and (ii) similar structures can lead to large differences in model outputs. User subjectivity during calibration may have hidden expected relationships between model structure and behaviour. This explanation, if confirmed, highlights the need for shared protocols to reduce the degrees of freedom during calibration, and to limit, in turn, the risk that user subjectivity influences model performance.
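
    The clustering step maps directly onto standard tooling; a sketch assuming SciPy, a hypothetical binary attribute table, and the Jaccard measure as the binary (dis)similarity:

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      # Rows are models, columns are classification keys (1 = present).
      attributes = np.array([
          [1, 0, 1, 1, 0],
          [1, 0, 1, 0, 0],
          [0, 1, 0, 1, 1],
          [0, 1, 1, 1, 1],
          [1, 1, 0, 0, 0],
      ])

      # UPGMA = average linkage; Jaccard distance compares binary profiles.
      Z = linkage(attributes, method="average", metric="jaccard")
      print(fcluster(Z, t=3, criterion="maxclust"))  # cluster label per model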

  17. A Data-Based Approach for Modeling and Analysis of Vehicle Collision by LPV-ARMAX Models

    Directory of Open Access Journals (Sweden)

    Qiugang Lu

    2013-01-01

    Full Text Available Vehicle crash testing is considered the most direct and common approach to assess vehicle crashworthiness. However, it suffers from high experimental cost and long test times. Therefore, the establishment of a mathematical model of vehicle crashes which can simplify the analysis process is significantly attractive. In this paper, we present the application of the LPV-ARMAX model to simulate car-to-pole collisions with different initial impact velocities. The parameters of the LPV-ARMAX model are assumed to depend on the initial impact velocity. Instead of establishing a set of LTI models for vehicle crashes at various impact velocities, the LPV-ARMAX model is comparatively simple and applicable to predicting the responses of new collision situations different from the ones used for identification. Finally, a comparison between the predicted response and the real test data is conducted, which shows the high fidelity of the LPV-ARMAX model.
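
    A sketch of the LPV idea, assuming AR coefficients that depend affinely on the scheduling variable (initial impact velocity) and synthetic crash pulses; the paper's actual model structure and identification procedure may differ:

      import numpy as np

      def fit_lpv_ar(signals, velocities, order=2):
          """Fit y[t] = sum_i (a_i0 + a_i1*v) * y[t-i] by least squares
          over several crash signals recorded at impact velocities v."""
          rows, targets = [], []
          for y, v in zip(signals, velocities):
              for t in range(order, len(y)):
                  past = y[t - order:t][::-1]       # y[t-1], ..., y[t-order]
                  rows.append(np.concatenate([past, v * past]))
                  targets.append(y[t])
          theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets),
                                      rcond=None)
          return theta

      # Synthetic decaying-oscillation "crash pulses" at two velocities.
      t = np.arange(200)
      make = lambda v: np.exp(-0.02 * t) * np.sin(0.2 * t) * v
      print(fit_lpv_ar([make(8.0), make(15.0)], [8.0, 15.0]))

    Once identified, the same coefficient functions can be evaluated at an unseen velocity, which is what makes the LPV model predictive for new collision situations.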

  18. MODELING OF INVESTMENT STRATEGIES IN STOCKS MARKETS: AN APPROACH FROM MULTI AGENT BASED SIMULATION AND FUZZY LOGIC

    Directory of Open Access Journals (Sweden)

    ALEJANDRO ESCOBAR

    2010-01-01

    Full Text Available This paper presents a simulation model of a complex system, in this case a financial market, using a Multi-Agent Based Simulation approach. The model takes into account micro-level aspects like the Continuous Double Auction mechanism, which is widely used within stock markets, as well as the reasoning of investor agents who participate looking for profits. To model such reasoning, several variables were considered, including general stock information like profitability and volatility, but also agent-specific aspects like their risk tendency. All these variables are incorporated through a fuzzy logic approach, trying to represent faithfully the kind of reasoning that non-expert investors have, including a stochastic component in order to model human factors.

  19. Risk Evaluation Approach and Application Research on Fuzzy-FMECA Method Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Zhengjie Xu

    2013-09-01

    Full Text Available In order to safeguard passengers and reduce maintenance costs, it is necessary to analyze and evaluate the security risk of the Railway Signal System. However, the conventional Fuzzy Analytical Hierarchy Process (FAHP) cannot describe the fuzziness and randomness of a judgment accurately, and once fuzzy sets are described using a membership function, the concept of fuzziness is no longer fuzzy. Thus a Fuzzy-FMECA method based on the cloud model is put forward. The Failure Modes, Effects and Criticality Analysis (FMECA) method is used to identify the risk, and FAHP based on the cloud model is used for determining the membership function in the fuzzy method; finally the group decision can be obtained with the synthetically aggregated cloud model. The method's feasibility and effectiveness are shown in practical examples. Finally, Fuzzy-FMECA based on the cloud model and the conventional FAHP are used to assess the risk respectively; the evaluation results show that the cloud model, introduced into the risk assessment of the Railway Signal System, can realize the transition between precise values and qualitative values by combining fuzziness and randomness, and provides more abundant information than the membership function of the conventional FAHP.
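
    The cloud-model ingredient can be illustrated with the forward normal cloud generator, which represents a judgment by an expectation Ex, an entropy En and a hyper-entropy He (parameter values are hypothetical):

      import numpy as np

      def forward_normal_cloud(ex, en, he, n=1000, seed=0):
          """Forward normal cloud generator: hyper-entropy He randomizes
          the entropy En, so each drop combines fuzziness and randomness."""
          rng = np.random.default_rng(seed)
          en_prime = rng.normal(en, he, n)              # randomized entropy
          x = rng.normal(ex, np.abs(en_prime))          # cloud drops
          mu = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))  # certainty
          return x, mu

      # Expert judgment "risk is about 7" on a 0-10 scale.
      drops, certainty = forward_normal_cloud(ex=7.0, en=0.8, he=0.1)
      print(drops[:3], certainty[:3])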

  20. Coordination control of behavior-based distributed networked robotic systems: a state modeling approach

    Science.gov (United States)

    Kuppan Chetty, R. M.; Singaperumal, M.; Nagarajan, T.

    2007-12-01

    The coordinated motion of groups of autonomous mobile robots for the achievement of a goal has been of high interest over the last decade. Previous research has revealed that one of the essential problems in the area is to plan, navigate and coordinate the motion of robots, avoiding obstacles as well as each other while still achieving the goal. In this paper, a behavior-based approach for the control of a distributed networked robotic system, concentrating on navigation, planning and coordination in unknown complex environments, is addressed. A layered behavior-based control architecture, with the basic behaviors of message passing, obstacle avoidance, safe wandering and pit sensing, has been designed and assigned to the individual robotic systems to form a navigation algorithm. Validation of this guidance algorithm is carried out through simulations using SIMULINK/Stateflow.

  1. Tracking control of nonlinear lumped mechanical continuous-time systems: A model-based iterative learning approach

    Science.gov (United States)

    Smolders, K.; Volckaert, M.; Swevers, J.

    2008-11-01

    This paper presents a nonlinear model-based iterative learning control procedure to achieve accurate tracking control for nonlinear lumped mechanical continuous-time systems. The model structure used in this iterative learning control procedure is new and combines a linear state space model and a nonlinear feature space transformation. An intuitive two-step iterative algorithm to identify the model parameters is presented. It alternates between the estimation of the linear and the nonlinear model part. It is assumed that besides the input and output signals also the full state vector of the system is available for identification. A measurement and signal processing procedure to estimate these signals for lumped mechanical systems is presented. The iterative learning control procedure relies on the calculation of the input that generates a given model output, so-called offline model inversion. A new offline nonlinear model inversion method for continuous-time, nonlinear time-invariant, state space models based on Newton's method is presented and applied to the new model structure. This model inversion method is not restricted to minimum phase models. It requires only calculation of the first order derivatives of the state space model and is applicable to multivariable models. For periodic reference signals the method yields a compact implementation in the frequency domain. Moreover it is shown that a bandwidth can be specified up to which learning is allowed when using this inversion method in the iterative learning control procedure. Experimental results for a nonlinear single-input-single-output system corresponding to a quarter car on a hydraulic test rig are presented. It is shown that the new nonlinear approach outperforms the linear iterative learning control approach which is currently used in the automotive industry on durability test rigs.
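
    Stripped of the model-inversion machinery, the learning update with a bandwidth restriction can be sketched in the frequency domain (numpy; the gain, bandwidth and error signal are illustrative, not the paper's design):

      import numpy as np

      def ilc_update(u, error, learn_gain=0.5, bandwidth=0.2):
          """Generic ILC update u_{k+1} = Q(u_k + L*e_k), with learning
          restricted to a low-frequency band by an ideal Q-filter, echoing
          the bandwidth limit discussed in the paper."""
          U, E = np.fft.rfft(u), np.fft.rfft(error)
          f = np.fft.rfftfreq(len(u))            # cycles per sample
          Q = (f < bandwidth).astype(float)      # ideal low-pass Q-filter
          return np.fft.irfft(Q * (U + learn_gain * E), n=len(u))

      # One learning iteration on a periodic tracking error.
      n = 256
      u = np.zeros(n)
      err = np.sin(2 * np.pi * 3 * np.arange(n) / n)  # stand-in error
      print(np.round(ilc_update(u, err)[:5], 3))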

  2. Modeling anti-HIV compounds: the role of analogue-based approaches.

    Science.gov (United States)

    Srivastava, Hemant Kumar; Bohari, Mohammed H; Sastry, G Narahari

    2012-09-01

    There has been tremendous progress in the development of anti-HIV therapies since the discovery of the HIV virus. Computer aided drug design in general, and analogue-based approaches in particular, have played an important role in the process of HIV drug discovery. Structure-based approaches have also played a vital role in this process. There are a large number of studies reported in the literature where QSAR methodology was employed to study the structural requirements for inhibition of various HIV targets like reverse transcriptase, protease, entry and integrase. The current review focuses on those studies and provides a detailed description of the QSAR methodology, descriptors, statistical significance and important findings. This review categorizes the reported QSAR studies on the basis of chemical scaffolds against a particular target. In the reverse transcriptase category, QSAR studies on HEPT, TIBO, DABO, DAPY, DATA, AASBN, pyridone and DATZD derivatives have been reviewed. Cyclic urea, fullerene, AHPBA and dihydropyrone derivatives were considered in the protease inhibitors category. In addition, QSAR studies on styrylquinoline, carboxylic acid, MBSA and chalcone derivatives were reviewed in the integrase inhibitors category. QSAR studies on entry inhibitors like piperidine, benzyl piperidine, benzyl pyrazole, pyrrole and diazepane urea have also been reviewed.

  3. Optimizing the Clinical Use of Carvedilol in Liver Cirrhosis Using a Physiologically Based Pharmacokinetic Modeling Approach.

    Science.gov (United States)

    Rasool, Muhammad Fawad; Khalil, Feras; Läer, Stephanie

    2017-06-01

    Liver cirrhosis is a complex pathophysiological condition that can affect the pharmacokinetics (PK), and hence the dosing, of administered drugs. Physiologically based pharmacokinetic (PBPK) models are a valuable tool to explore the PK of drugs in cirrhosis patients. The objective of this study was to develop and evaluate a PBPK-carvedilol-cirrhosis model against the available clinical data in liver cirrhosis patients and to recommend model-based drug dosing after exploring the underlying differences in unbound and total (bound and unbound) systemic carvedilol concentrations across the different disease stages. A whole body PBPK model was developed using the population-based PBPK simulator, Simcyp(®). After model development and evaluation in healthy adults, system parameters were modified according to the pathophysiological changes that occur in liver cirrhosis, and predictions were compared to available experimental data from liver cirrhosis Child-Pugh [CP]-C patients. A two-fold error range for the observed/predicted ratios (ratioObs/Pred) of the pharmacokinetic parameters was used for model evaluation. Simulations were then extended to cirrhosis CP-A and CP-B populations, where no experimental data are available, to explore changes in drug disposition in these patients. Finally, unbound and total (bound and unbound) drug exposure were predicted in cirrhotic patients of different disease severity, and the results were compared to those of healthy adults. The developed model successfully described carvedilol PK in healthy and cirrhosis CP-C patients. The model predictions showed that there was an ~13-fold increase in unbound and ~7-fold increase in total (bound and unbound) systemic exposure of carvedilol between healthy and CP-C populations. To have comparable predicted unbound drug exposure in cirrhosis CP-A, CP-B, and CP-C populations as in healthy subjects receiving a dose of 25 mg, reductions of administered doses to 9.375 mg in CP-A, 4.68 mg in CP-B, and 2

  4. The Hunt Opinion Model-An Agent Based Approach to Recurring Fashion Cycles.

    Science.gov (United States)

    Apriasz, Rafał; Krueger, Tyll; Marcjasz, Grzegorz; Sznajd-Weron, Katarzyna

    2016-01-01

    We study a simple agent-based model of recurring fashion cycles in a society that consists of two interacting communities: "snobs" and "followers" (or "opinion hunters", hence the name of the model). Followers conform to all other individuals, whereas snobs conform only to their own group and anticonform to the other. The model allows one to examine the role of the social structure, i.e. the influence of the number of inter-links between the two communities, as well as the role of the stability of links. The latter is accomplished by considering two versions of the same model: quenched (parameterized by the fraction L of fixed inter-links) and annealed (parameterized by the probability p that a given inter-link exists). Using Monte Carlo simulations and analytical treatment (the latter only for the annealed model), we show that there is a critical fraction of inter-links, above which recurring cycles occur. For p ≤ 0.5 we derive a relation between the parameters L and p that allows us to compare both models, and we show that the critical value of inter-connections, p*, is the same for both versions of the model (annealed and quenched), but the period of a fashion cycle is shorter for the quenched model. Near the critical point, the cycles are irregular and a change of fashion is difficult to predict. For the annealed model we also provide a deeper theoretical analysis. We conjecture on topological grounds that the so-called saddle-node heteroclinic bifurcation appears at p*. For p ≥ 0.5 we show analytically the existence of a second critical value of p, for which the system undergoes Hopf's bifurcation.

  5. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2017-04-01

    The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. The approach is based on the assumption that training on similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. In this approach, the training dataset was selected by matching the present-day condition to the archived dataset: the days with the most similar conditions were identified and used for training the model. The coefficients thus generated were used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. European Centre for Medium-Range Weather Forecasts (ECMWF), United Kingdom Meteorological Office (UKMO), National Centre for Environment Prediction (NCEP) and China Meteorological Administration (CMA), have been used for developing the SMME forecasts. The forecasts of 1-5, 6-10 and 11-15 days were generated using the newly developed approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all the pentads, viz. 1-5, 6-10 and 11-15 days.
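
    A compact sketch of the similarity-selected training idea (numpy; the similarity criterion, neighbourhood size and synthetic archive are placeholders for the study's actual GCM dataset):

      import numpy as np

      def smme_forecast(x_today, archive_X, archive_y, k=20):
          """Pick the k archived days most similar to today's multi-model
          state, fit ensemble weights on those days only, and apply them
          to today's member forecasts."""
          d = np.linalg.norm(archive_X - x_today, axis=1)  # similarity
          idx = np.argsort(d)[:k]                          # closest days
          W, *_ = np.linalg.lstsq(archive_X[idx], archive_y[idx], rcond=None)
          return x_today @ W

      rng = np.random.default_rng(0)
      archive_X = rng.random((500, 4))   # 4 GCM rainfall forecasts per day
      archive_y = archive_X @ np.array([0.4, 0.3, 0.2, 0.1]) \
                  + 0.05 * rng.random(500)
      print(smme_forecast(archive_X[0], archive_X, archive_y))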

  6. Model-Based Control of an Aircraft Engine using an Optimal Tuner Approach

    Science.gov (United States)

    Connolly, Joseph W.; Chicatelli, Amy; Garg, Sanjay

    2012-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology applied to an aircraft turbofan engine. Here, a linear model extracted from the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) at a cruise operating point serves as the engine and the on-board model. The on-board model is updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. MBEC provides the ability for a tighter control bound of thrust over the entire life cycle of the engine that is not achievable using traditional control feedback, which uses engine pressure ratio or fan speed. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC tighter thrust control. In addition, investigations of using the MBEC to provide a surge limit for the controller limit logic are presented that could provide benefits over a simple acceleration schedule that is currently used in engine control architectures.
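
    The tuner update at the heart of such schemes is a standard Kalman filter step; a generic sketch with illustrative matrices (the OTKF additionally optimizes which tuner combinations to estimate, which is not shown here):

      import numpy as np

      def kalman_step(x, P, z, A, H, Q, R):
          """One linear Kalman step: x holds health/tuner parameters,
          H maps tuners to the sensed engine outputs z."""
          x_pred = A @ x                       # propagate tuner estimate
          P_pred = A @ P @ A.T + Q
          S = H @ P_pred @ H.T + R             # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      x, P = np.zeros(2), np.eye(2)            # two illustrative tuners
      A, H = np.eye(2), np.array([[1.0, 0.5], [0.0, 1.0]])
      Q, R = 1e-4 * np.eye(2), 1e-2 * np.eye(2)
      x, P = kalman_step(x, P, z=np.array([0.2, -0.1]), A=A, H=H, Q=Q, R=R)
      print(x)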

  7. An Entropy-Based Approach to Path Analysis of Structural Generalized Linear Models: A Basic Idea

    Directory of Open Access Journals (Sweden)

    Nobuoki Eshima

    2015-07-01

    Full Text Available A path analysis method for causal systems based on generalized linear models is proposed by using entropy. A practical example is introduced, and a brief explanation of the entropy coefficient of determination is given. Direct and indirect effects of explanatory variables are discussed as log odds ratios, i.e., relative information, and a method for summarizing the effects is proposed. The example dataset is re-analyzed by using the method.

  8. Recognition of Emotions in Mexican Spanish Speech: An Approach Based on Acoustic Modelling of Emotion-Specific Vowels

    Directory of Open Access Journals (Sweden)

    Santiago-Omar Caballero-Morales

    2013-01-01

    Full Text Available An approach for the recognition of emotions in speech is presented. The target language is Mexican Spanish, and for this purpose a speech database was created. The approach consists in the phoneme acoustic modelling of emotion-specific vowels. For this, a standard phoneme-based Automatic Speech Recognition (ASR system was built with Hidden Markov Models (HMMs, where different phoneme HMMs were built for the consonants and emotion-specific vowels associated with four emotional states (anger, happiness, neutral, sadness. Then, estimation of the emotional state from a spoken sentence is performed by counting the number of emotion-specific vowels found in the ASR’s output for the sentence. With this approach, accuracy of 87–100% was achieved for the recognition of emotional state of Mexican Spanish speech.
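    The decision rule itself is simple once the ASR output is available. The sketch below assumes a hypothetical phoneme labelling in which each recognized vowel carries an emotion tag; the sentence-level emotion is then a majority vote over the emotion-specific vowels, as the abstract describes.

```python
from collections import Counter

def estimate_emotion(asr_phonemes):
    """Count emotion-specific vowels in the ASR output (e.g. 'a_anger')
    and return the most frequent emotion; labels are hypothetical."""
    votes = Counter(p.split("_")[1] for p in asr_phonemes if "_" in p)
    return votes.most_common(1)[0][0] if votes else "neutral"

print(estimate_emotion(["k", "a_anger", "s", "a_anger", "e_neutral"]))  # anger
```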

  9. Recognition of emotions in Mexican Spanish speech: an approach based on acoustic modelling of emotion-specific vowels.

    Science.gov (United States)

    Caballero-Morales, Santiago-Omar

    2013-01-01

    An approach for the recognition of emotions in speech is presented. The target language is Mexican Spanish, and for this purpose a speech database was created. The approach consists in the phoneme acoustic modelling of emotion-specific vowels. For this, a standard phoneme-based Automatic Speech Recognition (ASR) system was built with Hidden Markov Models (HMMs), where different phoneme HMMs were built for the consonants and emotion-specific vowels associated with four emotional states (anger, happiness, neutral, sadness). Then, estimation of the emotional state from a spoken sentence is performed by counting the number of emotion-specific vowels found in the ASR's output for the sentence. With this approach, accuracy of 87-100% was achieved for the recognition of emotional state of Mexican Spanish speech.

  10. Skin Detection Based on Color Model and Low Level Features Combined with Explicit Region and Parametric Approaches

    Directory of Open Access Journals (Sweden)

    HARPREET KAUR SAINI

    2014-10-01

    Full Text Available Skin detection is an active research area in computer vision with applications in face detection, eye detection, and related tasks, which in turn support systems such as driver-fatigue monitoring and surveillance. In computer vision applications, the colour model and the representation of the human image within it form one of the major modules for detecting skin pixels. The mainstream technique operates on individual pixels, selecting those that constitute the skin regions of the whole image. In this work we present a novel technique for skin colour detection that combines an explicit region-based approach with a parametric approach, giving better efficiency and performance in detecting skin in human images. Colour models and an image quantization technique are used to extract image regions and to represent the image in a particular colour model such as RGB or HSV; the parametric approach is then applied, using selected low-level skin features to separate skin from non-skin pixels. In the first step, our technique uses the state-of-the-art non-parametric approach, which we call the template-based or explicitly-defined-skin-regions technique. Low-level features of human skin, such as edges and corners, are then extracted, which constitutes the parametric method. The experimental results show the improvement in skin-pixel detection rate achieved by this novel approach.
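    A minimal sketch of the explicitly-defined-skin-regions step, using widely published RGB threshold rules; the thresholds are illustrative and not necessarily those used in the paper.

```python
import numpy as np

def skin_mask(rgb):
    """Explicit-region skin rule on an (H, W, 3) uint8 RGB image.
    Thresholds follow widely used published rules and are illustrative."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = (np.maximum(np.maximum(r, g), b)
              - np.minimum(np.minimum(r, g), b))
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))

img = np.array([[[180, 120, 90], [30, 30, 30]]], dtype=np.uint8)
print(skin_mask(img))  # [[ True False]]
```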

  11. New approach to assess bioequivalence parameters using generalized gamma mixed-effect model (model-based asymptotic bioequivalence test).

    Science.gov (United States)

    Chen, Yuh-Ing; Huang, Chi-Shen

    2014-02-28

    In a pharmacokinetic (PK) study under a 2×2 crossover design involving both test and reference drugs, we propose a mixed-effects model for the drug concentration-time profiles obtained from subjects who receive different drugs at different periods. In the proposed model, the drug concentrations repeatedly measured from the same subject at different time points follow a multivariate generalized gamma distribution, and the drug concentration-time profiles are described by a compartmental PK model with between-subject and within-subject variations. We then suggest a bioequivalence test based on the estimated bioavailability parameters in the proposed mixed-effects model. The results of a Monte Carlo study further show that the proposed model-based bioequivalence test is not only better at maintaining its level but also more powerful for detecting the bioequivalence of the two drugs than the conventional bioequivalence test based on a non-compartmental analysis or the one based on a mixed-effects model with a normal error variable. The application of the proposed model and test is illustrated using data sets from two PK studies.

  12. Simulating Transport and Land Use Interdependencies for Strategic Urban Planning—An Agent Based Modelling Approach

    Directory of Open Access Journals (Sweden)

    Nam Huynh

    2015-10-01

    Full Text Available Agent based modelling has been widely accepted as a promising tool for urban planning purposes thanks to its capability to provide sophisticated insights into the social behaviours and the interdependencies that characterise urban systems. In this paper, we report on an agent based model, called TransMob, which explicitly simulates the mutual dynamics between demographic evolution, transport demands, housing needs and the eventual change in the average satisfaction of the residents of an urban area. The ability to reproduce such dynamics is a unique feature not found in many similar agent based models in the literature. TransMob comprises six major modules: synthetic population, perceived liveability, travel diary assignment, traffic micro-simulator, residential location choice, and travel mode choice. TransMob is used to simulate the dynamics of a metropolitan area in the south-east of Sydney, Australia, in 2006 and 2011, with demographic evolution. The results compare favourably against survey data for the area in 2011, validating the capability of TransMob to reproduce the observed complexity of an urban area. We also report on the application of TransMob to simulate various hypothetical scenarios of urban planning policies. We conclude with discussions of the current limitations of TransMob, which serve as suggestions for future developments.

  13. A robotics-based approach to modeling of choice reaching experiments on visual attention

    Directory of Open Access Journals (Sweden)

    Soeren eStrauss

    2012-04-01

    Full Text Available The paper presents a robotics-based model for choice reaching experiments on visual attention. In these experiments participants were asked to make rapid reach movements towards a target in an odd-colour search task, i.e. reaching for a green square among red squares and vice versa (e.g. Song & Nakayama, 2008). Interestingly these studies found that in a high number of trials movements were initially directed towards a distractor and only later were adjusted towards the target. These curved trajectories occurred particularly frequently when the target in the directly preceding trial had a different colour (priming effect). Our model is embedded in a closed-loop control of a LEGO robot arm aiming to mimic these reach movements. The model is based on our earlier work which suggests that target selection in visual search is implemented through parallel interactions between competitive and cooperative processes in the brain (Heinke & Backhaus, 2011; Heinke & Humphreys, 2003). To link this model with the control of the robot arm we implemented a topological representation of movement parameters following the dynamic field theory (Erlhagen & Schoener, 2002). The robot arm is able to mimic the results of the odd-colour search task including the priming effect and also generates human-like trajectories with a bell-shaped velocity profile. Theoretical implications and predictions are discussed in the paper.

  14. A Robotics-Based Approach to Modeling of Choice Reaching Experiments on Visual Attention

    Science.gov (United States)

    Strauss, Soeren; Heinke, Dietmar

    2012-01-01

    The paper presents a robotics-based model for choice reaching experiments on visual attention. In these experiments participants were asked to make rapid reach movements toward a target in an odd-color search task, i.e., reaching for a green square among red squares and vice versa (e.g., Song and Nakayama, 2008). Interestingly these studies found that in a high number of trials movements were initially directed toward a distractor and only later were adjusted toward the target. These “curved” trajectories occurred particularly frequently when the target in the directly preceding trial had a different color (priming effect). Our model is embedded in a closed-loop control of a LEGO robot arm aiming to mimic these reach movements. The model is based on our earlier work which suggests that target selection in visual search is implemented through parallel interactions between competitive and cooperative processes in the brain (Heinke and Humphreys, 2003; Heinke and Backhaus, 2011). To link this model with the control of the robot arm we implemented a topological representation of movement parameters following the dynamic field theory (Erlhagen and Schoener, 2002). The robot arm is able to mimic the results of the odd-color search task including the priming effect and also generates human-like trajectories with a bell-shaped velocity profile. Theoretical implications and predictions are discussed in the paper. PMID:22529827

  15. U-10Mo/Zr Interface Modeling using a Microstructure-Based FEM Approach

    Energy Technology Data Exchange (ETDEWEB)

    Soulami, Ayoub [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Zhijie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Joshi, Vineet V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Burkes, Douglas [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lavender, Curt A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McGarrah, Eric J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-04-25

    U-10Mo at low enrichment (LEU) has been identified as the most promising alternative to the highly enriched uranium (HEU) currently used in the United States' fleet of high performance research reactors (USHPRRs). The nominal configuration of the new LEU U-10Mo plate fuel comprises a U-10Mo fuel foil enriched to slightly less than 20% U-235 (0.08” to 0.02” thick), a thin Zr interlayer/diffusion barrier (25 μm thick) and a relatively thick outer can of 6061 aluminum. Currently the Zr interlayer is clad by hot roll bonding. Previous studies and observations revealed a thinning of the zirconium (Zr) layer during this fuel fabrication process, which is not desirable from the fuel performance perspective. Coarse UMo grains, dendritic structures, Mo concentration segregation, carbides, and porosity are present in the as-cast material and can lead to a nonuniform UMo/Zr interface. The purpose of the current work is to investigate the effects of these microstructural parameters on the Zr coating variation. A microstructure-based finite-element method model was used in this work, and a study on the effect of homogenization on the interface between U-10Mo and Zr was conducted. The model uses actual backscattered electron-scanning electron microscopy microstructures, Mo concentrations, and mechanical properties to predict the behavior of a representative volume element under compressive loading during the rolling process. The model successfully predicted the experimentally observed thinning of the Zr layer in the as-cast material. The model also takes results from a homogenization model as an input, and a study on the effect of different levels of homogenization on the interface indicated that homogenization helps decrease this thinning. This model can be considered a predictive tool representing a first step for model integration and an input into a larger fuel fabrication performance model.

  16. An Integrated Model for Simulating Regional Water Resources Based on Total Evapotranspiration Control Approach

    Directory of Open Access Journals (Sweden)

    Jianhua Wang

    2014-01-01

    Full Text Available Total evapotranspiration and water consumption (ET) control is considered an efficient method for water management. In this study, we developed a water allocation and simulation (WAS) model, which can simulate the water cycle and output different ET values for natural and artificial water use, such as crop evapotranspiration, grass evapotranspiration, forest evapotranspiration, living water consumption, and industry water consumption. In the calibration and validation periods, a “piece-by-piece” approach was used to evaluate the model from runoff to ET data, including remote sensing ET data and regional measured ET data, which differ from the data of the traditional hydrology method. We applied the model to Tianjin City, China. The Nash-Sutcliffe efficiency (Ens) of the runoff simulation was 0.82, and its regression coefficient R2 was 0.92. The Nash-Sutcliffe efficiency (Ens) of the regional total ET simulation was 0.93, and its regression coefficient R2 was 0.98. These results demonstrate that the ET of irrigated land is the dominant component, accounting for 53% of the total ET, and is therefore a priority in ET control for water management.
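    For reference, the Nash-Sutcliffe efficiency used to evaluate the runoff and ET simulations is the standard goodness-of-fit measure sketched below; the sample values are hypothetical.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    Ens = 1 is a perfect fit; Ens <= 0 means the model is no better than
    predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(nash_sutcliffe([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # 0.97
```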

  17. Approach to Modeling and Virtual-reality-based Simulation for Plant Canopy Lighting

    Institute of Scientific and Technical Information of China (English)

    ZHAO Kai; SONG Fengbin; WANG Haopeng

    2008-01-01

    Over the past 20 years, significant progress has been made in virtual plant modeling, corresponding to the rapid advances in information technology. Virtual plant research has broad applications in agronomy, forestry, ecology and remote sensing. As many biological processes are driven by light, estimating the light absorbed by each organ is key for virtual plants. This paper first presents the radiance equation suitable for calculating the sun and sky light intercepted by plant organs, based on the principles of the interaction between light and the plant canopy; second, it analyzes the process principles of plant canopy primary lighting based on ray casting and projection; third, it describes the multiple scattering of plant lighting based on the Monte Carlo ray tracing method and on the radiosity method; and finally it confirms the research with 3D visualization based on the Virtual Reality Modeling Language (VRML). The research is primary work in digital agriculture and is important for monitoring and estimating corn growth in Northeast China.

  18. A novel model-based approach for dose determination of glycopyrronium bromide in COPD

    Directory of Open Access Journals (Sweden)

    Arievich Helen

    2012-12-01

    Full Text Available Abstract Background Glycopyrronium bromide (NVA237) is an inhaled long-acting muscarinic antagonist in development for treatment of COPD. This study compared the efficacy and safety of once-daily (OD) and twice-daily (BID) glycopyrronium bromide regimens, using a novel model-based approach, in patients with moderate-to-severe COPD. Methods Double-blind, randomized, dose-finding trial with an eight-treatment, two-period, balanced incomplete block design. Patients (smoking history ≥10 pack-years, post-bronchodilator FEV1 ≥30% and <80% of predicted, FEV1/FVC <0.70) received glycopyrronium bromide across a range of OD and BID doses; the primary endpoint was trough FEV1 at Day 28. Results 385 patients (mean age 61.2 years; mean post-bronchodilator FEV1 53% predicted) were randomized; 88.6% completed. All OD and BID dosing regimens produced dose-dependent bronchodilation; at Day 28, increases in mean trough FEV1 versus placebo were statistically significant for all regimens, ranging from 51 mL (glycopyrronium bromide 12.5 μg OD) to 160 mL (glycopyrronium bromide 50 μg BID). Pharmacodynamic steady-state was reached by Day 7. There was a small separation (≤37 mL) between BID and OD dose-response curves for mean trough FEV1 at steady-state in favour of BID dosing. Over 24 hours, separation between OD and BID regimens was even smaller (FEV1 AUC0-24h maximum difference for equivalent daily dose regimens: 8 mL). Dose-response results for FEV1 at 12 hours, FEV1 AUC0-12h and FEV1 AUC0-4h at steady-state showed OD regimens provided greater improvement over placebo than BID regimens for total daily doses of 25 μg, 50 μg and 100 μg, while the reverse was true for OD versus BID regimens from 12-24 hours. The 12.5 μg BID dose produced a marginally higher improvement in trough FEV1 versus placebo than 50 μg OD; however, the response at 12 hours over placebo was suboptimal (74 mL). Glycopyrronium bromide was safe and well tolerated at all doses. Conclusions Glycopyrronium bromide 50 μg OD provides significant bronchodilation over a 24 hour period.

  19. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelisation of rainfall-runoff model forecasts

    Science.gov (United States)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Électricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of the uncertainties of meteorological and hydrological forecasts and improve the human expertise applied to hydrological forecasts, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where considerable human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve the reliability of ensemble forecasts (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (the empirical approach) is based on a statistical modelling of the empirical error of perfect forecasts, using streamflow sub-samples by quantile class and lead-time. The second method (the dynamical approach) is based on streamflow sub-samples by quantile class, streamflow variation and lead-time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure good post-processing of the hydrological ensemble, allowing a clear improvement in the reliability, skill and sharpness of the ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological

  20. Modelling of human exposure to air pollution in the urban environment: a GPS-based approach.

    Science.gov (United States)

    Dias, Daniela; Tchepel, Oxana

    2014-03-01

    The main objective of this work was the development of a new modelling tool for quantification of human exposure to traffic-related air pollution within distinct microenvironments, using a novel approach to trajectory analysis of individuals. For this purpose, mobile phones with Global Positioning System technology were used to collect the daily trajectories of individuals at high temporal resolution, and a trajectory data mining and geo-spatial analysis algorithm was developed and implemented within a Geographical Information System to obtain time-activity patterns. These data were combined with air pollutant concentrations estimated for several microenvironments. In addition to outdoor concentrations, pollutant concentrations in distinct indoor microenvironments are characterised using a probabilistic approach. An example of the application for PM2.5 is presented and discussed. The results obtained for daily average individual exposure correspond to a mean value of 10.6 μg m-3, with 5th-95th percentiles of 6.0-16.4 μg m-3. Analysis of the results shows that the use of point air quality measurements for exposure assessment cannot explain the intra- and inter-variability of individuals' exposure levels. The methodology developed and implemented in this work provides a time-sequence of exposure events, making it possible to associate exposure with individual activities, and delivers the main statistics on an individual's air pollution exposure with high spatio-temporal resolution.

  1. Multiple Perspective Approach for the Development of Information Systems Based on Advanced Mathematical Models

    DEFF Research Database (Denmark)

    Carugati, Andrea

    This dissertation presents the results of a three-year long case study of an information systems development project where a scheduling and control system was developed for a manufacturing company. The project goal was to test the feasibility of a new technology called advanced mathematical... organizations that are both distributed and loosely coupled. Given the current trends towards telecommuting and international mergers, the development project presented a setting for research that was addressing both a theoretical hole and also pressing practical needs. In order to achieve this goal I had... with a relativist approach. Arriving at the design of an ISD methodology required the combination of previous theoretical results with the observations from the case study. The case study showed some of the key elements to be integrated in the methodology. Firstly, plans and models are subject of a high degree...

  2. Multiscale approach for the construction of equilibrated all-atom models of a poly(ethylene glycol)-based hydrogel.

    Science.gov (United States)

    Li, Xianfeng; Murthy, N Sanjeeva; Becker, Matthew L; Latour, Robert A

    2016-06-24

    A multiscale modeling approach is presented for the efficient construction of an equilibrated all-atom model of a cross-linked poly(ethylene glycol) (PEG)-based hydrogel using the all-atom polymer consistent force field (PCFF). The final equilibrated all-atom model was built with a systematic simulation toolset consisting of three consecutive parts: (1) building a global cross-linked PEG-chain network at experimentally determined cross-link density using an on-lattice Monte Carlo method based on the bond fluctuation model, (2) recovering the local molecular structure of the network by transitioning from the lattice model to an off-lattice coarse-grained (CG) model parameterized from PCFF, followed by equilibration using high performance molecular dynamics methods, and (3) recovering the atomistic structure of the network by reverse mapping from the equilibrated CG structure, hydrating the structure with explicitly represented water, followed by final equilibration using PCFF parameterization. The developed three-stage modeling approach has application to a wide range of other complex macromolecular hydrogel systems, including the integration of peptide, protein, and/or drug molecules as side-chains within the hydrogel network for the incorporation of bioactivity for tissue engineering, regenerative medicine, and drug delivery applications.

  3. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
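    The conventional target-point update that this work builds on is the least-Mahalanobis-distance criterion; a minimal sketch follows, with the candidate gray-level profiles assumed to come from the (here fan-shaped) search region around each model point.

```python
import numpy as np

def best_candidate(candidates, mean_profile, cov_inv):
    """Pick the boundary candidate with the least Mahalanobis distance
    to the trained gray-level profile model (the conventional ASM
    criterion; the paper extends the search to a fan-shaped region
    and adds a minimal-path constraint on top)."""
    d = candidates - mean_profile            # (n_candidates, profile_len)
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)
    return int(np.argmin(scores))

cands = np.array([[1.0, 0.2], [0.1, 0.0], [0.5, 0.5]])  # hypothetical profiles
print(best_candidate(cands, np.zeros(2), np.eye(2)))    # 1
```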

  4. A minimal path searching approach for active shape model (ASM)-based segmentation of the lung

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-02-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 +/- 0.33 pixels, while the error is 1.99 +/- 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  5. A knowledge based approach to matching human neurodegenerative disease and animal models

    Directory of Open Access Journals (Sweden)

    Maryann E Martone

    2013-05-01

    Full Text Available Neurodegenerative diseases present a wide and complex range of biological and clinical features. Animal models are key to translational research, yet typically only exhibit a subset of disease features rather than being precise replicas of the disease. Consequently, connecting animal to human conditions using direct data-mining strategies has proven challenging, particularly for diseases of the nervous system, with its complicated anatomy and physiology. To address this challenge we have explored the use of ontologies to create formal descriptions of structural phenotypes across scales that are machine processable and amenable to logical inference. As proof of concept, we built a Neurodegenerative Disease Phenotype Ontology and an associated Phenotype Knowledge Base using an entity-quality model that incorporates descriptions for both human disease phenotypes and those of animal models. Entities are drawn from community ontologies made available through the Neuroscience Information Framework and qualities are drawn from the Phenotype and Trait Ontology. We generated ~1200 structured phenotype statements describing structural alterations at the subcellular, cellular and gross anatomical levels observed in 11 human neurodegenerative conditions and associated animal models. PhenoSim, an open source tool for comparing phenotypes, was used to issue a series of competency questions to compare individual phenotypes among organisms and to determine which animal models recapitulate phenotypic aspects of the human disease in aggregate. Overall, the system was able to use relationships within the ontology to bridge phenotypes across scales, returning non-trivial matches based on common subsumers that were meaningful to a neuroscientist with an advanced knowledge of neuroanatomy. The system can be used both to compare individual phenotypes and also phenotypes in aggregate. This proof of concept suggests that expressing complex phenotypes using formal

  6. A knowledge based approach to matching human neurodegenerative disease and animal models

    Science.gov (United States)

    Maynard, Sarah M.; Mungall, Christopher J.; Lewis, Suzanna E.; Imam, Fahim T.; Martone, Maryann E.

    2013-01-01

    Neurodegenerative diseases present a wide and complex range of biological and clinical features. Animal models are key to translational research, yet typically only exhibit a subset of disease features rather than being precise replicas of the disease. Consequently, connecting animal to human conditions using direct data-mining strategies has proven challenging, particularly for diseases of the nervous system, with its complicated anatomy and physiology. To address this challenge we have explored the use of ontologies to create formal descriptions of structural phenotypes across scales that are machine processable and amenable to logical inference. As proof of concept, we built a Neurodegenerative Disease Phenotype Ontology (NDPO) and an associated Phenotype Knowledge Base (PKB) using an entity-quality model that incorporates descriptions for both human disease phenotypes and those of animal models. Entities are drawn from community ontologies made available through the Neuroscience Information Framework (NIF) and qualities are drawn from the Phenotype and Trait Ontology (PATO). We generated ~1200 structured phenotype statements describing structural alterations at the subcellular, cellular and gross anatomical levels observed in 11 human neurodegenerative conditions and associated animal models. PhenoSim, an open source tool for comparing phenotypes, was used to issue a series of competency questions to compare individual phenotypes among organisms and to determine which animal models recapitulate phenotypic aspects of the human disease in aggregate. Overall, the system was able to use relationships within the ontology to bridge phenotypes across scales, returning non-trivial matches based on common subsumers that were meaningful to a neuroscientist with an advanced knowledge of neuroanatomy. The system can be used both to compare individual phenotypes and also phenotypes in aggregate. This proof of concept suggests that expressing complex phenotypes using formal

  7. A physically based approach to model LAI from MODIS 250 m data in a tropical region

    Science.gov (United States)

    Propastin, Pavel; Erasmi, Stefan

    2010-02-01

    A time series of leaf area index (LAI) has been developed based on 16-day normalized difference vegetation index (NDVI) data from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m resolution (MOD250_LAI). The MOD250_LAI product uses a physical radiative transfer model which establishes a relationship between LAI, fraction of vegetation cover (FVC) and given patterns of surface reflectance, view-illumination conditions and optical properties of vegetation. In situ measurements of LAI and FVC made at 166 plots using hemispherical photography served for calibration of model parameters and validation of modelling results. Optical properties of the vegetation cover, summarized by the light extinction coefficient, were computed at the local (pixel) level based on empirical models relating ground-measured tree crown architecture at 85 sampling plots to spectral values in the Landsat ETM+ bands. The influence of view-illumination conditions on the optical properties of the canopy was simulated by a view angle geometry model incorporating the solar zenith angle and the sensor viewing angle. The results revealed high compatibility of the produced MOD250_LAI data set with ground truth information and with the 30 m resolution Landsat ETM+ LAI estimated using a similar algorithm. The produced MOD250_LAI was also compared with the global MODIS 1000 m LAI product (MOD15A2 LAI). Results show good consistency in the spatial distribution and temporal dynamics between the two LAI products. However, the results also showed that the annual LAI amplitude of the MOD15A2 product is significantly higher than that of the MOD250_LAI. This higher amplitude is caused by a considerable underestimation of the tropical rainforest LAI by the MOD15A2 during the seasonal phases of low leaf production.
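    The physical link between FVC, the extinction coefficient and LAI can be illustrated with the Beer-Lambert gap-fraction relation; the sketch below is a simplified inversion and omits the view-illumination modelling described in the paper.

```python
import numpy as np

def lai_from_fvc(fvc, k):
    """Invert the Beer-Lambert gap-fraction relation FVC = 1 - exp(-k*LAI)
    for LAI, given a per-pixel extinction coefficient k (illustrative of
    the physical link used; the paper's radiative transfer model is more
    elaborate)."""
    fvc = np.clip(np.asarray(fvc, float), 0.0, 0.999)
    return -np.log(1.0 - fvc) / k

print(lai_from_fvc(0.8, 0.5))  # ~3.22
```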

  8. Whole vertebral bone segmentation method with a statistical intensity-shape model based approach

    Science.gov (United States)

    Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer

    2011-03-01

    An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as pre-processing to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777 and 0.939 mm for the cervical, upper thoracic, lower thoracic and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed fair performance for cervical, thoracic and lumbar vertebrae.

  9. Modeling Turkish M2 broad money demand: a portfolio-based approach using implications for monetary policy

    OpenAIRE

    Levent, Korap

    2008-01-01

    In this paper, a money demand model for the M2 broad monetary aggregate of the Turkish economy is examined in a portfolio-based approach considering various alternative cost measures of holding money. Employing multivariate co-integration methodology for variables integrated of the same order, our estimation results indicate that there exists a theoretically plausible co-integrating vector in the long-run money demand variable space. The main alternative costs to the demand for money are found as the depr...

  10. Design and Simulation of Mathematical Models - A DSP and Digital Communication Based Approach

    Directory of Open Access Journals (Sweden)

    Miss. Shruti R. Tambakhe

    2014-04-01

    Full Text Available We present a methodology for implementing DSP-based and communication applications on field-programmable gate arrays (FPGAs) using Xilinx System Generator (XSG) for Matlab. A DPSK system and an FFT are simulated in the Matlab/Simulink environment with System Generator, a Xilinx tool for FPGA design, and are implemented on Spartan-3E Starter Kit boards. We propose the simulation of mathematical models on mixed HDL-Simulink using Xilinx System Generator. Many DSP-based and communication applications require mathematical modelling for ease of understanding and analysis; because of their complexity, pure HDL is unable to simulate such models, and doing so tends to be costly and time consuming. FFT algorithms that can compute the Fourier transform of varied signals in real time were implemented on FPGAs (Spartan-3E) for frequency analysis of signals. Given the large demand for high dynamic range in many applications, floating-point implementation is used, as fixed-point implementation becomes increasingly expensive. In the DPSK model, the first board behaves as a modulator and the second as a demodulator; the modulator and demodulator algorithms were implemented on the FPGA in the VHDL language using Xilinx ISE.

  11. Drought prediction using a wavelet based approach to model the temporal consequences of different types of droughts

    Science.gov (United States)

    Maity, Rajib; Suman, Mayank; Verma, Nitesh Kumar

    2016-08-01

    Droughts are expected to propagate from one type to another - meteorological to agricultural to hydrological to socio-economic. However, they do not possess a universal, straightforward temporal dependence. Rather, assessing one type of drought (the successor) from another (the predecessor) is a complex problem depending on the basin's physiographic and climatic characteristics, such as spatial extent, topography, land use, land cover and climate regime. In this paper, a wavelet decomposition based approach is proposed to model the temporal dependence between different types of droughts. The underlying idea is to separate the rapidly and slowly moving components of drought indices. It is shown that the temporal dependence of the successor (say, hydrological drought) on the predecessor (say, meteorological drought) can be better captured at the level of its constituting components. Such components are obtained through wavelet decomposition, retaining the temporal correspondence. Thus, in the proposed approach, the successor drought index is predicted using the decomposed components of the predecessor drought index. Several alternative models are investigated to arrive at the best possible model structure for predicting different types of drought. The proposed approach is found to be very useful for foreseeing agricultural or hydrological droughts from the meteorological drought status, offering scope for better management of drought consequences. The mathematical framework of the proposed approach is general in nature and can be applied to different basins; the limitation is the requirement of region/catchment specific calibration of some parameters before using the proposed model, though this is neither very difficult nor uncommon.
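    A minimal sketch of the decomposition step, using the PyWavelets library on a hypothetical monthly drought-index series; the wavelet family, decomposition level and downstream regression in the paper may differ.

```python
import numpy as np
import pywt  # PyWavelets

# Hypothetical monthly predecessor drought-index series (360 months);
# decompose into one slow (approximation) and several fast (detail)
# components, then use the components as predictors for the successor
# drought index rather than the raw series.
predecessor = np.random.default_rng(1).standard_normal(360)
coeffs = pywt.wavedec(predecessor, "db4", level=3)   # [cA3, cD3, cD2, cD1]

# Reconstruct each component at full length, preserving time alignment
components = []
for i in range(len(coeffs)):
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(keep, "db4")[: len(predecessor)])

print(len(components))  # 4 predictors: one slow trend + three detail bands
```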

  12. LIDAR-based urban metabolism approach to neighbourhood scale energy and carbon emissions modelling

    Energy Technology Data Exchange (ETDEWEB)

    Christen, A. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Geography; Coops, N. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Forest Sciences; Canada Research Chairs, Ottawa, ON (Canada); Kellet, R. [British Columbia Univ., Vancouver, BC (Canada). School of Architecture and Landscape Architecture

    2010-07-01

    A remote sensing technology was used to model neighbourhood scale energy and carbon emissions in a case study set in Vancouver, British Columbia (BC). The study was used to compile and aggregate atmospheric carbon flux, urban form, and energy and emissions data in a replicable neighbourhood-scale approach. The study illustrated methods of integrating diverse emission and uptake processes on a range of scales and resolutions, and benchmarked comparisons of modelled estimates with measured energy consumption data obtained over a 2-year period from a research tower located in the study area. The study evaluated carbon imports, carbon exports and sequestration, and relevant emissions processes. Fossil fuel emissions produced in the neighbourhood were also estimated. The study demonstrated that remote sensing technologies such as LIDAR and multispectral satellite imagery can be an effective means of generating and extracting urban form and land cover data at fine scales. Data from the study were used to develop several emissions reduction and energy conservation scenarios. 6 refs.

  13. A model-based approach to the spatial and spectral calibration of NIRSpec onboard JWST

    Science.gov (United States)

    Dorner, B.; Giardino, G.; Ferruit, P.; Alves de Oliveira, C.; Birkmann, S. M.; Böker, T.; De Marchi, G.; Gnata, X.; Köhler, J.; Sirianni, M.; Jakobsen, P.

    2016-08-01

    Context. The NIRSpec instrument for the James Webb Space Telescope (JWST) can be operated in multiobject spectroscopy (MOS), long-slit, and integral field unit (IFU) mode with spectral resolutions from 100 to 2700. Its MOS mode uses about a quarter of a million individually addressable minislits for object selection, covering a field of view of ~9 arcmin2. Aims: The pipeline used to extract wavelength-calibrated spectra from NIRSpec detector images relies heavily on a model of NIRSpec optical geometry. We demonstrate how dedicated calibration data from a small subset of NIRSpec modes and apertures can be used to optimize this parametric model to the necessary levels of fidelity. Methods: Following an iterative procedure, the initial fiducial values of the model parameters are manually adjusted and then automatically optimized, so that the model predicted location of the images and spectral lines from the fixed slits, the IFU, and a small subset of the MOS apertures matches their measured location in the main optical planes of the instrument. Results: The NIRSpec parametric model is able to reproduce the spatial and spectral position of the input spectra with high fidelity. The intrinsic accuracy (1-sigma, rms) of the model, as measured from the extracted calibration spectra, is better than 1/10 of a pixel along the spatial direction and better than 1/20 of a resolution element in the spectral direction for all of the grating-based spectral modes. This is fully consistent with the corresponding allocation in the spatial and spectral calibration budgets of NIRSpec.

  14. Performance assessment of geospatial simulation models of land-use change--a landscape metric-based approach.

    Science.gov (United States)

    Sakieh, Yousef; Salmanmahiny, Abdolrassoul

    2016-03-01

    Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index were employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting indices demonstrated insignificant differences between spatial pattern of ground truth and simulated layers, there was a considerable inconsistency between simulation results and real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of observed and simulated layers for a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen modeler insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.

  15. Methodology of synchronization among strategy and operation. A standards-based modeling approach

    Directory of Open Access Journals (Sweden)

    VICTOR EDWIN COLLAZOS

    2017-05-01

    Full Text Available Enterprise Architecture (EA) has gained importance in recent years, mainly for its concept of “alignment” between the strategic and operational levels of organizations. Such alignment occurs when Information Technology (IT) is applied correctly and in a timely manner, working in synergy and harmony with strategy and operations so that each achieves its own goals and the organizational needs are satisfied. Both the strategic and operational levels have standards that help model the elements necessary to obtain the desired results. In this sense, BMM and BPMN were selected because both have the support of the OMG and are fairly well known for modelling the strategic and operational levels, respectively. In addition, i* goal modeling can be used to reduce the gap between these two standards. This proposal may help both with the high-level design of the information system and with the appropriate identification of the business processes that will support it. This paper presents a methodology for aligning strategy and operations based on standards and heuristics. We classify the elements of the models and, for some specific cases, extend the heuristics that associate them. This allows us to propose a methodology that uses the above-mentioned standards and combines mappings, transformations and actions to be considered in the alignment process.

  16. An agent-based approach to modelling the effects of extreme events on global food prices

    Science.gov (United States)

    Schewe, Jacob; Otto, Christian; Frieler, Katja

    2015-04-01

    Extreme climate events such as droughts or heat waves affect agricultural production in major food producing regions and therefore can influence the price of staple foods on the world market. There is evidence that recent dramatic spikes in grain prices were at least partly triggered by actual and/or expected supply shortages. The reaction of the market to supply changes is however highly nonlinear and depends on complex and interlinked processes such as warehousing, speculation, and export restrictions. Here we present for the first time an agent-based modelling framework that accounts, in simplified terms, for these processes and allows to estimate the reaction of world food prices to supply shocks on a short (monthly) timescale. We test the basic model using observed historical supply, demand, and price data of wheat as a major food grain. Further, we illustrate how the model can be used in conjunction with biophysical crop models to assess the effect of future changes in extreme event regimes on the volatility of food prices. In particular, the explicit representation of storage dynamics makes it possible to investigate the potentially nonlinear interaction between simultaneous extreme events in different food producing regions, or between several consecutive events in the same region, which may both occur more frequently under future global warming.
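    The nonlinear storage-price interaction can be illustrated with a deliberately minimal sketch: stocks buffer small supply shocks, and prices move only once stocks are exhausted. All numbers are hypothetical, and the calibrated agent-based model is far richer (speculation, export restrictions, expectations).

```python
def simulate_prices(months, supply, demand=100.0, stock0=50.0,
                    p0=1.0, elasticity=0.3):
    """Toy monthly price path under supply shocks with storage."""
    stock, price, path = stock0, p0, []
    for t in range(months):
        balance = supply[t] + stock - demand
        if balance >= 0:            # demand met; surplus replenishes stocks
            stock, shortage = balance, 0.0
        else:                       # stocks exhausted -> shortage
            stock, shortage = 0.0, -balance
        price *= 1.0 + elasticity * shortage / demand
        path.append(price)
    return path

# Stocks absorb the first two shortfalls; the price spikes only in month 3
print(simulate_prices(3, supply=[100, 80, 60]))  # [1.0, 1.0, 1.03]
```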

  17. Natural Aggregation Approach based Home Energy Management System with User Satisfaction Modelling

    Science.gov (United States)

    Luo, F. J.; Ranzi, G.; Dong, Z. Y.; Murata, J.

    2017-07-01

    With the prevalence of advanced sensing and two-way communication technologies, the Home Energy Management System (HEMS) has attracted considerable attention in recent years. This paper proposes a HEMS that optimally schedules controllable Residential Energy Resources (RERs) in a Time-of-Use (TOU) pricing and high solar power penetration environment. The HEMS aims to minimize the overall operational cost of the home, and the user's satisfaction with and requirements on the operation of different household appliances are modelled and considered in the HEMS. Further, the Natural Aggregation Algorithm (NAA), a biological self-aggregation intelligence based optimization technique previously proposed by the authors, is applied to solve the proposed HEMS optimization model. Simulations are conducted to validate the proposed method.

  18. The SPH approach to the process of container filling based on non-linear constitutive models

    Institute of Scientific and Technical Information of China (English)

    Tao Jiang; Jie Ouyang; Lin Zhang; Jin-Lian Ren

    2012-01-01

    In this work, the transient free surface arising when containers are filled with fluids governed by non-linear constitutive equations is numerically investigated by the smoothed particle hydrodynamics (SPH) method. Specifically, the filling process of a square container is considered for non-linear polymer fluids based on the Cross model. The validity of the presented SPH method is first verified by solving Newtonian fluid and Oldroyd-B fluid jets. Various phenomena in the filling process are shown, including jet buckling, jet thinning, splashing or spluttering, and steady filling. Moreover, a new phenomenon of vortex whirling is observed more clearly for the Cross-model fluid than for the Newtonian fluid.
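    For reference, the Cross constitutive model evaluates a shear-thinning viscosity from the local shear rate, as sketched below; the parameter values are hypothetical, not those of the simulations.

```python
def cross_viscosity(gamma_dot, eta0, eta_inf, lam, m):
    """Cross constitutive model: shear-thinning viscosity
    eta = eta_inf + (eta0 - eta_inf) / (1 + (lam * gamma_dot)**m),
    evaluated per particle from the local shear rate in an SPH step."""
    return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * gamma_dot) ** m)

# Hypothetical parameters: zero-shear viscosity 1e3, infinite-shear 1.0
print(cross_viscosity(10.0, eta0=1.0e3, eta_inf=1.0, lam=0.5, m=0.8))
```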

  19. Modeling and verifying Web services driven by requirements: An ontology-based approach

    Institute of Scientific and Technical Information of China (English)

    HOU Lishan; JIN ZHi; WU Budan

    2006-01-01

    Automatic discovery and composition of Web services is an important research area in Web service technology, in which the specification of Web services is a key issue. This paper presents a Web service capability description framework based on the environment ontology. This framework depicts Web service capability in two aspects: the operable environment and the environment changes resulting from the behaviors of the Web service. On the basis of the framework, a requirement-driven Web service composition model has been constructed. The paper presents a formalization of Web service interactions in the π-calculus, and an automatic mechanism converting the conceptual capability description to the formal process expression has been built. This kind of formal specification assists in verifying whether the composite Web service model matches the requirement.

  20. In silico prediction of toxicity of non-congeneric industrial chemicals using ensemble learning based modeling approaches

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com; Gupta, Shikha

    2014-03-15

    Ensemble learning approach based decision treeboost (DTB) and decision tree forest (DTF) models are introduced in order to establish quantitative structure-toxicity relationships (QSTR) for the prediction of the toxicity of 1450 diverse chemicals. Eight non-quantum mechanical molecular descriptors were derived. The structural diversity of the chemicals was evaluated using the Tanimoto similarity index. DTB and DTF models supplemented with stochastic gradient boosting and bagging algorithms were constructed for classification and function optimization problems using the toxicity end-point in T. pyriformis. Special attention was drawn to the prediction ability and robustness of the models, investigated both in external and 10-fold cross validation processes. On the complete data, the optimal DTB and DTF models rendered accuracies of 98.90% and 98.83% in two-category and 98.14% and 98.14% in four-category toxicity classifications. Both models further yielded classification accuracies of 100% on external toxicity data for T. pyriformis. The constructed regression models (DTB and DTF) using five descriptors yielded correlation coefficients (R2) of 0.945 and 0.944 between the measured and predicted toxicities, with mean squared errors (MSEs) of 0.059 and 0.064 on the complete T. pyriformis data. The T. pyriformis regression models (DTB and DTF) applied to the external toxicity data sets yielded R2 and MSE values of 0.637, 0.655; 0.534, 0.507 (marine bacteria) and 0.741, 0.691; 0.155, 0.173 (algae). The results suggest wide applicability of the inter-species models in predicting the toxicity of new chemicals for regulatory purposes. These approaches provide a useful strategy and robust tools for the screening of the ecotoxicological risk or environmental hazard potential of chemicals. - Graphical abstract: Importance of input variables in DTB and DTF classification models for (a) two-category, and (b) four-category toxicity intervals in T. pyriformis data. Generalization and predictive abilities of the
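    A minimal sketch of the ensemble-learning setup, using scikit-learn's stochastic gradient boosting and a random forest as stand-ins for the DTB and DTF implementations; the descriptor matrix and labels below are simulated, not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Simulated stand-in data: 1450 chemicals x 8 molecular descriptors,
# with a synthetic two-category toxicity label
rng = np.random.default_rng(0)
X = rng.normal(size=(1450, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

dtb = GradientBoostingClassifier(subsample=0.5, n_estimators=200)  # boosting
dtf = RandomForestClassifier(n_estimators=200)                     # bagging
for name, model in [("DTB-like", dtb), ("DTF-like", dtf)]:
    acc = cross_val_score(model, X, y, cv=10).mean()  # 10-fold CV as in the paper
    print(name, round(acc, 3))
```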

  1. A Novel Approach for Modeling and Simulation of Helix Twisting Structure Based on Mass-Spring Model

    Directory of Open Access Journals (Sweden)

    Zhongbin Wang

    2013-01-01

    Full Text Available In order to improve modeling efficiency and realize the deformation simulation of a helix twisting structure, a computer-aided design system based on a mass-spring model is developed. The geometric structure of the helix twisting structure is presented and a mass-spring model is applied in the deformation simulation. Moreover, key technologies such as coordinate mapping, system rendering and system architecture are elaborated. Finally, a prototype system is developed with Visual C++ and OpenGL, and the proposed system is shown to be efficient through a comparison experiment.
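    A minimal sketch of the mass-spring update underlying such a deformation simulation, with unit masses, explicit Euler integration and hypothetical parameters; a production system would typically add structural, shear and bend springs plus collision handling.

```python
import numpy as np

def step(pos, vel, springs, rest, k=50.0, damping=0.98, dt=1e-3):
    """One explicit-Euler step of a mass-spring system (unit masses).
    pos, vel: (n, 3); springs: list of (i, j) index pairs; rest: rest lengths."""
    force = np.zeros_like(pos)
    for (i, j), L0 in zip(springs, rest):
        d = pos[j] - pos[i]
        L = np.linalg.norm(d)
        f = k * (L - L0) * d / max(L, 1e-12)   # Hooke's law along the spring
        force[i] += f
        force[j] -= f
    vel = damping * (vel + dt * force)
    return pos + dt * vel, vel

# Two masses joined by one stretched spring relax toward rest length 1.0
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = step(pos, vel, springs=[(0, 1)], rest=[1.0])
print(pos)
```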

  2. Sensory neural pathways revisited to unravel the temporal dynamics of the Simon effect: A model-based cognitive neuroscience approach.

    Science.gov (United States)

    Salzer, Yael; de Hollander, Gilles; Forstmann, Birte U

    2017-02-24

    The Simon task is one of the most prominent interference tasks and has been extensively studied in experimental psychology and cognitive neuroscience. Despite years of research, the underlying mechanism driving the phenomenon and its temporal dynamics are still disputed. Within the framework of the review, we adopt a model-based cognitive neuroscience approach. We first go over key findings in the literature of the Simon task, discuss competing qualitative cognitive theories and the difficulty of testing them empirically. We then introduce sequential sampling models, a particular class of mathematical cognitive process models. Finally, we argue that the brain architecture accountable for the processing of spatial ('where') and non-spatial ('what') information, could constrain these models. We conclude that there is a clear need to bridge neural and behavioral measures, and that mathematical cognitive models may facilitate the construction of this bridge and work towards revealing the underlying mechanisms of the Simon effect.

  3. A computational model of the lexical-semantic system based on a grounded cognition approach.

    Science.gov (United States)

    Ursino, Mauro; Cuppini, Cristiano; Magosso, Elisa

    2010-01-01

    This work presents a connectionist model of the semantic-lexical system based on grounded cognition. The model assumes that the lexical and semantic aspects of language are memorized in two distinct stores. The semantic properties of objects are represented as a collection of features, whose number may vary among objects. Features are described as activation of neural oscillators in different sensory-motor areas (one area for each feature) topographically organized to implement a similarity principle. Lexical items are represented as activation of neural groups in a different layer. Lexical and semantic aspects are then linked together on the basis of previous experience, using physiological learning mechanisms. After training, features which frequently occurred together, and the corresponding word-forms, become linked via reciprocal excitatory synapses. The model also includes some inhibitory synapses: features in the semantic network tend to inhibit words not associated with them during the previous learning phase. Simulations show that after learning, presentation of a cue can evoke the overall object and the corresponding word in the lexical area. Moreover, different objects and the corresponding words can be simultaneously retrieved and segmented via a time division in the gamma-band. Word presentation, in turn, activates the corresponding features in the sensory-motor areas, recreating the same conditions occurring during learning. The model simulates the formation of categories, assuming that objects belong to the same category if they share some features. Simple examples are shown to illustrate how words representing a category can be distinguished from words representing individual members. Finally, the model can be used to simulate patients with focalized lesions, assuming an impairment of synaptic strength in specific feature areas.

  4. Experimental Validation of Modeled Fe Opacities at Conditions Approaching the Base of the Solar Convection Zone

    Science.gov (United States)

    Nagayama, Taisuke

    2013-10-01

    Knowledge of the Sun is a foundation for other stars. However, after the solar abundance revision in 2005, standard solar models disagree with helioseismic measurements particularly at the solar convection zone base (CZB, r ≈ 0.7 × R_Sun) [Basu, et al., Physics Reports 457, 217 (2008)]. One possible explanation is an underestimate in the Fe opacity at the CZB [Bailey et al., Phys. Plasmas 16, 058101 (2009)]. Modeled opacities are important physics inputs for plasma simulations (e.g. standard solar models). However, modeled opacities are not experimentally validated at high temperatures because of three challenging criteria required for reliable opacity measurements: 1) smooth and strong backlighter, 2) plasma condition uniformity, and 3) simultaneous measurements of plasma condition and transmission. Fe opacity experiments are performed at the Sandia National Laboratories (SNL) Z-machine aiming at conditions close to those at the CZB (i.e., T_e = 190 eV, n_e = 1 × 10^23 cm^-3). To verify the quality of the experiments, it is critical to investigate how well the three requirements are satisfied. The smooth and strong backlighter is provided by the SNL Z-pinch dynamic hohlraum. Fe plasma condition is measured by mixing Mg into the Fe sample and employing Mg K-shell line transmission spectroscopy. Also, an experiment is designed and performed to measure the level of non-uniformity in the Fe plasma by mixing Al and Mg dopants on the opposite side of the Fe sample and analyzing their spectra. We will present quantitative results on these investigations as well as the comparison of the measured opacity to modeled opacities. Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under contract DE-AC04-94AL85000.

  5. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    Directory of Open Access Journals (Sweden)

    Jin Dai

    2014-01-01

    Full Text Available The similarity between objects is a core research area of data mining. In order to reduce the interference of the uncertainty of natural language, a similarity measurement between normal cloud models is adopted for text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed. It can efficiently accomplish the conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the vector space model (VSM), the qualitative concepts extracted from texts of the same category are jumped up into a whole category concept. According to the cloud similarity between the test text and each category concept, the test text is assigned to the most similar category. A comparison among different text classifiers over different feature selection sets fully proves that CCJU-TC not only has a strong ability to adapt to different text features, but its classification performance is also better than that of the traditional classifiers.
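
    The record does not reproduce the classifier's formulas. The Python sketch below shows two generic building blocks often used with normal cloud models: the forward cloud generator for a concept (Ex, En, He) and one simple similarity choice, the cosine of the angle between (Ex, En, He) vectors. Both are assumptions for illustration, not CCJU-TC itself.

        import numpy as np

        def cloud_drops(Ex, En, He, n=1000, rng=None):
            """Forward normal cloud generator: n drops with membership degrees."""
            rng = rng or np.random.default_rng()
            Enp = rng.normal(En, He, n)          # per-drop entropy sample
            x = rng.normal(Ex, np.abs(Enp))      # drop positions
            mu = np.exp(-(x - Ex) ** 2 / (2 * Enp ** 2))
            return x, mu

        def cloud_cosine_similarity(c1, c2):
            """Cosine of the angle between (Ex, En, He) vectors, one simple
            similarity measure between two normal cloud models."""
            v1, v2 = np.asarray(c1, float), np.asarray(c2, float)
            return v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))

        print(cloud_cosine_similarity((0.6, 0.10, 0.01), (0.55, 0.12, 0.01)))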

  6. Group-wise herding behavior in financial markets: an agent-based modeling approach.

    Science.gov (United States)

    Kim, Minsung; Kim, Minki

    2014-01-01

    In this paper, we shed light on the dynamic characteristics of rational group behaviors and the relationship between monetary policy and economic units in the financial market by using an agent-based model (ABM), the Hurst exponent, and the Shannon entropy. First, an agent-based model is used to analyze the characteristics of the group behaviors at different levels of irrationality. Second, the Hurst exponent is applied to analyze the characteristics of the trend-following irrationality group. Third, the Shannon entropy is used to analyze the randomness and unpredictability of group behavior. We show that in a system that focuses on macro-monetary policy, steep fluctuations occur, meaning that the medium-level irrationality group has the highest Hurst exponent and Shannon entropy among all of the groups. However, in a system that focuses on micro-monetary policy, all group behaviors follow a stable trend, and the medium irrationality group thus remains stable, too. Likewise, in a system that focuses on both micro- and macro-monetary policies, all groups tend to be stable. Consequently, we find that group behavior varies across economic units at each irrationality level for micro- and macro-monetary policy in the financial market. Together, these findings offer key insights into monetary policy.
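
    The two diagnostics used here are standard and easy to compute. Below is a minimal Python sketch of a rescaled-range Hurst estimator and a histogram-based Shannon entropy; it is a generic illustration on white noise, not the paper's agent-based simulation output.

        import numpy as np

        def hurst_rs(series, min_chunk=8):
            """Estimate the Hurst exponent of a 1-D series by R/S analysis."""
            x = np.asarray(series, float)
            sizes, rs = [], []
            n = min_chunk
            while n <= len(x) // 2:
                chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
                vals = []
                for c in chunks:
                    z = np.cumsum(c - c.mean())
                    s = c.std()
                    if s > 0:
                        vals.append((z.max() - z.min()) / s)
                sizes.append(n)
                rs.append(np.mean(vals))
                n *= 2
            slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
            return slope

        def shannon_entropy(series, bins=10):
            """Shannon entropy (bits) of a histogram of the series."""
            p, _ = np.histogram(series, bins=bins)
            p = p[p > 0] / p.sum()
            return -(p * np.log2(p)).sum()

        r = np.random.default_rng(0).standard_normal(1024)
        print(hurst_rs(r), shannon_entropy(r))   # H near 0.5 for white noise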

  7. A Petri-Nets Based Unified Modeling Approach for Zachman Framework Cells

    Science.gov (United States)

    Ostadzadeh, S. Shervin; Nekoui, Mohammad Ali

    With a trend toward becoming more and more information based, enterprises constantly attempt to surpass each other's accomplishments by improving their information activities. In this respect, Enterprise Architecture (EA) has proven to serve as a fundamental concept to accomplish this goal. Enterprise architecture clearly provides a thorough outline of the whole of enterprise applications and systems with their relationships to enterprise business goals. To establish such an outline, a logical framework needs to be laid upon the entire information system, called an Enterprise Architecture Framework (EAF). Among the various proposed EAFs, the Zachman Framework (ZF) has been widely accepted as a standard scheme for identifying and organizing the descriptive representations that play critical roles in enterprise management and development. One of the problems faced in using ZF is the lack of formal and verifiable models for its cells. In this paper, we propose a formal language based on Petri nets in order to obtain verifiable models for all cells in ZF. The presented method helps developers to validate and verify completely integrated business and IT systems, which results in improved effectiveness and efficiency of the enterprise itself.
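
    The paper's formal language is not reproduced in the record; as background, the Python sketch below shows the core Petri-net mechanics that make such models executable and verifiable: a marking, enabling conditions, and a firing rule. The places and transition names are invented.

        from typing import Dict, List, Tuple

        class PetriNet:
            """Minimal place/transition net: each transition maps
            input places to output places."""
            def __init__(self, marking: Dict[str, int],
                         transitions: Dict[str, Tuple[List[str], List[str]]]):
                self.marking = dict(marking)
                self.transitions = transitions

            def enabled(self, t: str) -> bool:
                inputs, _ = self.transitions[t]
                return all(self.marking.get(p, 0) >= 1 for p in inputs)

            def fire(self, t: str) -> None:
                if not self.enabled(t):
                    raise ValueError(f"transition {t!r} is not enabled")
                inputs, outputs = self.transitions[t]
                for p in inputs:
                    self.marking[p] -= 1
                for p in outputs:
                    self.marking[p] = self.marking.get(p, 0) + 1

        net = PetriNet({"requirement": 1, "design": 0},
                       {"refine": (["requirement"], ["design"])})
        net.fire("refine")
        print(net.marking)    # {'requirement': 0, 'design': 1}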

  8. Modeling near-barrier collisions of heavy ions based on a Langevin-type approach

    Science.gov (United States)

    Karpov, A. V.; Saiko, V. V.

    2017-08-01

    Background: Multinucleon transfer in low-energy nucleus-nucleus collisions is proposed as a method of production of yet-unknown neutron-rich nuclei hardly reachable by other methods. Purpose: Modeling the dynamics of nuclear reactions induced by heavy ions in their full complexity of competing reaction channels remains a challenging task. This work is aimed at the development of such a model and its application to the analysis of multinucleon transfer in deep inelastic collisions of heavy ions leading, in particular, to the formation of neutron-rich isotopes in the vicinity of the N = 126 shell closure. Method: A multidimensional dynamical model of nucleus-nucleus collisions based on the Langevin equations has been proposed. It is combined with a statistical model for simulation of the de-excitation of primary reaction fragments. The model provides a continuous description of the system evolution starting from the well-separated target and projectile in the entrance channel of the reaction up to the formation of the final reaction products. Results: A rather complete set of experimental data available for the reactions 136Xe + 198Pt, 208Pb, 209Bi was analyzed within the developed model. The model parameters have been determined. The calculated energy, mass, charge, and angular distributions of reaction products, their various correlations, as well as cross sections for production of specific isotopes, agree well with the data. On this basis, optimal experimental conditions for synthesizing neutron-rich nuclei in the vicinity of the N = 126 shell were formulated and the corresponding cross sections were predicted. Conclusions: The production yields of neutron-rich nuclei with N = 126 depend weakly on the incident energy. At the same time, the corresponding angular distributions are strongly energy dependent. They are peaked at grazing angles for larger energies and extend up to the forward angles at low near-barrier collision energies. The corresponding cross sections exceed 100 nb for
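
    The multidimensional Langevin equations of the model are not given in the record; for orientation, here is a minimal one-dimensional Euler-Maruyama Langevin integrator in Python (units with k_B = 1, all parameters illustrative), showing the drift-plus-noise structure such approaches rely on.

        import numpy as np

        def langevin_trajectory(q0, p0, grad_V, gamma, temperature, mass=1.0,
                                dt=0.01, steps=1000, rng=None):
            """Euler-Maruyama integration of a 1-D Langevin equation:
                dq = (p/m) dt
                dp = (-dV/dq - gamma*p) dt + sqrt(2*gamma*m*T) dW
            """
            rng = rng or np.random.default_rng()
            q, p = q0, p0
            sigma = np.sqrt(2.0 * gamma * mass * temperature)  # fluctuation-dissipation
            traj = np.empty((steps, 2))
            for i in range(steps):
                p += (-grad_V(q) - gamma * p) * dt \
                     + sigma * np.sqrt(dt) * rng.standard_normal()
                q += p / mass * dt
                traj[i] = q, p
            return traj

        # Example: harmonic "collective coordinate" with V(q) = q^2 / 2
        traj = langevin_trajectory(q0=1.0, p0=0.0, grad_V=lambda q: q,
                                   gamma=0.5, temperature=0.1)
        print(traj[-1])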

  9. Group-based trajectory models: a new approach to classifying and predicting long-term medication adherence.

    Science.gov (United States)

    Franklin, Jessica M; Shrank, William H; Pakes, Juliana; Sanfélix-Gimeno, Gabriel; Matlin, Olga S; Brennan, Troyen A; Choudhry, Niteesh K

    2013-09-01

    Classifying medication adherence is important for efficiently targeting adherence improvement interventions. The purpose of this study was to evaluate the use of a novel method, group-based trajectory models, for classifying patients by their long-term adherence. We identified patients who initiated a statin between June 1, 2006 and May 30, 2007 in prescription claims from CVS Caremark and evaluated adherence over the subsequent 15 months. We compared several adherence summary measures, including proportion of days covered (PDC) and trajectory models with 2-6 groups, with the observed adherence pattern, defined by monthly indicators of full adherence (defined as having ≥24 d covered of 30). We also compared the accuracy of adherence prediction based on patient characteristics when adherence was defined by either a trajectory model or PDC. In 264,789 statin initiators, the 6-group trajectory model summarized long-term adherence best (C=0.938), whereas PDC summarized less well (C=0.881). The accuracy of adherence predictions was similar whether adherence was classified by PDC or by trajectory model. Trajectory models summarized adherence patterns better than traditional approaches and were similarly predicted by covariates. Group-based trajectory models may facilitate targeting of interventions and may be useful to adjust for confounding by health-seeking behavior.
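
    The two adherence summaries compared above are simple to compute from claims-derived coverage counts. A small Python sketch of the PDC and the monthly full-adherence indicators (>=24 of 30 days covered) follows; the per-month data are invented.

        import numpy as np

        def pdc(days_covered_per_month, days_per_month=30):
            """Proportion of days covered over the observation window."""
            d = np.asarray(days_covered_per_month, float)
            return d.sum() / (days_per_month * len(d))

        def monthly_adherence_indicators(days_covered_per_month, threshold=24):
            """Monthly indicator of full adherence: >=24 of 30 days covered."""
            return [int(d >= threshold) for d in days_covered_per_month]

        months = [30, 28, 25, 10, 0, 5, 22, 30, 30, 27, 26, 30, 30, 29, 30]  # 15 months
        print(pdc(months))
        print(monthly_adherence_indicators(months))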

  10. Multivariate Autoregressive Model Based Heart Motion Prediction Approach for Beating Heart Surgery

    Directory of Open Access Journals (Sweden)

    Fan Liang

    2013-02-01

    Full Text Available A robotic tool can enable a surgeon to conduct off-pump coronary artery bypass graft surgery on a beating heart. The robotic tool actively alleviates the relative motion between the point of interest (POI) on the heart surface and the surgical tool, and allows the surgeon to operate as if the heart were stationary. Since the beating heart's motion has a relatively high bandwidth, with nonlinear and nonstationary characteristics, it is difficult to follow. Thus, precise beating heart motion prediction is necessary for the tracking control procedure during the surgery. In the research presented here, we first observe that the electrocardiography (ECG) signal contains causal phase information on heart motion and non-stationary heart rate dynamic variations. Then, we investigate the relationship between the ECG signal and beating heart motion using Granger causality analysis, which supports the feasibility of improved prediction of heart motion. Next, we propose a nonlinear time-varying multivariate vector autoregressive (MVAR) model based adaptive prediction method. In this model, the significant correlation between ECG and heart motion enables the improvement of the prediction of sharp changes in heart motion and the approximation of the motion with sufficient detail. Dual Kalman filters (DKF) estimate the states and parameters of the model, respectively. Last, we evaluate the proposed algorithm through comparative experiments using two sets of collected in vivo data.
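
    As a simplified, time-invariant stand-in for the paper's time-varying MVAR-with-dual-Kalman estimation, the Python sketch below fits a VAR(p) model by ordinary least squares and makes a one-step-ahead prediction; the two synthetic channels only mimic a heart-position signal and an ECG-derived feature.

        import numpy as np

        def fit_var(data, order):
            """Least-squares fit of VAR(p): y_t = c + sum_k A_k y_{t-k} + e_t.

            data: (T, d) array. Returns B of shape (1 + d*order, d)."""
            T, d = data.shape
            rows = []
            for t in range(order, T):
                past = data[t - order:t][::-1].ravel()  # lags 1..p, newest first
                rows.append(np.concatenate(([1.0], past)))
            X, Y = np.asarray(rows), data[order:]
            B, *_ = np.linalg.lstsq(X, Y, rcond=None)
            return B

        def var_predict_one(data, B, order):
            """One-step-ahead prediction from the last `order` samples."""
            past = data[-order:][::-1].ravel()
            return np.concatenate(([1.0], past)) @ B

        t = np.arange(400) * 0.01
        obs = np.column_stack([np.sin(2 * np.pi * 1.2 * t),    # "heart position"
                               np.cos(2 * np.pi * 1.2 * t)])   # "ECG feature"
        B = fit_var(obs, order=5)
        print(var_predict_one(obs, B, order=5))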

  11. PLETS model: a sustainability-concept-based approach to product end-of-life management

    Science.gov (United States)

    Dunmade, Israel

    2004-12-01

    The need for sustainable product end-of-life management technologies is critical in today's globally competitive environment. The ever-increasing environmental consciousness of consumers and strictness in legislative regulations necessitate more prudent product decisions. The ability to make sound decisions on which product end-of-life management technologies to adopt is crucial to achieving sustainability of the product systems. It is essential that effective assessments of these technologies for future investment and applications indicate the total economic, environmental and social impacts of each option as well as the trade-offs between the various product end-of-life management technologies. The tendency in modeling this decision scenario is to base the formulation and the analysis on crisp, deterministic, and precise data. The product end-of-life management decision environment is, however, characterized by a mix of crisp and linguistically expressed parameters, most of which are uncertain in nature. Furthermore, the decision makers are interested in selecting an option that both satisfies certain minimum requirements and maximizes their utility from a set of feasible alternatives. The goal of this study therefore is to develop a simple, efficient procedure that provides the manufacturing and allied industry with the ability to assess and evaluate the sustainability of remanufacturing and related technologies based on lifecycle thinking. This methodology, termed the "product lifecycle extension techniques selection (PLETS) model," is a hybrid of fuzzy logic and a number of multi-attribute decision-making models. It can be used to determine the remanufacturability of each product. In addition, it can also be employed to compare the economic, environmental and social sustainability of the feasible set of the product end-of-life management technologies being considered. The proposed methodology is illustrated with an example of end-of-life management for a peanut

  12. A Novel Approach to Testing for Average Bioequivalence Based on Modeling the Within-Period Dependence Structure.

    Science.gov (United States)

    Chandrasekhar, Rameela; Shi, Yi; Hutson, Alan D; Wilding, Gregory E

    2015-01-01

    Bioequivalence trials are commonly conducted to assess therapeutic equivalence between generic and innovator brand formulations. In such trials, drug concentrations are obtained repeatedly over time and are summarized using a metric such as the area under the concentration vs. time curve (AUC) for each subject. The usual practice is then to conduct two one-sided tests using these areas to evaluate average bioequivalence. A major disadvantage of this approach is the loss of information encountered when ignoring the correlation structure between repeated measurements in the computation of areas. In this article, we propose a general linear model approach that incorporates the within-subject covariance structure for making inferences on mean areas. The model-based method can be seen to arise naturally from the reparameterization of the AUC as a linear combination of outcome means. We investigate and compare the inferential properties of our proposed method with the traditional two one-sided tests approach using Monte Carlo simulation studies. We also examine the properties of the method in the event of missing data. Simulations show that the proposed approach is a cost-effective, viable alternative to the traditional method with superior inferential properties. Inferential advantages are particularly apparent in the presence of missing data. To illustrate our approach, a real working example from an asthma study is utilized.
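
    The reparameterization the authors build on is that the trapezoidal-rule AUC is a linear combination of the concentration values, so mean AUCs are linear combinations of outcome means. A short Python sketch of the weight construction, with invented sampling times and concentrations:

        import numpy as np

        def auc_weights(times):
            """Trapezoidal-rule weights w such that AUC = w @ concentrations."""
            t = np.asarray(times, float)
            w = np.zeros_like(t)
            w[:-1] += np.diff(t) / 2.0
            w[1:] += np.diff(t) / 2.0
            return w

        times = [0, 0.5, 1, 2, 4, 8, 12]            # sampling times (h)
        conc = [0, 3.1, 4.2, 3.6, 2.0, 0.8, 0.3]    # concentrations (illustrative)
        w = auc_weights(times)
        print(w @ np.asarray(conc))                 # equals the trapezoidal AUC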

  13. Linking Resource-Based Strategies to Customer-Focused Performance for Professional Services: A Structural Equation Modelling Approach

    Directory of Open Access Journals (Sweden)

    Ming-Lu Wu

    2013-12-01

    Full Text Available This paper links professional service firms' resource-based strategies to their customer-focused performance for formulating service quality improvement priorities. The research applies the structural equation modelling approach to survey data from Hong Kong construction consultants to test some hypotheses. The study validates the various measures of firms' resource-based strategies and customer-focused performance, and bridges the gaps in firms' organizational learning, core competences and customer-focused performance as mediated by their strategic flexibility. The research results have practical implications for professional service firms to deploy resources appropriately, first to enhance different competences and then to improve customer-focused performance using those different competences.

  14. Assessing multiscale complexity of short heart rate variability series through a model-based linear approach

    Science.gov (United States)

    Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe

    2017-09-01

    We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded on the identification of the coefficients of an autoregressive model, on the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and on the assessment of its distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of cardiac autonomic control, namely the low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of the cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded on information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrement of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over a short time series, MSC allows a more insightful association between cardiac control complexity and physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
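
    A minimal Python sketch of the pole-based, band-limited irregularity index as described: given AR coefficients, it finds the poles, keeps those whose frequencies fall in the assigned band, and returns their mean distance from the unit circle. The coefficients, band and sampling rate in the demo are illustrative, and the identification step (e.g., Levinson-Durbin on the HP series) is omitted.

        import numpy as np

        def band_pole_distance(ar_coeffs, f_lo, f_hi, fs):
            """Mean distance from the unit circle of AR poles whose
            frequencies fall in [f_lo, f_hi] Hz, given sampling rate fs.

            ar_coeffs: a_1..a_p of x_t = sum_k a_k x_{t-k} + e_t.
            Poles closer to the origin suggest more irregularity.
            """
            a = np.asarray(ar_coeffs, float)
            poles = np.roots(np.concatenate(([1.0], -a)))
            freqs = np.abs(np.angle(poles)) * fs / (2 * np.pi)
            sel = poles[(freqs >= f_lo) & (freqs <= f_hi)]
            if sel.size == 0:
                return np.nan
            return float(np.mean(1.0 - np.abs(sel)))

        # LF band (0.04-0.15 Hz) of an HP series resampled at 1 Hz (assumed)
        print(band_pole_distance([1.2, -0.6], 0.04, 0.15, fs=1.0))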

  15. Identifying overrepresented concepts in gene lists from literature: a statistical approach based on Poisson mixture model

    Directory of Open Access Journals (Sweden)

    Zhai Chengxiang

    2010-05-01

    Full Text Available Background: Large-scale genomic studies often identify large gene lists, for example, the genes sharing the same expression patterns. The interpretation of these gene lists is generally achieved by extracting concepts overrepresented in the gene lists. This analysis often depends on manual annotation of genes based on controlled vocabularies, in particular, Gene Ontology (GO). However, the annotation of genes is a labor-intensive process, and the vocabularies are generally incomplete, leaving some important biological domains inadequately covered. Results: We propose a statistical method that uses the primary literature, i.e., free text, as the source to perform overrepresentation analysis. The method is based on a statistical framework of mixture models and addresses the methodological flaws in several existing programs. We implemented this method within a literature mining system, BeeSpace, taking advantage of its analysis environment, and added features that facilitate the interactive analysis of gene sets. Through experimentation with several datasets, we showed that our program can effectively summarize the important conceptual themes of large gene sets, even when traditional GO-based analysis does not yield informative results. Conclusions: We conclude that the current work will provide biologists with a tool that effectively complements the existing ones for overrepresentation analysis from genomic experiments. Our program, Genelist Analyzer, is freely available at: http://workerbee.igb.uiuc.edu:8080/BeeSpace/Search.jsp

  16. Tracking facial features in video sequences using a deformable-model-based approach

    Science.gov (United States)

    Malciu, Marius; Preteux, Francoise J.

    2000-10-01

    This paper addresses the issue of computer vision-based face motion capture as an alternative to physical sensor-based technologies. The proposed method combines deformable template-based tracking of the mouth and eyes in arbitrary video sequences of a single speaking person with a global 3D head pose estimation procedure yielding robust initializations. Mathematical principles underlying deformable template matching, together with the definition and extraction of salient image features, are presented. Specifically, interpolating cubic B-splines between the MPEG-4 Face Animation Parameters (FAPs) associated with the mouth and eyes are used as the template parameterization. Modeling the template as a network of springs interconnecting the mouth and eyes FAPs, the internal energy is expressed as a combination of elastic and symmetry local constraints. The external energy function, which enforces interactions with image data, involves contour, texture and topography properties properly combined within robust potential functions. Template matching is achieved by applying the downhill simplex method to minimize the global energy cost. Stability and accuracy of the results are discussed on a set of 2000 frames corresponding to 5 video sequences of speaking people.

  17. Optimizing water resources management in large river basins with integrated surface water-groundwater modeling: A surrogate-based approach

    Science.gov (United States)

    Wu, Bin; Zheng, Yi; Wu, Xin; Tian, Yong; Han, Feng; Liu, Jie; Zheng, Chunmiao

    2015-04-01

    Integrated surface water-groundwater modeling can provide a comprehensive and coherent understanding on basin-scale water cycle, but its high computational cost has impeded its application in real-world management. This study developed a new surrogate-based approach, SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), to incorporate the integrated modeling into water management optimization. Its applicability and advantages were evaluated and validated through an optimization research on the conjunctive use of surface water (SW) and groundwater (GW) for irrigation in a semiarid region in northwest China. GSFLOW, an integrated SW-GW model developed by USGS, was employed. The study results show that, due to the strong and complicated SW-GW interactions, basin-scale water saving could be achieved by spatially optimizing the ratios of groundwater use in different irrigation districts. The water-saving potential essentially stems from the reduction of nonbeneficial evapotranspiration from the aqueduct system and shallow groundwater, and its magnitude largely depends on both water management schemes and hydrological conditions. Important implications for water resources management in general include: first, environmental flow regulation needs to take into account interannual variation of hydrological conditions, as well as spatial complexity of SW-GW interactions; and second, to resolve water use conflicts between upper stream and lower stream, a system approach is highly desired to reflect ecological, economic, and social concerns in water management decisions. Overall, this study highlights that surrogate-based approaches like SOIM represent a promising solution to filling the gap between complex environmental modeling and real-world management decision-making.
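
    SOIM's internals are not given in the record; the Python sketch below only illustrates the generic surrogate-based optimization loop such approaches share: sample the expensive integrated model, fit a cheap surrogate, minimize the surrogate, re-evaluate the true model at the proposed point, and refit. The 1-D objective standing in for a GSFLOW run is invented.

        import numpy as np

        def surrogate_optimize(expensive_model, bounds, n_init=5, n_iter=10, rng=None):
            """Generic surrogate-based optimization loop (1-D illustration)
            with a quadratic polynomial as the cheap surrogate."""
            rng = rng or np.random.default_rng(0)
            lo, hi = bounds
            X = list(rng.uniform(lo, hi, n_init))
            Y = [expensive_model(x) for x in X]
            for _ in range(n_iter):
                coeff = np.polyfit(X, Y, deg=2)          # fit cheap surrogate
                grid = np.linspace(lo, hi, 1001)
                x_new = grid[np.argmin(np.polyval(coeff, grid))]
                X.append(x_new)
                Y.append(expensive_model(x_new))         # one expensive run per iteration
            best = int(np.argmin(Y))
            return X[best], Y[best]

        # Stand-in objective: non-beneficial ET vs. groundwater-use ratio
        cost = lambda r: (r - 0.62) ** 2 + 0.05 * np.sin(15 * r)
        print(surrogate_optimize(cost, (0.0, 1.0)))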

  18. Impacts of radiation exposure on the experimental microbial ecosystem: a particle-based model simulation approach

    Energy Technology Data Exchange (ETDEWEB)

    Doi, M.; Tanaka, N.; Fuma, S.; Kawabata, Z.

    2004-07-01

    A well-designed experimental model ecosystem can be a simple reference for the actual environment and complex ecological systems. For ecological toxicity tests of radiation and other environmental toxicants, we investigated an aquatic microbial ecosystem (closed microcosm) in a test tube with initial substrates, autotrophic flagellate algae (Euglena G.), heterotrophic ciliate protozoa (Tetrahymena T.) and saprotrophic bacteria (E. coli). These species organize themselves to construct an ecological system that maintains sustainable population dynamics for more than 2 years after inoculation, only by adding light diurnally and controlling temperature at 25 degrees Celsius. The objective of the study is to develop a particle-based computer simulation by reviewing interactions among microbes and their environment, and to analyze the ecological toxicities of radiation on the microcosm by replicating experimental results in the computer simulation. (Author) 14 refs.

  19. A comparative study of independent particle model based approaches for thermal averages

    Indian Academy of Sciences (India)

    Subrata Banik; Tapta Kanchan Roy; M Durga Prasad

    2013-09-01

    A comparative study is done on thermal average calculation using the state-specific vibrational self-consistent field (ss-VSCF) method, the virtual vibrational self-consistent field (v-VSCF) method and the thermal self-consistent field (t-SCF) method. The different thermodynamic properties and expectation values are calculated using these three methods and the results are compared with the full vibrational configuration interaction (FVCI) method. We find that among these three independent particle model based methods, the ss-VSCF method provides the most accurate results for the thermal averages, followed by t-SCF, and the v-VSCF is the least accurate. However, the ss-VSCF is found to be computationally very expensive for large molecules. The t-SCF gives better accuracy than its v-VSCF counterpart, especially at higher temperatures.

  20. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels, but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  1. The role of chromosome missegregation in cancer development: a theoretical approach using agent-based modelling.

    Directory of Open Access Journals (Sweden)

    Arturo Araujo

    Full Text Available Many cancers are aneuploid. However, the precise role that chromosomal instability plays in the development of cancer and in the response of tumours to treatment is still hotly debated. Here, to explore this question from a theoretical standpoint, we have developed an agent-based model of tissue homeostasis in which to test the likely effects of whole chromosome mis-segregation during cancer development. In stochastic simulations, chromosome mis-segregation events at cell division lead to the generation of a diverse population of aneuploid clones that over time exhibit hyperplastic growth. Significantly, the course of cancer evolution depends on genetic linkage, as the structure of chromosomes lost or gained through mis-segregation events and the level of genetic instability function in tandem to determine the trajectory of cancer evolution. As a result, simulated cancers differ in their level of genetic stability and in their growth rates. We used this system to investigate the consequences of these differences in tumour heterogeneity for anti-cancer therapies based on surgery and anti-mitotic drugs that selectively target proliferating cells. As expected, simulated treatments induce a transient delay in tumour growth, and reveal a significant difference in the efficacy of different therapy regimes in treating genetically stable and unstable tumours. These data support clinical observations in which a poor prognosis is correlated with a high level of chromosome mis-segregation. However, stochastic simulations run in parallel also exhibit a wide range of behaviours, and the responses of individual simulations (equivalent to single tumours) to anti-cancer therapy prove extremely variable. The model therefore highlights the difficulties of predicting the outcome of a given anti-cancer treatment, even in cases in which it is possible to determine the genotype of the entire set of cells within the developing tumour.

  2. Predictive Modeling of Antioxidant Coumarin Derivatives Using Multiple Approaches: Descriptor-Based QSAR, 3D-Pharmacophore Mapping, and HQSAR

    Directory of Open Access Journals (Sweden)

    Indrani MITRA

    2016-09-01

    Full Text Available The inability of systemic antioxidants to alleviate the exacerbation of free radical formation from metabolic outputs and environmental pollutants creates an urgent demand for the identification and design of new chemical entities with potent antioxidant activity. In the present work, different QSAR approaches have been utilized to identify the essential structural attributes imparting a potent antioxidant activity profile to the coumarin derivatives. The descriptor-based QSAR model provides a quantitative outline of the structural prerequisites of the molecules, while the 3D pharmacophore and HQSAR models emphasize the favourable spatial arrangement of the various chemical features and the crucial molecular fragments, respectively. All the models infer that the fused benzene ring and the oxygen atom of the pyran ring constituting the parent coumarin nucleus capture the prime pharmacophoric features imparting superior antioxidant activity to the molecules. The developed models may serve as indispensable query tools for screening untested molecules belonging to the class of coumarin derivatives.

  3. A multicriteria model for ranking of improvement approaches in construction companies based on the PROMETHÉE II method

    Directory of Open Access Journals (Sweden)

    Renata Maciel de Melo

    2015-03-01

    Full Text Available The quality of the construction production process may be improved using several different methods, such as Lean Construction, ISO 9001, ISO 14001 or ISO 18001. Construction companies need a preliminary study and systematic implementation of changes to become more competitive and efficient. This paper presents a multicriteria decision model, based on the PROMETHEE II method, for the selection and ranking of such improvement approaches with regard to quality, sustainability and safety. The adoption of this model provides more confidence and visibility for decision makers. One of the differentiators of this model is the use of a fragmented set of improvement alternatives, which were combined with some restrictions to create a global set of alternatives. An application to three scenarios, considering realistic data, was developed. The results of the application show that the model should be incorporated into the strategic planning process of organizations.
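
    The net-flow computation in PROMETHEE II is standard; a compact Python sketch using the "usual" (step) preference function follows. The criteria, weights and scores in the demo are invented, not taken from the paper.

        import numpy as np

        def promethee_ii(scores, weights, prefer_max=None):
            """PROMETHEE II net outranking flows with the 'usual'
            preference function P(d) = 1 if d > 0 else 0.

            scores:  (n_alternatives, n_criteria) evaluation table.
            weights: criterion weights summing to 1.
            """
            s = np.asarray(scores, float)
            n, m = s.shape
            if prefer_max is None:
                prefer_max = [True] * m
            s = np.where(prefer_max, s, -s)      # turn all criteria into "max"
            pi = np.zeros((n, n))
            for j in range(m):
                d = s[:, j][:, None] - s[:, j][None, :]
                pi += weights[j] * (d > 0)       # aggregated preference degrees
            phi_plus = pi.sum(axis=1) / (n - 1)
            phi_minus = pi.sum(axis=0) / (n - 1)
            return phi_plus - phi_minus          # rank by decreasing net flow

        scores = [[7, 6, 8], [9, 4, 5], [6, 8, 7]]   # alternatives x criteria
        print(promethee_ii(scores, weights=np.array([0.5, 0.3, 0.2])))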

  4. Provider dismissal policies and clustering of vaccine-hesitant families: an agent-based modeling approach.

    Science.gov (United States)

    Buttenheim, Alison M; Cherng, Sarah T; Asch, David A

    2013-08-01

    Many pediatric practices have adopted vaccine policies that require parents who refuse to vaccinate according to the ACIP schedule to find another health care provider. Such policies may inadvertently cluster unvaccinated patients into practices that tolerate non-vaccination or alternative schedules, turning them into risky pockets of low herd immunity. The objective of this study was to assess the effect of provider zero-tolerance vaccination policies on the clustering of intentionally unvaccinated children. We developed an agent-based model of parental vaccine hesitancy, provider non-vaccination tolerance, and selection of patients into pediatric practices. We ran 84 experiments across a range of parental hesitancy and provider tolerance scenarios. When the model is initialized, all providers accommodate refusals and intentionally unvaccinated children are evenly distributed across providers. As provider tolerance decreases, hesitant children become more clustered in a smaller number of practices and eventually are not able to find a practice that will accept them. Each of these effects becomes more pronounced as the level of hesitancy in the population rises. Heterogeneity in practice tolerance to vaccine-hesitant parents has the unintended result of concentrating susceptible individuals within a small number of tolerant practices, while providing little if any compensatory protection to adherent individuals. These externalities suggest an agenda for stricter policy regulation of individual practice decisions.

  5. Enhancing the Lasso Approach for Developing a Survival Prediction Model Based on Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Shuhei Kaneko

    2015-01-01

    Full Text Available In the past decade, researchers in oncology have sought to develop survival prediction models using gene expression data. The least absolute shrinkage and selection operator (lasso) has been widely used to select genes that truly correlate with a patient's survival. The lasso selects genes for prediction by shrinking a large number of coefficients of the candidate genes towards zero, based on a tuning parameter that is often determined by cross-validation (CV). However, this method can pass over (or fail to identify) true positive genes (i.e., it identifies false negatives) in certain instances, because the lasso tends to favor the development of a simple prediction model. Here, we attempt to monitor the identification of false negatives by developing a method for estimating the number of true positive (TP) genes for a series of values of the tuning parameter, assuming a mixture distribution for the lasso estimates. Using our developed method, we performed a simulation study to examine its precision in estimating the number of TP genes. Additionally, we applied our method to a real gene expression dataset and found that it was able to identify genes correlated with survival that the CV method was unable to detect.
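
    For orientation, here is a hedged Python illustration of the CV-tuned lasso step the record describes, assuming scikit-learn is available and using synthetic data with a continuous stand-in outcome rather than actual survival times (a Cox-type lasso would be used in practice).

        import numpy as np
        from sklearn.linear_model import LassoCV

        # Illustrative data: 200 samples x 1000 candidate genes, 10 informative
        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 1000))
        beta = np.zeros(1000)
        beta[:10] = 0.8
        y = X @ beta + rng.standard_normal(200)   # stand-in for a survival outcome

        # Cross-validation picks the tuning parameter; a larger penalty shrinks
        # more coefficients exactly to zero (risking false negatives)
        model = LassoCV(cv=5).fit(X, y)
        selected = np.flatnonzero(model.coef_)
        print(f"alpha={model.alpha_:.4f}, {selected.size} genes selected")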

  6. Evidence-based management of otitis media: a 5S model approach.

    Science.gov (United States)

    Wasson, J D; Yung, M W

    2015-02-01

    The 5S model proposes five hierarchical levels (systems, summaries, synopses, syntheses and studies) of pre-appraised evidence to guide evidence-based practice. This review aimed to identify and summarise pre-appraised evidence at the highest available 5S level for the management of different subsets of otitis media: acute otitis media, otitis media with effusion, chronic suppurative otitis media and cholesteatoma in both adults and children. Data sources were pre-appraised evidence resources. Evidence freely available from sources at the highest available level of the 5S model were summarised for this review. System level evidence exists for acute otitis media and otitis media with effusion. Summary level evidence exists for recurrent acute otitis media and medical management of chronic suppurative otitis media. There is an absence of randomised controlled trials to prove the efficacy of surgical management of chronic suppurative otitis media and cholesteatoma. Until randomised controlled trial data are generated, consensus publications on the surgical management of chronic suppurative otitis media and cholesteatoma should be used to guide best practice.

  7. An optimization approach to cycle quality network chain based on improved SCOR model

    Institute of Scientific and Technical Information of China (English)

    Renbin Xiao; Zhengying Cai; Xinhui Zhang

    2009-01-01

    Based on the improved supply chain operations reference (SCOR) model, a network-topology structure of the cycle quality chain operations reference (CQCOR) model is built, which realizes cycle operation through an added quality process of reverse manufacturing. The concept of cycle quality chain management is defined, and its cost structure is analyzed according to positive and reverse quality processes. If the quality level is controlled by the positive quality cost, then the reverse quality cost is a nonlinear function of the quality level. All the quality processes are connected by acceptable probability, so the optimized objective function is described as a fuzzy multi-objective function comprising maximization of the total profit of the quality chain, of the recycling efficiency, and of environmental protection and resource saving. The effects of different quality policies on the fuzzy rules are compared in a simplified example. When the policy of recycling efficiency dominates, the total quality profit will be less than that under the maximum-profit policy.

  8. A Model Based Ideotyping Approach for Wheat Under Different Environmental Conditions in North China Plain

    Institute of Scientific and Technical Information of China (English)

    Markus Herndl; SHAN Cheng-gang; WANG Pu; Simone Graeff; Wilhelm Claupein

    2007-01-01

    Before starting a breeding program for a specific crop or variety, it can be helpful to know how traits behave in determining yield under different conditions and environments. Crop growth models can be used to generate valuable information on the relevance of specific traits for an environment of interest. In this paper, the simulation model CSM-Cropsim-CERES-Wheat was used to test the performance of input parameters which describe cultivar differences concerning plant development and grain yield. In so-called ideotyping sequences, the specific cultivar parameters were varied and the model was run with the same management information in four different scenarios. The scenarios consisted of two locations, Wuqiao (37.3°N, 116.3°E) and Quzhou (36.5°N, 115°E) in Hebei Province (North China Plain), and a dry and a wet growing season for each location. The input parameter G1 (corresponding trait: kernel number per spike), followed by G2 (corresponding trait: kernel weight), had the biggest influence on yield over all scenarios. The input parameters P1V (corresponding trait: vernalization requirement) and P1D (corresponding trait: photoperiod response) also played an important role in determining yield. In the dry scenarios, a low response in vernalization and photoperiod generated a higher yield compared to a high response; the lower responses caused earliness, and the period of late water stress was avoided. The last relevant parameter that affected yield was PHINT (corresponding trait: leaf area of first leaf). The simulation showed that with an increasing PHINT, yield was enhanced over all scenarios. Based on the results obtained in this study, plant breeders could carefully select the relevant traits and integrate them in their breeding programs for a specific region.

  9. Hydrological modelling over different scales on the edge of the permafrost zone: approaching model realism based on experimentalists' knowledge

    Science.gov (United States)

    Nesterova, Natalia; Makarieva, Olga; Lebedeva, Lyudmila

    2017-04-01

    Quantitative and qualitative experimentalists' data help to advance both the understanding of runoff generation and modelling strategies. There is a significant lack of such information for the dynamic and vulnerable cold regions. The aim of the study is to make use of historically collected experimental hydrological data for modelling poorly gauged river basins at larger scales near the southern margin of the permafrost zone in Eastern Siberia. The experimental study site "Mogot" includes the Nelka river (30.8 km2) and its three tributaries with watershed areas from 2 to 5.8 km2. It is located in the upper, elevated (500-1500 m a.s.l.) part of the Amur River basin. Mean annual temperature and precipitation are -7.5°C and 555 mm, respectively. The tops of the mountains, with sparse vegetation, have well-drained soil that prevents any water accumulation. The larch forest on the northern slopes has a thick organic layer, which causes a shallow active layer and relatively small subsurface water storage. Soil on the southern slopes has a thinner organic layer and thaws to a depth of 1.6 m. Flood plains are the wettest landscape, with the highest water storage capacity. Measured monthly evaporation varies from 9 to 100 mm through the year. The experimental data show the importance of changes in air temperature and precipitation with elevation; their gradients were taken into account in the hydrological simulations. The model parameterization was developed according to the quantitative and qualitative data available at the Mogot station. The process-based hydrological Hydrograph model, which explicitly describes hydrological processes in different permafrost environments, was used in the study. The flexibility of the Hydrograph model makes it possible to take advantage of the experimental data for model set-up. The model uses basic meteorological data as input. The level of model complexity is suitable for a remote, sparsely gauged region such as Southern Siberia, as it allows for a priori assessment of the model parameters. Model simulation

  10. Correlation between 2D and 3D flow curve modelling of DP steels using a microstructure-based RVE approach

    Energy Technology Data Exchange (ETDEWEB)

    Ramazani, A., E-mail: ali.ramazani@iehk.rwth-aachen.de [Department of Ferrous Metallurgy, RWTH Aachen University, Intzestr.1, D-52072 Aachen (Germany); Mukherjee, K.; Quade, H.; Prahl, U.; Bleck, W. [Department of Ferrous Metallurgy, RWTH Aachen University, Intzestr.1, D-52072 Aachen (Germany)

    2013-01-10

    A microstructure-based approach by means of representative volume elements (RVEs) is employed to evaluate the flow curve of DP steels using virtual tensile tests. Microstructures with different martensite fractions and morphologies are studied in two- and three-dimensional approaches. Micro sections of DP microstructures with various amounts of martensite have been converted to 2D RVEs, while 3D RVEs were constructed statistically with randomly distributed phases. A dislocation-based model is used to describe the flow curve of each ferrite and martensite phase separately as a function of carbon partitioning and microstructural features. Numerical tensile tests of RVE were carried out using the ABAQUS/Standard code to predict the flow behaviour of DP steels. It is observed that 2D plane strain modelling gives an underpredicted flow curve for DP steels, while the 3D modelling gives a quantitatively reasonable description of flow curve in comparison to the experimental data. In this work, a von Mises stress correlation factor σ_3D/σ_2D has been identified to compare the predicted flow curves of these two dimensionalities, showing a third order polynomial relation with respect to martensite fraction and a second order polynomial relation with respect to equivalent plastic strain, respectively. The quantification of this polynomial correlation factor is performed based on laboratory-annealed DP600 chemistry with varying martensite content and it is validated for industrially produced DP qualities with various chemistry, strength level and martensite fraction.

  11. A cellular automaton based model simulating HVAC fluid and heat transport in a building. Modeling approach and comparison with experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Saiz, A. [Department of Applied Mathematics, Polytechnic University of Valencia, ETSGE School, Camino de Vera s/n, 46022 Valencia (Spain); Urchueguia, J.F. [Department of Applied Physics, Polytechnic University of Valencia, ETSII School, Camino de Vera s/n, 46022 Valencia (Spain); Martos, J. [Superior Technical School of Engineering, Department of Electronic Engineering, University of Valencia, Vicente Andres Estelles s/n, Burjassot 46100, Valencia (Spain)

    2010-09-15

    A discrete model characterizing heat and fluid flow in connection with thermal fluxes in a building is described and tested against experiment in this contribution. The model, based on a cellular automaton approach, relies on a set of a few quite simple rules and parameters in order to simulate the dynamic evolution of temperatures and energy flows in any water- or brine-based thermal energy distribution network in a building or system. Using an easy-to-record input, such as the instantaneous electrical power demand of the heating or cooling system, our model predicts time-varying temperatures at characteristic spots and the related enthalpy flows, whose simulation usually requires heavy computational tools and detailed knowledge of the network elements. As a particular example, we have applied our model to simulate an existing fan-coil based hydronic heating system driven by a geothermal heat pump. The outcome of the model coincides with the experimental temperature and thermal energy records. (author)
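
    The paper's actual update rules are not reproduced in the record; the toy Python cellular automaton below only conveys the flavor of such rules for a single pipe: advection by cell shifts plus a local heat-exchange rule. All parameters and names are invented.

        import numpy as np

        def ca_pipe_step(temps, inlet_temp, room_temp, flow_shift=1, loss=0.05):
            """One update of a toy 1-D cellular automaton for a hydronic loop:
            fluid cells advect by `flow_shift` cells per step and each cell
            relaxes toward room temperature by a fixed loss fraction."""
            t = np.roll(temps, flow_shift)      # advection along the pipe
            t[:flow_shift] = inlet_temp         # fresh fluid from the heat source
            return t + loss * (room_temp - t)   # local heat-exchange rule

        temps = np.full(50, 20.0)               # pipe discretized into 50 cells
        for _ in range(200):
            temps = ca_pipe_step(temps, inlet_temp=45.0, room_temp=20.0)
        print(temps[[0, 24, 49]])               # temperatures at characteristic spots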

  12. Multi-objective Modeling and Assessment of Partition Properties: A GA-Based Quantitative Structure-Property Relationship Approach

    Institute of Scientific and Technical Information of China (English)

    印春生; 刘新会; 郭卫民; 刘树深; 韩朔暌; 王连生

    2003-01-01

    In this work a multi-objective quantitative structure-property relationship (QSPR) analysis approach is reported, based on the study of three partition properties of 50 aromatic sulfur-containing carboxylates. Here multiple objectives (properties) were taken as a vector for QSPR modeling. The quantitative correlations for partition properties were developed using a genetic algorithm (GA)-based variable-selection approach with quantum descriptors derived from AM1-based calculations. With the QSPR models, the aqueous solubility, octanol/water partition coefficients and reversed-phase HPLC capacity factors of the sulfur-containing compounds were estimated and predicted. Using GA-based multivariate linear regression with a cross-validation procedure, a set of the most promising descriptors was selected from a pool of 28 quantum chemical semi-empirical descriptors, including steric and electronic types, to build integrated QSPR models. The selected molecular descriptors included the net charge on the carboxyl group (Qoc), the 2nd power of the net charges on nitrogen atoms (QN2), the net atomic charge on the sulfur atoms (Qs), the van der Waals volume of the molecule (V), the most positive net atomic charge on hydrogen atoms (QH) and the measure of polarity and polarizability (π), which were the main factors affecting the distribution processes of the compounds under study. The statistically best six-descriptor QSPR models were simultaneously obtained by GA-based linear regression analysis. With the selected descriptors and the QSPR equations, the mechanisms of partition of the sulfur-containing carboxylates could be investigated and interpreted.

  13. A neural learning approach for adaptive image restoration using a fuzzy model-based network architecture.

    Science.gov (United States)

    Wong, H S; Guan, L

    2001-01-01

    We address the problem of adaptive regularization in image restoration by adopting a neural-network learning approach. Instead of explicitly specifying the local regularization parameter values, they are regarded as network weights which are then modified through the supply of appropriate training examples. The desired response of the network is in the form of a gray level value estimate of the current pixel using weighted order statistic (WOS) filter. However, instead of replacing the previous value with this estimate, this is used to modify the network weights, or equivalently, the regularization parameters such that the restored gray level value produced by the network is closer to this desired response. In this way, the single WOS estimation scheme can allow appropriate parameter values to emerge under different noise conditions, rather than requiring their explicit selection in each occasion. In addition, we also consider the separate regularization of edges and textures due to their different noise masking capabilities. This in turn requires discriminating between these two feature types. Due to the inability of conventional local variance measures to distinguish these two high variance features, we propose the new edge-texture characterization (ETC) measure which performs this discrimination based on a scalar value only. This is then incorporated into a fuzzified form of the previous neural network which determines the degree of membership of each high variance pixel in two fuzzy sets, the EDGE and TEXTURE fuzzy sets, from the local ETC value, and then evaluates the appropriate regularization parameter by appropriately combining these two membership function values.

  14. Nonlinear Steady-State Model Based Gas Turbine Health Status Estimation Approach with Improved Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Yulong Ying

    2015-01-01

    Full Text Available In the lifespan of a gas turbine engine, abrupt faults and performance degradation of its gas-path components may happen; however, the performance degradation is not easily foreseeable when the level of degradation is small. The gas path analysis (GPA) method has been widely applied to monitor gas turbine engine health status, as it can easily obtain the magnitudes of the detected component faults. However, when the number of components within the engine is large and/or the measurement noise level is high, the smearing effect may be strong and the degraded components may not be recognized. In order to improve the diagnostic effect, a nonlinear steady-state model based gas turbine health status estimation approach with an improved particle swarm optimization algorithm (PSO-GPA) is proposed in this study. The proposed approach has been tested in ten test cases where the degradation of a model three-shaft marine engine has been analyzed. These case studies have shown that the approach can accurately search for and isolate the degraded components and further quantify the degradation of major gas-path components. Compared with the typical GPA method, the approach has shown better measurement noise immunity and diagnostic accuracy.
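
    A minimal particle swarm optimizer in Python follows as background; it is the generic core of the PSO idea, since neither the paper's "improved" variant nor the engine model is given in the record. The objective standing in for the measurement-mismatch function is invented.

        import numpy as np

        def pso_minimize(f, dim, bounds, n_particles=30, iters=100,
                         w=0.7, c1=1.5, c2=1.5, rng=None):
            """Minimal particle swarm optimization of f over a box domain."""
            rng = rng or np.random.default_rng(0)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
            g = pbest[np.argmin(pbest_val)].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                val = np.array([f(p) for p in x])
                better = val < pbest_val
                pbest[better], pbest_val[better] = x[better], val[better]
                g = pbest[np.argmin(pbest_val)].copy()
            return g, pbest_val.min()

        # Stand-in objective: mismatch between measured and predicted gas-path data
        print(pso_minimize(lambda p: np.sum((p - 0.97) ** 2),
                           dim=3, bounds=(0.8, 1.1)))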

  15. Fractional Partial Differential Equation: Fractional Total Variation and Fractional Steepest Descent Approach-Based Multiscale Denoising Model for Texture Image

    Directory of Open Access Journals (Sweden)

    Yi-Fei Pu

    2013-01-01

    Full Text Available Traditional integer-order partial differential equation-based image denoising approaches often blur edges and complex texture detail; thus, their denoising effects for texture images are not very good. To solve this problem, a fractional partial differential equation-based denoising model for texture images is proposed, which applies a novel mathematical method, fractional calculus, to image processing from the view of system evolution. We know from previous studies that fractional-order calculus has some unique properties compared to integer-order calculus: it can nonlinearly enhance complex texture detail during digital image processing. The goal of the proposed model is to overcome the problems mentioned above by using these properties of fractional differential calculus. The model extends the traditional integer-order equation to a fractional order, proposing the fractional Green's formula and the fractional Euler-Lagrange formula for two-dimensional image processing, and on this basis a fractional partial differential equation-based denoising model is proposed. The experimental results prove that the ability of the proposed denoising model to preserve high-frequency edge and complex texture information is obviously superior to that of traditional integer-order algorithms, especially for images rich in texture detail.
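
    Fractional-order differentiation can be made concrete with the Grünwald-Letnikov definition. The 1-D Python sketch below is a generic illustration of that definition, not the paper's 2-D denoising scheme; the test signal is invented.

        import numpy as np

        def gl_fractional_diff(signal, alpha, h=1.0, n_terms=None):
            """Grunwald-Letnikov fractional derivative of order alpha:
                D^a f(x) ~ h^(-a) * sum_k c_k f(x - k*h),
            with c_0 = 1 and c_k = c_{k-1} * (k - 1 - alpha) / k.
            """
            f = np.asarray(signal, float)
            n_terms = n_terms or len(f)
            c = np.empty(n_terms)
            c[0] = 1.0
            for k in range(1, n_terms):
                c[k] = c[k - 1] * (k - 1 - alpha) / k
            out = np.zeros_like(f)
            for i in range(len(f)):
                k = np.arange(min(i + 1, n_terms))
                out[i] = (c[k] * f[i - k]).sum() / h ** alpha
            return out

        x = np.linspace(0, 4, 200)
        print(gl_fractional_diff(np.sin(x), alpha=0.5, h=x[1] - x[0])[-5:])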

  16. Multi-agent-based modeling for extracting relevant association rules using a multi-criteria analysis approach

    Directory of Open Access Journals (Sweden)

    Addi Ait-Mlouk

    2016-06-01

    Full Text Available Association rule mining plays a vital role in knowledge discovery in databases. In fact, in most cases, real datasets lead to a very large number of rules, which do not allow users to make their own selection of the most relevant ones. The difficult task is mining useful and non-redundant rules. Several approaches have been proposed, such as rule clustering, the informative cover method and quality measurements. As another way of selecting relevant association rules, we believe that it is necessary to integrate a decisional approach within the knowledge discovery process. Therefore, in this paper, we propose an approach to discover a category of relevant association rules based on multi-criteria analysis. In addition, since the general process of association rule extraction becomes more and more complex, we also propose a multi-agent system for modeling the different processes of our approach. We conclude our work with an empirical study applied to a set of banking data to illustrate the performance of our approach.

  17. A dynamic programming approach for quickly estimating large network-based MEV models

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2017-01-01

    by a rooted, directed graph where each node without successor is an alternative. We formulate a family of MEV models as dynamic discrete choice models on graphs of correlation structures and show that the dynamic models are consistent with MEV theory and generalize the network MEV model (Daly and Bierlaire...

  18. A geomorphology-based approach for digital elevation model fusion - case study in Danang city, Vietnam

    Science.gov (United States)

    Tran, T. A.; Raghavan, V.; Masumoto, S.; Vinayaraj, P.; Yonezawa, G.

    2014-07-01

    Global digital elevation models (DEMs) are considered a source of vital spatial information and find wide use in several applications. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM (GDEM) and the Shuttle Radar Topographic Mission (SRTM) DEM offer almost global coverage and provide elevation data for geospatial analysis. However, GDEM and SRTM still contain height errors that significantly affect the quality of the elevation data. This study aims to examine methods to improve the resolution as well as the accuracy of freely available DEMs by data fusion techniques, evaluating the results against a high-quality reference DEM. The DEM fusion method is based on the accuracy assessment of each global DEM and on the geomorphological characteristics of the study area. Land cover units were also considered to correct the elevation of GDEM and SRTM with respect to the bare-earth surface. The weighted averaging method was used to fuse the input DEMs based on a landform classification map; according to the landform type, different weights were used for GDEM and SRTM. Finally, a denoising algorithm (Sun et al., 2007) was applied to filter the fused DEM. The fused DEM shows excellent correlation to the reference DEM, with a correlation coefficient R² = 0.9986, and the accuracy was improved from a root mean square error (RMSE) of 14.9 m in GDEM and 14.8 m in SRTM to 11.6 m in the fused DEM. Terrain-related parameters extracted from the fused DEM, such as slope, curvature, terrain roughness index and the normal vector of the topographic surface, are also very comparable to the reference data.
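
    A short Python sketch of per-landform weighted averaging of two DEM grids follows; the class labels and weights are invented, since the record does not tabulate the values used in the study.

        import numpy as np

        def fuse_dems(gdem, srtm, landform, weights):
            """Per-landform weighted average of two DEMs.

            gdem, srtm: 2-D elevation arrays on the same grid.
            landform:   2-D integer array of landform-class labels.
            weights:    {class: weight for GDEM}; SRTM gets 1 - weight.
            """
            w = np.zeros_like(gdem, dtype=float)
            for cls, wg in weights.items():
                w[landform == cls] = wg
            return w * gdem + (1.0 - w) * srtm

        # Illustrative weights: trust SRTM more on steep slopes (class 1)
        gdem = np.array([[100.0, 102.0], [98.0, 97.0]])
        srtm = np.array([[101.0, 103.0], [99.0, 95.0]])
        landform = np.array([[0, 0], [1, 1]])
        print(fuse_dems(gdem, srtm, landform, {0: 0.5, 1: 0.3}))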

  19. Model estimation of cerebral hemodynamics between blood flow and volume changes: a data-based modeling approach.

    Science.gov (United States)

    Wei, Hua-Liang; Zheng, Ying; Pan, Yi; Coca, Daniel; Li, Liang-Min; Mayhew, J E W; Billings, Stephen A

    2009-06-01

    It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where the blood oxygen-level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical and data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve such an error-in-the-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method can lead to a parsimonious but very effective model that can characterize the relationship between the changes in CBF and CBV.
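
    As a simplified stand-in for the paper's estimator (which handles noise in both variables via regularized total least squares), this sketch fits a first-order autoregressive-with-exogenous-input (ARX) model to synthetic data by ridge-regularized ordinary least squares:

        # Fit y(t) = a*y(t-1) + b*u(t) + e(t) by regularized least squares.
        # Synthetic data; the paper's RTLS estimator is more involved.
        import numpy as np

        rng = np.random.default_rng(0)
        u = rng.standard_normal(200)              # input, e.g. CBF changes
        y = np.zeros(200)
        for t in range(1, 200):                   # simulate a true ARX system
            y[t] = 0.8 * y[t - 1] + 0.5 * u[t] + 0.05 * rng.standard_normal()

        X = np.column_stack([y[:-1], u[1:]])      # regressors [y(t-1), u(t)]
        target = y[1:]
        lam = 0.1                                 # regularization strength
        theta = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ target)
        print("estimated a, b:", theta)           # close to (0.8, 0.5)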

  20. A Knowledge Based Approach for Automated Modelling of Extended Wing Structures in Preliminary Aircraft Design

    OpenAIRE

    Dorbath, Felix; Nagel, Björn; Gollnick, Volker

    2011-01-01

    This paper introduces the concept of the ELWIS model generator for Finite Element models of aircraft wing structures. The physical modelling of the structure is extended beyond the wing primary structures, to increase the level of accuracy for aircraft which diverge from existing configurations. Also the impact of novel high lift technologies on structural masses can be captured already in the early stages of design by using the ELWIS models. The ELWIS model generator is able to c...

  1. DEVELOPMENT OF A GIS DATA MODEL WITH SPATIAL, TEMPORAL AND ATTRIBUTE COMPONENTS BASED ON OBJECT-ORIENTED APPROACH

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper presents a conceptual data model, the STA-model, for handling spatial, temporal and attribute aspects of objects in GIS. The model is developed on the basis of an object-oriented modeling approach. It includes two major parts: (a) modeling single objects as STA-object elements, and (b) modeling relationships between STA-objects. As an example, the STA-model is applied to modeling land cover change data with spatial, temporal and attribute components.
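
    A minimal sketch of what an STA-object might look like in code; the field choices are illustrative assumptions:

        # An object bundling spatial, temporal and attribute components,
        # in the spirit of the STA-model described above.
        from dataclasses import dataclass, field
        from datetime import date

        @dataclass
        class STAObject:
            geometry: list            # spatial component, e.g. polygon vertices
            valid_from: date          # temporal component
            valid_to: date
            attributes: dict = field(default_factory=dict)

        parcel = STAObject(
            geometry=[(0, 0), (0, 1), (1, 1), (1, 0)],
            valid_from=date(1990, 1, 1),
            valid_to=date(2000, 1, 1),
            attributes={"land_cover": "forest"},
        )
        print(parcel.attributes["land_cover"])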

  2. 3D modeling of forces between magnet and HTS in a levitation system using new approach of the control volume method based on an unstructured grid

    Energy Technology Data Exchange (ETDEWEB)

    Alloui, L., E-mail: lotfi.alloui@lgep.supelec.fr [Laboratoire de Génie Électrique de Paris - LGEP, CNRS UMR 8507, Supélec, Université Pierre et Marie Curie-Paris 6, Université Paris Sud-Paris 11, Plateau de Moulon, 11 rue Joliot Curie, 91192 Gif-sur-Yvette Cedex (France); Laboratoire de modélisation des systèmes énergétiques (LMSE), Université de Biskra, 07000 Biskra (Algeria); Bouillault, F., E-mail: bouillault@lgep.supelec.fr [Laboratoire de Génie Électrique de Paris - LGEP, CNRS UMR 8507, Supélec, Université Pierre et Marie Curie-Paris 6, Université Paris Sud-Paris 11, Plateau de Moulon, 11 rue Joliot Curie, 91192 Gif-sur-Yvette Cedex (France); Bernard, L., E-mail: laurent.bernardl@lgep.supelc.fr [Laboratoire de Génie Électrique de Paris - LGEP, CNRS UMR 8507, Supélec, Université Pierre et Marie Curie-Paris 6, Université Paris Sud-Paris 11, Plateau de Moulon, 11 rue Joliot Curie, 91192 Gif-sur-Yvette Cedex (France); Leveque, J., E-mail: jean.leveque@green.uhp-nancy.fr [Groupe de Recherche en Électronique et Électrotechnique de Nancy, Université Henri Poincaré, BP 239, 54506 Vandoeuvre-lès-Nancy (France)

    2012-05-15

    In this paper we present a new 3D numerical model to calculate the vertical and the guidance forces in high temperature superconductors, taking into account the influence of flux creep phenomena. In the suggested numerical model, we adopt a new approach to the control volume method. This approach is based on the use of an unstructured grid, which can be used to model more complex geometries. A comparison of the control volume method results with experiments verifies the validity of this approach and of the proposed numerical model. Based on this model, the relaxation of the levitation force at different temperatures was also studied.

  3. The influence of mapped hazards on risk beliefs: a proximity-based modeling approach.

    Science.gov (United States)

    Severtson, Dolores J; Burt, James E

    2012-02-01

    Interview findings suggest perceived proximity to mapped hazards influences risk beliefs when people view environmental hazard maps. For dot maps, four attributes of mapped hazards influenced beliefs: hazard value, proximity, prevalence, and dot patterns. In order to quantify the collective influence of these attributes for viewers' perceived or actual map locations, we present a model to estimate proximity-based hazard or risk (PBH) and share study results that indicate how modeled PBH and map attributes influenced risk beliefs. The randomized survey study among 447 university students assessed risk beliefs for 24 dot maps that systematically varied by the four attributes. Maps depicted water test results for a fictitious hazardous substance in private residential wells and included a designated "you live here" location. Of the nine variables that assessed risk beliefs, the numerical susceptibility variable was most consistently and strongly related to map attributes and PBH. Hazard value, location in or out of a clustered dot pattern, and distance had the largest effects on susceptibility. Sometimes, hazard value interacted with other attributes, for example, distance had stronger effects on susceptibility for larger than smaller hazard values. For all combined maps, PBH explained about the same amount of variance in susceptibility as did attributes. Modeled PBH may have utility for studying the influence of proximity to mapped hazards on risk beliefs, protective behavior, and other dependent variables. Further work is needed to examine these influences for more realistic maps and representative study samples.
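
    The article's exact PBH formula is not reproduced here; the sketch below shows one plausible distance-decay aggregation over mapped hazard dots, with invented values:

        # Illustrative only: sum hazard values weighted by an
        # inverse-exponential function of distance from the viewer.
        import math

        dots = [  # (hazard value, distance in km from the viewer's location)
            (8.0, 0.5), (3.0, 1.2), (5.0, 4.0),
        ]

        def pbh(dots, decay=1.0):
            """Aggregate proximity-based hazard over all mapped dots."""
            return sum(v * math.exp(-decay * d) for v, d in dots)

        print(round(pbh(dots), 3))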

  4. CHIRP-Like Signals: Estimation, Detection and Processing A Sequential Model-Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-04

    Chirp signals have evolved primarily from radar/sonar signal processing applications, specifically attempting to estimate the location of a target in a surveillance/tracking volume. The chirp, which is essentially a sinusoidal signal whose phase changes instantaneously at each time sample, has an interesting property in that its correlation approximates an impulse function. It is well known that a matched-filter detector in radar/sonar estimates the target range by cross-correlating a replica of the transmitted chirp with the measurement data reflected from the target back to the radar/sonar receiver, yielding a maximum peak corresponding to the echo time and therefore enabling the desired range estimate. In this application, we perform the same operation as a radar or sonar system; that is, we transmit a “chirp-like pulse” into the target medium and attempt first to detect its presence and second to estimate its location or range. Our problem is complicated by the presence of disturbance signals from surrounding broadcast stations, as well as extraneous sources of interference in our frequency bands and, of course, the ever-present random noise from instrumentation. First, we discuss the chirp signal itself and illustrate its inherent properties, and then develop a model-based processing scheme enabling both the detection and estimation of the signal from noisy measurement data.
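
    A minimal synthetic-data sketch of the matched-filter idea described above: cross-correlate a noisy measurement with a replica of the transmitted chirp and read the echo delay off the correlation peak.

        # Matched-filter delay estimation for a linear chirp buried in noise.
        import numpy as np

        fs = 1000.0                                   # sample rate (Hz)
        t = np.arange(0, 0.5, 1 / fs)
        f0, f1 = 10.0, 100.0                          # chirp start/end frequency
        k = (f1 - f0) / t[-1]                         # sweep rate
        chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

        delay = 300                                   # true echo delay (samples)
        rng = np.random.default_rng(1)
        meas = rng.standard_normal(2000)              # noise background
        meas[delay:delay + chirp.size] += 0.5 * chirp # buried echo

        corr = np.correlate(meas, chirp, mode="valid")
        print("estimated delay:", int(np.argmax(np.abs(corr))))  # ~300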

  5. Artificial neural network based modelling approach for municipal solid waste gasification in a fluidized bed reactor.

    Science.gov (United States)

    Pandey, Daya Shankar; Das, Saptarshi; Pan, Indranil; Leahy, James J; Kwapinski, Witold

    2016-12-01

    In this paper, multi-layer feed forward neural networks are used to predict the lower heating value of gas (LHV), the lower heating value of gasification products including tars and entrained char (LHVp) and the syngas yield during gasification of municipal solid waste (MSW) in a fluidized bed reactor. These artificial neural networks (ANNs) with different architectures are trained using the Levenberg-Marquardt (LM) back-propagation algorithm, and a cross validation is also performed to ensure that the results generalise to other unseen datasets. A rigorous study is carried out on optimally choosing the number of hidden layers, the number of neurons in the hidden layer and the activation function in a network using multiple Monte Carlo runs. Nine input and three output parameters are used to train and test various neural network architectures in both multiple output and single output prediction paradigms using the available experimental datasets. The model selection procedure is carried out to ascertain the best network architecture in terms of predictive accuracy. The simulation results show that the ANN based methodology is a viable alternative which can be used to predict the performance of a fluidized bed gasifier. Copyright © 2016 Elsevier Ltd. All rights reserved.
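
    A minimal sketch of the regression task on synthetic data; it uses scikit-learn's MLPRegressor with its default solver rather than the Levenberg-Marquardt training used in the paper:

        # Map nine gasifier inputs to one predicted output with a small MLP.
        # Data are synthetic; architecture and seed are arbitrary choices.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(300, 9))                 # nine input parameters
        y = X @ rng.uniform(size=9) + 0.05 * rng.standard_normal(300)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        net = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000,
                           random_state=0)
        net.fit(X_tr, y_tr)
        print("test R^2:", round(net.score(X_te, y_te), 3))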

  6. Estimating present climate in a warming world: a model-based approach

    Energy Technology Data Exchange (ETDEWEB)

    Räisänen, J.; Ruokolainen, L. [University of Helsinki (Finland). Division of Atmospheric Sciences and Geophysics

    2008-09-30

    Weather services base their operational definitions of 'present' climate on past observations, using a 30-year normal period such as 1961-1990 or 1971-2000. In a world with ongoing global warming, however, past data give a biased estimate of the actual present-day climate. Here we propose to correct this bias with a 'delta change' method, in which model-simulated climate changes and observed global mean temperature changes are used to extrapolate past observations forward in time, to make them representative of present or future climate conditions. In a hindcast test for the years 1991-2002, the method works well for temperature, with a clear improvement in verification statistics compared to the case in which the hindcast is formed directly from the observations for 1961-1990. However, no improvement is found for precipitation, for which the signal-to-noise ratio between expected anthropogenic changes and interannual variability is much lower than for temperature. An application of the method to the present (around the year 2007) climate suggests that, as a geographical average over land areas excluding Antarctica, 8-9 months per year and 8-9 years per decade can be expected to be warmer than the median for 1971-2000. Along with the overall warming, a substantial increase in the frequency of warm extremes at the expense of cold extremes of monthly-to-annual temperature is expected.
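
    The 'delta change' correction itself is a one-liner; the numbers below are invented for illustration:

        # Shift past observations forward using a model-simulated local
        # change per degree of observed global-mean warming.
        import numpy as np

        obs_1971_2000 = np.array([14.2, 14.5, 13.9])  # past local temps (deg C)
        local_change_per_degree = 1.3                 # model-simulated scaling
        observed_global_warming = 0.4                 # since base period (deg C)

        present_estimate = (obs_1971_2000
                            + local_change_per_degree * observed_global_warming)
        print(present_estimate)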

  7. An individual-based modeling approach to spawning-potential per-recruit models: An application to blue crab (Callinectes sapidus) in Chesapeake Bay

    Science.gov (United States)

    Bunnell, D.B.; Miller, T.J.

    2005-01-01

    An individual-based modeling approach to estimate biological reference points for blue crabs (Callinectes sapidus) in Chesapeake Bay offered several advantages over conventional models: (i) known individual variation in size and growth rate could be incorporated, (ii) the underlying discontinuous growth pattern could be simulated, and (iii) the complexity of the fishery, where vulnerability is based on size, shell status (e.g., soft, hard), maturity, and sex, could be accommodated. Across a range of natural mortality (M) scenarios (0.375-1.2 year-1), we determined the exploitation fraction (μ) and fishing mortality (F) that protected 20% of the spawning potential of an unfished population, the current target. As M increased, μ20% and F20% decreased. Assuming that M = 0.9 year-1, our models estimated μ20% = 0.45, which is greater than field-based estimates of μ in 64% of the years since 1990. Hence, the commercial fishery has likely contributed to the recent population decline in Chesapeake Bay. Comparisons of our results with conventional per-recruit approaches indicated that incorporating the complexity of the fishery was the most important advantage in our individual-based modeling approach. © 2005 NRC.
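
    A toy individual-based spawning-per-recruit calculation in the same spirit (all parameters invented, not the blue crab model):

        # Individuals vary in growth, become vulnerable to fishing above a
        # size limit, and accrue spawning output once mature.
        import math, random

        def spawning_per_recruit(F, M=0.9, n=5000, years=8, seed=42):
            rng = random.Random(seed)
            spawn = 0.0
            for _ in range(n):
                growth = max(rng.gauss(20.0, 5.0), 5.0)    # mm/year, individual
                size, survival = 30.0, 1.0
                for _ in range(years):
                    size += growth
                    z = M + (F if size >= 120.0 else 0.0)  # fished above limit
                    survival *= math.exp(-z)
                    if size >= 100.0:                      # mature: accrue eggs
                        spawn += survival * size**2        # fecundity ~ size^2
            return spawn / n

        unfished = spawning_per_recruit(0.0)
        for F in (0.2, 0.4, 0.8):
            print(F, round(spawning_per_recruit(F) / unfished, 2))  # SPR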

  8. A novel approach to equipment health management based on auto-regressive hidden semi-Markov model (AR-HSMM)

    Institute of Scientific and Technical Information of China (English)

    DONG Ming

    2008-01-01

    As a new maintenance method, CBM (condition based maintenance) is becoming more and more important for the health management of complicated and costly equipment. A prerequisite to widespread deployment of CBM technology and practice in industry is effective diagnostics and prognostics. Recently, a pattern recognition technique called HMM (hidden Markov model) was widely used in many fields. However, due to some unrealistic assumptions, diagnostic results from HMM were not so good, and it was difficult to use HMM directly for prognosis. By relaxing the unrealistic assumptions in HMM, this paper presents a novel approach to equipment health management based on an auto-regressive hidden semi-Markov model (AR-HSMM). Compared with HMM, AR-HSMM has three advantages: 1) It allows explicit modeling of the time duration of the hidden states and is therefore capable of prognosis. 2) It can relax the observations' independence assumption by accommodating a link between consecutive observations. 3) It does not follow the unrealistic memoryless assumption of Markov chains and therefore provides more powerful modeling and analysis capability for real problems. To facilitate the computation in the proposed AR-HSMM-based diagnostics and prognostics, new forward-backward variables are defined and a modified forward-backward algorithm is developed. The evaluation of the proposed methodology was carried out through a real-world application case study: health diagnosis and prognosis of hydraulic pumps at Caterpillar Inc. The testing results show that the proposed new approach based on AR-HSMM is effective and can provide useful support for decision-making in equipment health management.
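
    The explicit state-duration idea that distinguishes a semi-Markov model from a plain HMM can be sketched in a few lines; the states, transitions and dwell times below are illustrative, not the paper's AR-HSMM:

        # Sample a hidden semi-Markov state path with explicit durations --
        # the feature that makes duration-aware prognosis possible.
        import random

        transition = {"healthy": "degraded", "degraded": "failed"}
        duration_mean = {"healthy": 50, "degraded": 20}   # expected dwell times

        def sample_path(seed=0):
            rng = random.Random(seed)
            path, state = [], "healthy"
            while state != "failed":
                dwell = max(1, int(rng.expovariate(1 / duration_mean[state])))
                path.extend([state] * dwell)              # explicit duration
                state = transition[state]
            return path + ["failed"]

        p = sample_path()
        print(f"simulated {len(p)} time steps before failure")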

  9. Modelling Temporal Schedule of Urban Trains Using Agent-Based Simulation and NSGA2-Based Multiobjective Optimization Approaches

    Science.gov (United States)

    Sahelgozin, M.; Alimohammadi, A.

    2015-12-01

    Increasing distances between locations of residence and services lead to a large number of daily commutes in urban areas. Developing subway systems has been considered by transportation managers as a response to this huge travel demand. In developing subway infrastructure, specifying a temporal schedule for trains is an important task, because an appropriately designed timetable decreases total passenger travel time, total operation cost and the energy consumption of trains. Since these variables are not positively correlated, subway scheduling is a multi-criteria optimization problem, and proposing a proper solution for it has always been a challenging issue. Furthermore, research on any phenomenon requires a summarized representation of the real world, known as a model. In this study, we attempt to model the temporal schedule of urban trains for use in Multi-Criteria Subway Schedule Optimization (MCSSO) problems. First, a conceptual framework for MCSSO is presented. Then, an agent-based simulation environment is implemented to perform a sensitivity analysis (SA), which is used to extract the interrelations between the framework components. These interrelations are then taken into account in constructing the proposed model. To evaluate the performance of the model on MCSSO problems, Tehran subway line no. 1 is considered as the case study. Results show that the model was able to generate an acceptable distribution of Pareto-optimal solutions that are applicable in real situations where solving an MCSSO problem is the goal. The model also represented the operation of the subway system with significant accuracy.
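
    A fragment of the multi-objective machinery: extracting the Pareto (non-dominated) front over the three objectives named above from a handful of toy candidate timetables. A full NSGA-II run would evolve candidates toward this front.

        # Non-dominated filtering over (travel time, cost, energy), all
        # minimized; the candidate timetables and values are invented.
        def dominates(a, b):
            """True if a is no worse in every objective and better in one."""
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        candidates = {  # timetable id -> (travel time, cost, energy)
            "T1": (120, 55, 300), "T2": (110, 60, 320),
            "T3": (130, 50, 280), "T4": (125, 58, 310),
        }
        front = [k for k, v in candidates.items()
                 if not any(dominates(w, v)
                            for w in candidates.values() if w != v)]
        print("Pareto-optimal timetables:", front)   # T1, T2, T3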

  10. MODELLING TEMPORAL SCHEDULE OF URBAN TRAINS USING AGENT-BASED SIMULATION AND NSGA2-BASED MULTIOBJECTIVE OPTIMIZATION APPROACHES

    Directory of Open Access Journals (Sweden)

    M. Sahelgozin

    2015-12-01

    Full Text Available Increasing distances between locations of residence and services lead to a large number of daily commutes in urban areas. Developing subway systems has been considered by transportation managers as a response to this huge travel demand. In developing subway infrastructure, specifying a temporal schedule for trains is an important task, because an appropriately designed timetable decreases total passenger travel time, total operation cost and the energy consumption of trains. Since these variables are not positively correlated, subway scheduling is a multi-criteria optimization problem, and proposing a proper solution for it has always been a challenging issue. Furthermore, research on any phenomenon requires a summarized representation of the real world, known as a model. In this study, we attempt to model the temporal schedule of urban trains for use in Multi-Criteria Subway Schedule Optimization (MCSSO) problems. First, a conceptual framework for MCSSO is presented. Then, an agent-based simulation environment is implemented to perform a sensitivity analysis (SA), which is used to extract the interrelations between the framework components. These interrelations are then taken into account in constructing the proposed model. To evaluate the performance of the model on MCSSO problems, Tehran subway line no. 1 is considered as the case study. Results show that the model was able to generate an acceptable distribution of Pareto-optimal solutions that are applicable in real situations where solving an MCSSO problem is the goal. The model also represented the operation of the subway system with significant accuracy.

  11. Dynamic modeling of wave driven unmanned surface vehicle in longitudinal profile based on D-H approach

    Institute of Scientific and Technical Information of China (English)

    田宝强; 俞建成; 张艾群

    2015-01-01

    Wave driven unmanned surface vehicle (WUSV) is a new-concept ocean robot driven by wave energy and solar energy, and it is very suitable for vast ocean observations thanks to its incomparable endurance. Its dynamic modeling is very important because it is the theoretical foundation for further study of WUSV motion control and efficiency analysis. In this work, the multibody system of the WUSV was described based on the D-H approach. Then, the driving principle was analyzed and the dynamic model of the WUSV in the longitudinal profile was established using Lagrangian mechanics. Finally, motion simulations of the WUSV and a comparative analysis were completed by setting different sea-state inputs. Simulation results show that the WUSV dynamic model correctly reflects the WUSV longitudinal motion process, and the results are consistent with wave theory.
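
    The building block of a D-H multibody description is the homogeneous transform between consecutive links; a standard implementation with illustrative parameters:

        # Standard Denavit-Hartenberg link transform; chaining transforms
        # gives the pose of each body in the multibody system.
        import numpy as np

        def dh_transform(theta, d, a, alpha):
            """4x4 transform for DH parameters (theta, d, a, alpha)."""
            ct, st = np.cos(theta), np.sin(theta)
            ca, sa = np.cos(alpha), np.sin(alpha)
            return np.array([
                [ct, -st * ca,  st * sa, a * ct],
                [st,  ct * ca, -ct * sa, a * st],
                [0.0,      sa,       ca,      d],
                [0.0,     0.0,      0.0,    1.0],
            ])

        # Two links in series: the overall transform is the matrix product.
        T = dh_transform(np.pi / 6, 0.0, 0.4, 0.0) @ \
            dh_transform(-np.pi / 8, 0.0, 0.3, 0.0)
        print(np.round(T, 3))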

  12. A novel approach to model exposure of coastal-marine ecosystems to riverine flood plumes based on remote sensing techniques.

    Science.gov (United States)

    Álvarez-Romero, Jorge G; Devlin, Michelle; Teixeira da Silva, Eduardo; Petus, Caroline; Ban, Natalie C; Pressey, Robert L; Kool, Johnathan; Roberts, Jason J; Cerdeira-Estrada, Sergio; Wenger, Amelia S; Brodie, Jon

    2013-04-15

    Increased loads of land-based pollutants are a major threat to coastal-marine ecosystems. Identifying the affected marine areas and the scale of influence on ecosystems is critical to assess the impacts of degraded water quality and to inform planning for catchment management and marine conservation. Studies using remotely-sensed data have contributed to our understanding of the occurrence and influence of river plumes, and to our ability to assess exposure of marine ecosystems to land-based pollutants. However, refinement of plume modeling techniques is required to improve risk assessments. We developed a novel, complementary, approach to model exposure of coastal-marine ecosystems to land-based pollutants. We used supervised classification of MODIS-Aqua true-color satellite imagery to map the extent of plumes and to qualitatively assess the dispersal of pollutants in plumes. We used the Great Barrier Reef (GBR), the world's largest coral reef system, to test our approach. We combined frequency of plume occurrence with spatially distributed loads (based on a cost-distance function) to create maps of exposure to suspended sediment and dissolved inorganic nitrogen. We then compared annual exposure maps (2007-2011) to assess inter-annual variability in the exposure of coral reefs and seagrass beds to these pollutants. We found this method useful to map plumes and qualitatively assess exposure to land-based pollutants. We observed inter-annual variation in exposure of ecosystems to pollutants in the GBR, stressing the need to incorporate a temporal component into plume exposure/risk models. Our study contributes to our understanding of plume spatial-temporal dynamics of the GBR and offers a method that can also be applied to monitor exposure of coastal-marine ecosystems to plumes and explore their ecological influences. Copyright © 2013 Elsevier Ltd. All rights reserved.
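
    A minimal sketch of the frequency-times-discounted-load combination on tiny synthetic rasters; the decay constant and all values are assumptions:

        # Score exposure as plume-occurrence frequency times a
        # distance-discounted pollutant load.
        import numpy as np

        frequency = np.array([[0.9, 0.4], [0.6, 0.1]])   # plume occurrence (0-1)
        dist_km = np.array([[5.0, 20.0], [10.0, 40.0]])  # distance to river mouth
        load = 100.0                                     # annual load, arbitrary

        exposure = frequency * load * np.exp(-dist_km / 15.0)
        print(np.round(exposure, 1))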

  13. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    Science.gov (United States)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

    Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists owing to their potential for accurate prediction of flood flows compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that ANN point predictions lack reliability since the uncertainty of the predictions is not quantified, which limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques in a neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very few studies have considered assessment of the predictive uncertainty of ANN based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived...
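
    The two interval diagnostics quoted above (coverage and average width) are easy to compute from an ensemble; the data here are synthetic:

        # Coverage of measured points by the ensemble bounds, plus the
        # average interval width.
        import numpy as np

        rng = np.random.default_rng(3)
        measured = rng.uniform(50, 150, size=100)                # observed flows
        ensemble = measured + rng.normal(0, 12, size=(20, 100))  # 20 member runs

        lower, upper = ensemble.min(axis=0), ensemble.max(axis=0)
        coverage = np.mean((measured >= lower) & (measured <= upper)) * 100
        width = np.mean(upper - lower)
        print(f"coverage: {coverage:.1f}%  mean width: {width:.1f} m3/s")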

  14. Reducing software security risk through an integrated approach research initiative model based verification of the Secure Socket Layer (SSL) Protocol

    Science.gov (United States)

    Powell, John D.

    2003-01-01

    This document discusses the verification of the Secure Socket Layer (SSL) communication protocol as a demonstration of the Model Based Verification (MBV) portion of the verification instrument set being developed under the Reducing Software Security Risk Through an Integrated Approach research initiative. Code Q of the National Aeronautics and Space Administration (NASA) funds this project. The NASA Goddard Independent Verification and Validation (IV&V) facility manages this research program at the NASA agency level, and the Assurance Technology Program Office (ATPO) manages the research locally at the Jet Propulsion Laboratory (California Institute of Technology), where the research is being carried out.

  15. Assessing the Effectiveness of Payments for Ecosystem Services: an Agent-Based Modeling Approach

    Directory of Open Access Journals (Sweden)

    Xiaodong Chen

    2014-03-01

    Full Text Available Payments for ecosystem services (PES) have increasingly been implemented to protect and restore ecosystems worldwide. The effectiveness of conservation investments in PES may differ under alternative policy scenarios and may not be sustainable because of uncertainties in human responses to policies and dynamic human-nature interactions. To assess the impacts of these interactions on the effectiveness of PES programs, we developed a spatially explicit agent-based model: human and natural interactions under policies (HANIP). We used HANIP to study the effectiveness of China's Natural Forest Conservation Program (NFCP) and alternative policy scenarios in a coupled human-nature system, China's Wolong Nature Reserve, where indigenous people's use of fuelwood affects forests. We estimated the effects of the current NFCP, which provides a cash payment, and of an alternative scenario that provides an electricity payment, by comparing forest dynamics under these policies to forest dynamics under a scenario in which no payment is provided. In 2007, there were 337 km² of forests in the study area of 515 km². Under the baseline projection in which no payment is provided, the forest area is expected to be 234 km² in 2030. Under the current NFCP, there are likely to be 379 km² of forests in 2030, an increase of 145 km² over the baseline projection. If the cash payment is replaced with an electricity payment, there are likely to be 435 km² of forests in 2030, an increase of 201 km² over the baseline projection. However, the effectiveness of the NFCP may be threatened by the behavior of newly formed households if they are not included in the payment scheme. In addition, the effects of socio-demographic factors on forests will also differ under different policy scenarios. Human and natural interactions under policies (HANIP) and its modeling framework may also be used to assess the effectiveness of many other PES programs around the world.

  16. Selecting representative climate models for climate change impact studies : An advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.|info:eu-repo/dai/nl/290472113

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change...

  17. Selecting representative climate models for climate change impact studies: an advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; Maat, ter Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change impact...

  1. Sequential application of ligand and structure based modeling approaches to index chemicals for their hH4R antagonism.

    Directory of Open Access Journals (Sweden)

    Matteo Pappalardo

    Full Text Available The human histamine H4 receptor (hH4R), a member of the G-protein coupled receptors (GPCR) family, is an increasingly attractive drug target. It plays a key role in many cell pathways, and many hH4R ligands are studied for the treatment of several inflammatory, allergic and autoimmune disorders, as well as for analgesic activity. Due to the difficulties in the experimental elucidation of the hH4R structure, virtual screening campaigns are normally run on homology-based models. However, a wealth of information about the chemical properties of GPCR ligands has also accumulated over the last few years, and an appropriate combination of this ligand-based knowledge with structure-based molecular modeling studies emerges as a promising strategy for computer-assisted drug design. Here, two chemoinformatics techniques, the Intelligent Learning Engine (ILE) and the Iterative Stochastic Elimination (ISE) approach, were used to index chemicals for their hH4R bioactivity. An application of the prediction model on an external test set composed of more than 160 hH4R antagonists picked from the chEMBL database gave an enrichment factor of 16.4. A virtual high throughput screening on the ZINC database was carried out, picking ∼4000 chemicals highly indexed as hH4R antagonist candidates. Next, a series of 3D models of hH4R were generated by molecular modeling and mole...
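
    The enrichment factor quoted above has a simple definition; the counts below are invented to show the arithmetic:

        # EF = (actives found in the selected set / size of that set)
        #      / (actives in the whole screened set / size of the whole set).
        def enrichment_factor(hits_sel, n_sel, hits_total, n_total):
            return (hits_sel / n_sel) / (hits_total / n_total)

        print(round(enrichment_factor(33, 200, 160, 16000), 1))  # ~16.5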