WorldWideScience

Sample records for modeling approaches based

  1. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  2. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

Full Text Available The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. On this basis, a new EM-based approach to estimating model errors is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can realize the combination of statistics and dynamics to a certain extent.

  3. Wind Turbine Control: Robust Model Based Approach

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood

… This is because, on the one hand, control methods can decrease the cost of energy by keeping the turbine close to its maximum efficiency. On the other hand, they can reduce structural fatigue and therefore increase the lifetime of the wind turbine. The power produced by a wind turbine is proportional to the square of its rotor radius; therefore it seems reasonable to increase the size of the wind turbine in order to capture more power. However, as the size increases, the mass of the blades increases as the cube of the rotor size. This means that, in order to keep the structural feasibility and the mass of the whole structure reasonable, the ratio of mass to size should be reduced. This trend results in more flexible structures. Control of the flexible structure of a wind turbine in a wind field of stochastic nature is very challenging. In this thesis we examine a number of robust model based methods for wind turbine …

  4. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

… Their developments, however, are largely due to experiment-based trial-and-error approaches, and while they do not require validation, they can be time consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply a model-based synthesis method to systematically generate and evaluate alternatives in the first stage and an experiment-model based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based …

  5. A Comparison of Filter-based Approaches for Model-based Prognostics

    Data.gov (United States)

    National Aeronautics and Space Administration — Model-based prognostics approaches use domain knowledge about a system and its failure modes through the use of physics-based models. Model-based prognosis is...

  6. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricanes Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  7. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

… Their developments, however, are largely due to experiment-based trial-and-error approaches, and while they do not require validation, they can be time consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply … for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based synthesis method must employ models at lower levels of aggregation and, through combination rules for phenomena, generate (synthesize) new intensified unit operations. An efficient solution procedure for the synthesis problem is needed to tackle the potentially large number of options that would be obtained …

  8. Hidden Markov model-based approach for generation of Pitman ...

    Indian Academy of Sciences (India)

Speech is one of the most basic means of human communication. … human beings is carried out with the aid of communication and has facilitated the development … (Table 1 of the paper lists PSL basic strokes and the corresponding English consonants.)

  9. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  10. Non-frontal Model Based Approach to Forensic Face Recognition

    NARCIS (Netherlands)

    Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2012-01-01

    In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance

  11. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer the wind drag, bottom drag, and internal mixing coefficients.
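
    A minimal sketch of this two-step pipeline (surrogate construction followed by a Bayesian update), using a cheap stand-in function in place of an ocean model and invented parameter ranges; the polynomial-ridge surrogate and Metropolis sampler below are illustrative choices, not the talk's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(c):
    # Stand-in for an expensive ocean-model run with drag coefficient c.
    return np.sin(3.0 * c) + 0.5 * c**2

# Stage 1: non-intrusive surrogate via regularized (ridge) regression.
train_c = np.linspace(0.0, 2.0, 12)            # design points
train_y = expensive_model(train_c)             # "simulator" runs
V = np.vander(train_c, 6, increasing=True)     # polynomial basis
coef = np.linalg.solve(V.T @ V + 1e-6 * np.eye(6), V.T @ train_y)
surrogate = lambda c: np.vander(np.atleast_1d(c), 6, increasing=True) @ coef

# Stage 2: Bayesian update of the uncertain input via Metropolis sampling,
# evaluating only the cheap surrogate inside the chain.
c_true, sigma = 1.3, 0.05
y_obs = expensive_model(c_true) + rng.normal(0.0, sigma)

def log_post(c):
    if not 0.0 <= c <= 2.0:                    # uniform prior on [0, 2]
        return -np.inf
    return -0.5 * ((y_obs - surrogate(c)[0]) / sigma) ** 2

chain, c_cur = [], 1.0
for _ in range(5000):
    c_prop = c_cur + rng.normal(0.0, 0.1)
    if np.log(rng.uniform()) < log_post(c_prop) - log_post(c_cur):
        c_cur = c_prop
    chain.append(c_cur)

print(f"posterior mean c = {np.mean(chain[1000:]):.3f} (true value {c_true})")
```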

  12. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.
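
    A toy sketch of the ensemble-modeling idea (the pathway, parameter distribution, and screening rule are invented for illustration, not the project's models): sample kinetic parameter sets that all reproduce the same reference steady state, then screen the ensemble by its response to an enzyme-level perturbation.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
S_ref, v_ref = 1.0, 1.0          # reference steady state: substrate level and flux

# Each member samples Km and anchors Vmax so the reference flux is matched.
ensemble = []
for _ in range(200):
    km = rng.lognormal(mean=0.0, sigma=1.0)
    vmax = v_ref * (km + S_ref) / S_ref
    ensemble.append((vmax, km))

def response(vmax, km, fold=2.0):
    # Double the enzyme level and integrate toward the new steady state.
    rhs = lambda t, s: v_ref - fold * vmax * s[0] / (km + s[0])
    sol = solve_ivp(rhs, (0.0, 50.0), [S_ref], rtol=1e-8)
    return sol.y[0, -1]

new_S = np.array([response(v, k) for v, k in ensemble])
flux = np.array([2 * v * s / (k + s) for (v, k), s in zip(ensemble, new_S)])
kept = new_S[np.abs(flux - v_ref) < 1e-4]   # screen: members that settled
print(f"{kept.size}/{len(ensemble)} members retained; "
      f"median substrate after 2x enzyme: {np.median(kept):.3f}")
```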

  13. A probabilistic approach to the drag-based model

    Science.gov (United States)

    Napoletano, Gianluca; Forte, Roberta; Moro, Dario Del; Pietropaolo, Ermanno; Giovannelli, Luca; Berrilli, Francesco

    2018-02-01

The forecast of the time of arrival (ToA) of a coronal mass ejection (CME) to Earth is of critical importance for our high-technology society and for any future manned exploration of the Solar System. As critical as the forecast accuracy is the knowledge of its precision, i.e. the error associated with the estimate. We propose a statistical approach for the computation of the ToA using the drag-based model by introducing probability distributions, rather than exact values, as input parameters, thus allowing the evaluation of the uncertainty on the forecast. We test this approach using a set of CMEs whose transit times are known, and obtain extremely promising results: the average value of the absolute differences between measurement and forecast is 9.1 h, and half of these residuals are within the estimated errors. These results suggest that this approach deserves further investigation. We are working to realize a real-time implementation which ingests the outputs of automated CME tracking algorithms as inputs to create a database of events useful for a further validation of the approach.
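
    The following is a hedged sketch of the core Monte Carlo step, using the analytic solution of the drag-based equation of motion and illustrative (not the paper's) input distributions for the initial CME speed, solar-wind speed, and drag parameter:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
AU = 1.496e8                              # km
R0 = 20.0 * 6.96e5                        # start the CME at 20 solar radii [km]

def arrival_time(v0, w, gamma):
    # Analytic drag-based-model distance for a decelerating CME (v0 > w):
    # r(t) = r0 + w*t + ln(1 + gamma*(v0 - w)*t) / gamma
    f = lambda t: R0 + w * t + np.log1p(gamma * (v0 - w) * t) / gamma - AU
    return brentq(f, 1.0, 3e6) / 3600.0   # transit time in hours

samples = []
for _ in range(10_000):
    v0 = rng.normal(1000.0, 100.0)        # initial CME speed [km/s]
    w = rng.normal(400.0, 50.0)           # ambient solar-wind speed [km/s]
    gamma = rng.lognormal(np.log(2e-8), 0.5)   # drag parameter [1/km]
    if v0 > w:                            # keep the decelerating branch only
        samples.append(arrival_time(v0, w, gamma))

toa = np.array(samples)
print(f"ToA = {toa.mean():.1f} h, sigma = {toa.std():.1f} h, "
      f"68% interval = [{np.percentile(toa, 16):.1f}, {np.percentile(toa, 84):.1f}] h")
```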

  14. Nonlinear Modeling of the PEMFC Based On NNARX Approach

    OpenAIRE

    Shan-Jen Cheng; Te-Jen Chang; Kuang-Hsiung Tan; Shou-Ling Kuo

    2015-01-01

The Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system. Traditional linear modeling approaches can hardly estimate the structure of a PEMFC system correctly. For this reason, this paper presents a nonlinear model of the PEMFC based on the Neural Network Auto-regressive model with eXogenous inputs (NNARX) approach. A multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accurac...
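
    A compact illustration of the NNARX construction on a synthetic system (the toy dynamics, lag orders, and network size below are assumptions, not the paper's PEMFC data): past outputs and past exogenous inputs form the regressor vector, and an MLP learns the one-step-ahead map.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Toy nonlinear dynamic system standing in for the PEMFC voltage response.
N = 2000
u = rng.uniform(-1.0, 1.0, N)                   # exogenous input (e.g. load current)
y = np.zeros(N)
for k in range(2, N):
    y[k] = 0.6 * y[k-1] - 0.1 * y[k-2] + np.tanh(u[k-1]) + 0.01 * rng.normal()

# NNARX regressor [y(k-1), y(k-2), u(k-1), u(k-2)] -> target y(k).
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
target = y[2:]

split = int(0.7 * len(target))
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
net.fit(X[:split], target[:split])

pred = net.predict(X[split:])
rmse = np.sqrt(np.mean((pred - target[split:]) ** 2))
print(f"one-step-ahead RMSE on held-out data: {rmse:.4f}")
```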

  15. An Approach to Model Based Testing of Multiagent Systems

    Directory of Open Access Journals (Sweden)

    Shafiq Ur Rehman

    2015-01-01

Full Text Available Autonomous agents perform on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence into the working of a multiagent system. The Prometheus methodology is a commonly used approach to design multiagent systems, and systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agent and actors are considered to test the multiagent system. These interactions include percepts and actions along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion.
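
    A minimal sketch of the path-generation step on a hypothetical protocol graph (node and message names are invented; the coverage criterion shown is all-edges, i.e. every percept, action, and inter-agent message appears in at least one test path):

```python
# Adjacency list of a toy protocol graph: node -> [(interaction label, next node)].
protocol = {
    "start":    [("percept:order", "A_idle")],
    "A_idle":   [("msg:request_quote", "B_quote")],
    "B_quote":  [("msg:quote", "A_decide")],
    "A_decide": [("action:accept", "end"), ("action:reject", "end")],
    "end":      [],
}

def edge_covering_paths(graph, start="start", goal="end"):
    """Enumerate simple start-goal paths, then keep paths until all edges are covered."""
    all_paths, stack = [], [(start, [start], [])]
    while stack:
        node, visited, edges = stack.pop()
        if node == goal:
            all_paths.append(edges)
            continue
        for label, nxt in graph[node]:
            if nxt not in visited:                    # simple paths only
                stack.append((nxt, visited + [nxt], edges + [(node, label, nxt)]))
    uncovered = {(u, lbl, v) for u, outs in graph.items() for lbl, v in outs}
    suite = []
    for path in sorted(all_paths, key=len, reverse=True):   # greedy selection
        if uncovered & set(path):
            suite.append(path)
            uncovered -= set(path)
    return suite

for i, path in enumerate(edge_covering_paths(protocol), 1):
    print(f"test path {i}: " + " -> ".join(lbl for _, lbl, _ in path))
```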

  16. Probabilistic model-based approach for heart beat detection.

    Science.gov (United States)

    Chen, Hugh; Erol, Yusuf; Shen, Eric; Russell, Stuart

    2016-09-01

    Nowadays, hospitals are ubiquitous and integral to modern society. Patients flow in and out of a veritable whirlwind of paperwork, consultations, and potential inpatient admissions, through an abstracted system that is not without flaws. One of the biggest flaws in the medical system is perhaps an unexpected one: the patient alarm system. One longitudinal study reported an 88.8% rate of false alarms, with other studies reporting numbers of similar magnitudes. These false alarm rates lead to deleterious effects that manifest in a lower standard of care across clinics. This paper discusses a model-based probabilistic inference approach to estimate physiological variables at a detection level. We design a generative model that complies with a layman's understanding of human physiology and perform approximate Bayesian inference. One primary goal of this paper is to justify a Bayesian modeling approach to increasing robustness in a physiological domain. In order to evaluate our algorithm we look at the application of heart beat detection using four datasets provided by PhysioNet, a research resource for complex physiological signals, in the form of the PhysioNet 2014 Challenge set-p1 and set-p2, the MIT-BIH Polysomnographic Database, and the MGH/MF Waveform Database. On these data sets our algorithm performs on par with the other top six submissions to the PhysioNet 2014 challenge. The overall evaluation scores in terms of sensitivity and positive predictivity values obtained were as follows: set-p1 (99.72%), set-p2 (93.51%), MIT-BIH (99.66%), and MGH/MF (95.53%). These scores are based on the averaging of gross sensitivity, gross positive predictivity, average sensitivity, and average positive predictivity.

  17. Software sensors based on the grey-box modelling approach

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.; Strube, Rune

    1996-01-01

In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box models may contain terms which can be estimated on-line by use of the models and measurements. In this paper, it is demonstrated that many storage basins in sewer systems can be used as an on-line flow measurement provided that the basin is monitored on-line with a level transmitter and that a grey…
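
    A small sketch of the software-sensor idea under simplifying assumptions (known basin area and outflow, invented noise levels): the stochastic part of a grey-box model is realized here as a Kalman filter that both smooths the noisy level signal and estimates the unmeasured inflow on-line.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, A, q_out = 60.0, 500.0, 0.05    # timestep [s], basin area [m^2], known outflow [m^3/s]

# State x = [level h, inflow q_in]; the inflow is modeled as a random walk.
F = np.array([[1.0, dt / A], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])          # only the level is measured
Q = np.diag([1e-8, 1e-6])           # process noise (stochastic grey-box term)
R = np.array([[1e-4]])              # level-transmitter noise variance

# Synthetic truth: a storm event ramps the inflow up and back down.
T = 300
q_true = 0.05 + 0.1 * np.exp(-((np.arange(T) - 150) / 40.0) ** 2)
h_true = np.zeros(T)
for k in range(1, T):
    h_true[k] = h_true[k-1] + dt / A * (q_true[k-1] - q_out)
z = h_true + rng.normal(0.0, 0.01, T)     # noisy level measurements

x, P, est = np.array([0.0, 0.05]), np.eye(2) * 0.1, []
for k in range(T):
    x, P = F @ x, F @ P @ F.T + Q         # predict
    x[0] -= dt / A * q_out                # known outflow acts as a control input
    S = H @ P @ H.T + R                   # update with the level measurement
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z[k] - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    est.append(x[1])                      # software-sensor output: inflow estimate

print(f"peak inflow: true {q_true.max():.3f}, estimated {max(est):.3f} m^3/s")
```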

  18. An approach to accidents modeling based on compounds road environments.

    Science.gov (United States)

    Fernandes, Ana; Neves, Jose

    2013-04-01

The most common approach to studying the influence of certain road features on accidents has been the consideration of uniform road segments characterized by a unique feature. However, when an accident is related to the road infrastructure, its cause is usually not a single characteristic but rather a complex combination of several characteristics. The main objective of this paper is to describe a methodology developed in order to consider the road as a complete environment by using compound road environments, overcoming the limitations inherent in considering only uniform road segments. The methodology consists of: dividing a sample of roads into segments; grouping them into quite homogeneous road environments using cluster analysis; and identifying the influence of skid resistance and texture depth on road accidents in each environment by using generalized linear models. The application of this methodology is demonstrated for eight roads. Based on real data from accidents and road characteristics, three compound road environments were established where the pavement surface properties significantly influence the occurrence of accidents. Results have clearly shown that road environments where braking maneuvers are more common, or those with small radii of curvature and high speeds, require higher skid resistance and texture depth as an important contribution to accident prevention. Copyright © 2013 Elsevier Ltd. All rights reserved.
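
    A hedged sketch of the two-stage methodology on synthetic data (the paper's segment descriptors and accident data are richer; all numbers below are invented): cluster segments into compound road environments, then fit a Poisson generalized linear model per environment relating skid resistance and texture depth to accident counts.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n = 600
# Per-segment features describing the road environment (illustrative).
feats = np.column_stack([
    rng.uniform(100, 2000, n),            # curve radius [m]
    rng.uniform(50, 120, n),              # operating speed [km/h]
    rng.uniform(0, 8, n),                 # gradient [%]
])
skid = rng.uniform(0.3, 0.8, n)           # skid resistance
texture = rng.uniform(0.3, 1.5, n)        # texture depth [mm]
acc = rng.poisson(np.exp(1.0 - 2.5 * skid - 0.8 * texture))  # accident counts

# Stage 1: compound road environments via cluster analysis.
envs = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    (feats - feats.mean(0)) / feats.std(0))

# Stage 2: one Poisson GLM per environment.
for e in range(3):
    idx = envs == e
    X = sm.add_constant(np.column_stack([skid[idx], texture[idx]]))
    fit = sm.GLM(acc[idx], X, family=sm.families.Poisson()).fit()
    print(f"environment {e}: beta(skid) = {fit.params[1]:+.2f}, "
          f"beta(texture) = {fit.params[2]:+.2f}")
```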

  19. Medical Inpatient Journey Modeling and Clustering: A Bayesian Hidden Markov Model Based Approach.

    Science.gov (United States)

    Huang, Zhengxing; Dong, Wei; Wang, Fei; Duan, Huilong

    2015-01-01

Modeling and clustering medical inpatient journeys is useful to healthcare organizations for a number of reasons, including reorganizing inpatient journeys in a form that is more convenient to understand and browse. In this study, we present a probabilistic model-based approach to model and cluster medical inpatient journeys. Specifically, we exploit a Bayesian Hidden Markov Model based approach to transform medical inpatient journeys into a probabilistic space, which can be seen as a richer representation of the inpatient journeys to be clustered. Then, using hierarchical clustering on the matrix of similarities, inpatient journeys can be clustered into different categories with respect to their clinical and temporal characteristics. We evaluated the proposed approach on a real clinical data set pertaining to the unstable angina treatment process. The experimental results reveal that our method can identify and model latent treatment topics underlying personalized inpatient journeys, and yield impressive clustering quality.
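
    A compact sketch of the pipeline under strong simplifications (journeys as short categorical event sequences, two fixed hand-set HMMs instead of learned ones): each journey is scored against the HMMs with the forward algorithm to obtain a probabilistic representation, which is then clustered hierarchically.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of obs under a discrete HMM."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

# Two illustrative 2-state HMMs over 3 event types (e.g. exam, drug, surgery).
hmms = [
    (np.array([0.8, 0.2]),
     np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])),
    (np.array([0.5, 0.5]),
     np.array([[0.5, 0.5], [0.5, 0.5]]),
     np.array([[0.1, 0.8, 0.1], [0.3, 0.3, 0.4]])),
]

journeys = [[0, 0, 1, 1, 2], [0, 1, 0, 1, 0], [2, 2, 1, 0], [1, 1, 1, 2, 2, 2]]

# Probabilistic space: each journey -> vector of per-model log-likelihoods.
rep = np.array([[forward_loglik(np.array(j), *h) for h in hmms] for j in journeys])
labels = fcluster(linkage(pdist(rep), method="average"), t=2, criterion="maxclust")
print("cluster assignment per journey:", labels)
```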

  20. CM5: A pre-Swarm magnetic field model based upon the comprehensive modeling approach

    DEFF Research Database (Denmark)

    Sabaka, T.; Olsen, Nils; Tyler, Robert

    2014-01-01

    We have developed a model based upon the very successful Comprehensive Modeling (CM) approach using recent CHAMP, Ørsted, SAC-C and observatory hourly-means data from September 2000 to the end of 2013. This CM, called CM5, was derived from the algorithm that will provide a consistent line of Leve...

  1. Hidden Markov model-based approach for generation of Pitman ...

    Indian Academy of Sciences (India)

In this paper, an approach for feature extraction using Mel frequency cepstral coefficients (MFCC) and classification using hidden Markov models (HMM) for generating strokes comprising consonants and vowels (CV) in the process of production of Pitman shorthand language from spoken English is proposed. The …

  2. A Model-based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  3. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  4. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

Full Text Available Abstract. Background: In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods: We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results: The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure of the intra-population contact pattern of the approaches. The age …

  5. Comparing large-scale computational approaches to epidemic modeling: agent-based versus structured metapopulation models.

    Science.gov (United States)

    Ajelli, Marco; Gonçalves, Bruno; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José J; Merler, Stefano; Vespignani, Alessandro

    2010-06-29

    In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age breakdown analysis shows that similar attack rates are
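
    As a toy illustration of the structured-metapopulation ingredient only (population sizes, rates, and the coupling matrix are invented and far simpler than GLEaM's), the following simulates a stochastic SIR process in two coupled subpopulations:

```python
import numpy as np

rng = np.random.default_rng(6)
N = np.array([500_000, 100_000])          # two subpopulation sizes
I = np.array([10, 0])                     # seed infections in patch 0
S, R = N - I, np.zeros(2, int)
beta, gamma, dt = 0.5, 1.0 / 3.0, 0.1     # per-day rates (R0 = 1.5), day fraction
C = np.array([[0.95, 0.05],               # mobility/contact coupling between patches
              [0.10, 0.90]])

history = []
for _ in range(2000):                     # 200 simulated days
    lam = beta * (C @ (I / N))            # force of infection incl. travel mixing
    new_inf = rng.binomial(S, 1 - np.exp(-lam * dt))
    new_rec = rng.binomial(I, 1 - np.exp(-gamma * dt))
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    history.append(I.copy())

history = np.array(history)
print("peak day per patch:", (history.argmax(axis=0) * dt).round(1))
print("attack rate per patch:", (R / N).round(3))
```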

  6. Teaching EFL Writing: An Approach Based on the Learner's Context Model

    Science.gov (United States)

    Lin, Zheng

    2017-01-01

    This study aims to examine qualitatively a new approach to teaching English as a foreign language (EFL) writing based on the learner's context model. It investigates the context model-based approach in class and identifies key characteristics of the approach delivered through a four-phase teaching and learning cycle. The model collects research…

  7. An Agent-Based Approach to Modeling Online Social Influence

    NARCIS (Netherlands)

    Maanen, P.P. van; Vecht, B. van der

    2013-01-01

    The aim of this study is to better understand social influence in online social media. Therefore, we propose a method in which we implement, validate and improve an individual behavior model. The behavior model is based on three fundamental behavioral principles of social influence from the

  8. A model-based approach for fault-tolerant control

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2010-01-01

A model-based controller architecture for fault-tolerant control (FTC) is presented in this paper. The controller architecture is based on the Youla-Jabr-Bongiorno-Kucera (YJBK) parameterization. The FTC architecture consists of two central parts: a fault detection and isolation (FDI) part and a con...

  9. New Approaches for Channel Prediction Based on Sinusoidal Modeling

    Directory of Open Access Journals (Sweden)

    Ekman Torbjörn

    2007-01-01

Full Text Available Long-range channel prediction is considered to be one of the most important enabling technologies for future wireless communication systems. The prediction of Rayleigh fading channels is studied in the framework of sinusoidal modeling in this paper. A stochastic sinusoidal model to represent a Rayleigh fading channel is proposed. Three different predictors based on the statistical sinusoidal model are proposed. These methods outperform the standard linear predictor (LP) in Monte Carlo simulations, but underperform with real measurement data, probably due to nonstationary model parameters. To mitigate these modeling errors, a joint moving average and sinusoidal (JMAS) prediction model and the associated joint least-squares (LS) predictor are proposed. It combines the sinusoidal model with an LP to handle unmodeled dynamics in the signal. The joint LS predictor outperforms all the other sinusoidal LMMSE predictors in suburban environments, but still performs slightly worse than the standard LP in urban environments.
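
    For orientation, here is a sketch of the baseline the paper compares against, a standard linear predictor fitted by least squares, applied to a synthetic sum-of-sinusoids (Jakes-style) fading process; the sample rate, Doppler spread, predictor order, and horizon are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
fs, fd, N = 500.0, 10.0, 4000             # sample rate [Hz], max Doppler [Hz], samples
t = np.arange(N) / fs

# Complex fading envelope as a sum of K sinusoids with random angles/phases.
K = 16
theta = rng.uniform(0, 2 * np.pi, K)
phi = rng.uniform(0, 2 * np.pi, K)
h = np.exp(1j * (2 * np.pi * fd * np.cos(theta)[:, None] * t[None, :]
                 + phi[:, None])).sum(axis=0) / np.sqrt(K)

p, horizon = 20, 25                       # LP order; predict 25 samples (50 ms) ahead
rows = N - p - horizon
X = np.array([h[i:i + p] for i in range(rows)])     # past windows
y = h[p + horizon - 1 + np.arange(rows)]            # future samples
w, *_ = np.linalg.lstsq(X, y, rcond=None)           # complex least-squares LP

nmse = np.mean(np.abs(X @ w - y) ** 2) / np.mean(np.abs(y) ** 2)
print(f"{horizon / fs * 1e3:.0f} ms-ahead prediction NMSE: {10 * np.log10(nmse):.1f} dB")
```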

  10. Model-Based Approaches for Teaching and Practicing Personality Assessment.

    Science.gov (United States)

    Blais, Mark A; Hopwood, Christopher J

    2017-01-01

    Psychological assessment is a complex professional skill. Competence in assessment requires an extensive knowledge of personality, neuropsychology, social behavior, and psychopathology, a background in psychometrics, familiarity with a range of multimethod tools, cognitive flexibility, skepticism, and interpersonal sensitivity. This complexity makes assessment a challenge to teach and learn, particularly as the investment of resources and time in assessment has waned in psychological training programs over the last few decades. In this article, we describe 3 conceptual models that can assist teaching and learning psychological assessments. The transtheoretical model of personality provides a personality systems-based framework for understanding how multimethod assessment data relate to major personality systems and can be combined to describe and explain complex human behavior. The quantitative psychopathology-personality trait model is an empirical model based on the hierarchical organization of individual differences. Application of this model can help students understand diagnostic comorbidity and symptom heterogeneity, focus on more meaningful high-order domains, and identify the most effective assessment tools for addressing a given question. The interpersonal situation model is rooted in interpersonal theory and can help students connect test data to here-and-now interactions with patients. We conclude by demonstrating the utility of these models using a case example.

  11. Problem-Oriented Corporate Knowledge Base Models on the Case-Based Reasoning Approach Basis

    Science.gov (United States)

    Gluhih, I. N.; Akhmadulin, R. K.

    2017-07-01

One of the urgent directions of efficiency enhancement of production processes and enterprise activities management is the creation and use of corporate knowledge bases. The article suggests a concept of problem-oriented corporate knowledge bases (PO CKB), in which knowledge is arranged around possible problem situations and represents a tool for making and implementing decisions in such situations. For knowledge representation in PO CKB, the use of a case-based reasoning approach is encouraged. Under this approach, the content of a case as a knowledge base component has been defined; based on the situation tree, a PO CKB knowledge model has been developed, in which knowledge about typical situations as well as specific examples of situations and solutions is represented. A generalized problem-oriented corporate knowledge base structural chart and possible modes of its operation have been suggested. The obtained models allow creating and using corporate knowledge bases to support decision making and implementation, training, staff skill upgrading and analysis of the decisions taken. The universal interpretation of the terms “situation” and “solution” adopted in the work allows using the suggested models to develop problem-oriented corporate knowledge bases in different subject domains. It has been suggested to use the developed models for building corporate knowledge bases for enterprises that operate engineering systems and networks at large production facilities.

  12. Banking Crisis Early Warning Model based on a Bayesian Model Averaging Approach

    Directory of Open Access Journals (Sweden)

    Taha Zaghdoudi

    2016-08-01

Full Text Available The succession of banking crises, most of which have resulted in huge economic and financial losses, has prompted several authors to study their determinants. These authors constructed early warning models to help prevent their occurrence. It is in this vein that our study takes its inspiration. In particular, we have developed an early warning model of banking crises based on a Bayesian approach. The results of this approach have allowed us to identify the involvement of declining bank profitability, deterioration of the competitiveness of traditional intermediation, banking concentration and higher real interest rates in triggering banking crises.
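
    A hedged sketch of one common Bayesian-model-averaging recipe on synthetic data (the study's crisis indicators and exact Bayesian machinery are not reproduced here): fit a logit model for every subset of candidate predictors, weight the models by BIC-approximated posterior probabilities, and read off posterior inclusion probabilities.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n, names = 400, ["profitability", "intermediation", "concentration", "real_rate"]
X = rng.normal(size=(n, 4))
p_crisis = 1 / (1 + np.exp(-(-1.0 - 1.2 * X[:, 0] + 0.9 * X[:, 3])))
crisis = rng.binomial(1, p_crisis)        # synthetic crisis indicator

models = []
for k in range(1, 5):
    for subset in itertools.combinations(range(4), k):
        fit = sm.Logit(crisis, sm.add_constant(X[:, subset])).fit(disp=0)
        models.append((subset, fit.bic))

# BIC approximation: P(model | data) is proportional to exp(-BIC / 2).
bics = np.array([b for _, b in models])
w = np.exp(-(bics - bics.min()) / 2)
w /= w.sum()
for j, name in enumerate(names):
    incl = sum(wi for (sub, _), wi in zip(models, w) if j in sub)
    print(f"posterior inclusion probability of {name}: {incl:.2f}")
```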

  13. Oscillator-based assistance of cyclical movements: model-based and model-free approaches

    OpenAIRE

    Ronsse, Renaud; Lenzi, Tommaso; Vitiello, Nicola; Koopman, Bram; van Asseldonk, Edwin; De Rossi, Stefano Marco Maria; van den Kieboom, Jesse; van der Kooij, Herman; Carrozza, Maria Chiara; Ijspeert, Auke Jan

    2011-01-01

    In this article, we propose a new method for providing assistance during cyclical movements. This method is trajectory-free, in the sense that it provides user assistance irrespective of the performed movement, and requires no other sensing than the assisting robot's own encoders. The approach is based on adaptive oscillators, i.e., mathematical tools that are capable of learning the high level features (frequency, envelope, etc.) of a periodic input signal. Here we present two experiments th...

  14. A Model-Based Approach to Constructing Music Similarity Functions

    Science.gov (United States)

    West, Kris; Lamere, Paul

    2006-12-01

    Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.
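
    A minimal sketch of that idea on synthetic data (feature dimensions, genres, and the random-forest choice are invented for illustration): train a classifier on cultural labels, project each track's audio features into the space of predicted class probabilities, and measure similarity there rather than in the raw feature space.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(9)
genres, d = 4, 12                         # 4 genre labels, 12 "spectral" features
centers = rng.normal(scale=3.0, size=(genres, d))
labels = rng.integers(0, genres, 500)
feats = centers[labels] + rng.normal(size=(500, d))   # toy timbre features

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(feats, labels)                    # learn the cultural-label structure

def similarity_distance(track_a, track_b):
    """Distance between two tracks in the genre-probability (intermediate) space."""
    pa, pb = clf.predict_proba(np.vstack([track_a, track_b]))
    return float(np.linalg.norm(pa - pb))

same = similarity_distance(centers[0] + rng.normal(size=d),
                           centers[0] + rng.normal(size=d))
diff = similarity_distance(centers[0] + rng.normal(size=d),
                           centers[2] + rng.normal(size=d))
print(f"distance, same-genre pair: {same:.3f}; cross-genre pair: {diff:.3f}")
```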

  15. Modeling collective emotions: a stochastic approach based on Brownian agents

    International Nuclear Information System (INIS)

    Schweitzer, F.

    2010-01-01

We develop an agent-based framework to model the emergence of collective emotions, which is applied to online communities. Agents' individual emotions are described by their valence and arousal. Using the concept of Brownian agents, these variables change according to a stochastic dynamics, which also considers the feedback from online communication. Agents generate emotional information, which is stored and distributed in a field modeling the online medium. This field affects the emotional states of agents in a nonlinear manner. We derive conditions for the emergence of collective emotions, observable in a bimodal valence distribution. Depending on whether the feedback between the information field and the agent's arousal is saturated or superlinear, we further identify scenarios where collective emotions appear only once or in a repeated manner. The analytical results are illustrated by agent-based computer simulations. Our framework provides testable hypotheses about the emergence of collective emotions, which can be verified by data from online communities. (author)
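
    The core mechanism can be illustrated with a few lines of simulation (all coefficients are invented, and the field here stores only a decaying mean of expressed valence rather than separate positive/negative information): valences follow a noisy double-well dynamics with feedback from a shared field, and the population splits into two emotional camps, i.e. a bimodal valence distribution.

```python
import numpy as np

rng = np.random.default_rng(10)
n_agents, steps, dt = 500, 4000, 0.01
coupling, noise, decay = 0.3, 1.0, 0.9    # field coupling, noise strength, field memory

v = rng.normal(0.0, 0.1, n_agents)        # valences start near neutral
field = 0.0                               # shared online-communication field

for _ in range(steps):
    # Agents express emotion; the field stores a decaying mean of expressed valence.
    field = decay * field + (1 - decay) * v.mean()
    # Double-well drift (v - v^3) plus field feedback and stochastic forcing.
    drift = v - v**3 + coupling * field
    v += drift * dt + noise * np.sqrt(dt) * rng.normal(size=n_agents)

print(f"strongly positive: {(v > 0.5).mean():.2f}, "
      f"strongly negative: {(v < -0.5).mean():.2f}")
```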

  16. Oscillator-based assistance of cyclical movements: model-based and model-free approaches.

    Science.gov (United States)

    Ronsse, Renaud; Lenzi, Tommaso; Vitiello, Nicola; Koopman, Bram; van Asseldonk, Edwin; De Rossi, Stefano Marco Maria; van den Kieboom, Jesse; van der Kooij, Herman; Carrozza, Maria Chiara; Ijspeert, Auke Jan

    2011-10-01

    In this article, we propose a new method for providing assistance during cyclical movements. This method is trajectory-free, in the sense that it provides user assistance irrespective of the performed movement, and requires no other sensing than the assisting robot's own encoders. The approach is based on adaptive oscillators, i.e., mathematical tools that are capable of learning the high level features (frequency, envelope, etc.) of a periodic input signal. Here we present two experiments that we recently conducted to validate our approach: a simple sinusoidal movement of the elbow, that we designed as a proof-of-concept, and a walking experiment. In both cases, we collected evidence illustrating that our approach indeed assisted healthy subjects during movement execution. Owing to the intrinsic periodicity of daily life movements involving the lower-limbs, we postulate that our approach holds promise for the design of innovative rehabilitation and assistance protocols for the lower-limb, requiring little to no user-specific calibration.

  17. Possibility of object recognition using Altera's model based design approach

    International Nuclear Information System (INIS)

    Tickle, A J; Harvey, P K; Smith, J S; Wu, F

    2009-01-01

Object recognition is an image processing task of finding a given object in a selected image or video sequence. Object recognition can be divided into two areas: one of these is decision-theoretic and deals with patterns described by quantitative descriptors, for example length, area, shape and texture. As the Graphical User Interface Circuitry (GUIC) methodology employed here is relatively new for object recognition systems, the aim of this work is to identify whether the developed circuitry can detect certain shapes or strings within the target image. A much smaller reference image feeds the preset data for identification. Tests are conducted for both binary and greyscale images, and the additional mathematical morphology used to highlight the area within the target image where the object(s) are located is also presented. This provides proof that basic recognition methods are valid and would allow progression to developing decision-theoretic and learning-based approaches using GUICs for use in multidisciplinary tasks.

  18. A Model-Based Approach to Constructing Music Similarity Functions

    Directory of Open Access Journals (Sweden)

    Lamere Paul

    2007-01-01

    Full Text Available Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  19. Embedded Control System Design A Model Based Approach

    CERN Document Server

    Forrai, Alexandru

    2013-01-01

Control system design is a challenging task for practicing engineers. It requires knowledge of different engineering fields, a good understanding of technical specifications and good communication skills. The current book introduces the reader to practical control system design, bridging the gap between theory and practice. The control design techniques presented in the book are all model based, considering the needs and possibilities of practicing engineers. Classical control design techniques are reviewed, and methods are presented for verifying the robustness of the design. It is shown how the designed control algorithm can be implemented in real-time and tested, fulfilling different safety requirements. Good design practices and the systematic software development process are emphasized in the book according to the generic standard IEC 61508. The book is mainly addressed to practicing control and embedded software engineers - working in research and development - as well as graduate students who are face...

  20. Raster-Based Approach to Solar Pressure Modeling

    Science.gov (United States)

    Wright, Theodore W. II

    2013-01-01

… shown on the computer screen is composed of up to millions of pixels. Each of those pixels is associated with a small illuminated area of the spacecraft. For each pixel, it is possible to compute its position, angle (surface normal) from the view direction, and the spacecraft material (and therefore, optical coefficients) associated with that area. With this information, the area associated with each pixel can be modeled as a simple flat plate for calculating solar pressure. The vector sum of these individual flat plate models is a high-fidelity approximation of the solar pressure forces and torques on the whole vehicle. In addition to using optical coefficients associated with each spacecraft material to calculate solar pressure, a power generation coefficient is added for computing solar array power generation from the sum of the illuminated areas. Similarly, other area-based calculations, such as free molecular flow drag, are also enabled. Because the model rendering is separated from other calculations, it is relatively easy to add a new model to explore a new vehicle or mission configuration. Adding a new model is performed by adding OpenGL code, but a future version might read a mesh file exported from a computer-aided design (CAD) system to enable very rapid turnaround for new designs.
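
    A sketch of the per-pixel flat-plate accumulation this record describes, for an invented two-surface "spacecraft" (pixel areas, normals, and optical coefficients are placeholders for what the renderer would produce):

```python
import numpy as np

P0 = 4.56e-6                              # solar radiation pressure at 1 AU [N/m^2]
sun = np.array([1.0, 0.0, 0.0])           # unit vector pointing toward the Sun

# Per-pixel data as a renderer might produce it: normals, areas, materials.
normals = np.array([[1.0, 0.0, 0.0]] * 1000 + [[0.7071, 0.7071, 0.0]] * 500)
areas = np.full(len(normals), 1e-4)       # m^2 of spacecraft surface per pixel
rho = np.where(np.arange(len(normals)) < 1000, 0.3, 0.8)   # specular reflectivity

cos_t = normals @ sun                     # incidence-angle cosine per pixel
lit = cos_t > 0                           # back-facing pixels contribute nothing

# Flat plate: absorbed photons push along -sun, specular reflection along -normal.
f_abs = -P0 * ((1 - rho[lit]) * areas[lit] * cos_t[lit])[:, None] * sun
f_spec = -2 * P0 * (rho[lit] * areas[lit] * cos_t[lit] ** 2)[:, None] * normals[lit]
force = (f_abs + f_spec).sum(axis=0)      # vector sum over all illuminated pixels
print("net solar pressure force [N]:", force)
```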

  1. Model-Based Approaches to Active Perception and Control

    Directory of Open Access Journals (Sweden)

    Giovanni Pezzulo

    2017-06-01

Full Text Available There is an on-going debate in cognitive (neuro)science and philosophy between classical cognitive theory and embodied, embedded, extended, and enactive ("4-Es") views of cognition, a family of theories that emphasize the role of the body in cognition and the importance of brain-body-environment interaction over and above internal representation. This debate touches foundational issues, such as whether the brain internally represents the external environment, and "infers" or "computes" something. Here we focus on two 4-Es-based criticisms of traditional cognitive theories, the notions of passive perception and of serial information processing, and discuss alternative ways to address them, by appealing to frameworks that use, or do not use, notions of internal modelling and inference. Our analysis illustrates that: an explicitly inferential framework can capture some key aspects of embodied and enactive theories of cognition; some claims of computational and dynamical theories can be reconciled rather than seen as alternative explanations of cognitive phenomena; and some aspects of cognitive processing (e.g., detached cognitive operations, such as planning and imagination) that are sometimes puzzling to explain from enactive and non-representational perspectives can, instead, be captured nicely from the perspective that internal generative models and predictive processing mediate adaptive control loops.

  2. An approach for activity-based DEVS model specification

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2016-01-01

    Creation of DEVS models has been advanced through Model Driven Architecture and its frameworks. The overarching role of the frameworks has been to help develop model specifications in a disciplined fashion. Frameworks can provide intermediary layers between the higher level mathematical models an...

  3. Fuzzy Investment Portfolio Selection Models Based on Interval Analysis Approach

    Directory of Open Access Journals (Sweden)

    Haifeng Guo

    2012-01-01

Full Text Available This paper employs fuzzy set theory to address the unintuitive aspects of the Markowitz mean-variance (MV) portfolio model and extend it to a fuzzy investment portfolio selection model. Our model establishes intervals for expected returns and risk preference, which can take into account investors' different investment appetites and thus can find the optimal resolution for each interval. In the empirical part, we test this model on Chinese stock investments and find that this model can fulfill different kinds of investors' objectives. Finally, investment risk can be decreased when we add an investment limit to each stock in the portfolio, which indicates our model is useful in practice.
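
    A small sketch of the interval idea on toy three-asset data (covariances, return intervals, and the risk-aversion weight are invented): solving the mean-variance problem at each end of the expected-return intervals brackets the optimal weights for pessimistic versus optimistic investors.

```python
import numpy as np
from scipy.optimize import minimize

cov = np.array([[0.040, 0.006, 0.002],
                [0.006, 0.090, 0.010],
                [0.002, 0.010, 0.160]])
r_lo = np.array([0.04, 0.07, 0.09])       # lower ends of the return intervals
r_hi = np.array([0.08, 0.13, 0.20])       # upper ends of the return intervals
risk_aversion = 3.0

def solve_mv(mu):
    # Maximize mu'w - lambda * w'Cov w subject to full investment, no shorting.
    obj = lambda w: risk_aversion * w @ cov @ w - mu @ w
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    return minimize(obj, np.ones(3) / 3, bounds=[(0, 1)] * 3, constraints=cons).x

w_pess, w_opt = solve_mv(r_lo), solve_mv(r_hi)
print("pessimistic-end weights:", w_pess.round(3))
print("optimistic-end weights: ", w_opt.round(3))
```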

  4. Intelligence and the brain: a model-based approach

    NARCIS (Netherlands)

    Kievit, R.A.; van Rooijen, H.; Wicherts, J.M.; Waldorp, L.J.; Kan, K.-J.; Scholte, H.S.; Borsboom, D.

    2012-01-01

    Various biological correlates of general intelligence (g) have been reported. Despite this, however, the relationship between neurological measurements and g is not fully clear. We use structural equation modeling to model the relationship between behavioral Wechsler Adult Intelligence Scale (WAIS)

  5. Exploring component-based approaches in forest landscape modeling

    Science.gov (United States)

    H. S. He; D. R. Larsen; D. J. Mladenoff

    2002-01-01

    Forest management issues are increasingly required to be addressed in a spatial context, which has led to the development of spatially explicit forest landscape models. The numerous processes, complex spatial interactions, and diverse applications in spatial modeling make the development of forest landscape models difficult for any single research group. New...

Battery Performance Modelling and Simulation: A Neural Network Based Approach

    Science.gov (United States)

    Ottavianelli, Giuseppe; Donati, Alessandro

    2002-01-01

This project has developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network (ANN) tool able to model, simulate and predict the Cluster II battery system's performance degradation. (The Cluster II mission is made of four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the Sun and the Earth by passing in and out of our planet's magnetic field.) This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short- and medium-term mission planning in order to improve and maximise the batteries' lifetime, determining the future best charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five silver-cadmium batteries onboard Tango, the fourth Cluster II satellite, but time constraints have so far allowed an assessment only of the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise historical data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, and this is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections given new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg

  7. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  8. A Novel GMM-Based Behavioral Modeling Approach for Smartwatch-Based Driver Authentication

    Directory of Open Access Journals (Sweden)

    Ching-Han Yang

    2018-03-01

Full Text Available All drivers have their own distinct driving habits, and usually hold and operate the steering wheel differently in different driving scenarios. In this study, we proposed a novel Gaussian mixture model (GMM)-based method that can improve the traditional GMM in modeling driving behavior. This new method can be applied to build a better driver authentication system based on the accelerometer and orientation sensor of a smartwatch. To demonstrate the feasibility of the proposed method, we created an experimental system that analyzes driving behavior using the built-in sensors of a smartwatch. The experimental results for driver authentication - an equal error rate (EER) of 4.62% in the simulated environment and an EER of 7.86% in the real-traffic environment - confirm the feasibility of this approach.
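
    A hedged sketch of the enrollment/verification loop on synthetic features (the paper's feature extraction from smartwatch accelerometer and orientation signals, and its GMM refinements, are not reproduced): enroll a driver with a GMM, score new windows by log-likelihood, and sweep a threshold to locate the equal error rate.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
d = 6                                     # feature dimension per sensor window
enroll = rng.normal(0.0, 1.0, (2000, d))  # genuine driver, enrollment data
genuine = rng.normal(0.0, 1.0, (200, d))  # genuine test windows
impostor = rng.normal(0.8, 1.2, (200, d)) # windows from other drivers

gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(enroll)

s_gen = gmm.score_samples(genuine)        # per-window log-likelihoods
s_imp = gmm.score_samples(impostor)

# EER: threshold where false-accept and false-reject rates cross.
ts = np.linspace(min(s_imp.min(), s_gen.min()), max(s_imp.max(), s_gen.max()), 1000)
far = np.array([(s_imp >= t).mean() for t in ts])
frr = np.array([(s_gen < t).mean() for t in ts])
i = np.argmin(np.abs(far - frr))
print(f"EER ~ {(far[i] + frr[i]) / 2:.2%} at threshold {ts[i]:.2f}")
```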

  9. Physiology-based modelling approaches to characterize fish habitat suitability

    NARCIS (Netherlands)

    Teal, L.R.; Marras, Stefano; Peck, M.A.; Domenici, Paolo

    2018-01-01

    Models are useful tools for predicting the impact of global change on species distribution and abundance. As ectotherms, fish are being challenged to adapt or track changes in their environment, either in time through a phenological shift or in space by a biogeographic shift. Past modelling efforts

  10. Eco-Logic: Logic-Based Approaches to Ecological Modelling

    Science.gov (United States)

    Daniel L. Schmoldt

    1991-01-01

    This paper summarizes the simulation research carried out during 1984-1989 at the University of Edinburgh. Two primary objectives of their research are 1) to provide tools for manipulating simulation models (i.e., implementation tools) and 2) to provide advice on conceptualizing real-world phenomena into an idealized representation for simulation (i.e., model design...

  11. Approaches to analysis in model-based cognitive neuroscience

    NARCIS (Netherlands)

    Turner, B.M.; Forstmann, B.U.; Love, B.C.; Palmeri, T.J.; Van Maanen, L.

    Our understanding of cognition has been advanced by two traditionally non-overlapping and non-interacting groups. Mathematical psychologists rely on behavioral data to evaluate formal models of cognition, whereas cognitive neuroscientists rely on statistical models to understand patterns of neural

  12. Pattern-based approach for logical traffic isolation forensic modelling

    CSIR Research Space (South Africa)

    Dlamini, I

    2009-08-01

Full Text Available … reusability and flexibility of the LTI model. This model is viewed as a three-tier architecture, which for experimental purposes is composed of the following components: a traffic generator, a DiffServ network and the sink server. The Mediator pattern is used...

  13. Behavior-based network management: a unique model-based approach to implementing cyber superiority

    Science.gov (United States)

    Seng, Jocelyn M.

    2016-05-01

Behavior-Based Network Management (BBNM) is a technological and strategic approach to mastering the identification and assessment of network behavior, whether human-driven or machine-generated. Recognizing that all five U.S. Air Force (USAF) mission areas rely on the cyber domain to support, enhance and execute their tasks, BBNM is designed to elevate awareness and improve the ability to better understand the degree of reliance placed upon a digital capability and the operational risk. Thus, the objective of BBNM is to provide a holistic view of the digital battle space to better assess the effects of security, monitoring, provisioning, utilization management, and allocation to support mission sustainment and change control. Leveraging advances in conceptual modeling made possible by a novel advancement in software design and implementation known as Vector Relational Data Modeling (VRDM™), the BBNM approach entails creating a network simulation in which meaning can be inferred and used to manage network behavior according to policy, such as quickly detecting and countering malicious behavior. Initial research configurations have yielded executable BBNM models as combinations of conceptualized behavior within a network management simulation that includes only concepts of threats and definitions of "good" behavior. A proof-of-concept assessment called "Lab Rat" was designed to demonstrate the simplicity of network modeling and the ability to perform adaptation. The model was tested on real-world threat data and demonstrated adaptive and inferential learning behavior. Preliminary results indicate this is a viable approach towards achieving cyber superiority in today's volatile, uncertain, complex and ambiguous (VUCA) environment.

  14. Excellent approach to modeling urban expansion by fuzzy cellular automata: agent base model

    Science.gov (United States)

    Khajavigodellou, Yousef; Alesheikh, Ali A.; Mohammed, Abdulrazak A. S.; Chapi, Kamran

    2014-09-01

Recently, the interaction between humans and their environment has become one of the most important challenges in the world. Land use/cover change (LUCC) is a complex process that includes actors and factors at different social and spatial levels. The complexity and dynamics of urban systems make the practice of urban modeling very difficult. With increased computational power and the greater availability of spatial data, micro-simulation methods such as agent-based and cellular automata simulation have been developed by geographers, planners, and scholars, and have shown great potential for representing and simulating the complexity of the dynamic processes involved in urban growth and land use change. This paper presents fuzzy cellular automata in a geospatial information system with remote sensing to simulate and predict urban expansion patterns. These FCA-based dynamic spatial urban models provide an improved ability to forecast and assess future urban growth and to create planning scenarios, allowing us to explore the potential impacts of simulations that correspond to urban planning and management policies. A fuzzy-inference-guided cellular automata approach is adopted: semantic or linguistic knowledge of land use change is expressed as fuzzy rules, based on which fuzzy inference is applied to determine the urban development potential for each pixel. The model integrates an ABM (agent-based model) and FCA (fuzzy cellular automata) to investigate a complex decision-making process and future urban dynamic processes. Based on this model, rapid development and green-land protection under the influence of the behaviors and decision modes of regional authority agents, real estate developer agents, resident agents and non-resident agents and their interactions have been applied to predict the future development patterns of the Erbil metropolitan region.
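
    One fuzzy-CA update step can be sketched as follows (grid size, membership functions, rules, and the conversion quota are all invented): fuzzy rules over neighbourhood urbanization and road proximity give each cell a development potential, and the highest-potential cells convert to urban.

```python
import numpy as np

rng = np.random.default_rng(12)
size = 50
urban = rng.random((size, size)) < 0.05   # initial urban seed cells
road_dist = rng.random((size, size))      # normalized distance to nearest road

def neighbourhood_density(u):
    # Urban fraction in the 3x3 neighbourhood (cell itself included).
    pad = np.pad(u.astype(float), 1)
    return sum(pad[1 + dy:size + 1 + dy, 1 + dx:size + 1 + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def fuzzy_potential(density, dist):
    dense = np.clip(density / 0.5, 0.0, 1.0)     # membership: "dense neighbourhood"
    close = np.clip(1.0 - dist / 0.6, 0.0, 1.0)  # membership: "close to road"
    return np.minimum(dense, close)    # rule: IF dense AND close THEN high potential

for _ in range(10):
    pot = fuzzy_potential(neighbourhood_density(urban), road_dist)
    pot[urban] = 0.0                      # already-urban cells stay urban
    urban |= pot >= np.quantile(pot, 0.99)  # convert the top 1% most suitable cells

print(f"urban fraction after 10 steps: {urban.mean():.2%}")
```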

  15. Accrual based accounting implementation: An approach for modelling major decisions

    Directory of Open Access Journals (Sweden)

    Ratno Agriyanto

    2016-12-01

    Full Text Available Over the last three decades, the implementation of accrual-based accounting in government institutions has been a central issue in Indonesia. Implementation has proceeded amid debate about the usefulness of accounting information for decision-making, and empirical studies show that accrual-based accounting information in government institutions is often not used for decision making. The research objective was to determine the impact of implementing accrual-based accounting on the use of accrual accounting information as a basis for decision-making. We used survey questionnaires, and the data were processed by SEM using the statistical software WarpPLS. The results show that the implementation of accrual-based accounting in the City Government of Semarang is significantly and positively associated with decision-making. Another important finding is that officials of the City Government of Semarang with a low tolerance of ambiguity show a negative effect on the relationship between the implementation of accrual-based accounting and decision making.

  16. A crop model-based approach for sunflower yields

    Directory of Open Access Journals (Sweden)

    João Guilherme Dal Belo Leite

    2014-10-01

    Full Text Available Pushed by the Brazilian biodiesel policy, sunflower (Helianthus annuus L.) production is becoming increasingly regarded as an option to boost farmers' income, particularly under semi-arid conditions. Biodiesel-related opportunities increase the demand for decision-making information at different levels, which could be met by simulation models. This study aimed to evaluate the performance of the crop model OILCROP-SUN to simulate sunflower development and growth under Brazilian conditions and to explore sunflower water- and nitrogen-limited, water-limited and potential yield and yield variability over an array of sowing dates in the northern region of the state of Minas Gerais, Brazil. For model calibration, an experiment was conducted in which two sunflower genotypes (H358 and E122) were cultivated in a clayey soil. Growth components (leaf area index, above-ground biomass, grain yield) and development stages (crop phenology) were measured. A database composed of 27 sunflower experiments from five Brazilian regions was used for model evaluation. The spatial yield distribution of sunflower was mapped using ordinary kriging in ArcGIS. The model simulated sunflower grain productivity satisfactorily (Root Mean Square Error ≈ 13 %). Simulated yields were relatively high (1,750 to 4,250 kg ha-1) and the sowing window was fairly wide (Oct to Feb) for northwestern locations, where sunflower could be cultivated as a second crop (double cropping) at the end of the rainy season. The hybrid H358 had higher yields for all simulated sowing dates, growth conditions and selected locations.
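
    As a small worked example of the goodness-of-fit measure reported above, a relative RMSE can be computed as follows; the yield values are invented for illustration only:

```python
import numpy as np

# Hypothetical observed vs. simulated grain yields (kg/ha); illustrative values only.
observed = np.array([2100.0, 2800.0, 3500.0, 1900.0, 4100.0])
simulated = np.array([2300.0, 2600.0, 3700.0, 2100.0, 3800.0])

rmse = np.sqrt(np.mean((simulated - observed) ** 2))
relative_rmse = 100.0 * rmse / observed.mean()   # a "RMSE ~ 13 %" figure is of this kind
print(f"RMSE = {rmse:.0f} kg/ha ({relative_rmse:.1f} %)")
```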

  17. Model-Based approaches to Human-Automation Systems Design

    DEFF Research Database (Denmark)

    Jamieson, Greg A.; Andersson, Jonas; Bisantz, Ann

    2012-01-01

    Human-automation interaction in complex systems is common, yet design for this interaction is often conducted without explicit consideration of the role of the human operator. Fortunately, there are a number of modeling frameworks proposed for supporting this design activity. However, the frameworks are often adapted from other purposes, usually applied to a limited range of problems, sometimes not fully described in the open literature, and rarely critically reviewed in a manner acceptable to proponents and critics alike. The present paper introduces a panel session wherein these proponents (and reportedly one or two critics) can engage one another on several agreed questions about such frameworks. The goal is to aid non-aligned practitioners in choosing between alternative frameworks for their human-automation interaction design challenges.

  18. Oscillator-based assistance of cyclical movements: model-based and model-free approaches

    NARCIS (Netherlands)

    Ronsse, Renaud; Lenzi, Tommaso; Vitiello, Nicola; Koopman, Bram; van Asseldonk, Edwin H.F.; de Rossi, Stefano Marco Maria; van den Kieboom, Jesse; van der Kooij, Herman; Carozza, Maria Chiara; IJspeert, Auke Jan

    2011-01-01

    In this article, we propose a new method for providing assistance during cyclical movements. This method is trajectory-free, in the sense that it provides user assistance irrespective of the performed movement, and requires no other sensing than the assisting robot’s own encoders. The approach is...

  19. An enhanced matrix-free edge-based finite volume approach to model structures

    CSIR Research Space (South Africa)

    Suliman, Ridhwaan

    2010-01-01

    Full Text Available This paper presents the formulation, implementation and evaluation of an enhanced matrix free edge-based finite volume approach to model the mechanics of solids undergoing large non-linear deformations. The developed technology is evaluated via...

  20. Block-based approach to modelling of granulated fertilizers' quality

    DEFF Research Database (Denmark)

    Kohonen, J.; Reinikainen, S. P.; Høskuldsson, Agnar

    2009-01-01

    ... be defined through testing the flow rate with, e.g., a seed drill. Besides the chemical composition, flowability can be considered one of the most important characteristics. There are numerous factors affecting the flowability of a granulated fertilizer, several of them related to the particle size distribution. The particle size distribution of the granulated product has to be within the customer specification, but it is also a highly significant factor for quality, especially in the presence of moisture. This can affect numerous phenomena, inter alia, agglomeration and aggregation of granules and Ostwald ripening ... size distribution. The goals are to find a reliable model for flowability using this data, to find the most important variables, and to identify the effect of the blocks on quality...

  2. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in buildings is one of the key issues from an environmental point of view, as it is for the industrial, transportation and residential sectors. HVAC (Heating, Ventilating and Air Conditioning) systems account for about half of the total energy consumption of a building. In order to realize energy conservation in HVAC systems, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach combining a physical and a Just-in-Time (JIT) model for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable to cases in which past operation data for load prediction model learning are scarce; (2) it has a self-checking function that continuously supervises whether the data-driven load prediction and the physics-based one are consistent, so it can detect when something is wrong in the load prediction procedure; and (3) it can adjust the load prediction in real time against sudden changes of model parameters and environmental conditions. The proposed method is evaluated with real operation data of an existing building, and the improvement in load prediction performance is illustrated.

  3. A regularized, model-based approach to phase-based conductivity mapping using MRI.

    Science.gov (United States)

    Ropella, Kathleen M; Noll, Douglas C

    2017-11-01

    To develop a novel regularized, model-based approach to phase-based conductivity mapping that uses structural information to improve the accuracy of conductivity maps. The inverse of the three-dimensional Laplacian operator is used to model the relationship between measured phase maps and the object conductivity in a penalized weighted least-squares optimization problem. Spatial masks based on structural information are incorporated into the problem to preserve data near boundaries. The proposed Inverse Laplacian method was compared against a restricted Gaussian filter in simulation, phantom, and human experiments. The Inverse Laplacian method resulted in lower reconstruction bias and error due to noise in simulations than the Gaussian filter. The Inverse Laplacian method also produced conductivity maps closer to the measured values in a phantom and with reduced noise in the human brain, as compared to the Gaussian filter. The Inverse Laplacian method calculates conductivity maps with less noise and more accurate values near boundaries. Improving the accuracy of conductivity maps is integral for advancing the applications of conductivity mapping. Magn Reson Med 78:2011-2021, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
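
    The optimization structure described here, a penalized weighted least-squares problem, can be sketched in one dimension as follows. The forward operator, weights, and penalty below are illustrative stand-ins, not the paper's 3-D inverse-Laplacian MRI formulation:

```python
import numpy as np

n = 64
x_true = np.sin(np.linspace(0, 3 * np.pi, n))   # "conductivity" ground truth

A = np.tril(np.ones((n, n))) / n                # toy smoothing forward model
rng = np.random.default_rng(1)
b = A @ x_true + 0.01 * rng.standard_normal(n)  # noisy "phase" data

W = np.eye(n)                                   # data weights (inverse noise covariance)
R = np.eye(n) - np.eye(n, k=1)                  # first-difference roughness penalty
lam = 1e-2                                      # regularization strength

# Normal equations (A^T W A + lam R^T R) x = A^T W b, solved by conjugate gradients.
H = A.T @ W @ A + lam * R.T @ R
g = A.T @ W @ b

x = np.zeros(n)
r = g - H @ x
p = r.copy()
for _ in range(500):
    Hp = H @ p
    alpha = (r @ r) / (p @ Hp)
    x = x + alpha * p
    r_new = r - alpha * Hp
    if np.linalg.norm(r_new) < 1e-10:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new

print("relative reconstruction error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```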

  4. A model-based systems approach to pharmaceutical product-process design and analysis

    DEFF Research Database (Denmark)

    Gernaey, Krist; Gani, Rafiqul

    2010-01-01

    This is a perspective paper highlighting the need for systematic model-based design and analysis in pharmaceutical product-process development. A model-based framework is presented and the role, development and use of models of various types are discussed together with the structure of the models for the product and the process. The need for a systematic modelling framework is highlighted together with modelling issues related to model identification, adaptation and extension. In the area of product design and analysis, predictive models are needed with a wide application range. In the area of process synthesis and design, the use of generic process models from which specific process models can be generated is highlighted. The use of a multi-scale modelling approach to extend the application range of the property models is highlighted as well. Examples of different types of process models, model...

  5. Selection bias in species distribution models: An econometric approach on forest trees based on structural modeling

    Science.gov (United States)

    Martin-StPaul, N. K.; Ay, J. S.; Guillemot, J.; Doyen, L.; Leadley, P.

    2014-12-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global changes on species. In human-dominated ecosystems the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forest trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the outputs of the SSDM with outputs of a classical SDM (i.e. Biomod ensemble modelling) in terms of bioclimatic response curves and potential distributions under current climate and climate change scenarios. The shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs, with contrasting patterns according to species and spatial resolutions. The magnitude and direction of these differences depended on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to severe mis-estimation of the modelled actual and future probability of presence. Beyond this selection bias, the SSDM we propose represents...

  6. New approaches in agent-based modeling of complex financial systems

    Science.gov (United States)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2017-12-01

    Agent-based modeling is a powerful simulation technique to understand the collective behavior and microscopic interaction in complex financial systems. Recently, the concept for determining the key parameters of agent-based models from empirical data instead of setting them artificially was suggested. We first review several agent-based models and the new approaches to determine the key model parameters from historical market data. Based on the agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origination of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.
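
    The flavor of such models can be conveyed by a deliberately minimal herding sketch; in the reviewed work the key parameters would be determined from empirical market data rather than set by hand as they are here:

```python
import numpy as np

rng = np.random.default_rng(42)
n_agents, n_steps = 1000, 2000
state = rng.choice([-1, 1], size=n_agents)     # +1 = buy, -1 = sell
herding, noise = 0.35, 0.05                    # illustrative; would be fitted to data

returns = []
for _ in range(n_steps):
    m = state.mean()                           # current market sentiment
    # an agent follows the majority with probability `herding` ...
    follow = rng.random(n_agents) < herding
    majority = 1 if m >= 0 else -1
    state = np.where(follow, majority, state)
    # ... and occasionally flips on private noise
    flip = rng.random(n_agents) < noise
    state = np.where(flip, -state, state)
    returns.append(state.mean() - m)           # excess demand proxies the price change

returns = np.array(returns)
excess_kurtosis = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3
print(f"excess kurtosis of simulated returns: {excess_kurtosis:.2f}")
```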

  7. A security modeling approach for web-service-based business processes

    DEFF Research Database (Denmark)

    Jensen, Meiko; Feja, Sven

    2009-01-01

    The rising need for security in SOA applications requires better support for management of non-functional properties in web-based business processes. Here, the model-driven approach may provide valuable benefits in terms of maintainability and deployment. Apart from modeling the pure functionality of a process, the consideration of security properties at the level of a process model is a promising approach. In this work-in-progress paper we present an extension to the ARIS SOA Architect that is capable of modeling security requirements as a separate security model view. Further, we provide...

  8. Agent-based modeling: a new approach for theory building in social psychology.

    Science.gov (United States)

    Smith, Eliot R; Conrey, Frederica R

    2007-02-01

    Most social and psychological phenomena occur not as the result of isolated decisions by individuals but rather as the result of repeated interactions between multiple individuals over time. Yet the theory-building and modeling techniques most commonly used in social psychology are less than ideal for understanding such dynamic and interactive processes. This article describes an alternative approach to theory building, agent-based modeling (ABM), which involves simulation of large numbers of autonomous agents that interact with each other and with a simulated environment and the observation of emergent patterns from their interactions. The authors believe that the ABM approach is better able than prevailing approaches in the field, variable-based modeling (VBM) techniques such as causal modeling, to capture types of complex, dynamic, interactive processes so important in the social world. The article elaborates several important contrasts between ABM and VBM and offers specific recommendations for learning more and applying the ABM approach.

  9. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and other man-made features belonging to urban areas. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, and close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution available to create a complete 3D city model from images, and these image-based methods have limitations. This paper gives a new approach towards image-based virtual 3D city modeling by using close range photogrammetry. The approach is divided into three sections: the data acquisition process, 3D data processing, and the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding to and merging with other pieces of the larger area; scaling and alignment of the 3D model were done, and after applying texturing and rendering, a final photo-realistic textured 3D model was created. This 3D model can be transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost-effective and less laborious, and the accuracy of the model is good. For this research work, the study area was the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries...

  10. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems

    Directory of Open Access Journals (Sweden)

    Lenardo C. Silva

    2015-10-01

    Full Text Available Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.

  12. An Array-based Approach to Modelling Production Management System Architectures

    DEFF Research Database (Denmark)

    Falster, Peter

    2000-01-01

    Several proposals to a conceptual framework for production management architecture are briefly reviewed. It is suggested that an array-based approach and a classic engineering-economic model, is used as tools for a conceptualisation of ideas. Traditional architectural design is usually based...

  13. A model-based approach to associate complexity and robustness in engineering systems

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; D. Frey, Daniel; Howard, Thomas J.

    2017-01-01

    Ever increasing functionality and complexity of products and systems challenge development companies in achieving high and consistent quality. A model-based approach is used to investigate the relationship between system complexity and system robustness. The measure for complexity is based...

  14. Spatial pattern evaluation of a calibrated national hydrological model - a remote-sensing-based diagnostic approach

    Science.gov (United States)

    Mendiguren, Gorka; Koch, Julian; Stisen, Simon

    2017-11-01

    Distributed hydrological models are traditionally evaluated against discharge stations, emphasizing the temporal and neglecting the spatial component of a model. The present study widens the traditional paradigm by highlighting spatial patterns of evapotranspiration (ET), a key variable at the land-atmosphere interface, obtained from two different approaches at the national scale of Denmark. The first approach is based on a national water resources model (DK-model), using the MIKE-SHE model code, and the second approach utilizes a two-source energy balance model (TSEB) driven mainly by satellite remote sensing data. Ideally, the hydrological model simulation and remote-sensing-based approach should present similar spatial patterns and driving mechanisms of ET. However, the spatial comparison showed that the differences are significant and indicate insufficient spatial pattern performance of the hydrological model. The differences in spatial patterns can partly be explained by the fact that the hydrological model is configured to run in six domains that are calibrated independently from each other, as is often the case for large-scale multi-basin calibrations. Furthermore, the model incorporates predefined temporal dynamics of leaf area index (LAI), root depth (RD) and crop coefficient (Kc) for each land cover type. This zonal approach of model parameterization ignores the spatiotemporal complexity of the natural system. To overcome this limitation, this study features a modified version of the DK-model in which LAI, RD and Kc are empirically derived using remote sensing data and detailed soil property maps in order to generate a higher degree of spatiotemporal variability and spatial consistency between the six domains. The effects of these changes are analyzed by using empirical orthogonal function (EOF) analysis to evaluate spatial patterns. The EOF analysis shows that including remote-sensing-derived LAI, RD and Kc in the distributed hydrological model adds...
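
    The EOF analysis used for the spatial pattern evaluation reduces, in its simplest form, to a singular value decomposition of the space-time anomaly matrix. A minimal sketch with synthetic data standing in for the ET maps:

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_cells = 120, 500
et_maps = rng.standard_normal((n_time, n_cells))   # stand-in for monthly ET maps (time x space)

anomalies = et_maps - et_maps.mean(axis=0)         # remove the temporal mean per grid cell
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)

eofs = Vt                                # spatial patterns (EOFs), one per row
pcs = U * s                              # principal-component time series
explained = s**2 / np.sum(s**2)          # fraction of variance per EOF

print("variance explained by first 3 EOFs:", explained[:3].round(3))
```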

  15. Comparison of individual-based modeling and population approaches for prediction of foodborne pathogens growth.

    Science.gov (United States)

    Augustin, Jean-Christophe; Ferrier, Rachel; Hezard, Bernard; Lintz, Adrienne; Stahl, Valérie

    2015-02-01

    The individual-based modeling (IBM) approach, combined with microenvironment modeling of vacuum-packed cold-smoked salmon, was more effective in describing the variability of the growth of a few Listeria monocytogenes cells contaminating irradiated salmon slices than the traditional population models. The IBM approach was particularly relevant to predict the absence of growth in 25% (5 among 20) of artificially contaminated cold-smoked salmon samples stored at 8 °C. These results confirmed similar observations obtained with smear soft cheese (Ferrier et al., 2013). These two different food models were used to compare the IBM/microscale and population/macroscale modeling approaches in more global exposure and risk assessment frameworks taking into account the variability and/or the uncertainty of the factors influencing the growth of L. monocytogenes. We observed that the traditional population models significantly overestimate exposure and risk estimates in comparison to the IBM approach when contamination of foods occurs with a low number of cells (…); estimates derived from the population model were characterized by a great uncertainty. The overestimation was mainly linked to the ability of IBM to predict no-growth situations rather than to the consideration of the microscale environment. On the other hand, when the aim of quantitative risk assessment studies is only to assess the relative impact of changes in control measures affecting the growth of foodborne bacteria, the two modeling approaches gave similar results and the simpler population approach was suitable. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Multirule Based Diagnostic Approach for the Fog Predictions Using WRF Modelling Tool

    Directory of Open Access Journals (Sweden)

    Swagata Payra

    2014-01-01

    Full Text Available The prediction of fog onset remains difficult despite progress in numerical weather prediction. Fog formation is a complex process and requires adequate representation of local perturbations in weather prediction models; it mainly depends upon microphysical and mesoscale processes that act within the boundary layer. This study utilizes a multirule based diagnostic (MRD) approach using postprocessing of model simulations for fog prediction. The empiricism involved in this approach mainly bridges the gap between mesoscale and microscale variables, which are related to the mechanism of fog formation. Fog occurrence is a common phenomenon during the winter season over Delhi, India, associated with the passage of western disturbances across the northwestern part of the country accompanied by significant amounts of moisture. This study implements the above-cited approach for the prediction of fog occurrence and its onset time over Delhi. For this purpose, the high-resolution Weather Research and Forecasting (WRF) model is used for fog simulations. The study involves model validation and postprocessing of the model simulations for the MRD approach and its subsequent application to fog prediction. Through this approach the model identified foggy and nonfoggy days successfully 94% of the time. Further, the onset of fog events is well captured within an accuracy of 30–90 minutes. This study demonstrates that the multirule based postprocessing approach is a useful and highly promising tool for improving fog predictions.
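
    A schematic of what a multirule diagnostic applied to postprocessed model output can look like; the thresholds and variables below are illustrative guesses, not the paper's calibrated rules:

```python
# Schematic multirule fog diagnostic applied to postprocessed NWP output.
# Thresholds and variable names are illustrative only.

def fog_expected(rh2m, wind10m, t2m, td2m, cloud_base_m):
    """Return True when all rules point to radiation-fog formation."""
    rules = [
        rh2m >= 95.0,               # near-saturated surface layer (%)
        wind10m <= 3.0,             # calm winds (m/s) allow a stable layer
        (t2m - td2m) <= 1.0,        # small temperature/dew-point spread (K)
        cloud_base_m < 100.0,       # saturation confined near the surface
    ]
    return all(rules)

# hourly model output for one night (illustrative values)
hours = [(96.0, 1.2, 279.0, 278.5, 50.0), (90.0, 4.0, 281.0, 277.0, 800.0)]
for h, row in enumerate(hours):
    print(f"hour {h}: fog expected -> {fog_expected(*row)}")
```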

  17. A Nonparametric Operational Risk Modeling Approach Based on Cornish-Fisher Expansion

    Directory of Open Access Journals (Sweden)

    Xiaoqian Zhu

    2014-01-01

    Full Text Available It is generally accepted that the choice of severity distribution in the loss distribution approach has a significant effect on the operational risk capital estimation. However, the commonly used parametric approaches with predefined distribution assumptions might not be able to fit the severity distribution accurately. The objective of this paper is to propose a nonparametric operational risk modeling approach based on the Cornish-Fisher expansion. In this approach, samples of severity are generated by the Cornish-Fisher expansion and then used in a Monte Carlo simulation to sketch the annual operational loss distribution. In the experiment, the proposed approach is employed to calculate the operational risk capital charge for the overall Chinese banking sector. The experimental dataset is the most comprehensive operational risk dataset in China as far as we know. The results show that the proposed approach is able to use the information of higher-order moments and might be more effective and stable than the usual parametric approach.
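
    The core of the approach, sampling severities via the Cornish-Fisher expansion inside a Monte Carlo frequency/severity loop, can be sketched as follows; the moments, the Poisson frequency, and the clipping at zero are illustrative assumptions:

```python
import numpy as np

def cornish_fisher(z, skew, ex_kurt):
    """Adjust standard-normal quantiles for skewness and excess kurtosis."""
    return (z + (z**2 - 1) * skew / 6
              + (z**3 - 3 * z) * ex_kurt / 24
              - (2 * z**3 - 5 * z) * skew**2 / 36)

rng = np.random.default_rng(7)
mu, sigma, skew, ex_kurt = 10.0, 3.0, 1.2, 4.0   # illustrative severity moments
lam = 25                                         # mean number of loss events per year

n_years = 20_000
annual_losses = np.empty(n_years)
for i in range(n_years):
    n_events = rng.poisson(lam)
    z = rng.standard_normal(n_events)
    severities = np.maximum(mu + sigma * cornish_fisher(z, skew, ex_kurt), 0.0)
    annual_losses[i] = severities.sum()

capital = np.quantile(annual_losses, 0.999)      # capital charge at the 99.9 % level
print(f"99.9 % annual-loss quantile: {capital:.1f}")
```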

  18. Effective modelling of percolation at the landscape scale using data-based approaches

    Science.gov (United States)

    Selle, Benny; Lischeid, Gunnar; Huwe, Bernd

    2008-06-01

    Process-based models have been extensively applied to assess the impact of land use change on water quantity and quality at landscape scales. However, the routine application of those models suffers from large computational efforts, lack of transparency and the requirement of many input parameters. Data-based models such as Feed-Forward Multilayer Perceptrons (MLP) and Classification and Regression Trees (CART) may be used as effective models, i.e. simple approximations of complex process-based models. These data-based approaches can subsequently be applied for scenario analysis and as a transparent management tool provided climatic boundary conditions and the basic model assumptions of the process-based models do not change dramatically. In this study, we apply MLP, CART and Multiple Linear Regression (LR) to model the spatially distributed and spatially aggregated percolation in soils using weather, groundwater and soil data. The percolation data are obtained via numerical experiments with Hydrus1D. Thus, the complex process-based model is approximated using simpler data-based approaches. The MLP model explains most of the percolation variance in time and space without using any soil information. This reflects the effective dimensionality of the process-based model and suggests that percolation in the study area may be modelled much more simply than with Hydrus1D. The CART model shows that soil properties play a negligible role for percolation under wet climatic conditions; however, they become more important if the conditions turn drier. The LR method does not yield satisfactory predictions for the spatially distributed percolation; however, the spatially aggregated percolation is well approximated. This may indicate that the soils behave more simply (i.e. more linearly) when percolation dynamics are upscaled.
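
    A minimal sketch of fitting an MLP as an effective model of a process-based simulator; synthetic data stand in for the Hydrus1D experiments, and all variable ranges are illustrative:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.uniform(300, 1200, n),   # annual precipitation (mm)
    rng.uniform(2, 12, n),       # mean groundwater depth (m)
    rng.uniform(5, 45, n),       # soil clay content (%)
])
# synthetic "simulator" output: nonlinear in rainfall, damped by clay content
y = 0.4 * X[:, 0] * np.exp(-0.03 * X[:, 2]) + 10 * X[:, 1] + rng.normal(0, 15, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
effective_model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0),
)
effective_model.fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(effective_model.score(X_te, y_te), 3))
```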

  19. Physiology-based modelling approaches to characterize fish habitat suitability: Their usefulness and limitations

    Science.gov (United States)

    Teal, Lorna R.; Marras, Stefano; Peck, Myron A.; Domenici, Paolo

    2018-02-01

    Models are useful tools for predicting the impact of global change on species distribution and abundance. As ectotherms, fish are being challenged to adapt or track changes in their environment, either in time through a phenological shift or in space by a biogeographic shift. Past modelling efforts have largely been based on correlative Species Distribution Models, which use known occurrences of species across landscapes of interest to define sets of conditions under which species are likely to maintain populations. The practical advantages of this correlative approach are its simplicity and the flexibility in terms of data requirements. However, effective conservation management requires models that make projections beyond the range of available data. One way to deal with such an extrapolation is to use a mechanistic approach based on physiological processes underlying climate change effects on organisms. Here we illustrate two approaches for developing physiology-based models to characterize fish habitat suitability. (i) Aerobic Scope Models (ASM) are based on the relationship between environmental factors and aerobic scope (defined as the difference between maximum and standard (basal) metabolism). This approach is based on experimental data collected by using a number of treatments that allow a function to be derived to predict aerobic metabolic scope from the stressor/environmental factor(s). This function is then integrated with environmental (oceanographic) data of current and future scenarios. For any given species, this approach allows habitat suitability maps to be generated at various spatiotemporal scales. The strength of the ASM approach relies on the estimate of relative performance when comparing, for example, different locations or different species. (ii) Dynamic Energy Budget (DEB) models are based on first principles including the idea that metabolism is organised in the same way within all animals. The (standard) DEB model aims to describe...
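
    A minimal sketch of the Aerobic Scope Model idea: represent standard and maximum metabolic rates as functions of temperature, take their difference, and normalise it into a habitat suitability index. The functional forms and coefficients are illustrative, not fitted to any species:

```python
import numpy as np

def smr(temp_c):
    """Standard (basal) metabolic rate, rising exponentially with temperature."""
    return 30.0 * np.exp(0.07 * temp_c)

def mmr(temp_c):
    """Maximum metabolic rate: rises, peaks, then collapses near thermal limits."""
    return 400.0 * np.exp(-((temp_c - 18.0) / 9.0) ** 2)

temps = np.linspace(0, 30, 121)                  # candidate habitat temperatures (deg C)
scope = np.maximum(mmr(temps) - smr(temps), 0)   # aerobic scope, floored at zero
suitability = scope / scope.max()                # 0-1 habitat suitability index

print(f"optimal temperature ~ {temps[suitability.argmax()]:.1f} deg C")
```

    Combining such a suitability curve with gridded ocean temperature scenarios would yield the habitat suitability maps the record describes.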

  1. Hierarchical Agent-Based Integrated Modelling Approach for Microgrids with Adoption of EVs and HRES

    Directory of Open Access Journals (Sweden)

    Peng Han

    2014-01-01

    Full Text Available The large-scale adoption of electric vehicles (EVs) and hybrid renewable energy systems (HRESs), together with increasing loads, will bring significant challenges to the microgrid. A methodology for modeling microgrids with high EV and HRES penetration is the key to EV adoption assessment and optimized HRES deployment. However, considering the complex interactions of a microgrid containing massive numbers of EVs and HRESs, any previous single modelling approach is insufficient. Therefore, in this paper, a methodology named the Hierarchical Agent-based Integrated Modelling Approach (HAIMA) is proposed. With the effective integration of agent-based modelling with other advanced modelling approaches, the proposed approach theoretically contributes a new microgrid model hierarchically constituted by a microgrid management layer, a component layer, and an event layer. HAIMA then links the key parameters and interconnects them to achieve the interactions of the whole model. Furthermore, HAIMA practically contributes a comprehensive microgrid operation system, through which the assessment of the proposed model and the impact of EV adoption are achieved. Simulations show that the proposed HAIMA methodology will be beneficial for microgrid studies and EV operation assessment and shall be further utilized for energy management, electricity consumption prediction, EV scheduling control, and HRES deployment optimization.

  2. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models accounting for the beam polychromaticity show great potential for producing accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Following Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also...

  3. Symbolic Solution Approach to Wind Turbine based on Doubly Fed Induction Generator Model

    DEFF Research Database (Denmark)

    Cañas–Carretón, M.; Gómez–Lázaro, E.; Martín–Martínez, S.

    2015-01-01

    This paper describes an alternative approach based on symbolic computations to simulate wind turbines equipped with a Doubly-Fed Induction Generator (DFIG). The actuator disk theory is used to represent the aerodynamic part, and the one-mass model simulates the mechanical part. The 5th-order induction generator is selected to model the electric machine, this approach being suitable to estimate the DFIG performance under transient conditions. The corresponding non-linear integro-differential equation system has been reduced to a linear state-space system by using an ad-hoc local linearization...

  4. A Hypothesis-based Approach to Hydrological Model Development: The Case for Flexible Model Structures

    Science.gov (United States)

    Clark, M. P.; Kavetski, D.; Fenicia, F.

    2010-12-01

    Ambiguities in the appropriate representation of environmental processes have manifested themselves in a plethora of hydrological models, differing in almost every aspect of their conceptualization and implementation. This current overabundance of models is symptomatic of insufficient scientific understanding of environmental dynamics at the catchment scale, which can be attributed to difficulties in quantifying the impact of sub-catchment heterogeneities on the catchment’s hydrological response. In this presentation we advocate the use of flexible modeling frameworks during the development and subsequent refinement of catchment-scale hydrological models. We argue that the ability of flexible modeling frameworks to decompose a model into its constituent hypotheses, necessarily combined with incisive diagnostics to scrutinize these individual hypotheses against observed data, provides hydrologists with a very powerful and systematic approach for improving process representation in models. Flexible models also support a broader coverage of the model hypothesis space and hence facilitate a more comprehensive quantification of the predictive uncertainty associated with system and component non-identifiabilities that plague many model analyses. As part of our discussion of the advantages and limitations of flexible model frameworks, we critically review major contemporary challenges in hydrological hypothesis-testing, including exploiting data to investigate the fidelity of alternative process representations, accounting for model structure ambiguities arising from uncertainty in environmental data, and the challenge of understanding regional differences in dominant hydrological processes. We assess recent progress in these research directions, and how such progress can be exploited within flexible model applications to advance the community’s quest for more scientifically defensible catchment-scale hydrological models.

  5. A fuzzy-logic-based approach to qualitative safety modelling for marine systems

    International Nuclear Information System (INIS)

    Sii, H.S.; Ruxton, Tom; Wang Jin

    2001-01-01

    Safety assessment based on conventional tools (e.g. probabilistic risk assessment (PRA)) may not be well suited for dealing with systems having a high level of uncertainty, particularly in the feasibility and concept design stages of a maritime or offshore system. By contrast, a safety model using a fuzzy logic approach employing fuzzy IF-THEN rules can model the qualitative aspects of human knowledge and reasoning processes without employing precise quantitative analyses. A fuzzy-logic-based approach may be more appropriately used to carry out risk analysis in the initial design stages. This provides a tool for working directly with the linguistic terms commonly used in carrying out safety assessment. This research focuses on the development and representation of linguistic variables to model risk levels subjectively. These variables are then quantified using fuzzy sets. In this paper, the development of a safety model using a fuzzy logic approach for modelling various design variables for maritime and offshore safety-based decision making in the concept design stage is presented. An example is used to illustrate the proposed approach.
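
    A schematic Mamdani-style fuzzy IF-THEN risk model of the kind described; the linguistic terms, membership functions, and rule base are illustrative:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with guards for degenerate shoulders."""
    left = (x - a) / (b - a) if b != a else float(x >= b)
    right = (c - x) / (c - b) if c != b else float(x <= b)
    return max(min(left, right), 0.0)

def risk_level(likelihood, severity):
    """Mamdani inference on two inputs in [0, 10]; returns a risk score in [0, 10]."""
    terms = {"low": (0, 0, 5), "med": (2, 5, 8), "high": (5, 10, 10)}
    rules = [("low", "low", "low"), ("low", "high", "med"),
             ("med", "med", "med"), ("med", "high", "high"),
             ("high", "low", "med"), ("high", "high", "high")]
    xs = np.linspace(0, 10, 101)
    agg = np.zeros_like(xs)
    for lt, st, rt in rules:
        w = min(tri(likelihood, *terms[lt]), tri(severity, *terms[st]))  # firing strength
        out = np.array([tri(x, *terms[rt]) for x in xs])
        agg = np.maximum(agg, np.minimum(w, out))   # clip each rule's output, then union
    return float((xs * agg).sum() / agg.sum()) if agg.sum() > 0 else 0.0  # centroid

print("risk for likelihood=7, severity=8:", round(risk_level(7, 8), 2))
```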

  6. Bounded Rational Managers Struggle with Talent Management - An Agent-based Modelling Approach

    DEFF Research Database (Denmark)

    Adamsen, Billy; Thomsen, Svend Erik

    This study applies an agent-based modeling approach to explore some aspects of an important managerial task: finding and cultivating talented individuals capable of creating value for their organization at some future state. Given that the term talent in talent management is an empty signifier and its denotative meaning floating, we propose that bounded rational managers base their decisions on a simple heuristic, i.e. selecting and cultivating individuals so that their capabilities resemble their own capabilities the most (Adamsen 2015). We model the consequences of this talent management... Agent-based modeling offers a method for studying this type of problem. The approach is particularly suitable for topics where understanding processes and their consequences is important. Agent-based models can include agents that are heterogeneous in their features and abilities, and can deal directly with the consequences...

  7. THE EFECTIVENESS OF RHETORIC-BASED ESSAY WRITING TEACHING MODEL WITH CONTEXTUAL APPROACH

    Directory of Open Access Journals (Sweden)

    Akbar, Akbar; HP, Achmad

    2015-06-01

    Full Text Available This study aims to develop a rhetoric-based essay writing teaching model with a contextual approach in order to improve the essay writing skills of students in the English Department of the Education and Teaching Faculty of Lakidende University of Konawe. This instructional model was developed using research and development. The results show that the model can improve students' essay writing skills effectively. It was tested in an experimental class of the Education and Teaching Faculty of Lakidende University of Konawe, Southeast Sulawesi province of Indonesia, with a score of 69.80. Thus, it can be concluded that the rhetoric-based essay writing teaching model with a contextual approach that has been developed can improve the essay writing skills of students of the English Department and is fit for purpose.

  8. A Hybrid Physics-Based Data-Driven Approach for Point-Particle Force Modeling

    Science.gov (United States)

    Moore, Chandler; Akiki, Georges; Balachandar, S.

    2017-11-01

    This study improves upon the physics-based pairwise interaction extended point-particle (PIEP) model. The PIEP model leverages a physical framework to predict fluid-mediated interactions between solid particles. While the PIEP model is a powerful tool, its pairwise assumption leads to increased error in flows with high particle volume fractions. To reduce this error, a regression algorithm is used to model the differences between the current PIEP model's predictions and the results of direct numerical simulations (DNS) for an array of monodisperse solid particles subjected to various flow conditions. The resulting statistical model and the physical PIEP model are superimposed to construct a hybrid, physics-based data-driven PIEP model. It must be noted that the performance of a pure data-driven approach without the model form provided by the physical PIEP model is substantially inferior. The hybrid model's predictive capabilities are analyzed using further DNS. In every case tested, the hybrid PIEP model's predictions are more accurate than those of the physical PIEP model. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1315138 and the U.S. DOE, NNSA, ASC Program, as a Cooperative Agreement under Contract No. DE-NA0002378.
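
    A minimal sketch of the hybrid idea: keep a physics-based predictor and train a regression model only on the residual between the physics prediction and the reference data. Here a standard drag correlation plays the role of the physical model and synthetic data stand in for the DNS; the real PIEP model is far richer:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 3000
reynolds = rng.uniform(10, 200, n)        # particle Reynolds number
volume_frac = rng.uniform(0.0, 0.4, n)    # local particle volume fraction

def physics_drag(re):
    """Schiller-Naumann-type single-particle drag correction factor."""
    return 1.0 + 0.15 * re**0.687

# synthetic "DNS" truth: the physics term plus a crowding effect the physics misses
truth = physics_drag(reynolds) * (1 + 2.5 * volume_frac**1.5) + rng.normal(0, 0.05, n)

# learn only the residual that the physical model cannot capture
residual = truth - physics_drag(reynolds)
X = np.column_stack([reynolds, volume_frac])
correction = GradientBoostingRegressor(random_state=0).fit(X[:2000], residual[:2000])

hybrid = physics_drag(reynolds[2000:]) + correction.predict(X[2000:])
physics_only = physics_drag(reynolds[2000:])
print("hybrid RMSE:      ", np.sqrt(np.mean((hybrid - truth[2000:]) ** 2)).round(4))
print("physics-only RMSE:", np.sqrt(np.mean((physics_only - truth[2000:]) ** 2)).round(4))
```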

  9. A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks

    Science.gov (United States)

    Mohan, Arvind; Gaitonde, Datta

    2017-11-01

    Reduced Order Modeling (ROM) can be used as a surrogate for prohibitively expensive simulations to model flow behavior over long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial-intelligence-based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep-learning-based ROM approaches are elucidated and further developments discussed.
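
    A minimal sketch of the pipeline: extract POD time coefficients via SVD, then train an LSTM to map a window of past coefficients to the next step. The synthetic snapshot matrix, window length, and network size are illustrative; PyTorch is used here as one possible implementation choice:

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
# synthetic (time, space) snapshot matrix standing in for CFD data
snapshots = np.outer(np.sin(t), rng.standard_normal(64)) \
          + np.outer(np.sin(2 * t), rng.standard_normal(64))

# POD via SVD of the mean-subtracted snapshots
U, s, Vt = np.linalg.svd(snapshots - snapshots.mean(axis=0), full_matrices=False)
n_modes = 2
coeffs = (U[:, :n_modes] * s[:n_modes]).astype(np.float32)  # POD time coefficients

# build (sequence -> next step) training pairs
seq_len = 32
X = np.stack([coeffs[i:i + seq_len] for i in range(len(coeffs) - seq_len)])
y = coeffs[seq_len:]

class CoeffLSTM(nn.Module):
    def __init__(self, n_modes, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_modes, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_modes)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # coefficients at the next timestep

model = CoeffLSTM(n_modes)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X_t, y_t = torch.from_numpy(X), torch.from_numpy(y)
for epoch in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X_t), y_t)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```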

  10. A component-based approach to integrated modeling in the geosciences: The design of CSDMS

    Science.gov (United States)

    Peckham, Scott D.; Hutton, Eric W. H.; Norris, Boyana

    2013-04-01

    Development of scientific modeling software increasingly requires the coupling of multiple, independently developed models. Component-based software engineering enables the integration of plug-and-play components, but significant additional challenges must be addressed in any specific domain in order to produce a usable development and simulation environment that also encourages contributions and adoption by entire communities. In this paper we describe the challenges in creating a coupling environment for Earth-surface process modeling and the innovative approach that we have developed to address them within the Community Surface Dynamics Modeling System.

  11. On Mechanism, Process and Polity: An Agent-Based Modeling and Simulation Approach

    Directory of Open Access Journals (Sweden)

    Camelia Florela Voinea

    2014-07-01

    Full Text Available The present approach provides a theoretical account of political culture-based modeling of political change phenomena. Our approach is an agent-based simulation model inspired by a social-psychological account of the relation between the individual agents (citizens) and the polity. It includes political culture as a fundamental modeling dimension. On this background, we reconsider the operational definitions of agent, mechanism, process, and polity so as to specify the role they play in the modeling of political change phenomena. We evaluate our previous experimental simulation experience in corruption emergence and political attitude change. The paper approaches the artificial polity as a political culture-based model of a body politic. It involves political culture concepts to account for the complexity of domestic political phenomena, going from political attitude change at the individual level up to major political change at the societal level. Architecture, structure, unit of interaction, generative mechanisms and processes are described. Both conceptual and experimental issues are described so as to highlight the differences between the simulation models of society and polity.

  13. Comparing Euler-Euler and Euler-Lagrange based modelling approaches for gas-particle flows

    OpenAIRE

    Braun, Markus; Lamert, Markus; Ozarkar, Shailesh; Sanyal, Jay

    2015-01-01

    Comparative assessment of Euler-Euler and Euler-Lagrange modelling approaches for gas-particle flows is performed by comparing their predictions against experimental data from two fluidization challenge problems put forth by the National Energy Technology Laboratory (NETL), Morgantown, WV, USA. The first fluidization challenge problem is based on a laboratory-scale fluidized bed, while the second is based on a pilot-scale circulating fluidized bed. It is found that both...

  14. Blended Risk Approach in Applying PSA Models to Risk-Based Regulations

    International Nuclear Information System (INIS)

    Dimitrijevic, V. B.; Chapman, J. R.

    1996-01-01

    In this paper, the authors discuss a modern approach to applying PSA models in risk-based regulation. The Blended Risk Approach is a combination of traditional and probabilistic processes, and it is receiving increased attention in different industries in the U.S. and abroad. The use of deterministic regulations and standards provides a proven and well understood basis on which to assess and communicate the impact of changes to plant design and operation. Incorporation of traditional values into risk evaluation works very well in the blended approach. This approach is very application-specific; it includes multiple risk attributes, qualitative risk analysis, and basic deterministic principles. In blending deterministic and probabilistic principles, this approach ensures that the objectives of the traditional defense-in-depth concept are not compromised and the design basis of the plant is explicitly considered. (author)

  15. An Adaptive Agent-Based Model of Homing Pigeons: A Genetic Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Francis Oloo

    2017-01-01

    Full Text Available Conventionally, agent-based modelling approaches start from a conceptual model capturing the theoretical understanding of the systems of interest, and simulation outcomes are used “at the end” to validate the conceptual understanding. In today’s data-rich era, there are suggestions that models should be data-driven. Data-driven workflows are common in mathematical models; however, their application to agent-based models is still in its infancy. Integration of real-time sensor data into modelling workflows opens up the possibility of comparing simulations against real data during the model run. Calibration and validation procedures thus become automated processes that are iteratively executed during the simulation. We hypothesize that incorporation of real-time sensor data into agent-based models improves the predictive ability of such models, and in particular that such integration results in increasingly well calibrated model parameters and rule sets. In this contribution, we explore this question by implementing a flocking model that evolves in real time. Specifically, we use a genetic algorithm approach to find representative parameters that describe the flight routes of homing pigeons. The navigation parameters of the pigeons are simulated, dynamically evaluated against emulated GPS sensor data streams, and optimised based on the fitness of candidate parameters. As a result, the model was able to accurately simulate the relative turn angles and step distance of homing pigeons. Further, the optimised parameters could replicate loops, which are common patterns in the flight tracks of homing pigeons. Finally, the use of genetic algorithms in this study allowed for simultaneous data-driven optimization and sensitivity analysis.
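
    A toy sketch of the genetic-algorithm loop described above: candidate movement parameters are scored against (here emulated) GPS fixes and evolved by selection and mutation. The one-parameter correlated random walk is an illustrative stand-in for the full flocking model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_track(mean_turn_deg, n_steps=200, step_dist=1.0):
    """Correlated random walk: a crude stand-in for a homing-pigeon movement agent."""
    heading = np.cumsum(np.radians(rng.normal(0, mean_turn_deg, n_steps)))
    return np.cumsum(np.c_[np.cos(heading), np.sin(heading)] * step_dist, axis=0)

observed = simulate_track(12.0)        # emulated GPS stream; the "true" parameter is 12 deg

def turn_std(track):
    """Spread of step-to-step turn angles along a track."""
    v = np.diff(track, axis=0)
    return np.std(np.diff(np.arctan2(v[:, 1], v[:, 0])))

def fitness(mean_turn_deg):
    """Negative mismatch between simulated and observed turn-angle statistics."""
    return -abs(turn_std(simulate_track(mean_turn_deg)) - turn_std(observed))

pop = rng.uniform(1, 45, 30)           # initial population of candidate parameters
for gen in range(40):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]                        # keep the 10 fittest
    children = rng.choice(parents, 20) + rng.normal(0, 1.0, 20)    # mutate offspring
    pop = np.concatenate([parents, np.clip(children, 0.5, 60)])

best = pop[np.argmax([fitness(p) for p in pop])]
print(f"recovered mean turn angle ~ {best:.1f} deg")
```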

  16. A Cluster-based Approach Towards Detecting and Modeling Network Dictionary Attacks

    Directory of Open Access Journals (Sweden)

    A. Tajari Siahmarzkooh

    2016-12-01

    Full Text Available In this paper, we provide an approach to detect network dictionary attacks using a data set collected as flows, from which a clustered graph is derived. These flows provide an aggregated view of the network traffic in which the packets exchanged in the network are considered, so that more internally connected nodes are clustered together. We show that dictionary attacks can be detected through some parameters, namely the number and the weight of clusters in time series and their evolution over time. Additionally, a Markov model based on the average weight of clusters is created. Finally, by means of our suggested model, we demonstrate that artificial clusters of the flows are created for normal and malicious traffic. The results of the proposed approach on the CAIDA 2007 data set suggest a high accuracy for the model and, therefore, it provides a proper method for detecting dictionary attacks.

  17. Mobile Agent-Based Software Systems Modeling Approaches: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Aissam Belghiat

    2016-06-01

    Full Text Available Mobile agent-based applications are a special type of software system which takes advantage of mobile agents in order to provide a new beneficial paradigm for solving multiple complex problems in several fields and areas such as network management, e-commerce, e-learning, etc. At the same time, we notice a lack of real applications based on this paradigm and a lack of serious evaluations of their modeling approaches. Hence, this paper provides a comparative study of modeling approaches for mobile agent-based software systems. The objective is to give the reader an overview and a thorough understanding of the work that has been done and where the gaps in the research are.

  18. Modelling and simulation of electrical energy systems through a complex systems approach using agent-based models

    Energy Technology Data Exchange (ETDEWEB)

    Kremers, Enrique

    2013-10-01

    Complexity science aims to better understand the processes of both natural and man-made systems which are composed of many interacting entities at different scales. A disaggregated approach is proposed for simulating electricity systems by using agent-based models coupled to continuous ones. The approach can help in acquiring a better understanding of the operation of the system itself, e.g. of emergent phenomena or scale effects, as well as in the improvement and design of future smart grids.

  19. A high speed model-based approach for wavefront sensorless adaptive optics systems

    Science.gov (United States)

    Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing

    2018-02-01

    To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The fast general model-based approach rests on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can effectively correct a mode aberration by applying just one disturbance to the deformable mirror (one correction per disturbance); the mode is reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under various random and dynamic aberrations are implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which applies N disturbances to the deformable mirror for each aberration correction (one correction per N disturbances).
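
    A minimal numerical sketch of the reconstruction step: a few low-order Zernike-like modes are sampled on a pupil grid, the correlation matrix of their gradients is assembled, and its SVD yields the decoupled disturbance modes. Grid size, mode set and normalizations are assumptions for illustration only.

```python
import numpy as np

# Sample a few low-order Zernike-like modes on a unit-disc pupil grid.
N = 64
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
mask = x**2 + y**2 <= 1.0
modes = [2 * x, 2 * y, np.sqrt(3) * (2 * (x**2 + y**2) - 1),
         np.sqrt(6) * 2 * x * y, np.sqrt(6) * (x**2 - y**2)]

def gradients(m):
    gy, gx = np.gradient(m)
    return gx[mask], gy[mask]

# Correlation matrix of the modes' gradients over the pupil.
G = np.zeros((len(modes), len(modes)))
for i, mi in enumerate(modes):
    for j, mj in enumerate(modes):
        gxi, gyi = gradients(mi)
        gxj, gyj = gradients(mj)
        G[i, j] = np.mean(gxi * gxj + gyi * gyj)

# The SVD of G gives the disturbance modes whose far-field second moments
# decouple, which is what permits one correction per single disturbance.
U, s, Vt = np.linalg.svd(G)
print("singular values:", np.round(s, 3))
```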

  20. Hysteresis Nonlinearity Identification Using New Preisach Model-Based Artificial Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Zakerzadeh

    2011-01-01

    Full Text Available The Preisach model is a well-known hysteresis identification method in which the hysteresis is modeled by a linear combination of hysteresis operators. Although the Preisach model describes the main features of systems with hysteresis behavior, its rigorous numerical nature makes it inconvenient for real-time control applications. Here a novel neural network approach based on the Preisach model is presented, which provides accurate modeling of hysteresis nonlinearity in comparison with the classical Preisach model and can be used for many applications, such as hysteresis nonlinearity control and identification in SMA and piezo actuators and performance evaluation in physical systems such as magnetic materials. To evaluate the proposed approach, an experimental apparatus consisting of a one-dimensional flexible aluminum beam actuated with an SMA wire is used. It is shown that the proposed ANN-based Preisach model can identify hysteresis nonlinearity more accurately than the classical one. It also has a powerful ability to precisely predict higher-order hysteresis minor loop behavior even though only first-order reversal data are used. It is also shown that to get the same precise results with the classical Preisach model, many more data would be needed, which directly increases the experimental cost.
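
    For orientation, a sketch of the classical discrete Preisach operator that both the classical model and the ANN-based variant build on: a triangular grid of relay hysterons, each with a switch-up threshold alpha and a switch-down threshold beta. Uniform weights are an assumption of this sketch; in the paper's approach a neural network learns the weighting instead.

```python
import numpy as np

# Relay hysterons on the Preisach triangle (alpha >= beta), uniform weights.
levels = np.linspace(-1, 1, 40)
hysterons = [(a, b) for a in levels for b in levels if a >= b]
state = np.full(len(hysterons), -1.0)  # all relays start switched down

def preisach(u):
    for k, (alpha, beta) in enumerate(hysterons):
        if u >= alpha:
            state[k] = 1.0
        elif u <= beta:
            state[k] = -1.0
    return state.mean()  # weighted sum of relays; uniform weights in this sketch

# A rising-then-falling input traces different output branches: hysteresis.
for u in [0.0, 0.8, 0.4, 0.8, -0.8, 0.0]:
    print(f"u = {u:+.1f}  ->  y = {preisach(u):+.3f}")
```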

  1. A Network-Based Approach to Modeling and Predicting Product Coconsideration Relations

    Directory of Open Access Journals (Sweden)

    Zhenghui Sha

    2018-01-01

    Full Text Available Understanding customer preferences in consideration decisions is critical to choice modeling in engineering design. While the existing literature has shown that exogenous effects (e.g., product and customer attributes) are deciding factors in customers’ consideration decisions, it is not clear how endogenous effects (e.g., the intercompetition among products) would influence such decisions. This paper presents a network-based approach based on Exponential Random Graph Models to study customers’ consideration behaviors in engineering design. Our proposed approach is capable of modeling the endogenous effects among products through various network structures (e.g., stars and triangles) besides the exogenous effects, and of predicting whether two products would be considered together. To assess the proposed model, we compare it against the dyadic network model that only considers exogenous effects. Using buyer survey data from the China auto market in 2013 and 2014, we evaluate the goodness of fit and the predictive power of the two models. The results show that our model has a better fit and predictive accuracy than the dyadic network model. This underscores the importance of the endogenous effects on customers’ consideration decisions. The insights gained from this research help explain how endogenous effects interact with exogenous effects in affecting customers’ decision-making.
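
    A toy sketch of the estimation idea: maximum pseudo-likelihood fitting of a two-statistic ERGM (edges and triangles) via logistic regression on the change statistics of every dyad. The random graph stands in for survey-derived co-consideration data, and a full ERGM fit (e.g., MCMC-MLE) would be more involved.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

G = nx.erdos_renyi_graph(30, 0.15, seed=1)  # stand-in co-consideration network

# For each dyad: change statistics if that edge were toggled on
# (edge term is always 1; triangle term is the number of common neighbors).
rows, labels = [], []
nodes = list(G.nodes)
for idx, i in enumerate(nodes):
    for j in nodes[idx + 1:]:
        rows.append([1.0, len(list(nx.common_neighbors(G, i, j)))])
        labels.append(1 if G.has_edge(i, j) else 0)

fit = LogisticRegression(fit_intercept=False).fit(np.array(rows), np.array(labels))
print("pseudo-likelihood coefficients (edge, triangle):", fit.coef_[0])

# Predict whether two currently unlinked products would be considered together.
shared = len(list(nx.common_neighbors(G, 0, 1)))
print("co-consideration probability:", fit.predict_proba([[1.0, shared]])[0, 1])
```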

  2. A model based message passing approach for flexible and scalable home automation controllers

    Energy Technology Data Exchange (ETDEWEB)

    Bienhaus, D. [INNIAS GmbH und Co. KG, Frankenberg (Germany); David, K.; Klein, N.; Kroll, D. [ComTec Kassel Univ., SE Kassel Univ. (Germany); Heerdegen, F.; Jubeh, R.; Zuendorf, A. [Kassel Univ. (Germany). FG Software Engineering; Hofmann, J. [BSC Computer GmbH, Allendorf (Germany)

    2012-07-01

    Home automation systems exist in a large variety, but most are proprietary systems from different vendors. In addition, the configuration and administration of home automation systems is frequently a very complex task, especially if more complex functionality is to be achieved. Therefore, an open model for home automation was developed that is especially designed for easy integration of various home automation systems. This solution also provides a simple modeling approach that is inspired by typical home automation components like switches, timers, etc. In addition, a model-based technology to achieve rich functionality and usability was implemented. (orig.)

  3. Comprehensive Stability Evaluation of Rock Slope Using the Cloud Model-Based Approach

    Science.gov (United States)

    Liu, Zaobao; Shao, Jianfu; Xu, Weiya; Xu, Fei

    2014-11-01

    This article presents a cloud model-based approach for the comprehensive stability evaluation of complicated rock slopes of hydroelectric stations in mountainous areas. The approach is based on membership cloud models, which can account for randomness and fuzziness in slope stability evaluation. Slope stability is affected by various factors, each of which is ranked into five grades; the ranking factors are sorted into four categories. The ranking system of slope stability is introduced, and the membership cloud models are then applied to analyze each ranking factor and generate cloud memberships. Afterwards, the obtained cloud memberships are synthesized with the factor weights given by experts for comprehensive stability evaluation of rock slopes. The proposed approach is used for the stability evaluation of the left abutment slope of the Jinping 1 Hydropower Station. It is shown that the cloud model-based strategy can properly account for the effects of each ranking factor and is therefore feasible and reliable for comprehensive stability evaluation of rock slopes.
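
    A sketch of the forward normal cloud generator at the heart of such membership cloud models: each grade is described by an (Ex, En, He) triple, and the membership of a measured factor value is evaluated against the cloud generated around Ex. The triple used below is an assumed calibration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cloud_drops(Ex, En, He, n=1000):
    """Forward normal cloud generator: expectation Ex, entropy En, hyper-entropy He."""
    En_prime = rng.normal(En, He, n)                    # randomized entropy per drop
    x = rng.normal(Ex, np.abs(En_prime))                # the cloud drops
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))   # membership of each drop
    return x, mu

# Membership cloud for (say) stability grade II of one ranking factor.
x, mu = cloud_drops(Ex=0.7, En=0.05, He=0.005)

# Membership of a measured factor value in that grade's cloud.
value = 0.68
membership = np.exp(-(value - 0.7) ** 2 / (2 * 0.05 ** 2))
print(f"membership of {value} in the grade-II cloud ~ {membership:.3f}")
```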

  4. Climatological assessment of maritime atmospheric profiles: model-based and LIDAR-based approaches

    Science.gov (United States)

    McBryde, Kevin; Hammel, Stephen; Campbell, James

    2017-09-01

    Local meteorological conditions drive variability of vertical extinction profiles over both short and long timescales. Wind speed and relative humidity, in particular, are associated with production modes for maritime aerosols. We model climatological variability of profiles based upon surface layer historical measurements of meteorological parameters using the International Comprehensive Ocean Atmosphere Data Set (ICOADS). We have generated a database of profiles using a unique methodology, optimizing computational time by computing profiles over a mesh of relative humidity and wind speed. The profiles are weighted and sorted based upon ICOADS data for a region in southern California coastal waters. Climatological vertical extinction profiles based on this methodology are computed using the aerosol model LEEDR and compared with a new database of space-based LIDAR profiles from the CALIOP instrument aboard NASA's CALIPSO satellite. We also compare Aerosol Optical Depth (AOD) among CALIOP, LEEDR, and the Aerosol Robotic Network (AERONET), a network of ground-based sun photometers. We discuss agreement and discrepancies among the three datasets.

  5. Improving stability of prediction models based on correlated omics data by using network approaches.

    Directory of Open Access Journals (Sweden)

    Renaud Tissier

    Full Text Available Building prediction models based on complex omics datasets such as transcriptomics, proteomics, and metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection in which the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are (1) network construction, (2) clustering to empirically derive modules or pathways, and (3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally, we illustrate the advantages of our approach by applying the methodology to two problems, namely prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome (DILGOM) study and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell line pharmacogenomics dataset.
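
    A compressed sketch of the three-step idea on synthetic data: a correlation network gives distances, hierarchical clustering derives modules, and a penalized model is fit on module summaries. Plain lasso on module means stands in here for the group-based selection and group-specific penalties studied in the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))           # stand-in omics matrix (samples x features)
X[:, 10:20] += 0.8 * X[:, [10]]          # build in one correlated module
y = X[:, 10:20].mean(axis=1) + rng.normal(scale=0.5, size=100)

# Steps 1-2: correlation-based distances, then hierarchical clustering to modules.
dist = 1 - np.abs(np.corrcoef(X.T))
Z = linkage(dist[np.triu_indices(50, 1)], "average")
modules = fcluster(Z, t=0.7, criterion="distance")

# Step 3: summarize each module by its mean profile and fit a penalized model.
summaries = np.column_stack([X[:, modules == m].mean(axis=1)
                             for m in np.unique(modules)])
model = LassoCV(cv=5).fit(summaries, y)
print("selected modules:", np.unique(modules)[model.coef_ != 0])
```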

  6. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems.

    Science.gov (United States)

    Chylek, Lily A; Harris, Leonard A; Tung, Chang-Shung; Faeder, James R; Lopez, Carlos F; Hlavacek, William S

    2014-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and posttranslational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only a large reaction network can capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented by representing interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for the specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). © 2013 Wiley Periodicals, Inc.
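
    The combinatorial point is easy to make concrete: one local rule ("phosphorylate any free site at rate k") implies a whole set of reactions, all inheriting the same rate law, without the network ever being written down by hand. The three-site receptor below is a generic illustration, not a model from the paper.

```python
from itertools import product

sites = ("Y1", "Y2", "Y3")   # three phosphorylation sites -> 2**3 species states
k_phos = 0.5                 # every reaction implied by the rule inherits this rate

species = [dict(zip(sites, bits)) for bits in product((0, 1), repeat=len(sites))]
reactions = []
for s in species:
    for site in sites:
        if s[site] == 0:                      # the rule matches any free site
            reactions.append((s, site, {**s, site: 1}, k_phos))

print(f"{len(species)} species and {len(reactions)} reactions from a single rule")
```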

  7. A theoretical approach to room acoustic simulations based on a radiative transfer model

    DEFF Research Database (Denmark)

    Ruiz-Navarro, Juan-Miguel; Jacobsen, Finn; Escolano, José

    2010-01-01

    A theoretical approach to room acoustic simulations based on a radiative transfer model is developed by adapting the classical radiative transfer theory from optics to acoustics. The proposed acoustic radiative transfer model expands classical geometrical room acoustic modeling algorithms by incorporating a propagation medium that absorbs and scatters radiation, handling both diffuse and non-diffuse reflections on boundaries and objects in the room. The main scope of this model is to provide a proper foundation for a wide number of room acoustic simulation models, in order to establish and unify their principles. It is shown that this room acoustic modeling technique establishes the basis of two recently proposed algorithms, the acoustic diffusion equation and the room acoustic rendering equation. Both methods are derived in detail using an analytical approximation and a simplified integral equation.

  8. Building spatio-temporal database model based on ontological approach using relational database environment

    International Nuclear Information System (INIS)

    Mahmood, N.; Burney, S.M.A.

    2017-01-01

    Everything in this world is bounded by space and time. Our daily activities are closely linked and related to other objects in our vicinity; hence our current location, time (past, present, and future), and the events through which we move strongly affect our activities. Ontology development and its integration with databases are vital for a true understanding of complex systems involving both spatial and temporal dimensions. In this paper we propose a conceptual framework for building a spatio-temporal database model based on an ontological approach. We use the relational data model for modelling spatio-temporal data content and present our methodology with spatio-temporal ontological aspects and their transformation into a spatio-temporal database model. We illustrate the implementation of our conceptual model through a case study of cultivated land parcels used for agriculture, to exhibit the spatio-temporal behaviour of agricultural land and related entities. Moreover, it provides a generic approach for designing spatio-temporal databases based on ontology. The proposed model is able to capture the ontological and, to some extent, epistemological commitments, to build a spatio-temporal ontology, and to transform it into a spatio-temporal data model. Finally, we highlight existing and future research challenges. (author)

  9. Analysis of factors affecting satisfaction level on problem based learning approach using structural equation modeling

    Science.gov (United States)

    Hussain, Nur Farahin Mee; Zahid, Zalina

    2014-12-01

    Nowadays, the job market expects graduates not only to perform well academically but also to excel in soft skills. Problem-Based Learning (PBL) has a number of distinct advantages as a learning method, as it can deliver graduates who will be highly prized by industry. This study attempts to determine the satisfaction level of engineering students with the PBL approach and to evaluate its determinant factors. Structural Equation Modeling (SEM) was used to investigate how the factors Good Teaching Scale, Clear Goals, Student Assessment, and Levels of Workload affect student satisfaction with the PBL approach.

  10. Orion Flight Test 1 Architecture: Observed Benefits of a Model Based Engineering Approach

    Science.gov (United States)

    Simpson, Kimberly A.; Sindiy, Oleg V.; McVittie, Thomas I.

    2012-01-01

    This paper details how a NASA-led team is using a model-based systems engineering approach to capture, analyze, and communicate the end-to-end information system architecture supporting the first unmanned orbital flight of the Orion Multi-Purpose Crew Vehicle. Along with a brief overview of the approach and its products, the paper focuses on the observed program-level benefits, challenges, and lessons learned, all of which may be applied to improve systems engineering tasks for characteristically similar challenges.

  11. An ontology-based hierarchical semantic modeling approach to clinical pathway workflows.

    Science.gov (United States)

    Ye, Yan; Jiang, Zhibin; Diao, Xiaodi; Yang, Dong; Du, Gang

    2009-08-01

    This paper proposes an ontology-based approach to modeling clinical pathway workflows at the semantic level to facilitate computerized clinical pathway implementation and efficient delivery of high-quality healthcare services. A clinical pathway ontology (CPO) is formally defined in the Web Ontology Language (OWL) to provide a common semantic foundation for meaningful representation and exchange of pathway-related knowledge. A CPO-based semantic modeling method is then presented to describe clinical pathways as interconnected hierarchical models comprising a top-level outcome flow and an intervention-workflow level along a care timeline. Furthermore, relevant temporal knowledge can be fully represented by combining temporal entities in the CPO with temporal rules based on the Semantic Web Rule Language (SWRL). An illustrative example of a clinical pathway for cesarean section shows the applicability of the proposed methodology in enabling structured semantic descriptions of any real clinical pathway.

  12. Application of Transfer Matrix Approach to Modeling and Decentralized Control of Lattice-Based Structures

    Science.gov (United States)

    Cramer, Nick; Swei, Sean Shan-Min; Cheung, Kenny; Teodorescu, Mircea

    2015-01-01

    This paper presents the modeling and control of an aerostructure developed from lattice-based cellular materials/components. The proposed aerostructure concept leverages a building-block strategy for lattice-based components which provides great adaptability to varying flight scenarios, the needs of which are essential for in-flight wing shaping control. A decentralized structural control design is proposed that utilizes the discrete-time lumped mass transfer matrix method (DT-LM-TMM). The objective is to develop an effective reduced-order model through DT-LM-TMM that can be used to design a decentralized controller for the structural control of a wing. The approach developed in this paper shows that, as far as the performance of the overall structural system is concerned, the reduced-order model can be as effective as the full-order model in designing an optimal stabilizing controller.
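
    A bare-bones transfer matrix sketch in the spirit of a lumped-mass TMM (though not the paper's discrete-time formulation): the state vector (displacement, internal force) is propagated through alternating spring and mass element matrices, and the natural frequencies of a fixed-free chain appear where the free-end boundary condition is met. Masses, stiffnesses, and element count are arbitrary.

```python
import numpy as np

m, k, n = 1.0, 100.0, 4  # assumed mass, stiffness, and number of elements

def chain_matrix(omega):
    """Product of element matrices for a fixed-free spring-mass chain."""
    T = np.eye(2)
    spring = np.array([[1.0, 1.0 / k], [0.0, 1.0]])          # field matrix
    for _ in range(n):
        mass = np.array([[1.0, 0.0], [-m * omega**2, 1.0]])  # point matrix
        T = mass @ spring @ T
    return T

# Fixed base (x = 0) and free tip (F = 0) demand T[1, 1] = 0 at a natural
# frequency; scan for sign changes of that element over a frequency band.
freqs = np.linspace(0.1, 40.0, 4000)
vals = [chain_matrix(w)[1, 1] for w in freqs]
roots = [0.5 * (w0 + w1) for w0, w1, v0, v1 in
         zip(freqs[:-1], freqs[1:], vals[:-1], vals[1:]) if v0 * v1 < 0]
print("approximate natural frequencies (rad/s):", np.round(roots, 2))
```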

  13. Value-based benefit design: using a predictive modeling approach to improve compliance.

    Science.gov (United States)

    Mahoney, John J

    2008-07-01

    Increased medication compliance rates have been demonstrated to result in improved clinical outcomes and reduced overall medical expenditures. As such, managed care stakeholders should take a total-value approach to benefit design and consider total medical costs beyond the cost of pharmacotherapy alone. The objective of this article is to describe the value-based benefit design employed by Pitney Bowes (specifically, the predictive modeling approach) to improve medication compliance, and to report the results of this intervention. Despite significant skepticism surrounding value-based benefit design, there is growing evidence that these plans can be used in conjunction with careful pharmacy management. In fact, value-based design provides a different lever on pharmacy management and allows the appropriate drug to be channeled to the appropriate person. Studies demonstrating the adverse impact of high coinsurance levels further augment the argument for value-based benefit design. Value-based benefit design was employed at Pitney Bowes, a $6.1-billion global provider of integrated mailstream solutions, with noticeable success. Patients were placed either in a disease management program or in a secondary program promoting preventive care. The company selectively cut copays to achieve that end, and this total-value approach translated into significant savings. To develop a successful value-based benefit design, stakeholders cannot simply cut costs or cut copays. Action must be taken as part of a concerted program, coupled with disease management or similar interventions. "Value based" means that positive outcomes are the ultimate goal, and barriers to those positive outcomes must be addressed.

  14. Comparing administered and market-based water allocation systems using an agent-based modeling approach

    Science.gov (United States)

    Zhao, J.; Cai, X.; Wang, Z.

    2009-12-01

    It has been well recognized that market-based systems can have significant advantages over administered systems for water allocation. However, there are still not many successful water markets around the world, and administered systems remain common in water allocation practice. This paradox has been under discussion for decades and still calls for attention in both research and practice. This paper explores some insights into the paradox and tries to address why market systems have not been widely implemented for water allocation. Adopting the theory of agent-based systems, we develop a consistent analytical model to interpret both systems. First we derive some theorems from the analytical model with respect to the necessary conditions for economic efficiency of water allocation. Following that, the agent-based model is used to illustrate the coherence and differences between administered and market-based systems. The two systems are compared in three respects: (1) the driving forces acting on the system state, (2) system efficiency, and (3) equity. Regarding economic efficiency, a penalty on the violation of water use permits (or rights) under an administered system can lead to system-wide economic efficiency while remaining acceptable to some agents, following the theory of so-called rational violation. Ideal equity will be realized if the penalty equals the incentive under an administered system and if transaction costs are zero under a market system. The performances of both the agents and the overall system are explained under an administered system and a market system, respectively. The performances of agents are subject to the different mechanisms of interaction between agents under the two systems. The system emergence (i.e., system benefit, equilibrium market price, etc.), resulting from performance at the agent level, reflects the different mechanisms of the two systems: the “invisible hand” in the market system and administrative measures (penalty

  15. 3D Modeling of Two Louteria Fragments by Image-Based Approach

    Science.gov (United States)

    Ebolese, D.; Lo Brutto, M.; Burgio, A.

    2017-05-01

    The paper presents a digital approach to the reconstruction and analysis of two small fragments of louteria, a kind of large terracotta vase, found during an archaeological survey in the south of Sicily (Italy), in the area of Cignana near the Greek colony of Akragas (nowadays Agrigento). The fragments of louteria have been studied by an image-based approach in order to achieve highly accurate and very detailed 3D models. The 3D models have been used to carry out interpretive and geometric analyses from an archaeological point of view. Using different digital tools, it was possible to highlight some fine details of the louteria decorations and to better understand the characteristics of the two fragments. The 3D models also provide the possibility to study and to document these archaeological finds in a digital environment.

  16. Combining Computational Fluid Dynamics and Agent-Based Modeling: A New Approach to Evacuation Planning

    Science.gov (United States)

    Epstein, Joshua M.; Pankajakshan, Ramesh; Hammond, Ross A.

    2011-01-01

    We introduce a novel hybrid of two fields—Computational Fluid Dynamics (CFD) and Agent-Based Modeling (ABM)—as a powerful new technique for urban evacuation planning. CFD is a predominant technique for modeling airborne transport of contaminants, while ABM is a powerful approach for modeling social dynamics in populations of adaptive individuals. The hybrid CFD-ABM method is capable of simulating how large, spatially-distributed populations might respond to a physically realistic contaminant plume. We demonstrate the overall feasibility of CFD-ABM evacuation design, using the case of a hypothetical aerosol release in Los Angeles to explore potential effectiveness of various policy regimes. We conclude by arguing that this new approach can be powerfully applied to arbitrary population centers, offering an unprecedented preparedness and catastrophic event response tool. PMID:21687788

  17. An individual-based modelling approach to estimate landscape connectivity for bighorn sheep (Ovis canadensis)

    Directory of Open Access Journals (Sweden)

    Corrie H. Allen

    2016-05-01

    Full Text Available Background. Preserving connectivity, or the ability of a landscape to support species movement, is among the most commonly recommended strategies to reduce the negative effects of climate change and human land use development on species. Connectivity analyses have traditionally used a corridor-based approach and rely heavily on least-cost path modeling and circuit theory to delineate corridors. Individual-based models are gaining popularity as a potentially more ecologically realistic method of estimating landscape connectivity; however, this remains a relatively unexplored approach. We sought to explore the utility of a simple, individual-based model as a land-use management support tool for identifying and implementing landscape connectivity. Methods. We created an individual-based model of bighorn sheep (Ovis canadensis) that simulates a bighorn sheep traversing a landscape by following simple movement rules. The model was calibrated for bighorn sheep in the Okanagan Valley, British Columbia, Canada, a region containing isolated herds that are vital to conservation of the species in its northern range. Simulations were run to determine baseline connectivity between subpopulations in the study area. We then applied the model to explore the effect of two land management scenarios on simulated connectivity: restoring natural fire regimes and identifying appropriate sites for interventions that would increase road permeability for bighorn sheep. Results. This model suggests there are no continuous areas of good habitat between current subpopulations of sheep in the study area; however, a series of stepping stones or circuitous routes could facilitate movement between subpopulations and into currently unoccupied, yet suitable, bighorn habitat. Restoring natural fire regimes or mimicking fire with prescribed burns and tree removal could considerably increase bighorn connectivity in this area. Moreover, several key road crossing sites that could benefit from
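
    The flavour of such simple movement rules can be sketched in a few lines: an agent on a habitat-quality grid mostly steps toward good habitat that brings it closer to a target patch, with occasional exploratory moves, and repeated walks estimate connectivity between two patches. The grid, rules, and weights below are invented for illustration.

```python
import random

random.seed(2)
W, H = 20, 20
habitat = [[random.random() for _ in range(W)] for _ in range(H)]  # quality grid
start, goal = (0, 0), (W - 1, H - 1)  # two subpopulation patches

def walk(max_steps=400, explore=0.2):
    x, y = start
    for _ in range(max_steps):
        if (x, y) == goal:
            return True
        nbrs = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0) and 0 <= x + dx < W and 0 <= y + dy < H]
        if random.random() < explore:
            x, y = random.choice(nbrs)          # occasional exploratory move
        else:                                   # prefer good habitat near the goal
            x, y = max(nbrs, key=lambda p: habitat[p[1]][p[0]]
                       - 0.05 * (abs(goal[0] - p[0]) + abs(goal[1] - p[1])))
    return False

runs = 200
print("estimated connectivity:", sum(walk() for _ in range(runs)) / runs)
```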

  18. An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Deepak Bhatt

    2012-07-01

    Full Text Available Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of civilian land vehicle navigation systems by offering a low-cost solution. However, accurate modeling of MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors such as bias, drift, and noise, which are negligible in higher-grade units. Conventional techniques utilizing the Gauss-Markov model and neural network methods have previously been used to model these errors. However, the Gauss-Markov model works unsatisfactorily in the case of MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift utilizing a Neural Network (NN) is time consuming, thereby affecting its real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM)-based error model. Unlike NNs, SVMs do not suffer from local minima or over-fitting problems and deliver a reliable global solution. Experimental results showed that the proposed SVM approach reduced the noise standard deviation by 10–35% for gyroscopes and 61–76% for accelerometers. Further, positional error drifts under static conditions improved by 41% and 80% in comparison to the NN and GM approaches.
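
    A self-contained sketch of the regression step using scikit-learn's Nu-SVR: windows of past sensor samples are mapped to the (here, synthetically generated) drift, and the learned model is used to compensate held-out samples. The drift shape, window length, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
t = np.linspace(0, 100, 2000)
drift = 0.02 * np.sin(0.05 * t) + 0.0005 * t          # emulated slow MEMS drift
signal = drift + rng.normal(scale=0.01, size=t.size)  # drift + white noise

# Features: a sliding window of past samples; target: the drift (in practice
# obtained from a calibration reference rather than known analytically).
lag = 10
X = np.array([signal[i - lag:i] for i in range(lag, signal.size)])
y = drift[lag:]

idx = rng.permutation(len(y))
train, test = idx[:1500], idx[1500:]
model = NuSVR(nu=0.5, C=1.0, gamma="scale").fit(X[train], y[train])

residual = y[test] - model.predict(X[test])
print(f"raw error std:         {np.std(signal[lag:][test] - y[test]):.4f}")
print(f"compensated error std: {np.std(residual):.4f}")
```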

  19. Wireless Positioning Based on a Segment-Wise Linear Approach for Modeling the Target Trajectory

    DEFF Research Database (Denmark)

    Figueiras, Joao; Pedersen, Troels; Schwefel, Hans-Peter

    2008-01-01

    Positioning solutions in infrastructure-based wireless networks generally operate by exploiting the channel information of the links between the wireless devices and fixed networking access points. The major challenge of such solutions is the modeling of both the noise properties of the channel measurements and the user mobility patterns. One class of typical human movement patterns is segment-wise linear motion, which is studied in this paper. Current tracking solutions, such as the Constant Velocity model, hardly handle such segment-wise linear patterns. In this paper we propose a segment-wise linear model, called the Drifting Points model. The model results in increased performance when compared with traditional solutions.

  20. Application of meandering centreline migration modelling and object-based approach of Long Nab member

    Science.gov (United States)

    Saadi, Saad

    2017-04-01

    Characterizing the complexity and heterogeneity of the geometries and deposits in meandering river systems is an important concern for the reservoir modelling of fluvial environments. Re-examination of the Long Nab member in the Scalby formation of the Ravenscar Group (Yorkshire, UK), integrating digital outcrop data and forward modelling approaches, will lead to a geologically realistic numerical model of the meandering river geometry. The methodology is based on extracting geostatistics from modern analogues: meandering rivers that exemplify both the confined and non-confined meandering point-bar deposits and the morphodynamics of the Long Nab member. The parameters derived from the modern systems (i.e. channel width, amplitude, radius of curvature, sinuosity, wavelength, channel length and migration rate) are used as a statistical control for the forward simulation and the resulting object-oriented channel models. The statistical data derived from the modern analogues are multi-dimensional in nature, making analysis difficult. We apply data mining techniques such as parallel coordinates to investigate and identify the important relationships within the modern analogue data, which can then be used to drive the development of, and serve as input to, the forward model. This work will increase our understanding of meandering river morphodynamics, planform architecture and the stratigraphic signature of various fluvial deposits and features. We will then use these forward-modelled channel objects to build reservoir models, and compare the behaviour of the forward-modelled channels with traditional object modelling in hydrocarbon flow simulations.

  1. TEACHING MATERIALS MODEL OF FOLKLORE IN LEARNING INDONESIAN BASED ON THEMATIC APPROACH

    Directory of Open Access Journals (Sweden)

    S Satinem

    2015-12-01

    Full Text Available The literature module in this research is a regional-literature module developed from a previous module for teaching and learning Indonesian. The aim of this research is to produce a model of a module for Indonesian based on the thematic approach, drawing on folklore, for third-grade elementary school students in Lubuklinggau. In this research, further development was carried out based on an analysis of teacher and student needs. The Research and Development (R&D) method is used, combining the research model of Borg & Gall with the development model of Dick & Carey; research and development is thus a process of developing and validating an educational product. The educational product in this research refers to the syllabus, module, and lesson plan (instructional design). Based on the data gathered from the application of the developed learning model, consisting of the syllabus, lesson plan, limited try-out test, large try-out test, model effectiveness test, and module readability test, it can be stated that the developed teaching and learning model can increase the achievement of third-grade elementary school students in learning Indonesian with regional literature as the source.

  2. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    Science.gov (United States)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. However, adaptive behaviour towards flood risk reduction and the interaction between governments, insurers, and individuals has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.

  3. A sparse QSRR model for predicting retention indices of essential oils based on robust screening approach.

    Science.gov (United States)

    Al-Fakih, A M; Algamal, Z Y; Lee, M H; Aziz, M

    2017-08-01

    A robust screening approach and a sparse quantitative structure-retention relationship (QSRR) model for predicting the retention indices (RIs) of 169 constituents of essential oils are proposed. The proposed approach proceeds in two steps. First, dimension reduction was performed using the proposed modified robust sure independence screening (MR-SIS) method. Second, prediction of RIs was made using the proposed robust sparse QSRR with the smoothly clipped absolute deviation (SCAD) penalty (RSQSRR). The RSQSRR model was internally and externally validated based on [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], the Y-randomization test, [Formula: see text], [Formula: see text], and the applicability domain. The validation results indicate that the model is robust and not due to chance correlation. The descriptor selection and prediction performance of the RSQSRR on the training dataset outperform the other two modelling methods used. The RSQSRR shows the highest [Formula: see text], [Formula: see text], and [Formula: see text], and the lowest [Formula: see text]. For the test dataset, the RSQSRR shows a high external validation value ([Formula: see text]) and a low value of [Formula: see text] compared with the other methods, indicating its higher predictive ability. In conclusion, the results reveal that the proposed RSQSRR is an efficient approach for modelling high-dimensional QSRRs, and the method is useful for the estimation of the RIs of essential oils that have not been experimentally tested.
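
    The two-step structure is easy to emulate on synthetic data: a marginal-association screen shrinks thousands of descriptors to a workable subset, after which a sparse penalized fit selects the final model. Spearman correlation stands in for the paper's MR-SIS screen and a lasso for the SCAD penalty, so this is a methodological sketch rather than the RSQSRR itself.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p = 169, 1200                         # few compounds, many descriptors
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[3, 40, 500]] = [2.0, -1.5, 1.0]    # three truly informative descriptors
ri = X @ beta + rng.normal(scale=0.5, size=n)   # emulated retention indices

# Step 1 (screening): rank descriptors by robust marginal association, keep top d.
assoc = np.array([abs(spearmanr(X[:, j], ri).correlation) for j in range(p)])
d = int(n / np.log(n))                   # a conventional screening cut-off
keep = np.sort(np.argsort(assoc)[-d:])
print("true descriptors surviving the screen:", sorted({3, 40, 500} & set(keep)))

# Step 2 (sparse fit): penalized regression on the screened descriptors.
model = LassoCV(cv=5).fit(X[:, keep], ri)
print("descriptors kept by the sparse fit:", keep[model.coef_ != 0])
```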

  4. Using A Model-Based Systems Engineering Approach For Exploration Medical System Development

    Science.gov (United States)

    Hanson, A.; Mindock, J.; McGuire, K.; Reilly, J.; Cerro, J.; Othon, W.; Rubin, D.; Urbina, M.; Canga, M.

    2017-01-01

    NASA's Human Research Program's Exploration Medical Capabilities (ExMC) element is defining the medical system needs for exploration class missions. ExMC's Systems Engineering (SE) team will play a critical role in successful design and implementation of the medical system into exploration vehicles. The team's mission is to "Define, develop, validate, and manage the technical system design needed to implement exploration medical capabilities for Mars and test the design in a progression of proving grounds." Development of the medical system is being conducted in parallel with exploration mission architecture and vehicle design development. Successful implementation of the medical system in this environment will require a robust systems engineering approach to enable technical communication across communities to create a common mental model of the emergent engineering and medical systems. Model-Based Systems Engineering (MBSE) improves shared understanding of system needs and constraints between stakeholders and offers a common language for analysis. The ExMC SE team is using MBSE techniques to define operational needs, decompose requirements and architecture, and identify medical capabilities needed to support human exploration. Systems Modeling Language (SysML) is the specific language the SE team is utilizing, within an MBSE approach, to model the medical system functional needs, requirements, and architecture. Modeling methods are being developed through the practice of MBSE within the team, and tools are being selected to support meta-data exchange as integration points to other system models are identified. Use of MBSE is supporting the development of relationships across disciplines and NASA Centers to build trust and enable teamwork, enhance visibility of team goals, foster a culture of unbiased learning and serving, and be responsive to customer needs. The MBSE approach to medical system design offers a paradigm shift toward greater integration between

  5. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    Science.gov (United States)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
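
    The core residual-monitoring idea can be sketched independently of any engine model: subtract the model-predicted output from the sensed output and track the residual with an exponentially weighted average against a threshold. The signals, seeded drift fault, and tuning values below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(500)
predicted = 550 + 20 * np.tanh((t - 100) / 50.0)   # model-predicted output
sensed = predicted + rng.normal(scale=2.0, size=t.size)
sensed[350:] += 0.05 * (t[350:] - 350)             # seeded slow-drift fault

# Monitor the residual with an exponentially weighted moving average (EWMA).
residual = sensed - predicted
alpha, ewma, threshold = 0.05, 0.0, 3.0
for k, r in enumerate(residual):
    ewma = alpha * r + (1 - alpha) * ewma
    if abs(ewma) > threshold:
        print(f"anomaly flagged at sample {k}")
        break
```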

  6. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    2008-06-01

    Full Text Available Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, thereby regulating steady-state and feedback behaviour.

  7. Toward a Model-Based Approach to Flight System Fault Protection

    Science.gov (United States)

    Day, John; Murray, Alex; Meakin, Peter

    2012-01-01

    Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system systems engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP-related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the materials generated for these analyses effectively create separate models that are only loosely related to the system being designed. Developing approaches that enable modeling of FP concerns in the same model as the system hardware and software design enables the establishment of formal relationships, which has great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in FSW engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.

  8. Real-time monitoring of photocytotoxicity in nanoparticles-based photodynamic therapy: a model-based approach.

    Science.gov (United States)

    Benachour, Hamanou; Bastogne, Thierry; Toussaint, Magali; Chemli, Yosra; Sève, Aymeric; Frochot, Céline; Lux, François; Tillement, Olivier; Vanderesse, Régis; Barberi-Heyob, Muriel

    2012-01-01

    Nanoparticles are widely suggested as targeted drug-delivery systems. In photodynamic therapy (PDT), the use of multifunctional nanoparticles as photoactivatable drug carriers is a promising approach for improving treatment efficiency and selectivity. However, conventional cytotoxicity assays are not well adapted to characterizing nanoparticles' cytotoxic effects or to discriminating early and late cell responses. In this work, we evaluated a real-time label-free cell analysis system as a tool to investigate the in vitro cyto- and photocyto-toxicity of nanoparticle-based photosensitizers in comparison with classical metabolic assays. To do so, we introduced a dynamic approach based on real-time cell impedance monitoring and a mathematical model-based analysis to characterize the measured dynamic cell response. Analysis of real-time cell responses indeed requires new modeling approaches suited to dynamic models. In a first step, a multivariate analysis of variance associated with a canonical analysis of the obtained normalized cell index (NCI) values allowed us to identify different relevant time periods following nanoparticle exposure. After light irradiation, we evidenced discriminant profiles of cell index (CI) kinetics in a concentration- and light dose-dependent manner. In a second step, we proposed a full factorial design of experiments associated with a mixed-effect kinetic model of the CI time responses. The estimated model parameters led to a new characterization of the dynamic cell responses, such as the magnitude and the time constant of the transient phase in response to the photo-induced dynamic effects. These parameters allowed us to fully characterize the in vitro photodynamic response according to nanoparticle-grafted photosensitizer concentration and light dose. They also let us estimate the strength of the synergic photodynamic effect. This dynamic approach based on statistical modeling furnishes new insights for in vitro
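
    The transient-phase parameters named in the abstract (magnitude and time constant) can be illustrated with a simple first-order fit to a cell index curve; the exponential form, the synthetic data, and the starting values are assumptions of this sketch, not the paper's mixed-effect model.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 60)  # hours after light irradiation

# Emulated normalized cell index: first-order decay toward a lower plateau.
ci = 0.3 + 0.6 * np.exp(-t / 2.5) + rng.normal(scale=0.02, size=t.size)

def transient(t, A, tau, C):
    """First-order transient: magnitude A, time constant tau, plateau C."""
    return C + A * np.exp(-t / tau)

(A, tau, C), _ = curve_fit(transient, t, ci, p0=(0.5, 1.0, 0.5))
print(f"magnitude A ~ {A:.2f}, time constant tau ~ {tau:.2f} h, plateau C ~ {C:.2f}")
```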

  10. Analyzing energy consumption of wireless networks. A model-based approach

    Energy Technology Data Exchange (ETDEWEB)

    Yue, Haidi

    2013-03-04

    During the last decades, wireless networking has continuously been a hot topic both in academia and in industry. Many different wireless networks have been introduced, like wireless local area networks, wireless personal networks, wireless ad hoc networks, and wireless sensor networks. For these networks to have long-term usability, the power consumed by the wireless devices in each of them needs to be managed efficiently. Hence, a lot of effort has been put into the analysis and improvement of energy efficiency, either for a specific network layer (protocol) or for new cross-layer designs. In this thesis, we apply a model-based approach to the analysis of the energy consumption of different wireless protocols. The protocols under consideration are: one leader election protocol, one routing protocol, and two medium access control protocols. By a model-based approach we mean that all four protocols are formalized as formal models, more precisely as discrete-time Markov chains (DTMCs), Markov decision processes (MDPs), or stochastic timed automata (STA). The first two kinds of models, DTMCs and MDPs, we model in PRISM, a prominent probabilistic model checker, and apply model checking techniques to analyze them. Model checking belongs to the family of formal methods. It exhaustively discovers all possible (reachable) states of the models and checks whether these models meet a given specification. Specifications are system properties that we want to study, usually expressed in some logic, for instance probabilistic computation tree logic (PCTL). However, while model checking relies on rigorous mathematical foundations and automatically explores the entire state space of a model, its applicability is also limited by the so-called state space explosion problem -- even systems of moderate size often yield models with an exponentially larger state space that thwarts their analysis. Hence for the STA models in this thesis, since there
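
    For a flavour of what such DTMC analyses compute, consider a hand-rolled stand-in for a reward property a probabilistic model checker like PRISM would evaluate: the long-run expected energy per step of a three-state node model. The chain, its transition probabilities, and the per-state energy costs are invented.

```python
import numpy as np

# A 3-state DTMC of a wireless node: 0 = idle, 1 = transmit, 2 = sleep.
P = np.array([[0.6, 0.3, 0.1],
              [0.5, 0.4, 0.1],
              [0.7, 0.0, 0.3]])
energy = np.array([1.0, 10.0, 0.1])  # assumed energy cost per step in each state

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

print("stationary distribution:", np.round(pi, 3))
print("long-run expected energy per step:", np.round(pi @ energy, 3))
```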

  11. Modeling a terminology-based electronic nursing record system: an object-oriented approach.

    Science.gov (United States)

    Park, Hyeoun-Ae; Cho, InSook; Byeun, NamSoo

    2007-10-01

    The aim of this study was to present our perspectives on healthcare information analysis at a conceptual level and the lessons learned from our experience with the development of a terminology-based enterprise electronic nursing record system, one of the components of an EMR system at a tertiary teaching hospital in Korea, using an object-oriented system analysis and design concept. To ensure a systematic approach and effective collaboration, the department of nursing constituted a system modeling team comprising a project manager, systems analysts, user representatives, an object-oriented methodology expert, and healthcare informaticists (including the authors). The Rational Unified Process (RUP) and the Unified Modeling Language (UML) were used as the development process and the modeling notation, respectively. Following the scenario-based RUP approach, user requirements were formulated into use-case sets, and the sequence of activities in each scenario was depicted in an activity diagram. The structure of the system was presented in a class diagram. This approach allowed us to identify clearly the structural and behavioral states and important factors of a terminology-based ENR system (e.g., business concerns and system design concerns) according to the viewpoints of both domain and technical experts.

  12. Genetic programming based models in plant tissue culture: An addendum to traditional statistical approach.

    Science.gov (United States)

    Mridula, Meenu R; Nair, Ashalatha S; Kumar, K Satheesh

    2018-02-01

    In this paper, we compared the efficacy of an observation-based modeling approach using genetic programming with regular statistical analysis as an alternative methodology in plant research. Preliminary experimental data on in vitro rooting were used for this study, with the aim of understanding the effect of charcoal and naphthalene acetic acid (NAA) on successful rooting and of optimizing the two variables for maximum effect. Observation-based modelling, as well as the traditional approach, could identify NAA as a critical factor in the rooting of the plantlets under the experimental conditions employed. Symbolic regression analysis using the software deployed here optimised the treatments studied and was successful in identifying the complex non-linear interaction among the variables, with minimal preliminary data. The presence of charcoal in the culture medium has a significant impact on root generation by reducing basal callus mass formation. Such an approach is advantageous for establishing in vitro culture protocols, as these models have significant potential for saving time and expenditure in plant tissue culture laboratories, and it further reduces the need for a specialised background.

  13. Agent-Based Approach for Modelling the Labour Migration from China to Russia

    Directory of Open Access Journals (Sweden)

    Valeriy Leonidovich Makarov

    2017-06-01

    Full Text Available The article describes the process of labour migration from China to Russia and shows how it can be modelled using the agent-based approach. This approach allows us to simulate an artificial society in a computer program, taking into account the diversity of the individuals under consideration, as well as to model the set of laws and rules of conduct that make up the institutional environment in which the members of this society live. A brief review and analysis of the agent-based migration models presented in the foreign literature is given. The agent-based model of labour migration from China to Russia developed by the Central Economic Mathematical Institute of the Russian Academy of Sciences simulates human behaviour close to reality, grounded in agents' internal purposes, which determine their choice of territory as a place of residence. Therefore, in developing the model's agents and their behaviour algorithms, as well as the organization of the environment in which they exist and interact, the main characteristics of the populations of the two neighbouring countries and their demographic processes have been taken into account. Using the model, two experiments have been conducted. The purpose of the first was to assess the effect of the depreciation of the rouble against the yuan on the overall indexes of labour migration, as well as on its structure. In the second experiment, the procedure by which agents search for information for migration decision-making was changed: namely, all aggregate information on average salaries by type of activity and employee skill level, in both China and Russia, became available to all agents irrespective of their qualification level.

  14. An evaluation of the hemiplegic subject based on the Bobath approach. Part I: The model.

    Science.gov (United States)

    Guarna, F; Corriveau, H; Chamberland, J; Arsenault, A B; Dutil, E; Drouin, G

    1988-01-01

    An evaluation based on the Bobath approach to treatment has been developed. A model substantiating this evaluation is presented. In this model, the three stages of motor recovery described by Bobath have been extended to six, to better follow the progression of the patient. Six parameters have also been identified. These are the elements to be quantified so that the progress of the patient through the stages of motor recovery can be followed. Four of these parameters are borrowed from the Bobath approach, that is: postural reaction, muscle tone, reflex activity and active movement. Two have been added: sensorium and pain. An accompanying paper presents the evaluation protocol along with the operational definition of each of these parameters.

  15. Thermomechanical Modeling of Sintered Silver - A Fracture Mechanics-based Approach: Extended Abstract: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Paret, Paul P [National Renewable Energy Laboratory (NREL), Golden, CO (United States); DeVoto, Douglas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Narumanchi, Sreekant V [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-01

    Sintered silver has proven to be a promising candidate for use as a die-attach and substrate-attach material in automotive power electronics components. It holds promise of greater reliability than lead-based and lead-free solders, especially at higher temperatures (greater than 200 degrees Celsius). Accurate predictive lifetime models of sintered silver need to be developed and its failure mechanisms thoroughly characterized before it can be deployed as a die-attach or substrate-attach material in wide-bandgap device-based packages. We present a finite element method (FEM) modeling methodology that can offer greater accuracy in predicting the failure of sintered silver under accelerated thermal cycling. A fracture mechanics-based approach is adopted in the FEM model, and J-integral/thermal cycle values are computed. In this paper, we outline the procedures for obtaining the J-integral/thermal cycle values in a computational model and report on the possible advantage of using these values as modeling parameters in a predictive lifetime model.
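
    For reference, the J-integral entering these per-cycle computations is the standard contour integral of fracture mechanics; in its familiar 2D form (with the crack along the x axis):

```latex
% Standard 2D contour form of the J-integral (crack along the x axis):
% W = strain energy density, T_i = traction vector, u_i = displacement,
% \Gamma = any contour enclosing the crack tip.
\[
  J = \int_{\Gamma} \left( W \,\mathrm{d}y
      - T_i \, \frac{\partial u_i}{\partial x} \,\mathrm{d}s \right)
\]
```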

  16. Modeling the sustainable development of innovation in transport construction based on the communication approach

    Science.gov (United States)

    Revunova, Svetlana; Vlasenko, Vyacheslav; Bukreev, Anatoly

    2017-10-01

    The article proposes models of innovative activity development driven by the formation of “points of innovation-driven growth”. The models are based on an analysis of the current state and dynamics of innovative development of construction enterprises in the transport sector and take into account a number of essential organizational and economic changes in management. The authors substantiate implementing such development models as an organizational innovation with a communication genesis. The use of the communication approach to the formation of “points of innovation-driven growth” allowed the authors to apply the mathematical tools of graph theory to activate the innovative activity of the transport industry in the region. As a result, the authors have proposed models that allow constructing an optimal mechanism for the formation of “points of innovation-driven growth”.

  17. Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data

    Science.gov (United States)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng

    2017-03-01

    Turbulence modeling is a critical component in numerical simulations of industrial flows based on Reynolds-averaged Navier-Stokes (RANS) equations. However, after decades of efforts in the turbulence modeling community, universally applicable RANS models with predictive capabilities are still lacking. Large discrepancies in the RANS-modeled Reynolds stresses are the main source that limits the predictive accuracy of RANS models. Identifying these discrepancies is of significance to possibly improve the RANS modeling. In this work, we propose a data-driven, physics-informed machine learning approach for reconstructing discrepancies in RANS modeled Reynolds stresses. The discrepancies are formulated as functions of the mean flow features. By using a modern machine learning technique based on random forests, the discrepancy functions are trained by existing direct numerical simulation (DNS) databases and then used to predict Reynolds stress discrepancies in different flows where data are not available. The proposed method is evaluated by two classes of flows: (1) fully developed turbulent flows in a square duct at various Reynolds numbers and (2) flows with massive separations. In separated flows, two training flow scenarios of increasing difficulties are considered: (1) the flow in the same periodic hills geometry yet at a lower Reynolds number and (2) the flow in a different hill geometry with a similar recirculation zone. Excellent predictive performances were observed in both scenarios, demonstrating the merits of the proposed method.
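
    The general recipe described above can be sketched in a few lines (an illustrative stand-in, not the authors' code): a random forest is trained on mean-flow features against DNS-minus-RANS discrepancies, then applied to features extracted from a new flow.

```python
# Sketch of the train-on-DNS, predict-on-new-flow recipe (not the authors'
# code). The arrays below are random stand-ins for features and discrepancies
# extracted from RANS and DNS fields.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 10))   # mean-flow features (training flow)
y_train = rng.normal(size=5000)         # DNS-minus-RANS stress discrepancy
X_new = rng.normal(size=(2000, 10))     # features of the prediction flow

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)
predicted_discrepancy = forest.predict(X_new)  # correction applied to RANS
print(predicted_discrepancy[:5])
```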

  18. Model-free prediction and regression: a transformation-based approach to inference

    CERN Document Server

    Politis, Dimitris N

    2015-01-01

    The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...

  19. A dynamic texture-based approach to recognition of facial actions and their temporal models.

    Science.gov (United States)

    Koelstra, Sander; Pantic, Maja; Patras, Ioannis

    2010-11-01

    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images and a novel method based on Nonrigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domain. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination in 264 sequences from the MMI facial expression database, the proposed method achieved an average event recognition accuracy of 89.2 percent for the MHI method and 94.3 percent for the FFD method. The generalization performance of the FFD method has been tested using the Cohn-Kanade database. Finally, we also explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.

  20. Addressing dependability by applying an approach for model-based risk assessment

    International Nuclear Information System (INIS)

    Gran, Bjorn Axel; Fredriksen, Rune; Thunem, Atoosa P.-J.

    2007-01-01

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied to address different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects such as system functions, component behaviours and intercommunications must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast to this, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and through a series of seven trials provided a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of trust case development.

  1. Addressing dependability by applying an approach for model-based risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjorn Axel [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: bjorn.axel.gran@hrp.no; Fredriksen, Rune [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: rune.fredriksen@hrp.no; Thunem, Atoosa P.-J. [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: atoosa.p-j.thunem@hrp.no

    2007-11-15

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied to address different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects such as system functions, component behaviours and intercommunications must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast to this, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and through a series of seven trials provided a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of trust case development.

  2. An approach based on Hierarchical Bayesian Graphical Models for measurement interpretation under uncertainty

    Science.gov (United States)

    Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter

    2017-02-01

    It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite, and sometimes due to, the richness of data, significant challenges arise in the interpretation, manifested as ambiguities and inconsistencies caused by various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real-time applications. In this work, we discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements, which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of the posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We illustrate the implementation of the HBGM approach on ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
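
    A heavily simplified sketch of the offline-train/online-classify idea follows; single Gaussians stand in for the trained hierarchical network, and the distributions, priors and state names are invented for illustration.

```python
# Hedged sketch of the offline-train / online-classify idea: forward models
# generate measurement distributions per state of nature; new data are then
# classified by posterior probability. Simple Gaussians stand in for the
# trained hierarchical network; all numbers are illustrative.
import numpy as np
from scipy.stats import norm

# Offline: forward-model-derived measurement distributions per state
states = {"good_bond": norm(loc=0.8, scale=0.10),
          "poor_bond": norm(loc=0.4, scale=0.15)}
prior = {"good_bond": 0.7, "poor_bond": 0.3}   # domain-knowledge prior

def classify(measurement):
    """Return posterior P(state | measurement) for each state."""
    weighted = {s: d.pdf(measurement) * prior[s] for s, d in states.items()}
    z = sum(weighted.values())
    return {s: v / z for s, v in weighted.items()}

print(classify(0.55))   # e.g. ultrasonic amplitude -> state posteriors
```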

  3. Multiple flood vulnerability assessment approach based on fuzzy comprehensive evaluation method and coordinated development degree model.

    Science.gov (United States)

    Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao

    2018-05-01

    Flooding is a serious challenge that increasingly affects residents as well as policymakers. Flood vulnerability assessment is becoming increasingly relevant worldwide. The purpose of this study is to develop an approach that reveals the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and the coordinated development degree model (CDDM). The approach is organized into three parts: establishment of the index system; assessment of exposure, sensitivity and adaptive capacity; and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed for the establishment of the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship among the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatial pattern of flood vulnerability in the eastern area of Hainan, China. Based on the results of the multiple flood vulnerability assessment, a decision-making process for the rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment can evaluate vulnerability more completely and provide decision makers with more comprehensive information. In summary, this study provides a new way for flood vulnerability assessment and disaster prevention decision-making.
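
    For concreteness, a minimal sketch of one common weighted-average FCEM operator is given below; the criteria, weights and membership values are invented, and the paper's exact scheme (and its CDDM coupling step) may differ.

```python
# Minimal weighted-average FCEM sketch (one common operator; the paper's
# exact scheme may differ). Rows of R are criteria, columns are vulnerability
# levels; w holds criterion weights. All numbers are illustrative.
import numpy as np

w = np.array([0.5, 0.3, 0.2])           # weights for 3 exposure criteria
R = np.array([[0.1, 0.6, 0.3, 0.0],     # membership of criterion 1 in 4 levels
              [0.0, 0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2, 0.0]])

b = w @ R                                # fuzzy evaluation vector over levels
level = int(np.argmax(b)) + 1            # defuzzify by maximum membership
print(b, "-> level", level)
```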

  4. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Directory of Open Access Journals (Sweden)

    S. Raia

    2014-03-01

    Full Text Available Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are based on deterministic laws. These models extend spatially the static stability models adopted in geotechnical engineering, and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the operation of the existing models lies in the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of rainfall-induced shallow landslides. For this purpose, we have modified the transient rainfall infiltration and grid-based regional slope-stability analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a probabilistic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. The outputs
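
    The cell-level Monte Carlo idea can be sketched as follows, using a standard infinite-slope factor-of-safety expression of the kind TRIGRS evaluates; the distributions and parameter values are illustrative assumptions, not the code's defaults.

```python
# Sketch of the Monte Carlo idea for a single grid cell, using a standard
# infinite-slope factor of safety. Distributions and values are illustrative
# assumptions, not the TRIGRS-P implementation.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
slope = np.radians(30.0)       # slope angle
depth = 2.0                    # failure depth z (m)
gamma_s, gamma_w = 20.0, 9.81  # unit weights of soil and water (kN/m^3)
psi = 0.5                      # pressure head from the infiltration model (m)

# Sample uncertain soil properties from assumed distributions
phi = np.radians(rng.normal(32.0, 2.0, n))   # friction angle
c = rng.normal(5.0, 1.5, n)                  # cohesion (kPa)

fs = (np.tan(phi) / np.tan(slope)
      + (c - psi * gamma_w * np.tan(phi))
        / (gamma_s * depth * np.sin(slope) * np.cos(slope)))
print("P(FS < 1) =", np.mean(fs < 1.0))      # probability of failure
```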

  5. A feature-based approach to modeling protein-DNA interactions.

    Directory of Open Access Journals (Sweden)

    Eilon Sharon

    Full Text Available Transcription factor (TF) binding to its DNA target site is a fundamental regulatory interaction. The most common model used to represent TF binding specificities is a position specific scoring matrix (PSSM), which assumes independence between binding positions. However, in many cases, this simplifying assumption does not hold. Here, we present feature motif models (FMMs), a novel probabilistic method for modeling TF-DNA interactions, based on log-linear models. Our approach uses sequence features to represent TF binding specificities, where each feature may span multiple positions. We develop the mathematical formulation of our model and devise an algorithm for learning its structural features from binding site data. We also developed a discriminative motif finder, which discovers de novo FMMs that are enriched in target sets of sequences compared to background sets. We evaluate our approach on synthetic data and on the widely used TF chromatin immunoprecipitation (ChIP) dataset of Harbison et al. We then apply our algorithm to high-throughput TF ChIP data from mouse and human, reveal sequence features that are present in the binding specificities of mouse and human TFs, and show that FMMs explain TF binding significantly better than PSSMs. Our FMM learning and motif finder software are available at http://genie.weizmann.ac.il/.

  6. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Full Text Available Background: A common challenge in systems biology is to infer mechanistic descriptions of a biological process given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results: As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion: In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
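
    The Gelman-Rubin potential scale reduction factor used as the convergence criterion is straightforward to compute; a minimal sketch follows, with synthetic chains standing in for MCMC output.

```python
# Sketch of the Gelman-Rubin potential scale reduction factor (R-hat) used
# as a convergence criterion; chains here are stand-ins for MCMC output.
import numpy as np

def gelman_rubin(chains):
    """chains: array of shape (m_chains, n_samples) for one parameter."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    w = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    b = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * w + b / n       # pooled variance estimate
    return np.sqrt(var_hat / w)             # R-hat; ~1 at convergence

rng = np.random.default_rng(0)
chains = rng.normal(size=(4, 5000))         # 4 well-mixed chains
print(gelman_rubin(chains))                 # close to 1.0
```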

  7. A model-based approach to identify binding sites in CLIP-Seq data.

    Directory of Open Access Journals (Sweden)

    Tao Wang

    Full Text Available Cross-linking immunoprecipitation coupled with high-throughput sequencing (CLIP-Seq) has made it possible to identify the targeting sites of RNA-binding proteins in various cell culture systems and tissue types on a genome-wide scale. Here we present a novel model-based approach (MiClip) to identify high-confidence protein-RNA binding sites from CLIP-seq datasets. This approach assigns a probability score to each potential binding site to help prioritize subsequent validation experiments. The MiClip algorithm has been tested on both HITS-CLIP and PAR-CLIP datasets. In the HITS-CLIP dataset, the signal/noise ratios of miRNA seed motif enrichment produced by the MiClip approach are between 17% and 301% higher than those of the ad hoc method for the top 10 most enriched miRNAs. In the PAR-CLIP dataset, the MiClip approach can identify ∼50% more validated binding targets than the original ad hoc method and two recently published methods. To facilitate the application of the algorithm, we have released an R package, MiClip (http://cran.r-project.org/web/packages/MiClip/index.html), and a public web-based graphical user interface (http://galaxy.qbrc.org/tool_runner?tool_id=mi_clip) for customized analysis.

  8. Rule-based approach to cognitive modeling of real-time decision making

    International Nuclear Information System (INIS)

    Thorndyke, P.W.

    1982-01-01

    Recent developments in the fields of cognitive science and artificial intelligence have made possible the creation of a new class of models of complex human behavior. These models, referred to as either expert or knowledge-based systems, describe the high-level cognitive processing undertaken by a skilled human to perform a complex, largely mental, task. Expert systems have been developed to provide simulations of skilled performance of a variety of tasks. These include problems of data interpretation, system monitoring and fault isolation, prediction, planning, diagnosis, and design. In general, such systems strive to produce prescriptive (error-free) behavior, rather than model descriptively the typical human's errorful behavior. However, some research has sought to develop descriptive models of human behavior using the same theoretical frameworks adopted by expert systems builders. This paper presents an overview of this theoretical framework and modeling approach, and indicates the applicability of such models to the development of a model of control room operators in a nuclear power plant. Such a model could serve several beneficial functions in plant design, licensing, and operation.

  9. A new LPV modeling approach using PCA-based parameter set mapping to design a PSS

    Directory of Open Access Journals (Sweden)

    Mohammad B. Abolhasani Jabali

    2017-01-01

    Full Text Available This paper presents a new methodology for the modeling and control of power systems based on an uncertain polytopic linear parameter-varying (LPV) approach using parameter set mapping with principal component analysis (PCA). An LPV representation of the power system dynamics is generated by linearization of its differential-algebraic equations about the transient operating points for some given specific faults containing the system nonlinear properties. The time response of the output signal in the transient state plays the role of the scheduling signal that is used to construct the LPV model. A set of sample points of the dynamic response is formed to generate an initial LPV model. PCA-based parameter set mapping is used to reduce the number of models and generate a reduced LPV model. This model is used to design a robust pole placement controller to assign the poles of the power system in a linear matrix inequality (LMI) region, such that the response of the power system has a proper damping ratio for all of the different oscillation modes. The proposed scheme is applied to controller synthesis of a power system stabilizer, and its performance is compared with a tuned standard conventional PSS using nonlinear simulation of a multi-machine power network. The results under various conditions show the robust performance of the proposed controller.
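
    The parameter set mapping step can be sketched generically: sampled trajectories of the scheduling parameters are projected onto a few principal components to obtain a reduced parameter set. The data below are synthetic stand-ins for linearization points of a power system model.

```python
# Generic sketch of PCA-based parameter set mapping: project sampled
# scheduling-parameter points onto a few principal components to obtain a
# reduced LPV parameter set. Data are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
samples = rng.normal(size=(200, 12))    # 200 operating points, 12 parameters

pca = PCA(n_components=2)               # keep 2 dominant directions
reduced = pca.fit_transform(samples)    # reduced scheduling parameters
print(pca.explained_variance_ratio_)    # variability retained per component
```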

  10. A new LPV modeling approach using PCA-based parameter set mapping to design a PSS.

    Science.gov (United States)

    Jabali, Mohammad B Abolhasani; Kazemi, Mohammad H

    2017-01-01

    This paper presents a new methodology for the modeling and control of power systems based on an uncertain polytopic linear parameter-varying (LPV) approach using parameter set mapping with principle component analysis (PCA). An LPV representation of the power system dynamics is generated by linearization of its differential-algebraic equations about the transient operating points for some given specific faults containing the system nonlinear properties. The time response of the output signal in the transient state plays the role of the scheduling signal that is used to construct the LPV model. A set of sample points of the dynamic response is formed to generate an initial LPV model. PCA-based parameter set mapping is used to reduce the number of models and generate a reduced LPV model. This model is used to design a robust pole placement controller to assign the poles of the power system in a linear matrix inequality (LMI) region, such that the response of the power system has a proper damping ratio for all of the different oscillation modes. The proposed scheme is applied to controller synthesis of a power system stabilizer, and its performance is compared with a tuned standard conventional PSS using nonlinear simulation of a multi-machine power network. The results under various conditions show the robust performance of the proposed controller.

  11. Quality assurance of in-situ measurements of land surface albedo: A model-based approach

    Science.gov (United States)

    Adams, Jennifer; Gobron, Nadine; Widlowski, Jean-Luc; Mio, Corrado

    2016-04-01

    This paper presents the development of a model-based framework for assessing the quality of in-situ measurements of albedo used to validate land surface albedo products. Using a 3D Monte Carlo Ray Tracing (MCRT) radiative transfer model, a quality assurance framework is built based on simulated field measurements of albedo within complex 3D canopies and under various illumination scenarios. This method provides an unbiased approach to assessing the quality of field measurements, and is also able to trace the contributions of the two main sources of uncertainty in field measurements of albedo: those resulting from (1) the field measurement protocol, such as the height or placement of the measurement within the canopy, and (2) intrinsic factors of the 3D canopy under the specific illumination conditions considered, such as canopy structure and landscape heterogeneity, tree heights, ecosystem type and season.

  12. Relativistic three-body quark model of light baryons based on hypercentral approach

    Science.gov (United States)

    Aslanzadeh, M.; Rajabi, A. A.

    2015-05-01

    In this paper, we have treated the light baryons as a relativistic three-body bound system. Inspired by lattice QCD calculations, we treated baryons as a spin-independent three-quark system within a relativistic three-quark model based on the three-particle Klein-Gordon equation. We present the analytical solution of the three-body Klein-Gordon equation, employing the constituent quark model based on a hypercentral approach through which two- and three-body forces are taken into account. In this way, the average energy values of the multiplets containing up, down and strange quarks are reproduced. To describe the hyperfine structure of the baryon, the splittings within the SU(6) multiplets are produced by the generalized Gürsey Radicati mass formula. The SU(6)-invariant potential considered is the popular "Coulomb-plus-linear" potential, and the spectra of strange and non-strange baryons are in general well reproduced.
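
    For reference, the hypercentral construction rests on the Jacobi coordinates of the three-quark system; a sketch of the standard definitions, with the quoted "Coulomb-plus-linear" form written with generic couplings τ and β:

```latex
% Jacobi coordinates for the three-quark system and the hyperradius x on
% which the hypercentral potential depends. The "Coulomb-plus-linear" form
% is written with illustrative coupling constants \tau and \beta.
\[
  \vec{\rho} = \frac{\vec{r}_1 - \vec{r}_2}{\sqrt{2}}, \qquad
  \vec{\lambda} = \frac{\vec{r}_1 + \vec{r}_2 - 2\vec{r}_3}{\sqrt{6}}, \qquad
  x = \sqrt{\rho^2 + \lambda^2}, \qquad
  V(x) = -\frac{\tau}{x} + \beta x
\]
```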

  13. Multisensory-Based Rehabilitation Approach: Translational Insights from Animal Models to Early Intervention

    Directory of Open Access Journals (Sweden)

    Giulia Purpura

    2017-07-01

    Full Text Available Multisensory processes permit the combination of several inputs, coming from different sensory systems, allowing for a coherent representation of biological events and facilitating adaptation to the environment. For these reasons, their application in neurological and neuropsychological rehabilitation has been expanded in the last decades. Recent studies on animal and human models have indicated, on the one hand, that multisensory integration matures gradually during post-natal life and that its development is closely linked to environment and experience and, on the other hand, that modality-specific information does not seem to benefit from redundancy across multiple sense modalities and is more readily perceived in unimodal than in multimodal stimulation. In this review, the development of multisensory processes is analyzed, highlighting the clinical effects of their manipulation for the rehabilitation of sensory disorders in animal and human models. In addition, new methods of early intervention based on a multisensory-based rehabilitation approach and their applications to different infant populations at risk of neurodevelopmental disabilities are discussed.

  14. An Appraisal Model Based on a Synthetic Feature Selection Approach for Students’ Academic Achievement

    Directory of Open Access Journals (Sweden)

    Ching-Hsue Cheng

    2017-11-01

    Full Text Available Obtaining necessary information (and even extracting hidden messages) from existing big data, and then transforming it into knowledge, is an important skill. Data mining technology has received increased attention in various fields in recent years because it can be used to find historical patterns and employ machine learning to aid in decision-making. When we find unexpected rules or patterns in data, they are likely to be of high value. This paper proposes a synthetic feature selection approach (SFSA), which is combined with a support vector machine (SVM) to extract patterns and find the key features that influence students’ academic achievement. To verify the proposed model, two databases, namely, “Student Profile” and “Tutorship Record”, were collected from an elementary school in Taiwan and concatenated into an integrated dataset based on students’ names as the research dataset. The results indicate the following: (1) the accuracy of the proposed feature selection approach is better than that of the Minimum-Redundancy-Maximum-Relevance (mRMR) approach; (2) the proposed model is better than the listed methods when the six least influential features have been deleted; and (3) the proposed model can enhance the accuracy and facilitate the interpretation of patterns in a hybrid-type dataset of students’ academic achievement.

  15. A Framework of Vertebra Segmentation Using the Active Shape Model-Based Approach

    Directory of Open Access Journals (Sweden)

    Mohammed Benjelloun

    2011-01-01

    Full Text Available We propose a medical image segmentation approach based on the Active Shape Model theory. We apply this method to cervical vertebra detection. The main advantage of this approach is the application of a statistical model created after a training stage. Thus, the knowledge and interaction of the domain expert intervene in this approach. Our application allows the use of two different models, that is, a global one (with several vertebrae) and a local one (with a single vertebra). Two modes of segmentation are also proposed: manual and semiautomatic. For the manual mode, only two points are selected by the user on a given image. The first point needs to be close to the lower anterior corner of the last vertebra and the second near the upper anterior corner of the first vertebra. These two points are required to initialize the segmentation process. We propose to use the Harris corner detector combined with three successive filters to carry out the semiautomatic process. The results obtained on a large set of X-ray images are very promising.

  16. Accounting for detectability in fish distribution models: an approach based on time-to-first-detection

    Directory of Open Access Journals (Sweden)

    Mário Ferreira

    2015-12-01

    Full Text Available Imperfect detection (i.e., failure to detect a species when the species is present) is increasingly recognized as an important source of uncertainty and bias in species distribution modeling. Although methods have been developed to solve this problem by explicitly incorporating variation in detectability in the modeling procedure, their use in freshwater systems remains limited. This is probably because most methods imply repeated sampling (≥ 2) of each location within a short time frame, which may be impractical or too expensive in most studies. Here we explore a novel approach to control for detectability based on time-to-first-detection, which requires only a single sampling occasion and so may find more general applicability in freshwaters. The approach uses a Bayesian framework to combine conventional occupancy modeling with techniques borrowed from parametric survival analysis, jointly modeling factors affecting the probability of occupancy and the time required to detect a species. To illustrate the method, we modeled large-scale factors (elevation, stream order and precipitation) affecting the distribution of six fish species in a catchment located in north-eastern Portugal, while accounting for factors potentially affecting detectability at sampling points (stream depth and width). Species detectability was most influenced by depth and to a lesser extent by stream width, and tended to increase over time for most species. Occupancy was consistently affected by stream order, elevation and annual precipitation. These species presented a widespread distribution with higher uncertainty in tributaries and upper stream reaches. This approach can be used to estimate sampling efficiency and provides a practical framework to incorporate variation in detection rates into fish distribution models.
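
    A minimal frequentist sketch of a time-to-first-detection occupancy likelihood (constant detection rate, no covariates; the paper fits a richer Bayesian version) is shown below with invented data.

```python
# Hedged sketch of a time-to-first-detection occupancy likelihood with a
# constant detection rate (exponential waiting time), fitted by maximum
# likelihood. Data values are invented for illustration.
import numpy as np
from scipy.optimize import minimize

t_max = 60.0                         # survey duration (minutes)
times = np.array([3.0, 12.0, 7.5])   # times to first detection, where detected
n_nondetect = 17                     # sites with no detection within t_max

def neg_log_lik(params):
    psi, lam = params                # occupancy probability, detection rate
    if not (0 < psi < 1) or lam <= 0:
        return np.inf
    ll_det = np.sum(np.log(psi * lam) - lam * times)
    ll_non = n_nondetect * np.log((1 - psi) + psi * np.exp(-lam * t_max))
    return -(ll_det + ll_non)

fit = minimize(neg_log_lik, x0=[0.5, 0.1], method="Nelder-Mead")
print("psi, lambda =", fit.x)
```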

  17. Formal verification of dynamic hybrid systems: a NuSMV-based model checking approach

    Directory of Open Access Journals (Sweden)

    Xu Zhi

    2018-01-01

    Full Text Available Software security is an important and challenging research topic in developing dynamic hybrid embedded software systems. Ensuring the correct behavior of these systems is particularly difficult due to the interactions between the continuous subsystem and the discrete subsystem. Currently available security analysis methods for system risks have been limited, as they rely on manual inspections of the individual subsystems under simplifying assumptions. To improve this situation, a new approach is proposed that is based on the symbolic model checking tool NuSMV. A dual PID system is used as an example system, for which the logical part and the computational part of the system are modeled in a unified manner. Constraints are constructed on the controlled object, and a counter-example path is ultimately generated, indicating that the hybrid system can be analyzed by the model checking tool.

  18. Modeling the Ductile Brittle Fracture Transition in Reactor Pressure Vessel Steels using a Cohesive Zone Model based approach

    Energy Technology Data Exchange (ETDEWEB)

    Pritam Chakraborty; S. Bulent Biner

    2013-10-01

    Fracture properties of Reactor Pressure Vessel (RPV) steels show large variations with changes in temperature and irradiation levels. Brittle behavior is observed at lower temperatures and/or higher irradiation levels whereas ductile mode of failure is predominant at higher temperatures and/or lower irradiation levels. In addition to such temperature and radiation dependent fracture behavior, significant scatter in fracture toughness has also been observed. As a consequence of such variability in fracture behavior, accurate estimates of fracture properties of RPV steels are of utmost importance for safe and reliable operation of reactor pressure vessels. A cohesive zone based approach is being pursued in the present study where an attempt is made to obtain a unified law capturing both stable crack growth (ductile fracture) and unstable failure (cleavage fracture). The parameters of the constitutive model are dependent on both temperature and failure probability. The effect of irradiation has not been considered in the present study. The use of such a cohesive zone based approach would allow the modeling of explicit crack growth at both stable and unstable regimes of fracture. Also it would provide the possibility to incorporate more physical lower length scale models to predict DBT. Such a multi-scale approach would significantly improve the predictive capabilities of the model, which is still largely empirical.
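
    As a point of reference for the constitutive ingredient, a simple bilinear traction-separation law of the kind used in cohesive zone models can be coded in a few lines; the study's unified, temperature- and probability-dependent law is more elaborate, and the parameters below are invented.

```python
# Illustrative bilinear traction-separation law of the kind used in cohesive
# zone models; the study's unified ductile/brittle law with temperature and
# failure-probability dependence is more elaborate. Parameters are made up.
import numpy as np

def bilinear_traction(delta, t_max=50.0, delta_0=0.002, delta_f=0.02):
    """Traction (MPa) vs. opening (mm): linear ramp to peak, then softening."""
    delta = np.asarray(delta, dtype=float)
    loading = t_max * delta / delta_0                        # up to the peak
    softening = t_max * (delta_f - delta) / (delta_f - delta_0)
    return np.where(delta <= delta_0, loading,
                    np.clip(softening, 0.0, None))           # 0 when failed

openings = np.linspace(0.0, 0.025, 6)
print(bilinear_traction(openings))
```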

  19. Modelling the ductile brittle fracture transition in reactor pressure vessel steels using a cohesive zone model based approach

    International Nuclear Information System (INIS)

    Chakraborty, Pritam; Bulent Biner, S.

    2015-01-01

    Fracture properties of Reactor Pressure Vessel (RPV) steels show large variations with changes in temperature and irradiation levels. Brittle behaviour is observed at lower temperatures and/or higher irradiation levels whereas ductile mode of failure is predominant at higher temperatures and/or lower irradiation levels. In addition to such temperature and radiation dependent fracture behaviour, significant scatter in fracture toughness has also been observed. As a consequence of such variability in fracture behaviour, accurate estimates of fracture properties of RPV steels are of utmost importance for safe and reliable operation of reactor pressure vessels. A cohesive zone based approach is being pursued in the present study where an attempt is made to obtain a unified law capturing both stable crack growth (ductile fracture) and unstable failure (cleavage fracture). The parameters of the constitutive model are dependent on both temperature and failure probability. The effect of irradiation has not been considered in the present study. The use of such a cohesive zone based approach would allow the modelling of explicit crack growth at both stable and unstable regimes of fracture. Also it would provide the possibility to incorporate more physical lower length scale models to predict DBT. Such a multi-scale approach would significantly improve the predictive capabilities of the model, which is still largely empirical. (authors)

  20. The distinction between heat and work: an approach based on a classical mechanical model

    CERN Document Server

    Besson, U

    2003-01-01

    The distinction between work and heat is obvious in most typical situations, but becomes difficult in certain critical cases. The subject is discussed in texts on thermodynamics and has long given rise to debate. This paper presents an approach based on a mesoscopic analysis, using a simple mechanical model, in which bodies are made up of particles (representing atoms and/or molecules) treated as material points interacting with forces that obey Newton's laws. The sum of the work done by these microscopic forces is split into two terms representing the macroscopic quantities work and heat.

  1. COMPETENCE-BASED APPROACH TO MODELLING STRUCTURES OF THE MAIN EDUCATIONAL PROGRAM

    Directory of Open Access Journals (Sweden)

    V. A. Gerasimova

    2015-01-01

    Full Text Available Based on an analysis of scientific works in the field of the competence-based approach in education, the authors demonstrate the need for computer support at the planning and development stage of the main educational program. They develop a graph-based model for the automatic formation of the main educational program structure, propose an integrated criterion for discipline assessment, and develop a strategic map for the comprehensive assessment of a discipline. These theoretical results provide a basis for creating an automated system to support the planning and development of the main educational program.

  2. Estimating impacts of climate change policy on land use: an agent-based modelling approach.

    Science.gov (United States)

    Morgan, Fraser J; Daigneault, Adam J

    2015-01-01

    Agriculture is important to New Zealand's economy. Like other primary producers, New Zealand strives to increase agricultural output while maintaining environmental integrity. Utilising modelling to explore the economic, environmental and land use impacts of policy is critical to understand the likely effects on the sector. Key deficiencies within existing land use and land cover change models are the lack of heterogeneity in farmers and their behaviour, the role that social networks play in information transfer, and the abstraction of the global and regional economic aspects within local-scale approaches. To resolve these issues we developed the Agent-based Rural Land Use New Zealand model. The model utilises a partial equilibrium economic model and an agent-based decision-making framework to explore how the cumulative effects of individual farmer's decisions affect farm conversion and the resulting land use at a catchment scale. The model is intended to assist in the development of policy to shape agricultural land use intensification in New Zealand. We illustrate the model, by modelling the impact of a greenhouse gas price on farm-level land use, net revenue, and environmental indicators such as nutrient losses and soil erosion for key enterprises in the Hurunui and Waiau catchments of North Canterbury in New Zealand. Key results from the model show that farm net revenue is estimated to increase over time regardless of the greenhouse gas price. Net greenhouse gas emissions are estimated to decline over time, even under a no GHG price baseline, due to an expansion of forestry on low productivity land. Higher GHG prices provide a greater net reduction of emissions. While social and geographic network effects have minimal impact on net revenue and environmental outputs for the catchment, they do have an effect on the spatial arrangement of land use and in particular the clustering of enterprises.
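
    A toy sketch of the kind of farm-level decision rule such an ABM applies is given below; the enterprise set, revenues, emission factors and GHG price are invented for illustration and are not the ARLUNZ model.

```python
# Toy sketch of an ABM-style land-use decision rule: each farmer agent picks
# the enterprise with the highest net revenue once a GHG price is charged on
# emissions. All names and numbers are invented for illustration.
import random

ENTERPRISES = {  # name: (net revenue $/ha, emissions tCO2e/ha)
    "dairy":    (1200.0, 8.0),
    "sheep":    (500.0, 3.0),
    "forestry": (300.0, -10.0),  # forestry acts as a net carbon sink
}

def best_enterprise(ghg_price, productivity):
    """Pick the enterprise maximizing revenue net of the GHG charge."""
    scored = {name: productivity * revenue - ghg_price * emissions
              for name, (revenue, emissions) in ENTERPRISES.items()}
    return max(scored, key=scored.get)

random.seed(0)
# Heterogeneous agents: low-productivity land tends to flip to forestry
farmers = [best_enterprise(50.0, random.uniform(0.5, 1.5)) for _ in range(500)]
print({name: farmers.count(name) for name in ENTERPRISES})
```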

  3. Developing scenarios to assess future landslide risks: a model-based approach applied to mountainous regions

    Science.gov (United States)

    Vacquie, Laure; Houet, Thomas

    2016-04-01

    In the last century, European mountain landscapes have experienced significant transformations. Natural and anthropogenic changes, climate change, touristic and industrial development, socio-economic interactions, and their implications in terms of LUCC (land use and land cover changes) have directly influenced the spatial organization and vulnerability of mountain landscapes. This study is conducted as part of the SAMCO project funded by the French National Science Agency (ANR). It aims at developing a methodological approach, combining various tools, modelling platforms and methods, to identify regions vulnerable to landslide hazards, accounting for future LUCC. It presents an integrated approach combining participative scenarios and LULC change simulation models to assess the combined effects of LUCC and climate change on landslide risks in the Cauterets valley (French Pyrenees Mountains) up to 2100. Through vulnerability and risk mapping, the objective is to gather information to support landscape planning and to implement land use strategies with local stakeholders for risk management. Four contrasting scenarios are developed, exhibiting contrasting trajectories of socio-economic development. The prospective scenarios are based on national and international socio-economic contexts relying on existing assessment reports. The methodological approach integrates knowledge from local stakeholders to refine each scenario during its construction and to reinforce its plausibility and relevance by accounting for local specificities, e.g. logging and pastoral activities, touristic development, urban planning, etc. A process-based model, the Forecasting Scenarios for Mountains (ForeSceM) model, developed on the Dinamica Ego modelling platform, is used to spatially allocate future LUCC for each prospective scenario. Concurrently, a spatial decision support tool, i.e. the SYLVACCESS model, is used to identify accessible areas for forestry in scenario projecting logging

  4. Availability modeling approach for future circular colliders based on the LHC operation experience

    Directory of Open Access Journals (Sweden)

    Arto Niemi

    2016-12-01

    Full Text Available Reaching the challenging integrated luminosity production goals of a future circular hadron collider (FCC-hh) and the high luminosity LHC (HL-LHC) requires a thorough understanding of today’s most powerful high energy physics research infrastructure, the LHC accelerator complex at CERN. FCC-hh, a 4-times-larger collider ring, aims at delivering 10–20 ab^{-1} of integrated luminosity at 7 times higher collision energy. Since the identification of the key factors that impact availability and cost is far from obvious, a dedicated activity has been launched in the frame of the future circular collider study to develop models to study possible ways to optimize accelerator availability. This paper introduces the FCC reliability and availability study, which takes a fresh new look at assessing and modeling the reliability and availability of particle accelerator infrastructures. The paper presents a probabilistic approach for Monte Carlo simulation of the machine operational cycle, schedule and availability for physics. The approach is based on best-practice, industrially applied reliability analysis methods. It relies on failure rate and repair time distributions to calculate impacts on availability. The main source of information for the study is CERN accelerator operation and maintenance data. Recent improvements in LHC failure tracking help improve the accuracy of modeling LHC performance. The model accuracy and prediction capabilities are discussed by comparing the obtained results with past LHC operational data.
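
    The operational-cycle simulation idea can be sketched in a few lines: alternate failure and repair durations drawn from assumed distributions over an operational period, then report the fraction of time available for physics. The distributions below are illustrative, not the study's fitted ones.

```python
# Sketch of a Monte Carlo availability simulation: alternate failure and
# repair times drawn from assumed distributions, then report the available
# fraction of the operational period. Distributions are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate_availability(horizon_h=5000.0, mtbf_h=80.0,
                          repair_mu=1.5, repair_sigma=0.8):
    t, uptime = 0.0, 0.0
    while t < horizon_h:
        run = rng.exponential(mtbf_h)                 # time to next failure
        uptime += min(run, horizon_h - t)
        t += run
        t += rng.lognormal(repair_mu, repair_sigma)   # repair duration
    return uptime / horizon_h

runs = [simulate_availability() for _ in range(200)]
print("mean availability:", np.mean(runs))
```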

  5. Availability modeling approach for future circular colliders based on the LHC operation experience

    Science.gov (United States)

    Niemi, Arto; Apollonio, Andrea; Gutleber, Johannes; Sollander, Peter; Penttinen, Jussi-Pekka; Virtanen, Seppo

    2016-12-01

    Reaching the challenging integrated luminosity production goals of a future circular hadron collider (FCC-hh) and the high luminosity LHC (HL-LHC) requires a thorough understanding of today's most powerful high energy physics research infrastructure, the LHC accelerator complex at CERN. FCC-hh, a 4-times-larger collider ring, aims at delivering 10–20 ab^{-1} of integrated luminosity at 7 times higher collision energy. Since the identification of the key factors that impact availability and cost is far from obvious, a dedicated activity has been launched in the frame of the future circular collider study to develop models to study possible ways to optimize accelerator availability. This paper introduces the FCC reliability and availability study, which takes a fresh new look at assessing and modeling the reliability and availability of particle accelerator infrastructures. The paper presents a probabilistic approach for Monte Carlo simulation of the machine operational cycle, schedule and availability for physics. The approach is based on best-practice, industrially applied reliability analysis methods. It relies on failure rate and repair time distributions to calculate impacts on availability. The main source of information for the study is CERN accelerator operation and maintenance data. Recent improvements in LHC failure tracking help improve the accuracy of modeling LHC performance. The model accuracy and prediction capabilities are discussed by comparing the obtained results with past LHC operational data.

  6. A model-based approach to predict muscle synergies using optimization: application to feedback control

    Directory of Open Access Journals (Sweden)

    Reza eSharif Razavian

    2015-10-01

    Full Text Available This paper presents a new model-based method to define muscle synergies. Unlike the conventional factorization approach, which extracts synergies from electromyographic data, the proposed method employs a biomechanical model and formally defines the synergies as the solution of an optimal control problem. As a result, the number of required synergies is directly related to the dimensions of the operational space. The estimated synergies are posture-dependent, which correlate well with the results of standard factorization methods. Two examples are used to showcase this method: a two-dimensional forearm model, and a three-dimensional driver arm model. It has been shown here that the synergies need to be task-specific (i.e., they are defined for the specific operational spaces: the elbow angle and the steering wheel angle in the two systems). This functional definition of synergies results in a low-dimensional control space, in which every force in the operational space is accurately created by a unique combination of synergies. As such, there is no need for extra criteria (e.g., minimizing effort) in the process of motion control. This approach is motivated by the need for fast and bio-plausible feedback control of musculoskeletal systems, and can have important implications in engineering, motor control, and biomechanics.
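
    The "unique combination" property can be illustrated with a small non-negative least-squares problem: given posture-specific synergy force vectors, solve for the activations that reproduce a desired operational-space force. The synergy matrix below is an invented stand-in for model-derived synergies.

```python
# Illustration of combining synergies to produce an operational-space force
# via non-negative least squares. The synergy matrix S is an invented
# stand-in for posture-specific, model-derived synergy force vectors.
import numpy as np
from scipy.optimize import nnls

S = np.array([[1.0, -0.8],       # columns: force produced by each synergy
              [0.2, 0.9]])       # rows: operational-space force components
f_desired = np.array([0.5, 0.7])

activations, residual = nnls(S, f_desired)   # non-negative activations
print("synergy activations:", activations, "residual:", residual)
```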

  7. A model-based approach to predict muscle synergies using optimization: application to feedback control.

    Science.gov (United States)

    Sharif Razavian, Reza; Mehrabi, Naser; McPhee, John

    2015-01-01

    This paper presents a new model-based method to define muscle synergies. Unlike the conventional factorization approach, which extracts synergies from electromyographic data, the proposed method employs a biomechanical model and formally defines the synergies as the solution of an optimal control problem. As a result, the number of required synergies is directly related to the dimensions of the operational space. The estimated synergies are posture-dependent, which correlate well with the results of standard factorization methods. Two examples are used to showcase this method: a two-dimensional forearm model, and a three-dimensional driver arm model. It has been shown here that the synergies need to be task-specific (i.e., they are defined for the specific operational spaces: the elbow angle and the steering wheel angle in the two systems). This functional definition of synergies results in a low-dimensional control space, in which every force in the operational space is accurately created by a unique combination of synergies. As such, there is no need for extra criteria (e.g., minimizing effort) in the process of motion control. This approach is motivated by the need for fast and bio-plausible feedback control of musculoskeletal systems, and can have important implications in engineering, motor control, and biomechanics.

  8. Population model of hippocampal pyramidal neurons, linking a refractory density approach to conductance-based neurons

    Science.gov (United States)

    Chizhov, Anton V.; Graham, Lyle J.

    2007-01-01

    We propose a macroscopic approach toward realistic simulations of the population activity of hippocampal pyramidal neurons, based on the known refractory density equation with a different hazard function and on a different single-neuron threshold model. The threshold model is a conductance-based model taking into account adaptation-providing currents, which is reduced by omitting the fast sodium current and instead using an explicit threshold criterion for action potential events. Compared to the full pyramidal neuron model, the threshold model well approximates spike-time moments, postspike refractory states, and postsynaptic current integration. The dynamics of a neural population continuum are described by a set of one-dimensional partial differential equations in terms of the distributions of the refractory density (where the refractory state is defined by the time elapsed since the last action potential), the membrane potential, and the gating variables of the voltage-dependent channels, across the entire population. As the source term in the density equation, the probability density of firing, or hazard function, is derived from the Fokker-Planck (FP) equation, assuming that a single neuron is governed by a deterministic average-across-population input and a noise term. A self-similar solution of the FP equation in the subthreshold regime is obtained. Responses of the ensemble to stimulation by a current step and oscillating current are simulated and compared with individual neuron simulations. An example of interictal-like activity of a population of all-to-all connected excitatory neurons is presented.

  9. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    Science.gov (United States)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
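
    The reoptimization loop can be sketched generically: a penumbra parameter of an idealized profile model is tuned so that the convolved calculated profile matches the chamber measurement. The profile model, detector response and data below are illustrative assumptions, not the TPS beam model.

```python
# Generic sketch of the reoptimization idea: tune a penumbra parameter (here
# the sigma of an erf-shaped field edge) so that the *convolved* calculated
# profile matches the chamber-measured one. All models/data are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import erf

x = np.linspace(-30, 30, 601)                    # off-axis position (mm)

def profile(sigma, half_width=20.0):
    """Idealized field-edge model: difference of two error functions."""
    return 0.5 * (erf((x + half_width) / (np.sqrt(2) * sigma))
                  - erf((x - half_width) / (np.sqrt(2) * sigma)))

def detector_response(radius=3.0):
    """Crude chamber response: uniform averaging over its radius."""
    kernel = (np.abs(x) <= radius).astype(float)
    return kernel / kernel.sum()

# Pretend measurement: a "true" sigma = 3 mm profile blurred by the chamber
measured = np.convolve(profile(3.0), detector_response(), mode="same")

def cost(sigma):
    calc = np.convolve(profile(sigma), detector_response(), mode="same")
    return np.sum((calc - measured) ** 2)

fit = minimize_scalar(cost, bounds=(0.5, 10.0), method="bounded")
print("recovered penumbra sigma:", fit.x)        # ~3.0 mm
```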

  10. A graph-based aspect interference detection approach for UML-based aspect-oriented models

    NARCIS (Netherlands)

    Ciraci, S.; Havinga, W.K.; Aksit, Mehmet; Bockisch, Christoph; van den Broek, P.M.

    2009-01-01

    Aspect Oriented Modeling (AOM) techniques facilitate separate modeling of concerns and allow for a more flexible composition of these than traditional modeling techniques. While this improves the understandability of each submodel, in order to reason about the behavior of the composed system and to

  11. The Effects of a Model-Based Physics Curriculum Program with a Physics First Approach: A Causal-Comparative Study

    Science.gov (United States)

    Liang, Ling L.; Fulmer, Gavin W.; Majerich, David M.; Clevenstine, Richard; Howanski, Raymond

    2012-01-01

    The purpose of this study is to examine the effects of a model-based introductory physics curriculum on conceptual learning in a Physics First (PF) Initiative. This is the first comparative study in physics education that applies the Rasch modeling approach to examine the effects of a model-based curriculum program combined with PF in the United…

  12. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Science.gov (United States)

    Raia, S.; Alvioli, M.; Rossi, M.; Baum, R.L.; Godt, J.W.; Guzzetti, F.

    2013-01-01

    Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are deterministic. These models extend spatially the static stability models adopted in geotechnical engineering and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the existing models is the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of shallow rainfall-induced landslides. For the purpose, we have modified the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a stochastic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. The outputs of several model runs obtained varying the input parameters
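    The essence of the TRIGRS-P idea for a single grid cell can be sketched as follows: sample the uncertain geotechnical parameters from probability distributions and build the distribution of the infinite-slope factor of safety. The distributions, the pore-pressure term and all values below are hypothetical placeholders; TRIGRS-P obtains transient pressure heads from its infiltration solution rather than the uniform sample used here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                   # Monte Carlo samples for one grid cell

# Illustrative parameter distributions (means and spreads are hypothetical):
c = rng.normal(5.0e3, 1.0e3, n)               # effective cohesion (Pa)
phi = np.radians(rng.normal(32.0, 3.0, n))    # friction angle (rad)
gamma_s = 19.0e3                              # soil unit weight (N/m^3)
gamma_w = 9.81e3                              # water unit weight (N/m^3)
z, beta = 2.0, np.radians(30.0)               # failure depth (m), slope angle
psi = rng.uniform(0.0, z, n)                  # pressure head placeholder (m)

# Infinite-slope factor of safety with pore pressure u:
u = gamma_w * psi * np.cos(beta) ** 2
fs = (c + (gamma_s * z * np.cos(beta) ** 2 - u) * np.tan(phi)) / (
    gamma_s * z * np.sin(beta) * np.cos(beta)
)
print(f"P(FS < 1) = {np.mean(fs < 1.0):.3f}")  # probability of failure for the cell
```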

  13. Loss Modeling with a Data-Driven Approach in Event-Based Rainfall-Runoff Analysis

    Science.gov (United States)

    Chua, L. H. C.

    2012-04-01

Mathematical models require the estimation of rainfall abstractions for accurate predictions of runoff. Although loss models such as the constant loss and exponential loss models are commonly used, these methods are based on simplified assumptions of the physical process. A new approach based on the data driven paradigm to estimate rainfall abstractions is proposed in this paper. The proposed data driven model, based on the artificial neural network (ANN), does not make any assumptions on the loss behavior. The estimated discharge from a physically-based model, obtained from the kinematic wave (KW) model assuming zero losses, was used as the only input to the ANN. The output is the measured discharge. Thus, the ANN functions as a black-box loss model. Two sets of data were analyzed for this study. The first dataset consists of rainfall and runoff data, measured from an artificial catchment (area = 25 m2) comprising two overland planes (slope = 11%), 25 m long, transversely inclined towards a rectangular channel (slope = 2%) which conveyed the flow, recorded using calibrated weigh tanks, to the outlet. Two rain gauges, each placed 6.25 m from either end of the channel, were used to record rainfall. Data for six storm events over the period between October 2002 and December 2002 were analyzed. The second dataset was obtained from the Upper Bukit Timah catchment (area = 6.4 km2) instrumented with two rain gauges and a flow measuring station. A total of six events recorded between November 1987 and July 1988 were selected for this study. The runoff predicted by the ANN was compared with the measured runoff. In addition, results from KW models developed for both the catchments were used as a benchmark. The KW models were calibrated assuming the loss rate for an average event for each of the datasets. The results from both the ANN and KW models agreed well with the runoff measured from the artificial catchment. The KW model is expected to perform well since the catchment
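    A minimal sketch of the black-box loss model described above, with the zero-loss kinematic wave discharge as the single ANN input; the synthetic data stand in for the catchment records, and the layer size and training settings are arbitrary:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Stand-ins for the study's data: KW discharge computed with zero losses (input)
# and the measured discharge (target), here generated synthetically.
q_kw_zero_loss = rng.gamma(2.0, 2.0, size=(500, 1))
q_measured = np.maximum(0.7 * q_kw_zero_loss[:, 0] - 0.5
                        + rng.normal(0, 0.1, 500), 0.0)

ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
ann.fit(q_kw_zero_loss, q_measured)           # learns the loss behaviour implicitly
q_predicted = ann.predict(q_kw_zero_loss)     # runoff with losses applied
```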

  14. Computational model of precision grip in Parkinson’s disease: A Utility based approach

    Directory of Open Access Journals (Sweden)

    Ankur eGupta

    2013-12-01

We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson's Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Fellows et al., 1998; Ingvarsson et al., 1997). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest a contribution of the Basal Ganglia, a deep brain system having a crucial role in translating dopamine signals to decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between the two fingers involved in PG and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning, with the significant difference that action selection is performed using a utility distribution instead of a purely value-based distribution, thereby incorporating risk-based decision making. The proposed model is able to account for the precision grip results from normal subjects and PD patients accurately (Fellows et al., 1998; Ingvarsson et al., 1997). To our knowledge this is the first model of precision grip in PD conditions.
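    The distinctive step, selecting actions on a risk-adjusted utility rather than on value alone, can be sketched compactly. The mean-minus-scaled-deviation utility and all numbers below are illustrative assumptions, not the paper's Basal Ganglia formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
q_mean = np.array([0.8, 1.0, 0.9])         # value estimates for three grip actions
q_var = np.array([0.10, 0.40, 0.05])       # outcome variance per action
alpha = 0.5                                # risk sensitivity (could vary with dopamine state)

utility = q_mean - alpha * np.sqrt(q_var)  # risk-adjusted utility, not pure value
p = np.exp(utility / 0.1)
p /= p.sum()                               # softmax over utility
action = rng.choice(len(p), p=p)           # selected grip-force adjustment
```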

  15. A Tree-based Approach for Modelling Interception Loss From Evergreen Oak Mediterranean Savannas

    Science.gov (United States)

    Pereira, Fernando L.; Gash, John H. C.; David, Jorge S.; David, Teresa S.; Monteiro, Paulo R.; Valente, Fernanda

    2010-05-01

    woodlands in southern Portugal. For both sites, simulated interception loss agreed well with the observations indicating the adequacy of this new methodology for modelling interception loss by isolated trees in savanna-type ecosystems. Furthermore, the proposed approach is physically based and requires only a limited amount of data. Interception loss for the entire forest can be estimated by scaling up the evaporation from individual trees accounting for the number of trees per unit area.

  16. A Fault Diagnosis Approach for Gears Based on IMF AR Model and SVM

    Directory of Open Access Journals (Sweden)

    Yu Yang

    2008-05-01

An accurate autoregressive (AR) model can reflect the characteristics of a dynamic system, based on which the fault features of a gear vibration signal can be extracted without constructing a mathematical model or studying the fault mechanism of the gear vibration system, steps that time-frequency analysis methods require. However, the AR model can only be applied to stationary signals, while gear fault vibration signals usually present nonstationary characteristics. Therefore, empirical mode decomposition (EMD), which can decompose the vibration signal into a finite number of intrinsic mode functions (IMFs), is introduced into feature extraction of gear vibration signals as a preprocessor before AR models are generated. On the other hand, given the difficulty of obtaining sufficient fault samples in practice, the support vector machine (SVM) is introduced into gear fault pattern recognition. In the method proposed in this paper, vibration signals are first decomposed into a finite number of intrinsic mode functions, then the AR model of each IMF component is established; finally, the corresponding autoregressive parameters and the variance of the remnant are regarded as fault characteristic vectors and used as input parameters of an SVM classifier to classify the working condition of gears. The experimental analysis results show that the proposed approach, in which the IMF AR model and SVM are combined, can identify the working condition of gears with a success rate of 100% even with a small number of samples.
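    A sketch of this EMD-AR-SVM pipeline, assuming the PyEMD package for the decomposition; the signals, labels and all hyperparameters are synthetic stand-ins for real gearbox vibration records, and real use must also guard against records yielding fewer IMFs than requested:

```python
import numpy as np
from PyEMD import EMD                       # assumes the PyEMD package is installed
from statsmodels.tsa.ar_model import AutoReg
from sklearn.svm import SVC

def feature_vector(signal, n_imfs=4, ar_order=8):
    """AR parameters plus remnant variance of the first few IMFs of one record."""
    imfs = EMD().emd(signal)[:n_imfs]       # assumes >= n_imfs IMFs are returned
    feats = []
    for imf in imfs:
        res = AutoReg(imf, lags=ar_order).fit()
        feats.extend(res.params)            # autoregressive coefficients
        feats.append(res.sigma2)            # variance of the remnant
    return np.asarray(feats)

# X: one vibration record per row; y: gear condition label (synthetic stand-ins
# here; real records come from accelerometer measurements on the gearbox).
rng = np.random.default_rng(0)
X = np.stack([feature_vector(rng.standard_normal(1024)) for _ in range(20)])
y = rng.integers(0, 2, 20)
clf = SVC(kernel="rbf").fit(X, y)           # small-sample classifier, as in the paper
```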

  17. A model-based prognostic approach to predict interconnect failure using impedance analysis

    Energy Technology Data Exchange (ETDEWEB)

Kwon, Dae Il; Yoon, Jeong Ah [Dept. of System Design and Control Engineering, Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)]

    2016-10-15

The reliability of electronic assemblies is largely affected by the health of interconnects, such as solder joints, which provide mechanical, electrical and thermal connections between circuit components. During field lifecycle conditions, interconnects are often subjected to a DC open circuit, one of the most common interconnect failure modes, due to cracking. An interconnect damaged by cracking is sometimes extremely hard to detect when it is part of a daisy-chain structure, neighboring other healthy interconnects that have not yet cracked. This cracked interconnect may seem to provide a good electrical contact due to the compressive load applied by the neighboring healthy interconnects, but it can cause the occasional loss of electrical continuity under operational and environmental loading conditions in field applications. Thus, cracked interconnects can lead to the intermittent failure of electronic assemblies and eventually to permanent failure of the product or the system. This paper introduces a model-based prognostic approach to quantitatively detect and predict interconnect failure using impedance analysis and particle filtering. Impedance analysis was previously reported as a sensitive means of detecting incipient changes at the surface of interconnects, such as cracking, based on the continuous monitoring of RF impedance. To predict the time to failure, particle filtering was used as a prognostic approach using the Paris model to address fatigue crack growth. To validate this approach, mechanical fatigue tests were conducted with continuous monitoring of RF impedance while degrading the solder joints under test through fatigue cracking. The test results showed the RF impedance consistently increased as the solder joints degraded due to the growth of cracks, and particle filtering predicted the time to failure of the interconnects close to their actual times to failure, based on the early sensitivity of RF impedance.
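    A hedged sketch of the prognostic loop: Paris-law crack growth propagated by a particle filter, with RF impedance assumed to increase monotonically with crack length. Every constant and the impedance-versus-crack map below are illustrative, not the calibrated values of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
m, dsigma, dN = 3.0, 40.0, 1e5          # Paris exponent, stress range (MPa), cycles/step

def grow(a, C):
    dK = dsigma * np.sqrt(np.pi * a)    # stress intensity range (MPa*sqrt(m))
    return a + C * dK ** m * dN         # Paris law: da/dN = C * dK^m

def impedance(a):
    return 50.0 + 1.0e6 * a             # assumed monotone impedance map (ohm)

# Synthetic "monitoring" stream generated from a true crack with C = 1e-11:
a_true, z_meas = 1.0e-4, []
for _ in range(40):
    a_true = grow(a_true, 1.0e-11)
    z_meas.append(impedance(a_true) + rng.normal(0, 0.2))

# Particle filter over crack length and the uncertain Paris coefficient C:
n_p = 2000
a = np.full(n_p, 1.0e-4)
logC = rng.normal(np.log(1e-11), 0.5, n_p)
for z in z_meas:
    a = grow(a, np.exp(logC))                            # predict
    w = np.exp(-0.5 * ((z - impedance(a)) / 0.2) ** 2)
    w /= w.sum()                                         # measurement update
    idx = rng.choice(n_p, n_p, p=w)                      # resample
    a, logC = a[idx], logC[idx]
    logC += rng.normal(0, 0.02, n_p)                     # jitter against degeneracy
# Time to failure: keep propagating particles until a exceeds a critical length.
```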

  18. Physiologically Based Pharmacokinetic (PBPK) Modeling and Simulation Approaches: A Systematic Review of Published Models, Applications, and Model Verification

    Science.gov (United States)

    Sager, Jennifer E.; Yu, Jingjing; Ragueneau-Majlessi, Isabelle

    2015-01-01

Modeling and simulation of drug disposition has emerged as an important tool in drug development, clinical study design and regulatory review, and the number of physiologically based pharmacokinetic (PBPK) modeling related publications and regulatory submissions has risen dramatically in recent years. However, the extent of use of PBPK modeling by researchers, and the public availability of models, has not been systematically evaluated. This review evaluates PBPK-related publications to 1) identify the common applications of PBPK modeling; 2) determine ways in which models are developed; 3) establish how model quality is assessed; and 4) provide a list of publicly available PBPK models for sensitive P450 and transporter substrates as well as selective inhibitors and inducers. PubMed searches were conducted using the terms “PBPK” and “physiologically based pharmacokinetic model” to collect published models. Only papers on PBPK modeling of pharmaceutical agents in humans published in English between 2008 and May 2015 were reviewed. A total of 366 PBPK-related articles met the search criteria, with the number of articles published per year rising steadily. Published models were most commonly used for drug-drug interaction predictions (28%), followed by interindividual variability and general clinical pharmacokinetic predictions (23%), formulation or absorption modeling (12%), and predicting age-related changes in pharmacokinetics and disposition (10%). In total, 106 models of sensitive substrates, inhibitors, and inducers were identified. An in-depth analysis of the model development and verification revealed a lack of consistency in model development and quality assessment practices, demonstrating a need for development of best-practice guidelines. PMID:26296709

  19. A model-based approach for sustainability and value assessment in the aerospace value chain

    Directory of Open Access Journals (Sweden)

    Marco Bertoni

    2015-06-01

In the aerospace industry, systems engineering practices have been exercised for years as a way to turn high-level design objectives into concrete targets on system functionality (e.g. range, noise, and reliability). More difficult is to decompose and clarify sustainability implications in the same way and to compare them against performance-related capabilities already during preliminary design. This article addresses the problem of bringing the important, yet typically high-level and complex, sustainability aspects into engineering practices. It proposes a novel integrated model-based method that provides a consistent way of addressing the well-known lack of generic and integrated ways of clarifying both cost and value consequences of sustainability in early phases. It further presents the development and implementation of such an approach in two separate case studies conducted in collaboration with a major aero-engine sub-system manufacturer. The first case concerns the assessment of alternative business configurations to maintain scarce materials in closed loops, while the second concerns the production technology of an aero-engine component. Finally, this article highlights the learning generated by the development and implementation of these approaches and discusses opportunities for further development of model-based support.

  20. Model-based system engineering approach for the Euclid mission to manage scientific and technical complexity

    Science.gov (United States)

    Lorenzo Alvarez, Jose; Metselaar, Harold; Amiaux, Jerome; Saavedra Criado, Gonzalo; Gaspar Venancio, Luis M.; Salvignol, Jean-Christophe; Laureijs, René J.; Vavrek, Roland

    2016-08-01

In recent years, the systems engineering field has been coming to terms with a paradigm change in the approach to complexity management. Different strategies have been proposed to cope with highly interrelated systems; system-of-systems and collaborative system engineering approaches have emerged, and a significant effort is being invested into standardization and ontology definition. In particular, Model Based System Engineering (MBSE) intends to introduce methodologies for systematic system definition, development, validation, deployment, operation and decommissioning, based on logical and visual relationship mapping rather than traditional 'document based' information management. The practical implementation in real large-scale projects is not uniform across fields. In space science missions, usage has been limited to subsystems or sample projects, with modeling being performed 'a-posteriori' in many instances. The main hurdle for the introduction of MBSE practices in new projects is still the difficulty of demonstrating their added value to a project and whether their benefit is commensurate with the level of effort required to put them in place. In this paper we present the implemented Euclid system modeling activities, and an analysis of the benefits and limitations identified to support, in particular, requirement break-down and allocation, and verification planning at mission level.

  1. State space model-based trust evaluation over wireless sensor networks: an iterative particle filter approach

    Directory of Open Access Journals (Sweden)

    Bin Liu

    2017-03-01

In this study, the authors propose a state space modelling approach for trust evaluation in wireless sensor networks. In their state space trust model (SSTM), each sensor node is associated with a trust metric, which measures the extent to which the data transmitted from this node should be trusted by the server node. Given the SSTM, they translate the trust evaluation problem into a non-linear state filtering problem. To estimate the state based on the SSTM, a component-wise iterative state inference procedure is proposed to work in tandem with the particle filter (PF), and the resulting algorithm is termed the iterative PF (IPF). The computational complexity of the IPF algorithm is theoretically linear in the dimension of the state. This property is desirable especially for high-dimensional trust evaluation and state filtering problems. The performance of the proposed algorithm is evaluated by both simulations and real data analysis.

  2. An acoustic-convective splitting-based approach for the Kapila two-phase flow model

    Energy Technology Data Exchange (ETDEWEB)

Eikelder, M.F.P. ten, E-mail: m.f.p.teneikelder@tudelft.nl [EDF R&D, AMA, 7 boulevard Gaspard Monge, 91120 Palaiseau (France); Eindhoven University of Technology, Department of Mathematics and Computer Science, P.O. Box 513, 5600 MB Eindhoven (Netherlands)]; Daude, F. [EDF R&D, AMA, 7 boulevard Gaspard Monge, 91120 Palaiseau (France); IMSIA, UMR EDF-CNRS-CEA-ENSTA 9219, Université Paris Saclay, 828 Boulevard des Maréchaux, 91762 Palaiseau (France)]; Koren, B.; Tijsseling, A.S. [Eindhoven University of Technology, Department of Mathematics and Computer Science, P.O. Box 513, 5600 MB Eindhoven (Netherlands)]

    2017-02-15

    In this paper we propose a new acoustic-convective splitting-based numerical scheme for the Kapila five-equation two-phase flow model. The splitting operator decouples the acoustic waves and convective waves. The resulting two submodels are alternately numerically solved to approximate the solution of the entire model. The Lagrangian form of the acoustic submodel is numerically solved using an HLLC-type Riemann solver whereas the convective part is approximated with an upwind scheme. The result is a simple method which allows for a general equation of state. Numerical computations are performed for standard two-phase shock tube problems. A comparison is made with a non-splitting approach. The results are in good agreement with reference results and exact solutions.
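    The alternate-solve structure of such a splitting scheme can be illustrated generically. The two sub-operators below (first-order upwind advection and an exactly solvable relaxation) are deliberate stand-ins, not the paper's HLLC-type acoustic solver and upwind convective scheme for the Kapila model:

```python
import numpy as np

nx, dt, c = 200, 0.005, 1.0
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)          # initial profile

def convective_step(u, dt):
    """First-order upwind advection at speed c on a periodic domain."""
    dx = x[1] - x[0]
    return u - c * dt / dx * (u - np.roll(u, 1))

def acoustic_step(u, dt, k=5.0):
    """Stand-in 'acoustic' sub-operator: a relaxation solved exactly."""
    return u * np.exp(-k * dt)

for _ in range(100):
    u = acoustic_step(u, dt)                 # solve submodel 1...
    u = convective_step(u, dt)               # ...then submodel 2, alternately
```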

  3. Integration of Bioreactor and Membrane Separation Processes: A Model Based Approach

    DEFF Research Database (Denmark)

    Prado Rubio, Oscar Andres

This work is motivated by the need for tighter integration of industrial processes in an attempt to improve process sustainability. To this end, the work considers an interesting case study around which different systematic approaches are used or developed to achieve the above goal. The thesis is concerned with the understanding of integrated bioreactor and electrically driven membrane separation processes for lactic acid fermentation. This is achieved through a model based investigation of the individual units and the integrated system. Development of system understanding is the key to reveal … test. Satisfactory results are obtained regulating the pH and managing the input constraints. The design and operability of the integrated bioreactor and REED module are investigated using the developed models and control structure. The study involves two different case studies: continuous lactic acid …

  4. Using the Dynamic Model to develop an evidence-based and theory-driven approach to school improvement

    NARCIS (Netherlands)

    Creemers, B.P.M.; Kyriakides, L.

    2010-01-01

    This paper refers to a dynamic perspective of educational effectiveness and improvement stressing the importance of using an evidence-based and theory-driven approach. Specifically, an approach to school improvement based on the dynamic model of educational effectiveness is offered. The recommended

  5. Modeling Psychological Contract Violation using Dual Regime Models: An Event-based Approach

    Directory of Open Access Journals (Sweden)

    Joeri Hofmans

    2017-11-01

A good understanding of the dynamics of psychological contract violation requires theories, research methods and statistical models that explicitly recognize that violation feelings follow from an event that violates one's acceptance limits, after which interpretative processes are set into motion, determining the intensity of these violation feelings. Whereas theories (in the form of the dynamic model of the psychological contract) and research methods (in the form of daily diary research and experience sampling research) are available by now, the statistical tools to model such a two-stage process are still lacking. The aim of the present paper is to fill this gap in the literature by introducing two statistical models, the Zero-Inflated model and the Hurdle model, that closely mimic the theoretical process underlying the elicitation of violation feelings via two model components: a binary distribution that models whether violation has occurred or not, and a count distribution that models how severe the negative impact is. Moreover, covariates can be included for both model components separately, which yields insight into their unique and shared antecedents. By doing this, the present paper offers a methodological-substantive synergy, showing how sophisticated methodology can be used to examine an important substantive issue.
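    A minimal sketch of the hurdle idea on simulated diary-style data: a logit component for whether violation occurred, and a count component fitted on the positive scores only (a zero-truncated count model is the textbook choice; plain Poisson on the positive subset is used here as a simple approximation):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
breach = rng.normal(size=n)                          # daily predictor, e.g. breach events
occurred = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.5 * breach))))
severity = np.where(occurred, rng.poisson(np.exp(0.2 + 0.6 * breach)) + 1, 0)

X = sm.add_constant(breach)
logit_part = sm.Logit(occurred, X).fit(disp=0)       # did a violation occur?
pos = severity > 0
count_part = sm.Poisson(severity[pos], X[pos]).fit(disp=0)  # how severe was it?
print(logit_part.params, count_part.params)          # separate antecedent effects
```

    Fitting the two components separately is what lets a covariate influence the occurrence of violation and its severity differently, the "unique and shared antecedents" mentioned above.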

  6. An open, object-based modeling approach for simulating subsurface heterogeneity

    Science.gov (United States)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.

    2017-12-01

    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.

  7. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach.

    Science.gov (United States)

    Jordanous, Anna; Keller, Bill

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research.

  8. Inverse modeling of hydraulic tests in fractured crystalline rock based on a transition probability geostatistical approach

    Science.gov (United States)

    Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel

    2011-12-01

    This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of the Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km3 that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on a transitional probability and Markov chain models is used to define a conceptual model based on stochastic fractured rock facies. Four facies are defined, from sparsely fractured bedrock to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration using either hydraulic heads or both hydraulic heads and PFL flow rates as targets for calibration. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, therefore reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models to simulate fluid flow in fractured geological media.

  10. Modeling cell adhesion and proliferation: a cellular-automata based approach.

    Science.gov (United States)

    Vivas, J; Garzón-Alvarado, D; Cerrolaza, M

Cell adhesion is a process that involves the interaction between the cell membrane and another surface, either a cell or a substrate. Unlike experimental tests, computer models can simulate processes and study the results of experiments in a shorter time and at lower cost. One of the tools used to simulate biological processes is the cellular automaton, a dynamic system that is discrete in both space and time. This work describes a computer model based on cellular automata for the adhesion and cell proliferation process, to predict the behavior of a cell population in suspension and adhered to a substrate. The values of the simulated system were obtained through experimental tests on fibroblast monolayer cultures. The results allow us to estimate the settling time of the cells in culture as well as the adhesion and proliferation time. The change in cell morphology as adhesion over the contact surface progressed was also observed. The formation of the initial link between the cell and the substrate was observed after 100 min, with the cell on the substrate retaining its spherical morphology during the simulation. The cellular automata model developed is, however, a simplified representation of the steps in the adhesion process and the subsequent proliferation. A combined framework of experimental and computational simulation based on cellular automata was proposed to represent fibroblast adhesion on substrates and the macro-scale changes observed in the cell during the adhesion process. The approach proved to be simple and efficient.
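    A minimal cellular-automata sketch of the two processes, adhesion of suspended cells to empty substrate sites and proliferation into empty neighbours, with per-step probabilities that are illustrative placeholders rather than the calibrated fibroblast values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_adhere, p_divide = 100, 0.05, 0.02   # grid size and per-step probabilities
substrate = np.zeros((n, n), dtype=int)   # 0 = empty site, 1 = adhered cell
suspended = 500                           # cells still in suspension
moves = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])

for step in range(300):
    # Adhesion: each suspended cell may attach to a randomly chosen empty site.
    for _ in range(rng.binomial(suspended, p_adhere)):
        i, j = rng.integers(0, n, 2)
        if substrate[i, j] == 0:
            substrate[i, j] = 1
            suspended -= 1
    # Proliferation: an adhered cell may divide into an empty 4-neighbour site.
    cells = np.argwhere(substrate == 1)
    for i, j in cells[rng.random(len(cells)) < p_divide]:
        di, dj = moves[rng.integers(0, 4)]
        ni, nj = (i + di) % n, (j + dj) % n
        if substrate[ni, nj] == 0:
            substrate[ni, nj] = 1

print(f"adhered: {substrate.sum()}, still in suspension: {suspended}")
```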

  11. An Ionospheric Index Model based on Linear Regression and Neural Network Approaches

    Science.gov (United States)

    Tshisaphungo, Mpho; McKinnell, Lee-Anne; Bosco Habarulema, John

    2017-04-01

The ionosphere is well known to reflect radio wave signals in the high frequency (HF) band due to the presence of electrons and ions within the region. To optimise the use of long distance HF communications, it is important to understand the drivers of ionospheric storms and accurately predict the propagation conditions, especially during disturbed days. This paper presents the development of an ionospheric storm-time index over the South African region for the benefit of HF communication users. The model will provide a valuable tool for measuring the complex ionospheric behaviour in an operational space weather monitoring and forecasting environment. The development of the ionospheric storm-time index is based on data from a single ionosonde station at Grahamstown (33.3°S, 26.5°E), South Africa. Critical frequency of the F2 layer (foF2) measurements for the period 1996-2014 were considered for this study. The model was developed based on linear regression and neural network approaches. In this talk, validation results for low, medium and high solar activity periods will be discussed to demonstrate the model's performance.

  12. State-Space Modeling and Performance Analysis of Variable-Speed Wind Turbine Based on a Model Predictive Control Approach

    Directory of Open Access Journals (Sweden)

    H. Bassi

    2017-04-01

Advancements in wind energy technologies have led wind turbines from fixed-speed to variable-speed operation. This paper introduces an innovative version of a variable-speed wind turbine based on a model predictive control (MPC) approach. The proposed approach provides maximum power point tracking (MPPT), whose main objective is to capture the maximum wind energy in spite of the variable nature of the wind's speed. The proposed MPC approach also reduces the constraints of the two main functional parts of the wind turbine: the full load and partial load segments. The pitch angle for full load and the rotating force for partial load are fixed concurrently in order to balance power generation as well as to reduce pitch angle operations. A mathematical analysis of the proposed system using a state-space approach is introduced. The simulation results using MATLAB/SIMULINK show that the performance of the wind turbine with the MPC approach is improved compared to the traditional PID controller at both low and high wind speeds.
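    The receding-horizon mechanics can be sketched on a crude one-state linearization; all matrices, limits and weights below are invented placeholders, whereas a turbine MPC would use the identified state-space model with separate full-load and partial-load constraint sets. The sketch assumes the cvxpy package:

```python
import numpy as np
import cvxpy as cp

A, B = np.array([[0.95]]), np.array([[0.1]])   # x: rotor-speed deviation; u: torque/pitch move
x0, x_ref, T = np.array([2.0]), 0.0, 20        # initial state, setpoint, horizon

x = cp.Variable((1, T + 1))
u = cp.Variable((1, T))
cost, cons = 0, [x[:, 0] == x0]
for k in range(T):
    cost += cp.square(x[0, k + 1] - x_ref) + 0.1 * cp.square(u[0, k])
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
             cp.abs(u[0, k]) <= 1.0]           # actuator limit (e.g. pitch-rate bound)
cp.Problem(cp.Minimize(cost), cons).solve()
u_apply = u.value[0, 0]                        # apply only the first move, then re-solve
```

    Applying only the first move and re-solving at the next sample is what distinguishes MPC from a fixed PID law: constraints are respected explicitly at every step.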

  13. Quantifying unpredictability: A multiple-model approach based on satellite imagery data from Mediterranean ponds.

    Directory of Open Access Journals (Sweden)

    Lluis Franch-Gras

Fluctuations in environmental parameters are increasingly being recognized as essential features of any habitat. Quantifying whether environmental fluctuations are prevalently predictable or unpredictable is remarkably relevant to understanding the evolutionary responses of organisms. However, when characterizing the relevant features of natural habitats, ecologists typically face two problems: (1) gathering long-term data and (2) handling the hard-won data. This paper takes advantage of free access to long-term recordings of remote sensing data (27 years, Landsat TM/ETM+) to assess a set of environmental models for estimating environmental predictability. The case study included 20 Mediterranean saline ponds and lakes, and the focal variable was the water-surface area. This study first aimed to produce a method for accurately estimating the water-surface area from satellite images. Saline ponds can develop salt-crusted areas that make it difficult to distinguish between soil and water. This challenge was addressed using a novel pipeline that combines band ratio water indices and the short near-infrared band as a salt filter. The study then extracted the predictable and unpredictable components of variation in the water-surface area. Two different approaches, each showing variations in the parameters, were used to obtain the stochastic variation around a regular pattern, with the objective of dissecting the effect of assumptions on predictability estimations. The first approach, which is based on Colwell's predictability metrics, transforms the focal variable into a nominal one. The resulting discrete categories define the relevant variations in the water-surface area. In the second approach, we introduced Generalized Additive Model (GAM) fitting as a new metric for quantifying predictability. Both approaches produced a wide range of predictability for the studied ponds. Some model assumptions, which are considered very different a priori
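    For the first approach, Colwell's metrics can be computed directly from a time-class-by-state contingency table of the discretized water-surface area. The matrix below is a toy example; in the study each cell would count Landsat observations:

```python
import numpy as np

def colwell(N):
    """Colwell's constancy (C), contingency (M) and predictability (P = C + M).

    N[t, s] counts years in which time class t (e.g. month) was in state s.
    """
    total = N.sum()
    px = N.sum(axis=1) / total            # time-class marginals
    py = N.sum(axis=0) / total            # state marginals
    pxy = N.flatten() / total
    H = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    s = N.shape[1]                        # number of states
    constancy = 1.0 - H(py) / np.log(s)
    contingency = (H(px) + H(py) - H(pxy)) / np.log(s)
    return constancy, contingency, constancy + contingency

N = np.array([[9, 1, 0],                  # e.g. month 1 almost always in state 0
              [1, 8, 1],
              [0, 2, 8]])
print(colwell(N))                         # high P: a seasonally regular pond
```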

  14. Modeling Personalized Email Prioritization: Classification-based and Regression-based Approaches

    Energy Technology Data Exchange (ETDEWEB)

    Yoo S.; Yang, Y.; Carbonell, J.

    2011-10-24

Email overload, even after spam filtering, presents a serious productivity challenge for busy professionals and executives. One solution is automated prioritization of incoming emails to ensure that the most important are read and processed quickly, while others are processed later, as and if time permits, in declining priority levels. This paper presents a study of machine learning approaches to email prioritization into discrete levels, comparing ordinal regression versus classifier cascades. Given the ordinal nature of discrete email priority levels, SVM ordinal regression would be expected to perform well, but surprisingly a cascade of SVM classifiers significantly outperforms ordinal regression for email prioritization. In contrast, SVM regression performs well, better than classifiers, on selected UCI data sets. This unexpected performance inversion is analyzed and results are presented, providing core functionality for email prioritization systems.
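    A sketch of the classifier-cascade side of the comparison, with one binary SVM per threshold "priority > k" in the spirit of Frank and Hall's ordinal decomposition; the features and labels are synthetic stand-ins for the per-user email representations used in the paper:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                       # email feature vectors
y = np.clip((X[:, 0] * 1.5 + 2).astype(int), 0, 4)   # priority levels 0..4

# One binary classifier per ordinal threshold: does priority exceed k?
cascade = [SVC(probability=True).fit(X, (y > k).astype(int)) for k in range(4)]

def predict_priority(x):
    p_gt = np.array([c.predict_proba(x[None])[0, 1] for c in cascade])
    # P(y = k) from successive threshold probabilities:
    p_level = -np.diff(np.concatenate(([1.0], p_gt, [0.0])))
    return int(np.argmax(p_level))

print(predict_priority(X[0]), y[0])
```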

  15. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    Science.gov (United States)

    Shapiro, B.; Jin, Q.

    2015-12-01

Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted the observations of previous experiments well. In comparison, traditional methods of dynamic FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
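    The rate-law idea can be sketched as a Monod kinetic term damped by a thermodynamic factor; one common form of such a factor is used below, and all parameter values are illustrative rather than the calibrated M. barkeri values. The resulting rate would then bound the substrate exchange flux handed to the FBA problem:

```python
import numpy as np

R, T = 8.314e-3, 298.15                  # gas constant (kJ/mol/K), temperature (K)

def respiration_rate(S, k_max=1.0, K_S=0.5, dG_cat=-30.0, dG_cons=15.0, chi=2.0):
    """Revised Monod rate: kinetic term times thermodynamic factor.

    S: substrate concentration (mM); dG_cat: energy of the catabolic reaction
    (kJ/mol); dG_cons: energy conserved (e.g. as ATP) per reaction turnover.
    """
    kinetic = S / (K_S + S)
    thermo = 1.0 - np.exp((dG_cat + dG_cons) / (chi * R * T))  # -> 0 near equilibrium
    return k_max * kinetic * max(thermo, 0.0)                  # e.g. mmol/gDW/h

uptake = respiration_rate(S=2.0)
# The computed rate would then constrain the FBA problem, for example with a
# COBRA-style bound on the acetate exchange flux (hypothetical reaction id):
#   model.reactions.get_by_id("EX_ac_e").lower_bound = -uptake
```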

  16. Problem-Based Learning Model Used to Scientific Approach Based Worksheet for Physics to Develop Senior High School Students Characters

    Science.gov (United States)

    Yulianti, D.

    2017-04-01

The purpose of this study is to explore the application of the Problem-Based Learning (PBL) model aided with a scientific approach and character-integrated physics worksheets (LKS). Another purpose is to investigate the increase in cognitive and psychomotor learning outcomes and to assess the character development of students. The method used in this study was a quasi-experiment. The instruments were observation and a cognitive test. The worksheets can improve students' cognitive and psychomotor learning outcomes. Improvements in the cognitive learning results of students who learned using the worksheets are higher than those of students who learned without them. The LKS can also develop the students' character.

  17. APPROACH TO SYNTHESIS OF PASSIVE INFRARED DETECTORS BASED ON QUASI-POINT MODEL OF QUALIFIED INTRUDER

    Directory of Open Access Journals (Sweden)

    I. V. Bilizhenko

    2017-01-01

Subject of Research. The paper deals with the synthesis of passive infrared (PIR) detectors with enhanced capability for detecting a qualified intruder who uses different types of detection countermeasures: the choice of a specific movement direction and disguise in the infrared band. Methods. We propose an approach based on a quasi-point model of the qualified intruder. It includes: separation of the model's priority parameters, formation of partial detection patterns adapted to those parameters, and multi-channel signal processing. Main Results. A quasi-point model of the qualified intruder consisting of different fragments was suggested. The power density difference was used for estimating the model parameters. Criteria were formulated for the choice of detection pattern parameters on the basis of the model parameters. A pyroelectric sensor with nine sensitive elements was applied to increase the information content of the signal. Multi-channel processing with multiple partial detection patterns was proposed, optimized for detection of the intruder's specific movement direction. Practical Relevance. The developed functional device diagram can be realized both in hardware and software and is applicable as one of the detection channels for dual-technology passive infrared and microwave detectors.

  18. Energy demand projections based on an uncertain dynamic system modeling approach

    International Nuclear Information System (INIS)

    Dong, S.

    2000-01-01

Today, China has become the world's second largest source of CO2 pollution. Owing to coal-based energy consumption, it is estimated that 85-90% of China's SO2 and CO2 emissions result from coal use. With high economic growth and increasing environmental concerns, China's energy consumption in the next few decades has become an issue of active concern. Forecasting energy demand over long periods, however, is becoming more complex and uncertain. It is believed that economic and energy systems are chaotic and nonlinear. Traditional linear system modeling, used mostly in energy demand forecasts, therefore, is not a useful approach. In view of uncertainty and imperfect information about future economic growth and energy development, an uncertain dynamic system model, which has the ability to incorporate and absorb the nature of an uncertain system with imperfect or incomplete information, is developed. Using the model, a forecast of energy demand in the next 25 years is provided. The model predicts that China's energy demand in 2020 will be about 2,700-3,000 Mtce and coal demand 3,500 Mt, increasing by 128% and 154%, respectively, compared with 1995 levels.

  19. Forward and Reverse Process Models for the Squeeze Casting Process Using Neural Network Based Approaches

    Directory of Open Access Journals (Sweden)

    Manjunath Patel Gowdru Chandrashekarappa

    2014-01-01

The present research work is focused on developing an intelligent system to establish the input-output relationship utilizing forward and reverse mappings of artificial neural networks. Forward mapping aims at predicting the density and secondary dendrite arm spacing (SDAS) from a known set of squeeze casting process parameters, such as time delay, pressure duration, squeeze pressure, pouring temperature, and die temperature. An attempt is also made to meet the industrial requirement of developing a reverse model to predict the recommended squeeze casting parameters for a desired density and SDAS. Two different neural network based approaches have been proposed to carry out the said task, namely, the back propagation neural network (BPNN) and the genetic algorithm neural network (GA-NN). The batch mode of training is employed for both supervised learning networks and requires a large amount of training data. The required training data are generated artificially at random using regression equations derived from real experiments carried out earlier by the same authors. The performances of the BPNN and GA-NN models are compared with each other and with regression for ten test cases. The results show that both models are capable of making better predictions and can be used effectively on the shop floor for the selection of the most influential parameters for the desired outputs.

  20. Computationally efficient and flexible modular modelling approach for river and urban drainage systems based on surrogate conceptual models

    Science.gov (United States)

    Wolfs, Vincent; Willems, Patrick

    2015-04-01

Water managers rely increasingly on mathematical simulation models that represent individual parts of the water system, such as the river, sewer system or waste water treatment plant. The current evolution towards integral water management requires the integration of these distinct components, leading to an increased model scale and scope. Besides this growing model complexity, certain applications have gained interest and importance, such as uncertainty and sensitivity analyses, auto-calibration of models and real time control. All these applications share the need for models with a very limited calculation time, either for performing a large number of simulations, or for a long term simulation followed by statistical post-processing of the results. The use of the commonly applied detailed models that solve (part of) the de Saint-Venant equations is infeasible for these applications or such integrated modelling for several reasons, chief among them a too long simulation time and the inability to couple submodels made in different software environments. Instead, practitioners must use simplified models for these purposes. These models are characterized by empirical relationships and sacrifice model detail and accuracy for increased computational efficiency. The presented research discusses the development of a flexible integral modelling platform that complies with the following three key requirements: (1) include a modelling approach for water quantity predictions for rivers, floodplains, sewer systems and rainfall runoff routing that requires a minimal calculation time; (2) a fast and semi-automatic model configuration, thereby making maximum use of data from existing detailed models and measurements; (3) have a calculation scheme based on open source code to allow for future extensions or coupling with other models. First, a novel and flexible modular modelling approach based on the storage cell concept was developed. This approach divides each
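    As a minimal illustration of the conceptual building blocks such a platform relies on, the sketch below routes a hydrograph through a cascade of linear storage cells (dS/dt = inflow - outflow, with outflow = S/k); the cell count and recession constant are arbitrary, whereas real surrogate models are calibrated against the detailed hydrodynamic models:

```python
import numpy as np

def route(inflow, n_cells=3, k=2.0, dt=0.25):
    """Route an inflow hydrograph through a cascade of linear storage cells."""
    S = np.zeros(n_cells)
    out = np.zeros_like(inflow)
    for t, q_in in enumerate(inflow):
        for c in range(n_cells):
            q_out = S[c] / k                  # conceptual storage-outflow relation
            S[c] += (q_in - q_out) * dt       # water balance of the cell
            q_in = q_out                      # feeds the next cell downstream
        out[t] = q_in
    return out

hydrograph = np.concatenate([np.linspace(0, 10, 20), np.linspace(10, 0, 60)])
routed = route(hydrograph)                    # attenuated, delayed outflow
```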

  1. Towards a model-based development approach for wireless sensor-actuator network protocols

    DEFF Research Database (Denmark)

    Kumar S., A. Ajith; Simonsen, Kent Inge

    2014-01-01

Model-Driven Software Engineering (MDSE) is a promising approach for the development of applications, and has been well adopted in the embedded applications domain in recent years. Wireless Sensor Actuator Networks consist of resource-constrained hardware and platform-specific operating systems … induced due to manual translations. With the use of formal semantics in the modeling approach, we can further ensure the correctness of the source model by means of verification. Also, with the use of network simulators and formal modeling tools, we obtain a verified and validated model to be used …

  2. A tractable DDN-POMDP Approach to Affective Dialogue Modeling for General Probabilistic Frame-based Dialogue Systems

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.; Poel, Mannes; Nijholt, Antinus; Zwiers, Jakob

    2006-01-01

    We propose a new approach to developing a tractable affective dialogue model for general probabilistic frame-based dialogue systems. The dialogue model, based on the Partially Observable Markov Decision Process (POMDP) and the Dynamic Decision Network (DDN) techniques, is composed of two main parts,

  3. A tractable DDN-POMDP Approach to Affective Dialogue Modeling for General Probabilistic Frame-based Dialogue Systems

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.; Poel, Mannes; Nijholt, Antinus; Zwiers, Jakob; Traum, D.; Alexandersson, J.; Jonsson, A.; Zukerman, I.

    2007-01-01

    We propose a new approach to developing a tractable affective dialogue model for general probabilistic frame-based dialogue systems. The dialogue model, based on the Partially Observable Markov Decision Process (POMDP) and the Dynamic Decision Network (DDN) techniques, is composed of two main parts,

  4. A Computational Analysis of Psychopathy Based on a Network-Oriented Modeling Approach

    NARCIS (Netherlands)

    van Dijk, Freke; Treur, J.

    2018-01-01

    In this paper a way to analyse psychopathy computationally is explored. This is done by creating and analysing a temporal-causal network model using a Network-Oriented Modeling approach. The network model was designed using knowledge from the field of Cognitive and Social Neuroscience and simulates

  5. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    Science.gov (United States)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based

  6. GIS based site and structure selection model for groundwater recharge: a hydrogeomorphic approach.

    Science.gov (United States)

    Vijay, Ritesh; Sohony, R A

    2009-10-01

The groundwater in India is facing a critical situation due to overexploitation, reduction in recharge potential by changes in land use and land cover, and improper planning and management. A groundwater development plan needs a large volume of multidisciplinary data from various sources. A geographic information system (GIS) based hydrogeomorphic approach can provide the appropriate platform for spatial analysis of diverse data sets for decision making in groundwater recharge. The paper presents the development of a GIS-based model to provide more accuracy in the identification and suitability analysis for delineating zones and locating suitable sites, with suggested structures, for artificial recharge. Satellite images were used to prepare the geomorphological and land use maps. For site selection, layers such as slope, surface infiltration, and order of drainage were generated and integrated in GIS using Weighted Index Overlay Analysis and Boolean logic. Similarly, for the identification of suitable structures, a complex matrix was programmed based on local climatic, topographic, hydrogeologic and land use conditions, as per the artificial recharge manual of the Central Ground Water Board, India. The GIS-based algorithm is implemented in a user-friendly way using Arc Macro Language on the Arc/Info platform.
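    A toy sketch of Weighted Index Overlay with a Boolean exclusion mask on tiny rasters; the layer names, class scores, weights and threshold are illustrative, not the study's calibrated scheme:

```python
import numpy as np

slope_score = np.array([[3, 2], [1, 3]])          # gentler slope -> higher score
infiltration_score = np.array([[2, 3], [2, 1]])   # from geomorphology / land use
drainage_score = np.array([[3, 3], [1, 2]])       # higher-order drainage nearby

weights = {"slope": 0.4, "infiltration": 0.35, "drainage": 0.25}
suitability = (weights["slope"] * slope_score
               + weights["infiltration"] * infiltration_score
               + weights["drainage"] * drainage_score)

exclude = np.array([[False, False], [True, False]])  # Boolean logic, e.g. built-up area
suitability = np.where(exclude, 0.0, suitability)
recharge_zone = suitability >= 2.5                   # candidate artificial-recharge cells
```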

  7. Tribocorrosion in pressurized high temperature water: a mass flow model based on the third body approach

    Energy Technology Data Exchange (ETDEWEB)

    Guadalupe Maldonado, S.

    2014-07-01

Pressurized water reactors (PWR) used for power generation are operated at elevated temperatures (280-300 °C) and under high pressure (120-150 bar). In addition to these harsh environmental conditions, some components of the PWR assemblies are subject to mechanical loading (sliding, vibration and impacts) leading to undesirable and hardly controllable material degradation phenomena. In such situations wear is determined by the complex interplay (tribocorrosion) between mechanical, material and physical-chemical phenomena. Tribocorrosion in PWR conditions is at present little understood and models need to be developed in order to predict component lifetime over several decades. The goal of this project, carried out in collaboration with the French company AREVA NP, is to develop a predictive model based on a mechanistic understanding of the tribocorrosion of specific PWR components (stainless steel control assemblies, stellite grippers). The approach taken here is to describe degradation in terms of electro-chemical and mechanical material flows (the third body concept of tribology) from the metal into the friction film (i.e. the oxidized film forming during rubbing on the metal surface) and from the friction film into the environment, instead of simple mass loss considerations. The project involves the establishment of mechanistic models describing the individual flows, based on ad-hoc tribocorrosion measurements operating at low temperature. The overall behaviour at high temperature and pressure is investigated using a dedicated tribometer (Aurore) including electrochemical control of the contact during rubbing. Physical laws describing the individual flows according to defined mechanisms, and as a function of defined physical parameters, were identified based on the obtained experimental results and on literature data. The physical laws were converted into mass flow rates and solved as a differential equation system by considering the mass balance in compartments

  8. An approach to the drone fleet survivability assessment based on a stochastic continuous-time model

    Science.gov (United States)

    Kharchenko, Vyacheslav; Fesenko, Herman; Doukas, Nikos

    2017-09-01

An approach and an algorithm for drone fleet survivability assessment based on a stochastic continuous-time model are proposed. The input data are the number of drones, the drone fleet redundancy coefficient, the drone stability and restoration rates, the limit deviation from the norms of drone fleet recovery, the drone fleet operational availability coefficient, the probability of failure-free drone operation, and the time needed for the drone fleet to perform the required tasks. Ways of improving the survivability of a recoverable drone fleet, taking into account factors of system accidents, are suggested. Dependencies of the drone fleet survivability rate on both the drone stability and the number of drones are analysed.
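    One plausible reading of such a stochastic continuous-time model is a birth-death Markov chain over the number of failed drones, with the fleet mission-capable while at least k of n drones are operational. The sketch below uses this reading with invented rates; it is an illustration, not the authors' exact formulation:

```python
import numpy as np
from scipy.linalg import expm

n, k = 8, 6                    # fleet size, minimum drones required for the task
lam, mu = 0.05, 0.5            # per-drone failure rate, restoration rate (1/h)

Q = np.zeros((n + 1, n + 1))   # generator matrix over the failed-drone count
for i in range(n + 1):
    if i < n:
        Q[i, i + 1] = (n - i) * lam   # one more drone fails
    if i > 0:
        Q[i, i - 1] = mu              # one drone is restored
    Q[i, i] = -Q[i].sum()

p0 = np.zeros(n + 1); p0[0] = 1.0     # start with the full fleet operational
for t in (1.0, 10.0, 100.0):
    p_t = p0 @ expm(Q * t)            # transient state distribution at time t
    print(f"t={t:6.1f} h  P(mission-capable) = {p_t[:n - k + 1].sum():.4f}")
```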

  9. Bayesian model-based approach for developing a river water quality index

    Science.gov (United States)

    Ali, Zalina Mohd; Ibrahim, Noor Akma; Mengersen, Kerrie; Shitan, Mahendran; Juahir, Hafizan

    2014-09-01

Six main pollutants have previously been identified by expert opinion to determine river condition in Malaysia. The pollutants are Dissolved Oxygen (DO), Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD), Suspended Solids (SS), potential of Hydrogen (pH) and Ammonia (AN). The selected variables, together with their respective weights, have been applied to calculate the water quality index of all rivers in Malaysia. However, the relative weights established in the DOE-WQI formula are subjective in nature and not unanimously agreed upon, as indicated by the different weights proposed for the same variables by various panels of experts. Focusing on the Langat River, a Bayesian model-based approach was introduced for the first time in this study to obtain new objective relative weights. The new weights used in the WQI calculation are shown to be capable of capturing similar distributions in water quality compared with the existing DOE-WQI.
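    The idea of replacing fixed expert weights with data-derived ones can be sketched with a conjugate Bayesian linear regression of overall quality ratings on the six sub-indices; the synthetic data, prior and noise variances below are stand-ins for the Langat River records and the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(0, 100, size=(n, 6))            # sub-indices: DO, BOD, COD, SS, pH, AN
w_true = np.array([0.22, 0.19, 0.16, 0.16, 0.12, 0.15])
y = X @ w_true + rng.normal(0, 2.0, n)          # observed overall quality ratings

sigma2, tau2 = 4.0, 1.0                         # noise and prior variances (assumed)
A = X.T @ X + (sigma2 / tau2) * np.eye(6)       # Gaussian prior w ~ N(0, tau2 I)
w_post_mean = np.linalg.solve(A, X.T @ y)       # posterior mean of the weight vector
w_post_cov = sigma2 * np.linalg.inv(A)          # posterior covariance (uncertainty)
print(np.round(w_post_mean, 3))
```

    The posterior spread makes explicit how strongly the data constrain each weight, which is precisely what a fixed expert-assigned weight cannot convey.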

  10. A Hybrid Sensitivity Analysis Approach for Agent-based Disease Spread Models

    Energy Technology Data Exchange (ETDEWEB)

    Pullum, Laura L [ORNL]; Cui, Xiaohui [New York Institute of Technology (NYIT)]

    2012-01-01

    Agent-based models (ABM) have been widely deployed in different fields for studying the collective behavior of large numbers of interacting agents. Of particular interest lately is the application of agent-based and hybrid models to epidemiology, specifically Agent-based Disease Spread Models (ABDSM). Validation (one aspect of the means to achieve dependability) of ABDSM simulation models is extremely important. It ensures that the right model has been built and lends confidence to the use of that model to inform critical decisions. In this report, we describe our preliminary efforts in ABDSM validation by using hybrid model fusion technology.

  11. A Risk-based Model Predictive Control Approach to Adaptive Interventions in Behavioral Health.

    Science.gov (United States)

    Zafra-Cabeza, Ascensión; Rivera, Daniel E; Collins, Linda M; Ridao, Miguel A; Camacho, Eduardo F

    2011-07-01

    This paper examines how control engineering and risk management techniques can be applied in the field of behavioral health through their use in the design and implementation of adaptive behavioral interventions. Adaptive interventions are gaining increasing acceptance as a means to improve prevention and treatment of chronic, relapsing disorders, such as abuse of alcohol, tobacco, and other drugs, mental illness, and obesity. A risk-based Model Predictive Control (MPC) algorithm is developed for a hypothetical intervention inspired by Fast Track, a real-life program whose long-term goal is the prevention of conduct disorders in at-risk children. The MPC-based algorithm decides on the appropriate frequency of counselor home visits, mentoring sessions, and the availability of after-school recreation activities by relying on a model that includes identifiable risks, their costs, and the cost/benefit assessment of mitigating actions. MPC is particularly suited for the problem because of its constraint-handling capabilities, and its ability to scale to interventions involving multiple tailoring variables. By systematically accounting for risks and adapting treatment components over time, an MPC approach as described in this paper can increase intervention effectiveness and adherence while reducing waste, resulting in advantages over conventional fixed treatment. A series of simulations are conducted under varying conditions to demonstrate the effectiveness of the algorithm.

  12. Another Look at the Relationship Between Accident- and Encroachment-Based Approaches to Run-Off-the-Road Accidents Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Miaou, Shaw-Pin

    1997-08-01

    The purpose of this study was to look for ways to combine the strengths of both approaches in roadside safety research. The specific objectives were (1) to present the encroachment-based approach in a more systematic and coherent way so that its limitations and strengths can be better understood from both statistical and engineering standpoints, and (2) to apply the analytical and engineering strengths of the encroachment-based thinking to the formulation of mean functions in accident-based models.

  13. Green roof rainfall-runoff modelling: is the comparison between conceptual and physically based approaches relevant?

    Science.gov (United States)

    Versini, Pierre-Antoine; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2017-04-01

    Green roofs are commonly considered efficient tools to mitigate urban runoff, as they can store precipitation and consequently provide retention and detention performance. Designed as a compromise between water holding capacity, weight and hydraulic conductivity, their substrate is usually an artificial medium differing significantly from a traditional soil. Many models have been developed to assess the hydrological performance of green roofs. Classified into two categories (conceptual and physically based), they are usually applied to reproduce the discharge of a particular monitored green roof considered as homogeneous. Although the resulting simulations can be satisfactory, the question of the robustness and consistency of the calibrated parameters is often not addressed. Here, a modelling framework has been developed to assess the efficiency and robustness of both modelling approaches (conceptual and physically based) in reproducing green roof hydrological behaviour. The SWMM and VS2DT models have been used for this purpose. This work also benefits from an experimental setup where several green roofs differing in substrate thickness and vegetation cover are monitored. Based on the data collected for several rainfall events, it has been studied how the calibrated parameters are effectively linked to the roofs' physical properties and how they can vary from one green roof configuration to another. Although both models reproduce the observed discharges correctly in most cases, their calibrated parameters exhibit a high inconsistency. For a given green roof configuration, these parameters can vary significantly from one rainfall event to another, even though they are supposed to be linked to the green roof characteristics (roughness and residual moisture content, for instance). They can also differ from one green roof configuration to another although the implemented substrate is the same. Finally, it appears very difficult to find any

  14. The Quest for Evidence for Proton Therapy: Model-Based Approach and Precision Medicine

    Energy Technology Data Exchange (ETDEWEB)

    Widder, Joachim, E-mail: j.widder@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Lambin, Philippe [Department of Radiation Oncology, School for Oncology and Developmental Biology (GROW), Maastricht University Medical Center, Maastricht (Netherlands); Marijnen, Corrie A.M. [Department of Radiation Oncology, Leiden University Medical Center, Leiden (Netherlands); Pignol, Jean-Philippe [Department of Radiation Oncology, Erasmus Medical Center Cancer Institute, Rotterdam (Netherlands); Rasch, Coen R. [Department of Radiation Oncology, Academic Medical Center, Amsterdam (Netherlands); Slotman, Ben J. [Department of Radiation Oncology, VU Medical Center, Amsterdam (Netherlands); Verheij, Marcel [Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam (Netherlands); Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2016-05-01

    Purpose: Reducing dose to normal tissues is the advantage of protons versus photons. We aimed to describe a method for translating this reduction into a clinically relevant benefit. Methods and Materials: Dutch scientific and health care governance bodies have recently issued landmark reports regarding generation of relevant evidence for new technologies in health care including proton therapy. An approach based on normal tissue complication probability (NTCP) models has been adopted to select patients who are most likely to experience fewer (serious) adverse events achievable by state-of-the-art proton treatment. Results: By analogy with biologically targeted therapies, the technology needs to be tested in enriched cohorts of patients exhibiting the decisive predictive marker: difference in normal tissue dosimetric signatures between proton and photon treatment plans. Expected clinical benefit is then estimated by virtue of multifactorial NTCP models. In this sense, high-tech radiation therapy falls under precision medicine. As a consequence, randomizing nonenriched populations between photons and protons is predictably inefficient and likely to produce confusing results. Conclusions: Validating NTCP models in appropriately composed cohorts treated with protons should be the primary research agenda leading to urgently needed evidence for proton therapy.

  15. Risk assessment of groundwater contamination: a multilevel fuzzy comprehensive evaluation approach based on DRASTIC model.

    Science.gov (United States)

    Zhang, Qiuwen; Yang, Xiaohong; Zhang, Yan; Zhong, Ming

    2013-01-01

    Groundwater contamination is a serious threat to water supply. Risk assessment of groundwater contamination is an effective way to protect the safety of groundwater resources. Groundwater is a complex and fuzzy system with many uncertainties, impacted by different geological and hydrological factors. In order to deal with the uncertainty in the risk assessment of groundwater contamination, we propose an approach integrating the analytic hierarchy process and fuzzy comprehensive evaluation. Firstly, the risk factors of groundwater contamination are identified by the source-pathway-receptor-consequence method, and a corresponding index system for risk assessment based on the DRASTIC model is established. Due to the complexity of the transitions between the possible pollution risks and the uncertainties of the factors, the analytic hierarchy process is applied to determine the weight of each factor, and fuzzy set theory is adopted to calculate the membership degree of each factor. Finally, a case study is presented to illustrate and test the methodology. It is concluded that the proposed approach integrates the advantages of both the analytic hierarchy process and fuzzy comprehensive evaluation, providing a more flexible and reliable way to deal with the linguistic and mechanism uncertainty in groundwater contamination assessment without losing important information.
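
    A compact Python sketch of the two computational steps named above: AHP weights from a pairwise comparison matrix (its principal eigenvector) followed by a fuzzy comprehensive evaluation B = w·R. The judgement matrix and membership values are invented for illustration:

      import numpy as np

      # Hypothetical pairwise comparison matrix for three risk factors; a_ij is
      # the judged importance of factor i relative to factor j.
      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      i = np.argmax(eigvals.real)
      w = eigvecs[:, i].real
      w = w / w.sum()                        # AHP weights: principal eigenvector
      ci = (eigvals.real[i] - 3) / (3 - 1)   # consistency index (check vs RI=0.58)

      # Illustrative fuzzy membership matrix R: factor x grade (low, med, high).
      R = np.array([[0.2, 0.5, 0.3],
                    [0.6, 0.3, 0.1],
                    [0.1, 0.4, 0.5]])

      B = w @ R                              # fuzzy comprehensive evaluation
      print("weights:", w.round(3), "CI:", round(ci, 3))
      print("grade memberships:", B.round(3),
            "->", ["low", "medium", "high"][int(B.argmax())])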

  16. A genetic-based neuro-fuzzy approach for modeling and control of dynamical systems.

    Science.gov (United States)

    Farag, W A; Quintana, V H; Lambert-Torres, G

    1998-01-01

    Linguistic modeling of complex irregular systems constitutes the heart of many control and decision-making systems, and fuzzy logic represents one of the most effective algorithms for building such linguistic models. In this paper, a linguistic (qualitative) modeling approach is proposed. The approach combines the merits of fuzzy logic theory, neural networks, and genetic algorithms (GAs). The proposed model is presented in a fuzzy-neural network (FNN) form which can handle both quantitative (numerical) and qualitative (linguistic) knowledge. The learning algorithm of the FNN is composed of three phases. The first phase is used to find the initial membership functions of the fuzzy model. In the second phase, a new algorithm is developed and used to extract the linguistic fuzzy rules. In the third phase, a multiresolutional dynamic genetic algorithm (MRD-GA) is proposed and used for optimized tuning of the membership functions of the proposed model. Two well-known benchmarks are used to evaluate the performance of the proposed modeling approach and compare it with other modeling approaches.

  17. Prediction of paraquat exposure and toxicity in clinically ill poisoned patients: a model based approach.

    Science.gov (United States)

    Wunnapuk, Klintean; Mohammed, Fahim; Gawarammana, Indika; Liu, Xin; Verbeeck, Roger K; Buckley, Nicholas A; Roberts, Michael S; Musuamba, Flora T

    2014-10-01

    Paraquat poisoning is a medical problem in many parts of Asia and the Pacific. The mortality rate is extremely high, as there is no effective treatment. We analyzed data collected during an ongoing cohort study on self-poisoning and from a randomized controlled trial assessing the efficacy of immunosuppressive therapy in hospitalized paraquat-intoxicated patients. The aim of this analysis was to characterize the toxicokinetics and toxicodynamics of paraquat in this population. A non-linear mixed effects approach was used to perform a toxicokinetic/toxicodynamic population analysis in a cohort of 78 patients. The paraquat plasma concentrations were best fitted by a two-compartment toxicokinetic structural model with first-order absorption and first-order elimination. Changes in renal function were used for the assessment of paraquat toxicodynamics. The estimates of the toxicokinetic parameters for the apparent clearance, the apparent volume of distribution and the elimination half-life were 1.17 l h⁻¹, 2.4 l kg⁻¹ and 87 h, respectively. Renal function, namely creatinine clearance, was the most significant covariate explaining between-patient variability in paraquat clearance. The model suggested that a reduction in paraquat clearance occurred within 24 to 48 h after poison ingestion, and that afterwards the clearance was constant over time. The model estimated that a paraquat concentration of 429 μg l⁻¹ caused 50% of maximum renal toxicity. The immunosuppressive therapy tested during this study was associated with only an 8% improvement of renal function. The developed models may be useful as prognostic tools to predict patient outcome based on patient characteristics on admission and to assess drug effectiveness during antidote drug development.
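
    A sketch of such a two-compartment model with first-order absorption, using the reported point estimates (CL 1.17 l h⁻¹, central V 2.4 l kg⁻¹, EC50 429 μg l⁻¹) and hypothetical values for the parameters the record does not give (ka, peripheral volume, inter-compartmental clearance). It is an illustration of the structure, not the published population model:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Reported estimates: CL ~ 1.17 l/h, central V ~ 2.4 l/kg, EC50 ~ 429 ug/l.
      # ka, peripheral volume V2 and inter-compartmental clearance Q are NOT in
      # the record; the values below are hypothetical.
      wt = 60.0                                  # kg, assumed body weight
      CL, V1, ka, V2, Q = 1.17, 2.4 * wt, 1.0, 100.0, 5.0
      EC50 = 429.0                               # ug/l

      def rhs(t, y):
          a_gut, a1, a2 = y                      # amounts (mg)
          c1, c2 = a1 / V1, a2 / V2
          return [-ka * a_gut,
                  ka * a_gut - CL * c1 - Q * (c1 - c2),
                  Q * (c1 - c2)]

      sol = solve_ivp(rhs, (0.0, 168.0), [3000.0, 0.0, 0.0], dense_output=True)
      t = np.linspace(0.0, 168.0, 500)
      conc = sol.sol(t)[1] / V1 * 1000.0         # plasma conc, mg/l -> ug/l
      effect = conc / (EC50 + conc)              # Emax-type renal toxicity fraction
      print(f"Cmax = {conc.max():.0f} ug/l, peak effect = {effect.max():.2f}")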

  18. Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach

    Science.gov (United States)

    Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil

    2016-01-01

    Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current system mode. The great advantage of structural model decomposition is that (i) it allows residuals to be designed that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.

  19. The Methodical Approach to Assessment of Enterprise Activity on the Basis of its Models, Based on the Balanced Scorecard

    Directory of Open Access Journals (Sweden)

    Minenkova Olena V.

    2017-12-01

    Full Text Available The article proposes a methodical approach to the assessment of enterprise activity on the basis of models built on the balanced scorecard. The components of the methodical approach are presented: tasks, input information, the list of methods and models, and the results. Implementation of this methodical approach improves management and increases the results of enterprise activity. The place of assessment models in the management of enterprise activity and in the formation of managerial decisions is defined. Recommendations on decision-making procedures for increasing the efficiency of the enterprise are provided.

  20. A NURBS-based finite element model applied to geometrically nonlinear elastodynamics using a corotational approach

    KAUST Repository

    Espath, L. F R

    2015-02-03

    A numerical model to deal with nonlinear elastodynamics involving large rotations within the finite element framework based on a NURBS (Non-Uniform Rational B-Spline) basis is presented. A comprehensive kinematical description using a corotational approach and an orthogonal tensor given by the exact polar decomposition is adopted. The state equation is written in terms of corotational variables according to the hypoelastic theory, relating the Jaumann derivative of the Cauchy stress to the Eulerian strain rate. The generalized-α (Gα) method and the Generalized Energy-Momentum Method with an additional parameter (GEMM+ξ) are employed in order to obtain a stable and controllable dissipative time-stepping scheme with algorithmic conservative properties for nonlinear dynamic analyses. The main contribution is to show that the energy-momentum conservation properties and numerical stability may be improved once a NURBS-based FEM is used in the spatial discretization. It is also shown that high continuity can postpone numerical instability when GEMM+ξ with a consistent mass matrix is employed; likewise, increasing the continuity class yields a decrease in the numerical dissipation. A parametric study is carried out in order to show the stability and energy budget in terms of several properties such as continuity class, spectral radius, and lumped as well as consistent mass matrices.
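
    The record names the generalized-α scheme but gives no equations. Below is a minimal scalar implementation of generalized-α time stepping (Chung-Hulbert parametrisation by the spectral radius ρ∞) for m·a + c·v + k·u = f(t), showing how the ρ∞ knob controls numerical dissipation; it is a generic sketch, not the paper's NURBS formulation:

      import numpy as np

      def generalized_alpha(m, c, k, f, u0, v0, dt, n_steps, rho_inf=0.8):
          """Generalized-alpha stepping for the scalar m*a + c*v + k*u = f(t).

          rho_inf = 1 gives no numerical dissipation; smaller values damp
          the high frequencies more strongly.
          """
          am = (2.0 * rho_inf - 1.0) / (rho_inf + 1.0)
          af = rho_inf / (rho_inf + 1.0)
          beta = 0.25 * (1.0 - am + af) ** 2
          gamma = 0.5 - am + af
          u, v = u0, v0
          a = (f(0.0) - c * v - k * u) / m      # consistent initial acceleration
          hist = [u]
          for n in range(n_steps):
              tf = (1.0 - af) * (n + 1) * dt + af * n * dt
              u_p = u + dt * v + dt**2 * (0.5 - beta) * a   # Newmark predictors
              v_p = v + dt * (1.0 - gamma) * a
              lhs = (1.0 - am) * m + (1.0 - af) * (gamma * dt * c + beta * dt**2 * k)
              rhs = (f(tf) - am * m * a
                     - c * ((1.0 - af) * v_p + af * v)
                     - k * ((1.0 - af) * u_p + af * u))
              a = rhs / lhs
              u = u_p + beta * dt**2 * a
              v = v_p + gamma * dt * a
              hist.append(u)
          return np.array(hist)

      # Free vibration of an undamped unit-mass oscillator (period 1 s).
      u = generalized_alpha(1.0, 0.0, 4 * np.pi**2, lambda t: 0.0, 1.0, 0.0, 0.01, 200)
      print(u[[0, 50, 100, 150, 200]].round(4))   # roughly cosine samples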

  1. Comparing clustering models in bank customers: Based on Fuzzy relational clustering approach

    Directory of Open Access Journals (Sweden)

    Ayad Hendalianpour

    2016-11-01

    Full Text Available Clustering is an extremely useful means to explore data structures and has been employed in many fields. It organizes a set of objects into similar groups called clusters: the objects within one cluster are highly similar to each other and dissimilar to the objects in other clusters. The K-means, C-means, fuzzy C-means and kernel K-means algorithms are the most popular clustering algorithms for their easy implementation and fast operation, but in some cases these algorithms cannot be used. In this regard, this paper presents a hybrid model for customer clustering that is applied to five banks in Fars Province, Shiraz, Iran. In this model, the fuzzy relation among customers is defined using their features, described by linguistic and quantitative variables. The bank customers are then grouped according to the K-means, C-means, fuzzy C-means and kernel K-means algorithms and the proposed Fuzzy Relation Clustering (FRC) algorithm. The aim of this paper is to show how to choose the best clustering algorithm based on density-based clustering and to present a new clustering algorithm for both crisp and fuzzy variables. Finally, we apply the proposed approach to five customer-segmentation datasets from the banks. The results show the accuracy and high performance of FRC compared with the other clustering methods.
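
    For readers unfamiliar with the fuzzy side of the comparison, here is a plain fuzzy C-means implementation in Python (the standard algorithm, not the paper's FRC), run on synthetic two-feature "customers":

      import numpy as np

      def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
          """Standard fuzzy C-means: returns centres and membership matrix U."""
          rng = np.random.default_rng(seed)
          U = rng.dirichlet(np.ones(n_clusters), size=len(X))
          for _ in range(n_iter):
              Um = U ** m
              centres = (Um.T @ X) / Um.T.sum(axis=1, keepdims=True)
              d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
              inv = d ** (-2.0 / (m - 1.0))
              U = inv / inv.sum(axis=1, keepdims=True)
          return centres, U

      # Synthetic two-feature 'customers' (e.g. normalised income and balance).
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(loc, 0.5, size=(100, 2))
                     for loc in ([0, 0], [4, 0], [2, 3])])
      centres, U = fuzzy_c_means(X)
      print(centres.round(2))
      print("hard labels of first five customers:", U[:5].argmax(axis=1))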

  2. BClass: A Bayesian Approach Based on Mixture Models for Clustering and Classification of Heterogeneous Biological Data

    Directory of Open Access Journals (Sweden)

    Arturo Medrano-Soto

    2004-12-01

    Full Text Available Based on mixture models, we present a Bayesian method (called BClass) to classify biological entities (e.g. genes) when variables of quite heterogeneous nature are analyzed. Various statistical distributions are used to model the continuous/categorical data commonly produced by genetic experiments and large-scale genomic projects. We calculate the posterior probability of each entry belonging to each element (group) in the mixture. In this way, an original set of heterogeneous variables is transformed into a set of purely homogeneous characteristics represented by the probabilities of each entry belonging to the groups. The number of groups in the analysis is controlled dynamically by rendering the groups as 'alive' and 'dormant' depending upon the number of entities classified within them. Using standard Metropolis-Hastings and Gibbs sampling algorithms, we constructed a sampler to approximate posterior moments and grouping probabilities. Since this method does not require the definition of similarity measures, it is especially suitable for data mining and knowledge discovery in biological databases. We applied BClass to classify genes in RegulonDB, a database specialized in information about the transcriptional regulation of gene expression in the bacterium Escherichia coli. The classification obtained is consistent with current knowledge and allowed prediction of missing values for a number of genes. BClass is object-oriented and fully programmed in Lisp-Stat. The output grouping probabilities are analyzed and interpreted using graphical (dynamically linked) plots and query-based approaches. We discuss the advantages of using Lisp-Stat as a programming language as well as the problems we faced when the data volume increased exponentially due to the ever-growing number of genomic projects.
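
    BClass itself is written in Lisp-Stat and is not reproduced here; a loose Python analogue using a Gaussian mixture shows the core idea of turning heterogeneous measurements into homogeneous posterior membership probabilities:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Synthetic continuous 'gene features' drawn from three latent groups.
      X = np.vstack([rng.normal(mu, 0.7, size=(100, 4)) for mu in (-2.0, 0.0, 2.5)])

      gm = GaussianMixture(n_components=3, random_state=0).fit(X)
      post = gm.predict_proba(X)   # P(group | entity): the homogeneous representation
      print(post[:3].round(3))     # soft memberships of the first three entities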

  3. Transaction based approach

    Science.gov (United States)

    Hunka, Frantisek; Matula, Jiri

    2017-07-01

    A transaction-based approach is utilized in some business process modeling methodologies. Essential participants in these transactions are human beings, usually represented by the notion of an agent or actor role. Using a particular example, the paper describes the possibilities of the Design Engineering Methodology for Organizations (DEMO) and the Resource-Event-Agent (REA) methodology. Whereas the DEMO methodology can be regarded as a generic methodology with its foundation in the theory of Enterprise Ontology, the REA methodology is regarded as a domain-specific methodology and has its origin in accountancy systems. The result of comparing these approaches is that the DEMO methodology captures everything that happens in reality with good empirical evidence, whereas the REA methodology captures only changes connected with economic events. Economic events represent either a change of property rights to an economic resource, or the consumption or production of economic resources. This follows from the essence of economic events and their connection to economic resources.

  4. An Agent-Based Modeling Approach to Investigate Emergent Patterns in Ecological Systems

    NARCIS (Netherlands)

    Ellers, Jacintha; Hoogendoorn, Mark; Wendt, David

    2010-01-01

    Due to its suitability, agent-based modeling is more and more often being applied in the domain of Ecology. The development of an agent-based model for the Ecological domain is however very difficult. Usually, many parameters need to be set to appropriate values and such values are not always

  5. Model-based versus specific dosimetry in diagnostic context: Comparison of three dosimetric approaches

    Energy Technology Data Exchange (ETDEWEB)

    Marcatili, S., E-mail: sara.marcatili@inserm.fr; Villoing, D.; Mauxion, T.; Bardiès, M. [Inserm, UMR1037 CRCT, Toulouse F-31000, France and Université Toulouse III-Paul Sabatier, UMR1037 CRCT, Toulouse F-31000 (France); McParland, B. J. [Imaging Technology Group, GE Healthcare, Life Sciences, B22U The Grove Centre, White Lion Road, Amersham, England HP7 9LL (United Kingdom)

    2015-03-15

    Purpose: The dosimetric assessment of novel radiotracers represents a legal requirement in most countries. While the techniques for the computation of internal absorbed dose in a therapeutic context have made huge progress in recent years, in a diagnostic scenario the absorbed dose is usually extracted from model-based lookup tables, most often derived from International Commission on Radiological Protection (ICRP) or Medical Internal Radiation Dose (MIRD) Committee models. The level of approximation introduced by these models may impact the resulting dosimetry. The aim of this work is to establish whether a more refined approach to dosimetry can be implemented in nuclear medicine diagnostics, by analyzing a specific case. Methods: The authors calculated absorbed doses to various organs in six healthy volunteers administered with flutemetamol ({sup 18}F) injection. Each patient underwent from 8 to 10 whole body 3D PET/CT scans. This dataset was analyzed using a Monte Carlo (MC) application developed in-house using the toolkit GATE, which is capable of taking into account patient-specific anatomy and radiotracer distribution at the voxel level. The authors compared the absorbed doses obtained with GATE to those calculated with two commercially available software packages: OLINDA/EXM, and STRATOS implementing a dose voxel kernel convolution approach. Results: Absorbed doses calculated with GATE were higher than those calculated with OLINDA. The average ratio between GATE absorbed doses and OLINDA's was 1.38 ± 0.34 σ (from 0.93 to 2.23). The discrepancy was particularly high for the thyroid, with an average GATE/OLINDA ratio of 1.97 ± 0.83 σ for the six patients. Differences between STRATOS and GATE were found to be higher. The average ratio between GATE and STRATOS absorbed doses was 2.51 ± 1.21 σ (from 1.09 to 6.06). Conclusions: This study demonstrates how the choice of the absorbed dose calculation algorithm may introduce a bias when gamma radiations are of importance, as is

  6. Fuzzy-hybrid land vehicle driveline modelling based on a moving window subtractive clustering approach

    Science.gov (United States)

    Economou, J. T.; Knowles, K.; Tsourdos, A.; White, B. A.

    2011-02-01

    In this article, the fuzzy-hybrid modelling (FHM) approach is used and compared to the input-output Takagi-Sugeno (TS) system modelling approach, which correlates the drivetrain power flow equations with the vehicle dynamics. The output power relations were related to the drivetrain bounded efficiencies and also to the wheel slips. The model also relates the wheel-ground interactions, via suitable friction coefficient models, to the wheel slip profiles. The wheel slip made a significant contribution to the overall driveline system efficiency. The peak friction slip and peak coefficient of friction values are known a priori in the analysis. Lastly, the rigid body dynamical power has been verified through both simulation and experimental results. The mathematical analysis has been supported throughout the paper by experimental data for a specific electric robotic vehicle. The identification of the localised and input-output TS models for the fuzzy hybrid and the experimental data was performed using the subtractive clustering (SC) methodology. These results were also compared to a real-time TS SC approach operating on periodic time windows. This article concludes with the benefits of the real-time FHM method for the electric vehicle driveline, owing to the advantage of both the analytical TS sub-model and the physical system modelling for the remaining process, which can be clearly utilised for control purposes.

  7. Integrated Management Systems: a model based on a total quality approach

    OpenAIRE

    Dale, B G.

    2001-01-01

    Describes a model based on empirical research which provides the details of an integrated management system and takes into account existing and accepted definitions of quality management, environmental management, and occupational health and safety management systems. Claims that the model addresses the issues of scope and culture which none of the existing integrated management system models address. The model has been tested with a sample of members of the British Standards Society. Among t...

  8. An Agent-Based Modeling Approach for Determining Corn Stover Removal Rate and Transboundary Effects

    Science.gov (United States)

    Gan, Jianbang; Langeveld, J. W. A.; Smith, C. T.

    2014-02-01

    Bioenergy production involves different agents with potentially different objectives, and an agent's decision often has transboundary impacts on other agents along the bioenergy value chain. Understanding and estimating the transboundary impacts is essential to portraying the interactions among the different agents and in the search for the optimal configuration of the bioenergy value chain. We develop an agent-based model to mimic the decision making by feedstock producers and feedstock-to-biofuel conversion plant operators and propose multipliers (i.e., ratios of economic values accruing to different segments and associated agents in the value chain) for assessing the transboundary impacts. Our approach is generic and thus applicable to a variety of bioenergy production systems at different sites and geographic scales. We apply it to the case of producing ethanol using corn stover in Iowa, USA. The results from the case study indicate that stover removal rate is site specific and varies considerably with soil type, as well as other factors, such as stover price and harvesting cost. In addition, ethanol production using corn stover in the study region would have strong positive ripple effects, with the values of multipliers varying with greenhouse gas price and national energy security premium. The relatively high multiplier values suggest that a large portion of the value associated with corn stover ethanol production would accrue to the downstream end of the value chain instead of stover producers.

  9. Applying of an Ontology based Modeling Approach to Cultural Heritage Systems

    Directory of Open Access Journals (Sweden)

    POPOVICI, D.-M.

    2011-08-01

    Full Text Available Any virtual environment (VE) built in a classical way is dedicated to a very specific domain. Its modification, or even adaptation to another domain, requires expensive human intervention measured in time and money; the product, that is the VE, returns to the first phases of the development process. In a previous work we proposed an approach that combines domain ontologies and conceptual modeling to construct more accurate VEs. Our method is based on the description of the domain knowledge in a standard format and the assisted creation (using these pieces of knowledge) of the VE. This permits the explanation, within the virtual reality (VR) simulation, of the semantics of the whole context and of each object. This knowledge may then be transferred to the public users. In this paper we prove the effectiveness of our method on the construction process of a VE that simulates the organization of a Greek-Roman colony situated on the Black Sea coast and the economic and social activities of its people.

  10. Approaches to Modelling the Dynamical Activity of Brain Function Based on the Electroencephalogram

    Science.gov (United States)

    Liley, David T. J.; Frascoli, Federico

    The brain is arguably the quintessential complex system as indicated by the patterns of behaviour it produces. Despite many decades of concentrated research efforts, we remain largely ignorant regarding the essential processes that regulate and define its function. While advances in functional neuroimaging have provided welcome windows into the coarse organisation of the neuronal networks that underlie a range of cognitive functions, they have largely ignored the fact that behaviour, and by inference brain function, unfolds dynamically. Modelling the brain's dynamics is therefore a critical step towards understanding the underlying mechanisms of its functioning. To date, models have concentrated on describing the sequential organisation of either abstract mental states (functionalism, hard AI) or the objectively measurable manifestations of the brain's ongoing activity (rCBF, EEG, MEG). While the former types of modelling approach may seem to better characterise brain function, they do so at the expense of not making a definite connection with the actual physical brain. Of the latter, only models of the EEG (or MEG) offer a temporal resolution well matched to the anticipated temporal scales of brain (mental processes) function. This chapter will outline the most pertinent of these modelling approaches, and illustrate, using the electrocortical model of Liley et al, how the detailed application of the methods of nonlinear dynamics and bifurcation theory is central to exploring and characterising their various dynamical features. The rich repertoire of dynamics revealed by such dynamical systems approaches arguably represents a critical step towards an understanding of the complexity of brain function.

  11. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    Science.gov (United States)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human-dominated ecosystems the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications to forested trees over France, using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with the outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under the current climate. Depending on the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and direction of these differences depended on the correlations between the errors from both equations and were highest at higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong misestimation of the actual and future probability of presence. Beyond this selection bias, the SSDM we propose represents a crucial step to account for economic constraints on tree

  12. Dopamine selectively remediates 'model-based' reward learning: a computational approach.

    Science.gov (United States)

    Sharp, Madeleine E; Foerde, Karin; Daw, Nathaniel D; Shohamy, Daphna

    2016-02-01

    Patients with loss of dopamine due to Parkinson's disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from 'model-free' learning. The other, 'model-based' learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson's disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson's disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson's disease may be related to an inability to pursue reward based on complete representations of the environment.

  13. Modelling of cooperating robotized systems with the use of object-based approach

    Science.gov (United States)

    Foit, K.; Gwiazda, A.; Banas, W.; Sekala, A.; Hryniewicz, P.

    2015-11-01

    Today's robotized manufacturing systems are characterized by high efficiency. The emphasis is placed mainly on the simultaneous work of machines. This can manifest in many ways, the most spectacular of which is the cooperation of several robots working on the same part. Moreover, dual-arm robots that can mimic the manipulative skills of human hands have recently come into use. As a result, it is often hard to deal with situations where it is necessary not only to maintain sufficient precision, but also the coordination and proper sequence of movements of the individual robots' arms. The successful completion of such a task depends on the individual robot control systems and their respective programs, but also on well-functioning communication between the robot controllers. A major problem in the case of cooperating robots is the possibility of collision between particular links of the robots' kinematic chains. This is not a simple case, because the manufacturers of robotic systems do not disclose the details of their control algorithms, so such situations are hard to detect. Another problem in the cooperation of robots is how to inform the other units about the start or completion of parts of the task, so that the other robots can take further actions. This paper focuses on communication between cooperating robotic units, assuming that every robot is represented by an object-based model. This problem requires developing a form of communication protocol that the objects can use for collecting information about their environment. The approach presented in the paper is not limited to robots and could be used in a wider range of applications, for example in modelling a complete workcell or production line.

  14. Comparisons of node-based and element-based approaches of assigning bone material properties onto subject-specific finite element models.

    Science.gov (United States)

    Chen, G; Wu, F Y; Liu, Z C; Yang, K; Cui, F

    2015-08-01

    Subject-specific finite element (FE) models can be generated from computed tomography (CT) datasets of a bone. A key step is assigning material properties automatically onto finite element models, which remains a great challenge. This paper proposes a node-based assignment approach and also compares it with the element-based approach in the literature. Both approaches were implemented using ABAQUS. The assignment procedure is divided into two steps: generating the data file of the image intensity of a bone in a MATLAB program and reading the data file into ABAQUS via user subroutines. The node-based approach assigns the material properties to each node of the finite element mesh, while the element-based approach assigns the material properties directly to each integration point of an element. Both approaches are independent from the type of elements. A number of FE meshes are tested and both give accurate solutions; comparatively the node-based approach involves less programming effort. The node-based approach is also independent from the type of analyses; it has been tested on the nonlinear analysis of a Sawbone femur. The node-based approach substantially improves the level of automation of the assignment procedure of bone material properties. It is the simplest and most powerful approach that is applicable to many types of analyses and elements.
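
    The paper's ABAQUS subroutines are not shown in the record. The sketch below illustrates the node-based idea in Python: sample a CT volume at nodal coordinates by trilinear interpolation and map intensity to Young's modulus through a density power law. The HU-to-density map and the power-law constants are placeholders, since such calibrations are study-specific:

      import numpy as np
      from scipy.ndimage import map_coordinates

      def modulus_at_nodes(ct, spacing, nodes, a=6850.0, b=1.49):
          """Trilinearly sample a CT volume (Hounsfield units) at nodal
          coordinates and convert HU -> density -> Young's modulus.

          The HU-to-density map and E = a * rho**b are placeholders;
          real studies calibrate these relations.
          """
          idx = (nodes / np.asarray(spacing, dtype=float)).T   # mm -> voxel index
          hu = map_coordinates(ct, idx, order=1, mode='nearest')
          rho = np.clip(1.0 + hu / 1000.0, 0.01, None)         # g/cm^3, crude
          return a * rho ** b                                  # MPa

      ct = np.random.default_rng(0).uniform(0, 1500, size=(50, 50, 50))
      nodes = np.array([[10.0, 12.5, 30.0], [25.0, 25.0, 25.0]])
      print(modulus_at_nodes(ct, (1.0, 1.0, 1.0), nodes).round(0))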

  15. Approach and development strategy for an agent-based model of economic confidence.

    Energy Technology Data Exchange (ETDEWEB)

    Sprigg, James A.; Pryor, Richard J.; Jorgensen, Craig Reed

    2004-08-01

    We are extending the existing features of Aspen, a powerful economic modeling tool, and introducing new features to simulate the role of confidence in economic activity. The new model is built from a collection of autonomous agents that represent households, firms, and other relevant entities like financial exchanges and governmental authorities. We simultaneously model several interrelated markets, including those for labor, products, stocks, and bonds. We also model economic tradeoffs, such as decisions of households and firms regarding spending, savings, and investment. In this paper, we review some of the basic principles and model components and describe our approach and development strategy for emulating consumer, investor, and business confidence. The model of confidence is explored within the context of economic disruptions, such as those resulting from disasters or terrorist events.

  16. A tractable DDN-POMDP Approach to Affective Dialogue Modeling for General Probabilistic Frame-based Dialogue Systems

    OpenAIRE

    Bui Huu Trung, B.H.T.; Poel, Mannes; Nijholt, Antinus; Zwiers, Jakob; Traum, D.; Alexandersson, J.; Jonsson, A.; Zukerman, I.

    2007-01-01

    We propose a new approach to developing a tractable affective dialogue model for general probabilistic frame-based dialogue systems. The dialogue model, based on the Partially Observable Markov Decision Process (POMDP) and the Dynamic Decision Network (DDN) techniques, is composed of two main parts, the slot level dialogue manager and the global dialogue manager. Our implemented dialogue manager prototype can handle hundreds of slots; each slot might have many values. A first evaluation of th...

  17. A tractable DDN-POMDP Approach to Affective Dialogue Modeling for General Probabilistic Frame-based Dialogue Systems

    OpenAIRE

    Bui Huu Trung, B.H.T.; Poel, Mannes; Nijholt, Antinus; Zwiers, Jakob

    2006-01-01

    We propose a new approach to developing a tractable affective dialogue model for general probabilistic frame-based dialogue systems. The dialogue model, based on the Partially Observable Markov Decision Process (POMDP) and the Dynamic Decision Network (DDN) techniques, is composed of two main parts, the slot level dialogue manager and the global dialogue manager. It has two new features: (1) being able to deal with a large number of slots and (2) being able to take into account some aspects o...

  18. Adoption Model of Falcataria-Based Farm Forestry: A Duration Analysis Approach

    Directory of Open Access Journals (Sweden)

    Evi Irawan

    2016-06-01

    Full Text Available Integrating perennial plants, such as Falcataria moluccana, into farming systems can provide economic and environmental benefits, especially in marginal areas. Indonesian governments at all levels have undertaken a number of efforts to speed up the adoption of tree planting on farms. However, the establishment of farm forestry on private land in Indonesia, especially in Java, varies widely. While farm forestry has been well adopted in some locations, farmers or land users in other locations are reluctant to adopt it, although the traits of the farmers and farm land in both locations are similar. Most adoption studies have employed cross-sectional data in a static discrete choice modeling framework to analyze why some farmers adopt at a certain point in time. The static approach does not consider the dynamic environment in which the adoption decision is made and thus does not incorporate the speed of adoption. Information on the adoption speed of an innovation is important for designing extension policies as well as for re-engineering innovations to align them with the socio-economic conditions of farmers. Based on data from a survey of a random sample of 117 smallholder households in Wonosobo Regency, Central Java, Indonesia, this study investigated the determinants of time to adoption of farm forestry using duration analysis. Results revealed that the factors that accelerate adoption include the age of the household head, the education level of the household head, off-farm employment and output price. Older farmers tend to adopt faster than younger farmers. Another interesting finding is that off-farm employment and membership of a farmers' group are the two most influential factors in speeding up the adoption of Falcataria-based farm forestry. The policy implications of this research are that government should design policies that promote farmers' participation in off-farm income activities and strengthen farmer groups, in addition to extension

  19. Learning Outcomes in Vocational Education: A Business Plan Development by Production-Based Learning Model Approach

    Science.gov (United States)

    Kusumaningrum, Indrati; Hidayat, Hendra; Ganefri; Anori, Sartika; Dewy, Mega Silfia

    2016-01-01

    This article describes the development of a business plan by using a production-based learning approach. In addition, this development also aims to maximize learning outcomes in vocational education. Preliminary analysis of the curriculum and of learning, and the needs of the market and society, become the basis for business plan development. To produce a…

  20. Exponentiated Weibull distribution approach based inflection S-shaped software reliability growth model

    Directory of Open Access Journals (Sweden)

    B.B. Sagar

    2016-09-01

    Full Text Available The aim of this paper is to estimate the number of defects in software and remove them successfully. The paper incorporates a Weibull distribution approach along with an inflection S-shaped Software Reliability Growth Model (SRGM); in this combination, a two-parameter Weibull distribution methodology is used. The Relative Prediction Error (RPE) is calculated as the validity criterion of the developed model. Experimental results on actual data from five data sets are compared with two other existing models, showing that the proposed software reliability growth model yields better estimates for removing defects. The paper thus presents a software reliability growth model combining the features of both the Weibull distribution and the inflection S-shaped SRGM to estimate the defects of a software system, and provides help to researchers and software industries in developing highly reliable software products.
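
    The record does not give the combined model's mean value function. As a simpler stand-in, the sketch below fits the classical inflection S-shaped SRGM, m(t) = a(1 - exp(-b t))/(1 + beta exp(-b t)), to hypothetical cumulative-failure data and computes a relative prediction error:

      import numpy as np
      from scipy.optimize import curve_fit

      def m_inflection(t, a, b, beta):
          """Mean cumulative failures, inflection S-shaped SRGM."""
          return a * (1.0 - np.exp(-b * t)) / (1.0 + beta * np.exp(-b * t))

      # Hypothetical cumulative failures observed over 20 weeks.
      t = np.arange(1, 21, dtype=float)
      y = np.array([4, 9, 16, 25, 37, 52, 68, 84, 99, 112, 122, 130,
                    136, 141, 144, 147, 149, 150, 151, 152], dtype=float)

      p, _ = curve_fit(m_inflection, t, y, p0=(160.0, 0.2, 5.0), maxfev=10000)
      rpe = (m_inflection(t[-1], *p) - y[-1]) / y[-1]   # relative prediction error
      print(f"a={p[0]:.1f}, b={p[1]:.3f}, beta={p[2]:.2f}, RPE={rpe:.2%}")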

  1. Equilibrium and non-equilibrium concepts in forest genetic modelling: population- and individually-based approaches

    NARCIS (Netherlands)

    Kramer, K.; Werf, van der D.C.

    2010-01-01

    The environment is changing and so are forests, in their functioning, in species composition, and in the species’ genetic composition. Many empirical and process-based models exist to support forest management. However, most of these models do not consider the impact of environmental changes and

  2. Structuring multidisciplinary knowledge for model-based water management: the HarmoniQuA approach

    NARCIS (Netherlands)

    Scholten, H.; Kassahun, A.; Refsgaard, J.C.

    2004-01-01

    The Water Framework Directive (WFD) provides European policy at the river basin scale. It explicitly states that water resource models should be applied. The EU-financed project HarmoniQuA aims at improving the quality of model based water management at catchment and river basin scales by providing

  3. A Simulation-Based Geostatistical Approach to Real-Time Reconciliation of the Grade Control Model

    NARCIS (Netherlands)

    Wambeke, T.; Benndorf, J.

    2017-01-01

    One of the main challenges of the mining industry is to ensure that produced tonnages and grades are aligned with targets derived from model-based expectations. Unexpected deviations, resulting from large uncertainties in the grade control model, often occur and strongly impact resource recovery

  4. A Simple Model for Complex Fabrication of MEMS based Pressure Sensor: A Challenging Approach

    Directory of Open Access Journals (Sweden)

    Himani SHARMA

    2010-08-01

    Full Text Available In this paper we present a simple model for the complex fabrication of a MEMS-based absolute micro pressure sensor. This kind of modeling is extremely useful for determining the complexity of the fabrication steps and provides complete information about the process sequence to be followed during manufacturing. Therefore, the need for test iterations decreases, and cost and time can be reduced significantly. Using the DevEdit tool (part of the SILVACO toolset), a behavioral model of the pressure sensor has been presented and implemented.

  5. FEM-based neural-network approach to nonlinear modeling with application to longitudinal vehicle dynamics control.

    Science.gov (United States)

    Kalkkuhl, J; Hunt, K J; Fritz, H

    1999-01-01

    A finite-element method (FEM)-based neural-network approach to Nonlinear AutoRegressive with eXogenous input (NARX) modeling is presented. The method uses multilinear interpolation functions on C0 rectangular elements. The local and global structure of the resulting model is analyzed. It is shown that the model can be interpreted both as a local model network and as a single-layer feedforward neural network. The main aim is to use the model for nonlinear control design. The proposed FEM NARX description is easily accessible to feedback linearizing control techniques. Its use with a two-degree-of-freedom nonlinear internal model controller is discussed. The approach is applied to modeling the nonlinear longitudinal dynamics of an experimental lorry, using measured data. The modeling results are compared with local model network and multilayer perceptron approaches. A nonlinear speed controller was designed based on the identified FEM model. The controller was implemented in a test vehicle, and several experimental results are presented.

  6. A model-based approach to preplanting risk assessment for gray leaf spot of maize.

    Science.gov (United States)

    Paul, P A; Munkvold, G P

    2004-12-01

    ABSTRACT Risk assessment models for gray leaf spot of maize, caused by Cercospora zeae-maydis, were developed using preplanting site and maize genotype data as predictors. Disease severity at the dough/dent plant growth stage was categorized into classes and used as the response variable. Logistic regression and classification and regression tree (CART) modeling approaches were used to predict severity classes as a function of planting date (PD), amount of maize soil surface residue (SR), cropping sequence, genotype maturity and gray leaf spot resistance (GLSR) ratings, and longitude (LON). Models were developed using 332 cases collected between 1998 and 2001. Thirty cases collected in 2002 were used to validate the models. Preplanting data showed a strong relationship with late-season gray leaf spot severity classes. The most important predictors were SR, PD, GLSR, and LON. Logistic regression models correctly classified 60 to 70% of the validation cases, whereas the CART models correctly classified 57 to 77% of these cases. Cases misclassified by the CART models were mostly due to overestimation, whereas the logistic regression models tended to misclassify cases by underestimation. Both the CART and logistic regression models have potential as management decision-making tools. Early quantitative assessment of gray leaf spot risk would allow more sound management decisions to be made when warranted.
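
    A toy Python sketch of the two modeling approaches on synthetic preplanting predictors (the study's 332 field cases are not public in this record); it shows the mechanics, not the published models:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      n = 332
      # Synthetic preplanting predictors: surface residue %, planting day of
      # year, resistance rating (1-9), longitude.
      X = np.column_stack([rng.uniform(0, 90, n), rng.uniform(100, 160, n),
                           rng.integers(1, 10, n), rng.uniform(-96, -90, n)])
      # Toy severity rule: more residue, later planting, less resistance -> worse.
      s = 0.03 * X[:, 0] + 0.02 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 1, n)
      y = np.digitize(s, np.quantile(s, [0.33, 0.66]))   # three severity classes

      logit = LogisticRegression(max_iter=1000).fit(X, y)
      cart = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
      print("logit:", round(logit.score(X, y), 2),
            "CART:", round(cart.score(X, y), 2))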

  7. A review of single-sample-based models and other approaches for radiocarbon dating of dissolved inorganic carbon in groundwater

    Science.gov (United States)

    Han, L. F; Plummer, Niel

    2016-01-01

    Numerous methods have been proposed to estimate the pre-nuclear-detonation 14C content of dissolved inorganic carbon (DIC) recharged to groundwater that has been corrected/adjusted for geochemical processes in the absence of radioactive decay (14C0) - a quantity that is essential for estimation of radiocarbon age of DIC in groundwater. The models/approaches most commonly used are grouped as follows: (1) single-sample-based models, (2) a statistical approach based on the observed (curved) relationship between 14C and δ13C data for the aquifer, and (3) the geochemical mass-balance approach that constructs adjustment models accounting for all the geochemical reactions known to occur along a groundwater flow path. This review discusses first the geochemical processes behind each of the single-sample-based models, followed by discussions of the statistical approach and the geochemical mass-balance approach. Finally, the applications, advantages and limitations of the three groups of models/approaches are discussed. The single-sample-based models constitute the prevailing use of 14C data in hydrogeology and hydrological studies. This is in part because the models are applied to an individual water sample to estimate the 14C age, therefore the measurement data are easily available. These models have been shown to provide realistic radiocarbon ages in many studies. However, they usually are limited to simple carbonate aquifers and selection of model may have significant effects on 14C0, often resulting in a wide range of estimates of 14C ages. Of the single-sample-based models, four are recommended for the estimation of 14C0 of DIC in groundwater: Pearson's model (Ingerson and Pearson, 1964; Pearson and White, 1967), Han & Plummer's model (Han and Plummer, 2013), the IAEA model (Gonfiantini, 1972; Salem et al., 1980), and Oeschger's model (Geyh, 2000). These four models include all processes considered in single-sample-based models, and can be used in different ranges of
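
    As a worked illustration of one of the recommended single-sample-based models, the sketch below implements Pearson's δ13C mixing correction together with the conventional age equation t = 8033·ln(14C0/14C); the end-member values used are common textbook defaults, not universal constants:

      import numpy as np

      def pearson_c14_0(d13c_dic, d13c_soil=-23.0, d13c_carb=0.0,
                        a_soil=100.0, a_carb=0.0):
          """Pearson's d13C mixing estimate of the initial 14C activity (pmc).

          q = fraction of DIC carbon from soil CO2; end-member values are
          common defaults and should be set per aquifer.
          """
          q = (d13c_dic - d13c_carb) / (d13c_soil - d13c_carb)
          return q * a_soil + (1.0 - q) * a_carb

      def c14_age(a_measured, a_initial):
          """Conventional radiocarbon age in years (Libby mean life 8033 a)."""
          return 8033.0 * np.log(a_initial / a_measured)

      a0 = pearson_c14_0(d13c_dic=-11.5)                 # -> 50 pmc here
      print(f"14C0 = {a0:.1f} pmc, age = {c14_age(35.0, a0):.0f} a")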

  8. Uncertainty analysis of pollutant build-up modelling based on a Bayesian weighted least squares approach

    International Nuclear Information System (INIS)

    Haddad, Khaled; Egodawatta, Prasanna; Rahman, Ataur; Goonetilleke, Ashantha

    2013-01-01

    Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, where a greater quantity of data is generally available. Consequently, available water quality datasets span only relatively short time scales, unlike water quantity data. Therefore, the ability to take due consideration of the variability associated with pollutant processes and natural phenomena is constrained. This in turn gives rise to uncertainty in the modelling outcomes, as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. Therefore, the assessment of model uncertainty is an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches, such as ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression, for the estimation of uncertainty associated with pollutant build-up prediction using limited datasets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates. The stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the use of the Bayesian approach along with the Monte Carlo simulation technique provides a powerful tool, which attempts to make the best use of the available knowledge in prediction and thereby presents a practical solution to counteract the limitations which are otherwise imposed on water quality modelling.
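
    A minimal sketch of the Bayesian weighted least squares idea on a log-linearised power-law build-up model B = a·D^b (synthetic data, weak Gaussian prior, assumed known noise variance); the paper's actual formulation is richer, this only shows the mechanics:

      import numpy as np

      rng = np.random.default_rng(0)
      # Build-up is often modelled as B = a * D**b (D = antecedent dry days);
      # work on logs so the model is linear in the parameters.
      D = rng.uniform(1.0, 14.0, 30)
      B = 2.0 * D ** 0.4 * np.exp(rng.normal(0.0, 0.25, 30))
      X = np.column_stack([np.ones_like(D), np.log(D)])
      y = np.log(B)

      w = np.ones_like(y)                    # observation weights (equal here)
      sigma2 = 0.25 ** 2                     # assumed known noise variance
      prior_prec = np.eye(2) * 1e-2          # weak zero-mean Gaussian prior

      post_prec = (X.T * w) @ X / sigma2 + prior_prec
      post_cov = np.linalg.inv(post_prec)
      post_mean = post_cov @ ((X.T * w) @ y / sigma2)

      # Monte Carlo predictive interval for a 7-day dry spell.
      betas = rng.multivariate_normal(post_mean, post_cov, size=5000)
      pred = np.exp(betas @ np.array([1.0, np.log(7.0)]))
      print("ln(a), b =", post_mean.round(3))
      print("build-up at 7 days (2.5/50/97.5%):",
            np.percentile(pred, [2.5, 50, 97.5]).round(2))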

  9. A Computational Agent-Based Modeling Approach for Competitive Wireless Service Market

    KAUST Repository

    Douglas, C C

    2011-04-01

    Using an agent-based modeling method, we study market dynamism with regard to wireless cellular services that are in competition for a greater market share and profit. In the proposed model, service providers and consumers are described as agents who interact with each other and actively participate in an economically well-defined marketplace. Parameters of the model are optimized using the Levenberg-Marquardt method. The quantitative prediction capabilities of the proposed model are examined through data reproducibility using past data from the U.S. and Korean wireless service markets. Finally, we investigate a disruptive market event, namely the introduction of the iPhone into the U.S. in 2007 and the resulting changes in the modeling parameters. We predict and analyze the impacts of the introduction of the iPhone into the Korean wireless service market assuming a release date of 2Q09 based on earlier data.

  10. Modeling the relationship between body weight and energy intake: a molecular diffusion-based approach.

    Science.gov (United States)

    Gong, Zhejun; Gong, Zhefeng

    2012-06-29

    Body weight is at least partly controlled by the choices made by a human in response to external stimuli. Changes in body weight are mainly caused by energy intake. By analyzing the mechanisms involved in food intake, we considered that molecular diffusion plays an important role in body weight changes. We propose a model based on Fick's second law of diffusion to simulate the relationship between energy intake and body weight. This model was applied to food intake and body weight data recorded in humans; the model showed a good fit to the experimental data. This model was also effective in predicting future body weight. In conclusion, this model based on molecular diffusion provides a new insight into the body weight mechanisms. This article was reviewed by Dr. Cabral Balreira (nominated by Dr. Peter Olofsson), Prof. Yang Kuang and Dr. Chao Chen.
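
    For orientation, Fick's second law, dC/dt = D d²C/dx², can be integrated with a simple explicit finite-difference scheme; the generic 1-D sketch below is only the diffusion building block the paper's model rests on, not the body-weight model itself.

```python
import numpy as np

# Explicit (FTCS) integration of Fick's second law on a 1-D domain
D, L, nx = 1e-3, 1.0, 101          # diffusivity, domain length, grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D               # stable when D*dt/dx^2 <= 0.5

C = np.zeros(nx)
C[nx // 2] = 1.0 / dx              # unit mass pulse in the middle

for _ in range(2000):
    lap = (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2
    lap[0] = (C[1] - C[0]) / dx**2        # closed (no-flux) ends
    lap[-1] = (C[-2] - C[-1]) / dx**2
    C = C + dt * D * lap

print(C.sum() * dx)   # total mass is conserved (~1.0)
```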

  11. Energy saving approaches for video streaming on smartphone based on QoE modeling

    DEFF Research Database (Denmark)

    Ballesteros, Luis Guillermo Martinez; Ickin, Selim; Fiedler, Markus

    2016-01-01

    In this paper, we study the influence of video stalling on QoE. We provide QoE models that are obtained in realistic scenarios on the smartphone, and provide energy-saving approaches for smartphones by leveraging the proposed QoE models in relation to energy. Results show that approximately 5J...... is saved in a 3-minute video clip with an acceptable Mean Opinion Score (MOS) level when the video frames are skipped. If the video frames are not skipped, then it is suggested to avoid freezes during a video stream, as freezes greatly increase the energy waste on smartphones....

  12. A Symbol Spotting Approach Based on the Vector Model and a Visual Vocabulary

    OpenAIRE

    Nguyen, Thi Oanh; Tabbone, Salvatore; Boucher, Alain

    2009-01-01

    This paper addresses the difficult problem of symbol spotting for graphic documents. We propose an approach where each graphic document is indexed as a text document by using the vector model and an inverted file structure. The method relies on a visual vocabulary built from a shape descriptor adapted to the document level and invariant under classical geometric transforms (rotation, scaling and translation). Regions of interest selected with a high degree of confidence ...

  13. Fast simulation approaches for power fluctuation model of wind farm based on frequency domain

    DEFF Research Database (Denmark)

    Lin, Jin; Gao, Wen-zhong; Sun, Yuan-zhang

    2012-01-01

    This paper discusses one model developed by Risø DTU, which is capable of simulating the power fluctuation of large wind farms in the frequency domain. In the original design, the “frequency-time” transformations are time-consuming and might limit the computation speed for a wind farm of large size.... is more than 300 times if all these approaches are adopted, in any low, medium and high wind speed test scenarios.

  14. Insurance based lie detection: Enhancing the verifiability approach with a model statement component.

    Science.gov (United States)

    Harvey, Adam C; Vrij, Aldert; Leal, Sharon; Lafferty, Marcus; Nahari, Galit

    2017-03-01

    The Verifiability Approach (VA) is a verbal lie detection tool that has shown promise when applied to insurance claims settings. This study examined the effectiveness of incorporating a Model Statement comprised of checkable information into the VA protocol for enhancing the verbal differences between liars and truth tellers. The study experimentally manipulated supplementing (or withholding) the VA with a Model Statement. It was hypothesised that such a manipulation would (i) encourage truth tellers to provide more verifiable details than liars and (ii) encourage liars to report more unverifiable details than truth tellers (compared to the no model statement control). As a result, it was hypothesised that (iii) the model statement would improve the classificatory accuracy of the VA. Participants reported 40 genuine and 40 fabricated insurance claim statements; half the liars and truth tellers were provided with a model statement as part of the VA procedure, and half were not. All three hypotheses were supported. In terms of accuracy, the model statement increased the classification rates of the VA considerably, from 65.0% to 90.0%. Providing interviewees with a model statement prime consisting of checkable detail appears to be a useful refinement to the VA procedure. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Innovation in Integrated Chemical Product-Process Design - Development through a Model-based Systems Approach

    DEFF Research Database (Denmark)

    Conte, Elisa

    in which experiments are planned and a third stage in which experiments are performed to validate the final product formula. The main focus of the project is on the development of the computer-aided stage of the methodology described above. The methodology considers two different scenarios: the design...... appropriate model-based screening techniques are employed. In the verification scenario, a shortlist of candidate ingredients is provided, therefore the problem size is much smaller and rigorous property models can be employed/developed. When using computer-aided tools for product design, several issues need...... to be addressed: new property models may need to be developed and/or the application range of existing property models may need to be extended (that is, new model parameters are needed), new and more efficient methods and tools for the application of the models may need to be developed, together with a flexible...

  16. Bounded Rational Managers Struggle with Talent Management - An Agent-based Modelling Approach

    DEFF Research Database (Denmark)

    Adamsen, Billy; Thomsen, Svend Erik

    of interaction between agents (Gilbert, 2008). Social systems where dependencies among the agents are important have been referred to as complex systems. The field of complex systems challenges the notion that by perfectly understanding the behavior of each component part of a system we will then understand......’s intervention that is causing the observed effects. A computer simulation is an abstract representation of something in the real social world. Deriving the behavior of a simulation model analytically is useful because it provides information about how the model will behave given a range of inputs......, and by experimenting with different inputs it is possible to learn how the model behaves. The model is used to simulate the real world as it might be in a variety of circumstances (Gilbert, 2008). For this study a simulation model coded in Java-based NetLogo language was created. The simulation model contained only...

  17. Addressing global uncertainty and sensitivity in first-principles based microkinetic models by an adaptive sparse grid approach

    Science.gov (United States)

    Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian

    2018-01-01

    In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.

  18. A social marketing approach to implementing evidence-based practice in VHA QUERI: the TIDES depression collaborative care model

    Science.gov (United States)

    2009-01-01

    Collaborative care models for depression in primary care are effective and cost-effective, but difficult to spread to new sites. Translating Initiatives for Depression into Effective Solutions (TIDES) is an initiative to promote evidence-based collaborative care in the U.S. Veterans Health Administration (VHA). Social marketing applies marketing techniques to promote positive behavior change. Described in this paper, TIDES used a social marketing approach to foster national spread of collaborative care models. TIDES social marketing approach: the approach relied on a sequential model of behavior change and explicit attention to audience segmentation. Segments included VHA national leadership, Veterans Integrated Service Network (VISN) regional leadership, facility managers, frontline providers, and veterans. TIDES communications, materials and messages targeted each segment, guided by an overall marketing plan. Results: depression collaborative care based on the TIDES model was adopted by VHA as part of the new Primary Care Mental Health Initiative and associated policies. It is currently in use in more than 50 primary care practices across the United States, and continues to spread, suggesting success for its social marketing-based dissemination strategy. Discussion and conclusion: development, execution and evaluation of the TIDES marketing effort shows that social marketing is a promising approach for promoting implementation of evidence-based interventions in integrated healthcare systems. PMID:19785754

  19. Multiscale approach for the construction of equilibrated all-atom models of a poly(ethylene glycol)-based hydrogel

    OpenAIRE

    Li, Xianfeng; Murthy, N. Sanjeeva; Becker, Matthew L.; Latour, Robert A.

    2016-01-01

    A multiscale modeling approach is presented for the efficient construction of an equilibrated all-atom model of a cross-linked poly(ethylene glycol) (PEG)-based hydrogel using the all-atom polymer consistent force field (PCFF). The final equilibrated all-atom model was built with a systematic simulation toolset consisting of three consecutive parts: (1) building a global cross-linked PEG-chain network at experimentally determined cross-link density using an on-lattice Monte Carlo method based...

  20. A logic-based dynamic modeling approach to explicate the evolution of the central dogma of molecular biology.

    Science.gov (United States)

    Jafari, Mohieddin; Ansari-Pour, Naser; Azimzadeh, Sadegh; Mirzaie, Mehdi

    It is nearly half a century since the introduction of the Central Dogma (CD) of molecular biology. This biological axiom has been developed further and currently appears to be all the more complex. In this study, we modified the CD by adding further species to the CD information flow and mathematically expressed the CD within a dynamic framework, using Boolean networks based on its present-day and 1965 editions. We show that the enhancement of the Dogma not only entails a higher level of complexity, but also shows a higher level of robustness, and is thus far more consistent with the nature of biological systems. Using this mathematical modeling approach, we put forward a logic-based expression of our conceptual view of molecular biology. Finally, we show that such biological concepts can be converted into dynamic mathematical models using a logic-based approach and thus may be useful as a framework for improving static conceptual models in biology.
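
    The logic-based flavour of the approach can be conveyed with a toy Boolean network of the information flow; the update rules below are our own simplification, with reverse transcription as a probabilistically selected rule set to hint at the probabilistic Boolean network idea.

```python
import random

# Toy Boolean network: DNA -> mRNA -> protein, with reverse transcription
# as a second, probabilistically selected rule set (the PBN flavour).
def step(state, use_reverse_transcription):
    dna, mrna = state["DNA"], state["mRNA"]
    return {
        "DNA": bool(dna or (use_reverse_transcription and mrna)),  # replication / RT
        "mRNA": bool(dna),                                         # transcription
        "protein": bool(mrna),                                     # translation
    }

random.seed(1)
state = {"DNA": False, "mRNA": True, "protein": False}   # e.g. an RNA genome
for t in range(5):
    state = step(state, use_reverse_transcription=random.random() < 0.5)
    print(t, state)
```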

  1. Collision-model-based approach to non-Markovian quantum dynamics

    OpenAIRE

    Ciccarello, F.; Palma, G.; Giovannetti, V.

    2013-01-01

    We present a theoretical framework to tackle quantum non-Markovian dynamics based on a microscopic collision model (CM), where the bath consists of a large collection of initially uncorrelated ancillas. Unlike standard memoryless CMs, we endow the bath with memory by introducing inter-ancillary collisions between next system-ancilla interactions. Our model interpolates between a fully Markovian dynamics and the continuous interaction of the system with a single ancilla, i.e., a strongly non-M...

  2. Railway Container Station Reselection Approach and Application: Based on Entropy-Cloud Model

    Directory of Open Access Journals (Sweden)

    Wencheng Huang

    2017-01-01

    Full Text Available Reasonable railway container freight station layout means higher transportation efficiency and lower transportation cost. To obtain more objective and accurate reselection results, a new entropy-cloud approach is formulated to solve the problem. The approach comprises three phases: the Entropy Method is used to obtain the weight of each subcriterion in Phase 1, a cloud model is designed to form the evaluation cloud for each subcriterion in Phase 2, and finally in Phase 3 the weights from Phase 1 are used to multiply the initial evaluation clouds from Phase 2. MATLAB is applied to determine the evaluation figures and help us make the final alternative decision. To test our approach, the railway container stations in the Wuhan Railway Bureau were selected for our case study. The final evaluation result indicates that only Xiangyang Station should be renovated and developed as a Special Transaction Station, five other stations should be kept and developed as Ordinary Stations, and the remaining 16 stations should be closed. Furthermore, the results show that, before the site reselection process, the average distance between two railway container stations was only 74.7 km, but this improved to 182.6 km after using the approach formulated in this paper.
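
    Phase 1 of the approach, the Entropy Method, is a standard multi-criteria weighting calculation; a sketch under the usual textbook definition is given below, with invented subcriterion scores.

```python
import numpy as np

def entropy_weights(decision_matrix):
    """Entropy Method: subcriteria that vary more across alternatives carry
    more information and therefore receive larger weights."""
    X = np.asarray(decision_matrix, dtype=float)
    P = X / X.sum(axis=0)                        # column-wise proportions
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)           # entropy per subcriterion, in [0, 1]
    d = 1.0 - e                                  # degree of diversification
    return d / d.sum()

# Rows: candidate stations, columns: subcriterion scores (illustrative numbers)
M = [[120, 0.8, 30],
     [ 90, 0.9, 45],
     [150, 0.4, 25],
     [ 60, 0.7, 60]]
print(entropy_weights(M))
```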

  3. Analysis of GARCH modeling in financial markets: an approach based on technical analysis strategies

    Directory of Open Access Journals (Sweden)

    Mircea Cristian Gherman

    2011-08-01

    Full Text Available In this paper we performed an analysis in order to assess the effect of GARCH modelling on the performance of trading rules applied to a stock market index. Our study relies on the overlap between econometric modelling, technical analysis and a simulation computing technique. The non-linear structures present in the daily returns of the analyzed index, and also in other financial series, together with the phenomenon of volatility clustering, are premises for applying a GARCH model. In our approach the standardized GARCH innovations are resampled using the bootstrap method. Technical analysis trading strategies are then applied to the simulated data. For all the simulated paths the “p-values” are computed in order to verify that the hypothesis concerning the goodness of fit of the GARCH model on the BET index is accepted. The processed data with trading rules show evidence that the GARCH model is a good choice for econometric modelling of financial time series, including the Romanian exchange trade index.
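
    The simulation core of such a study can be sketched with plain NumPy: compute GARCH(1,1) conditional volatilities (parameter values assumed below rather than estimated), standardize the returns, and bootstrap the standardized innovations into synthetic paths on which trading rules can then be tested.

```python
import numpy as np

rng = np.random.default_rng(42)

def garch_sigma(returns, omega, alpha, beta):
    """Conditional volatility of a GARCH(1,1) for given parameters."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)

# Illustrative daily returns; parameters stand in for a fitted model
r = rng.standard_t(df=5, size=1000) * 0.01
omega, alpha, beta = 1e-6, 0.08, 0.90

z = r / garch_sigma(r, omega, alpha, beta)      # standardized innovations

def simulate_path(z_pool, n):
    """Rebuild a synthetic return path from resampled innovations."""
    s2, out = omega / (1 - alpha - beta), []    # start at unconditional variance
    for _ in range(n):
        eps = rng.choice(z_pool) * np.sqrt(s2)
        out.append(eps)
        s2 = omega + alpha * eps ** 2 + beta * s2
    return np.array(out)

paths = [simulate_path(z, 250) for _ in range(100)]
print(len(paths), paths[0][:3])
```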

  4. Is equine colic seasonal? Novel application of a model based approach

    Directory of Open Access Journals (Sweden)

    Proudman Christopher J

    2006-08-01

    Full Text Available Background: Colic is an important cause of mortality and morbidity in domesticated horses, yet many questions about this condition remain to be answered. One such question is: does season have an effect on the occurrence of colic? Time-series analysis provides a rigorous statistical approach to this question but until now, to our knowledge, it has not been used in this context. Traditional time-series modelling approaches have limited applicability in the case of relatively rare diseases, such as specific types of equine colic. In this paper we present a modelling approach that respects the discrete nature of the count data and, using a regression model with a correlated latent variable and one with a linear trend, we explored the seasonality of specific types of colic occurring at a UK referral hospital between January 1995–December 2004. Results: Six- and twelve-month cyclical patterns were identified for all colics, all medical colics, epiploic foramen entrapment (EFE), equine grass sickness (EGS), surgically treated and large colon displacement/torsion colic groups. A twelve-month cyclical pattern only was seen in the large colon impaction colic group. There was no evidence of any cyclical pattern in the pedunculated lipoma group. These results were consistent irrespective of whether we were using a model including latent correlation or trend. Problems were encountered in attempting to include both trend and latent serial dependence in models simultaneously; this is likely to be a consequence of a lack of power to separate these two effects in the presence of small counts, yet in reality the underlying physical effect is likely to be a combination of both. Conclusion: The use of a regression model with either an autocorrelated latent variable or a linear trend has allowed us to establish formally a seasonal component to certain types of colic presented to a UK referral hospital over a 10 year period. These patterns appeared to coincide
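
    A common way to test for 6- and 12-month cycles in monthly counts, in the spirit of the paper's regression models though without the latent correlation term, is a Poisson GLM with harmonic regressors; the sketch below uses synthetic admission counts.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Synthetic monthly admission counts over 10 years with a 12-month cycle
months = np.arange(120)
counts = rng.poisson(np.exp(0.8 + 0.4 * np.sin(2 * np.pi * months / 12)))

# Harmonic regressors for 12- and 6-month cycles, plus a linear trend
X = sm.add_constant(np.column_stack([
    np.sin(2 * np.pi * months / 12), np.cos(2 * np.pi * months / 12),
    np.sin(2 * np.pi * months / 6),  np.cos(2 * np.pi * months / 6),
    months,
]))

fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params)   # significant 12-month harmonics indicate seasonality
```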

  5. Evaluation of conditional non-linear optimal perturbation obtained by an ensemble-based approach using the Lorenz-63 model

    Directory of Open Access Journals (Sweden)

    Xudong Yin

    2014-02-01

    Full Text Available The authors propose to implement conditional non-linear optimal perturbation related to model parameters (CNOP-P) through an ensemble-based approach. The approach was first used in our earlier study and is improved here to be suitable for calculating CNOP-P. Idealised experiments using the Lorenz-63 model are conducted to evaluate the performance of the improved ensemble-based approach. The results show that the maximum prediction error after optimisation is many times larger than the initial-guess prediction error, and is extremely close to, or greater than, the maximum value obtained by the exhaustive attack method (a million random samples). The calculation of CNOP-P by the ensemble-based approach is capable of maintaining a high accuracy over a long prediction time under different constraints and initial conditions. Further, the CNOP-P obtained by the approach is applied to sensitivity analysis of the Lorenz-63 model. The sensitivity analysis indicates that when the prediction time is set to 0.2 time units, the Lorenz-63 model becomes extremely insensitive to one parameter, which leaves the other two parameters to affect the uncertainty of the model. Finally, a series of parameter estimation experiments is performed to verify the sensitivity analysis. It is found that when the three parameters are estimated simultaneously, the insensitive parameter is estimated much less accurately, but the Lorenz-63 model can still generate a very good simulation thanks to the relatively accurate values of the other two parameters. When only the two sensitive parameters are estimated simultaneously and the insensitive parameter is left non-optimised, the outcome is better than when the three parameters are estimated simultaneously. With the increase of prediction time and observation, however, the model sensitivity to the insensitive parameter increases accordingly and the insensitive parameter can also be estimated successfully.
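
    The essence of an ensemble-based search for CNOP-P can be conveyed by brute-force sampling of parameter perturbations inside a constraint ball and keeping the one that maximizes the prediction error; this is only a crude stand-in for the paper's optimization, with our own constraint and settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def lorenz_rk4(x, sigma, rho, beta, dt=0.01, steps=200):
    """Integrate Lorenz-63 with fourth-order Runge-Kutta."""
    def f(v):
        return np.array([sigma * (v[1] - v[0]),
                         v[0] * (rho - v[2]) - v[1],
                         v[0] * v[1] - beta * v[2]])
    for _ in range(steps):
        k1 = f(x); k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = np.array([1.0, 3.0, 5.0])
p_ref = np.array([10.0, 28.0, 8.0 / 3.0])      # sigma, rho, beta
ref = lorenz_rk4(x0, *p_ref)

delta = 0.1 * np.abs(p_ref)                    # constraint radius per parameter
best_err, best_dp = -np.inf, None
for _ in range(500):                           # crude ensemble of perturbations
    dp = rng.uniform(-1, 1, 3) * delta
    err = np.linalg.norm(lorenz_rk4(x0, *(p_ref + dp)) - ref)
    if err > best_err:
        best_err, best_dp = err, dp
print(best_dp, best_err)                       # worst-case perturbation found
```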

  6. A Graph-Based Approach for 3D Building Model Reconstruction from Airborne LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Bin Wu

    2017-01-01

    Full Text Available 3D building model reconstruction is of great importance for environmental and urban applications. Airborne light detection and ranging (LiDAR) is a very useful data source for acquiring detailed geometric and topological information of building objects. In this study, we employed a graph-based method based on hierarchical structure analysis of building contours derived from LiDAR data to reconstruct urban building models. The proposed approach first uses a graph theory-based localized contour tree method to represent the topological structure of buildings, then separates the buildings into different parts by analyzing their topological relationships, and finally reconstructs the building model by integrating all the individual models established through the bipartite graph matching process. Our approach provides a more complete topological and geometrical description of building contours than existing approaches. We evaluated the proposed method by applying it to the Lujiazui region in Shanghai, China, a complex and large urban scene with various types of buildings. The results revealed that complex buildings could be reconstructed successfully with a mean modeling error of 0.32 m. Our proposed method offers a promising solution for 3D building model reconstruction from airborne LiDAR point clouds.

  7. Flow-based dissimilarity measures for reservoir models: a spatial-temporal tensor approach

    NARCIS (Netherlands)

    Insuasty, Edwin; van den Hof, P.M.J.; Weiland, Siep; Jansen, J.D.

    2017-01-01

    In reservoir engineering, it is attractive to characterize the difference between reservoir models in metrics that relate to the economic performance of the reservoir as well as to the underlying geological structure. In this paper, we develop a dissimilarity measure that is based on reservoir

  8. A study of the diffusion of alternative fuel vehicles: An agent-based modeling approach

    NARCIS (Netherlands)

    Zhang, Ting; Gensler, Sonja; Garcia, Rosanna

    This paper demonstrates the use of an agent-based model (ABM) to investigate factors that can speed the diffusion of eco-innovations, namely alternative fuel vehicles (AFVs). The ABM provides the opportunity to consider the interdependencies inherent between key participants in the automotive

  9. Prediction of human CNS pharmacokinetics using a physiologically-based pharmacokinetic modeling approach

    NARCIS (Netherlands)

    Yamamoto, Yumi; Valitalo, Pyry A.; Wong, Yin Cheong; Huntjens, Dymphy R.; Proost, Johannes H.; Vermeulen, An; Krauwinkel, Walter; Beukers, Margot W.; Kokki, Hannu; Kokki, Merja; Danhof, Meindert; van Hasselt, Johan G. C.; de Lange, Elizabeth C. M.

    2017-01-01

    Knowledge of drug concentration-time profiles at the central nervous system (CNS) target-site is critically important for rational development of CNS targeted drugs. Our aim was to translate a recently published comprehensive CNS physiologically-based pharmacokinetic (PBPK) model from rat to human,

  10. Agent-Based Model Approach to Complex Phenomena in Real Economy

    Science.gov (United States)

    Iyetomi, H.; Aoyama, H.; Fujiwara, Y.; Ikeda, Y.; Souma, W.

    An agent-based model for firms' dynamics is developed. The model consists of firm agents with identical characteristic parameters and a bank agent. Dynamics of those agents are described by their balance sheets. Each firm tries to maximize its expected profit with possible risks in market. Infinite growth of a firm directed by the "profit maximization" principle is suppressed by a concept of "going concern". Possibility of bankruptcy of firms is also introduced by incorporating a retardation effect of information on firms' decision. The firms, mutually interacting through the monopolistic bank, become heterogeneous in the course of temporal evolution. Statistical properties of firms' dynamics obtained by simulations based on the model are discussed in light of observations in the real economy.

  11. A simple macroscopic root water uptake model based on the hydraulic architecture approach

    Science.gov (United States)

    Couvreur, V.; Vanderborght, J.; Javaux, M.

    2012-12-01

    Many hydrological models including root water uptake (RWU) do not consider the dimension of root system hydraulic architecture (HA) because explicitly solving water flow in such a complex system is too time consuming. However, they might lack process understanding when basing RWU and plant water stress predictions on functions of variables such as the root length density distribution. On the basis of analytical solutions of water flow in a simple HA, we developed a model implicitly accounting for the root system HA, allowing the simulation of both RWU distribution and plant water stress in three-dimensional soil water flow models. The new model has three macroscopic parameters defined at the soil element scale, or at the plant scale, rather than for each segment of the root system architecture: the standard sink fraction distribution SSF, the root system equivalent conductance Krs, and the compensatory RWU conductance Kcomp. It clearly decouples the process of water stress from compensatory RWU, and its structure is appropriate for hydraulic lift simulation. Compared with a model explicitly solving water flow in a complex root system HA, the new model proved to be accurate and fast in dissimilar water dynamics scenarios, while keeping the same parameter set. With the proposed model, new concepts are introduced that open avenues toward simple and mechanistic RWU models and water stress functions operational for field-scale water dynamics simulation. A further study focuses on its applicability for simulating one-dimensional water dynamics in wheat-cropped fields.
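
    As we read the three-parameter formulation, the sink term combines a standard uptake term with a compensatory redistribution term; the sketch below encodes that reading with invented parameter values, so treat it as an interpretation rather than the authors' code.

```python
import numpy as np

def root_water_uptake(psi_soil, ssf, krs, kcomp, psi_collar):
    """Macroscopic RWU sink: SSF-weighted standard uptake plus compensatory
    redistribution driven by local deviations from the equivalent potential."""
    ssf = np.asarray(ssf) / np.sum(ssf)
    psi_eq = np.sum(ssf * psi_soil)          # SSF-weighted equivalent potential
    t_act = krs * (psi_eq - psi_collar)      # total root system uptake
    sink = ssf * t_act + kcomp * ssf * (psi_soil - psi_eq)
    return sink, t_act

# Three soil layers, drier at the top (water potentials in cm)
psi = np.array([-8000.0, -3000.0, -500.0])
sink, t_act = root_water_uptake(psi, ssf=[0.5, 0.3, 0.2],
                                krs=1e-4, kcomp=5e-5, psi_collar=-15000.0)
print(sink, t_act, sink.sum() - t_act)   # compensation only redistributes uptake
```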

  12. A meteo-hydrological prediction system based on a multi-model approach for precipitation forecasting

    Directory of Open Access Journals (Sweden)

    S. Davolio

    2008-02-01

    Full Text Available The precipitation forecasted by a numerical weather prediction model, even at high resolution, suffers from errors which can be considerable at the scales of interest for hydrological purposes. In the present study, a fraction of the uncertainty related to meteorological prediction is taken into account by implementing a multi-model forecasting approach, aimed at providing multiple precipitation scenarios driving the same hydrological model. Therefore, the estimation of the uncertainty associated with the quantitative precipitation forecast (QPF), conveyed by the multi-model ensemble, can be exploited by the hydrological model, propagating the error into the hydrological forecast.

    The proposed meteo-hydrological forecasting system is implemented and tested in a real-time configuration for several episodes of intense precipitation affecting the Reno river basin, a medium-sized basin located in northern Italy (Apennines). These episodes are associated with flood events of different intensity and are representative of different meteorological configurations responsible for severe weather affecting the northern Apennines.

    The simulation results show that the coupled system is promising in the prediction of discharge peaks (both in terms of amount and timing) for warning purposes. The ensemble hydrological forecasts provide a range of possible flood scenarios that proved to be useful for the support of civil protection authorities in their decision making.

  13. Unraveling the unsustainability spiral in sub-Saharan Africa: an agent based modelling approach

    Directory of Open Access Journals (Sweden)

    Joep A. van den Broek

    2007-10-01

    Full Text Available Sub-Saharan Africa is trapped in a complex unsustainability spiral with demographic, biophysical, technical and socio-political dimensions. Unravelling the spiral is vital to perceive which policy actions are needed to reverse it and initiate sustainable pro-poor growth. The article presents an evolutionary, multi-agent modelling framework that marries a socio-ecological approach to a world system perspective and takes agriculture as the engine for sustainable development in sub-Saharan Africa. A number of possibilities for empirical validation are proposed.

  14. A physically-based approach to reflection separation: from physical modeling to constrained optimization.

    Science.gov (United States)

    Kong, Naejin; Tai, Yu-Wing; Shin, Joseph S

    2014-02-01

    We propose a physically-based approach to separate reflection using multiple polarized images with a background scene captured behind glass. The input consists of three polarized images, each captured from the same view point but with a different polarizer angle separated by 45 degrees. The output is the high-quality separation of the reflection and background layers from each of the input images. A main technical challenge for this problem is that the mixing coefficient for the reflection and background layers depends on the angle of incidence and the orientation of the plane of incidence, which are spatially varying over the pixels of an image. Exploiting physical properties of polarization for a double-surfaced glass medium, we propose a multiscale scheme which automatically finds the optimal separation of the reflection and background layers. Through experiments, we demonstrate that our approach can generate superior results to those of previous methods.

  15. A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA

    Science.gov (United States)

    Khodabakhshi, Mohammad

    2009-08-01

    This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data envelopment analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems that must be solved with the two-model approach introduced in the first of the above-mentioned references to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.

  16. A physiologically based pharmacokinetic modelling approach to predict buprenorphine pharmacokinetics following intravenous and sublingual administration.

    Science.gov (United States)

    Kalluri, Hari V; Zhang, Hongfei; Caritis, Steve N; Venkataramanan, Raman

    2017-11-01

    Opioid dependence is associated with high morbidity and mortality. Buprenorphine (BUP) is approved by the Food and Drug Administration to treat opioid dependence. There is a lack of clear consensus on the appropriate dosing of BUP due to interpatient physiological differences in absorption/disposition, subjective response assessment and other patient comorbidities. The objective of the present study was to build and validate robust physiologically based pharmacokinetic (PBPK) models for intravenous (IV) and sublingual (SL) BUP as a first step to optimizing BUP pharmacotherapy. BUP-PBPK modelling and simulations were performed using SimCyp® by incorporating the physiochemical properties of BUP, establishing intersystem extrapolation factors-based in vitro-in-vivo extrapolation (IVIVE) methods to extrapolate in vitro enzyme activity data, and using tissue-specific plasma partition coefficient estimations. Published data on IV and SL-BUP in opioid-dependent and non-opioid-dependent patients were used to build the models. Fourteen model-naïve BUP-PK datasets were used for inter- and intrastudy validations. The IV and SL-BUP-PBPK models developed were robust in predicting the multicompartment disposition of BUP over a dosing range of 0.3-32 mg. Predicted plasma concentration-time profiles in virtual patients were consistent with reported data across five single-dose IV, five single-dose SL and four multiple dose SL studies. All PK parameter predictions were within 75-137% of the corresponding observed data. The model developed predicted the brain exposure of BUP to be about four times higher than that of BUP in plasma. The validated PBPK models will be used in future studies to predict BUP plasma and brain concentrations based on the varying demographic, physiological and pathological characteristics of patients. © 2017 The British Pharmacological Society.

  17. A LATIN-based model reduction approach for the simulation of cycling damage

    Science.gov (United States)

    Bhattacharyya, Mainak; Fau, Amelie; Nackenhorst, Udo; Néron, David; Ladevèze, Pierre

    2017-11-01

    The objective of this article is to introduce a new method, including model order reduction, for the life prediction of structures subjected to cyclic damage. In contrast to classical incremental schemes for damage computation, a non-incremental technique, the LATIN method, is used herein as a solution framework. This approach allows the introduction of a PGD model reduction technique, which leads to a drastic reduction of the computational cost. The proposed framework is exemplified for structures subjected to cyclic loading, where damage is considered to be isotropic and micro-defect closure effects are taken into account. A difficulty for the use of the LATIN method comes from the state laws, which cannot be transformed into linear relations through an internal variable transformation. A specific treatment of this issue is introduced in this work.

  18. Availability modeling approach for future circular colliders based on the LHC operation experience

    CERN Document Server

    AUTHOR|(CDS)2096726; Apollonio, Andrea; Gutleber, Johannes; Sollander, Peter; Penttinen, Jussi-Pekka; Virtanen, Seppo Johannes

    2016-01-01

    Reaching the challenging integrated luminosity production goals of a future circular hadron collider (FCC-hh) and high luminosity LHC (HL-LHC) requires a thorough understanding of today’s most powerful high energy physics research infrastructure, the LHC accelerator complex at CERN. FCC-hh, a 4 times larger collider ring, aims at delivering 10–20 ab⁻¹ of integrated luminosity at 7 times higher collision energy. Since the identification of the key factors that impact availability and cost is far from obvious, a dedicated activity has been launched in the frame of the future circular collider study to develop models to study possible ways to optimize accelerator availability. This paper introduces the FCC reliability and availability study, which takes a fresh new look at assessing and modeling reliability and availability of particle accelerator infrastructures. The paper presents a probabilistic approach for Monte Carlo simulation of the machine operational cycle, schedule and availability for p...

  19. A Data-Based Approach for Modeling and Analysis of Vehicle Collision by LPV-ARMAX Models

    Directory of Open Access Journals (Sweden)

    Qiugang Lu

    2013-01-01

    Full Text Available The vehicle crash test is considered to be the most direct and common approach to assess vehicle crashworthiness. However, it suffers from the drawbacks of high experimental cost and huge time consumption. Therefore, the establishment of a mathematical model of vehicle crash which can simplify the analysis process is significantly attractive. In this paper, we present the application of the LPV-ARMAX model to simulate car-to-pole collisions with different initial impact velocities. The parameters of the LPV-ARMAX are assumed to depend on the initial impact velocities. Instead of establishing a set of LTI models for vehicle crashes with various impact velocities, the LPV-ARMAX model is comparatively simple and applicable to predict the responses of new collision situations different from the ones used for identification. Finally, the comparison between the predicted response and the real test data is conducted, which shows the high fidelity of the LPV-ARMAX model.

  20. A Model-Based Approach to Trial-By-Trial P300 Amplitude Fluctuations

    Science.gov (United States)

    Kolossa, Antonio; Fingscheidt, Tim; Wessel, Karl; Kopp, Bruno

    2013-01-01

    It has long been recognized that the amplitude of the P300 component of event-related brain potentials is sensitive to the degree to which eliciting stimuli are surprising to the observers (Donchin, 1981). While Squires et al. (1976) showed and modeled dependencies of P300 amplitudes from observed stimuli on various time scales, Mars et al. (2008) proposed a computational model keeping track of stimulus probabilities on a long-term time scale. We suggest here a computational model which integrates prior information with short-term, long-term, and alternation-based experiential influences on P300 amplitude fluctuations. To evaluate the new model, we measured trial-by-trial P300 amplitude fluctuations in a simple two-choice response time task, and tested the computational models of trial-by-trial P300 amplitudes using Bayesian model evaluation. The results reveal that the new digital filtering (DIF) model provides a superior account of the trial-by-trial P300 amplitudes when compared to both Squires et al.’s (1976) model, and Mars et al.’s (2008) model. We show that the P300-generating system can be described as two parallel first-order infinite impulse response (IIR) low-pass filters and an additional fourth-order finite impulse response (FIR) high-pass filter. Implications of the acquired data are discussed with regard to the neurobiological distinction between short-term, long-term, and working memory as well as from the point of view of predictive coding models and Bayesian learning theories of cortical function. PMID:23404628
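
    The filter structure can be illustrated with two parallel first-order IIR low-pass filters (leaky integrators) that track stimulus probability on different time scales; the toy surprise predictor below is our own illustration with arbitrary coefficients, not the fitted DIF model.

```python
import numpy as np

rng = np.random.default_rng(11)

def leaky_prob(x, a):
    """First-order IIR low-pass filter of the stimulus sequence."""
    p, out = 0.5, []
    for s in x:
        p = a * p + (1 - a) * s
        out.append(p)
    return np.array(out)

stimuli = (rng.random(500) < 0.3).astype(float)   # rare targets, P = 0.3
p_short = leaky_prob(stimuli, a=0.6)              # fast-decaying memory
p_long = leaky_prob(stimuli, a=0.98)              # slow-decaying memory
p_mix = 0.5 * p_short + 0.5 * p_long              # parallel filter bank

# Trial-by-trial surprise: low predicted probability -> large P300 predictor
surprise = -np.log(np.where(stimuli[1:] == 1, p_mix[:-1], 1 - p_mix[:-1]))
print(surprise[:10])
```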

  1. Ground state of the Hubbard model: a variational approach based on the maximum entropy principle

    Energy Technology Data Exchange (ETDEWEB)

    Arrachea, L. (Dept. de Fisica, Univ. Nacional de La Plata (Argentina)); Plastino, A. (Dept. de Fisica, Univ. Nacional de La Plata (Argentina)); Canosa, N. (Physik Dept. der Technischen Univ. Muenchen, Garching (Germany)); Rossignoli, R. (Physik Dept. der Technischen Univ. Muenchen, Garching (Germany))

    1993-05-17

    A variational approach based on maximum entropy considerations is used to approximate the ground state of the Hubbard Hamiltonian. The evaluation of both the ground state energy and the correlation functions is performed with a trial wave function, which is parameterized in terms of a small set of variables associated with the relevant correlation operators of the problem. Results for the one-dimensional case are in very good agreement with the exact ones for arbitrary interaction strengths. It is also shown that the method provides us with better evaluations of the ground state energy and correlation functions than those obtained with the Gutzwiller approximation. (orig.)

  2. Ground state of the Hubbard model: a variational approach based on the maximum entropy principle

    Science.gov (United States)

    Arrachea, L.; Canosa, N.; Plastino, A.; Rossignoli, R.

    1993-05-01

    A variational approach based on maximum entropy considerations is used to approximate the ground state of the Hubbard Hamiltonian. The evaluation of both the ground state energy and the correlation functions is performed with a trial wave function, which is parameterized in terms of a small set of variables associated with the relevant correlation operators of the problem. Results for the one-dimensional case are in very good agreement with the exact ones for arbitrary interaction strengths. It is also shown that the method provides us with better evaluations of the ground state energy and correlation functions than those obtained with the Gutzwiller approximation.

  3. Minimal important change (MIC) based on a predictive modeling approach was more precise than MIC based on ROC analysis

    NARCIS (Netherlands)

    Terluin, B.; Eekhout, I.; Terwee, C.B.; de Vet, H.C.W.

    2015-01-01

    Objectives To present a new method to estimate a "minimal important change" (MIC) of health-related quality of life (HRQOL) scales, based on predictive modeling, and to compare its performance with the MIC based on receiver operating characteristic (ROC) analysis. To illustrate how the new method

  4. A social marketing approach to implementing evidence-based practice in VHA QUERI: the TIDES depression collaborative care model.

    Science.gov (United States)

    Luck, Jeff; Hagigi, Fred; Parker, Louise E; Yano, Elizabeth M; Rubenstein, Lisa V; Kirchner, JoAnn E

    2009-09-28

    Collaborative care models for depression in primary care are effective and cost-effective, but difficult to spread to new sites. Translating Initiatives for Depression into Effective Solutions (TIDES) is an initiative to promote evidence-based collaborative care in the U.S. Veterans Health Administration (VHA). Social marketing applies marketing techniques to promote positive behavior change. Described in this paper, TIDES used a social marketing approach to foster national spread of collaborative care models. The approach relied on a sequential model of behavior change and explicit attention to audience segmentation. Segments included VHA national leadership, Veterans Integrated Service Network (VISN) regional leadership, facility managers, frontline providers, and veterans. TIDES communications, materials and messages targeted each segment, guided by an overall marketing plan. Depression collaborative care based on the TIDES model was adopted by VHA as part of the new Primary Care Mental Health Initiative and associated policies. It is currently in use in more than 50 primary care practices across the United States, and continues to spread, suggesting success for its social marketing-based dissemination strategy. Development, execution and evaluation of the TIDES marketing effort shows that social marketing is a promising approach for promoting implementation of evidence-based interventions in integrated healthcare systems.

  5. Model-Based Approach to the Evaluation of Task Complexity in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Ham, Dong Han

    2007-02-01

    This study developed a model-based method for evaluating task complexity and examined ways of evaluating the complexity of tasks designed for abnormal situations and daily task situations in NPPs. The main results of this study can be summarised as follows. First, this study developed a conceptual framework for studying complexity factors and a model of complexity factors that classifies them according to the types of knowledge that human operators use. Second, this study developed a more practical model of task complexity factors and identified twenty-one complexity factors based on the model. The model emphasizes that a task is a system to be designed and that its complexity has several dimensions. Third, we developed a method of identifying task complexity factors and evaluating task complexity qualitatively based on the developed model of task complexity factors. This method can be widely used in various task situations. Fourth, this study examined the applicability of TACOM to abnormal situations and daily task situations, such as maintenance, and confirmed that it can reasonably be used in those situations. Fifth, we developed application examples to demonstrate the use of the theoretical results of this study. Lastly, this study reinterpreted well-known principles for designing information displays in NPPs in terms of task complexity and suggested a way of evaluating the conceptual design of displays analytically by using the concept of task complexity. All of these results will be used as a basis when evaluating the complexity of tasks specified in procedures or information displays and when designing ways of improving human performance in NPPs.

  6. mRNA translation and protein synthesis: an analysis of different modelling methodologies and a new PBN based approach.

    Science.gov (United States)

    Zhao, Yun-Bo; Krishnan, J

    2014-02-27

    mRNA translation involves simultaneous movement of multiple ribosomes on the mRNA and is also subject to regulatory mechanisms at different stages. Translation can be described by various codon-based models, including ODE, TASEP, and Petri net models. Although such models have been extensively used, the overlap and differences between these models, and the implications of each model's assumptions, have not been systematically elucidated. The selection of the most appropriate modelling framework, and the most appropriate way to develop coarse-grained/fine-grained models in different contexts, is not clear. We systematically analyze and compare how different modelling methodologies can be used to describe translation. We define various statistically equivalent codon-based simulation algorithms and analyze the importance of the update rule in determining the steady state, an aspect often neglected. A novel probabilistic Boolean network (PBN) model is then proposed for modelling translation, which enjoys an exact numerical solution. This solution matches those of numerical simulation from other methods and acts as a complementary tool to analytical approximations and simulations. The advantages and limitations of various codon-based models are compared and illustrated by examples with real biological complexities such as slow codons, premature termination and feedback regulation. Our studies reveal that while different models give broadly similar trends in many cases, important differences also arise and can be clearly seen in the dependence of the translation rate on different parameters. Furthermore, the update rule affects the steady-state solution. The codon-based models are based on different levels of abstraction. Our analysis suggests that a multiple-model approach to understanding translation allows one to ascertain which aspects of the conclusions are robust with respect to the choice of modelling methodology, and when (and why) important differences may
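
    Of the codon-based frameworks compared, TASEP is the easiest to sketch: ribosomes initiate at rate alpha, hop one codon right when the next site is free, and terminate at rate beta, here under a random-sequential update rule (the rates and lattice size are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal TASEP picture of translation on a 100-codon lattice
L, alpha, beta, steps = 100, 0.3, 0.8, 200_000
lattice = np.zeros(L, dtype=bool)
completed = 0

for _ in range(steps):
    i = rng.integers(-1, L)                   # random-sequential update
    if i == -1:                               # initiation attempt
        if not lattice[0] and rng.random() < alpha:
            lattice[0] = True
    elif i == L - 1:                          # termination attempt
        if lattice[i] and rng.random() < beta:
            lattice[i] = False
            completed += 1
    elif lattice[i] and not lattice[i + 1]:   # elongation hop
        lattice[i], lattice[i + 1] = False, True

print("proteins completed per update sweep ~", completed / steps * L)
```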

  7. Assessing suitable area for Acacia dealbata Mill. in the Ceira River Basin (Central Portugal) based on maximum entropy modelling approach

    Directory of Open Access Journals (Sweden)

    Jorge Pereira

    2015-12-01

    Full Text Available Biological invasion by exotic organisms has become a key issue, a concern associated with the deep impacts of such processes on several domains. A better understanding of the processes, the identification of more susceptible areas, and the definition of preventive or mitigation measures are identified as critical for reducing the associated impacts. The use of species distribution modeling might help identify areas that are more susceptible to invasion. This paper aims to present preliminary results on assessing the susceptibility to invasion by the exotic species Acacia dealbata Mill. in the Ceira river basin. The results are based on the maximum entropy modelling approach, considered one of the correlative modelling techniques with the best predictive performance. Models whose validation is based on independent data sets show better performance, as evaluated here with the AUC of the ROC accuracy measure.

  8. A Bio-Inspired Model-Based Approach for Context-Aware Post-WIMP Tele-Rehabilitation

    Directory of Open Access Journals (Sweden)

    Víctor López-Jaquero

    2016-10-01

    Full Text Available Tele-rehabilitation is one of the main domains where Information and Communication Technologies (ICT) have been proven useful in moving healthcare from care centers to patients’ homes. Moreover, patients, especially those undergoing physical therapy, cannot use a traditional Window, Icon, Menu, Pointer (WIMP) system; they need to interact in a natural way, that is, there is a need to move from WIMP systems to Post-WIMP ones. Moreover, tele-rehabilitation systems should be developed following the context-aware approach, so that they are able to adapt to the patients’ context to provide them with usable and effective therapies. In this work a model-based approach is presented to assist stakeholders in the development of context-aware Post-WIMP tele-rehabilitation systems. It entails three different models: (i) a task model for designing the rehabilitation tasks; (ii) a context model to facilitate the adaptation of these tasks to the context; and (iii) a bio-inspired presentation model to specify thoroughly how such tasks should be performed by the patients. Our proposal overcomes one of the limitations of the model-based approach for the development of context-aware systems by supporting the specification of non-functional requirements. Finally, a case study is used to illustrate how this proposal can be put into practice to design a real-world rehabilitation task.

  9. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    Science.gov (United States)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
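
    The core averaging step, weighting each (precipitation product, hydrological model) member by its fit over a training period, can be sketched as classical Bayesian model averaging; the magnitude- and timing-dependent dynamic weighting that distinguishes e-Bay is not reproduced here.

```python
import numpy as np

def bma_discharge(simulations, observed, sigma):
    """Weight ensemble members by Gaussian likelihood on a training period,
    then return the weights and the expected (weighted-average) discharge."""
    sims = np.asarray(simulations)              # shape: (members, time)
    loglik = -0.5 * np.sum(((sims - observed) / sigma) ** 2, axis=1)
    w = np.exp(loglik - loglik.max())           # stabilized likelihood weights
    w /= w.sum()
    return w, w @ sims

# Three illustrative members around a synthetic observed hydrograph
t = np.linspace(0, 10, 50)
obs = 5 + 3 * np.exp(-0.5 * (t - 4) ** 2)
members = [obs + np.random.default_rng(k).normal(0, s, t.size)
           for k, s in enumerate([0.3, 0.8, 1.5])]

w, expected = bma_discharge(members, obs, sigma=1.0)
print(w)   # the member closest to the observations dominates
```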

  10. The Hunt Opinion Model-An Agent Based Approach to Recurring Fashion Cycles.

    Science.gov (United States)

    Apriasz, Rafał; Krueger, Tyll; Marcjasz, Grzegorz; Sznajd-Weron, Katarzyna

    2016-01-01

    We study a simple agent-based model of recurring fashion cycles in a society that consists of two interacting communities: "snobs" and "followers" (or "opinion hunters", hence the name of the model). Followers conform to all other individuals, whereas snobs conform only to their own group and anticonform to the other. The model allows us to examine the role of the social structure, i.e. the influence of the number of inter-links between the two communities, as well as the role of the stability of links. The latter is accomplished by considering two versions of the same model: quenched (parameterized by the fraction L of fixed inter-links) and annealed (parameterized by the probability p that a given inter-link exists). Using Monte Carlo simulations and analytical treatment (the latter only for the annealed model), we show that there is a critical fraction of inter-links above which recurring cycles occur. For p ≤ 0.5 we derive a relation between the parameters L and p that allows us to compare both models, and show that the critical value of inter-connections, p*, is the same for both versions of the model (annealed and quenched), but the period of a fashion cycle is shorter for the quenched model. Near the critical point, the cycles are irregular and a change of fashion is difficult to predict. For the annealed model we also provide a deeper theoretical analysis. We conjecture on topological grounds that a so-called saddle-node heteroclinic bifurcation appears at p*. For p ≥ 0.5 we show analytically the existence of a second critical value of p, at which the system undergoes a Hopf bifurcation.

  11. MODELING OF INVESTMENT STRATEGIES IN STOCKS MARKETS: AN APPROACH FROM MULTI AGENT BASED SIMULATION AND FUZZY LOGIC

    Directory of Open Access Journals (Sweden)

    ALEJANDRO ESCOBAR

    2010-01-01

    Full Text Available This paper presents a simulation model of a complex system, in this case a financial market, using a MultiAgent Based Simulation approach. The model takes into account micro-level aspects like the Continuous Double Auction mechanism, which is widely used within stock markets, as well as the reasoning of investor agents who participate looking for profits. To model such reasoning, several variables were considered, including general stock information like profitability and volatility, as well as agent-specific aspects like risk tendency. All these variables are incorporated through a fuzzy logic approach, trying to represent faithfully the kind of reasoning that non-expert investors have, and including a stochastic component in order to model human factors.

  12. A continuum-based structural modeling approach for cellulose nanocrystals (CNCs)

    Science.gov (United States)

    Shishehbor, Mehdi; Dri, Fernando L.; Moon, Robert J.; Zavattieri, Pablo D.

    2018-02-01

    We present a continuum-based structural model to study the mechanical behavior of cellulose nanocrystals (CNCs), and analyze the effect of bonded and non-bonded interactions on the mechanical properties under various loading conditions. In particular, this model assumes the uncoupling between the bonded and non-bonded interactions, and their behavior is obtained from atomistic simulations. Our results indicate that the major contribution to the tensile and bending stiffness comes from the cellulose chain stiffness, whereas the shear behavior is mainly governed by Van der Waals (VdW) forces. In addition, we report a negligible torsional stiffness, which may explain the CNC tendency to twist easily under very small or nonexistent torques. Furthermore, the sensitivity of the mechanical properties to geometrical imperfections was investigated using an analytical model of the CNC structure. Our results indicate that the presence of imperfections has a small influence on the majority of the elastic properties. Finally, it is shown that a simple homogeneous and orthotropic representation of a CNC under bending underestimates the contribution of non-bonded interactions, leading to up to 60% error in the calculation of the bending stiffness of CNCs. In contrast, the proposed model can lead to more accurate predictions of the elastic behavior of CNCs. This is the first step toward the development of a more efficient model that can be used to model the inelastic behavior of single and multiple CNCs.

  13. Tracking control of nonlinear lumped mechanical continuous-time systems: A model-based iterative learning approach

    Science.gov (United States)

    Smolders, K.; Volckaert, M.; Swevers, J.

    2008-11-01

    This paper presents a nonlinear model-based iterative learning control procedure to achieve accurate tracking control for nonlinear lumped mechanical continuous-time systems. The model structure used in this iterative learning control procedure is new and combines a linear state space model with a nonlinear feature space transformation. An intuitive two-step iterative algorithm to identify the model parameters is presented. It alternates between the estimation of the linear and the nonlinear model part. It is assumed that, besides the input and output signals, the full state vector of the system is also available for identification. A measurement and signal processing procedure to estimate these signals for lumped mechanical systems is presented. The iterative learning control procedure relies on the calculation of the input that generates a given model output, so-called offline model inversion. A new offline nonlinear model inversion method for continuous-time, nonlinear time-invariant, state space models based on Newton's method is presented and applied to the new model structure. This model inversion method is not restricted to minimum phase models. It requires only the calculation of the first-order derivatives of the state space model and is applicable to multivariable models. For periodic reference signals the method yields a compact implementation in the frequency domain. Moreover, it is shown that a bandwidth can be specified up to which learning is allowed when using this inversion method in the iterative learning control procedure. Experimental results for a nonlinear single-input single-output system corresponding to a quarter car on a hydraulic test rig are presented. It is shown that the new nonlinear approach outperforms the linear iterative learning control approach currently used in the automotive industry on durability test rigs.
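    The offline inversion step can be illustrated on a toy discrete-time system. The sketch below uses a finite-difference Jacobian for brevity, whereas the paper computes the first-order derivatives of the identified state-space model analytically; the toy model itself is an assumption made for illustration.

```python
import numpy as np

def simulate(u, x0=0.0):
    """Toy nonlinear lumped model (stand-in for the identified state-space
    model plus feature-space transformation; the real structure is the paper's)."""
    x, y = x0, np.zeros_like(u)
    for k in range(len(u)):
        x = 0.8 * x + 0.5 * np.tanh(u[k])   # nonlinear state update
        y[k] = x
    return y

def invert(y_ref, iters=20, eps=1e-6):
    """Newton-type offline inversion: find u with simulate(u) ~= y_ref."""
    u = np.zeros_like(y_ref)
    for _ in range(iters):
        r = simulate(u) - y_ref
        # finite-difference Jacobian dy/du, column by column
        J = np.zeros((len(u), len(u)))
        for j in range(len(u)):
            du = np.zeros_like(u)
            du[j] = eps
            J[:, j] = (simulate(u + du) - simulate(u)) / eps
        u -= np.linalg.solve(J + 1e-9 * np.eye(len(u)), r)  # Newton step
    return u

y_ref = 0.3 * np.sin(np.linspace(0, 2 * np.pi, 50))
u = invert(y_ref)
print(np.max(np.abs(simulate(u) - y_ref)))   # tracking error after inversion
```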

  14. An Entropy-Based Approach to Path Analysis of Structural Generalized Linear Models: A Basic Idea

    Directory of Open Access Journals (Sweden)

    Nobuoki Eshima

    2015-07-01

    Full Text Available A path analysis method for causal systems based on generalized linear models is proposed using entropy. A practical example is introduced, and a brief explanation of the entropy coefficient of determination is given. Direct and indirect effects of explanatory variables are discussed as log odds ratios, i.e., relative information, and a method for summarizing the effects is proposed. The example dataset is then re-analyzed using the method.

  15. Filling the voids in the SRTM elevation model — A TIN-based delta surface approach

    Science.gov (United States)

    Luedeling, Eike; Siebert, Stefan; Buerkert, Andreas

    The Digital Elevation Model (DEM) derived from NASA's Shuttle Radar Topography Mission is the most accurate near-global elevation model that is publicly available. However, it contains many data voids, mostly in mountainous terrain. This problem is particularly severe in the rugged Oman Mountains. This study presents a new method, based on Triangular Irregular Networks (TINs), to fill these voids using a fill surface derived from Russian military maps. For each void, we extracted points around the edge of the void from the SRTM DEM and the fill surface. TINs were calculated from these points and converted to a base surface for each dataset. The fill base surface was subtracted from the fill surface, and the result added to the SRTM base surface. The fill surface could then be seamlessly merged with the SRTM DEM. For validation, we compared the resulting DEM to the original SRTM surface, to the fill DEM and to a surface calculated by the International Center for Tropical Agriculture (CIAT) from the SRTM data. We calculated the differences between measured GPS positions and the respective surfaces for 187,500 points throughout the mountain range (ΔGPS). Comparison of the means and standard deviations of these values showed that for the void areas the fill surface was most accurate, with a standard deviation of the ΔGPS from the mean ΔGPS of 69 m, and little accuracy was lost by merging it with the SRTM surface (standard deviation of 76 m). The CIAT model was much less accurate in these areas (standard deviation of 128 m). The results show that our method is capable of transferring the relative vertical accuracy of a fill surface to the void areas in the SRTM model, without introducing uncertainties about the absolute elevation of the fill surface. It is well suited for datasets with varying altitude biases, which is a common problem of older topographic information.
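    In code, the delta-surface idea reduces to interpolating two base surfaces through the void-edge points and adding their difference to the fill surface. The sketch below is a minimal raster version under stated assumptions: both surfaces are co-registered on one grid, and scipy's 'linear' griddata, which triangulates the points, stands in for the TIN base surfaces.

```python
import numpy as np
from scipy.ndimage import binary_dilation
from scipy.interpolate import griddata

def fill_void(srtm, fill, void_mask):
    """Delta-surface void filling: srtm, fill are 2-D elevation arrays on the
    same grid; void_mask is True inside the void to be filled."""
    edge = binary_dilation(void_mask) & ~void_mask   # valid cells bordering the void
    pts, voids = np.argwhere(edge), np.argwhere(void_mask)

    # TIN-like base surfaces interpolated through the void-edge points
    srtm_base = griddata(pts, srtm[edge], voids, method='linear')
    fill_base = griddata(pts, fill[edge], voids, method='linear')

    out = srtm.copy()
    # drape the fill surface's deviation from its own base onto the SRTM base
    out[void_mask] = srtm_base + (fill[void_mask] - fill_base)
    return out
```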

  16. Simulating Transport and Land Use Interdependencies for Strategic Urban Planning—An Agent Based Modelling Approach

    Directory of Open Access Journals (Sweden)

    Nam Huynh

    2015-10-01

    Full Text Available Agent-based modelling has been widely accepted as a promising tool for urban planning purposes thanks to its capability to provide sophisticated insights into the social behaviours and the interdependencies that characterise urban systems. In this paper, we report on an agent-based model, called TransMob, which explicitly simulates the mutual dynamics between demographic evolution, transport demands, housing needs and the eventual change in the average satisfaction of the residents of an urban area. The ability to reproduce such dynamics is a unique feature that is not found in many similar agent-based models in the literature. TransMob consists of six major modules: synthetic population, perceived liveability, travel diary assignment, traffic micro-simulator, residential location choice, and travel mode choice. TransMob is used to simulate the dynamics of a metropolitan area in the south-east of Sydney, Australia, in 2006 and 2011, with demographic evolution. The results compare favourably against survey data for the area in 2011, thereby validating the capability of TransMob to reproduce the observed complexity of an urban area. We also report on the application of TransMob to simulate various hypothetical scenarios of urban planning policies. We conclude with discussions of the current limitations of TransMob, which serve as suggestions for future developments.

  17. A robotics-based approach to modeling of choice reaching experiments on visual attention

    Directory of Open Access Journals (Sweden)

    Soeren Strauss

    2012-04-01

    Full Text Available The paper presents a robotics-based model for choice reaching experiments on visual attention. In these experiments participants were asked to make rapid reach movements towards a target in an odd-colour search task, i.e. reaching for a green square among red squares and vice versa (e.g. Song & Nakayama, 2008). Interestingly, these studies found that in a high number of trials movements were initially directed towards a distractor and only later adjusted towards the target. These curved trajectories occurred particularly frequently when the target in the directly preceding trial had a different colour (priming effect). Our model is embedded in a closed-loop control of a LEGO robot arm aiming to mimic these reach movements. The model is based on our earlier work which suggests that target selection in visual search is implemented through parallel interactions between competitive and cooperative processes in the brain (Heinke & Backhaus, 2011; Heinke & Humphreys, 2003). To link this model with the control of the robot arm we implemented a topological representation of movement parameters following the dynamic field theory (Erlhagen & Schoener, 2002). The robot arm is able to mimic the results of the odd-colour search task, including the priming effect, and also generates human-like trajectories with a bell-shaped velocity profile. Theoretical implications and predictions are discussed in the paper.

  18. Compression moulding simulations of SMC using a multiobjective surrogate-based inverse modeling approach

    Science.gov (United States)

    Marjavaara, B. D.; Ebermark, S.; Lundström, T. S.

    2009-09-01

    A multiobjective surrogate-based inverse modeling technique to predict the spatial and temporal pressure distribution numerically during the fabrication of sheet moulding compounds (SMCs) is introduced. Specifically, an isotropic temperature-dependent Newtonian viscosity model of an SMC charge is fitted to experimental measurements via numerical simulations in order to mimic the temporal pressure distribution at two spatial locations simultaneously. The simulations are performed using the commercial computational fluid dynamics (CFD) code ANSYS CFX-10.0, and the proposed multiobjective surrogate-based fitting procedure is carried out with a hybrid formulation of the NSGA-IIa evolutionary algorithm and the response surface methodology in Matlab. The outcome of the analysis shows the ability of the optimization framework to efficiently reduce the total computational load of the problem. Furthermore, the assumed viscosity model seems able to resolve the temporal pressure distribution and the advancing flow front accurately, which cannot be said of the spatial pressure distribution. Hence, it is recommended to improve the proposed CFD model in order to better capture the true behaviour of the mould flow.

  19. An Estimation of QoS for Classified Based Approach and Nonclassified Based Approach of Wireless Agriculture Monitoring Network Using a Network Model

    Directory of Open Access Journals (Sweden)

    Ismail Ahmedy

    2017-01-01

    Full Text Available A Wireless Sensor Network (WSN) can facilitate the process of monitoring crops through an agriculture monitoring network. However, it is challenging to implement an agriculture monitoring network over a large, distributed area. Typically, a large and dense multihop network is used to establish communication between source and destination. Such a network continuously monitors the crops without sensitivity classification, which can lead to message collisions and packet drops. Retransmission of dropped messages increases energy consumption and delay. Therefore, to ensure a high quality of service (QoS), we propose an agriculture monitoring network that monitors the crops based on their sensitivity conditions, wherein crops with higher sensitivity are monitored constantly while less sensitive crops are monitored occasionally. This approach selects a set of nodes rather than utilizing all the nodes in the network, which reduces the power consumption of each node as well as the network delay. The QoS of the proposed classified-based approach is compared with the nonclassified approach in two scenarios: the backoff periods are varied in the first scenario, while the number of nodes is varied in the second. The simulation results demonstrate that the proposed approach outperforms the nonclassified approach in the different test scenarios.

  20. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2017-04-01

    The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. This approach is based on the assumption that training on similar types of conditions may provide better forecasts than the sequential training used in conventional MME approaches. In this approach, the training dataset has been selected by matching the present-day conditions to the archived dataset; the days with the most similar conditions were identified and used for training the model. The coefficients thus generated were used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. European Centre for Medium-Range Weather Forecasts (ECMWF), United Kingdom Meteorological Office (UKMO), National Centre for Environment Prediction (NCEP) and China Meteorological Administration (CMA), have been used for developing the SMME forecasts. The forecasts of 1-5, 6-10 and 11-15 days were generated using the newly developed approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all the pentads, viz. 1-5, 6-10 and 11-15 days.
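    The core of the similarity criterion can be sketched in a few lines. The distance measure, the neighbourhood size k and the least-squares weighting below are illustrative assumptions; the operational scheme works per pentad and lead time.

```python
import numpy as np

def smme_weights(current, archive_fcst, archive_obs, k=20):
    """Similarity-based MME sketch: pick the k archived days whose model
    forecasts look most like today's, then fit ensemble weights on them.

    current:       (n_models,) today's forecasts at a grid point
    archive_fcst:  (n_days, n_models) archived forecasts
    archive_obs:   (n_days,) verifying observed rainfall
    """
    d = np.linalg.norm(archive_fcst - current, axis=1)  # similarity criterion
    sel = np.argsort(d)[:k]                             # the most similar days
    # least-squares multi-model weights trained on similar days only
    w, *_ = np.linalg.lstsq(archive_fcst[sel], archive_obs[sel], rcond=None)
    return w

# usage: prediction = smme_weights(today, F, O) @ today
```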

  2. An Integrated Model for Simulating Regional Water Resources Based on Total Evapotranspiration Control Approach

    Directory of Open Access Journals (Sweden)

    Jianhua Wang

    2014-01-01

    Full Text Available Total evapotranspiration and water consumption (ET) control is considered an efficient method for water management. In this study, we developed a water allocation and simulation (WAS) model, which can simulate the water cycle and output different ET values for natural and artificial water use, such as crop evapotranspiration, grass evapotranspiration, forest evapotranspiration, living water consumption, and industry water consumption. In the calibration and validation periods, a “piece-by-piece” approach was used to evaluate the model from runoff to ET data, including remote sensing ET data and regional measured ET data, which differ from the data used in the traditional hydrological method. We applied the model to Tianjin City, China. The Nash-Sutcliffe efficiency (Ens) of the runoff simulation was 0.82, and its regression coefficient R2 was 0.92. The Nash-Sutcliffe efficiency (Ens) of the regional total ET simulation was 0.93, and its regression coefficient R2 was 0.98. The results also demonstrate that the ET of irrigated land is the dominant component, accounting for 53% of the total ET, and it is therefore a priority in ET control for water management.
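    The Nash-Sutcliffe efficiency quoted above (0.82 for runoff, 0.93 for total ET) is the standard hydrological skill score; for reference, a minimal implementation:

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    Ens = 1 means a perfect fit; Ens <= 0 means no better than the obs mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```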

  3. Model-based approach for cyber-physical attack detection in water distribution systems.

    Science.gov (United States)

    Housh, Mashor; Ohar, Ziv

    2018-03-17

    Modern Water Distribution Systems (WDSs) are often controlled by Supervisory Control and Data Acquisition (SCADA) systems and Programmable Logic Controllers (PLCs) which manage their operation and maintain a reliable water supply. As such, and with the cyber layer becoming a central component of WDS operations, these systems are at greater risk of being subjected to cyberattacks. This paper offers a model-based methodology that combines a detailed hydraulic understanding of WDSs with an anomaly detection algorithm for the identification of complex cyberattacks that cannot be fully identified by hydraulically based rules alone. The results show that the proposed algorithm is capable of achieving the best-known performance when tested on the data published in the BATtle of the Attack Detection ALgorithms (BATADAL) competition (http://www.batadal.net). Copyright © 2018. Published by Elsevier Ltd.

  4. A novel model-based approach for dose determination of glycopyrronium bromide in COPD

    Directory of Open Access Journals (Sweden)

    Arievich Helen

    2012-12-01

    Full Text Available Abstract Background Glycopyrronium bromide (NVA237) is an inhaled long-acting muscarinic antagonist in development for the treatment of COPD. This study compared the efficacy and safety of once-daily (OD) and twice-daily (BID) glycopyrronium bromide regimens, using a novel model-based approach, in patients with moderate-to-severe COPD. Methods Double-blind, randomized, dose-finding trial with an eight-treatment, two-period, balanced incomplete block design. Patients (smoking history ≥10 pack-years, post-bronchodilator FEV1 ≥30% and <80% predicted, FEV1/FVC <0.7) were randomized; the primary endpoint was trough FEV1 at Day 28. Results 385 patients (mean age 61.2 years; mean post-bronchodilator FEV1 53% predicted) were randomized; 88.6% completed. All OD and BID dosing regimens produced dose-dependent bronchodilation; at Day 28, increases in mean trough FEV1 versus placebo were statistically significant for all regimens, ranging from 51 mL (glycopyrronium bromide 12.5 μg OD) to 160 mL (glycopyrronium bromide 50 μg BID). Pharmacodynamic steady state was reached by Day 7. There was a small separation (≤37 mL) between the BID and OD dose–response curves for mean trough FEV1 at steady state, in favour of BID dosing. Over 24 hours, the separation between OD and BID regimens was even smaller (FEV1 AUC0-24h maximum difference for equivalent daily dose regimens: 8 mL). Dose–response results for FEV1 at 12 hours, FEV1 AUC0-12h and FEV1 AUC0-4h at steady state showed that OD regimens provided greater improvement over placebo than BID regimens for total daily doses of 25 μg, 50 μg and 100 μg, while the reverse was true for OD versus BID regimens from 12–24 hours. The 12.5 μg BID dose produced a marginally higher improvement in trough FEV1 versus placebo than 50 μg OD; however, the response at 12 hours over placebo was suboptimal (74 mL). Glycopyrronium bromide was safe and well tolerated at all doses. Conclusions Glycopyrronium bromide 50 μg OD provides significant bronchodilation over a 24 hour period

  5. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for the detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least-Mahalanobis-distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shaped region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smoothness constraint is imposed on the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  6. Recognition of Emotions in Mexican Spanish Speech: An Approach Based on Acoustic Modelling of Emotion-Specific Vowels

    Directory of Open Access Journals (Sweden)

    Santiago-Omar Caballero-Morales

    2013-01-01

    Full Text Available An approach for the recognition of emotions in speech is presented. The target language is Mexican Spanish, and for this purpose a speech database was created. The approach consists of the phoneme-level acoustic modelling of emotion-specific vowels. For this, a standard phoneme-based Automatic Speech Recognition (ASR) system was built with Hidden Markov Models (HMMs), where different phoneme HMMs were built for the consonants and for the emotion-specific vowels associated with four emotional states (anger, happiness, neutral, sadness). Then, estimation of the emotional state of a spoken sentence is performed by counting the number of emotion-specific vowels found in the ASR’s output for the sentence. With this approach, an accuracy of 87–100% was achieved for the recognition of the emotional state of Mexican Spanish speech.
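    The decision stage, counting emotion-specific vowels in the decoded phoneme string, is simple enough to sketch directly. The "vowel_emotion" tagging convention below is an assumption made for illustration; the paper trains separate vowel HMMs per emotional state.

```python
from collections import Counter

EMOTIONS = ("anger", "happiness", "neutral", "sadness")

def estimate_emotion(asr_phonemes):
    """Count emotion-tagged vowels in the ASR output and return the majority."""
    votes = Counter(ph.split("_")[1] for ph in asr_phonemes
                    if "_" in ph and ph.split("_")[1] in EMOTIONS)
    return votes.most_common(1)[0][0] if votes else "neutral"

print(estimate_emotion(["k", "a_anger", "s", "a_anger", "o_neutral"]))  # anger
```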

  7. Bayesian networks and agent-based modeling approach for urban land-use and population density change: a BNAS model

    Science.gov (United States)

    Kocabas, Verda; Dragicevic, Suzana

    2013-10-01

    Land-use change models grounded in complexity theory such as agent-based models (ABMs) are increasingly being used to examine evolving urban systems. The objective of this study is to develop a spatial model that simulates land-use change under the influence of human land-use choice behavior. This is achieved by integrating the key physical and social drivers of land-use change using Bayesian networks (BNs) coupled with agent-based modeling. The BNAS model (integrated Bayesian network-based agent system) presented in this study uses geographic information systems, ABMs, BNs, and influence diagram principles to model population change on an irregular spatial structure. The model is parameterized with historical data and then used to simulate 20 years of future population and land-use change for the City of Surrey, British Columbia, Canada. The simulation results identify feasible new urban areas for development around the main transportation corridors. The identified development areas and the projected population trajectories, together with the model's “what-if” scenario capabilities, can provide urban planners with insights for better and more informed land-use policy and decision-making processes.

  8. Modelling of human exposure to air pollution in the urban environment: a GPS-based approach.

    Science.gov (United States)

    Dias, Daniela; Tchepel, Oxana

    2014-03-01

    The main objective of this work was the development of a new modelling tool for the quantification of human exposure to traffic-related air pollution within distinct microenvironments, using a novel approach for trajectory analysis of the individuals. For this purpose, mobile phones with Global Positioning System technology were used to collect daily trajectories of the individuals with high temporal resolution, and a trajectory data mining and geo-spatial analysis algorithm was developed and implemented within a Geographical Information System to obtain time-activity patterns. These data were combined with air pollutant concentrations estimated for several microenvironments. In addition to outdoor concentrations, pollutant concentrations in distinct indoor microenvironments are characterised using a probabilistic approach. An example of the application for PM2.5 is presented and discussed. The results obtained for daily average individual exposure correspond to a mean value of 10.6 μg m-3, with a 5th-95th percentile range of 6.0-16.4 μg m-3. Analysis of the results shows that the use of point air quality measurements for exposure assessment does not explain the intra- and inter-individual variability of exposure levels. The methodology developed and implemented in this work provides the time-sequence of exposure events, thus making it possible to associate exposure with individual activities, and delivers the main statistics on an individual's air pollution exposure with high spatio-temporal resolution.
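    Once the trajectories have been segmented into microenvironments, the exposure computation itself is a time-weighted average over the visited microenvironments; a minimal sketch (the hours and concentrations below are illustrative values, not the paper's data):

```python
def daily_exposure(time_activity, concentrations):
    """Time-weighted average exposure over microenvironments.

    time_activity:  {microenvironment: hours spent}, from GPS trajectory mining
    concentrations: {microenvironment: PM2.5 concentration, ug/m3}
    """
    total = sum(time_activity.values())
    return sum(time_activity[me] * concentrations[me]
               for me in time_activity) / total

# a hypothetical day
print(daily_exposure({"home": 14, "office": 8, "traffic": 2},
                     {"home": 9.0, "office": 7.5, "traffic": 35.0}))
```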

  9. Nonlinear modeling of ferroelectric-ferromagnetic composites based on condensed and finite element approaches (Presentation Video)

    Science.gov (United States)

    Ricoeur, Andreas; Lange, Stephan; Avakian, Artjom

    2015-04-01

    Magnetoelectric (ME) coupling is an inherent property of only a few crystals, exhibiting very low coupling coefficients at low temperatures. On the other hand, these materials are desirable due to many promising applications, e.g. as efficient data storage devices or medical or geophysical sensors. Efficient coupling of magnetic and electric fields in materials can only be achieved in composite structures. Here, ferromagnetic (FM) and ferroelectric (FE) phases are combined, e.g. by including FM particles in an FE matrix or embedding fibers of one phase into a matrix of the other. The ME coupling is then accomplished indirectly via strain fields, exploiting magnetostrictive and piezoelectric effects. This requires a poling of the composite, where the structure is exposed to both large magnetic and electric fields. The efficiency of ME coupling strongly depends on the poling process. Besides the alignment of local polarization and magnetization, poling is accompanied by cracking, which is also decisive for the coupling properties. Nonlinear ferroelectric and ferromagnetic constitutive equations have been developed and implemented within the framework of a multifield, two-scale FE approach. The models are microphysically motivated, accounting for domain and Bloch wall motions. A second, so-called condensed approach is presented which does not require the implementation of a spatial discretisation scheme while still considering grain interactions and residual stresses. A micromechanically motivated continuum damage model is established to simulate degradation processes. The goal of the simulation tools is to predict the different constitutive behaviors, ME coupling properties and lifetime of smart magnetoelectric devices.

  10. Multiscale approach for the construction of equilibrated all-atom models of a poly(ethylene glycol)-based hydrogel.

    Science.gov (United States)

    Li, Xianfeng; Murthy, N Sanjeeva; Becker, Matthew L; Latour, Robert A

    2016-06-24

    A multiscale modeling approach is presented for the efficient construction of an equilibrated all-atom model of a cross-linked poly(ethylene glycol) (PEG)-based hydrogel using the all-atom polymer consistent force field (PCFF). The final equilibrated all-atom model was built with a systematic simulation toolset consisting of three consecutive parts: (1) building a global cross-linked PEG-chain network at experimentally determined cross-link density using an on-lattice Monte Carlo method based on the bond fluctuation model, (2) recovering the local molecular structure of the network by transitioning from the lattice model to an off-lattice coarse-grained (CG) model parameterized from PCFF, followed by equilibration using high performance molecular dynamics methods, and (3) recovering the atomistic structure of the network by reverse mapping from the equilibrated CG structure, hydrating the structure with explicitly represented water, followed by final equilibration using PCFF parameterization. The developed three-stage modeling approach has application to a wide range of other complex macromolecular hydrogel systems, including the integration of peptide, protein, and/or drug molecules as side-chains within the hydrogel network for the incorporation of bioactivity for tissue engineering, regenerative medicine, and drug delivery applications.

  12. Inference Based on the Best-Fitting Model can Contribute to the Replication Crisis: Assessing Model Selection Uncertainty Using a Bootstrap Approach

    Science.gov (United States)

    Lubke, Gitta H.; Campbell, Ian

    2016-01-01

    Inference and conclusions drawn from model fitting analyses are commonly based on a single “best-fitting” model. If model selection and inference are carried out using the same data, model selection uncertainty is ignored. We illustrate the Type I error inflation that can result from using the same data for model selection and inference, and we then propose a simple bootstrap-based approach to quantify model selection uncertainty in terms of model selection rates. A selection rate can be interpreted as an estimate of the replication probability of a fitted model. The benefits of bootstrapping model selection uncertainty are demonstrated in a growth mixture analysis of data from the National Longitudinal Study of Youth and a two-group measurement invariance analysis of the Holzinger-Swineford data. PMID:28663687
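    A minimal version of the proposed bootstrap is easy to write down: refit all candidate models on each resample and record how often each one wins. The sketch below uses AIC over polynomial regressions as stand-in candidates (an assumption; the paper's applications are growth mixture and measurement invariance models).

```python
import numpy as np

def selection_rates(y, X, candidate_fits, n_boot=500, seed=0):
    """Bootstrap model-selection rates: the rate at which each candidate wins
    over resamples estimates its replication probability.

    candidate_fits: list of functions mapping (y, X) -> an AIC-like criterion.
    """
    rng = np.random.default_rng(seed)
    wins = np.zeros(len(candidate_fits))
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample cases with replacement
        crit = [fit(y[idx], X[idx]) for fit in candidate_fits]
        wins[np.argmin(crit)] += 1             # smallest criterion wins
    return wins / n_boot

def aic_poly(degree):
    """AIC of a polynomial regression of a given degree (toy candidate models)."""
    def fit(y, X):
        coefs = np.polyfit(X, y, degree)
        rss = np.sum((y - np.polyval(coefs, X)) ** 2)
        n = len(y)
        return n * np.log(rss / n) + 2 * (degree + 1)
    return fit

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, 100)
y = 1.0 + 0.5 * X + rng.normal(0, 1, 100)      # truth: linear
print(selection_rates(y, X, [aic_poly(d) for d in (1, 2, 3)]))
```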

  13. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelisation of rainfall-runoff model forecasts

    Science.gov (United States)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Electricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of the uncertainties of meteorological and hydrological forecasts and improve the human expertise of hydrological forecasts, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where considerable human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve the reliability of ensemble forecasts (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (the empirical approach) is based on a statistical modelling of the empirical error of perfect forecasts, using streamflow sub-samples by quantile class and lead time. The second method (the dynamical approach) is based on streamflow sub-samples by quantile class, streamflow variation and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure good post-processing of hydrological ensembles, allowing a clear improvement in the reliability, skill and sharpness of ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological

  14. New approaches to infection prevention and control: implementing a risk-based model regionally.

    Science.gov (United States)

    Wale, Martin; Kibsey, Pamela; Young, Lisa; Dobbyn, Beverly; Archer, Jana

    2016-06-01

    Infectious disease outbreaks result in substantial inconvenience to patients and disruption of clinical activity. Between 1 April 2008 and 31 March 2009, the Vancouver Island Health Authority (Island Health) declared 16 outbreaks of Vancomycin-Resistant Enterococci and Clostridium difficile in acute care facilities. As a result, infection prevention and control became one of Island Health's highest priorities. Quality improvement methodology, which promotes a culture of co-production between front-line staff, physicians and Infection Control Practitioners, was used to develop and test a bundle of changes in practices. A series of rapid Plan-Do-Study-Act cycles, specific to decreasing hospital-acquired infections, was undertaken by a community hospital, selected for its size, clinical specialty representation, and enthusiasm amongst staff and physicians for innovation and change. Positive results were incorporated into practice at the test site, and then introduced throughout the rest of the Health Authority. The changes implemented as a result of this study have enabled better control of antibiotic-resistant organisms and have minimized disruption to routine activity, as well as saving an estimated $6.5 million per annum. When outbreaks do occur, they are now controlled much more promptly, even in existing older facilities. Through this process, we have changed our approach to Infection Prevention and Control (IPAC) from a rules-based one to one that is risk-based, focusing attention on identifying and managing high-risk situations. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.

  15. An agent-based approach to modelling the effects of extreme events on global food prices

    Science.gov (United States)

    Schewe, Jacob; Otto, Christian; Frieler, Katja

    2015-04-01

    Extreme climate events such as droughts or heat waves affect agricultural production in major food-producing regions and can therefore influence the price of staple foods on the world market. There is evidence that recent dramatic spikes in grain prices were at least partly triggered by actual and/or expected supply shortages. The reaction of the market to supply changes is, however, highly nonlinear and depends on complex and interlinked processes such as warehousing, speculation, and export restrictions. Here we present for the first time an agent-based modelling framework that accounts, in simplified terms, for these processes and allows us to estimate the reaction of world food prices to supply shocks on a short (monthly) timescale. We test the basic model using observed historical supply, demand, and price data of wheat as a major food grain. Further, we illustrate how the model can be used in conjunction with biophysical crop models to assess the effect of future changes in extreme event regimes on the volatility of food prices. In particular, the explicit representation of storage dynamics makes it possible to investigate the potentially nonlinear interaction between simultaneous extreme events in different food-producing regions, or between several consecutive events in the same region, both of which may occur more frequently under future global warming.

  16. Methodology of synchronization among strategy and operation. A standards-based modeling approach

    Directory of Open Access Journals (Sweden)

    VICTOR EDWIN COLLAZOS

    2017-05-01

    Full Text Available Enterprise Architecture (EA) has gained importance in recent years, mainly for its concept of “alignment” between the strategic and operational levels of organizations. Such alignment occurs when Information Technology (IT) is applied correctly and in a timely manner, working in synergy and harmony with the strategy and the operation to mutually achieve their goals and satisfy organizational needs. Both the strategic and operational levels have standards that help model the elements necessary to obtain the desired results. In this sense, BMM and BPMN were selected because both have the support of the OMG and are fairly well known for modelling the strategic and operational levels, respectively. In addition, i* goal modeling can be used to reduce the gap between these two standards. This proposal may help both the high-level design of the information system and the appropriate identification of the business processes that will support it. This paper presents a methodology for aligning strategy and the operation based on standards and heuristics. We classify the elements of the models and, for some specific cases, extend the heuristics that associate them. This allows us to propose a methodology which uses the above-mentioned standards and combines mappings, transformations and actions to be considered in the alignment process.

  17. Efficiency evaluation of a small number of DMUs: an approach based on Li and Reeves's model

    Directory of Open Access Journals (Sweden)

    João Carlos Correia Baptista Soares de Mello

    2009-04-01

    Full Text Available This paper deals with the evaluation of Decision Making Units (DMUs) when their number is not large enough to allow the use of classic Data Envelopment Analysis (DEA) models. To do so, we take advantage of the TRIMAP software when used to study the Li and Reeves MultiCriteria DEA (MCDEA) model. We introduce an evaluation measure obtained by integrating one of the objective functions along the weight space. This measure allows a joint evaluation of the DMUs. The approach is exemplified with numerical data from some Brazilian electricity distribution companies.
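    For background, the classic single-objective CCR-DEA efficiency that MCDEA generalizes can be computed with one linear program per DMU. The sketch below implements that baseline only, not Li and Reeves's multicriteria formulation or TRIMAP's weight-space integration, and the toy data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j):
    """CCR-DEA efficiency of DMU j (multiplier form):
    max u.y_j  s.t.  v.x_j = 1,  u.y_k - v.x_k <= 0 for all DMUs k,  u, v >= 0.

    X: (n_dmus, n_inputs) input matrix, Y: (n_dmus, n_outputs) output matrix.
    """
    n, m = X.shape
    _, s = Y.shape
    # decision vector z = [v (m input weights), u (s output weights)]
    c = np.concatenate([np.zeros(m), -Y[j]])          # maximize u.y_j
    A_ub = np.hstack([-X, Y])                         # u.y_k - v.x_k <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([X[j], np.zeros(s)])[None]  # normalization v.x_j = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s))
    return -res.fun                                   # efficiency in (0, 1]

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0]])   # toy inputs
Y = np.array([[1.0], [1.0], [1.0]])                  # single unit output
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])  # [1.0, 1.0, 0.833]
```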

  18. A practical model-based statistical approach for generating functional test cases: application in the automotive industry

    OpenAIRE

    Awédikian, Roy; Yannou, Bernard

    2012-01-01

    With the growing complexity of industrial software applications, companies are looking for efficient and practical methods to validate their software. This paper develops a model-based statistical testing approach that automatically generates online and offline test cases for embedded software. It discusses an integrated framework that combines solutions for three major software testing research questions: (i) how to select test inputs; (ii) how to predict the expected...

  19. Performance assessment of geospatial simulation models of land-use change--a landscape metric-based approach.

    Science.gov (United States)

    Sakieh, Yousef; Salmanmahiny, Abdolrassoul

    2016-03-01

    Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics was employed, including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index. The model takes advantage of three decision rules: neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while the class area, largest patch index, and splitting indices demonstrated insignificant differences between the spatial patterns of the ground-truth and simulated layers, there was considerable inconsistency between the simulation results and the real dataset in terms of the remaining metrics. Specifically, the simulation outputs were simplistic and the model tended to underestimate the number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics exploit the full potential of the observed and simulated layers in a performance evaluation procedure, provide a basis for more robust interpretation of the calibration process, and deepen the modeler's insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.

  20. Modeling and optimization of ultrasonic metal welding on dissimilar sheets using fuzzy based genetic algorithm approach

    Directory of Open Access Journals (Sweden)

    Mantra Prasad Satpathy

    2015-12-01

    Full Text Available Ultrasonic welding has been used in industry for over twenty years, serving manufacturing sectors such as aviation, medicine and microelectronics, owing to the various hurdles faced by conventional fusion welding processes. It welds materials in a very short time (less than one second), so it can be used for mass production. However, the problems industries often face with this process are poor weld quality and low joint strength. In fact, the quality and success of the welding depend upon its control parameters. In the present study, the control parameters vibration amplitude, weld pressure and weld time are considered for the welding of the dissimilar metals aluminum (AA1100) and brass (UNS C27000), as sheets of 0.3 mm thickness. Experiments are conducted according to a full factorial design with four replications to obtain the responses tensile shear stress, T-peel stress and weld area. All these data are utilized to develop a non-linear second-order regression model between the responses and the predictors. As quality is an important issue in these manufacturing industries, the optimal combinations of the process parameters are found using a fuzzy logic approach and a genetic algorithm (GA) approach. During the experiments, the temperature of the weld zone was also measured to study its effect on the different quality characteristics. From the confirmatory test, it has been observed that the fuzzy logic approach yields better results than the GA. A variety of weld quality levels, such as “under weld”, “good weld” and “over weld”, have also been defined by performing microstructural analysis.

  1. A Gaussian mixture copula model based localized Gaussian process regression approach for long-term wind speed prediction

    International Nuclear Information System (INIS)

    Yu, Jie; Chen, Kuilin; Mori, Junichi; Rashid, Mudassir M.

    2013-01-01

    Optimizing wind power generation and controlling the operation of wind turbines to efficiently harness renewable wind energy is a challenging task due to the intermittency and unpredictable nature of wind speed, which has a significant influence on wind power production. A new approach for long-term wind speed forecasting is developed in this study by integrating GMCM (Gaussian mixture copula model) and localized GPR (Gaussian process regression). The time series of wind speed is first classified into multiple non-Gaussian components through the Gaussian mixture copula model, and then a Bayesian inference strategy is employed to incorporate the various non-Gaussian components using the posterior probabilities. Further, the localized Gaussian process regression models corresponding to the different non-Gaussian components are built to characterize the stochastic uncertainty and non-stationary seasonality of the wind speed data. The various localized GPR models are integrated through the posterior probabilities as the weightings, so that a global predictive model is developed for the prediction of wind speed. The proposed GMCM–GPR approach is demonstrated using wind speed data from various wind farm locations and compared against the GMCM-based ARIMA (auto-regressive integrated moving average) and SVR (support vector regression) methods. In contrast to the GMCM–ARIMA and GMCM–SVR methods, the proposed GMCM–GPR model is able to characterize well the multi-seasonality and uncertainty of wind speed series for accurate long-term prediction. - Highlights: • A novel predictive modeling method is proposed for long-term wind speed forecasting. • Gaussian mixture copula model is estimated to characterize the multi-seasonality. • Localized Gaussian process regression models can deal with the random uncertainty. • Multiple GPR models are integrated through Bayesian inference strategy. • The proposed approach shows higher prediction accuracy and reliability
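    The weighting-by-posterior-probability idea can be sketched with standard tools. Below, a plain Gaussian mixture stands in for the Gaussian mixture copula model (a simplifying assumption), the data are synthetic, and one localized Gaussian process regression is trained per mixture component.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 6, 300)[:, None]                 # time (or season) feature
y = np.sin(t[:, 0]) + 0.3 * np.sin(5 * t[:, 0]) + rng.normal(0, 0.1, 300)

# classify the series into components (copula model replaced by a plain GMM)
features = np.column_stack([t[:, 0], y])
gmm = GaussianMixture(n_components=3, random_state=0).fit(features)
labels = gmm.predict_proba(features).argmax(axis=1)

# one localized GPR per non-Gaussian component
gprs = []
for k in range(3):
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
    gpr.fit(t[labels == k], y[labels == k])
    gprs.append(gpr)

def predict(t_new, y_last):
    """Global prediction: localized GPR outputs blended by posterior weights."""
    w = gmm.predict_proba(np.array([[t_new, y_last]]))[0]
    return sum(w[k] * gprs[k].predict(np.array([[t_new]]))[0] for k in range(3))

print(predict(3.0, y[149]))
```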

  2. Modeling and optimizing electrodischarge machine process (EDM) with an approach based on genetic algorithm

    Science.gov (United States)

    Zabbah, Iman

    2012-01-01

    Electro-discharge machining (EDM) is the most common non-traditional production method for forming metals and non-oxide ceramics. Increasing the surface smoothness, increasing the removal of filings, and decreasing the proportional tool erosion play an important role in this machining, and are directly related to the choice of the input parameters. The complicated and non-linear nature of EDM has made the process impossible to model with usual classical methods. So far, several intelligence-based methods have been used to optimize this process; chief among them are artificial neural networks, which model the process as a black box. The difficulty with this kind of machining arises when a workpiece is composed of a collection of carbon-based materials such as silicon carbide. In this article, besides using the new mono-pulse technique of EDM, we design a fuzzy neural network and model the process. A genetic algorithm is then used to find the optimal inputs of the machine. In our research, the workpiece is a non-oxide ceramic, silicon carbide, which makes the control process more difficult. Finally, the results are compared with those of previous methods.

  3. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    Directory of Open Access Journals (Sweden)

    Jin Dai

    2014-01-01

    Full Text Available The similarity between objects is a core research area of data mining. In order to reduce the interference of the uncertainty of natural language, a similarity measurement between normal cloud models is adopted for text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed. It can efficiently accomplish the conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative text concepts extracted from the same category are jumped up into a whole-category concept. According to the cloud similarity between a test text and each category concept, the test text is assigned to the most similar category. A comparison among different text classifiers over different feature selection sets fully demonstrates that not only does CCJU-TC have a strong ability to adapt to different text features, but its classification performance is also better than that of traditional classifiers.

  4. A Petri-Nets Based Unified Modeling Approach for Zachman Framework Cells

    Science.gov (United States)

    Ostadzadeh, S. Shervin; Nekoui, Mohammad Ali

    With a trend toward becoming more and more information based, enterprises constantly attempt to surpass each other's accomplishments by improving their information activities. In this respect, Enterprise Architecture (EA) has proven to serve as a fundamental concept to accomplish this goal. Enterprise architecture clearly provides a thorough outline of the whole enterprise's applications and systems with their relationships to enterprise business goals. To establish such an outline, a logical framework needs to be laid upon the entire information system, called an Enterprise Architecture Framework (EAF). Among the various proposed EAFs, the Zachman Framework (ZF) has been widely accepted as a standard scheme for identifying and organizing the descriptive representations that have critical roles in enterprise management and development. One of the problems faced in using ZF is the lack of formal and verifiable models for its cells. In this paper, we propose a formal language based on Petri nets in order to obtain verifiable models for all cells in ZF. The presented method helps developers to validate and verify completely integrated business and IT systems, which results in improved effectiveness and efficiency of the enterprise itself.

  5. Modeling near-barrier collisions of heavy ions based on a Langevin-type approach

    Science.gov (United States)

    Karpov, A. V.; Saiko, V. V.

    2017-08-01

    Background: Multinucleon transfer in low-energy nucleus-nucleus collisions is proposed as a method for the production of yet-unknown neutron-rich nuclei hardly reachable by other methods. Purpose: Modeling the dynamics of nuclear reactions induced by heavy ions in the full complexity of their competing reaction channels remains a challenging task. This work is aimed at the development of such a model and its application to the analysis of multinucleon transfer in deep inelastic collisions of heavy ions leading, in particular, to the formation of neutron-rich isotopes in the vicinity of the N = 126 shell closure. Method: A multidimensional dynamical model of nucleus-nucleus collisions based on the Langevin equations has been proposed. It is combined with a statistical model for the simulation of the de-excitation of primary reaction fragments. The model provides a continuous description of the system's evolution, starting from the well-separated target and projectile in the entrance channel of the reaction up to the formation of the final reaction products. Results: A rather complete set of experimental data available for the reactions 136Xe+198Pt, 208Pb, 209Bi was analyzed within the developed model, and the model parameters have been determined. The calculated energy, mass, charge, and angular distributions of reaction products, their various correlations, as well as the cross sections for the production of specific isotopes agree well with the data. On this basis, optimal experimental conditions for synthesizing neutron-rich nuclei in the vicinity of the N = 126 shell were formulated and the corresponding cross sections were predicted. Conclusions: The production yields of neutron-rich nuclei with N = 126 depend weakly on the incident energy. At the same time, the corresponding angular distributions are strongly energy dependent: they are peaked at grazing angles for larger energies and extend up to forward angles at low near-barrier collision energies. The corresponding cross sections exceed 100 nb for
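    At its core the approach integrates Langevin equations for collective coordinates. A one-dimensional Euler-Maruyama sketch conveys the idea; the double-well potential and constant friction below are illustrative assumptions, whereas the reaction model evolves several shape coordinates (e.g. elongation, mass asymmetry) with coordinate-dependent inertia and friction tensors.

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_trajectory(n_steps=10000, dt=0.01, gamma=1.0, T=1.0, mass=1.0):
    """Euler-Maruyama integration of a 1-D Langevin equation
    dp = (F(q) - gamma*p/m) dt + sqrt(2*gamma*T) dW  (k_B = 1)."""
    q, p = 1.5, 0.0
    traj = np.empty(n_steps)
    for i in range(n_steps):
        force = -4 * q * (q**2 - 1)                      # -dV/dq, double-well V
        noise = np.sqrt(2 * gamma * T * dt) * rng.normal()
        p += (force - gamma * p / mass) * dt + noise     # fluctuation-dissipation
        q += p / mass * dt
        traj[i] = q
    return traj

traj = langevin_trajectory()
print(traj.mean(), traj.std())   # thermal hopping between the two wells
```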

  6. Evaluating potential non-point source loading of PAHs from contaminated soils: a fugacity-based modeling approach.

    Science.gov (United States)

    Luo, Xiaolin; Zheng, Yi; Lin, Zhongrong; Wu, Bin; Han, Feng; Tian, Yong; Zhang, Wei; Wang, Xuejun

    2015-01-01

    Soils contaminated by Polycyclic Aromatic Hydrocarbons (PAHs) are subject to significant non-point source (NPS) pollution during rainfall events. Recent studies revealed that the classic enrichment ratio (ER) approach may not be applicable to PAHs. This study developed a model to estimate the ER of PAHs that innovatively applies the fugacity concept. The ER model has been validated against experimental data, which suggested that the transport of PAHs depends not only on their physicochemical properties but also on the sediment composition and how that composition evolves during the event. The modeling uncertainty was systematically examined and found to be highly compound-dependent. Based on the ER model, a strategy was proposed to practically evaluate the potential NPS loading of PAHs in watersheds with heterogeneous soils. The study results have important implications for modeling and managing the NPS pollution of PAHs (or other similar chemicals) at a watershed scale. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Sensory neural pathways revisited to unravel the temporal dynamics of the Simon effect: A model-based cognitive neuroscience approach.

    Science.gov (United States)

    Salzer, Yael; de Hollander, Gilles; Forstmann, Birte U

    2017-06-01

    The Simon task is one of the most prominent interference tasks and has been extensively studied in experimental psychology and cognitive neuroscience. Despite years of research, the underlying mechanism driving the phenomenon and its temporal dynamics are still disputed. Within the framework of this review, we adopt a model-based cognitive neuroscience approach. We first go over key findings in the literature on the Simon task and discuss competing qualitative cognitive theories and the difficulty of testing them empirically. We then introduce sequential sampling models, a particular class of mathematical cognitive process models. Finally, we argue that the brain architecture accountable for the processing of spatial ('where') and non-spatial ('what') information could constrain these models. We conclude that there is a clear need to bridge neural and behavioral measures, and that mathematical cognitive models may facilitate the construction of this bridge and work towards revealing the underlying mechanisms of the Simon effect. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Consequence Based Design. An approach for integrating computational collaborative models (Integrated Dynamic Models) in the building design phase

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer

    to visualize any relevant building performance. AHP – the use of Analytic Hierarchy Process to clarify differences between solutions on both qualitative and quantitative evaluations. Termite – the implementation of the BPS tool solver Be10 as a plugin for Grasshopper that enables live feedback of entire building energy consumption. HQSS – a quasi-steady-state BPS tool solver dedicated for fast thermal analyses in Grasshopper. Moth – an agent-based optimization algorithm implemented in Grasshopper that attempts to combine qualitative and quantitative evaluations during optimization. Sentient models – a method to listen to user behaviour in Grasshopper and decrease the space of solutions. Surrogate models – a test of machine learning methods to speed up any BPS feedback through surrogate models with Grasshopper. This thesis demonstrates how integrated dynamic models may include building performance...

  9. Assessing multiscale complexity of short heart rate variability series through a model-based linear approach

    Science.gov (United States)

    Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe

    2017-09-01

    We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded in the identification of the coefficients of an autoregressive model, the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and the assessment of its distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to the short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of the cardiac autonomic control, namely the low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of the cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded in information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrement of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over short time series, MSC allows a more insightful association between cardiac control complexity and the physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
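
    The pole-based index is straightforward to prototype. The sketch below is one plausible reading of the method, not the authors' code: it fits an autoregressive model, keeps the poles whose frequencies fall in the assigned band, and scores irregularity by their mean distance from the unit circle. The function name, AR order and scaling are illustrative choices.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def band_complexity(x, order, f_lo, f_hi, fs):
    """Mean distance from the unit circle of the AR poles whose
    frequencies lie in [f_lo, f_hi]; poles near the circle indicate
    regular, narrow-band oscillation, so larger values mean more
    irregularity in that band."""
    res = AutoReg(x, lags=order).fit()
    a = res.params[1:]                  # lag coefficients (index 0 = const)
    poles = np.roots(np.r_[1.0, -a])    # roots of z^p - a1*z^(p-1) - ... - ap
    f = np.abs(np.angle(poles)) * fs / (2.0 * np.pi)
    in_band = poles[(f >= f_lo) & (f <= f_hi)]
    return np.nan if in_band.size == 0 else 1.0 - np.mean(np.abs(in_band))

# Example: heart-period-like series sampled at 1 Hz, LF band 0.04-0.15 Hz
x = np.sin(2 * np.pi * 0.1 * np.arange(300)) + 0.5 * np.random.randn(300)
print(band_complexity(x, order=10, f_lo=0.04, f_hi=0.15, fs=1.0))
```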

  10. A model for measuring research capacity using an intellectual capital-based approach in a colombian higher education institution

    Directory of Open Access Journals (Sweden)

    Jenny Marcela Sánchez-Torres

    2009-12-01

    Full Text Available This article’s main objective was to present a model for measuring research capacity, from an intellectual capital-based approach, for Colombian higher education institutions (Instituciones de Educación Superior – IES), which form part of the national science and technology system in the sense that around 90% of Colombian research groups belong to them. The model should lead to identifying IES capacity and competence and to strengthening these institutions’ management ability, with the aim of obtaining input that facilitates the design and formulation of science, technology and innovation policy. Likewise, it should contribute towards strengthening IES relationships within national and international public and private settings.

  11. Modelling organic aerosols over Europe: application and testing of a UNIFAC-based approach

    Science.gov (United States)

    Simpson, D.; Makar, P.; Vestreng, V.

    2003-04-01

    The formation of secondary organic aerosols (SOA) in ambient air depends on a number of factors, including: (1) emissions of primary organic carbon (OC), (2) emissions of precursor VOC (both biogenic and anthropogenic), (3) the formation of condensible compounds through atmospheric chemistry, and (4) the ensuing gas-particle partitioning of these compounds. Factors (3) and (4) are the least understood, although great progress has been made, at least in smog-chamber studies. This study addresses the relative importance of all of these factors for atmospheric conditions through the application of the EMEP MSC-W regional transport model over Europe. Previous modelling of SOA over Europe made use of the Lagrangian EMEP model (Andersson-Sköld and Simpson, 2000), which suffers from a low horizontal resolution (150x150 km2) and, more seriously, from a one-layer formulation. This earlier work also made the assumption that activity coefficients for SOA compounds were unity; an assumption which may sometimes be acceptable (e.g. Seinfeld et al., 2002) but which is not always adequate and requires investigation for ambient modelling conditions. This study reports on the results of a new and much more detailed set of calculations. Three major improvements have been implemented. Firstly, we have made use of the new EMEP Eulerian model (Simpson et al., 2002), which has a horizontal resolution of 50x50 km2 and 20 vertical layers. Secondly, emissions of primary OC are estimated based upon available PM2.5 inventories and a new evaluation of those VOC species which are potentially important in SOA formation (Makar et al., 2003). Thirdly, the UNIFAC group-contribution method (Sandler, 1999; Makar et al., 2003) is used to estimate the activity coefficients of the aerosol components and thus provide a more rigorous treatment of the gas-particle partitioning. References: Andersson-Sköld, Y., and Simpson, D., 2001, Secondary organic aerosol formation in Northern Europe: a model

  12. Linking Resource-Based Strategies to Customer-Focused Performance for Professional Services: A Structural Equation Modelling Approach

    Directory of Open Access Journals (Sweden)

    Ming-Lu Wu

    2013-12-01

    Full Text Available This paper links professional service firms’ resource-based strategies to their customer-focused performance for formulating service quality improvement priorities. The research applies the structural equation modelling approach to survey data from Hong Kong construction consultants to test some hypotheses. The study validates the various measures of firms’ resource-based strategies and customer-focused performance and bridges the gaps between firms’ organizational learning, core competences and customer-focused performance, mediated by their strategic flexibility. The research results have practical implications for professional service firms: deploy resources appropriately to first enhance different competences and then improve customer-focused performance using those competences.

  13. Robust Initialization of Active Shape Models for Lung Segmentation in CT Scans: A Feature-Based Atlas Approach

    Directory of Open Access Journals (Sweden)

    Gurman Gill

    2014-01-01

    Full Text Available Model-based segmentation methods have the advantage of incorporating a priori shape information into the segmentation process but suffer from the drawback that the model must be initialized sufficiently close to the target. We propose a novel approach for initializing an active shape model (ASM and apply it to 3D lung segmentation in CT scans. Our method constructs an atlas consisting of a set of representative lung features and an average lung shape. The ASM pose parameters are found by transforming the average lung shape based on an affine transform computed from matching features between the new image and representative lung features. Our evaluation on a diverse set of 190 images showed an average dice coefficient of 0.746 ± 0.068 for initialization and 0.974 ± 0.017 for subsequent segmentation, based on an independent reference standard. The mean absolute surface distance error was 0.948 ± 1.537 mm. The initialization as well as segmentation results showed a statistically significant improvement compared to four other approaches. The proposed initialization method can be generalized to other applications employing ASM-based segmentation.

  14. Impacts of radiation exposure on the experimental microbial ecosystem: a particle-based model simulation approach

    International Nuclear Information System (INIS)

    Doi, M.; Tanaka, N.; Fuma, S.; Kawabata, Z.

    2004-01-01

    A well-designed experimental model ecosystem can serve as a simple reference for the actual environment and its complex ecological systems. For ecological toxicity testing of radiation and other environmental toxicants, we investigated an aquatic microbial ecosystem (closed microcosm) in a test tube with initial substrates, autotrophic flagellate algae (Euglena G.), heterotrophic ciliate protozoa (Tetrahymena T.) and saprotrophic bacteria (E. coli). These species organize themselves into an ecological system that maintains sustainable population dynamics for more than 2 years after inoculation, with only diurnal lighting and temperature control at 25 degree Celsius. The objective of the study is to develop a particle-based computer simulation by reviewing interactions among the microbes and their environment, and to analyze the ecological toxicity of radiation on the microcosm by replicating the experimental results in the simulation. (Author) 14 refs

  15. A Social Approach to Rule Dynamics Using an Agent-Based Model.

    Science.gov (United States)

    Cuskley, Christine; Loreto, Vittorio; Kirby, Simon

    2018-03-08

    A well-trod debate at the nexus of cognitive science and linguistics, the so-called past tense debate, has examined how rules and exceptions are individually acquired (McClelland & Patterson, ; Pinker & Ullman, ). However, this debate focuses primarily on individual mechanisms in learning, saying little about how rules and exceptions function from a sociolinguistic perspective. To remedy this, we use agent-based models to examine how rules and exceptions function across populations. We expand on earlier work by considering how repeated interaction and cultural transmission across speakers affect the dynamics of rules and exceptions in language, measuring linguistic outcomes within a social system rather than focusing on individual learning outcomes. We consider how population turnover and growth affect linguistic rule dynamics in large and small populations, showing that this method has considerable potential, particularly for probing the mechanisms underlying the linguistic niche hypothesis (Lupyan & Dale, ). © 2018 Cognitive Science Society, Inc.

  16. The Effect of Task Duration on Event-Based Prospective Memory: A Multinomial Modeling Approach.

    Science.gov (United States)

    Zhang, Hongxia; Tang, Weihai; Liu, Xiping

    2017-01-01

    Remembering to perform an action when a specific event occurs is referred to as Event-Based Prospective Memory (EBPM). This study investigated how EBPM performance is affected by task duration by having university students (n = 223) perform an EBPM task that was embedded within an ongoing computer-based color-matching task. For this experiment, we separated the overall task's duration into the filler task duration and the ongoing task duration. The filler task duration is the length of time between the intention and the beginning of the ongoing task, and the ongoing task duration is the length of time between the beginning of the ongoing task and the appearance of the first Prospective Memory (PM) cue. The filler task duration and ongoing task duration were each divided into three levels: 3, 6, and 9 min. The two factors were orthogonally manipulated between subjects, and a multinomial processing tree model was used to separate the effects of the different task durations on the two EBPM components. A mediation model was then created to verify whether task duration influences EBPM via self-reminding or discrimination. The results reveal three points. (1) Lengthening the duration of ongoing tasks had a negative effect on EBPM performance, while lengthening the duration of the filler task had no significant effect on it. (2) As the filler task was lengthened, both the prospective and retrospective components showed a decreasing and then increasing trend. Also, when the ongoing task duration was lengthened, the prospective component decreased while the retrospective component significantly increased. (3) The mediating effect of discrimination between task duration and EBPM performance was significant. We concluded that different task durations influence EBPM performance through different components, with discrimination being the mediator between task duration and EBPM performance.

  17. The Effect of Task Duration on Event-Based Prospective Memory: A Multinomial Modeling Approach

    Directory of Open Access Journals (Sweden)

    Hongxia Zhang

    2017-11-01

    Full Text Available Remembering to perform an action when a specific event occurs is referred to as Event-Based Prospective Memory (EBPM). This study investigated how EBPM performance is affected by task duration by having university students (n = 223) perform an EBPM task that was embedded within an ongoing computer-based color-matching task. For this experiment, we separated the overall task’s duration into the filler task duration and the ongoing task duration. The filler task duration is the length of time between the intention and the beginning of the ongoing task, and the ongoing task duration is the length of time between the beginning of the ongoing task and the appearance of the first Prospective Memory (PM) cue. The filler task duration and ongoing task duration were each divided into three levels: 3, 6, and 9 min. The two factors were orthogonally manipulated between subjects, and a multinomial processing tree model was used to separate the effects of the different task durations on the two EBPM components. A mediation model was then created to verify whether task duration influences EBPM via self-reminding or discrimination. The results reveal three points. (1) Lengthening the duration of ongoing tasks had a negative effect on EBPM performance, while lengthening the duration of the filler task had no significant effect on it. (2) As the filler task was lengthened, both the prospective and retrospective components showed a decreasing and then increasing trend. Also, when the ongoing task duration was lengthened, the prospective component decreased while the retrospective component significantly increased. (3) The mediating effect of discrimination between task duration and EBPM performance was significant. We concluded that different task durations influence EBPM performance through different components, with discrimination being the mediator between task duration and EBPM performance.

  18. A Tucker model based approach for analysis of complex oil biodegradation data.

    Science.gov (United States)

    Tomasi, Giorgio; Christensen, Jan H

    2009-11-06

    A novel method based on gas chromatography-mass spectrometry in selected ion monitoring mode (GC-MS/SIM) and Tucker models is developed to evaluate the effects of oil type, microbial treatments and incubation time on the biodegradation of petroleum hydrocarbons. The data set consists of sections of the m/z 180, 192 and 198 GC-MS/SIM chromatograms of oil extracts from a biodegradation experiment where four oil types were exposed to four microbial treatments over a period of one year. The chosen sections, which are specific to methylfluorenes, phenanthrenes and dibenzothiophenes, were combined in a 4-way array (incubation time × oil type × treatment × combined chromatographic retention times) that was analyzed using both principal component analysis and the Tucker model. Several conclusions could be reached: the light fuel oil was the least degradable of those tested, 2- and 3-methyl isomers were more easily degraded compared to the 4-methyl isomers, the mixture of surfactant producers and PAC degraders provided the most effective degradation, and the largest part of the degradation occurred between 54 and 132 days.
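
    The core operation behind such an analysis is a multilinear (Tucker) decomposition of the 4-way array into a small core tensor and one loading matrix per mode. A minimal sketch using the TensorLy library is shown below; the array sizes and ranks are placeholders, not those of the study.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Placeholder 4-way array: incubation time x oil type x treatment x retention time
X = tl.tensor(np.random.rand(5, 4, 4, 300))

# Tucker model: X ~ core x_1 A x_2 B x_3 C x_4 D, with a rank per mode
core, (A, B, C, D) = tucker(X, rank=[2, 2, 2, 3])
print(core.shape, A.shape)   # loadings B and C describe oil types / treatments
```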

  19. Enhancing the Lasso Approach for Developing a Survival Prediction Model Based on Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Shuhei Kaneko

    2015-01-01

    Full Text Available In the past decade, researchers in oncology have sought to develop survival prediction models using gene expression data. The least absolute shrinkage and selection operator (lasso) has been widely used to select genes that are truly correlated with a patient’s survival. The lasso selects genes for prediction by shrinking a large number of coefficients of the candidate genes towards zero, based on a tuning parameter that is often determined by cross-validation (CV). However, this method can pass over (i.e., fail to identify) true positive genes, producing false negatives, in certain instances, because the lasso tends to favor the development of a simple prediction model. Here, we attempt to monitor the identification of false negatives by developing a method for estimating the number of true positive (TP) genes for a series of values of the tuning parameter, assuming a mixture distribution for the lasso estimates. Using our developed method, we performed a simulation study to examine its precision in estimating the number of TP genes. Additionally, we applied our method to a real gene expression dataset and found that it was able to identify genes correlated with survival that a CV method was unable to detect.
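
    The trade-off the abstract describes — fewer genes selected as the tuning parameter grows — is easy to visualize along the lasso path. The stand-in sketch below uses a plain regression lasso from scikit-learn on synthetic data rather than the Cox lasso for survival data used in the paper; it only illustrates the selection mechanism.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lasso_path

# Synthetic stand-in: 1000 candidate "genes", 20 truly informative ones
X, y = make_regression(n_samples=120, n_features=1000, n_informative=20,
                       noise=5.0, random_state=0)

# Coefficients along a grid of tuning-parameter (lambda) values
alphas, coefs, _ = lasso_path(X, y)
n_selected = (np.abs(coefs) > 1e-12).sum(axis=0)
for a, k in list(zip(alphas, n_selected))[::20]:
    print(f"lambda = {a:10.3f}   genes selected = {k}")
```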

  20. Hydrological modelling over different scales on the edge of the permafrost zone: approaching model realism based on experimentalists' knowledge

    Science.gov (United States)

    Nesterova, Natalia; Makarieva, Olga; Lebedeva, Lyudmila

    2017-04-01

    Quantitative and qualitative experimentalists' data help to advance both the understanding of runoff generation and modelling strategies. There is a significant lack of such information for the dynamic and vulnerable cold regions. The aim of the study is to make use of historically collected experimental hydrological data for modelling poorly gauged river basins on larger scales near the southern margin of the permafrost zone in Eastern Siberia. The experimental study site "Mogot" includes the Nelka river (30.8 km2) and its three tributaries, with watershed areas from 2 to 5.8 km2. It is located in the upper, elevated (500 - 1500 m a.s.l.) part of the Amur River basin. Mean annual temperature and precipitation are -7.5°C and 555 mm, respectively. The mountain tops, with sparse vegetation, have well-drained soil that prevents water accumulation. Larch forest on the northern slopes has a thick organic layer, which causes a shallow active layer and relatively small subsurface water storage. Soil on the southern slopes has a thinner organic layer and thaws to a depth of up to 1.6 m. Flood plains are the wettest landscape, with the highest water storage capacity. Measured monthly evaporation varies from 9 to 100 mm through the year. The experimental data show the importance of air temperature and precipitation changes with elevation; their gradients were taken into account in the hydrological simulations. Model parameterization was developed according to the quantitative and qualitative data available at the Mogot station. The process-based hydrological Hydrograph model was used in the study. It explicitly describes hydrological processes in different permafrost environments. The flexibility of the Hydrograph model makes it possible to take advantage of the experimental data for model set-up. The model uses basic meteorological data as input. The level of model complexity is suitable for a remote, sparsely gauged region such as Southern Siberia, as it allows for a priori assessment of the model parameters. Model simulation

  1. Short-Term Bus Passenger Demand Prediction Based on Time Series Model and Interactive Multiple Model Approach

    Directory of Open Access Journals (Sweden)

    Rui Xue

    2015-01-01

    Full Text Available Although bus passenger demand prediction has attracted increased attention during recent years, limited research has been conducted in the context of short-term passenger demand forecasting. This paper proposes an interactive multiple model (IMM) filter algorithm-based model to predict short-term passenger demand. After being aggregated into 15 min intervals, passenger demand data collected from a busy bus route over four months were used to generate time series. Considering that passenger demand exhibits various characteristics on different time scales, three time series were developed, namely weekly, daily, and 15 min time series. After correlation, periodicity, and stationarity analyses, time series models were constructed. In particular, the heteroscedasticity of the time series was explored to achieve better prediction performance. Finally, the IMM filter algorithm was applied to combine the individual forecasting models and dynamically predict passenger demand for the next interval. Different error indices were adopted for the analyses of the individual and hybrid models. The performance comparison indicates that the hybrid model forecasts are superior to the individual ones in accuracy. The findings of this study are of theoretical and practical significance for bus scheduling.
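
    The key step in the hybrid scheme is the IMM-style fusion of the three forecasts. The toy sketch below is not the authors' filter; the transition matrix, noise scale and numbers are invented. It mixes model probabilities through a Markov transition matrix, reweights them by the Gaussian likelihood of each model's latest one-step error, and fuses the next-interval forecasts.

```python
import numpy as np

def imm_step(forecasts, last_errors, probs, trans, sigma=5.0):
    """One simplified IMM-style fusion step: Markov mixing of model
    probabilities, likelihood reweighting from recent errors, then a
    probability-weighted combination of the model forecasts."""
    mixed = trans.T @ probs                                  # Markov mixing
    lik = np.exp(-0.5 * (np.asarray(last_errors) / sigma) ** 2)
    probs = mixed * lik
    probs /= probs.sum()                                     # normalize
    return float(probs @ np.asarray(forecasts)), probs

trans = np.array([[0.90, 0.05, 0.05],       # weekly / daily / 15-min models
                  [0.05, 0.90, 0.05],
                  [0.05, 0.05, 0.90]])
probs = np.ones(3) / 3
fused, probs = imm_step([120.0, 132.0, 128.0], [6.0, 2.0, 3.0], probs, trans)
print(fused, probs)   # the model with the smallest recent error gains weight
```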

  2. A multicriteria model for ranking of improvement approaches in construction companies based on the PROMETHÉE II method

    Directory of Open Access Journals (Sweden)

    Renata Maciel de Melo

    2015-03-01

    Full Text Available The quality of the construction production process may be improved using several different methods, such as Lean Construction, ISO 9001, ISO 14001 or ISO 18001. Construction companies need a preliminary study and systematic implementation of changes to become more competitive and efficient. This paper presents a multicriteria decision model, based on the PROMETHEE II method, for the selection and ranking of such improvement approaches with regard to quality, sustainability and safety. The adoption of this model provides more confidence and visibility for decision makers. One of the differentiators of this model is the use of a fragmented set of improvement alternatives, which were combined with some restrictions to create a global set of alternatives. An application to three scenarios, considering realistic data, was developed. The results of the application show that the model should be incorporated into the strategic planning process of organizations.
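
    For orientation, PROMETHEE II ranks alternatives by their net outranking flow: pairwise preferences per criterion are weighted, aggregated into positive and negative flows, and differenced. The sketch below is a minimal version with the "usual" preference function and invented weights and scores, not the paper's data.

```python
import numpy as np

def promethee_ii(scores, weights):
    """Minimal PROMETHEE II with the 'usual' preference function
    (preference 1 if a beats b on a criterion, else 0); higher scores
    are better. Returns the net outranking flow per alternative."""
    n = scores.shape[0]
    pref = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                pref[a, b] = weights @ (scores[a] > scores[b]).astype(float)
    phi_plus = pref.sum(axis=1) / (n - 1)     # positive outranking flow
    phi_minus = pref.sum(axis=0) / (n - 1)    # negative outranking flow
    return phi_plus - phi_minus               # rank by descending net flow

weights = np.array([0.5, 0.3, 0.2])           # quality, sustainability, safety
scores = np.array([[0.8, 0.6, 0.7],           # alternative A (e.g. ISO 9001)
                   [0.7, 0.9, 0.5],           # alternative B (e.g. ISO 14001)
                   [0.6, 0.5, 0.9]])          # alternative C (e.g. ISO 18001)
print(promethee_ii(scores, weights))
```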

  3. A case for multi-model and multi-approach based event attribution: The 2015 European drought

    Science.gov (United States)

    Hauser, Mathias; Gudmundsson, Lukas; Orth, René; Jézéquel, Aglaé; Haustein, Karsten; Seneviratne, Sonia Isabelle

    2017-04-01

    Science on the role of anthropogenic influence on extreme weather events such as heat waves or droughts has evolved rapidly over the past years. The approach of "event attribution" compares the occurrence probability of an event in the present, factual world with the probability of the same event in a hypothetical, counterfactual world without human-induced climate change. Every such analysis necessarily faces multiple methodological choices including, but not limited to: the event definition, climate model configuration, and the design of the counterfactual world. Here, we explore the role of such choices for an attribution analysis of the 2015 European summer drought (Hauser et al., in preparation). While some GCMs suggest that anthropogenic forcing made the 2015 drought more likely, others suggest no impact, or even a decrease in the event probability. These results additionally differ for single GCMs, depending on the reference used for the counterfactual world. Observational results do not suggest a historical tendency towards more drying, but the record may be too short to provide robust assessments because of the large interannual variability of drought occurrence. These results highlight the need for a multi-model and multi-approach framework in event attribution research. This is especially important for events with low signal to noise ratio and high model dependency such as regional droughts. Hauser, M., L. Gudmundsson, R. Orth, A. Jézéquel, K. Haustein, S.I. Seneviratne, in preparation. A case for multi-model and multi-approach based event attribution: The 2015 European drought.

  4. A pharmacoeconomic modeling approach to estimate a value-based price for new oncology drugs in Europe.

    Science.gov (United States)

    Dranitsaris, George; Ortega, Ana; Lubbe, Martie S; Truter, Ilse

    2012-03-01

    Several European governments have recently mandated price cuts in drugs to reduce health care spending. However, such measures without supportive evidence may compromise patient care, because manufacturers may withdraw current products or not launch new agents. A value-based pricing scheme may be a better approach for determining a fair drug price and may be a medium for negotiations between the key stakeholders. To demonstrate this approach, pharmacoeconomic (PE) modeling was used from the Spanish health care system perspective to estimate a value-based price for bevacizumab, a drug that provides a 1.4-month survival benefit to patients with metastatic colorectal cancer (mCRC). The threshold used for economic value was three times the Spanish per capita GDP, as recommended by the World Health Organization (WHO). A PE model was developed to simulate outcomes in mCRC patients receiving chemotherapy ± bevacizumab. Clinical data were obtained from randomized trials and costs from a Spanish hospital. Utility estimates were determined by interviewing 24 Spanish oncology nurses and pharmacists. A price per dose of bevacizumab was then estimated using a target threshold of € 78,300 per quality-adjusted life year gained, which is three times the Spanish per capita GDP. For a 1.4-month survival benefit, a price of € 342 per dose would be considered cost effective from the Spanish public health care perspective. The price could be increased to € 733 or € 843 per dose if the drug were able to improve patient quality of life or enhance survival from 1.4 to 3 months, respectively. This study demonstrated that a value-based pricing approach using PE modeling and the WHO criteria for economic value is feasible and perhaps a better alternative to government-mandated price cuts. The former approach would be a good starting point for opening dialog between European government payers and the pharmaceutical industry.
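
    To make the back-calculation concrete: the price is chosen so that the incremental cost per QALY of the treatment arm equals the stated threshold. The arithmetic below reproduces the logic only; the threshold and survival gain come from the abstract, but the utility weight, number of doses and ancillary costs are invented placeholders, so it does not reproduce the paper's € 342 figure.

```python
# Back-of-envelope value-based price (illustrative numbers only)
threshold_eur_per_qaly = 78_300.0     # 3x Spanish per-capita GDP (from the paper)
survival_gain_years = 1.4 / 12        # 1.4-month benefit (from the paper)
utility = 0.70                        # assumed utility weight
doses = 12                            # assumed doses per patient
other_costs = 1_500.0                 # assumed administration/toxicity costs

qaly_gain = survival_gain_years * utility
max_drug_budget = threshold_eur_per_qaly * qaly_gain - other_costs
print(f"value-based price ~ EUR {max_drug_budget / doses:.0f} per dose")
```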

  5. GIS-Based Fast Moving Landslide Risk Analysis Model Using Qualitative Approach: A Case Study of Balakot, Pakistan

    Directory of Open Access Journals (Sweden)

    Abdul Salam Soomro

    2011-04-01

    Full Text Available The innovation of this research is the development of a new model, called the fast-moving landslide risk analysis model, created by modifying one of the previous prominent landslide risk algorithms to focus on fast-moving landslide types (such as mudslides, mud flows, block slides, rock falls and topples), based on a qualitative approach using a heuristic method in GIS (Geographical Information Systems). Different event-controlling parameters and criteria were used for the fast-moving landslide predictive risk model. A pairwise comparison method was used, in which the landslide hazard and vulnerability parameters were compared through their assigned weights. The subjectivity of this approach was checked against the standard value of the consistency ratio, which confirmed the assigned parameter weights as reasonable and consistent. The model was validated using an inventory of occurred landslides and a correlation coefficient test, which showed a positive relationship between the predicted landslide risk regions and the locations of occurred landslides. The various landslide events of 8th October, 2005 were compiled into the landslide inventory by interpretation of satellite imagery. The validation of the model was further supported by a paired "t" test and by the amount of predicted risk in the different regions. It is believed that this modified model will prove beneficial to decision makers in future.
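
    The pairwise-comparison/consistency-ratio machinery mentioned above is the standard AHP calculation, sketched below (a generic implementation with a hypothetical 3x3 comparison matrix, not the study's actual parameters or weights).

```python
import numpy as np

def ahp_weights_cr(A):
    """Weights from a pairwise-comparison matrix via the principal
    eigenvector, plus Saaty's consistency ratio (CR < 0.1 is the
    conventional acceptance threshold the abstract alludes to)."""
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                       # normalized weights
    ci = (vals[k].real - n) / (n - 1)                  # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]       # Saaty's random index
    return w, ci / ri

A = np.array([[1.0, 3.0, 5.0],        # hypothetical comparisons of three
              [1/3, 1.0, 2.0],        # landslide-controlling parameters
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights_cr(A)
print("weights:", w, " CR:", cr)      # CR well below 0.1 -> consistent
```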

  6. A logic-based dynamic modeling approach to explicate the evolution of the central dogma of molecular biology.

    Directory of Open Access Journals (Sweden)

    Mohieddin Jafari

    Full Text Available It is nearly half a century since the introduction of the Central Dogma (CD) of molecular biology. This biological axiom has been developed further and currently appears all the more complex. In this study, we modified the CD by adding further species to the CD information flow and mathematically expressed the CD within a dynamic framework, using a Boolean network based on its present-day and 1965 editions. We show that the enhanced Dogma not only entails a higher level of complexity, but also shows a higher level of robustness, and is thus far more consistent with the nature of biological systems. Using this mathematical modeling approach, we put forward a logic-based expression of our conceptual view of molecular biology. Finally, we show that such biological concepts can be converted into dynamic mathematical models using a logic-based approach and thus may be useful as a framework for improving static conceptual models in biology.
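
    To show the flavor of a Boolean-network rendering of an information flow, here is a toy synchronous-update model of the classic dogma (replication, transcription, translation). The paper's network is far richer; the rules below only illustrate the update mechanism.

```python
def step(s):
    """One synchronous update of a toy Boolean network for the classic
    information flow; each species is ON if its upstream source was ON."""
    return {
        "DNA":     s["DNA"],          # replication preserves DNA
        "mRNA":    s["DNA"],          # transcription requires DNA
        "protein": s["mRNA"],         # translation requires mRNA
    }

state = {"DNA": True, "mRNA": False, "protein": False}
for t in range(4):
    print(t, state)
    state = step(state)               # information propagates one edge per step
```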

  7. Assessment of Safety and Functional Efficacy of Stem Cell-Based Therapeutic Approaches Using Retinal Degenerative Animal Models

    Directory of Open Access Journals (Sweden)

    Tai-Chi Lin

    2017-01-01

    Full Text Available Dysfunction and death of retinal pigment epithelium (RPE) and/or photoreceptors can lead to irreversible vision loss. The eye represents an ideal microenvironment for stem cell-based therapy. It is considered an “immune privileged” site, and the number of cells needed for therapy is relatively low for the area of focused vision (macula). Further, surgical placement of stem cell-derived grafts (RPE, retinal progenitors, and photoreceptor precursors) into the vitreous cavity or subretinal space has been well established. For preclinical tests, assessments of stem cell-derived graft survival and functionality are conducted in animal models by various noninvasive approaches and imaging modalities. In vivo experiments conducted in animal models based on replacing photoreceptors and/or RPE cells have shown survival and functionality of the transplanted cells, rescue of the host retina, and improvement of visual function. Based on the positive results obtained from these animal experiments, human clinical trials are being initiated. Despite such progress in stem cell research, ethical, regulatory, safety, and technical difficulties still remain a challenge for the transformation of this technique into a standard clinical approach. In this review, the current status of preclinical safety and efficacy studies for retinal cell replacement therapies conducted in animal models will be discussed.

  8. It's the parameters, stupid! Moving beyond multi-model and multi-physics approaches to characterize and reduce predictive uncertainty in process-based hydrological models

    Science.gov (United States)

    Clark, Martyn; Samaniego, Luis; Freer, Jim

    2014-05-01

    Multi-model and multi-physics approaches are a popular tool in environmental modelling, with many studies focusing on optimally combining output from multiple model simulations to reduce predictive errors and better characterize predictive uncertainty. However, a careful and systematic analysis of different hydrological models reveals that individual models are simply small permutations of a master modeling template, and inter-model differences are overwhelmed by uncertainty in the choice of the parameter values in the model equations. Furthermore, inter-model differences do not explicitly represent the uncertainty in modeling a given process, leading to many situations where different models provide the wrong results for the same reasons. In other cases, the available morphological data does not support the very fine spatial discretization of the landscape that typifies many modern applications of process-based models. To make the uncertainty characterization problem worse, the uncertain parameter values in process-based models are often fixed (hard-coded), and the models lack the agility necessary to represent the tremendous heterogeneity in natural systems. This presentation summarizes results from a systematic analysis of uncertainty in process-based hydrological models, where we explicitly analyze the myriad of subjective decisions made throughout both the model development and parameter estimation process. Results show that much of the uncertainty is aleatory in nature - given a "complete" representation of dominant hydrologic processes, uncertainty in process parameterizations can be represented using an ensemble of model parameters. Epistemic uncertainty associated with process interactions and scaling behavior is still important, and these uncertainties can be represented using an ensemble of different spatial configurations. Finally, uncertainty in forcing data can be represented using ensemble methods for spatial meteorological analysis. Our systematic

  9. HEDR modeling approach

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1992-07-01

    This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and projects an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered.

  10. MODELLING TEMPORAL SCHEDULE OF URBAN TRAINS USING AGENT-BASED SIMULATION AND NSGA2-BASED MULTIOBJECTIVE OPTIMIZATION APPROACHES

    Directory of Open Access Journals (Sweden)

    M. Sahelgozin

    2015-12-01

    Full Text Available Increasing distances between locations of residence and services lead to a large number of daily commutes in urban areas. Developing subway systems has been considered by transportation managers as a response to this huge travel demand. In the development of subway infrastructure, producing a temporal schedule for trains is an important task, because an appropriately designed timetable decreases total passenger travel times, total operation costs and the energy consumption of trains. Since these variables are not positively correlated, subway scheduling is considered a multi-criteria optimization problem, and proposing a proper solution for subway scheduling has always been a challenging issue. On the other hand, research on a phenomenon requires a summarized representation of the real world, known as a model. In this study, we attempt to model the temporal schedule of urban trains in a form that can be applied to Multi-Criteria Subway Schedule Optimization (MCSSO) problems. At first, a conceptual framework is presented for MCSSO. Then, an agent-based simulation environment is implemented to perform the Sensitivity Analysis (SA) used to extract the interrelations between the framework components. These interrelations are then taken into account in constructing the proposed model. In order to evaluate the performance of the model in MCSSO problems, Tehran subway line no. 1 is considered as the case study. The results show that the model was able to generate an acceptable distribution of Pareto-optimal solutions that are applicable in real situations when solving an MCSSO problem. Also, the accuracy of the model in representing the operation of subway systems was significant.

  11. Modelling Temporal Schedule of Urban Trains Using Agent-Based Simulation and NSGA2-BASED Multiobjective Optimization Approaches

    Science.gov (United States)

    Sahelgozin, M.; Alimohammadi, A.

    2015-12-01

    Increasing distances between locations of residence and services lead to a large number of daily commutes in urban areas. Developing subway systems has been considered by transportation managers as a response to this huge travel demand. In the development of subway infrastructure, producing a temporal schedule for trains is an important task, because an appropriately designed timetable decreases total passenger travel times, total operation costs and the energy consumption of trains. Since these variables are not positively correlated, subway scheduling is considered a multi-criteria optimization problem, and proposing a proper solution for subway scheduling has always been a challenging issue. On the other hand, research on a phenomenon requires a summarized representation of the real world, known as a model. In this study, we attempt to model the temporal schedule of urban trains in a form that can be applied to Multi-Criteria Subway Schedule Optimization (MCSSO) problems. At first, a conceptual framework is presented for MCSSO. Then, an agent-based simulation environment is implemented to perform the Sensitivity Analysis (SA) used to extract the interrelations between the framework components. These interrelations are then taken into account in constructing the proposed model. In order to evaluate the performance of the model in MCSSO problems, Tehran subway line no. 1 is considered as the case study. The results show that the model was able to generate an acceptable distribution of Pareto-optimal solutions that are applicable in real situations when solving an MCSSO problem. Also, the accuracy of the model in representing the operation of subway systems was significant.
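
    The Pareto-optimality criterion used to judge candidate timetables is simple to state in code. Below is a brute-force non-dominance check (generic, with invented objective values; NSGA-II adds crowding and selection on top of exactly this dominance test).

```python
def pareto_front(objs):
    """Return indices of non-dominated solutions; every objective is
    minimized (e.g. total travel time, operating cost, energy use of a
    candidate timetable). Brute force, fine for small archives."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [i for i, a in enumerate(objs)
            if not any(dominates(b, a) for j, b in enumerate(objs) if j != i)]

timetables = [(410.0,  95.0, 3.1),   # (travel time, cost, energy) - invented
              (395.0, 102.0, 3.4),
              (420.0,  99.0, 3.0),
              (430.0, 110.0, 3.6)]   # dominated by the first timetable
print(pareto_front(timetables))      # -> [0, 1, 2]
```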

  12. Modeling prosody: Different approaches

    Science.gov (United States)

    Carmichael, Lesley M.

    2002-11-01

    Prosody pervades all aspects of a speech signal, both in terms of raw acoustic outcomes and linguistically meaningful units, from the phoneme to the discourse unit. It is carried in the suprasegmental features of fundamental frequency, loudness, and duration. Several models have been developed to account for the way prosody organizes speech, and they vary widely in terms of their theoretical assumptions, organizational primitives, actual procedures of application to speech, and intended use (e.g., to generate speech from text vs. to model the prosodic phonology of a language). In many cases, these models overtly contradict one another with regard to their fundamental premises or their identification of the perceptible objects of linguistic prosody. These competing models are directly compared. Each model is applied to the same speech samples. This parallel analysis allows for a critical inspection of each model and its efficacy in assessing the suprasegmental behavior of the speech. The analyses illustrate how different approaches are better equipped to account for different aspects of prosody. Viewing the models and their successes from an objective perspective allows for creative possibilities in terms of combining strengths from models which might otherwise be considered fundamentally incompatible.

  13. Fractional Partial Differential Equation: Fractional Total Variation and Fractional Steepest Descent Approach-Based Multiscale Denoising Model for Texture Image

    Directory of Open Access Journals (Sweden)

    Yi-Fei Pu

    2013-01-01

    Full Text Available Traditional integer-order partial differential equation-based image denoising approaches often blur edges and complex texture detail; thus, their denoising effects for texture images are not very good. To solve this problem, a fractional partial differential equation-based denoising model for texture images is proposed, which applies a novel mathematical method, fractional calculus, to image processing from the view of system evolution. We know from previous studies that fractional-order calculus has some unique properties compared to integer-order differential calculus, in that it can nonlinearly enhance complex texture detail during digital image processing. The goal of the proposed model is to overcome the problems mentioned above by using the properties of fractional differential calculus. The model extends the traditional integer-order equation to a fractional order, proposing the fractional Green’s formula and the fractional Euler-Lagrange formula for two-dimensional image processing, from which a fractional partial differential equation-based denoising model is derived. The experimental results prove that the abilities of the proposed denoising model to preserve high-frequency edge and complex texture information are clearly superior to those of traditional integer-order-based algorithms, especially for texture-detail-rich images.
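
    Fractional derivatives in such models are commonly discretized with the Grünwald-Letnikov expansion, whose slowly decaying weights give the non-local, texture-preserving behaviour described above. The one-dimensional sketch below is an assumed discretization for illustration; the paper itself works with two-dimensional operators.

```python
import numpy as np
from scipy.special import binom

def gl_fractional_derivative(x, alpha, h=1.0):
    """Grunwald-Letnikov fractional derivative of order alpha:
    D^a x(t) ~ h**(-a) * sum_k (-1)**k * C(alpha, k) * x(t - k*h).
    Unlike an integer-order difference, every past sample contributes."""
    n = len(x)
    k = np.arange(n)
    w = (-1.0) ** k * binom(alpha, k)             # long-memory weights
    return np.array([np.dot(w[:m + 1], x[m::-1]) for m in range(n)]) / h ** alpha

print(gl_fractional_derivative(np.arange(8, dtype=float), alpha=0.5))
```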

  14. Artificial neural network based modelling approach for municipal solid waste gasification in a fluidized bed reactor.

    Science.gov (United States)

    Pandey, Daya Shankar; Das, Saptarshi; Pan, Indranil; Leahy, James J; Kwapinski, Witold

    2016-12-01

    In this paper, multi-layer feed-forward neural networks are used to predict the lower heating value of gas (LHV), the lower heating value of gasification products including tars and entrained char (LHVp), and the syngas yield during gasification of municipal solid waste (MSW) in a fluidized bed reactor. These artificial neural networks (ANNs) with different architectures are trained using the Levenberg-Marquardt (LM) back-propagation algorithm, and a cross-validation is also performed to ensure that the results generalise to other unseen datasets. A rigorous study is carried out on optimally choosing the number of hidden layers, the number of neurons in the hidden layer, and the activation function in a network using multiple Monte Carlo runs. Nine input and three output parameters are used to train and test various neural network architectures in both multiple-output and single-output prediction paradigms using the available experimental datasets. The model selection procedure is carried out to ascertain the best network architecture in terms of predictive accuracy. The simulation results show that the ANN based methodology is a viable alternative which can be used to predict the performance of a fluidized bed gasifier. Copyright © 2016 Elsevier Ltd. All rights reserved.
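
    The workflow of the paper (small feed-forward network, cross-validated architecture choice) is easy to reproduce in outline. In the stand-in sketch below, the data are synthetic placeholders for the nine gasifier inputs, and scikit-learn's L-BFGS solver substitutes for Levenberg-Marquardt training, which scikit-learn does not provide.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

# Placeholder data: 9 operating-condition inputs -> one output (e.g. syngas yield)
rng = np.random.default_rng(0)
X = rng.random((200, 9))
y = X @ rng.random(9) + 0.1 * rng.standard_normal(200)

# One small hidden layer with tanh activation; cross-validation checks
# generalisation, mirroring the model-selection step in the paper.
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0)
print(cross_val_score(net, X, y, cv=5).mean())
```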

  15. CHIRP-Like Signals: Estimation, Detection and Processing A Sequential Model-Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-04

    Chirp signals have evolved primarily from radar/sonar signal processing applications, specifically attempting to estimate the location of a target in a surveillance/tracking volume. The chirp, which is essentially a sinusoidal signal whose phase changes instantaneously at each time sample, has an interesting property in that its correlation approximates an impulse function. It is well-known that a matched-filter detector in radar/sonar estimates the target range by cross-correlating a replica of the transmitted chirp with the measurement data reflected from the target back to the radar/sonar receiver, yielding a maximum peak corresponding to the echo time and therefore enabling the desired range estimate. In this application, we perform the same operation as a radar or sonar system, that is, we transmit a “chirp-like pulse” into the target medium and attempt to first detect its presence and second estimate its location or range. Our problem is complicated by the presence of disturbance signals from surrounding broadcast stations as well as extraneous sources of interference in our frequency bands and of course the ever present random noise from instrumentation. First, we discuss the chirp signal itself and illustrate its inherent properties and then develop a model-based processing scheme enabling both the detection and estimation of the signal from noisy measurement data.
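
    The matched-filter operation described above fits in a few lines. The sketch below (sampling rate, frequencies and noise level are invented, not from the report) buries a weak chirp echo in noise and recovers its delay from the cross-correlation peak.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 10_000                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
pulse = chirp(t, f0=500, t1=t[-1], f1=2500)   # transmitted replica

rx = 0.2 * np.random.randn(3 * len(pulse))    # instrumentation noise
i0 = int(0.03 * fs)                           # true echo delay: 30 ms
rx[i0:i0 + len(pulse)] += 0.5 * pulse         # weak echo buried in the noise

# Matched filter: cross-correlate the record with the replica; the
# peak lag estimates the echo time and hence the range.
xc = correlate(rx, pulse, mode="valid")
print("estimated delay:", np.argmax(np.abs(xc)) / fs, "s")
```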

  16. Population Density and Moment-based Approaches to Modeling Domain Calcium-mediated Inactivation of L-type Calcium Channels.

    Science.gov (United States)

    Wang, Xiao; Hardcastle, Kiah; Weinberg, Seth H; Smith, Gregory D

    2016-03-01

    We present a population density and moment-based description of the stochastic dynamics of domain Ca2+-mediated inactivation of L-type Ca2+ channels. Our approach accounts for the effect of heterogeneity of local Ca2+ signals on whole cell Ca2+ currents; however, in contrast with prior work, e.g., Sherman et al. (Biophys J 58(4):985-995, 1990), we do not assume that Ca2+ domain formation and collapse are fast compared to channel gating. We demonstrate the population density and moment-based modeling approaches using a 12-state Markov chain model of an L-type Ca2+ channel introduced by Greenstein and Winslow (Biophys J 83(6):2918-2945, 2002). Simulated whole cell voltage clamp responses yield an inactivation function for the whole cell Ca2+ current that agrees with the traditional approach when domain dynamics are fast. We analyze the voltage-dependence of Ca2+ inactivation that may occur via slow heterogeneous domain Ca2+

  17. A dynamic programming approach for quickly estimating large network-based MEV models

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2017-01-01

    by a rooted, directed graph where each node without successor is an alternative. We formulate a family of MEV models as dynamic discrete choice models on graphs of correlation structures and show that the dynamic models are consistent with MEV theory and generalize the network MEV model (Daly and Bierlaire......We propose a way to estimate a family of static Multivariate Extreme Value (MEV) models with large choice sets in short computational time. The resulting model is also straightforward and fast to use for prediction. Following Daly and Bierlaire (2006), the correlation structure is defined...

  18. Predictive Modeling for Blood Transfusion Following Adult Spinal Deformity Surgery: A Tree-Based Machine Learning Approach.

    Science.gov (United States)

    Durand, Wesley M; DePasse, J Mason; Daniels, Alan H

    2017-12-05

    Retrospective cohort study. Blood transfusion is frequently necessary following adult spinal deformity (ASD) surgery. We sought to develop predictive models for blood transfusion following ASD surgery, utilizing both classification tree and random forest machine-learning approaches. Past models for transfusion risk among spine surgery patients are disadvantaged through use of single-institutional data, potentially limiting generalizability. This investigation was conducted utilizing the ACS NSQIP dataset years 2012-2015. Patients undergoing surgery for ASD were identified using primary-listed CPT codes. In total, 1,029 patients were analyzed. The primary outcome measure was intra-/post-operative blood transfusion. Patients were divided into training (n = 824) and validation (n = 205) datasets. Single classification tree and random forest models were developed. Both models were tested on the validation dataset using AUC, which was compared between models. Overall, 46.5% (n = 479) of patients received a transfusion intraoperatively or within 72 h postoperatively. The final classification tree model utilized operative duration, hematocrit, and weight, exhibiting AUC = 0.79 (95%CI 0.73-0.85) on the validation set. The most influential variables in the random forest model were operative duration, surgical invasiveness, hematocrit, weight, and age. The random forest model exhibited AUC = 0.85 (95%CI 0.80-0.90). The difference between the classification tree and random forest AUCs was non-significant at the validation cohort size of 205 patients (p = 0.1551). This investigation produced tree-based machine-learning models of blood transfusion risk following ASD surgery. The random forest model offered very good predictive capability as measured by AUC. Our single classification tree model offered superior ease of implementation, but a lower AUC as compared to the random forest approach, though this difference was not statistically significant at
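
    The modeling pipeline in this record (random forest on a handful of perioperative predictors, validated by AUC on a held-out split) is standard and sketched below. The data are a synthetic stand-in for the NSQIP features named in the abstract, so the printed AUC will not match the paper's 0.85.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the five influential predictors (operative
# duration, surgical invasiveness, hematocrit, weight, age).
X, y = make_classification(n_samples=1029, n_features=5, n_informative=4,
                           n_redundant=1, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, rf.predict_proba(X_va)[:, 1]))
```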

  19. Modeling the Impact of School-Based Universal Depression Screening on Additional Service Capacity Needs: A System Dynamics Approach.

    Science.gov (United States)

    Lyon, Aaron R; Maras, Melissa A; Pate, Christina M; Igusa, Takeru; Vander Stoep, Ann

    2016-03-01

    Although it is widely known that the occurrence of depression increases over the course of adolescence, symptoms of mood disorders frequently go undetected. While schools are viable settings for conducting universal screening to systematically identify students in need of services for common health conditions, particularly those that adversely affect school performance, few school districts routinely screen their students for depression. Among the most commonly referenced barriers are concerns that the number of students identified may exceed schools' service delivery capacities, but few studies have evaluated this concern systematically. System dynamics (SD) modeling may prove a useful approach for answering questions of this sort. The goal of the current paper is therefore to demonstrate how SD modeling can be applied to inform implementation decisions in communities. In our demonstration, we used SD modeling to estimate the additional service demand generated by universal depression screening in a typical high school. We then simulated the effects of implementing "compensatory approaches" designed to address anticipated increases in service need through (1) the allocation of additional staff time and (2) improvements in the effectiveness of mental health interventions. Results support the ability of screening to facilitate more rapid entry into services and suggest that improving the effectiveness of mental health services for students with depression via the implementation of an evidence-based treatment protocol may have a limited impact on overall recovery rates and service availability. In our example, the SD approach proved useful in informing systems' decision-making about the adoption of a new school mental health service.

  20. Development of a Novel, Parsimonious, Model-based Approach for Representing High-resolution Gravel Facies

    Science.gov (United States)

    Burrows, N.; Entwistle, N. S.; Heritage, G. L.

    2014-12-01

    A precise, time-efficient, cost-effective method for quantifying riverbed roughness and sediment size distribution has hitherto eluded river scientists. Traditional techniques (e.g., Wolman counts) have high potential for error brought about by operator bias and subjectivity when presented with complex facies assemblages, poor spatial coverage, insufficient sample sizes, and misrepresentation of bedforms. The application of LiDAR has facilitated accurate observation of micro-scale habitats, and has been successfully employed in quantifying sediment grain size at the local level. However, despite the considerable success of LiDAR instruments in remotely sensing riverine landscapes, and the obvious benefits they offer - very high spatial and temporal resolution, rapid data acquisition, and minimal disturbance in the field - procurement of such apparatus and the associated computer software comes at high financial cost, and extensive user training is generally necessary to operate such devices. Recent developments in computer software have led to advancements in digital photogrammetry over a broad range of scales, with Structure from Motion (SfM) techniques enabling production of precise DEMs based on point-clouds analogous to, and even denser than, those produced by LiDAR, at significantly reduced cost and complexity during post-processing. This study employed both SfM-photogrammetry and Terrestrial Laser Scanning (TLS) in a comparative analysis of sediment grain size, where LiDAR-derived data had previously provided a reliable reference of grain size. A Total Station EDM theodolite provided the parent coordinate system for both the SfM and the meshing of the TLS point-clouds. For each data set, a 0.19 m moving window (consistent with the largest sediment clast b axis) was applied to the resulting point-clouds. Twice the standard deviation of elevation was calculated to provide a surrogate measure of grain protrusion, from which sediment frequency
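
    On a gridded DEM, the moving-window roughness surrogate reduces to one call. The sketch below assumes a 0.01 m grid resolution (so the 0.19 m window is 19 cells) and uses placeholder elevations; both are assumptions for illustration, not the study's data.

```python
import numpy as np
from scipy.ndimage import generic_filter

# Placeholder gravel-surface DEM at an assumed 0.01 m resolution (m)
dem = np.random.rand(200, 200) * 0.05

# 0.19 m moving window (19 cells): twice the local elevation standard
# deviation serves as the grain-protrusion surrogate described above.
protrusion = generic_filter(dem, lambda w: 2.0 * np.std(w), size=19)
print(protrusion.mean())
```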

  1. Model-based fault diagnosis approach on external short circuit of lithium-ion battery used in electric vehicles

    International Nuclear Information System (INIS)

    Chen, Zeyu; Xiong, Rui; Tian, Jinpeng; Shang, Xiong; Lu, Jiahuan

    2016-01-01

    Highlights: • The characteristics of the ESC fault of lithium-ion batteries are investigated experimentally. • The proposed method to simulate the electrical behavior of the ESC fault is viable. • Ten parameters in the presented fault model were optimized using a DPSO algorithm. • A two-layer model-based fault diagnosis approach for battery ESC is proposed. • The effectiveness and robustness of the proposed algorithm have been evaluated. - Abstract: This study investigates the external short circuit (ESC) fault characteristics of lithium-ion batteries experimentally. An experiment platform is established and ESC tests are implemented on ten 18650-type lithium cells at different state-of-charges (SOCs). Based on the experimental results, several efforts have been made. (1) The ESC process can be divided into two periods, and the electrical and thermal behaviors within these two periods are analyzed. (2) A modified first-order RC model is employed to simulate the electrical behavior of the lithium cell in the ESC fault process. The model parameters are re-identified by a dynamic-neighborhood particle swarm optimization algorithm. (3) A two-layer model-based ESC fault diagnosis algorithm is proposed. The first layer conducts preliminary fault detection and the second layer gives a precise model-based diagnosis. Four new cells are short-circuited to evaluate the proposed algorithm. It shows that the ESC fault can be diagnosed within 5 s, with the error between the model and the measured data less than 0.36 V. The effectiveness of the fault diagnosis algorithm is not sensitive to the precision of the battery SOC; the proposed algorithm can still make the correct diagnosis even if there is a 10% error in the SOC estimation.
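
    For readers unfamiliar with the first-order RC equivalent circuit mentioned above, the sketch below simulates its terminal voltage under a constant load. The parameter values and the load profile are invented for illustration; in the paper they are re-identified with a particle-swarm optimizer.

```python
import numpy as np

def rc_terminal_voltage(i_load, dt, ocv, r0, r1, c1):
    """First-order RC equivalent circuit:
    v_t = OCV - i*R0 - v1,  dv1/dt = i/C1 - v1/(R1*C1),
    integrated with forward Euler. Discharge current is positive."""
    v1, out = 0.0, []
    for i in i_load:
        v1 += dt * (i / c1 - v1 / (r1 * c1))     # polarization voltage
        out.append(ocv - i * r0 - v1)            # terminal voltage
    return np.array(out)

# Sustained 30 A discharge for 5 s at 10 ms steps (made-up parameters)
v = rc_terminal_voltage(np.full(500, 30.0), dt=0.01,
                        ocv=4.1, r0=0.02, r1=0.015, c1=2000.0)
print(v[0], v[-1])   # immediate ohmic drop, then slow polarization sag
```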

  2. A population-based Bayesian approach to the minimal model of glucose and insulin homeostasis

    DEFF Research Database (Denmark)

    Andersen, Kim Emil; Højbjerre, Malene

    2005-01-01

    The minimal model was proposed in the late 1970s by Bergman et al. as a powerful model consisting of three differential equations describing the glucose and insulin kinetics of a single individual. Considering the glucose and insulin simultaneously, the minimal model is a highly ill-posed estimat...

  3. Selecting representative climate models for climate change impact studies : An advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change

  4. Selecting representative climate models for climate change impact studies: an advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; Maat, ter Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change

  5. Prediction models in the design of neural network based ECG classifiers: A neural network and genetic programming approach

    Directory of Open Access Journals (Sweden)

    Smith Ann E

    2002-01-01

    Full Text Available Abstract Background Classification of the electrocardiogram using Neural Networks has become a widely used method in recent years. The efficiency of these classifiers depends upon a number of factors, including network training. Unfortunately, there is a shortage of evidence available to enable specific design choices to be made, and as a consequence many designs are made on the basis of trial and error. In this study we develop prediction models to indicate the point at which training should stop for Neural Network based Electrocardiogram classifiers in order to ensure maximum generalisation. Methods Two prediction models have been presented; one based on Neural Networks and the other on Genetic Programming. The inputs to the models were 5 variable training parameters, and the output indicated the point at which training should stop. Training and testing of the models was based on the results from 44 previously developed bi-group Neural Network classifiers, discriminating between Anterior Myocardial Infarction and normal patients. Results Our results show that both approaches provide close fits to the training data; p = 0.627 and p = 0.304 for the Neural Network and Genetic Programming methods respectively. For unseen data, the Neural Network exhibited no significant differences between actual and predicted outputs (p = 0.306), while the Genetic Programming method showed a marginally significant difference (p = 0.047). Conclusions The approaches provide reverse engineering solutions to the development of Neural Network based Electrocardiogram classifiers. That is, given the network design and architecture, an indication can be given as to when training should stop to obtain maximum network generalisation.

  6. Towards a dynamic assessment of raw materials criticality: Linking agent-based demand — with material flow supply modelling approaches

    International Nuclear Information System (INIS)

    Knoeri, Christof; Wäger, Patrick A.; Stamp, Anna; Althaus, Hans-Joerg; Weil, Marcel

    2013-01-01

    Emerging technologies such as information and communication-, photovoltaic- or battery technologies are expected to increase significantly the demand for scarce metals in the near future. The recently developed methods to evaluate the criticality of mineral raw materials typically provide a ‘snapshot’ of the criticality of a certain material at one point in time by using static indicators both for supply risk and for the impacts of supply restrictions. While allowing for insights into the mechanisms behind the criticality of raw materials, these methods cannot account for dynamic changes in products and/or activities over time. In this paper we propose a conceptual framework intended to overcome these limitations by including the dynamic interactions between different possible demand and supply configurations. The framework integrates an agent-based behaviour model, where demand emerges from individual agent decisions and interaction, into a dynamic material flow model, representing the materials' stocks and flows. Within the framework, the environmental implications of substitution decisions are evaluated by applying life-cycle assessment methodology. The approach makes a first step towards a dynamic criticality assessment and will enhance the understanding of industrial substitution decisions and environmental implications related to critical metals. We discuss the potential and limitations of such an approach in contrast to state-of-the-art methods and how it might lead to criticality assessments tailored to the specific circumstances of single industrial sectors or individual companies. - Highlights: ► Current criticality assessment methods provide a ‘snapshot’ at one point in time. ► They do not account for dynamic interactions between demand and supply. ► We propose a conceptual framework to overcome these limitations. ► The framework integrates an agent-based behaviour model with a dynamic material flow model. ► The approach proposed makes
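
    To make the demand–supply coupling concrete, here is a minimal Python sketch (an illustration only, not the authors' framework): a toy agent population generates demand for a metal-bearing product, and a cohort-based stock-and-flow model tracks in-use stock and end-of-life outflow. All numbers (population, lifetime, material intensity, adoption rates) are invented for the example.

```python
# Toy sketch (not the authors' implementation) of coupling an agent-based
# demand model to a dynamic material-flow model: agents decide each year
# whether to adopt a metal-bearing product; adopted units feed a cohort
# stock-and-flow model whose end-of-life outflow becomes secondary supply.
import random

random.seed(1)

N_AGENTS = 1000          # hypothetical population of potential adopters
LIFETIME = 5             # product lifetime in years (assumed)
METAL_PER_UNIT = 0.02    # kg of critical metal per unit (assumed)

adopted = [False] * N_AGENTS
cohorts = []             # (year_built, units) pairs

for year in range(2013, 2031):
    # Agent decisions: adoption probability rises with peer adoption share.
    share = sum(adopted) / N_AGENTS
    new_units = 0
    for i in range(N_AGENTS):
        if not adopted[i] and random.random() < 0.02 + 0.1 * share:
            adopted[i] = True
            new_units += 1
    cohorts.append((year, new_units))

    # Material-flow side: primary demand in, end-of-life outflow back out.
    demand = new_units * METAL_PER_UNIT
    eol = sum(u for y, u in cohorts if year - y == LIFETIME) * METAL_PER_UNIT
    in_use = sum(u for y, u in cohorts if year - y < LIFETIME) * METAL_PER_UNIT
    print(f"{year}: demand={demand:.2f} kg, secondary={eol:.2f} kg, "
          f"in-use stock={in_use:.2f} kg")
```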

  7. A novel approach to model exposure of coastal-marine ecosystems to riverine flood plumes based on remote sensing techniques.

    Science.gov (United States)

    Álvarez-Romero, Jorge G; Devlin, Michelle; Teixeira da Silva, Eduardo; Petus, Caroline; Ban, Natalie C; Pressey, Robert L; Kool, Johnathan; Roberts, Jason J; Cerdeira-Estrada, Sergio; Wenger, Amelia S; Brodie, Jon

    2013-04-15

    Increased loads of land-based pollutants are a major threat to coastal-marine ecosystems. Identifying the affected marine areas and the scale of influence on ecosystems is critical to assess the impacts of degraded water quality and to inform planning for catchment management and marine conservation. Studies using remotely-sensed data have contributed to our understanding of the occurrence and influence of river plumes, and to our ability to assess exposure of marine ecosystems to land-based pollutants. However, refinement of plume modeling techniques is required to improve risk assessments. We developed a novel, complementary, approach to model exposure of coastal-marine ecosystems to land-based pollutants. We used supervised classification of MODIS-Aqua true-color satellite imagery to map the extent of plumes and to qualitatively assess the dispersal of pollutants in plumes. We used the Great Barrier Reef (GBR), the world's largest coral reef system, to test our approach. We combined frequency of plume occurrence with spatially distributed loads (based on a cost-distance function) to create maps of exposure to suspended sediment and dissolved inorganic nitrogen. We then compared annual exposure maps (2007-2011) to assess inter-annual variability in the exposure of coral reefs and seagrass beds to these pollutants. We found this method useful to map plumes and qualitatively assess exposure to land-based pollutants. We observed inter-annual variation in exposure of ecosystems to pollutants in the GBR, stressing the need to incorporate a temporal component into plume exposure/risk models. Our study contributes to our understanding of plume spatial-temporal dynamics of the GBR and offers a method that can also be applied to monitor exposure of coastal-marine ecosystems to plumes and explore their ecological influences. Copyright © 2013 Elsevier Ltd. All rights reserved.
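
    A minimal numpy sketch of the exposure calculation described above, under the assumption that per-pixel exposure is the product of plume-occurrence frequency and a (cost-)distance-decayed pollutant load; the grids below are synthetic placeholders, not GBR data.

```python
# Exposure index = plume frequency x spatially distributed load (assumed form).
import numpy as np

rng = np.random.default_rng(0)
n_scenes, ny, nx = 50, 100, 100

# Plume masks from classified satellite scenes (True = pixel inside plume).
plume_masks = rng.random((n_scenes, ny, nx)) < 0.3
frequency = plume_masks.mean(axis=0)          # plume occurrence frequency

# Pollutant load decaying with (cost-)distance from a river mouth at (0, 0).
yy, xx = np.mgrid[0:ny, 0:nx]
cost_distance = np.hypot(yy, xx)
load = np.exp(-cost_distance / 30.0)          # assumed decay function

exposure = frequency * load                   # per-pixel exposure index
print("max exposure index:", exposure.max())
```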

  8. A cloud model based multi-attribute decision making approach for selection and evaluation of groundwater management schemes

    Science.gov (United States)

    Lu, Hongwei; Ren, Lixia; Chen, Yizhong; Tian, Peipei; Liu, Jia

    2017-12-01

    Due to the uncertainty (i.e., fuzziness, stochasticity and imprecision) that exists simultaneously during the process of groundwater remediation, the accuracy of ranking results obtained by traditional methods has been limited. This paper proposes a cloud model based multi-attribute decision making framework (CM-MADM) with Monte Carlo for the selection of contaminated-groundwater remediation strategies. The cloud model is used to handle imprecise numerical quantities, which can describe the fuzziness and stochasticity of the information fully and precisely. In the proposed approach, the contaminated concentrations are aggregated via the backward cloud generator and the weights of attributes are calculated by employing the weight cloud module. A case study on remedial alternative selection for a contaminated site suffering from a 1,1,1-trichloroethylene leakage problem in Shanghai, China is conducted to illustrate the efficiency and applicability of the developed approach. In total, an attribute system consisting of ten attributes was used for evaluating each alternative through the developed method under uncertainty, including daily total pumping rate, total cost and cloud model based health risk. Results indicated that A14 was evaluated to be the most preferred alternative for the 5-year, A5 for the 10-year, A4 for the 15-year and A6 for the 20-year remediation.
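
    The backward cloud generator mentioned above has a standard form for the normal cloud model, estimating the digital characteristics (Ex, En, He) from samples. The sketch below applies that textbook estimator to synthetic concentration data; the well data and values are hypothetical, not the study's measurements.

```python
# Standard backward cloud generator for the normal cloud model.
import numpy as np

def backward_cloud(x):
    """Estimate cloud digital characteristics (Ex, En, He) from samples."""
    x = np.asarray(x, dtype=float)
    Ex = x.mean()
    # En from the first absolute central moment of a normal distribution.
    En = np.sqrt(np.pi / 2.0) * np.abs(x - Ex).mean()
    He = np.sqrt(max(x.var(ddof=1) - En**2, 0.0))
    return Ex, En, He

rng = np.random.default_rng(42)
# Hypothetical 1,1,1-TCE concentration measurements (mg/L) at one well.
samples = rng.normal(loc=0.8, scale=0.1, size=200)
print("Ex=%.3f  En=%.3f  He=%.3f" % backward_cloud(samples))
```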

  9. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
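
    For the finite-model case the abstract describes, the computation reduces to ordinary Bayesian updating over a discrete model index, P(M_i | D) ∝ P(D | M_i) P(M_i). A minimal sketch, with invented prior weights and likelihood values:

```python
# Posterior model probabilities for a finite set of alternative models.
priors = {"M1": 0.5, "M2": 0.3, "M3": 0.2}          # prior model weights
likelihoods = {"M1": 0.02, "M2": 0.05, "M3": 0.01}  # P(D | M_i), illustrative

unnorm = {m: priors[m] * likelihoods[m] for m in priors}
z = sum(unnorm.values())
posterior = {m: v / z for m, v in unnorm.items()}
print(posterior)  # predictions can then be averaged under these weights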

  10. Reducing software security risk through an integrated approach research initiative model based verification of the Secure Socket Layer (SSL) Protocol

    Science.gov (United States)

    Powell, John D.

    2003-01-01

    This document discusses the verification of the Secure Socket Layer (SSL) communication protocol as a demonstration of the Model Based Verification (MBV) portion of the verification instrument set being developed under the Reducing Software Security Risk (RSSR) Through an Integrated Approach research initiative. Code Q of the National Aeronautics and Space Administration (NASA) funds this project. The NASA Goddard Independent Verification and Validation (IV&V) facility manages this research program at the NASA agency level and the Assurance Technology Program Office (ATPO) manages the research locally at the Jet Propulsion Laboratory (California Institute of Technology), where the research is being carried out.

  11. Modelling and simulation of electrical energy systems through a complex systems approach using agent-based models. Case study: Under-frequency load shedding for refrigerators

    Energy Technology Data Exchange (ETDEWEB)

    Kremers, Enrique [Karlsruhe Institute of Technology (KIT), Karlsruhe (Germany). European Inst. for Energy Research (EIFER); Gonzalez de Durana, Jose Maria; Barambones, Oscar [Universidad del Pais Vasco, Vitoria (Spain). Escuela Universitaria de Ingenieria de Vitoria-Gasteiz

    2013-09-01

    One of the ways of studying complex systems is through modelling and simulation, which are used as tools to represent these systems in a virtual environment. Current advances in computing performance (which has been a major constraint in this field for some time) allow for the simulation of these kinds of systems within reasonable time horizons. One of the tools for simulating complex systems is agent-based modelling. This individual-centric approach is based on autonomous entities that can interact with each other, thus modelling the system in a disaggregated way. Agent-based models can be coupled with other modelling methods, such as continuous models and discrete events, which can be embedded or run in parallel to the multi-agent system. When representing the electrical energy system in a systemic and multi-layered way, it is treated as a true socio-technical system, in which not only technical models are taken into account, but also socio-behavioural ones. In this work, a number of different models for the parts of an electrical system are presented, related to production, demand and storage. The models are intended to be as simple as possible in order to be simulated in an integrated framework representing the system as a whole. Furthermore, the models allow the inclusion of social behaviour and other, not purely engineering-related aspects of the system, which have to be considered from a complex point of view. (orig.)
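
    As a flavour of the disaggregated agent modelling the case study names (under-frequency load shedding for refrigerators), here is a toy Python sketch; the thresholds, power rating and frequency trace are assumptions for illustration, not the paper's model.

```python
# Toy under-frequency load shedding: each fridge agent disconnects when the
# measured grid frequency falls below its own randomized threshold.
import random

random.seed(0)

class Fridge:
    def __init__(self):
        self.on = True
        # Randomized shedding threshold (Hz) avoids synchronous switching.
        self.f_off = random.uniform(49.6, 49.9)

    def step(self, freq_hz):
        if freq_hz < self.f_off:
            self.on = False              # shed load during the frequency dip
        elif freq_hz > 49.95:
            self.on = True               # reconnect once frequency recovers
        return 0.1 if self.on else 0.0   # kW drawn (assumed rating)

fridges = [Fridge() for _ in range(10_000)]
for freq in [50.0, 49.8, 49.7, 49.85, 50.0]:     # a frequency dip event
    load = sum(f.step(freq) for f in fridges)
    print(f"f = {freq:.2f} Hz -> aggregate fridge load {load:.0f} kW")
```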

  12. A Merton-Like Approach to Pricing Debt Based on a Non-Gaussian Asset Model

    Science.gov (United States)

    Borland, Lisa; Evnine, Jeremy; Pochart, Benoit

    2005-09-01

    We propose a generalization to Merton's model for evaluating credit spreads. In his original work, a company's assets were assumed to follow a log-normal process. We introduce fat tails and skew into this model, along the same lines as in the option pricing model of Borland and Bouchaud (2004, Quantitative Finance 4) and illustrate the effects of each component. Preliminary empirical results indicate that this model fits well to empirically observed credit spreads with a parameterization that also matched observed stock return distributions and option prices.

  13. Discovery of Novel Inhibitors for Nek6 Protein through Homology Model Assisted Structure Based Virtual Screening and Molecular Docking Approaches

    Directory of Open Access Journals (Sweden)

    P. Srinivasan

    2014-01-01

    Full Text Available Nek6 is a member of the NIMA (never in mitosis, gene A)-related serine/threonine kinase family that plays an important role in the initiation of mitotic cell cycle progression. This work is an attempt to emphasize the structural and functional relationship of the Nek6 protein based on homology modeling and binding pocket analysis. The three-dimensional structure of Nek6 was constructed by molecular modeling studies and the best model was further assessed by PROCHECK, ProSA, and ERRAT plots in order to analyze the quality and consistency of the generated model. The overall quality of the computed model showed 87.4% of amino acid residues under the favored region. A 3 ns molecular dynamics simulation confirmed that the structure was reliable and stable. Two lead compounds (Binding database ID: 15666, 18602) were retrieved through structure-based virtual screening and induced fit docking approaches as novel Nek6 inhibitors. Hence, we concluded that the potential compounds may act as new leads for designing Nek6 inhibitors.

  14. A Hidden Markov model-based approach in brandswitching (A case ...

    African Journals Online (AJOL)

    In this work, we considered a Hidden Markov Model for the Telecommunication Industry in Nigeria. There are five major mobile service providers presently in Nigeria: MTN, AIRTEL, GLOBACOM, ETISALAT and NITEL. We proposed a model for decision making in this sector by examining the rationale behind customers' ...

  15. A semi-automated approach for generating natural language requirements documents based on business process models

    NARCIS (Netherlands)

    Aysolmaz, Banu; Leopold, Henrik; Reijers, Hajo A.; Demirörs, Onur

    2018-01-01

    Context: The analysis of requirements for business-related software systems is often supported by using business process models. However, the final requirements are typically still specified in natural language. This means that the knowledge captured in process models must be consistently

  16. Production versus environmental impact trade-offs for Swiss cropping systems: a model-based approach

    Science.gov (United States)

    Necpalova, Magdalena; Lee, Juhwan; Six, Johan

    2017-04-01

    There is a growing need to improve the sustainability of agricultural systems. The key focus remains on optimizing current production systems in order to deliver food security at low environmental costs. It is therefore essential to identify and evaluate agricultural management practices for their potential to maintain or increase productivity and mitigate climate change and N pollution. Previous research on Swiss cropping systems has been concentrated on increasing crop productivity and soil fertility. Thus, relatively little is known about management effects on net soil greenhouse gas (GHG) emissions and environmental N losses in the long term. The aim of this study was to extrapolate findings from Swiss long-term field experiments and to evaluate the system-level sustainability of a wide range of cropping systems under conditions beyond field experimentation by comparing their crop productivity and impacts on soil carbon, net soil GHG emissions, NO3 leaching and soil N balance over 30 years. The DayCent model was previously parameterized for common Swiss crops and crop-specific management practices and evaluated for productivity, soil carbon dynamics and N2O emissions from Swiss cropping systems. Based on a prediction uncertainty criterion for crop productivity and soil carbon (rRMSE) and statistical analyses, the systems were grouped into the following categories: (a) farming system: organic (ORG), integrated (IN) and mineral (MIN); (b) tillage: conventional (CT), reduced (RT) and no-till (NT); (c) cover cropping: no cover cropping (NCC), winter cover cropping (CC) and winter green manuring (GM). The productivity of Swiss cropping systems was mainly driven by total N inputs to the systems. The GWP of systems ranged from -450 to 1309 kg CO2 eq ha-1 yr-1. All studied systems, except for ORG-RT-GM systems, acted as a source of net soil GHG emissions with the relative contribution of soil N2O emissions to GWP of more than 60%. The GWP of systems with CT decreased

  17. Constructing a Travel Risks’ Evaluation Model for Tour Freelancers Based on the ANP Approach

    Directory of Open Access Journals (Sweden)

    Chin-Tsai Lin

    2016-01-01

    Full Text Available This study constructs a new travel risks’ evaluation model for freelancers to evaluate and select tour groups by considering the interdependencies of the evaluation criteria used. First of all, the proposed model adopts the Nominal Group Technique (NGT to identify suitable evaluation criteria for evaluating travel risks. Six evaluation criteria and 18 subcriteria are obtained. The six evaluation criteria are financial risk, transportation risk, social risk, hygiene risk, sightseeing spot risk, and general risk for freelancer tour groups. Secondly, the model uses the analytic network process (ANP to determine the relative weight of the criteria. Finally, examples of group package tours (GPTs are used to demonstrate the travel risk evaluation process for this model. The results show that the Tokyo GPT is the best group tour. The proposed model helps freelancers to effectively evaluate travel risks and decision-making, making it highly applicable to academia and tour groups.

  18. A Model-Based Approach to Attention and Sensory Integration in Postural Control of Older Adults

    OpenAIRE

    Mahboobin, Arash; Loughlin, Patrick J.; Redfern, Mark S.

    2007-01-01

    We conducted a dual-task experiment that involved information processing (IP) tasks concurrent with postural perturbations to explore the interaction between attention and sensory integration in postural control in young and older adults. A postural control model incorporating sensory integration and the influence of attention was fit to the data, from which parameters were then obtained to quantify the interference of attention on postural control. The model hypothesizes that the cognitive p...

  19. Combining empirical and theory-based land use modelling approaches to assess future availability of land and economic potential for sustainable biofuel production: Argentina as a case study

    NARCIS (Netherlands)

    Diogo, V.; van der Hilst, Floortje; van Eijck, Janske; Faaij, André; Verstegen, Judith; Hilbert, J.; Carballo, S.; Volante, J.

    2014-01-01

    In this paper, a land-use modelling framework is presented combining empirical and theory-based modelling approaches to determine economic potential of biofuel production avoiding indirect land-use changes (iLUC) resulting from land competition with other functions. The empirical approach explores

  20. Combining empirical and theory-based land-use modelling approaches to assess economic potential of biofuel production avoiding iLUC: Argentina as a case study

    NARCIS (Netherlands)

    Diogo, V.; van der Hilst, F.; van Eijck, J.; Verstegen, J.A.; Hilbert, J.; Carballo, S.; Volante, J.; Faaij, A.

    2014-01-01

    In this paper, a land-use modelling framework is presented combining empirical and theory-based modelling approaches to determine economic potential of biofuel production avoiding indirect land-use changes (iLUC) resulting from land competition with other functions. The empirical approach explores

  1. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    Science.gov (United States)

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free approach to trajectory tracking of multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforwardly without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems assigned to each separate single-input single-output control channel, which ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.
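
    The paper's primitive-based scheme is considerably richer, but the core ILC idea it builds on can be shown in a few lines: a P-type learning update u_{k+1}(t) = u_k(t) + γ e_k(t+1) applied across repeated trials. The plant and learning gain below are invented for illustration.

```python
# Toy P-type iterative learning control run on a first-order SISO plant.
import numpy as np

def plant(u):
    """Plant y(t+1) = 0.9 y(t) + 0.5 u(t), zero initial state; returns y(1..T)."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = 0.9 * y[t] + 0.5 * u[t]
    return y[1:]

T = 50
ref = np.sin(np.linspace(0, 2 * np.pi, T))   # trajectory to track
u = np.zeros(T)
gamma = 0.8                                   # learning gain (assumed)

for k in range(30):                           # learning iterations (trials)
    e = ref - plant(u)                        # shifted error aligns with u(t)
    u = u + gamma * e                         # P-type ILC update
print("final RMS tracking error:", np.sqrt(np.mean((ref - plant(u))**2)))
```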

  2. A digital waveguide-based approach for Clavinet modeling and synthesis

    Science.gov (United States)

    Gabrielli, Leonardo; Välimäki, Vesa; Penttinen, Henri; Squartini, Stefano; Bilbao, Stefan

    2013-12-01

    The Clavinet is an electromechanical musical instrument produced in the mid-twentieth century. As is the case for other vintage instruments, it is subject to aging and requires great effort to be maintained or restored. This paper reports analyses conducted on a Hohner Clavinet D6 and proposes a computational model to faithfully reproduce the Clavinet sound in real time, from tone generation to the emulation of the electronic components. The string excitation signal model is physically inspired and represents a cheap solution in terms of both computational resources and especially memory requirements (compared, e.g., to sample playback systems). Pickups and amplifier models have been implemented which enhance the natural character of the sound with respect to previous work. A model has been implemented on a real-time software platform, Pure Data, capable of a 10-voice polyphony with low latency on an embedded device. Finally, subjective listening tests conducted using the current model are compared to previous tests showing slightly improved results.
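
    The physically inspired excitation-plus-waveguide idea can be illustrated with a generic Karplus–Strong-style delay-line string, a simplified relative of the model described rather than the Clavinet implementation itself; all parameters are illustrative.

```python
# Generic digital-waveguide string: noise burst in a delay line with a
# lossy two-point-average loop filter.
import numpy as np

fs = 44100                      # sample rate (Hz)
f0 = 220.0                      # fundamental frequency (Hz)
N = int(fs / f0)                # delay-line length in samples
dur = int(fs * 1.0)             # one second of output

rng = np.random.default_rng(0)
delay = rng.uniform(-1, 1, N)   # excitation: noise burst loaded into the line
out = np.empty(dur)
for n in range(dur):
    out[n] = delay[n % N]
    # Loss filter: two-point average, slightly attenuated each round trip.
    delay[n % N] = 0.996 * 0.5 * (delay[n % N] + delay[(n + 1) % N])
print("peak amplitude in the last period:", np.abs(out[-N:]).max())
```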

  3. A Model-Based Approach for Joint Analysis of Pain Intensity and Opioid Consumption in Postoperative Pain

    DEFF Research Database (Denmark)

    Juul, Rasmus V; Knøsgaard, Katrine R; Olesen, Anne E

    2016-01-01

    Joint analysis of pain intensity and opioid consumption is encouraged in trials of postoperative pain. However, previous approaches have not appropriately addressed the complexity of their interrelation in time. In this study, we applied a non-linear mixed effects model to simultaneously study pain...... intensity and opioid consumption in a 4-h postoperative period for 44 patients undergoing percutaneous kidney stone surgery. Analysis was based on 748 Numerical Rating Scale (NRS) scores of pain intensity and 51 observed morphine and oxycodone dosing events. A joint model was developed to describe...... the recurrent pattern of four key phases determining the development of pain intensity and opioid consumption in time; (A) Distribution of pain intensity scores which followed a truncated Poisson distribution with time-dependent mean score ranging from 0.93 to 2.45; (B) Probability of transition to threshold...

  4. Data Model Approach And Markov Chain Based Analysis Of Multi-Level Queue Scheduling

    Directory of Open Access Journals (Sweden)

    Diwakar Shukla

    2010-01-01

    Full Text Available There are many CPU scheduling algorithms in the literature, like FIFO, Round Robin, Shortest-Job-First and so on. Multilevel queue scheduling is superior to these due to its better management of a variety of processes. In this paper, a Markov chain model is used for a general setup of multilevel queue scheduling, and the scheduler is assumed to perform random movement on queues over the quantum of time. Performance of scheduling is examined through a row-dependent data model. It is found that with increasing values of α and d, the chance of the system going over to the waiting state reduces. At some interesting combinations of α and d, it diminishes to zero, thereby providing some clue regarding a better choice of queues over others for high-priority jobs. It is found that if queue priorities are added to the scheduling intelligently then better performance can be obtained. The data model helps in choosing appropriate preferences.
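
    The long-run behaviour of such a scheduler Markov chain follows from its transition matrix in the usual way. The sketch below uses a hypothetical four-state chain (three queues plus a waiting state), not the paper's row-dependent data model.

```python
# Steady-state distribution of a toy scheduler Markov chain.
import numpy as np

#             Q1    Q2    Q3    W
P = np.array([[0.50, 0.30, 0.10, 0.10],
              [0.20, 0.50, 0.20, 0.10],
              [0.10, 0.30, 0.40, 0.20],
              [0.40, 0.30, 0.20, 0.10]])   # rows sum to 1 (assumed values)

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()
print("long-run share of time in [Q1, Q2, Q3, W]:", np.round(pi, 3))
```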

  5. A Model-Based Approach to Recovering the Structure of a Plant from Images

    KAUST Repository

    Ward, Ben

    2015-03-19

    We present a method for recovering the structure of a plant directly from a small set of widely-spaced images for automated analysis of phenotype. Structure recovery is more complex than shape estimation, but the resulting structure estimate is more closely related to phenotype than is a 3D geometric model. The method we propose is applicable to a wide variety of plants, but is demonstrated on wheat. Wheat is composed of thin elements with few identifiable features, making it difficult to analyse using standard feature matching techniques. Our method instead analyses the structure of plants using only their silhouettes. We employ a generate-and-test method, using a database of manually modelled leaves and a model for their composition to synthesise plausible plant structures which are evaluated against the images. The method is capable of efficiently recovering accurate estimates of plant structure in a wide variety of imaging scenarios, without manual intervention.

  6. A model-based approach to attention and sensory integration in postural control of older adults.

    Science.gov (United States)

    Mahboobin, Arash; Loughlin, Patrick J; Redfern, Mark S

    2007-12-18

    We conducted a dual-task experiment that involved information processing (IP) tasks concurrent with postural perturbations to explore the interaction between attention and sensory integration in postural control in young and older adults. A postural control model incorporating sensory integration and the influence of attention was fit to the data, from which parameters were then obtained to quantify the interference of attention on postural control. The model hypothesizes that the cognitive processing and integration of sensory inputs for balance requires time, and that attention influences this processing time, as well as sensory selection by facilitating specific sensory channels. Performing a concurrent IP task had an overall effect on the time delay. Differences in the time delay of the postural control model were found for the older adults. The results suggest enhanced vulnerability of balance processes in older adults to interference from concurrent cognitive IP tasks.

  7. A Model-Based Approach for Bridging Virtual and Physical Sensor Nodes in a Hybrid Simulation Framework

    Directory of Open Access Journals (Sweden)

    Mohammad Mozumdar

    2014-06-01

    Full Text Available The Model Based Design (MBD) approach is a popular trend to speed up application development of embedded systems, which uses high-level abstractions to capture functional requirements in an executable manner, and which automates implementation code generation. Wireless Sensor Networks (WSNs) are an emerging, very promising application area for embedded systems. However, there is a lack of tools in this area, which would allow an application developer to model a WSN application by using high level abstractions, simulate it mapped to a multi-node scenario for functional analysis, and finally use the refined model to automatically generate code for different WSN platforms. Motivated by this idea, in this paper we present a hybrid simulation framework that not only follows the MBD approach for WSN application development, but also interconnects a simulated sub-network with a physical sub-network and then allows one to co-simulate them, which is also known as Hardware-In-the-Loop (HIL) simulation.

  8. An Advanced Computational Approach to System of Systems Analysis & Architecting Using Agent-Based Behavioral Model

    Science.gov (United States)

    2013-03-29

  9. A novel modelling approach for condensing boilers based on hybrid dynamical systems

    NARCIS (Netherlands)

    Satyavada, H.; Baldi, S.

    2016-01-01

    Condensing boilers use waste heat from flue gases to pre-heat cold water entering the boiler. Flue gases are condensed into liquid form, thus recovering their latent heat of vaporization, which results in as much as 10%–12% increase in efficiency. Modeling these heat transfer phenomena is crucial to

  10. Mathematical modeling based on ordinary differential equations: A promising approach to vaccinology.

    Science.gov (United States)

    Bonin, Carla Rezende Barbosa; Fernandes, Guilherme Cortes; Dos Santos, Rodrigo Weber; Lobosco, Marcelo

    2017-02-01

    New contributions that aim to accelerate the development or to improve the efficacy and safety of vaccines arise from many different areas of research and technology. One of these areas is computational science, which traditionally participates in the initial steps, such as the pre-screening of active substances that have the potential to become a vaccine antigen. In this work, we present another promising way to use computational science in vaccinology: mathematical and computational models of important cell and protein dynamics of the immune system. A system of Ordinary Differential Equations represents different immune system populations, such as B cells and T cells, antigen presenting cells and antibodies. In this way, it is possible to simulate, in silico, the immune response to vaccines under development or under study. Distinct scenarios can be simulated by varying parameters of the mathematical model. As a proof of concept, we developed a model of the immune response to vaccination against the yellow fever. Our simulations have shown consistent results when compared with experimental data available in the literature. The model is generic enough to represent the action of other diseases or vaccines in the human immune system, such as dengue and Zika virus.
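
    A minimal example of the ODE-based approach, with only two populations and invented equations and rate constants, far simpler than a full yellow-fever model:

```python
# Two-population immune-response sketch: viral antigen V and antibodies A.
import numpy as np
from scipy.integrate import solve_ivp

def immune(t, y, r, k, p, d):
    V, A = y
    dV = r * V - k * V * A        # antigen grows, is cleared by antibodies
    dA = p * V - d * A            # antigen stimulates antibody production
    return [dV, dA]

# Rate constants (r, k, p, d) are illustrative assumptions.
sol = solve_ivp(immune, (0, 30), [1.0, 0.1],
                args=(0.8, 0.5, 0.3, 0.05), dense_output=True)
t = np.linspace(0, 30, 7)
print("antigen trajectory:", np.round(sol.sol(t)[0], 3))
```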

  11. A polygon-based modeling approach to assess exposure of resources and assets to wildfire

    Science.gov (United States)

    Matthew P. Thompson; Joe Scott; Jeffrey D. Kaiden; Julie W. Gilbertson-Day

    2013-01-01

    Spatially explicit burn probability modeling is increasingly applied to assess wildfire risk and inform mitigation strategy development. Burn probabilities are typically expressed on a per-pixel basis, calculated as the number of times a pixel burns divided by the number of simulation iterations. Spatial intersection of highly valued resources and assets (HVRAs) with...
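
    The per-pixel burn probability itself is a simple ratio, as the following numpy sketch shows; the simulated fire perimeters here are random placeholders rather than output of a fire-spread model.

```python
# Per-pixel burn probability = times burned / simulation iterations.
import numpy as np

rng = np.random.default_rng(7)
n_iters, ny, nx = 10_000, 50, 50

burn_counts = np.zeros((ny, nx))
for _ in range(n_iters):
    # Placeholder "fire": a random rectangular patch burns this iteration.
    y0, x0 = rng.integers(0, ny - 10), rng.integers(0, nx - 10)
    burn_counts[y0:y0 + 10, x0:x0 + 10] += 1

burn_probability = burn_counts / n_iters
print("mean per-pixel burn probability:", burn_probability.mean().round(4))
```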

  12. Modeling of twisted and coiled polymer (TCP) muscle based on phenomenological approach

    Science.gov (United States)

    Karami, Farzad; Tadesse, Yonas

    2017-12-01

    Twisted and coiled polymer (TCP) muscles are linear actuators that respond to changes in temperature. Exploiting a high negative coefficient of thermal expansion (CTE) and helical geometry gives them a significant ability to change length within a limited temperature range. Several applications and experimental data of these materials have been demonstrated in the last few years. To use these actuators in robotics and control system applications, a mathematical model for predicting their behavior is essential. In this work, a practical and accurate phenomenological model for estimating the displacement of TCP muscles, as a function of the load as well as input electrical current, is proposed. The problem is broken down into two parts, i.e. modeling of the electro-thermal and then the thermo-elastic behavior of the muscles. For the first part, a differential equation, with a changing electrical resistance term, is derived. Next, by using a temperature-dependent modulus of elasticity and CTE as well as taking the geometry of the muscles into account, an expression for displacement is derived. Experimental data for different loads and actuation current levels are used for verifying the model and investigating its accuracy. The results show a good agreement between the simulation and experimental results for all loads.
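
    Following the abstract's two-part split, a toy sketch: an electro-thermal ODE for coil temperature under Joule heating with a temperature-dependent resistance, then a thermo-elastic map from temperature rise to displacement. All parameter values are assumptions for illustration, not the paper's fitted model.

```python
# Two-stage TCP muscle sketch with invented parameters.
import numpy as np

# Electro-thermal: m_c * dT/dt = I^2 * R(T) - h * (T - T_amb)
m_c, h, T_amb = 0.8, 0.02, 25.0        # thermal mass, loss coeff., ambient
R0, alpha_R = 10.0, 0.004              # resistance and its temperature slope
I = 0.3                                # actuation current (A), assumed

dt, T = 0.01, T_amb
for _ in range(int(60 / dt)):          # 60 s of forward-Euler integration
    R = R0 * (1 + alpha_R * (T - T_amb))
    T += dt * (I**2 * R - h * (T - T_amb)) / m_c

# Thermo-elastic: displacement from an effective (negative) CTE and coil
# length; a load factor crudely softens the response under weight.
L0, cte_eff, load_factor = 0.1, -1.5e-3, 0.8
delta = load_factor * cte_eff * L0 * (T - T_amb)
print(f"temp after 60 s ≈ {T:.1f} °C, contraction ≈ {abs(delta)*1000:.2f} mm")
```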

  13. A geo-information theoretical approach to inductive erosion modelling based on terrain mapping units

    NARCIS (Netherlands)

    Suryana, N.

    1997-01-01

    Three main aspects of the research, namely the concept of object orientation, the development of an Inductive Erosion Model (IEM) and the development of a framework for handling uncertainty in the data or information resulting from a GIS are interwoven in this thesis. The first and the second aspect

  14. Impact of Asian Aerosols on Precipitation Over California: An Observational and Model Based Approach

    Science.gov (United States)

    Naeger, Aaron R.; Molthan, Andrew L.; Zavodsky, Bradley T.; Creamean, Jessie M.

    2015-01-01

    Dust and pollution emissions from Asia are often transported across the Pacific Ocean to over the western United States. Therefore, it is essential to fully understand the impact of these aerosols on clouds and precipitation forming over the eastern Pacific and western United States, especially during atmospheric river events that account for up to half of California's annual precipitation and can lead to widespread flooding. In order for numerical modeling simulations to accurately represent the present and future regional climate of the western United States, we must account for the aerosol-cloud-precipitation interactions associated with Asian dust and pollution aerosols. Therefore, we have constructed a detailed study utilizing multi-sensor satellite observations, NOAA-led field campaign measurements, and targeted numerical modeling studies where Asian aerosols interacted with cloud and precipitation processes over the western United States. In particular, we utilize aerosol optical depth retrievals from the NASA Moderate Resolution Imaging Spectroradiometer (MODIS), NOAA Geostationary Operational Environmental Satellite (GOES-11), and Japan Meteorological Agency (JMA) Multi-functional Transport Satellite (MTSAT) to effectively detect and monitor the trans-Pacific transport of Asian dust and pollution. The aerosol optical depth (AOD) retrievals are used in assimilating the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) in order to provide the model with an accurate representation of the aerosol spatial distribution across the Pacific. We conduct WRF-Chem model simulations of several cold-season atmospheric river events that interacted with Asian aerosols and brought significant precipitation over California during February-March 2011 when the NOAA CalWater field campaign was ongoing. The CalWater field campaign consisted of aircraft and surface measurements of aerosol and precipitation processes that help extensively validate our WRF

  15. Applying Regression Models with Mixed Frequency Data in Modeling and Prediction of Iran's Wheat Import Value (Generalized OLS-based ARDL Approach

    Directory of Open Access Journals (Sweden)

    mitra jalerajabi

    2014-10-01

    Full Text Available Due to the importance of import management, this study applies a generalized ARDL approach to estimate MIDAS regression for wheat import value and to compare the accuracy of forecasts with those computed by the regression with adjusted data model. Mixed frequency sampling models aim to extract information from high frequency indicators so that independent variables with lower frequencies are modeled and forecasted. Due to a more precise identification of the relationships among the variables, more accurate prediction is expected. Based on the results of both the estimated regression with adjusted frequency models and MIDAS for the years 1978-2003 as a training period, wheat import value was positively related to internal products and the exchange rate, while the relative price variable had an adverse relation with Iran's wheat import value. Based on the results from conventional statistics such as RMSE, MAD, MAPE and the statistical significance, MIDAS models using data sets of annual wheat import value, internal products, relative price and seasonal exchange rate significantly improve prediction of annual wheat import value for the years 2004-2008 as a testing period. Hence, it is recommended that applying prediction approaches with mixed data improves modeling and prediction of agricultural import value, especially for strategic import products.
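
    The essential MIDAS ingredient, mapping a high-frequency regressor onto the low-frequency sampling via a parsimonious weight function, can be sketched as follows with exponential Almon weights; the data and theta parameters are invented, and the paper's generalized OLS-based ARDL estimation is not reproduced here.

```python
# Exponential Almon weighting of quarterly data into an annual regressor.
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    j = np.arange(n_lags)
    w = np.exp(theta1 * j + theta2 * j**2)
    return w / w.sum()              # weights sum to one

rng = np.random.default_rng(0)
years, lags = 26, 4                 # e.g. 1978-2003, 4 quarters per year
quarterly_fx = rng.normal(100, 5, (years, lags))   # synthetic exchange rate

w = exp_almon_weights(0.1, -0.2, lags)  # assumed weighting parameters
annual_regressor = quarterly_fx @ w     # high-frequency info -> annual series
print("weights:", np.round(w, 3))
print("first annual regressor value:", annual_regressor[0].round(2))
```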

  16. A stepwise model for simulation-based curriculum development for clinical skills, a modification of the six-step approach.

    Science.gov (United States)

    Khamis, Nehal N; Satava, Richard M; Alnassar, Sami A; Kern, David E

    2016-01-01

    Despite the rapid growth in the use of simulation in health professions education, courses vary considerably in quality. Many do not integrate efficiently into an overall school/program curriculum or conform to academic accreditation requirements. Moreover, some of the guidelines for simulation design are specialty specific. We designed a model that integrates best practices for effective simulation-based training and a modification of Kern et al.'s 6-step approach for curriculum development. We invited international simulation and health professions education experts to complete a questionnaire evaluating the model. We reviewed comments and suggested modifications from respondents and reached consensus on a revised version of the model. We recruited 17 simulation and education experts. They expressed a consensus on the seven proposed curricular steps: problem identification and general needs assessment, targeted needs assessment, goals and objectives, educational strategies, individual assessment/feedback, program evaluation, and implementation. We received several suggestions for descriptors that applied the steps to simulation, leading to some revisions in the model. We have developed a model that integrates principles of curriculum development and simulation design that is applicable across specialties. Its use could lead to high-quality simulation courses that integrate efficiently into an overall curriculum.

  17. A new model for quantum games based on the Marinatto–Weber approach

    International Nuclear Information System (INIS)

    Frąckiewicz, Piotr

    2013-01-01

    The Marinatto–Weber approach to quantum games is a straightforward way to apply the power of quantum mechanics to classical game theory. In the simplest case, the quantum scheme is that players manipulate their own qubits of a two-qubit state either with the identity 1 or the Pauli operator σ_x. However, such a simplification of the scheme raises doubt as to whether it could really reflect a quantum game. In this paper we put forward examples which may constitute arguments against the present form of the Marinatto–Weber scheme. Next, we modify the scheme to eliminate the undesirable properties of the protocol by extending the players’ strategy sets. (paper)

  18. Modeling Pedestrian’s Conformity Violation Behavior: A Complex Network Based Approach

    Directory of Open Access Journals (Sweden)

    Zhuping Zhou

    2014-01-01

    Full Text Available Pedestrian injuries and fatalities present a problem all over the world. Pedestrian conformity violation behaviors, which lead to many pedestrian crashes, are common phenomena at the signalized intersections in China. The concepts and metrics of complex networks are applied to analyze the structural characteristics and evolution rules of the pedestrian network of conformity violation crossings. First, a network of pedestrians crossing the street is established, and the network’s degree distributions are analyzed. Then, by using the basic idea of the SI model, a spreading model of pedestrian illegal crossing behavior is proposed. Finally, through simulation analysis, pedestrians’ illegal crossing behavior trends are obtained for different network structures and different spreading rates. Some conclusions are drawn: as the waiting time increases, more pedestrians will join in the violation crossing once a pedestrian first crosses on red, and pedestrians’ conformity violation behavior will increase as the spreading rate increases.
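
    The SI-type spreading idea can be sketched in a few lines; the crowd size, spreading rate and red-phase length below are invented for illustration, not taken from the study.

```python
# SI contagion sketch: waiting pedestrians become "violators" by contact
# once a first pedestrian crosses on red.
N = 30            # pedestrians waiting at the curb (assumed)
beta = 0.15       # spreading rate per contact per second (assumed)
dt, I = 0.1, 1.0  # time step (s); one pedestrian crosses on red first

for step in range(int(60 / dt)):      # 60 s of red phase
    S = N - I                         # still-waiting (susceptible) pedestrians
    I += dt * beta * S * I / N        # classic SI contagion term
print(f"violators after 60 s: {I:.1f} of {N}")
```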

  19. A Model-Based Approach for the Measurement of Eye Movements Using Image Processing

    Science.gov (United States)

    Sung, Kwangjae; Reschke, Millard F.

    1997-01-01

    This paper describes a video eye-tracking algorithm which searches for the best fit of the pupil modeled as a circular disk. The algorithm is robust to common image artifacts such as the droopy eyelids and light reflections while maintaining the measurement resolution available by the centroid algorithm. The presented algorithm is used to derive the pupil size and center coordinates, and can be combined with iris-tracking techniques to measure ocular torsion. A comparison search method of pupil candidates using pixel coordinate reference lookup tables optimizes the processing requirements for a least square fit of the circular disk model. This paper includes quantitative analyses and simulation results for the resolution and the robustness of the algorithm. The algorithm presented in this paper provides a platform for a noninvasive, multidimensional eye measurement system which can be used for clinical and research applications requiring the precise recording of eye movements in three-dimensional space.
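
    One common way to realize a least-squares fit of a circular-disk pupil model is the Kasa algebraic circle fit, shown below on synthetic edge points as an illustration; this is not necessarily the paper's exact comparison-search scheme.

```python
# Kasa algebraic least-squares circle fit for pupil center and radius.
import numpy as np

rng = np.random.default_rng(3)
# Synthetic pupil edge: circle of radius 40 px centered at (160, 120) + noise.
theta = rng.uniform(0, 2 * np.pi, 200)
x = 160 + 40 * np.cos(theta) + rng.normal(0, 0.5, 200)
y = 120 + 40 * np.sin(theta) + rng.normal(0, 0.5, 200)

# Solve x^2 + y^2 = 2a x + 2b y + c in the least-squares sense.
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
r = np.sqrt(c + a**2 + b**2)
print(f"pupil center ≈ ({a:.1f}, {b:.1f}), radius ≈ {r:.1f} px")
```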

  20. Modelling and Analysis for Cyber-Physical Systems: An SMT-based approach

    DEFF Research Database (Denmark)

    Dung, Phan Anh

    Duration Calculus has shown its potential as a domain specific language in a Smart Meter case study. Moreover, counting semantics has proven useful in connection with tool-based support for Duration Calculus. To extend SMT techniques towards better support for analysis of CPS, we proposed algorithms for handling...

  1. Bridging the Gap between Expert-Novice Differences: The Model-Based Feedback Approach

    Science.gov (United States)

    Ifenthaler, Dirk

    2011-01-01

    The study adds to the body of knowledge about different types of feedback. Feedback is considered a fundamental component for supporting and regulating learning processes. Especially in computer-based and self-regulated learning environments, the nature of feedback is of critical importance. Hence, this study investigates different types of…

  2. Modeling Zombie Outbreaks: A Problem-Based Approach to Improving Mathematics One Brain at a Time

    Science.gov (United States)

    Lewis, Matthew; Powell, James A.

    2016-01-01

    A great deal of educational literature has focused on problem-based learning (PBL) in mathematics at the primary and secondary level, but arguably there is an even greater need for PBL in college math courses. We present a project centered around the Humans versus Zombies moderated tag game played on the Utah State University campus. We discuss…

  3. An equalised global graphical model-based approach for multi-camera object tracking

    OpenAIRE

    Chen, Weihua; Cao, Lijun; Chen, Xiaotang; Huang, Kaiqi

    2015-01-01

    Non-overlapping multi-camera visual object tracking typically consists of two steps: single camera object tracking and inter-camera object tracking. Most tracking methods focus on single camera object tracking, which happens in the same scene, while for real surveillance scenes, inter-camera object tracking is needed and single camera tracking methods cannot work effectively. In this paper, we try to improve the overall multi-camera object tracking performance by a global graph model with...

  4. Evaluating Outdoor Water Use Demand under Changing Climatic and Demographic Conditions: An Agent-based Modeling Approach

    Science.gov (United States)

    Kanta, L.; Berglund, E. Z.; Soh, M. H.

    2017-12-01

    Outdoor water-use for landscape and irrigation constitutes a significant end-use in total residential water demand. In periods of water shortages, utilities may reduce garden demands by implementing irrigation system audits, rebate programs, local ordinances, and voluntary or mandatory water-use restrictions. Because utilities do not typically record outdoor and indoor water-uses separately, the effects of policies for reducing garden demands cannot be readily calculated. The volume of water required to meet garden demands depends on the housing density, lawn size, type of vegetation, climatic conditions, efficiency of garden irrigation systems, and consumer water-use behaviors. Many existing outdoor demand estimation methods are deterministic and do not include consumer responses to conservation campaigns. In addition, mandatory restrictions may have a substantial impact on reducing outdoor demands, but the effectiveness of mandatory restrictions depends on the timing and the frequency of restrictions, in addition to the distribution of housing density and consumer types within a community. This research investigates a garden end-use model by coupling an agent-based modeling approach and a mechanistic-stochastic water demand model to create a methodology for estimating garden demand and evaluating demand reduction policies. The garden demand model is developed for two water utilities, using diverse data sets, including residential customer billing records, outdoor conservation programs, frequency and type of mandatory water-use restrictions, lot size distribution, population growth, and climatic data. A set of garden irrigation parameter values, which are based on the efficiency of irrigation systems and irrigation habits of consumers, are determined for a set of conservation ordinances and restrictions. The model parameters are then validated using customer water usage data from the participating water utilities. A sensitivity analysis is conducted for garden

  5. modeling, observation and control, a multi-model approach

    OpenAIRE

    Elkhalil, Mansoura

    2011-01-01

    This thesis is devoted to the control of systems whose dynamics can be suitably described by a multimodel approach, starting from an investigation of model reference adaptive control performance enhancement. Four multimodel control approaches have been proposed. The first approach is based on an output reference model control design. A successful experimental validation involving a chemical reactor has been carried out. The second approach is based on a suitable partial state model reference ...

  6. A collaborative knowledge management framework for supply chains: A UML-based model approach

    Directory of Open Access Journals (Sweden)

    Jorge Esteban Hernández

    2008-12-01

    Full Text Available In the most general cases, collaborative activities imply a distributed decision-making process which involves several supply chain nodes. In this paper, by means of a literature review, and by also considering the deficiencies of existing proposals, a UML-based collaborative knowledge management framework is proposed. In addition, this proposal synthesizes existing knowledge, and it not only fulfils, but enriches, each component with the modeller’s own knowledge.

  7. A Ground-Up Model for Gun Violence Reduction: A Community-Based Public Health Approach.

    Science.gov (United States)

    Byrdsong, T Rashad; Devan, Angela; Yamatani, Hide

    2016-01-01

    The suggested strategy for the reduction of violence is to collaboratively address the problem, based on an intervention system focused on prevention, rehabilitation, and development. This strategy is capable of engaging community residents in positive ways, and it empowers them to take ownership and sustain much-needed resident commitments to achieve long-term public safety. The community residents largely insist that over-reliance on law enforcement to control violence invites further affliction among Black youth and adults.

  8. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    Science.gov (United States)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique is made out of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used. A set of pictures of a real boutique or space is taken and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added like paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC. This scanner allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made out of all the merchandises and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized in order to allow high quality rendering.

  9. Influence of dental restorations and mastication loadings on dentine fatigue behaviour: Image-based modelling approach.

    Science.gov (United States)

    Vukicevic, Arso M; Zelic, Ksenija; Jovicic, Gordana; Djuric, Marija; Filipovic, Nenad

    2015-05-01

    The aim of this study was to use Finite Element Analysis (FEA) to estimate the influence of various mastication loads and different tooth treatments (composite restoration and endodontic treatment) on dentine fatigue. The analysis of fatigue behaviour of human dentine in intact and composite restored teeth with root-canal-treatment using FEA and fatigue theory was performed. Dentine fatigue behaviour was analysed in three virtual models: intact, composite-restored and endodontically-treated tooth. Volumetric change during the polymerization of composite was modelled by thermal expansion in a heat transfer analysis. Low and high shrinkage stresses were obtained by varying the linear shrinkage of composite. Mastication forces were applied occlusally with loads of 100, 150 and 200 N. Assuming one million cycles, the Fatigue Failure Index (FFI) was determined using Goodman's criterion while residual fatigue lifetime assessment was performed using the Paris power law. The analysis of the Goodman diagram gave both the maximal allowed crack size and the maximal number of cycles for the given stress ratio. The size of cracks was measured on virtual models. For the given conditions, fatigue failure is not likely to happen either in the intact tooth or in treated teeth with low shrinkage stress. In the cases of high shrinkage stress, crack length was much larger than the maximal allowed crack and failure occurred with 150 and 200 N loads. The maximal allowed crack size was slightly lower in the tooth with root canal treatment, which induced a somewhat higher FFI than in the case of the tooth with only a composite restoration. The main factors that lead to dentine fatigue are the levels of occlusal load and polymerization stress. However, root canal treatment has a small influence on dentine fatigue. The methodology proposed in this study provides a new insight into the fatigue behaviour of teeth after dental treatments. Furthermore, it estimates maximal allowed crack size and maximal number of cycles for a

  10. COS-7-based model: methodological approach to study John Cunningham virus replication cycle.

    Science.gov (United States)

    Prezioso, C; Scribano, D; Rodio, D M; Ambrosi, C; Trancassini, M; Palamara, A T; Pietropaolo, V

    2018-02-05

    John Cunningham virus (JCV) is a human neurotropic polyomavirus whose replication in the Central Nervous System (CNS) induces the fatal demyelinating disease, progressive multifocal leukoencephalopathy (PML). JCV propagation and PML investigation have been severely hampered by the lack of an animal model, and cell culture systems to propagate JCV have been very limited in their availability and robustness. We previously confirmed that the JCV CY strain efficiently replicated in COS-7 cells, as demonstrated by the progressive increase of viral load by quantitative PCR (Q-PCR) during the time of transfection, and that the archetypal regulatory structure was maintained, although two characteristic point mutations were detected during the viral cycle. This short report is an important extension of our previous efforts in defining our reliable model culture system able to support a productive JCV infection. Supernatants collected from transfected cells have been used to infect a freshly seeded COS-7 cell line. An infectious viral progeny was obtained, as confirmed by Western blot and immunofluorescence assay. During infection, the archetype regulatory region was conserved. Importantly, in this study we developed an improved culture system to obtain a large-scale production of JC virus in order to study the genetic features, the biology and the pathogenic mechanisms of JC virus that induce PML.

  11. PAGIS summary report of phase 1: a common methodological approach based on European data and models

    International Nuclear Information System (INIS)

    Cadelli, N.; Cottone, G.; Bertozzi, G.; Girardi, F.

    1984-01-01

    Since 1982 a joint study has been launched by the CEC with the participation of national institutions in the E.C., aiming at a Performance Assessment of Geological Isolation Systems (PAGIS) for HLW disposal. This document is a summary of the first phase of the study which was devoted to the collection of data and models and to the choice of an appropriate methodology. To this purpose, real or national sites have been chosen, which are representative of three types of continental geological formations in the E.C.: clay, granite and salt (although the choices imply no commitment of any kind about their final use). Moreover, sub-seabed areas have also been identified. The study covers the following items: - basic data on waste characteristics, site data and repository designs; - methodology, which allows sensitivity and uncertainty analyses to be performed, as well as the assessment of radiation doses to individuals and populations; - preliminary modelling of radionuclide release and migration through the geosphere (near- and far-field) and the biosphere following their various pathways to man; - selection of the most relevant radionuclide release scenarios and their probability of occurrence. Reference values have been selected for the basic data as well as variants covering the various options which are under consideration in the different countries of the E.C.

  12. Modelling metal-humate interactions: an approach based on the Gibbs-Donnan concept

    International Nuclear Information System (INIS)

    Ephraim, J.H.

    1995-01-01

    Humic and fulvic acids constitute an appreciable portion of organic substances in both aquatic and terrestrial environments. Their ability to sequester metal ions and other trace elements has engaged the interest of numerous environmental scientists recently and even though considerable advances have been made, a lot more remains unknown in the area. The existence of high molecular weight fractions and functional group heterogeneity have endowed ion exchange characteristics to these substances. For example, the cation exchange capacities of some humic substances have been compared to those of smectites. Recent development in the solution chemistry has also indicated that humic substances have the capability to interact with other anions because of their amphiphilic nature. In this paper, metal-humate interaction is described by relying heavily on information obtained from treatment of the solution chemistry of ion exchangers as typical polymers. In such a treatment, the perturbations to the metal-humate interaction are estimated by resort to the Gibbs-Donnan concept where the humic substance molecule is envisaged as having a potential counter-ion concentrating region around its molecular domain into which diffusible components can enter or leave depending on their corresponding electrochemical potentials. Information from studies with ion exchangers have been adapted to describe ionic equilibria involving these substances by making it possible to characterise the configuration/conformation of these natural organic acids and to correct for electrostatic effects in the metal-humate interaction. The resultant unified physicochemical approach has facilitated the identification and estimation of the complications to the solution chemistry of humic substances. (authors). 15 refs., 1 fig

  13. Tracking Control of A Balancing Robot – A Model-Based Approach

    Directory of Open Access Journals (Sweden)

    Zaiczek Tobias

    2014-08-01

    Full Text Available This paper presents a control concept for a single-axle mobile robot moving on the horizontal plane. A mathematical model of the nonholonomic mechanical system is derived using Hamel's equations of motion. Subsequently, a concept for a tracking controller is described in detail. This controller keeps the mobile robot on a given reference trajectory while maintaining it in an upright position. The control objective is reached by a cascade control structure. By an appropriate input transformation, we are able to utilize an input-output linearization of a subsystem. For the remaining dynamics a linear set-point control law is presented. Finally, the performance of the implemented control law is illustrated by simulation results.

  14. Orthographic Software Modelling: A Novel Approach to View-Based Software Engineering

    Science.gov (United States)

    Atkinson, Colin

    The need to support multiple views of complex software architectures, each capturing a different aspect of the system under development, has been recognized for a long time. Even the very first object-oriented analysis/design methods such as the Booch method and OMT supported a number of different diagram types (e.g. structural, behavioral, operational), and subsequent methods such as Fusion, Kruchten's 4+1 views and the Rational Unified Process (RUP) have added many more views over time. Today's leading modeling languages, such as the UML and SysML, are also oriented towards supporting different views (i.e. diagram types), each able to portray a different facet of a system's architecture. More recently, so-called enterprise architecture frameworks such as the Zachman Framework, TOGAF and RM-ODP have become popular. These add a whole set of new non-functional views to the views typically emphasized in traditional software engineering environments.

  15. Towards a dynamic assessment of raw materials criticality: linking agent-based demand with material flow supply modelling approaches.

    Science.gov (United States)

    Knoeri, Christof; Wäger, Patrick A; Stamp, Anna; Althaus, Hans-Joerg; Weil, Marcel

    2013-09-01

    Emerging technologies such as information and communication, photovoltaic, or battery technologies are expected to significantly increase the demand for scarce metals in the near future. The recently developed methods to evaluate the criticality of mineral raw materials typically provide a 'snapshot' of the criticality of a certain material at one point in time, using static indicators both for supply risk and for the impacts of supply restrictions. While allowing for insights into the mechanisms behind the criticality of raw materials, these methods cannot account for dynamic changes in products and/or activities over time. In this paper we propose a conceptual framework intended to overcome these limitations by including the dynamic interactions between different possible demand and supply configurations. The framework integrates an agent-based behaviour model, where demand emerges from individual agent decisions and interaction, into a dynamic material flow model representing the materials' stocks and flows. Within the framework, the environmental implications of substitution decisions are evaluated by applying life-cycle assessment methodology. The approach makes a first step towards a dynamic criticality assessment and will enhance the understanding of industrial substitution decisions and the environmental implications related to critical metals. We discuss the potential and limitations of such an approach in contrast to state-of-the-art methods, and how it might lead to criticality assessments tailored to the specific circumstances of single industrial sectors or individual companies. Copyright © 2013 Elsevier B.V. All rights reserved.
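    A minimal sketch of the kind of coupling the framework proposes, under invented product, lifetime and recycling-rate assumptions: agent decisions generate demand, demand becomes an inflow to an aging in-use stock, and end-of-life outflows feed back through a crude scarcity signal.

```python
import random

# Agents decide on adoption, decisions become material inflows, the in-use
# stock ages, and end-of-life flows close the loop via a crude scarcity signal.
LIFETIME = 5            # years a product stays in use (assumed)
METAL_PER_UNIT = 0.01   # kg of critical metal per product (assumed)

class Agent:
    def __init__(self):
        self.owns = False
    def decide(self, scarcity):
        # stochastic adoption; higher scarcity suppresses adoption (substitution)
        if not self.owns and random.random() < max(0.05, 0.3 - scarcity):
            self.owns = True
            return 1
        return 0

agents = [Agent() for _ in range(10000)]
in_use = [0.0] * LIFETIME          # age cohorts of in-use metal stock (kg)
scarcity = 0.0
for year in range(2025, 2041):
    sold = sum(a.decide(scarcity) for a in agents)
    inflow = sold * METAL_PER_UNIT
    eol = in_use.pop()             # oldest cohort reaches end of life
    in_use.insert(0, inflow)       # new cohort enters use
    primary = max(0.0, inflow - 0.4 * eol)   # assume 40% recycled content
    scarcity = min(0.3, primary / 50.0)      # crude price/scarcity feedback
    print(year, "sold =", sold, " primary demand = %.1f kg" % primary)
```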

  16. Multiple Perspective Approach for the Development of Information Systems Based on Advanced Mathematical Models

    DEFF Research Database (Denmark)

    Carugati, Andrea

    to observe and analyze the workings of a development project. I have been working as part of the team assembled for the development of the information system based on AMM for a period of three years. My active participation in the development project granted me access to all the actors involved. In my role I ... through negotiation and democratic decision making will it be possible for the team members to have their current weltanschauung represented in decision making. Thirdly, geographical distribution and loose coupling foster individualist rather than group behavior. The more the social tissue is disconnected ... of the technology, the development team was formed by individuals from both universities and the private sector. The organization of the development team was geographically distributed and loosely coupled. The development of information systems has always been a difficult activity and the records show a remarkable...

  17. Using a consensus approach based on the conservation of inter-residue contacts to rank CAPRI models

    KAUST Repository

    Vangone, Anna

    2013-10-17

    Herein we propose the use of a consensus approach, CONSRANK, for ranking CAPRI models. CONSRANK relies on the conservation of inter-residue contacts in the analyzed decoy ensemble. Models are ranked according to their ability to match the most frequently observed contacts. We applied CONSRANK to 19 CAPRI protein-protein targets, covering a wide range of prediction difficulty and involved in a variety of biological functions. CONSRANK results are consistently good, both in terms of native-like (NL) solutions ranked in the top positions and in terms of values of the Area Under the receiver operating characteristic Curve (AUC). For targets having a percentage of NL solutions above 3%, an excellent performance is found, with AUC values approaching 1. For the difficult target T46, having only 3.4% NL solutions, the number of NL solutions in the top 5 and 10 ranked positions is enriched by a factor of 30, and the AUC value is as high as 0.997. AUC values below 0.8 are only found for targets featuring a percentage of NL solutions below 1.1%. Remarkably, a false consensus emerges in only one case, T42, an artificial protein whose assembly details remain uncertain, being based on controversial experimental data. We also show that CONSRANK still performs very well on a limited number of models, provided that more than 1 NL solution is included in the ensemble, thus extending its applicability to cases where only a few dozen models are available. © 2013 Wiley Periodicals, Inc.
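    The core consensus scoring is simple to sketch. The toy below scores each model by the average ensemble frequency of its inter-residue contacts; contact extraction from coordinates (a distance cutoff) and the full CONSRANK bookkeeping are omitted, and the decoy sets are invented.

```python
from collections import Counter

# Each "model" is reduced to a set of inter-residue contacts; in practice these
# would come from coordinates via a distance cutoff. Score = average ensemble
# frequency of a model's own contacts.
def consrank_scores(models):
    counts = Counter(c for contacts in models.values() for c in contacts)
    n = len(models)
    freq = {c: k / n for c, k in counts.items()}   # contact conservation
    return {name: sum(freq[c] for c in contacts) / max(len(contacts), 1)
            for name, contacts in models.items()}

decoys = {  # invented decoy ensemble
    "model_A": {("A12", "B7"), ("A15", "B3"), ("A20", "B9")},
    "model_B": {("A12", "B7"), ("A15", "B3"), ("A99", "B1")},
    "model_C": {("A12", "B7"), ("A50", "B50"), ("A99", "B1")},
}
for name, s in sorted(consrank_scores(decoys).items(), key=lambda t: -t[1]):
    print("%s  score = %.2f" % (name, s))
```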

  18. Percentile-Based ETCCDI Temperature Extremes Indices for CMIP5 Model Output: New Results through Semiparametric Quantile Regression Approach

    Science.gov (United States)

    Li, L.; Yang, C.

    2017-12-01

    Climate extremes often manifest as rare events in terms of surface air temperature and precipitation with an annual reoccurrence period. In order to represent the manifold characteristics of climate extremes for monitoring and analysis, the Expert Team on Climate Change Detection and Indices (ETCCDI) worked out a set of 27 core indices based on daily temperature and precipitation data, describing extreme weather and climate events on an annual basis. The CLIMDEX project (http://www.climdex.org) has produced public domain datasets of such indices for data from a variety of sources, including output from global climate models (GCM) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). Among the 27 ETCCDI indices, there are six percentile-based temperature extremes indices that fall into two groups: exceedance rates (ER) (TN10p, TN90p, TX10p and TX90p) and durations (CSDI and WSDI). Percentiles must be estimated prior to the calculation of the indices, and may be biased to varying degrees by the adopted algorithm. Such biases will in turn be propagated into the final values of the indices. CLIMDEX used an empirical quantile estimator combined with a bootstrap resampling procedure to reduce the inhomogeneity in the annual series of the ER indices. However, some problems remain in the CLIMDEX datasets, namely overestimated climate variability due to unaccounted autocorrelation in the daily temperature data, seasonally varying biases, and inconsistency between the algorithms applied to the ER indices and to the duration indices. We now present new results for the six indices through a semiparametric quantile regression approach for the CMIP5 model output. By using the base-period data as a whole and taking seasonality and autocorrelation into account, this approach successfully addresses the aforementioned issues and produces consistent results. The new datasets cover the historical and three projected (RCP2.6, RCP4.5 and RCP
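    To make the ER indices concrete, the sketch below computes an annual TX90p series from synthetic daily maxima, using a plain calendar-day empirical percentile over a base period. The CLIMDEX 5-day window and bootstrap correction, and the semiparametric quantile-regression refinement proposed in the abstract, are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
years_all, years_base, doy = 50, 30, 365

# synthetic daily maximum temperature: seasonal cycle + trend + noise
t = np.arange(years_all * doy)
tx = (15 + 10 * np.sin(2 * np.pi * t / doy)
      + 0.02 * (t / doy) + rng.normal(0, 3, t.size)).reshape(years_all, doy)

# calendar-day 90th percentile from the base period (no 5-day window/bootstrap)
q90 = np.percentile(tx[:years_base], 90, axis=0)

# annual TX90p: percentage of days exceeding the calendar-day threshold
tx90p = 100 * (tx > q90).mean(axis=1)
print("mean TX90p, base period : %.1f %%" % tx90p[:years_base].mean())
print("mean TX90p, last decade : %.1f %%" % tx90p[-10:].mean())
```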

  19. Atmospheric Rotational Effects on Mars Based on the NASA Ames General Circulation Model: Angular Momentum Approach

    Science.gov (United States)

    Sanchez, Braulio V.; Haberle, Robert M.; Schaeffer, James

    2004-01-01

    The objective of the investigation is to determine the motion of the rotational axis of Mars as a result of mass variations in the atmosphere and of the condensation and sublimation of CO2 ice on the polar caps. A planet experiences this type of motion if its atmosphere is changing its mass distribution with respect to the solid body of the planet and/or if it is asymmetrically changing the amount of ice at the polar caps. The physical principle involved is the conservation of angular momentum; one can get a feeling for it by sitting on a well-oiled swivel chair holding a rotating wheel in a horizontal orientation and then changing the rotation axis of the wheel to a vertical orientation. The person holding the wheel, and the chair, would begin to rotate in the direction opposite to the rotation of the wheel. The motions of Mars' atmosphere and the variations of the ice caps are obtained from a mathematical model developed at the NASA Ames Research Center. The model produces outputs for a time span of one Martian year, which is equivalent to 687 Earth days. The results indicate that Mars' axis of rotation moves in a spiral with respect to a reference point on the surface of the planet. It can move as far as 35.3 cm from the initial location as a result of both mass variations in the atmosphere and asymmetric ice variations at the polar caps. Furthermore, the pole performs close to two revolutions around the reference point during a Martian year. This motion is a combination of two motions, one produced by the atmospheric mass variations and another due to the variations in the ice caps. The motion due to the atmospheric variations is a spiral performing about two and a half revolutions around the reference point, during which the pole can move as far as 40.9 cm. The motion due to variations in the ice caps is a spiral performing almost three revolutions, during which the pole can move as far as 32.8 cm.

  20. Modelling Inductive Charging of Battery Electric Vehicles using an Agent-Based Approach

    Directory of Open Access Journals (Sweden)

    Zain Ul Abedin

    2014-09-01

    Full Text Available The introduction of battery electric vehicles (BEVs) could help to reduce dependence on fossil fuels and emissions from transportation, and as such increase energy security and foster the sustainable use of energy resources. However, a major barrier to the introduction of BEVs is their limited battery capacity and long charging durations. To address these issues, several solutions have been proposed, such as battery swapping and fast charging stations. Apart from these stationary modes of charging, a new mode of charging has recently been introduced, called inductive charging. It allows BEVs to charge as they drive along roads, without the need for plugs, using induction. However, it is unclear if and how such technology could best be utilized. In order to investigate the possible impact of the introduction of such inductive charging infrastructure, its potential and its optimal placement, a framework for simulating BEVs using a multi-agent transport simulation was used. This framework was extended by an inductive charging module and initial test runs were performed. In this paper we present the simulation results of these preliminary tests, together with an analysis which suggests that the battery sizes of BEVs could be reduced even if inductive charging technology is implemented only at a small number of high-traffic-volume links. The paper also demonstrates that our model can effectively support policy and decision making for deploying inductive charging infrastructure.
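    A minimal sketch of the charging logic such a module needs, under invented consumption and power figures: the battery state of charge is decremented along each link and replenished while traversing the few links equipped with inductive charging.

```python
# A BEV traverses road links, discharging with distance and recharging while
# driving over the links equipped with inductive charging.
links = [  # (length_km, speed_kmh, equipped_with_inductive_charging)
    (2.0, 50, False), (5.0, 80, True), (3.0, 50, False),
    (10.0, 100, True), (4.0, 50, False),
]
CONSUMPTION_KWH_PER_KM = 0.15   # assumed
INDUCTIVE_POWER_KW = 30.0       # assumed charging power on equipped links

soc_kwh, capacity_kwh = 10.0, 20.0
for length, speed, equipped in links:
    hours = length / speed
    soc_kwh -= CONSUMPTION_KWH_PER_KM * length
    if equipped:
        soc_kwh = min(capacity_kwh, soc_kwh + INDUCTIVE_POWER_KW * hours)
    print("link %5.1f km  equipped=%-5s  SOC=%5.2f kWh" % (length, equipped, soc_kwh))
```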

  1. Material Modelling - Composite Approach

    DEFF Research Database (Denmark)

    Nielsen, Lauge Fuglsang

    1997-01-01

    This report is part of a research project on "Control of Early Age Cracking" - which, in turn, is part of the major research programme, "High Performance Concrete - The Contractor's Technology (HETEK)", coordinated by the Danish Road Directorate, Copenhagen, Denmark, 1997. A composite-rheological model of concrete is presented by which consistent predictions of creep, relaxation, and internal stresses can be made from known concrete composition, age at loading, and climatic conditions. No other existing "creep prediction method" offers these possibilities in one approach. The model ... in this report is that cement paste and concrete behave practically as linear-viscoelastic materials from an age of approximately 10 hours. This is a significant age extension relative to earlier studies in the literature, where linear-viscoelastic behavior is only demonstrated from ages of a few days. Thus...

  2. A physical based equivalent circuit modeling approach for ballasted InP DHBT multi-finger devices at millimeter-wave frequencies

    DEFF Research Database (Denmark)

    Midili, Virginio; Squartecchia, Michele; Johansen, Tom Keinicke

    2016-01-01

    Multifinger InP DHBTs can be designed with a ballasting resistor to improve power capability. However, accurate modeling is needed to predict the high-frequency behavior of the device. This paper presents two distinct modeling approaches: one based on EM simulations and one based on a physical...

  3. Group size, grooming and fission in primates: a modeling approach based on group structure.

    Science.gov (United States)

    Sueur, Cédric; Deneubourg, Jean-Louis; Petit, Odile; Couzin, Iain D

    2011-03-21

    In social animals, fission is a common mode of group proliferation and dispersion, and it may be affected by genetic or other social factors. Sociality implies preserving relationships between group members. An increase in group size and/or in competition for food within the group can result in a decrease in certain social interactions between members, and the group may split irreversibly as a consequence. An individual may try to maintain bonds with a maximum of group members in order to keep group cohesion, i.e. proximity and stable relationships. However, this strategy needs time, and time is often limited. In addition, previous studies have shown that, whatever the group size, an individual interacts with only certain grooming partners. Here, we develop a computational model to assess how the dynamics of group cohesion are related to group size and to the structure of grooming relationships. Group sizes after simulated fission are compared to the observed sizes of 40 groups of primates. Results showed that the relationship between grooming time and group size depends on how each individual allocates grooming time to its social partners, i.e. whether it grooms a small number of preferred partners or grooms all partners more or less equally. The number of partners seemed to be more important for group cohesion than the grooming time itself. This structural constraint has important consequences for group sociality, as it allows for competition for grooming partners and attraction to high-ranking individuals, as found in primate groups. It could, however, also have implications when considering the cognitive capacities of primates. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. A Model of e-Learning by Constructivism Approach Using Problem-Based Learning to Develop Thinking Skills for Students in Rajabhat University

    Science.gov (United States)

    Shutimarrungson, Werayut; Pumipuntu, Sangkom; Noirid, Surachet

    2014-01-01

    This research aimed to develop a model of e-learning using Problem-Based Learning (PBL) to develop thinking skills for students in Rajabhat University. The research was divided into three phases through the e-learning model via PBL with a Constructivism approach, as follows: Phase 1 was to study characteristics and factors through the model to…

  5. Hybrid artificial intelligence approach based on neural fuzzy inference model and metaheuristic optimization for flood susceptibility modeling in a high-frequency tropical cyclone area using GIS

    Science.gov (United States)

    Tien Bui, Dieu; Pradhan, Biswajeet; Nampak, Haleh; Bui, Quang-Thanh; Tran, Quynh-An; Nguyen, Quoc-Phi

    2016-09-01

    This paper proposes a new artificial intelligence approach for flood susceptibility modeling, named MONF, based on a neural fuzzy inference system and metaheuristic optimization. In the new approach, the neural fuzzy inference system is used to create an initial flood susceptibility model, which is then optimized using two metaheuristic algorithms, Evolutionary Genetic and Particle Swarm Optimization. A high-frequency tropical cyclone area, the Tuong Duong district in Central Vietnam, was used as a case study. First, a GIS database for the study area was constructed; the database, which includes 76 historical flood-inundated areas and ten flood influencing factors, was used to develop and validate the proposed model. Root Mean Square Error (RMSE), Mean Absolute Error (MAE), the Receiver Operating Characteristic (ROC) curve, and the area under the ROC curve (AUC) were used to assess the model performance and its prediction capability. Experimental results showed that the proposed model performs well on both the training (RMSE = 0.306, MAE = 0.094, AUC = 0.962) and validation datasets (RMSE = 0.362, MAE = 0.130, AUC = 0.911). The usability of the proposed model was evaluated by comparison with state-of-the-art benchmark soft computing techniques such as the J48 Decision Tree, Random Forest, Multi-layer Perceptron Neural Network, Support Vector Machine, and Adaptive Neuro Fuzzy Inference System. The results show that the proposed MONF model outperforms these benchmark models; we conclude that the MONF model is a new alternative tool that should be used in flood susceptibility mapping. The results of this study are useful for planners and decision makers for the sustainable management of flood-prone areas.
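    The reported evaluation step is easy to reproduce in outline. The sketch below computes RMSE, MAE and AUC for synthetic susceptibility outputs against binary flood labels; only the metric definitions follow the abstract, the data are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 200)    # 0 = no flood, 1 = flood (synthetic labels)
y_pred = np.clip(y_true * 0.7 + rng.normal(0.2, 0.25, 200), 0, 1)  # model output

rmse = float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
mae = float(np.mean(np.abs(y_pred - y_true)))
auc = roc_auc_score(y_true, y_pred)
print("RMSE = %.3f  MAE = %.3f  AUC = %.3f" % (rmse, mae, auc))
```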

  6. Is Model-Based Development a Favorable Approach for Complex and Safety-Critical Computer Systems on Commercial Aircraft?

    Science.gov (United States)

    Torres-Pomales, Wilfredo

    2014-01-01

    A system is safety-critical if its failure can endanger human life or cause significant damage to property or the environment. State-of-the-art computer systems on commercial aircraft are highly complex, software-intensive, functionally integrated, and network-centric systems of systems. Ensuring that such systems are safe and comply with existing safety regulations is costly and time-consuming as the level of rigor in the development process, especially the validation and verification activities, is determined by considerations of system complexity and safety criticality. A significant degree of care and deep insight into the operational principles of these systems is required to ensure adequate coverage of all design implications relevant to system safety. Model-based development methodologies, methods, tools, and techniques facilitate collaboration and enable the use of common design artifacts among groups dealing with different aspects of the development of a system. This paper examines the application of model-based development to complex and safety-critical aircraft computer systems. Benefits and detriments are identified and an overall assessment of the approach is given.

  7. An Integrated model for Product Quality Development—A case study on Quality functions deployment and AHP based approach

    Science.gov (United States)

    Maitra, Subrata; Banerjee, Debamalya

    2010-10-01

    This article is based on an application of product quality and design improvement related to the nature of machinery failures and plant operational problems at an industrial blower fan company. The project aims at developing the product on the basis of standardized production parameters for selling its products in the market. Special attention is also paid to the blower fans which have been ordered directly by the customer on the basis of the installed capacity of air to be provided by the fan. Application of quality function deployment is primarily a customer-oriented approach. The proposed model of QFD integrated with AHP selects and ranks the decision criteria on commercial and technical factors and measures the decision parameters for the selection of the best product in a competitive environment. The present AHP-QFD model justifies the selection of a blower fan with the help of a group of experts' opinions through pairwise comparison of the customer's and ergonomics-based technical design requirements. The steps involved in the implementation of QFD-AHP and the selection of weighted criteria may be helpful for all similar-purpose industries maintaining cost and utility for a competitive product.
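    The AHP ingredient of the model can be sketched compactly: criterion weights are the principal eigenvector of a pairwise comparison matrix, with Saaty's consistency ratio as a sanity check. The 3x3 matrix below is a made-up blower-fan example, not data from the paper.

```python
import numpy as np

# Invented 3x3 pairwise comparison matrix (e.g. cost vs. durability vs. airflow
# capacity for a blower fan); A[i, j] = how much criterion i dominates j.
A = np.array([[1.0, 3.0, 0.5],
              [1 / 3, 1.0, 0.25],
              [2.0, 4.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                # priority vector (criterion weights)

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)        # consistency index
RI = 0.58                                   # Saaty's random index for n = 3
print("weights:", np.round(w, 3), " consistency ratio = %.3f" % (ci / RI))
```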

  8. StatSTEM: An efficient approach for accurate and precise model-based quantification of atomic resolution electron microscopy images.

    Science.gov (United States)

    De Backer, A; van den Bos, K H W; Van den Broek, W; Sijbers, J; Van Aert, S

    2016-12-01

    An efficient model-based estimation algorithm is introduced to quantify atomic column positions and intensities from atomic resolution (scanning) transmission electron microscopy ((S)TEM) images. This algorithm uses the least squares estimator on image segments containing individual columns while fully accounting for overlap between neighbouring columns, enabling the analysis of a large field of view. For this algorithm, the accuracy and precision with which atomic column positions and scattering cross-sections can be estimated from annular dark field (ADF) STEM images has been investigated. The highest attainable precision is reached even for low-dose images. Furthermore, the advantages of the model-based approach, which takes into account overlap between neighbouring columns, are highlighted. This is done for the estimation of the distance between two neighbouring columns as a function of their separation, and for the estimation of the scattering cross-section, which is compared to the integrated intensity from a Voronoi cell. To provide end-users with this well-established quantification method, a user-friendly program, StatSTEM, has been developed, freely available under a GNU public license. Copyright © 2016 Elsevier B.V. All rights reserved.
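    A stripped-down version of the estimation idea, assuming a single isolated column: model the column as a 2D Gaussian, fit it by least squares, and report the position and integrated intensity. Overlap handling between neighbouring columns, which is the algorithm's real strength, is omitted here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, x0, y0, amp, sigma, bg):
    """2D Gaussian peak model for a single atomic column."""
    x, y = xy
    return bg + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

# simulate one noisy column on a 32 x 32 pixel image segment
rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:32, 0:32]
truth = (15.3, 16.7, 120.0, 2.5, 5.0)       # x0, y0, amplitude, width, background
img = gauss2d((xx, yy), *truth) + rng.normal(0, 3, xx.shape)

# least-squares fit of the column parameters
popt, _ = curve_fit(gauss2d, (xx.ravel(), yy.ravel()), img.ravel(),
                    p0=(16, 16, 100, 3, 0))
x0, y0, amp, sigma, bg = popt
cross_section = 2 * np.pi * amp * sigma ** 2   # volume under the fitted peak
print("position = (%.2f, %.2f), integrated intensity ~ %.0f" % (x0, y0, cross_section))
```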

  9. A Numerical Study of Forbush Decreases with a 3D Cosmic-Ray Modulation Model Based on an SDE Approach

    International Nuclear Information System (INIS)

    Luo, Xi; Feng, Xueshang; Potgieter, Marius S.; Zhang, Ming

    2017-01-01

    Based on the reduced-diffusion mechanism for producing Forbush decreases (Fds) in the heliosphere, we constructed a three-dimensional (3D) diffusion barrier and, by incorporating it into a stochastic differential equation (SDE) based, time-dependent cosmic-ray transport model, built a 3D numerical model for simulating Fds, which we applied to a period of relatively quiet solar activity. This SDE model generally corroborates previous Fd simulations concerning the effects of the solar magnetic polarity, the tilt angle of the heliospheric current sheet (HCS), and cosmic-ray particle energy. Because the modulation processes in this 3D model are multi-directional, the barrier's geometrical features affect the intensity profiles of Fds differently. We find that both the latitudinal and longitudinal extent of the barrier have less effect on these profiles than its radial extent and the level of decreased diffusion inside the disturbance. We find, with the 3D approach, that the rotational motion of the HCS causes the position of the observation point relative to the HCS to vary, so that a periodic pattern appears in the cosmic-ray intensity at the observing location. Correspondingly, the magnitude and recovery time of an Fd change, and the recovering intensity profile contains oscillations as well. Investigating the variation of Fd magnitude with heliocentric radial distance, we find that the magnitude decreases overall and, additionally, exhibits an oscillating pattern as the radial distance increases, which coincides well with the wavy profile of the HCS under quiet solar modulation conditions.
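    The SDE machinery itself can be illustrated in one dimension. In this hedged toy, pseudo-particles take Euler-Maruyama steps with convection plus diffusion, and the diffusion coefficient is reduced inside a radial barrier; all values are invented and none of the paper's 3D heliospheric geometry is represented.

```python
import numpy as np

rng = np.random.default_rng(3)
N, dt, steps = 20000, 0.01, 2000
V_SW = 0.4                     # outward convection speed (illustrative units)
KAPPA0, REDUCTION = 1.0, 0.2   # ambient diffusion and in-barrier reduction

r = np.full(N, 1.0)        # pseudo-particles start at r = 1
barrier = (3.0, 4.0)       # radial extent of the diffusion barrier
for _ in range(steps):
    inside = (r > barrier[0]) & (r < barrier[1])
    kappa = np.where(inside, REDUCTION * KAPPA0, KAPPA0)
    dW = rng.normal(0.0, np.sqrt(dt), N)       # Wiener increments
    r = np.clip(r + V_SW * dt + np.sqrt(2 * kappa) * dW, 0.05, None)

# fewer pseudo-particles make it past the barrier -> an Fd-like depression
print("fraction beyond the barrier:", np.mean(r > barrier[1]))
```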

  10. A Vulnerability-Based, Bottom-up Assessment of Future Riverine Flood Risk Using a Modified Peaks-Over-Threshold Approach and a Physically Based Hydrologic Model

    Science.gov (United States)

    Knighton, James; Steinschneider, Scott; Walter, M. Todd

    2017-12-01

    There is a chronic disconnection among purely probabilistic flood frequency analysis of flood hazards, flood risks, and hydrological flood mechanisms, which hampers our ability to assess future flood impacts. We present a vulnerability-based approach to estimating riverine flood risk that accommodates a more direct linkage between decision-relevant metrics of risk and the dominant mechanisms that cause riverine flooding. We adapt the conventional peaks-over-threshold (POT) framework to be used with extreme precipitation from different climate processes and rainfall-runoff-based model output. We quantify the probability that at least one adverse hydrologic threshold, potentially defined by stakeholders, will be exceeded within the next N years. This approach allows us to consider flood risk as the summation of risk from separate atmospheric mechanisms, and supports a more direct mapping between hazards and societal outcomes. We perform this analysis within a bottom-up framework to consider the relevance and consequences of information, with varying levels of credibility, on changes to the atmospheric patterns driving extreme precipitation events. We demonstrate the proposed approach using a case study for Fall Creek in Ithaca, NY, USA, where we estimate the risk of stakeholder-defined flood metrics from three dominant mechanisms: summer convection, tropical cyclones, and spring rain and snowmelt. Using downscaled climate projections, we determine how flood risk associated with a subset of mechanisms may change in the future, and the resultant shift in annual flood risk. The flood risk approach we propose can provide powerful new insights into future flood threats.
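    The headline probability statement has a compact form: if threshold exceedances from each mechanism arrive as independent Poisson processes with annual rates lam_i, the processes superpose, and P(at least one exceedance in N years) = 1 − exp(−N·Σ lam_i). The rates below are invented, not the Fall Creek estimates.

```python
import numpy as np

# assumed annual exceedance rates per mechanism (NOT the Fall Creek values)
lam = {"summer convection": 0.04, "tropical cyclone": 0.02, "rain + snowmelt": 0.03}

N = 30                                      # planning horizon in years
p_any = 1 - np.exp(-sum(lam.values()) * N)  # superposed Poisson processes
print("P(at least one exceedance in %d yr) = %.2f" % (N, p_any))
for mech, rate in lam.items():
    print("  %-18s alone: %.2f" % (mech, 1 - np.exp(-rate * N)))
```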

  11. A Model Based Deconvolution Approach for Creating Surface Composition Maps of Irregularly Shaped Bodies from Limited Orbiting Nuclear Spectrometer Measurements

    Science.gov (United States)

    Dallmann, N. A.; Carlsten, B. E.; Stonehill, L. C.

    2017-12-01

    Orbiting nuclear spectrometers have contributed significantly to our understanding of the composition of solar system bodies. Gamma rays and neutrons are produced within the surfaces of bodies by impacting galactic cosmic rays (GCR) and by intrinsic radionuclide decay. Measuring the flux and energy spectrum of these products at one point in an orbit elucidates the elemental content of the area in view. Deconvolution of measurements from many spatially registered orbit points can produce detailed maps of elemental abundances. In applying these well-established techniques to small and irregularly shaped bodies like Phobos, one encounters unique challenges beyond those of a large spheroid. Polar mapping orbits are not possible for Phobos, and quasistatic orbits will realize only modest inclinations, unavoidably limiting surface coverage and creating North-South ambiguities in deconvolution. The irregular shape causes self-shadowing, both of the body with respect to the spectrometer and of the body with respect to the incoming GCR. The view angle to the surface normal, as well as the distance between the surface and the spectrometer, is highly irregular. These characteristics combine into a complicated and continuously changing measurement system point spread function. We have begun to explore different model-based, statistically rigorous, iterative deconvolution methods to produce elemental abundance maps for a proposed future investigation of Phobos. By incorporating the satellite orbit, the existing high-accuracy shape models of Phobos, and the spectrometer response function, a detailed and accurate system model can be constructed. Many aspects of this model formation are particularly well suited to modern graphics processing techniques and parallel processing. We will present the current status and preliminary visualizations of the Phobos measurement system model. We will also discuss different deconvolution strategies and their relative merit in statistical rigor, stability
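    As a stand-in for the model-based deconvolution being explored, the sketch below runs Richardson-Lucy iterations on a 1D toy: a known smoothing matrix plays the role of the orbit- and shape-dependent system response, and a blurred, Poisson-noisy count profile is inverted back toward the surface map.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60
true_map = np.zeros(n)
true_map[15:25], true_map[40:45] = 5.0, 2.0   # two "abundance" features

# system model H: each measurement is a broad weighted average of the surface
idx = np.arange(n)
H = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 4.0) ** 2)
H /= H.sum(axis=1, keepdims=True)

counts = rng.poisson(H @ true_map * 50) / 50.0   # noisy orbital measurements

x = np.ones(n)                       # flat initial guess for the surface map
norm = H.T @ np.ones(n)
for _ in range(200):                 # Richardson-Lucy multiplicative updates
    x *= (H.T @ (counts / np.clip(H @ x, 1e-9, None))) / norm
print("recovered peak at index", int(np.argmax(x)), "(true peak in 15..24)")
```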

  12. Modelling Public Transport On-board Congestion: Comparing Schedule-based and Agent-based Assignment Approaches and their Implications

    NARCIS (Netherlands)

    Cats, O.; Hartl, Maximilian

    2016-01-01

    Transit systems are subject to congestion that influences system performance and level of service. The evaluation of measures to relieve congestion requires models that can capture their network effects and passengers' adaptation. In particular, on-board congestion leads to an increase of crowding

  13. Proposing a New Approach for Supplier Selection Based on Kraljic’s Model Using FMEA and Integer Linear Programming

    Directory of Open Access Journals (Sweden)

    S. Mohammad Arabzad

    2012-06-01

    Full Text Available In recent years, numerous methods have been proposed to deal with the supplier evaluation and selection problem, but a point which has usually been neglected by researchers is the role of purchasing items. The aim of this paper is to propose an integrated approach to selecting suppliers and allocating orders on the basis of the nature of the purchasing items, which play an important role in supplier selection and order allocation. Therefore, items are first categorized according to Kraljic's model by use of the FMEA technique. Then, suppliers are categorized and evaluated in four phases with respect to the different types of purchasing items (Strategic, Bottleneck, Leverage and Routine). Finally, an integer linear program is used to allocate purchasing orders to suppliers. Furthermore, an empirical example is conducted to illustrate the stages of the proposed approach. Results imply that ranking suppliers and allocating purchasing items based on the nature of the purchasing items will create more capability in managing purchasing items and suppliers.
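    The allocation step can be sketched as a small integer program, here with the PuLP library: integer order quantities per supplier, a demand constraint, capacity bounds, and a cost objective. The data are invented, and the paper's link between allocation and the Kraljic/FMEA category of each item is not modelled.

```python
from pulp import LpMinimize, LpProblem, LpStatus, LpVariable, lpSum

suppliers = {"S1": {"price": 10, "cap": 60},   # invented data
             "S2": {"price": 12, "cap": 80},
             "S3": {"price": 11, "cap": 50}}
demand = 120

prob = LpProblem("order_allocation", LpMinimize)
q = {s: LpVariable("q_" + s, lowBound=0, upBound=d["cap"], cat="Integer")
     for s, d in suppliers.items()}
prob += lpSum(suppliers[s]["price"] * q[s] for s in suppliers)  # total cost
prob += lpSum(q.values()) == demand                             # meet demand
prob.solve()
print(LpStatus[prob.status], {s: q[s].value() for s in suppliers})
```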

  14. A State-Based Modeling Approach for Efficient Performance Evaluation of Embedded System Architectures at Transaction Level

    Directory of Open Access Journals (Sweden)

    Anthony Barreteau

    2012-01-01

    Full Text Available Abstract models are necessary to assist system architects in the evaluation process of hardware/software architectures and to cope with the still increasing complexity of embedded systems. Efficient methods are required to create reliable models of system architectures and to allow early performance evaluation and fast exploration of the design space. In this paper, we present a specific transaction-level modeling approach for the performance evaluation of hardware/software architectures. This approach relies on a generic execution model that requires light modeling effort. The created models are used to evaluate, by simulation, the expected processing and memory resources for various architectures. The proposed execution model relies on a specific computation method defined to improve the simulation speed of transaction-level models. The benefits of the proposed approach are highlighted through two case studies. The first case study is a didactic example illustrating the modeling approach; in this example, a simulation speed-up by a factor of 7.62 is achieved by using the proposed computation method. The second case study concerns the analysis of a communication receiver supporting part of the physical layer of the LTE protocol. In this case study, architecture exploration is carried out in order to improve the allocation of processing functions.

  15. An efficient approach to bioconversion kinetic model generation based on automated microscale experimentation integrated with model driven experimental design

    DEFF Research Database (Denmark)

    Chen, B. H.; Micheletti, M.; Baganz, F.

    2009-01-01

    Reliable models of enzyme kinetics are required for the effective design of bioconversion processes. Kinetic expressions of the enzyme-catalysed reaction rate, however, are frequently complex, and establishing accurate values of kinetic parameters normally requires a large number of experiments. Th...

  16. Combining Model-Based and Feature-Driven Diagnosis Approaches – A Case Study on Electromechanical Actuators

    Data.gov (United States)

    National Aeronautics and Space Administration — Model-based diagnosis typically uses analytical redundancy to compare predictions from a model against observations from the system being diagnosed. However this...

  17. Behavioral based safety approaches

    International Nuclear Information System (INIS)

    Maria Michael Raj, I.

    2009-01-01

    The approach towards the establishment of a positive safety culture at the Heavy Water Plant, Tuticorin, includes the adoption of several important methodologies focused on human behavior, and culminates in the achievement of a Total Safety Culture in which quality and productivity are integrated with safety

  18. Conflicts versus analytical redundancy relations: a comparative analysis of the model based diagnosis approach from the artificial intelligence and automatic control perspectives.

    Science.gov (United States)

    Cordier, Marie-Odile; Dague, Philippe; Lévy, François; Montmain, Jacky; Staroswiecki, Marcel; Travé-Massuyès, Louise

    2004-10-01

    Two distinct and parallel research communities have been working along the lines of the model-based diagnosis approach: the fault detection and isolation (FDI) community and the diagnostic (DX) community that have evolved in the fields of automatic control and artificial intelligence, respectively. This paper clarifies and links the concepts and assumptions that underlie the FDI analytical redundancy approach and the DX consistency-based logical approach. A formal framework is proposed in order to compare the two approaches and the theoretical proof of their equivalence together with the necessary and sufficient conditions is provided.

  19. A model-based assessment of the TrOCA approach for estimating anthropogenic carbon in the ocean

    Directory of Open Access Journals (Sweden)

    A. Yool

    2010-02-01

    Full Text Available The quantification of the amount of anthropogenic carbon (Cant) that the ocean has taken up from the atmosphere since pre-industrial times is a challenging task because of the need to deconvolute this signal from the natural, unperturbed concentration of dissolved inorganic carbon (DIC). Nonetheless, a range of techniques has been devised that perform this separation using the information implicit in other physical, biogeochemical, and man-made ocean tracers. One such method is the TrOCA approach, which belongs to a group of back-calculation techniques, but relative to other methods employs a simple parameterization for estimating the preformed, pre-industrial concentration, the key quantity needed to determine Cant. Here we examine the theoretical foundation of the TrOCA approach and test its accuracy by deconvoluting the known distribution of Cant from an ocean general circulation model (OGCM) simulation of the industrial period (1864–2004). We reveal that the TrOCA tracer reflects the air-sea exchange of both natural and anthropogenic CO2 as well as that of O2. Consequently, the determination of the anthropogenic CO2 flux component requires an accurate determination not only of the contribution of the natural (pre-industrial) CO2 flux component, but also of the O2 flux component. The TrOCA method attempts to achieve this by assuming that the concentration changes invoked by these two air-sea flux components scale with temperature and alkalinity. While observations support a strong exponential scaling of the oxygen flux component with temperature, there exists no simple relationship of the natural CO2 flux component with temperature and/or alkalinity. This raises doubts as to whether the sum of these two components can be adequately parameterized with a single function. The analyses of the model support this conclusion, even when Cant is

  20. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    Science.gov (United States)

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

    Single-channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the use of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms perform better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
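    The shared Wiener-based gain rule mentioned above has a standard per-bin form, sketched here on one synthetic frame with the noise spectrum assumed known; the compared algorithms differ precisely in how that noise (or speech) estimate is obtained, which this toy does not reproduce.

```python
import numpy as np

def wiener_gain(noisy_power, noise_power, gain_floor=0.1):
    """Per-bin Wiener gain SNR/(SNR+1), with the a-priori SNR estimated as
    (a-posteriori SNR - 1) and a floor against musical noise."""
    snr = np.maximum(noisy_power / np.maximum(noise_power, 1e-12) - 1.0, 0.0)
    return np.maximum(snr / (snr + 1.0), gain_floor)

rng = np.random.default_rng(5)
frame = np.sin(0.2 * np.arange(256)) + rng.normal(0, 1, 256)  # "speech" + noise
spec = np.fft.rfft(frame)
noise_psd = np.full(spec.size, 256.0)  # E|FFT|^2 of unit-variance noise, N = 256
enhanced = np.fft.irfft(spec * wiener_gain(np.abs(spec) ** 2, noise_psd), n=256)
print("input power %.2f -> output power %.2f" % (frame.var(), enhanced.var()))
```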

  1. Uncertainty analysis for effluent trading planning using a Bayesian estimation-based simulation-optimization modeling approach.

    Science.gov (United States)

    Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J

    2017-06-01

    In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with the soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation, and the stochastic characteristics of nutrient loading can be investigated, providing the inputs for decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries, together with the associated system risk, by incorporating the concepts of possibility and necessity measures, which are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results not only facilitate the identification of optimal effluent-trading schemes, but also give insight into the effects of the trading ratio and treatment rate on decision making. The results also reveal that the decision maker's preference towards risk affects the decision alternatives for the trading scheme as well as the system benefit. Compared with conventional optimization methods, BESMA proves advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties in nutrient transport behaviors to improve the accuracy of water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision

  2. A Model-based Approach to Scaling GPP and NPP in Support of MODIS Land Product Validation

    Science.gov (United States)

    Turner, D. P.; Cohen, W. B.; Gower, S. T.; Ritts, W. D.

    2003-12-01

    Global products from the Earth-orbiting MODIS sensor include land cover, leaf area index (LAI), FPAR, 8-day gross primary production (GPP), and annual net primary production (NPP) at 1 km spatial resolution. The BigFoot Project was designed specifically to validate MODIS land products, and has initiated ground measurements at 9 sites representing a wide array of vegetation types. An ecosystem process model (Biome-BGC) is used to generate estimates of GPP and NPP for each 5 km x 5 km BigFoot site. Model inputs include land cover and LAI (from Landsat ETM+), daily meteorological data (from a centrally located eddy covariance flux tower), and soil characteristics. Model-derived outputs are validated against field-measured NPP and flux tower-derived GPP. The resulting GPP and NPP estimates are then aggregated to the 1 km resolution for direct spatial comparison with the corresponding MODIS products. At the high-latitude sites (tundra and boreal forest), the MODIS GPP phenology closely tracks the BigFoot GPP, but there is a high bias in the MODIS GPP. At the temperate-zone sites, problems with the timing and magnitude of the MODIS FPAR introduce differences in MODIS GPP compared to the validation data at some sites. However, the MODIS LAI/FPAR data are currently being reprocessed (Collection 4) and new comparisons will be made for 2002. The BigFoot scaling approach permits precise overlap in spatial and temporal resolution between the MODIS products and BigFoot products, and thus permits the evaluation of specific components of the MODIS NPP algorithm. These components include meteorological inputs from the NASA Data Assimilation Office, LAI and FPAR from other MODIS algorithms, and biome-specific parameters for base respiration rate and light use efficiency.

  3. Model-based development and testing of advertising messages: A comparative study of two campaign proposals based on the MECCAS model and a conventional approach

    DEFF Research Database (Denmark)

    Bech-Larsen, Tino

    2001-01-01

    model, i.e. means-ends based data collection employing the laddering method and subsequent use of the guidelines for message development formulated in MECCAS. The project was a joint venture of the Association of Danish Fruit Growers, Odense, Denmark, and the MAPP Centre, and was financed by EU funds. ... Traditionally, the development of advertising messages has been based on "creative independence", sometimes catalysed by inductively generated empirical data. Due to the recent intensified focus on advertising effectiveness, this state of affairs is beginning to change. The purpose of the study ... The comparison involved the efficiency of the managerial communication taking place in the message development process as well as target group communication effects. The managerial communication was studied by interviews with the involved advertising agency (Midtmarketing, Ikast, Denmark) and client staff...

  4. Model-based development and testing of advertising messages: A comparative study of two campaign proposals based on the MECCAS model and a conventional approach

    DEFF Research Database (Denmark)

    Bech-Larsen, Tino

    The managerial communication was studied by interviews with the advertising agency and client staff involved. The project is a joint venture of the Association of Danish Fruit Growers, Odense, Denmark, and the MAPP Centre, and is financed by EU funds. The advertising agency involved is Midtmarketing, Ikast ... -agency and intra-agency) involved in the development of advertising messages. 3. The purpose of the study described in this paper is to compare the development and effects of two campaign proposals, with the common aim of increasing the consumption of apples among young Danes (18 to 35 years of age). One ... of the proposals is the result of an inductive-creative process, while the other is based on the MECCAS model, i.e., means-end based data collection employing the laddering method and subsequent use of the guidelines for message development formulated in MECCAS. 4. The comparison involved target group communication...

  5. A data-model fusion approach for upscaling gross ecosystem productivity to the landscape scale based on remote sensing and flux footprint modelling

    Directory of Open Access Journals (Sweden)

    B. Chen

    2010-09-01

    Full Text Available In order to use the globally available eddy-covariance (EC) flux dataset and remote-sensing measurements to provide estimates of gross primary productivity (GPP) at landscape (10^1–10^2 km2), regional (10^3–10^6 km2) and global land surface scales, we developed a satellite-based GPP algorithm using LANDSAT data and an upscaling framework. The satellite-based GPP algorithm uses two improved vegetation indices (Enhanced Vegetation Index, EVI; Land Surface Water Index, LSWI). The upscaling framework involves flux footprint climatology modelling and data-model fusion. This approach was first applied to an evergreen coniferous stand in the subtropical monsoon climatic zone of south China. The EC measurements at the Qian Yan Zhou tower site (26°44´48" N, 115°04´13" E), which belongs to the China flux network, and the LANDSAT and MODIS imagery data for this region in 2004 were used in this study. A consecutive series of LANDSAT-like images of the surface reflectance at an 8-day interval was predicted by blending the LANDSAT and MODIS images using an existing algorithm (ESTARFM: Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model). The seasonal dynamics of GPP were then predicted by the satellite-based algorithm. MODIS products explained 60% of the observed variations of GPP and underestimated the measured annual GPP (= 1879 g C m−2) by 25–30%, while the satellite-based algorithm with default static parameters explained 88% of the observed variations of GPP but overestimated GPP during the growing season by about 20–25%. The optimization of the satellite-based algorithm using a data-model fusion technique, with the assistance of EC flux tower footprint modelling, reduced the biases in daily GPP estimations from about 2.24 g C m−2 day−1 (non-optimized; ~43.5% of the mean measured daily value) to 1.18 g C m−2 day−1 (optimized
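    The light-use-efficiency structure of such satellite-based GPP algorithms can be sketched as GPP = eps_max × f(T) × W(LSWI) × FPAR(EVI) × PAR. The functional forms and constants below follow generic VPM-style logic and are illustrative assumptions, not the calibrated parameterization of this study.

```python
import numpy as np

def gpp(evi, lswi, par, t_air, eps_max=1.8, t_min=0.0, t_opt=20.0, t_max=40.0):
    """GPP (g C m-2) = eps_max * Tscalar * Wscalar * FPAR * PAR; all constants
    and functional forms are illustrative assumptions."""
    fpar = np.clip(1.25 * evi - 0.1, 0, 1)            # assumed linear FPAR(EVI)
    t_s = np.clip(((t_air - t_min) * (t_air - t_max)) /
                  ((t_air - t_min) * (t_air - t_max) - (t_air - t_opt) ** 2), 0, 1)
    w_s = np.clip((1 + lswi) / (1 + np.max(lswi)), 0, 1)  # water scalar from LSWI
    return eps_max * t_s * w_s * fpar * par

# one 8-day composite with synthetic inputs (PAR in MJ m-2 per composite)
print("GPP ~ %.1f g C m-2" % gpp(np.array([0.55]), np.array([0.15]), 80.0, 24.0)[0])
```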

  6. An Overview of Modeling Approaches Applied to Aggregation-Based Fleet Management and Integration of Plug-in Electric Vehicles †

    DEFF Research Database (Denmark)

    You, Shi; Hu, Junjie; Ziras, Charalampos

    2016-01-01

    and systems are seen as useful tools to support the related studies for different stakeholders in a tangible way. This paper presents an overview of modeling approaches applied to support aggregation-based management and integration of PEVs from the perspective of fleet operators and grid operators ... management, and key systems, such as the PEV fleet, is then presented, along with a detailed description of different approaches. Finally, we discuss several considerations that need to be well understood during the modeling process in order to assist modelers and model users in the appropriate decisions...

  7. Application of modelling and nanotechnology-based approaches: The emergence of breakthroughs in theranostics of central nervous system disorders.

    Science.gov (United States)

    Hassanzadeh, Parichehr; Atyabi, Fatemeh; Dinarvand, Rassoul

    2017-08-01

    The limited efficiency of the current treatment options against central nervous system (CNS) disorders has created increasing demand for the development of novel theranostic strategies. The enormous research efforts in nanotechnology have led to the production of highly advanced nanodevices and biomaterials in a variety of geometries and configurations for the targeted delivery of genes, drugs, or growth factors across the blood-brain barrier. Meanwhile, the richness and reliability of data, drug delivery methods, the therapeutic effects or potential toxicity of nanoparticles, the occurrence of unexpected phenomena due to the polydisperse or polymorphic nature of nanomaterials, and personalized theranostics have remained challenging issues. In this respect, computational modelling has emerged as a powerful tool for the rational design of nanoparticles with optimized characteristics, including selectivity, improved bioactivity, and reduced toxicity, that might lead to the effective delivery of therapeutic agents. High-performance simulation techniques, by shedding more light on the dynamical behaviour of neural networks and the pathomechanisms of CNS disorders, may provide imminent breakthroughs in nanomedicine. In the present review, the importance of integrating nanotechnology-based approaches with computational techniques for the targeted delivery of theranostics to the CNS has been highlighted. Copyright © 2017. Published by Elsevier Inc.

  8. The Effect of Rehabilitation Method Based on Existential Approach and Olson's Model on Marital Satisfaction

    Directory of Open Access Journals (Sweden)

    Maedeh Naghiyaee

    2014-09-01

    Full Text Available Objectives: Mastectomy as a treatment for breast cancer can disturb the marital satisfaction of many couples; existential anxieties stemming from this potentially deleterious event, and inefficient responses to them, may act as mediators. The purpose of this study is to investigate the effectiveness of a rehabilitation method based on the existential approach and Olson's marital enrichment model on the marital satisfaction of women who had undergone mastectomy and their husbands. Methods: A single-subject research design was used. The study population comprised couples who had been referred to the radiotherapy department of Imam Hussein hospital in Tehran. Three couples were selected through purposeful sampling, on the conditions that their age was between 20 and 50 years, the wife had undergone mastectomy, the tumor had not spread to other parts of the body, and there was no prior history of psychiatric disorders before the cancer. The intervention, 12 weekly sessions of 90 minutes each, was designed to suit their specific needs. The couples' marital satisfaction was evaluated using the Dyadic Adjustment Scale. Results: Comparing the couples' scores across 9 measurements (3 at baseline, 4 during the intervention, and 2 at follow-up) and calculating recovery percentages showed an increase in marital adjustment scores. Discussion: It therefore appears that this eclectic form of couple therapy, by addressing the couples' existential anxieties, promoted their marital satisfaction. Explanations are given in the discussion section.

  9. A Constructionist Approach to Student Modelling: Tracing a Student's Constructions through an Agent-Based Tutoring Architecture

    Science.gov (United States)

    Beuls, Katrien

    2013-01-01

    Construction Grammar (CxG) is a well-established linguistic theory that takes the notion of a construction as the basic unit of language. Yet, because the potential of this theory for language teaching or SLA has largely been ignored, this paper demonstrates the benefits of adopting the CxG approach for modelling a student's linguistic…

  10. High-resolution marine flood modelling coupling overflow and overtopping processes: framing the hazard based on historical and statistical approaches

    Science.gov (United States)

    Nicolae Lerma, Alexandre; Bulteau, Thomas; Elineau, Sylvain; Paris, François; Durand, Paul; Anselme, Brice; Pedreros, Rodrigo

    2018-01-01

    A modelling chain was implemented in order to propose a realistic appraisal of the risk in coastal areas affected by overflowing as well as overtopping processes. Simulations are performed through a nested downscaling strategy from regional to local scale at high spatial resolution, with explicit buildings, urban structures such as sea front walls, and hydraulic structures liable to affect the propagation of water in urban areas. Validation of the model performance is based on the analysis of available hard and soft data and on the conversion of qualitative to quantitative information, in order to reconstruct the area affected by flooding and the succession of events during two recent storms. Two joint probability approaches (joint exceedance contour and environmental contour) are used to define 100-year offshore conditions scenarios and to investigate the flood response to each scenario in terms of (1) maximum spatial extent of flooded areas, (2) volumes of water propagation inland and (3) water level in flooded areas. Scenarios of sea level rise are also considered in order to evaluate the potential hazard evolution. Our simulations show that for a maximising 100-year hazard scenario, for the municipality as a whole, 38 % of the affected zones are prone to overflow flooding and 62 % to flooding by propagation of overtopping water volume along the seafront. Results also reveal that for the two kinds of statistical scenarios a difference of about 5 % in the forcing conditions (water level, wave height and period) can produce significant differences in terms of flooding, such as +13.5 % in the volume of water propagating inland or +11.3 % in the affected surface area. In some areas, the flood response appears to be very sensitive to the chosen scenario, with differences of 0.3 to 0.5 m in water level. The developed approach enables one to frame the 100-year hazard and to characterize spatially the robustness or the uncertainty of the results. Considering a 100-year scenario with mean sea level rise (0.6 m), hazard

  11. High-resolution marine flood modelling coupling overflow and overtopping processes: framing the hazard based on historical and statistical approaches

    Directory of Open Access Journals (Sweden)

    A. Nicolae Lerma

    2018-01-01

    Full Text Available A modelling chain was implemented in order to propose a realistic appraisal of the risk in coastal areas affected by overflowing as well as overtopping processes. Simulations are performed through a nested downscaling strategy from regional to local scale at high spatial resolution, with explicit buildings, urban structures such as sea front walls, and hydraulic structures liable to affect the propagation of water in urban areas. Validation of the model performance is based on the analysis of available hard and soft data and on the conversion of qualitative to quantitative information, in order to reconstruct the area affected by flooding and the succession of events during two recent storms. Two joint probability approaches (joint exceedance contour and environmental contour) are used to define 100-year offshore conditions scenarios and to investigate the flood response to each scenario in terms of (1) maximum spatial extent of flooded areas, (2) volumes of water propagation inland and (3) water level in flooded areas. Scenarios of sea level rise are also considered in order to evaluate the potential hazard evolution. Our simulations show that for a maximising 100-year hazard scenario, for the municipality as a whole, 38 % of the affected zones are prone to overflow flooding and 62 % to flooding by propagation of overtopping water volume along the seafront. Results also reveal that for the two kinds of statistical scenarios a difference of about 5 % in the forcing conditions (water level, wave height and period) can produce significant differences in terms of flooding, such as +13.5 % in the volume of water propagating inland or +11.3 % in the affected surface area. In some areas, the flood response appears to be very sensitive to the chosen scenario, with differences of 0.3 to 0.5 m in water level. The developed approach enables one to frame the 100-year hazard and to characterize spatially the robustness or the uncertainty of the results. Considering a 100-year scenario with mean

  12. Applying an Inverse Model to Estimate Ammonia Emissions at Cattle Feedlots Using Three Different Observation-Based Approaches

    Science.gov (United States)

    Shonkwiler, K. B.; Ham, J. M.; Nash, C.

    2014-12-01

    Accurately quantifying emissions of ammonia (NH3) from confined animal feeding operations (CAFOs) is vital not only to the livestock industry, but essential to understanding nitrogen cycling along the Front Range of Colorado, USA, where intensive agriculture, urban sprawl, and pristine ecosystems (e.g., Rocky Mtn Nat'l Park) lie within 100 km of one another. Most observation-based techniques for estimating NH3 emissions can be expensive and highly technical. Many methods rely on on-site concentration observations, which implicitly depend on weather conditions. A system for sampling NH3 using on-site weather data was developed to allow remote measurement of NH3 in a simple, cost-effective way. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich) that provide time-averaged concentrations representative of a typical two-week deployment. Cartridge exposure is robotically managed so they are only exposed when winds of 1.4 m/s or greater blow from the direction of the CAFO. These concentration data can be coupled with stability parameters (measured on-site) in a simple inverse model to estimate emissions (FIDES, UMR Environnement et Grandes Cultures). Few studies have directly compared emissions estimates of NH3 using concentration data obtained from multiple measurement systems at different temporal and spatial scales. Therefore, in the summer and autumn of 2014, several conditional sampler systems were deployed at a 25,000-head cattle feedlot concomitant with an open-path infrared laser (GasFinder2, Boreal Laser Inc.) and a Cavity Ring Down Spectrometer (CRDS) (G1103, Picarro Inc.), each of which measured instantaneous NH3 concentrations. This study will test the sampler technology by first comparing concentration data from the three different methods. In livestock research, it is common to estimate NH3 emissions by using such instantaneous data in a backward Lagrangian stochastic (bLs) model (WindTrax, Thunder Beach Sci.). Considering this, NH3 fluxes
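
    For readers unfamiliar with inverse-dispersion estimation, the core arithmetic shared by FIDES- and bLs-type models can be sketched as below; the function and numbers are illustrative assumptions, not the study's configuration.

      # Inverse-dispersion sketch: the dispersion model predicts the
      # concentration a unit emission would produce at the sampler; the
      # measured rise above background then scales that unit emission
      # to the actual surface flux.
      def emission_flux(c_measured, c_background, c_per_unit_q):
          """Q = (C_measured - C_background) / (C/Q)_model."""
          return (c_measured - c_background) / c_per_unit_q

      # e.g. 120 vs 5 ug/m3, model giving 0.8 (ug/m3) per unit (ug/m2/s):
      q_nh3 = emission_flux(120.0, 5.0, 0.8)   # -> 143.75 ug/m2/s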

  13. What controls the stable isotope composition of precipitation in the Mekong Delta? A model-based statistical approach

    Science.gov (United States)

    Le Duy, Nguyen; Heidbüchel, Ingo; Meyer, Hanno; Merz, Bruno; Apel, Heiko

    2018-02-01

    for δ18O and δ2H, or along the air mass trajectories for d-excess. The analysis shows that regional and local factors vary in importance over the seasons and that the source regions and transport pathways, and particularly the climatic conditions along the pathways, have a large influence on the isotopic composition of rainfall. Although the general results have been reported qualitatively in previous studies (proving the validity of the approach), the proposed method provides quantitative estimates of the controlling factors, both for the whole data set and for distinct seasons. Therefore, it is argued that the approach constitutes an advancement in the statistical analysis of isotopic records in rainfall that can supplement or precede more complex studies utilizing atmospheric models. Due to its relative simplicity, the method can be easily transferred to other regions, or extended with other factors. The results illustrate that the interpretation of the isotopic composition of precipitation as a recorder of local climatic conditions, as for example performed for paleorecords of water isotopes, may not be adequate in the southern part of the Indochinese Peninsula, and likely neither in other regions affected by monsoon processes. However, the presented approach could open a pathway towards better and seasonally differentiated reconstruction of paleoclimates based on isotopic records.

  14. What controls the stable isotope composition of precipitation in the Mekong Delta? A model-based statistical approach

    Directory of Open Access Journals (Sweden)

    N. Le Duy

    2018-02-01

    place mainly in the dry season, either locally for δ18O and δ2H, or along the air mass trajectories for d-excess. The analysis shows that regional and local factors vary in importance over the seasons and that the source regions and transport pathways, and particularly the climatic conditions along the pathways, have a large influence on the isotopic composition of rainfall. Although the general results have been reported qualitatively in previous studies (proving the validity of the approach), the proposed method provides quantitative estimates of the controlling factors, both for the whole data set and for distinct seasons. Therefore, it is argued that the approach constitutes an advancement in the statistical analysis of isotopic records in rainfall that can supplement or precede more complex studies utilizing atmospheric models. Due to its relative simplicity, the method can be easily transferred to other regions, or extended with other factors. The results illustrate that the interpretation of the isotopic composition of precipitation as a recorder of local climatic conditions, as for example performed for paleorecords of water isotopes, may not be adequate in the southern part of the Indochinese Peninsula, and likely neither in other regions affected by monsoon processes. However, the presented approach could open a pathway towards better and seasonally differentiated reconstruction of paleoclimates based on isotopic records.
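
    The deuterium excess used above is the conventional derived quantity, and the statistical approach amounts to regressing the isotopic signals on candidate local and regional factors; schematically (the predictors shown are placeholders, not the study's exact factor list):

      d-excess:    d = δ2H − 8 · δ18O
      regression:  δ18O = β0 + β1·(local rainfall amount) + β2·(temperature) + … + ε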

  15. THE DEVELOPMENT OF RESEARCH-BASED PHYSICS LEARNING MODEL WITH SCIENTIFIC APPROACH TO DEVELOP STUDENTS’ SCIENTIFIC PROCESSING SKILL

    Directory of Open Access Journals (Sweden)

    Usmeldi Usmeldi

    2016-04-01

    Full Text Available Physics learning in SMA N 2 Padang was implemented through theory and practicum for verifying the theories. The results of the initial survey showed that the physics teachers had not yet applied research-based learning. Supporting facilities such as the physics laboratory and its equipment were already available but had not been utilized optimally. Research-based learning is a model that can improve the scientific processing skills and learning outcomes of students. This research aimed to produce a valid, practical, and effective research-based physics learning model and devices. The study was a research and development effort using the 4D model of Thiagarajan. The instruments of this research were interview guides, observation sheets, validation sheets for the model and learning tools, questionnaires for both teachers' and learners' responses, assessment sheets for scientific processing skills, and an achievement test. The results showed that the developed model and learning devices were declared valid according to expert assessment. The model and learning devices were practical based on the observations and the questionnaires. The application of research-based physics learning could effectively improve the scientific skills and learning outcomes of students. This model is suggested to high school physics teachers for implementing research-based learning.

  16. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    Directory of Open Access Journals (Sweden)

    Samreen Laghari

    Full Text Available Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a Complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multiagent system approach.

  17. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    Science.gov (United States)

    Laghari, Samreen; Niazi, Muaz A

    2016-01-01

    Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a Complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multiagent system approach.
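
    To give a flavour of what an exploratory agent-based model of this kind looks like, here is a minimal sketch: node agents with idle/active power draws, a usage policy expressed as an activity probability, and an aggregate carbon footprint. All parameters, including the emission factor, are illustrative assumptions, not values from the paper.

      import random

      class NodeAgent:
          """A network device with idle and active power draws (watts)."""
          def __init__(self, idle_w=40.0, active_w=120.0):
              self.idle_w, self.active_w = idle_w, active_w

          def draw_w(self, p_active):
              # the usage policy enters as the probability of being active
              return self.active_w if random.random() < p_active else self.idle_w

      def footprint_kg_co2(agents, hours, p_active, kg_per_kwh=0.5):
          wh = sum(a.draw_w(p_active) for a in agents for _ in range(hours))
          return wh / 1000.0 * kg_per_kwh

      agents = [NodeAgent() for _ in range(100)]
      print(footprint_kg_co2(agents, 24, 0.9))   # no policy: mostly active
      print(footprint_kg_co2(agents, 24, 0.3))   # with a usage policy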

  18. Modeling software behavior a craftsman's approach

    CERN Document Server

    Jorgensen, Paul C

    2009-01-01

    A common problem with most texts on requirements specifications is that they emphasize structural models to the near exclusion of behavioral models, focusing on what the software is rather than what it does. If they do cover behavioral models, the coverage is brief and usually focused on a single model. Modeling Software Behavior: A Craftsman's Approach provides detailed treatment of various models of software behavior that support early analysis, comprehension, and model-based testing. It is based on the popular and continually evolving course on requirements specification models taught by the author.

  19. Reconstructing Organophosphorus Pesticide Doses Using the Reversed Dosimetry Approach in a Simple Physiologically-Based Pharmacokinetic Model

    Directory of Open Access Journals (Sweden)

    Chensheng Lu

    2012-01-01

    Full Text Available We illustrated the development of a simple pharmacokinetic (SPK) model aiming to estimate absorbed chlorpyrifos doses using urinary biomarker data, 3,5,6-trichloropyridinol, as the model input. The effectiveness of the SPK model in pesticide risk assessment was evaluated by comparing dose estimates using different urinary composite data. The dose estimates resulting from the first morning voids appeared to be lower than, but not significantly different from, those using before-bedtime, lunch, or dinner voids. We found a similar trend for dose estimates using three different urinary composite data sets. However, the dose estimates using the SPK model for individual children were significantly higher than those from conventional physiologically based pharmacokinetic (PBPK) modeling using aggregate environmental measurements of chlorpyrifos as the model inputs. The use of urinary data in the SPK model intuitively provided a plausible alternative to the conventional PBPK model in reconstructing the absorbed chlorpyrifos dose.
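
    The reversed-dosimetry arithmetic behind such an SPK model can be illustrated with a steady-state one-compartment sketch; the excretion fraction and the worked numbers are placeholders, not the paper's fitted parameters.

      # Reversed dosimetry sketch: scale the urinary metabolite (TCPy) mass
      # back to an absorbed parent (chlorpyrifos) dose via the molar
      # excretion fraction and the molecular-weight ratio.
      def absorbed_dose_ug_per_kg(tcpy_ug, f_excreted, body_mass_kg,
                                  mw_parent=350.6, mw_metabolite=198.4):
          moles_metabolite = tcpy_ug / mw_metabolite
          parent_ug = (moles_metabolite / f_excreted) * mw_parent
          return parent_ug / body_mass_kg

      # e.g. 10 ug TCPy in 24-h urine, 70 % molar excretion, 20 kg child:
      print(absorbed_dose_ug_per_kg(10.0, 0.7, 20.0))   # ~1.26 ug/kg/day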

  20. GPU-based local interaction simulation approach for simplified temperature effect modelling in Lamb wave propagation used for damage detection

    International Nuclear Information System (INIS)

    Kijanka, P; Radecki, R; Packo, P; Staszewski, W J; Uhl, T

    2013-01-01

    Temperature has a significant effect on Lamb wave propagation. It is important to compensate for this effect when the method is considered for structural damage detection. The paper explores a newly proposed, very efficient numerical simulation tool for Lamb wave propagation modelling in aluminum plates exposed to temperature changes. A local interaction approach implemented with a parallel computing architecture and graphics cards is used for these numerical simulations. The numerical results are compared with the experimental data. The results demonstrate that the proposed approach could be used efficiently to produce a large database required for the development of various temperature compensation procedures in structural health monitoring applications. (paper)
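
    The appeal of local-interaction schemes for GPUs is that every node is updated from its nearest neighbours only, so all updates can run in parallel. A didactic 1-D finite-difference analogue is sketched below (this is not the paper's LISA formulation; temperature would enter through the temperature-dependent elastic constants, i.e. the wave speed c).

      import numpy as np

      nx, c, dx = 400, 5000.0, 1e-3     # nodes, wave speed (m/s), node spacing (m)
      dt = 0.9 * dx / c                 # CFL-stable time step
      r2 = (c * dt / dx) ** 2
      u_prev = np.zeros(nx)
      u = np.zeros(nx)
      u[nx // 2] = 1e-6                 # initial perturbation mid-plate
      for _ in range(500):
          u_next = np.empty_like(u)
          # each node interacts only with its two neighbours:
          u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + r2*(u[2:] - 2*u[1:-1] + u[:-2])
          u_next[0] = u_next[-1] = 0.0  # fixed edges
          u_prev, u = u, u_next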

  1. Risk-based technical specifications: Development and application of an approach to the generation of a plant specific real-time risk model

    International Nuclear Information System (INIS)

    Puglia, B.; Gallagher, D.; Amico, P.; Atefi, B.

    1992-10-01

    This report describes a process developed to convert an existing PRA into a model amenable to real-time, risk-based technical specification calculations. In earlier studies (culminating in NUREG/CR-5742), several risk-based approaches to technical specifications were evaluated. A real-time approach using a plant-specific PRA capable of modeling plant configurations as they change was identified as the most comprehensive approach to control plant risk. A master fault tree logic model representative of all of the core damage sequences was developed. Portions of the system fault trees were modularized, and supercomponents comprised of component failures with similar effects were developed to reduce the size of the model and quantification times. Modifications to the master fault tree logic were made to properly model the effect of maintenance and recovery actions. Fault trees representing several actuation systems not modeled in detail in the existing PRA were added to the master fault tree logic. This process was applied to the Surry NUREG-1150 Level 1 PRA. The master logic model was confirmed. The model was then used to evaluate the frequency associated with several plant configurations using the IRRAS code. For all cases analyzed, computational time was less than three minutes. This document, Volume 2, contains Appendices A, B, and C. These provide, respectively: the Surry Technical Specifications Model Database, the Surry Technical Specifications Model, and a list of supercomponents used in the Surry Technical Specifications Model
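
    The configuration-dependent quantification such a real-time model performs can be sketched with minimal cut sets and the rare-event approximation; the events, probabilities and cut sets below are invented for illustration, not taken from the Surry model.

      import math

      basic_events = {"pump_a": 1e-3, "pump_b": 1e-3, "dg_1": 5e-3, "ccf": 1e-4}
      cut_sets = [("pump_a", "pump_b"), ("dg_1",), ("ccf",)]

      def top_event_prob(cut_sets, p):
          # rare-event approximation: sum over cut sets of the product
          # of their basic-event probabilities
          return sum(math.prod(p[e] for e in cs) for cs in cut_sets)

      print(top_event_prob(cut_sets, basic_events))     # baseline configuration
      in_maintenance = dict(basic_events, dg_1=1.0)     # diesel generator out
      print(top_event_prob(cut_sets, in_maintenance))   # reconfigured risk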

  2. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we...

  3. RSM and ANN modeling-based optimization approach for the development of ultrasound-assisted liposome encapsulation of piceid.

    Science.gov (United States)

    Huang, Shang-Ming; Kuo, Chia-Hung; Chen, Chun-An; Liu, Yung-Chuan; Shieh, Chwen-Jen

    2017-05-01

    Piceid, a naturally occurring derivative of resveratrol found in many plants, has recently been considered as a potential nutraceutical. However, its poor water solubility impairs drug dispersion and absorption in the human body, a problem that remains unsolved. Liposomes, well-known aqueous carriers for water-insoluble ingredients, are commonly applied in drug delivery systems. In this study, piceid was encapsulated into a liposomal formulation as an aqueous carrier to overcome its poor water solubility. The encapsulation process was assisted by ultrasound, with lipid content, ultrasound power, and ultrasound time investigated for their effect on encapsulation efficiency (E.E%), absolute loading (A.L%), and particle size (PS). Moreover, both RSM and ANN methodologies were applied to optimize the ultrasound-assisted encapsulation process. The data indicated that the most important effect on encapsulation performance was that of lipid content, followed by ultrasound time and ultrasound power. The maximum E.E% (75.82%) and A.L% (2.37%) were obtained with a lipid content of 160 mg, an ultrasound time of 24 min, and an ultrasound power of 90 W. The predicted E.E% and A.L% were in good agreement with the experimental results for both RSM and ANN. Moreover, RMSE, R2, and AAD statistics were used to compare the prediction abilities of RSM and ANN on the validation data set. The results indicated that the prediction accuracy of ANN was better than that of RSM. In conclusion, ultrasound-assisted liposome encapsulation can be an efficient strategy for producing well-soluble/dispersed piceid, which could be further applied to promote human health through increased efficiency of biological absorption, and the process of ultrasound-mediated liposome
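
    A minimal sketch of the RSM-versus-ANN comparison, on synthetic stand-in data (scikit-learn is used here purely for illustration; the paper does not name its software, and the response function below is invented):

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.metrics import mean_squared_error, r2_score

      # X columns: lipid content (mg), ultrasound time (min), power (W);
      # y: encapsulation efficiency (%).  Synthetic stand-in data.
      rng = np.random.default_rng(0)
      X = rng.uniform([80, 6, 30], [160, 24, 90], size=(30, 3))
      y = 40 + 0.2*X[:, 0] + 0.5*X[:, 1] - 0.001*X[:, 0]**2 + rng.normal(0, 1, 30)

      rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
      ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                         random_state=0).fit(X, y)

      for name, model in [("RSM", rsm), ("ANN", ann)]:
          pred = model.predict(X)
          rmse = mean_squared_error(y, pred) ** 0.5
          aad = np.mean(np.abs((y - pred) / y)) * 100   # absolute average deviation, %
          print(name, rmse, r2_score(y, pred), aad)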

  4. Do pseudo-absence selection strategies influence species distribution models and their predictions? An information-theoretic approach based on simulated data

    Directory of Open Access Journals (Sweden)

    Guisan Antoine

    2009-04-01

    Full Text Available Abstract Background Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches: pseudo-absences selected from low suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results Models built with true absences had the best predictive power, best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences, and perform model selection based on an information theoretic approach. However, the resulting models can be expected to have
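
    The recommended recipe (random background pseudo-absences plus AIC-based selection) is easy to sketch; the simulated "virtual species" below is our own toy, not the paper's, and statsmodels stands in for whatever GLM software was used.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      env = rng.normal(size=(5000, 3))                  # three environmental predictors
      # virtual species: predictor 2 is irrelevant by construction
      p_true = 1 / (1 + np.exp(-(1.5*env[:, 0] - 1.0*env[:, 1])))
      presence_idx = np.where(rng.random(5000) < p_true * 0.1)[0]
      # random background pseudo-absences (may overlap presences; fine for a sketch)
      pseudo_abs_idx = rng.choice(5000, size=len(presence_idx) * 10, replace=False)

      y = np.r_[np.ones(len(presence_idx)), np.zeros(len(pseudo_abs_idx))]
      X = np.vstack([env[presence_idx], env[pseudo_abs_idx]])

      for cols in [(0,), (0, 1), (0, 1, 2)]:            # candidate predictor sets
          Xd = sm.add_constant(X[:, cols])
          fit = sm.GLM(y, Xd, family=sm.families.Binomial()).fit()
          print(cols, fit.aic)                          # lowest AIC wins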

  5. Exploring the Utility of Logistic Mixed Modeling Approaches to Simultaneously Investigate Item and Testlet DIF on Testlet-based Data.

    Science.gov (United States)

    Fukuhara, Hirotaka; Paek, Insu

    2016-01-01

    This study explored the utility of logistic mixed models for the analysis of differential item functioning when item response data were testlet-based. Decomposition of differential item functioning (DIF) into the item level and the testlet level for testlet-based data was introduced to separate possible sources of DIF: (1) an item, (2) a testlet, and (3) both the item and the testlet. A simulation study was conducted to investigate the performance of several logistic mixed models as well as the Mantel-Haenszel method under conditions in which item-related DIF and testlet-related DIF were present simultaneously. The results revealed that a new DIF model based on a logistic mixed model with random item effects and item covariates could capture item-related DIF and testlet-related DIF well under certain conditions.
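
    Schematically, the decomposition can be written as a logistic mixed model (notation ours, not the paper's): with person ability θ_p, random item effect b_i, a person-by-testlet effect u for the testlet t(i) containing item i, and focal-group indicator G_p,

      logit P(Y_pi = 1) = θ_p + b_i + u_{p,t(i)} + (β_item(i) + β_testlet(t(i))) · G_p

    DIF then shows up at the item level through β_item, at the testlet level through β_testlet, or at both.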

  6. A wavelet-based Bayesian approach to regression models with long memory errors and its application to FMRI data.

    Science.gov (United States)

    Jeong, Jaesik; Vannucci, Marina; Ko, Kyungduk

    2013-03-01

    This article considers linear regression models with long memory errors. These models have proven useful for applications in many areas, such as medical imaging, signal processing, and econometrics. Wavelets, being self-similar, have a strong connection to long memory data. Here we employ discrete wavelet transforms as whitening filters to simplify the dense variance-covariance matrix of the data. We then adopt a Bayesian approach for the estimation of the model parameters. Our inferential procedure uses exact wavelet coefficient variances and leads to accurate estimates of the model parameters. We explore performance on simulated data and present an application to an fMRI data set. In the application we produce posterior probability maps (PPMs) that aid interpretation by identifying voxels that are likely activated with a given confidence. Copyright © 2013, The International Biometric Society.
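
    The whitening step rests on a standard property of long-memory processes: their detail coefficients under a discrete wavelet transform are approximately uncorrelated, with variances depending only on the scale. A minimal illustration with PyWavelets (the series below is a crude persistent proxy, not fMRI data):

      import numpy as np
      import pywt

      rng = np.random.default_rng(2)
      x = np.cumsum(rng.normal(size=1024)) * 0.01     # crude long-memory proxy
      coeffs = pywt.wavedec(x, "db4", level=5)        # DWT as a whitening filter
      scale_vars = [float(np.var(d)) for d in coeffs[1:]]
      print(scale_vars)   # one variance per scale replaces a dense 1024x1024 matrix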

  7. On the Formal Modeling of Games of Language and Adversarial Argumentation : A Logic-Based Artificial Intelligence Approach

    OpenAIRE

    Eriksson Lundström, Jenny S. Z.

    2009-01-01

    Argumentation is a highly dynamical and dialectical process drawing on human cognition. Successful argumentation is ubiquitous to human interaction. Comprehensive formal modeling and analysis of argumentation presupposes a dynamical approach to the following phenomena: the deductive logic notion, the dialectical notion and the cognitive notion of justified belief. For each step of an argumentation these phenomena form networks of rules which determine the propositions to be allowed to make se...

  8. An artificial intelligence approach for modeling molecular self-assembly: agent-based simulations of rigid molecules.

    Science.gov (United States)

    Fortuna, Sara; Troisi, Alessandro

    2009-07-23

    Agent-based simulations are rule-based models traditionally used for the simulation of complex systems. In this paper, an algorithm based on the concept of agent-based simulations is developed to predict the lowest energy packing of a set of identical rigid molecules. The agents are identified with rigid portions of the system under investigation, and they evolve following a set of rules designed to drive the system toward the lowest energy minimum. The algorithm is compared with a conventional Metropolis Monte Carlo algorithm, and it is applied to a large set of representative models of molecules. For all the systems studied, the agent-based method consistently finds significantly lower energy minima than the Monte Carlo algorithm because the system evolution includes elements of adaptation (new configurations induce new types of moves) and learning (past successful choices are repeated).
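
    For reference, the Metropolis acceptance rule the agent-based algorithm is benchmarked against, in textbook form (the temperature value in the usage line is arbitrary):

      import math
      import random

      def metropolis_accept(delta_e, temperature):
          """Accept a proposed move: always if the energy decreases,
          otherwise with the Boltzmann probability exp(-dE/T)."""
          if delta_e <= 0.0:
              return True
          return random.random() < math.exp(-delta_e / temperature)

      # e.g. an uphill move of 1.0 energy unit at T = 0.5 is accepted ~13.5% of the time:
      print(sum(metropolis_accept(1.0, 0.5) for _ in range(10000)) / 10000)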

  9. Integrated Case-Based Applied Pathology (ICAP): a diagnostic-approach model for the learning and teaching of veterinary pathology.

    Science.gov (United States)

    Krockenberger, Mark B; Bosward, Katrina L; Canfield, Paul J

    2007-01-01

    Integrative Case-Based Applied Pathology (ICAP) cases form one component of learning and understanding the role of pathology in the veterinary diagnostic process at the Faculty of Veterinary Science, University of Sydney. It is a strategy that focuses on student-centered learning in a problem-solving context in the year 3 curriculum. Learning exercises use real case material and are primarily delivered online, providing flexibility for students with differing learning needs, who are supported by online, peer, and tutor support. The strategy relies heavily on the integration of pre-clinical and para-clinical information with the introduction of clinical material for the purposes of a logical three-level, problem-oriented approach to the diagnosis of disease. The focus is on logical diagnostic problem solving, primarily using gross pathology and histopathological material, with the inclusion of microbiological, parasitological, and clinical pathological data. The ICAP approach is linked to and congruent with the problem-oriented approach adopted in veterinary medicine and the case-based format used by one of the authors (PJC) for the teaching and learning of veterinary clinical pathology in year 4. Additionally, final-year students have the opportunity, during a diagnostic pathology rotation, to assist in the development and refinement of further ICAPs, which reinforces the importance of pathology in the veterinary diagnostic process. Evidence of the impact of the ICAP approach, based primarily on student surveys and staff peer feedback collected over five years, shows that discipline-specific learning, vertical and horizontal integration, alignment of learning outcomes and assessment, and both veterinary and generic graduate attributes were enhanced. Areas for improvement were identified in the approach, most specifically related to assistance in the development of generic teamwork skills.

  10. Analyzing Students' Learning Progressions Throughout a Teaching Sequence on Acoustic Properties of Materials with a Model-Based Inquiry Approach

    Science.gov (United States)

    Hernández, María Isabel; Couso, Digna; Pintó, Roser

    2015-04-01

    The study we have carried out aims to characterize 15- to 16-year-old students' learning progressions throughout the implementation of a teaching-learning sequence on the acoustic properties of materials. Our purpose is to better understand students' modeling processes about this topic and to identify how the instructional design and actual enactment influence students' learning progressions. This article presents the design principles which elicit the structure and types of modeling and inquiry activities designed to promote students' development of three conceptual models. Some of these activities are enhanced by the use of ICT such as sound level meters connected to data capture systems, which facilitate the measurement of the intensity level of sound emitted by a sound source and transmitted through different materials. Framed within the design-based research paradigm, the study consists of the experimentation of the designed teaching sequence with two groups of students (n = 29) in their science classes. The analysis of students' written productions together with classroom observations of the implementation of the teaching sequence allowed us to characterize students' development of the conceptual models. Moreover, we could evidence the influence of different modeling and inquiry activities on students' development of the conceptual models, identifying those that have a major impact on students' modeling processes. Having evidenced different levels of development of each conceptual model, we interpreted our results in terms of the attributes of each conceptual model, the distance between students' preliminary mental models and the intended conceptual models, and the instructional design and enactment.

  11. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms...
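
    A minimal rendering of that characterization as a data structure (the field names and the sample event are ours, for illustration only):

      from dataclasses import dataclass, field

      @dataclass
      class Event:
          """A short-duration process with participants, consequences,
          and properties, per the paper's characterization."""
          name: str
          participants: list = field(default_factory=list)  # involved entities
          consequences: list = field(default_factory=list)  # resulting state changes
          properties: dict = field(default_factory=dict)    # e.g. time, location

      order = Event("OrderPlaced",
                    participants=["customer", "web_shop"],
                    consequences=["stock_reserved"],
                    properties={"time": "2010-06-01T12:00"})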

  12. Gaussian-Based Smooth Dielectric Function: A Surface-Free Approach for Modeling Macromolecular Binding in Solvents

    Directory of Open Access Journals (Sweden)

    Arghya Chakravorty

    2018-03-01

    Full Text Available Conventional modeling techniques to model macromolecular solvation and its effect on binding in the framework of Poisson-Boltzmann based implicit solvent models make use of a geometrically defined surface to depict the separation of the macromolecular interior (low dielectric constant) from the solvent phase (high dielectric constant). Though this simplification saves time and computational resources without significantly compromising the accuracy of free energy calculations, it bypasses some of the key physico-chemical properties of the solute-solvent interface, e.g., the altered flexibility of water molecules and that of side chains at the interface, which results in dielectric properties different from both bulk water and the macromolecular interior, respectively. Here we present a Gaussian-based smooth dielectric model, an inhomogeneous dielectric distribution model that mimics the effect of macromolecular flexibility and captures the altered properties of surface-bound water molecules. Thus, the model delivers a smooth transition of dielectric properties from the macromolecular interior to the solvent phase, eliminating any unphysical surface separating the two phases. Using various examples of macromolecular binding, we demonstrate its utility and illustrate the comparison with the conventional 2-dielectric model. We also showcase some additional abilities of this model, viz. to account for the effect of electrolytes in the solution and to render the distribution profile of water across a lipid membrane.
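
    The core idea admits a compact numerical sketch: atom-centred Gaussians define a density that blends the interior and solvent dielectric values smoothly, with no explicit molecular surface. The parameter values below (ε_in, ε_out, the width σ) are illustrative, not the paper's.

      import numpy as np

      def smooth_epsilon(grid_pts, atom_pos, atom_radii,
                         eps_in=2.0, eps_out=80.0, sigma=0.9):
          """Gaussian-based smooth dielectric evaluated at grid points."""
          density = np.zeros(len(grid_pts))
          for pos, rad in zip(atom_pos, atom_radii):
              d2 = np.sum((grid_pts - pos) ** 2, axis=1)
              gauss = np.exp(-d2 / (sigma ** 2 * rad ** 2))
              density = 1.0 - (1.0 - density) * (1.0 - gauss)   # union of Gaussians
          return density * eps_in + (1.0 - density) * eps_out

      pts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 5.0]])        # inside vs. far away
      print(smooth_epsilon(pts, np.array([[0.0, 0.0, 0.0]]), [1.7]))  # ~[2, 80]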

  13. The Secondary Organic Aerosol Processor (SOAP v1.0) model: a unified model with different ranges of complexity based on the molecular surrogate approach

    Science.gov (United States)

    Couvidat, F.; Sartelet, K.

    2015-04-01

    In this paper the Secondary Organic Aerosol Processor (SOAP v1.0) model is presented. This model determines the partitioning of organic compounds between the gas and particle phases. It is designed to be modular with different user options depending on the computation time and the complexity required by the user. This model is based on the molecular surrogate approach, in which each surrogate compound is associated with a molecular structure to estimate some properties and parameters (hygroscopicity, absorption into the aqueous phase of particles, activity coefficients and phase separation). Each surrogate can be hydrophilic (condenses only into the aqueous phase of particles), hydrophobic (condenses only into the organic phases of particles) or both (condenses into both the aqueous and the organic phases of particles). Activity coefficients are computed with the UNIFAC (UNIversal Functional group Activity Coefficient; Fredenslund et al., 1975) thermodynamic model for short-range interactions and with the Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients (AIOMFAC) parameterization for medium- and long-range interactions between electrolytes and organic compounds. Phase separation is determined by Gibbs energy minimization. The user can choose between an equilibrium representation and a dynamic representation of organic aerosols (OAs). In the equilibrium representation, compounds in the particle phase are assumed to be at equilibrium with the gas phase. However, recent studies show that the organic aerosol is not at equilibrium with the gas phase because the organic phases could be semi-solid (very viscous liquid phase). The condensation-evaporation of organic compounds could then be limited by the diffusion in the organic phases due to the high viscosity. An implicit dynamic representation of secondary organic aerosols (SOAs) is available in SOAP with OAs divided into layers, the first layer being at the center of the particle (slowly
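
    The gas-particle equilibrium such models compute is conventionally expressed through an absorptive partitioning coefficient; one common (Pankow-type) form, shown here for a pure organic phase with notation that is ours rather than the paper's, is

      K_{p,i} = 760 · R · T / (10⁶ · MW_om · γ_i · p°_{L,i})      [m³ µg⁻¹]

    with R = 8.206×10⁻⁵ m³ atm mol⁻¹ K⁻¹, temperature T, mean organic molar mass MW_om (g mol⁻¹), activity coefficient γ_i, and sub-cooled liquid vapour pressure p°_{L,i} (torr). The molecular-surrogate machinery described above supplies the γ_i (via UNIFAC/AIOMFAC) and the phase into which each surrogate may partition.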

  14. A Hybrid Dual-Source Model of Estimating Evapotranspiration over Different Ecosystems and Implications for Satellite-Based Approaches

    Directory of Open Access Journals (Sweden)

    Hanyu Lu

    2014-09-01

    Full Text Available Accurate estimation of evapotranspiration (ET) and its components is critical to developing a better understanding of climate, hydrology, and vegetation coverage conditions for areas of interest. A hybrid dual-source (H-D) model incorporating the strengths of the two-layer and two-patch schemes was proposed to estimate actual ET processes by considering varying vegetation coverage patterns and soil moisture conditions. The proposed model was tested in four different ecosystems, including deciduous broadleaf forest, woody savannas, grassland, and cropland. Performance of the H-D model was compared with that of the Penman-Monteith (P-M) model, the Shuttleworth-Wallace (S-W) model, as well as the Two-Patch (T-P) model, with ET and/or its components (i.e., transpiration and evaporation) being evaluated against eddy covariance measurements. Overall, ET estimates from the developed H-D model agreed reasonably well with the ground-based measurements at all sites, with mean absolute errors ranging from 16.3 W/m2 to 38.6 W/m2, indicating good performance of the H-D model in all ecosystems being tested. In addition, the H-D model provides a more reasonable partitioning of evaporation and transpiration than the other models in the ecosystems tested.
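
    For orientation, the single-source Penman-Monteith baseline against which the H-D model is compared takes the standard form

      λE = ( Δ (R_n − G) + ρ_a c_p (e_s − e_a) / r_a ) / ( Δ + γ (1 + r_s / r_a) )

    with net radiation R_n, soil heat flux G, vapour pressure deficit (e_s − e_a), aerodynamic and surface resistances r_a and r_s, psychrometric constant γ, and slope Δ of the saturation vapour-pressure curve. Dual-source schemes such as S-W and the H-D model split this single "big leaf" into separate canopy (transpiration) and soil (evaporation) sources.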

  15. Indoor Residual Spraying Delivery Models to Prevent Malaria: Comparison of Community- and District-Based Approaches in Ethiopia

    Science.gov (United States)

    Johns, Benjamin; Yihdego, Yemane Yeebiyo; Kolyada, Lena; Dengela, Dereje; Chibsa, Sheleme; Dissanayake, Gunawardena; George, Kristen; Taffese, Hiwot Solomon; Lucas, Bradford

    2016-01-01

    Background: Indoor residual spraying (IRS) for malaria prevention has traditionally been implemented in Ethiopia by the district health office with technical and operational inputs from regional, zonal, and central health offices. The United States President's Malaria Initiative (PMI) in collaboration with the Government of Ethiopia tested the effectiveness and efficiency of integrating IRS into the government-funded community-based rural health services program. Methods: Between 2012 and 2014, PMI conducted a mixed-methods study in 11 districts of Oromia region to compare district-based IRS (DB IRS) and community-based IRS (CB IRS) models. In the DB IRS model, each district included 2 centrally located operational sites where spray teams camped during the IRS campaign and from which they traveled to the villages to conduct spraying. In the CB IRS model, spray team members were hired from the communities in which they operated, thus eliminating the need for transport and camping facilities. The study team evaluated spray coverage, the quality of spraying, compliance with environmental and safety standards, and cost and performance efficiency. Results: The average number of eligible structures found and sprayed in the CB IRS districts increased by 19.6% and 20.3%, respectively, between 2012 (before CB IRS) and 2013 (during CB IRS). Between 2013 and 2014, the numbers increased by about 14%. In contrast, in the DB IRS districts the number of eligible structures found increased by only 8.1% between 2012 and 2013 and by 0.4% between 2013 and 2014. The quality of CB IRS operations was good and comparable to that in the DB IRS model, according to wall bioassay tests. Some compliance issues in the first year of CB IRS implementation were corrected in the second year, bringing compliance up to the level of the DB IRS model. The CB IRS model had, on average, higher amortized costs per district than the DB IRS model but lower unit costs per structure sprayed and per

  16. Filling gaps in notification data: a model-based approach applied to travel related campylobacteriosis cases in New Zealand.

    Science.gov (United States)

    Amene, E; Horn, B; Pirie, R; Lake, R; Döpfer, D

    2016-09-06

    Data containing notified cases of disease are often compromised by incomplete or partial information related to individual cases. In an effort to enhance the value of information from enteric disease notifications in New Zealand, this study explored the use of Bayesian and Multiple Imputation (MI) models to fill risk factor data gaps. As a test case, overseas travel as a risk factor for infection with campylobacteriosis was examined. Two methods, namely Bayesian Specification (BAS) and Multiple Imputation (MI), were compared regarding predictive performance for various levels of artificially induced missingness of overseas travel status in campylobacteriosis notification data. Predictive performance of the models was assessed through the Brier Score, the Area Under the ROC Curve, and the Percent Bias of regression coefficients. Finally, the best model was selected and applied to predict the missing overseas travel status of campylobacteriosis notifications. While no difference was observed in the predictive performance of the BAS and MI methods at a lower rate of missingness (campylobacteriosis cases was estimated at 0.16 (0.02, 0.48). The use of BAS offers a flexible approach to data augmentation, particularly when the missing rate is very high and when the Missing At Random (MAR) assumption holds. The high rates of travel-associated cases in urban regions of New Zealand predicted by this approach are plausible given the high rate of travel in these regions, including to destinations with a higher risk of infection. The added advantage of using a Bayesian approach is that the model's prediction can be improved whenever new information becomes available.
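
    The evaluation protocol (mask the travel indicator at a chosen rate, predict it, score the predictions) is easy to sketch; the data below are synthetic and scikit-learn is used only to show the two scoring metrics, not to reproduce the study's Bayesian or MI models.

      import numpy as np
      from sklearn.metrics import brier_score_loss, roc_auc_score

      rng = np.random.default_rng(42)
      y_true = rng.integers(0, 2, 500)                  # true travel status of masked cases
      # stand-in for model-imputed probabilities of overseas travel:
      p_hat = np.clip(0.2 + 0.6 * y_true + rng.normal(0, 0.15, 500), 0, 1)
      print("Brier:", brier_score_loss(y_true, p_hat))  # lower is better
      print("AUC:  ", roc_auc_score(y_true, p_hat))     # higher is better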

  17. An Integrated Approach Based on Numerical Modelling and Geophysical Survey to Map Groundwater Salinity in Fractured Coastal Aquifers

    Directory of Open Access Journals (Sweden)

    Costantino Masciopinto

    2017-11-01

    Full Text Available Aquifer over-exploitation may increase coastal seawater intrusion by reducing freshwater availability. Fractured subsurface formations commonly host important freshwater reservoirs along sea coasts. These water resources are particularly vulnerable to contamination due to seawater infiltration occurring through rapid pathways via fractures. Modeling of density-driven fluid flow in fractured aquifers is complex, as their hydrodynamics are controlled by interactions between preferential flow pathways, 3D interconnected fractures, and rock-matrix porosity distribution. Moreover, physical heterogeneities produce highly localized water infiltrations that make the modeling of saltwater transport in such aquifers very challenging. The new approach described in this work provides a reliable hydrogeological model suitable to reproduce local advancements of the freshwater/saltwater wedge in coastal aquifers. The proposed model uses flow simulation results to estimate water salinities in groundwater at a specific depth (1 m below the water table) by means of positions of the Ghyben-Herzberg saltwater/freshwater sharp interface along the coast. Measurements of salinity in 25 boreholes (i.e., salinity profiles) have been used for the model calibration. The results provide the groundwater salinity map in the freshwater/saltwater transition coastal zones of the Bari (Southern Italy) fractured aquifer. Non-invasive geophysical measurements in groundwater, particularly in 2D vertical cross-sections, were carried out using electrical resistivity tomography (ERT) in order to validate the model results. The presented integrated approach is very easy to apply and gives very realistic salinity maps in heterogeneous aquifers, without simulating density-driven water flow in fractures.
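
    The sharp-interface rule used to position the wedge is the classical Ghyben-Herzberg relation: with freshwater head h above mean sea level and typical densities ρ_f ≈ 1000 and ρ_s ≈ 1025 kg m⁻³, the interface depth z below sea level is

      z = ρ_f / (ρ_s − ρ_f) · h ≈ 40 h

    so every metre of freshwater head pushes the saltwater interface roughly forty metres down, which is why small head changes from over-exploitation can move the wedge substantially.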

  18. Building a model for disease classification integration in oncology, an approach based on the national cancer institute thesaurus.

    Science.gov (United States)

    Jouhet, Vianney; Mougin, Fleur; Bréchat, Bérénice; Thiessard, Frantz

    2017-02-07

    Identifying incident cancer cases within a population remains essential for scientific research in oncology. Data produced within electronic health records can be useful for this purpose. Due to the multiplicity of providers, heterogeneous terminologies such as ICD-10 and ICD-O-3 are used for oncology diagnosis recording purposes. To enable disease identification based on these diagnoses, there is a need for integrating disease classifications in oncology. Our aim was to build a model integrating concepts involved in two disease classifications, namely ICD-10 (diagnosis) and ICD-O-3 (topography and morphology), despite their structural heterogeneity. Based on the NCIt, a "derivative" model for linking diagnosis and topography-morphology combinations was defined and built. ICD-O-3 and ICD-10 codes were then used to instantiate classes of the "derivative" model. Links between terminologies obtained through the model were then compared to mappings provided by the Surveillance, Epidemiology, and End Results (SEER) program. The model integrated 42% of neoplasm ICD-10 codes (excluding metastasis), 98% of ICD-O-3 morphology codes (excluding metastasis) and 68% of ICD-O-3 topography codes. For every code instantiating at least one class in the "derivative" model, comparison with SEER mappings reveals that all mappings were actually available in the model as links between the corresponding codes. We have proposed a method to automatically build a model for integrating ICD-10 and ICD-O-3 based on the NCIt. The resulting "derivative" model is a machine-understandable resource that enables an integrated view of these heterogeneous terminologies. The NCIt structure and the available relationships can help to bridge disease classifications taking into account their structural and granular heterogeneities. However, (i) inconsistencies exist within the NCIt leading to misclassifications in the "derivative" model, (ii) the "derivative" model only integrates a part of ICD-10 and ICD
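
    The "derivative"-model idea can be pictured as a shared concept bridging the two terminologies. The toy structure below uses real codes (ICD-10 C34, malignant neoplasm of bronchus and lung; ICD-O-3 topography C34.9 with morphology 8010/3) but an invented linking table, not the paper's NCIt-derived resource.

      # Hypothetical illustration of a concept bridging ICD-10 and ICD-O-3.
      derivative_model = {
          "LungCarcinoma": {
              "icd10": ["C34"],                    # diagnosis code
              "icdo3": [("C34.9", "8010/3")],      # (topography, morphology)
          },
      }

      def icd10_for(topography, morphology):
          return [code
                  for concept in derivative_model.values()
                  if (topography, morphology) in concept["icdo3"]
                  for code in concept["icd10"]]

      print(icd10_for("C34.9", "8010/3"))          # -> ['C34']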