WorldWideScience

Sample records for modeling studies based

  1. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    Full Text Available A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image based approaches are used to generate virtual 3D city models: the first is sketch based modeling, the second is procedural grammar based modeling, the third is close range photogrammetry based modeling, and the fourth is based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively, and they offer different methods suitable for image based 3D city modeling. A literature review shows that, to date, no such comparative study is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and output 3D model products. The study area for this research work is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, and gives a brief introduction to the strengths and weaknesses of these four image based techniques, together with comments on what can and cannot be done with each software package. The study concludes that each software package has advantages and limitations, and that the choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good

  2. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed.
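
    For reference, the Cramér-Rao benchmark mentioned in this record can be sketched as follows; the Poisson projection-noise assumption is ours for illustration and is not necessarily the authors' exact observation model.

```latex
% Sketch of the Cramér-Rao lower bound used to benchmark a joint ML estimator.
% Assume projection measurements y_i ~ Poisson(\bar{y}_i(\theta)), where \theta
% collects the physiological and myocardial-boundary parameters (an assumption
% for illustration, not the authors' stated model).
\[
  \mathbf{I}(\theta)_{jk} \;=\; \sum_{i}
    \frac{1}{\bar{y}_i(\theta)}\,
    \frac{\partial \bar{y}_i(\theta)}{\partial \theta_j}\,
    \frac{\partial \bar{y}_i(\theta)}{\partial \theta_k},
  \qquad
  \operatorname{Cov}\bigl(\hat{\theta}\bigr) \;\succeq\; \mathbf{I}(\theta)^{-1},
\]
% so the variance of any unbiased estimate of \theta_j is bounded below by the
% j-th diagonal entry of the inverse Fisher information matrix.
```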

  3. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  4. A model for fine mapping in family based association studies.

    Science.gov (United States)

    Boehringer, Stefan; Pfeiffer, Ruth M

    2009-01-01

    Genome wide association studies for complex diseases are typically followed by more focused characterization of the identified genetic region. We propose a latent class model to evaluate a candidate region with several measured markers using observations on families. The main goal is to estimate linkage disequilibrium (LD) between the observed markers and the putative true but unobserved disease locus in the region. Based on this model, we estimate the joint distribution of alleles at the observed markers and the unobserved true disease locus, and a penetrance parameter measuring the impact of the disease allele on disease risk. A family specific random effect allows for varying baseline disease prevalences for different families. We present a likelihood framework for our model and assess its properties in simulations. We apply the model to an Alzheimer data set and confirm previous findings in the ApoE region.
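
    As a rough, hedged sketch of the likelihood framework this record describes (the notation and the simplified treatment of within-family genotype dependence are ours, not the authors'), the contribution of one family might be written as:

```latex
% Hedged sketch of a latent class likelihood for family f: G_i are the observed
% marker genotypes of member i, D_i the unobserved disease-locus genotype, Y_i
% the disease status, and b_f a family-specific random effect on baseline risk.
% Treating members as conditionally independent given b_f is a simplification;
% the actual model also has to account for familial transmission of alleles.
\[
  L_f(\theta,\beta) \;=\;
  \int \prod_{i \in f} \left[ \sum_{D_i} P_{\theta}(G_i, D_i)\,
        P_{\beta}\!\left(Y_i \mid D_i, b_f\right) \right] \phi(b_f)\,\mathrm{d}b_f ,
\]
% where P_theta(G_i, D_i) carries the LD between the observed markers and the
% disease locus, P_beta is the penetrance model, and phi is the random-effect
% density; the full likelihood is the product of L_f over families.
```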

  5. Model-based design languages: A case study

    OpenAIRE

    Cibrario Bertolotti, Ivan; Hu, Tingting; Navet, Nicolas

    2017-01-01

    Fast-paced innovation in the embedded systems domain puts an ever increasing pressure on effective software development methods, leading to the growing popularity of Model-Based Design (MBD). In this context, a proper choice of modeling languages and related tools - depending on design goals and problem qualities - is crucial to make the most of MBD benefits. In this paper, a comparison between two dissimilar approaches to modeling is carried out, with the goal of highlighting their relative ...

  6. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.

    Science.gov (United States)

    Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M

    2016-06-24

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches.
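
    A minimal sketch of the recognition step described above, assuming each strategy has already been learned as a PFA over observed actions; the toy automata and action names below are hypothetical, not the ones learned in the paper.

```python
import math

# A toy PFA: states are ints, transitions[state][action] = (next_state, prob).
# These automata are illustrative stand-ins, not the ones learned in the paper.
class PFA:
    def __init__(self, transitions, start=0):
        self.transitions = transitions
        self.start = start

    def log_likelihood(self, trace):
        """Log-probability of an observed action trace under this PFA."""
        state, ll = self.start, 0.0
        for action in trace:
            if action not in self.transitions[state]:
                return float("-inf")          # trace impossible under this strategy
            state, p = self.transitions[state][action]
            ll += math.log(p)
        return ll

def recognize(trace, strategies):
    """Behavioral Recognition: pick the strategy whose PFA best explains the trace."""
    return max(strategies, key=lambda name: strategies[name].log_likelihood(trace))

# Two hypothetical strategies for a vacuum-cleaner-like agent.
spiral = PFA({0: {"turn": (0, 0.7), "forward": (0, 0.3)}})
wall   = PFA({0: {"forward": (0, 0.8), "turn": (0, 0.2)}})
print(recognize(["forward", "forward", "turn"], {"spiral": spiral, "wall": wall}))
```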

  7. Injury Based on Its Study in Experimental Models

    Directory of Open Access Journals (Sweden)

    M. Mendes-Braz

    2012-01-01

    Full Text Available The present review focuses on the numerous experimental models used to study the complexity of hepatic ischemia/reperfusion (I/R) injury. Although experimental models of hepatic I/R injury represent a compromise between the clinical reality and experimental simplification, the clinical transfer of experimental results is problematic because of anatomical and physiological differences and the inevitable simplification of experimental work. In this review, the strengths and limitations of the various models of hepatic I/R are discussed. Several strategies to protect the liver from I/R injury have been developed in animal models, and some of these might find their way into clinical practice. We also attempt to highlight the fact that the mechanisms responsible for hepatic I/R injury depend on the experimental model used, and therefore the therapeutic strategies also differ according to the model used. The choice of model must therefore be adapted to the clinical question being answered.

  8. A comparative study of independent particle model based ...

    Indian Academy of Sciences (India)

    We find that, among these three independent particle model based methods, the ss-VSCF method provides the most accurate thermal averages, followed by t-SCF, with v-VSCF being the least accurate. However, the ss-VSCF is found to be computationally very expensive for large molecules. The t-SCF gives ...

  9. Prospects of Source-Separation-Based Sanitation Concepts: A Model-Based Study

    NARCIS (Netherlands)

    Tervahauta, T.H.; Trang Hoang; Hernández, L.; Zeeman, G.; Buisman, C.J.N.

    2013-01-01

    Separation of different domestic wastewater streams and targeted on-site treatment for resource recovery has been recognized as one of the most promising sanitation concepts to re-establish the balance in carbon, nutrient and water cycles. In this study a model was developed based on literature data

  10. In Vivo RNAi-Based Screens: Studies in Model Organisms

    Directory of Open Access Journals (Sweden)

    Miki Yamamoto-Hino

    2013-11-01

    Full Text Available RNA interference (RNAi) is a technique widely used for gene silencing in organisms and cultured cells, and depends on sequence homology between double-stranded RNA (dsRNA) and target mRNA molecules. Numerous cell-based genome-wide screens have successfully identified novel genes involved in various biological processes, including signal transduction, cell viability/death, and cell morphology. However, cell-based screens cannot address cellular processes such as development, behavior, and immunity. Drosophila and Caenorhabditis elegans are two model organisms whose whole bodies and individual body parts have been subjected to RNAi-based genome-wide screening. Moreover, Drosophila RNAi allows the manipulation of gene function in a spatiotemporal manner when it is implemented using the Gal4/UAS system. Using this inducible RNAi technique, various large-scale screens have been performed in Drosophila, demonstrating that the method is straightforward and valuable. However, accumulated evidence reveals that RNAi-based screens suffer from relatively high levels of error, such as false positives and false negatives. Here, we review in vivo RNAi screens in Drosophila and the methods that could be used to remove ambiguity from screening results.

  11. A Physics-Based Modeling Framework for Prognostic Studies

    Science.gov (United States)

    Kulkarni, Chetan S.

    2014-01-01

    Prognostics and Health Management (PHM) methodologies have emerged as one of the key enablers for achieving efficient system-level maintenance as part of a busy operations schedule and for lowering overall life cycle costs. PHM is also emerging as a high-priority issue in critical applications, where the focus is on conducting fundamental research in the field of integrated systems health management. The term diagnostics relates to the ability to detect and isolate faults or failures in a system. Prognostics, on the other hand, is the process of predicting health condition and remaining useful life based on current state, previous conditions and future operating conditions. PHM methods combine sensing, data collection, and interpretation of environmental, operational, and performance-related parameters to indicate a system's health under its actual application conditions. The development of prognostics methodologies for the electronics field has become more important as more electrical systems are being used to replace traditional systems in several applications in the aeronautics, maritime, and automotive fields. The development of prognostics methods for electronics presents several challenges due to the great variety of components used in a system, a continuous development of new electronics technologies, and a general lack of understanding of how electronics fail. Similarly, with electric unmanned aerial vehicles, electric hybrid cars, and commercial passenger aircraft, we are witnessing a drastic increase in the usage of batteries to power vehicles. However, for battery-powered vehicles to operate at maximum efficiency and reliability, it becomes crucial to both monitor battery health and performance and to predict end of discharge (EOD) and end of useful life (EOL) events. We develop an electrochemistry-based model of Li-ion batteries that captures the significant electrochemical processes, is computationally efficient, captures the effects of aging, and is of suitable

  12. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  13. Imperfect Preventive Maintenance Model Study Based On Reliability Limitation

    Directory of Open Access Journals (Sweden)

    Zhou Qian

    2016-01-01

    Full Text Available Effective maintenance is crucial for equipment performance in industry, and imperfect maintenance conforms to the actual failure process. Taking the dynamic preventive maintenance cost into account, a preventive maintenance model was constructed using an age reduction factor. The model takes the minimization of the repair cost rate as its final target and uses the smallest allowed reliability as the replacement condition. Equipment life was assumed to follow a two-parameter Weibull distribution, since it is one of the most commonly adopted distributions for fitting cumulative failure data. Finally, an example verifies the rationality and benefits of the model.
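
    A small numerical sketch of the kind of policy evaluation this record describes: two-parameter Weibull lifetimes, an age reduction factor applied at each imperfect PM, a dynamic (rising) PM cost, and replacement once the allowed minimum reliability would be violated. All parameter values are invented for illustration.

```python
import math

# All parameter values below are made up for illustration.
BETA, ETA = 2.5, 1000.0   # two-parameter Weibull shape and scale (hours)
ALPHA = 0.4               # age reduction factor: effective age after a PM = ALPHA * age
R_MIN = 0.80              # smallest allowed reliability before the next PM/replacement
C_PM0, C_GROW = 1.0, 0.4  # dynamic PM cost: the k-th PM costs C_PM0 * (1 + C_GROW * k)
C_REP = 8.0               # cost of a full replacement

def cost_rate(n_pm):
    """Average cost per hour for a policy of n_pm imperfect PMs, then replacement."""
    age, total_time, total_cost = 0.0, 0.0, 0.0
    for k in range(n_pm + 1):
        # Run until reliability conditional on survival to `age` hits R_MIN:
        # R(age + x) / R(age) = R_MIN  =>  ((age + x)/ETA)^BETA = -ln(R_MIN) + (age/ETA)^BETA
        x = ETA * (-math.log(R_MIN) + (age / ETA) ** BETA) ** (1.0 / BETA) - age
        total_time += x
        if k < n_pm:
            total_cost += C_PM0 * (1.0 + C_GROW * k)  # rising cost of each imperfect PM
            age = ALPHA * (age + x)                   # PM only partially rejuvenates the unit
        else:
            total_cost += C_REP                       # the last cycle ends with a replacement
    return total_cost / total_time

best = min(range(15), key=cost_rate)
print(f"optimal number of imperfect PMs: {best}, cost rate: {cost_rate(best):.5f}")
```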

  14. Comprehensive Study of the Model Mercury-Based Cuprate Superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Greven, Martin [Univ. of Minnesota, Minneapolis, MN (United States)

    2017-11-13

    This is the Final Report on DE-SC0006858, which opened 15 August 2011 and closed 14 August 2017. The Principal Investigator is Martin Greven, School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 555455 (email: greven@umn.edu). The Administrative Point of Contact is Patricia Jondahl, phone: 612-624-5599, email: awards@umn.edu. The DOE Program is the Office of Basic Energy Sciences, Program manager is Dr. P. Thiyagarajan, Neutron Scattering SC-22.2/ Germantown Bldg. (email: Thiyagarajan@science.doe.gov). The chief activity was the crystal growth, characterization, neutron and X-ray scattering study of the mercury-based cuprates, arguably the most desirable high-Tc superconductors for experimental study due to their record values of Tc and their relatively simple crystal structures. It is thought that the unusual magnetic and charge degrees of freedom of the copper-oxygen sheets that form the fundamental building block of all cuprate superconductors give rise to the high Tc and to many other unusual properties exhibited by the class of quantum materials. Neutron scattering experiments were performed to reveal the nature of the magnetic degrees of freedom of the copper-oxygen sheets, whereas X-ray scattering experiments and complementary charge-transport experiments were performed to reveal the nature of the charge degrees of freedom. In addition, collaborations were initiated with experts in the use of complementary experimental techniques. The primary products are (i) scientific articles published in peer-reviewed scientific journals, (ii) scientific presentations at national and international conferences, and (iii) education of postdoctoral researchers, PhD graduate students and undergraduate researchers by providing a research experience in crystal growth, characterization and scattering. Twenty scientific papers were published in peer-reviewed journals, thirty-one invited talks were presented at national or international conferences, or as

  15. A Study of Crisis Management Based on Stakeholders Analysis Model

    Science.gov (United States)

    Qingchun, Yue

    2017-11-01

    From the perspective of stakeholder theory, enterprises should not only serve shareholders but also take care of the demands of stakeholders. Stakeholders in an enterprise crisis are the organizations and individuals that cause the crisis, respond to it, or are affected by it. This paper first reviews the development of stakeholder theory systematically; second, with the help of an enterprise crisis stakeholder analysis model, it analyzes the concept of enterprise crisis stakeholders and their membership, taking Shuanghui Group as an example for further analysis; finally, it puts forward relevant proposals for handling enterprise crises from the stakeholders' perspective.

  16. Pharmacokinetic study of medicinal polymers: models based on dextrans

    International Nuclear Information System (INIS)

    Kulakov, V.N.; Pimenova, G.N.; Matveev, V.A.; Sedov, V.V.; Vasil'ev, A.E.

    1986-01-01

    The authors study the pharmacokinetics of dextrans of various molecular masses modified with fluorescein isothiocyanate (FITC) using a radioisotope method. The radionuclide 125I was selectively bound to a FITC residue attached to the polysaccharide by electrochemical iodination under potentiostatic conditions. In the experiments, dextrans modified with FITC were labeled with 125I (DF-125I) by electrochemical iodination. The separation of DF-125I and FITC from ionic forms of the radionuclide not bound to the polymer was carried out. The properties of the samples obtained are presented. The radioactivity accumulated in the rat organs and urine studied is shown. The features of DF-125I behavior in the blood and liver are examined.

  17. Buckled graphene: A model study based on density functional theory

    KAUST Repository

    Khan, Yasser

    2010-09-01

    We make use of ab initio calculations within density functional theory to investigate the influence of buckling on the electronic structure of single layer graphene. Our systematic study addresses a wide range of bond length and bond angle variations in order to obtain insights into the energy scale associated with the formation of ripples in a graphene sheet. © 2010 Elsevier B.V. All rights reserved.

  18. Buckled graphene: A model study based on density functional theory

    KAUST Repository

    Khan, Yasser; Mukaddam, Mohsin Ahmed; Schwingenschlögl, Udo

    2010-01-01

    We make use of ab initio calculations within density functional theory to investigate the influence of buckling on the electronic structure of single layer graphene. Our systematic study addresses a wide range of bond length and bond angle variations in order to obtain insights into the energy scale associated with the formation of ripples in a graphene sheet. © 2010 Elsevier B.V. All rights reserved.

  19. A Collective Case Study of Secondary Students' Model-Based Inquiry on Natural Selection through Programming in an Agent-Based Modeling Environment

    Science.gov (United States)

    Xiang, Lin

    2011-01-01

    This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th grade students' model-based inquiry (MBI) by examining students' agent-based programmable modeling (ABPM) processes and the learning outcomes. The context of the present study was a biology unit on…

  20. A study for production simulation model generation system based on data model at a shipyard

    Directory of Open Access Journals (Sweden)

    Myung-Gi Back

    2016-09-01

    Full Text Available Simulation technology is a type of shipbuilding product lifecycle management solution used to support production planning or decision-making. Most shipbuilding processes consist of job shop production, and their modeling and simulation require professional skills and experience in shipbuilding. For these reasons, many shipbuilding companies have difficulty adopting simulation systems, regardless of the necessity for the technology. In this paper, the data model for shipyard production simulation model generation was defined by analyzing the iterative simulation modeling procedure. The shipyard production simulation data model defined in this study contains the information necessary for the conventional simulation modeling procedure and can serve as a basis for simulation model generation. The efficacy of the developed system was validated by applying it to the simulation model generation of a panel block production line. By implementing the initial simulation model generation process, which in the past was performed by a simulation modeler, the proposed system substantially reduced the modeling time. In addition, by reducing the difficulties posed by different modeler-dependent generation methods, the proposed system makes the standardization of simulation model quality possible.

  1. Lithium-ion battery models: a comparative study and a model-based powerline communication

    Directory of Open Access Journals (Sweden)

    F. Saidani

    2017-09-01

    Full Text Available In this work, various Lithium-ion (Li-ion) battery models are evaluated according to their accuracy, complexity and physical interpretability. An initial classification into physical, empirical and abstract models is introduced. Also known as white, black and grey boxes, respectively, the nature and characteristics of these model types are compared. Since the Li-ion battery cell is a thermo-electro-chemical system, the models are either in the thermal or in the electrochemical state-space. Physical models attempt to capture key features of the physical process inside the cell. Empirical models describe the system with empirical parameters offering poor analytical insight, whereas abstract models provide an alternative representation. In addition, a model selection guideline is proposed based on applications and design requirements. A complex model with a detailed analytical insight is of use for battery designers but impractical for real-time applications and in situ diagnosis. In automotive applications, an abstract model reproducing the battery behavior in an equivalent but more practical form, mainly as an equivalent circuit diagram, is recommended for the purpose of battery management. As a general rule, a trade-off should be reached between high fidelity and computational feasibility. Especially if the model is embedded in a real-time monitoring unit such as a microprocessor or an FPGA, the calculation time and memory requirements rise dramatically with a higher number of parameters. Moreover, examples of equivalent circuit models of Lithium-ion batteries are covered. Equivalent circuit topologies are introduced and compared according to the previously introduced criteria. An experimental sequence to model a 20 Ah cell is presented and the results are used for the purposes of powerline communication.
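
    As a concrete instance of the 'abstract' equivalent-circuit class of models recommended above for battery management, here is a minimal first-order Thevenin model (series resistance plus one RC pair); the OCV curve and parameter values are invented and are not taken from the 20 Ah cell experiment.

```python
import numpy as np

# Hypothetical parameters of a first-order Thevenin equivalent circuit model.
Q_AH   = 20.0           # nominal capacity in Ah (matches the cell size mentioned)
R0     = 0.002          # ohmic series resistance [ohm]
R1, C1 = 0.004, 2000.0  # polarization resistance [ohm] and capacitance [F]

def ocv(soc):
    """Invented open-circuit-voltage curve as a function of state of charge."""
    return 3.2 + 0.9 * soc + 0.1 * np.tanh(10 * (soc - 0.1))

def simulate(current, dt=1.0, soc0=1.0):
    """Terminal voltage for a discharge current profile (positive = discharge)."""
    soc, v_rc, v_out = soc0, 0.0, []
    for i in current:
        soc -= i * dt / (Q_AH * 3600.0)            # coulomb counting
        v_rc += dt * (i / C1 - v_rc / (R1 * C1))   # RC branch dynamics
        v_out.append(ocv(soc) - v_rc - R0 * i)     # terminal voltage
    return np.array(v_out)

profile = np.concatenate([np.full(600, 20.0), np.zeros(300)])  # 1C pulse, then rest
print(simulate(profile)[[0, 599, -1]])
```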

  2. Developing Computer Model-Based Assessment of Chemical Reasoning: A Feasibility Study

    Science.gov (United States)

    Liu, Xiufeng; Waight, Noemi; Gregorius, Roberto; Smith, Erica; Park, Mihwa

    2012-01-01

    This paper reports a feasibility study on developing computer model-based assessments of chemical reasoning at the high school level. The computer models are Flash and NetLogo environments that make three domains of chemistry simultaneously available: macroscopic, submicroscopic, and symbolic. Students interact with the computer models to answer assessment…

  3. Prospects of Source-Separation-Based Sanitation Concepts: A Model-Based Study

    Directory of Open Access Journals (Sweden)

    Cees Buisman

    2013-07-01

    Full Text Available Separation of different domestic wastewater streams and targeted on-site treatment for resource recovery has been recognized as one of the most promising sanitation concepts to re-establish the balance in carbon, nutrient and water cycles. In this study a model was developed based on literature data to compare energy and water balance, nutrient recovery, chemical use, effluent quality and land area requirement in four different sanitation concepts: (1) centralized; (2) centralized with source-separation of urine; (3) source-separation of black water, kitchen refuse and grey water; and (4) source-separation of urine, feces, kitchen refuse and grey water. The highest primary energy consumption of 914 MJ/capita (cap)/year was attained within the centralized sanitation concept, and the lowest primary energy consumption of 437 MJ/cap/year was attained within source-separation of urine, feces, kitchen refuse and grey water. Grey water bio-flocculation and subsequent grey water sludge co-digestion decreased the primary energy consumption, but was not energetically favorable to couple with grey water effluent reuse. Source-separation of urine improved the energy balance, nutrient recovery and effluent quality, but required larger land area and higher chemical use in the centralized concept.

  4. A case study to estimate costs using Neural Networks and regression based models

    Directory of Open Access Journals (Sweden)

    Nadia Bhuiyan

    2012-07-01

    Full Text Available Bombardier Aerospace's high-performance aircraft and services set the standard for the aerospace industry. A case study in collaboration with Bombardier Aerospace was conducted in order to estimate the target cost of a landing gear. More precisely, the study uses both a parametric model and neural network models to estimate the cost of main landing gears, a major aircraft commodity. A comparative analysis between the parametric model and the neural network models is carried out in order to determine the most accurate method for predicting the cost of a main landing gear. Several trials are presented for the design and use of the neural network model. The analysis for the case under study shows the flexibility in the design of the neural network model. Furthermore, the performance of the neural network model is deemed superior to that of the parametric model for this case study.
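
    A toy sketch of the comparison this record describes, fitting a parametric (linear regression) model and a small neural network to fabricated cost data with scikit-learn; the features and data stand in for the proprietary landing-gear data and carry no real meaning.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
# Fabricated design features (e.g. weight, load rating, part count) and a
# nonlinear "true" cost, standing in for the real landing-gear data.
X = rng.uniform(0, 1, size=(400, 3))
y = 100 + 80 * X[:, 0] + 50 * X[:, 1] ** 2 + 30 * X[:, 0] * X[:, 2] + rng.normal(0, 5, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

param = LinearRegression().fit(X_tr, y_tr)                        # parametric model
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                  random_state=0).fit(X_tr, y_tr)                 # neural network model

for name, model in [("parametric", param), ("neural net", nn)]:
    print(name, mean_absolute_percentage_error(y_te, model.predict(X_te)))
```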

  5. Performance-Based Service Quality Model: An Empirical Study on Japanese Universities

    Science.gov (United States)

    Sultan, Parves; Wong, Ho

    2010-01-01

    Purpose: This paper aims to develop and empirically test the performance-based higher education service quality model. Design/methodology/approach: The study develops a 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using Cronbach's alpha.…

  6. Model of Values-Based Management Process in Schools: A Mixed Design Study

    Science.gov (United States)

    Dogan, Soner

    2016-01-01

    The aim of this paper is to evaluate school administrators' values-based management behaviours according to teachers' perceptions and opinions and, accordingly, to build a model of the values-based management process in schools. The study was conducted using an explanatory design that includes both quantitative and qualitative methods.…

  7. Society by Numbers : Studies on Model-Based Explanations in the Social Sciences

    OpenAIRE

    Kuorikoski, Jaakko

    2010-01-01

    The aim of this dissertation is to provide conceptual tools for the social scientist for clarifying, evaluating and comparing explanations of social phenomena based on formal mathematical models. The focus is on relatively simple theoretical models and simulations, not statistical models. These studies apply a theory of explanation according to which explanation is about tracing objective relations of dependence, knowledge of which enables answers to contrastive why and how-questions. This th...

  8. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    Science.gov (United States)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence in
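
    A minimal sketch of the idea of moving sensitivity analysis from individual parameters to whole processes: perturb the output of each process and measure the normalized response of the model output. The two-process toy model below is invented purely to show the bookkeeping, not taken from the authors' work.

```python
# Toy process-based model: two "processes" (growth and decay) feed one state
# variable; we ask how sensitive the final output is to a multiplicative
# perturbation of each process as a whole, not of its individual parameters.
def run_model(scale_growth=1.0, scale_decay=1.0, steps=100):
    biomass = 1.0
    for _ in range(steps):
        growth = scale_growth * 0.05 * biomass * (1 - biomass / 10.0)  # process 1
        decay  = scale_decay * 0.01 * biomass                          # process 2
        biomass += growth - decay
    return biomass

def process_sensitivity(process, delta=0.01):
    """Normalized sensitivity of the output to a small perturbation of one process."""
    base = run_model()
    perturbed = run_model(**{process: 1.0 + delta})
    return (perturbed - base) / (base * delta)

for p in ("scale_growth", "scale_decay"):
    print(p, round(process_sensitivity(p), 3))
```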

  9. Feasibility model study for Blumbangreksa product model based on lean startup method

    Science.gov (United States)

    Pakpahan, A. K.; Dewobroto, W. S.; Pratama, R. Y.

    2017-12-01

    Based on data from the Ministry of Maritime Affairs and Fisheries in 2015, the productivity of shrimp farmers in Indonesia is still below that of China, India and Thailand because of the low survival rate of shrimp seed planted in Indonesia. Water quality is a significant factor in increasing the survival rate of planted shrimp seed; therefore, the team at PT. Atnic Ekotekno Wicaksana created a tool called Blumbangreksa that is able to monitor the water quality of shrimp farms, measuring temperature, salinity, pH, DO (dissolved oxygen) and TDS (total dissolved solids) in the water as well as the moist air over the water surface, and that is GSM-based and connected to the Internet of Things. Based on the research results, the unique value proposition of the Blumbangreksa product is that its water quality measurements are accurate, real-time, based on the Internet of Things, and can be taken simultaneously. Based on the feasibility study using Marty Cagan's opportunity assessment, the product fulfils the ten elements of the opportunity assessment, so the Blumbangreksa product is considered feasible. The initial investment for the Blumbangreksa product is Rp 1,369,856,574, with a profitability index of 1.51, an average break-even volume of 18 products sold per year, and a payback period of 4 years and 2 months; therefore, the Blumbangreksa product business is feasible.

  10. Tourism Village Model Based on Local Indigenous: Case Study of Nongkosawit Tourism Village, Gunungpati, Semarang

    Science.gov (United States)

    Kurniasih; Nihayah, Dyah Maya; Sudibyo, Syafitri Amalia; Winda, Fajri Nur

    2018-02-01

    Nongkosawit Village has officially been a tourism village since 2012. However, the economic impact has not yet reached the community because of an inappropriate tourism village model. Therefore, this study aims to find the best model for the development of Nongkosawit Tourism Village. The research used the Analytical Hierarchy Process (AHP) method. The results show that the tourism village model best suited to the local indigenous character of Nongkosawit Tourism Village is the culture-based tourism village, with a weight of 58%. It is therefore necessary to re-orient from the nature-based village model to the culture-based village model by raising and exploring the existing culture through unique and distinctive tourism products.
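
    For readers unfamiliar with the Analytical Hierarchy Process used in this study, the sketch below shows how a priority weight such as the reported 58% can be derived from a pairwise comparison matrix; the matrix and the candidate models in it are hypothetical, not the study's actual judgments.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three candidate village models
# (culture-based, nature-based, agro-based), on Saaty's 1-9 reciprocal scale.
A = np.array([
    [1.0, 3.0, 4.0],
    [1/3, 1.0, 2.0],
    [1/4, 1/2, 1.0],
])

# Priority vector = principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Consistency check (RI = 0.58 is Saaty's random index for a 3x3 matrix).
lam_max = np.real(eigvals).max()
ci = (lam_max - len(A)) / (len(A) - 1)
print("weights:", np.round(w, 3), "consistency ratio:", round(ci / 0.58, 3))
```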

  11. An Empirical Rate Constant Based Model to Study Capacity Fading in Lithium Ion Batteries

    Directory of Open Access Journals (Sweden)

    Srivatsan Ramesh

    2015-01-01

    Full Text Available A one-dimensional model based on solvent diffusion and kinetics to study the formation of the SEI (solid electrolyte interphase) layer and its impact on the capacity of a lithium ion battery is developed. The model uses the earlier work on silicon oxidation but studies the kinetic limitations of the SEI growth process. The rate constant of the SEI formation reaction at the anode is seen to play a major role in film formation. The kinetics of the reactions for capacity fading for various battery systems are studied and the rate constants are evaluated. The model is used to fit the capacity fade in different battery systems.
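
    A compact sketch of a kinetics-plus-diffusion-limited SEI growth law of the general type this record describes; the rate law and the order-of-magnitude parameter values are illustrative assumptions, not the paper's fitted model.

```python
# Order-of-magnitude placeholder values, not the paper's fitted parameters.
K_F = 1.0e-12    # SEI formation rate constant at the anode surface [m/s]
D_S = 1.0e-22    # effective solvent diffusivity through the SEI film [m^2/s]
C_B = 4.5e3      # bulk solvent concentration [mol/m^3]
V_M = 9.5e-5     # film volume formed per mole of solvent consumed [m^3/mol]

def grow_sei(t_end=365 * 24 * 3600.0, dt=3600.0, delta0=1.0e-9):
    """Mixed kinetic/diffusion-controlled SEI growth with a quasi-steady solvent balance."""
    delta = delta0
    for _ in range(int(t_end / dt)):
        # surface concentration from the balance D_S*(C_B - c_s)/delta = K_F*c_s
        c_surf = C_B / (1.0 + K_F * delta / D_S)
        delta += K_F * c_surf * V_M * dt          # film thickening over this time step
    return delta

# Capacity fade is then commonly taken to be proportional to the lithium locked
# into the film, i.e. to the grown thickness (an assumption of this sketch).
print(f"SEI thickness after one year: {grow_sei() * 1e9:.0f} nm")
```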

  12. Experimental and Computer Modelling Studies of Metastability of Amorphous Silicon Based Solar Cells

    NARCIS (Netherlands)

    Munyeme, Geoffrey

    2003-01-01

    We present a combination of experimental and computer modelling studies of the light induced degradation in the performance of amorphous silicon based single junction solar cells. Of particular interest in this study is the degradation kinetics of different types of amorphous silicon single junction

  13. Measurement error in epidemiologic studies of air pollution based on land-use regression models.

    Science.gov (United States)

    Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino

    2013-10-15

    Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
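
    The bias mechanism described in this record can be reproduced in a stylized simulation (not the authors' code or data): simulate a true exposure, build a LUR-like prediction from a small number of monitoring sites and many candidate predictors, then regress a health outcome on the predicted exposure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_subjects, n_sites, n_candidates = 5000, 40, 20
beta_true = 1.0                                   # true health effect per unit exposure

# True exposure driven by a few geographic covariates plus unexplained variability.
Z = rng.normal(size=(n_subjects, n_candidates))
exposure = Z[:, 0] + 0.5 * Z[:, 1] + rng.normal(scale=1.0, size=n_subjects)

# LUR step: fit exposure at a small set of monitoring sites using many candidate
# predictors (mimicking variable selection with limited data), then predict everywhere.
sites = rng.choice(n_subjects, n_sites, replace=False)
lur = LinearRegression().fit(Z[sites], exposure[sites])
exposure_hat = lur.predict(Z)

# Health model: the outcome depends on the TRUE exposure, but we regress on the prediction.
outcome = beta_true * exposure + rng.normal(scale=2.0, size=n_subjects)
beta_hat = LinearRegression().fit(exposure_hat.reshape(-1, 1), outcome).coef_[0]
print(f"estimated health effect: {beta_hat:.2f} (true value {beta_true})")
```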

  14. Study on Software Quality Improvement based on Rayleigh Model and PDCA Model

    OpenAIRE

    Ning Jingfeng; Hu Ming

    2013-01-01

    As the software industry gradually matures, software quality is regarded as the life of a software enterprise. This article discusses how to improve software quality by applying the Rayleigh model and the PDCA model to software quality management. Combined with the defect removal effectiveness index, the PDCA model is used to solve the problem of setting quality management objectives when using the Rayleigh model in bidirectional quality improvement strategies of software quality management, a...

  15. Numerical Study of Wind Turbine Wake Modeling Based on a Actuator Surface Model

    DEFF Research Database (Denmark)

    Zhou, Huai-yang; Xu, Chang; Han, Xing Xing

    2017-01-01

    In the Actuator Surface Model (ALM), the turbine blades are represented by porous surfaces of velocity and pressure discontinuities to model the action of lifting surfaces on the flow. The numerical simulation is implemented on FLUENT platform combined with N-S equations. This model is improved o...

  16. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together

  17. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together, our current work

  18. Chinese Students' Goal Orientation in English Learning: A Study Based on Autonomous Inquiry Model

    Science.gov (United States)

    Zhang, Jianfeng

    2014-01-01

    Goal orientation is a theory of learning motivation which holds that learners develop their capabilities by emphasizing the acquisition of new techniques and adaptation to the environment. In this study, based on the autonomous inquiry model, the constructs of Chinese students' goal orientations in English learning are summarized according to the data…

  19. An agent-based simulation model to study accountable care organizations.

    Science.gov (United States)

    Liu, Pai; Wu, Shinyi

    2016-03-01

    Creating accountable care organizations (ACOs) has been widely discussed as a strategy to control rapidly rising healthcare costs and improve quality of care; however, building an effective ACO is a complex process involving multiple stakeholders (payers, providers, patients) with their own interests. Also, implementation of an ACO is costly in terms of time and money. Immature design could cause safety hazards. Therefore, there is a need for analytical model-based decision-support tools that can predict the outcomes of different strategies to facilitate ACO design and implementation. In this study, an agent-based simulation model was developed to study ACOs that considers payers, healthcare providers, and patients as agents under the shared saving payment model of care for congestive heart failure (CHF), one of the most expensive causes of sometimes preventable hospitalizations. The agent-based simulation model has identified the critical determinants for the payment model design that can motivate provider behavior changes to achieve maximum financial and quality outcomes of an ACO. The results show nonlinear provider behavior change patterns corresponding to changes in payment model designs. The outcomes vary by providers with different quality or financial priorities, and are most sensitive to the cost-effectiveness of CHF interventions that an ACO implements. This study demonstrates an increasingly important method to construct a healthcare system analytics model that can help inform health policy and healthcare management decisions. The study also points out that the likely success of an ACO is interdependent with payment model design, provider characteristics, and cost and effectiveness of healthcare interventions.

  20. Antecedents of employee electricity saving behavior in organizations: An empirical study based on norm activation model

    International Nuclear Information System (INIS)

    Zhang, Yixiang; Wang, Zhaohua; Zhou, Guanghui

    2013-01-01

    China is one of the major energy-consuming countries, and is under great pressure to promote energy saving and reduce domestic energy consumption. Employees constitute an important target group for energy saving. However, few research efforts have been devoted to studying what drives employee energy saving behavior in organizations. To fill this gap, drawing on the norm activation model (NAM), we built a research model to study antecedents of employee electricity saving behavior in organizations. The model was empirically tested using survey data collected from office workers in Beijing, China. Results show that personal norm positively influences employee electricity saving behavior. Organizational electricity saving climate negatively moderates the effect of personal norm on electricity saving behavior. Awareness of consequences, ascription of responsibility, and organizational electricity saving climate positively influence personal norm. Furthermore, awareness of consequences positively influences ascription of responsibility. This paper contributes to the energy saving behavior literature by building a theoretical model of employee electricity saving behavior, which is understudied in the current literature. Based on the empirical results, implications on how to promote employee electricity saving are discussed. - Highlights: • We studied employee electricity saving behavior based on the norm activation model. • The model was tested using survey data collected from office workers in China. • Personal norm positively influences employees' electricity saving behavior. • Electricity saving climate negatively moderates personal norm's effect. • This research enhances our understanding of employee electricity saving behavior.

  1. Cost Analysis of Prenatal Care Using the Activity-Based Costing Model: A Pilot Study

    Science.gov (United States)

    Gesse, Theresa; Golembeski, Susan; Potter, Jonell

    1999-01-01

    The cost of prenatal care in a private nurse-midwifery practice was examined using the activity-based costing system. Findings suggest that the activities of the nurse-midwife (the health care provider) constitute the major cost driver of this practice and that the model of care and associated, time-related activities influence the cost. This pilot study information will be used in the development of a comparative study of prenatal care, client education, and self care. PMID:22945985

  2. Cost analysis of prenatal care using the activity-based costing model: a pilot study.

    Science.gov (United States)

    Gesse, T; Golembeski, S; Potter, J

    1999-01-01

    The cost of prenatal care in a private nurse-midwifery practice was examined using the activity-based costing system. Findings suggest that the activities of the nurse-midwife (the health care provider) constitute the major cost driver of this practice and that the model of care and associated, time-related activities influence the cost. This pilot study information will be used in the development of a comparative study of prenatal care, client education, and self care.

  3. Improved workflow modelling using role activity diagram-based modelling with application to a radiology service case study.

    Science.gov (United States)

    Shukla, Nagesh; Keast, John E; Ceglarek, Darek

    2014-10-01

    The modelling of complex workflows is an important problem-solving technique within healthcare settings. However, currently most of the workflow models use a simplified flow chart of patient flow obtained using on-site observations, group-based debates and brainstorming sessions, together with historic patient data. This paper presents a systematic and semi-automatic methodology for knowledge acquisition with detailed process representation using sequential interviews of people in the key roles involved in the service delivery process. The proposed methodology allows the modelling of roles, interactions, actions, and decisions involved in the service delivery process. This approach is based on protocol generation and analysis techniques such as: (i) initial protocol generation based on qualitative interviews of radiology staff, (ii) extraction of key features of the service delivery process, (iii) discovering the relationships among the key features extracted, and, (iv) a graphical representation of the final structured model of the service delivery process. The methodology is demonstrated through a case study of a magnetic resonance (MR) scanning service-delivery process in the radiology department of a large hospital. A set of guidelines is also presented in this paper to visually analyze the resulting process model for identifying process vulnerabilities. A comparative analysis of different workflow models is also conducted. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. A study of tumour growth based on stoichiometric principles: a continuous model and its discrete analogue.

    Science.gov (United States)

    Saleem, M; Agrawal, Tanuja; Anees, Afzal

    2014-01-01

    In this paper, we consider a continuous mathematically tractable model and its discrete analogue for tumour growth. The model formulation is based on stoichiometric principles considering tumour-immune cell interactions in a potassium (K+)-limited environment. Both our continuous and discrete models illustrate 'cancer immunoediting' as a dynamic process having all three phases, namely elimination, equilibrium and escape. The stoichiometric principles introduced into the model allow us to study its dynamics with variation in the total potassium in the surroundings of the tumour region. It is found that an increase in the total potassium may help the patient fight the disease for a longer period of time. This result seems to be in line with the protective role of potassium against the risk of pancreatic cancer as reported by Bravi et al. [Dietary intake of selected micronutrients and risk of pancreatic cancer: An Italian case-control study, Ann. Oncol. 22 (2011), pp. 202-206].

  5. Material model of pelvic bone based on modal analysis: a study on the composite bone.

    Science.gov (United States)

    Henyš, Petr; Čapek, Lukáš

    2017-02-01

    Digital models based on finite element (FE) analysis are widely used in orthopaedics to predict the stress or strain in the bone due to bone-implant interaction. The usability of the model depends strongly on the bone material description. The material model that is most commonly used is based on a constant Young's modulus or on the apparent density of bone obtained from computer tomography (CT) data. The Young's modulus of bone is described in many experimental works with large variations in the results. The concept of measuring and validating the material model of the pelvic bone based on modal analysis is introduced in this pilot study. The modal frequencies, damping, and shapes of the composite bone were measured precisely by an impact hammer at 239 points. An FE model was built using the data pertaining to the geometry and apparent density obtained from the CT of the composite bone. The isotropic homogeneous Young's modulus and Poisson's ratio of the cortical and trabecular bone were estimated from the optimisation procedure including Gaussian statistical properties. The performance of the updated model was investigated through the sensitivity analysis of the natural frequencies with respect to the material parameters. The maximal error between the numerical and experimental natural frequencies of the bone reached 1.74 % in the first modal shape. Finally, the optimised parameters were matched with the data sheets of the composite bone. The maximal difference between the calibrated material properties and that obtained from the data sheet was 34 %. The optimisation scheme of the FE model based on the modal analysis data provides extremely useful calibration of the FE models with the uncertainty bounds and without the influence of the boundary conditions.

  6. Study on evaluation method for heterogeneous sedimentary rocks based on forward model

    International Nuclear Information System (INIS)

    Masui, Yasuhiro; Kawada, Koji; Katoh, Arata; Tsuji, Takashi; Suwabe, Mizue

    2004-02-01

    It is very important to estimate the facies distribution of heterogeneous sedimentary rocks for the geological disposal of high-level radioactive waste. The heterogeneity of sedimentary rocks is due to the variable distribution of grain size and mineral composition. The objective of this study is to establish an evaluation method for heterogeneous sedimentary rocks based on a forward model. The study consisted of a geological study of the Horonobe area and the development of software for a sedimentary model. The geological study comprised the following items. 1. The sedimentary system of the Koetoi and Wakkanai formations in the Horonobe area was compiled based on published papers. 2. The cores of HDB-1 were observed, mainly from a sedimentological viewpoint. 3. The facies and compaction properties of argillaceous rocks were studied based on physical logs and core analysis data of wells. 4. Structure maps, isochrone maps, isopach maps and restored geological sections were made. The software for the sedimentary model, which shows the sedimentary system on a basin scale, was developed. This software estimates the facies distribution and hydraulic conductivity of sedimentary rocks in three dimensions by numerical simulation. (author)

  7. Parametric study of a turbocompound diesel engine based on an analytical model

    International Nuclear Information System (INIS)

    Zhao, Rongchao; Zhuge, Weilin; Zhang, Yangjun; Yin, Yong; Zhao, Yanting; Chen, Zhen

    2016-01-01

    Turbocompounding is an important technique to recover waste heat from engine exhaust and reduce CO2 emissions. This paper presents a parametric study of a turbocompound diesel engine based on an analytical model. The analytical model was developed to investigate the influence of system parameters on the engine fuel consumption; it is based on thermodynamic knowledge and empirical models, and can consider the impact of each parameter independently. The effects of turbine efficiency, back pressure, exhaust temperature, pressure ratio and engine speed on the recovered energy, pumping loss and engine fuel reduction were studied. Results show that turbine efficiency, exhaust temperature and back pressure have a great influence on the fuel reduction and the optimal power turbine (PT) expansion ratio, whereas engine operating speed has little impact on the fuel savings obtained by turbocompounding. The interaction mechanism between the PT recovery power and the engine pumping loss is presented in the paper. Due to the nonlinear characteristic of turbine power, there is an optimum value of the PT expansion ratio that achieves the largest power gain. Finally, the fuel-saving potential of a high-performance turbocompound engine and the requirements for achieving it are proposed. - Highlights: • An analytical model for a turbocompound engine is developed and validated. • A parametric study is performed to obtain the lowest BSFC and the optimal expansion ratio. • The influence of each parameter on the fuel saving potential is presented. • The impact mechanisms of each parameter on the energy tradeoff are disclosed. • It provides an effective tool to guide the preliminary design of turbocompounding.
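
    The trade-off between power-turbine recovery and pumping loss described above can be illustrated with a few lines of ideal-gas arithmetic; the exhaust conditions and the simple linear back-pressure assumption below are ours, for illustration only.

```python
import numpy as np

# Illustrative exhaust conditions and component data (not the paper's engine).
M_DOT, CP, GAMMA = 0.25, 1100.0, 1.33   # exhaust mass flow [kg/s], cp [J/(kg K)], gamma
T_EXH = 850.0                           # power-turbine inlet temperature [K]
ETA_T = 0.72                            # power-turbine isentropic efficiency
V_DISP, N_ENG = 0.011, 1800 / 60        # displacement [m^3], engine speed [rev/s]
P_AMB = 101325.0                        # ambient pressure [Pa]

def net_gain(pr):
    """Recovered turbine power minus extra pumping loss at PT expansion ratio pr."""
    w_turbine = ETA_T * M_DOT * CP * T_EXH * (1.0 - pr ** ((1.0 - GAMMA) / GAMMA))
    dp_back = P_AMB * (pr - 1.0)                 # crude back-pressure rise to drive the PT
    w_pumping = dp_back * V_DISP * N_ENG / 2.0   # 4-stroke: one exhaust stroke per 2 revs
    return w_turbine - w_pumping

ratios = np.linspace(1.0, 3.5, 251)
best = ratios[np.argmax([net_gain(r) for r in ratios])]
print(f"optimal PT expansion ratio ~ {best:.2f}, net gain ~ {net_gain(best) / 1e3:.1f} kW")
```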

  8. Mobile Agent-Based Software Systems Modeling Approaches: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Aissam Belghiat

    2016-06-01

    Full Text Available Mobile agent-based applications are a special type of software system which takes advantage of mobile agents in order to provide a beneficial new paradigm for solving complex problems in several fields and areas such as network management, e-commerce, e-learning, etc. Nevertheless, we notice a lack of real applications based on this paradigm and a lack of serious evaluations of their modeling approaches. Hence, this paper provides a comparative study of the modeling approaches for mobile agent-based software systems. The objective is to give the reader an overview and a thorough understanding of the work that has been done and where the gaps in the research are.

  9. Model-based control of the resistive wall mode in DIII-D: A comparison study

    International Nuclear Information System (INIS)

    Dalessio, J.; Schuster, E.; Humphreys, D.A.; Walker, M.L.; In, Y.; Kim, J.-S.

    2009-01-01

    One of the major non-axisymmetric instabilities under study in the DIII-D tokamak is the resistive wall mode (RWM), a form of plasma kink instability whose growth rate is moderated by the influence of a resistive wall. One of the approaches for RWM stabilization, referred to as magnetic control, uses feedback control to produce magnetic fields opposing the moving field that accompanies the growth of the mode. These fields are generated by coils arranged around the tokamak. One problem with RWM control methods used in present experiments is that they predominantly use simple non-model-based proportional-derivative (PD) controllers requiring substantial derivative gain for stabilization. This implies a large response to noise and perturbations and leads to a requirement for high peak voltages and coil currents, which often results in actuator saturation and instability. Motivated by this limitation, current efforts in DIII-D include the development of model-based RWM controllers. The General Atomics (GA)/Far-Tech DIII-D RWM model represents the plasma surface as a toroidal current sheet and characterizes the wall using an eigenmode approach. Optimal and robust controllers have been designed exploiting the availability of the RWM dynamic model. The controllers are tested through simulations, and results are compared to present non-model-based PD controllers. This comparison also makes use of the μ structured singular value as a measure of robust stability and performance of the closed-loop system.

  10. Topic model-based mass spectrometric data analysis in cancer biomarker discovery studies.

    Science.gov (United States)

    Wang, Minkun; Tsai, Tsung-Heng; Di Poto, Cristina; Ferrarini, Alessia; Yu, Guoqiang; Ressom, Habtom W

    2016-08-18

    A fundamental challenge in the quantitation of biomolecules for cancer biomarker discovery arises from the heterogeneous nature of human biospecimens. Although this issue has been a subject of discussion in cancer genomic studies, it has not yet been rigorously investigated in mass spectrometry based proteomic and metabolomic studies. Purification of mass spectrometric data is highly desired prior to subsequent analysis, e.g., quantitative comparison of the abundance of biomolecules in biological samples. We investigated topic models to computationally analyze mass spectrometric data considering both integrated peak intensities and scan-level features, i.e., extracted ion chromatograms (EICs). Probabilistic generative models enable flexible representation of the data structure and infer sample-specific pure resources. Scan-level modeling helps alleviate information loss during data preprocessing. We evaluated the capability of the proposed models in capturing mixture proportions of contaminants and cancer profiles on LC-MS based serum proteomic and GC-MS based tissue metabolomic datasets acquired from patients with hepatocellular carcinoma (HCC) and liver cirrhosis, as well as synthetic data we generated based on the serum proteomic data. The results we obtained by analysis of the synthetic data demonstrated that both intensity-level and scan-level purification models can accurately infer the mixture proportions and the underlying true cancerous sources with small average error ratios (…). For the experimental data, we found more proteins and metabolites with significant changes between HCC cases and cirrhotic controls. Candidate biomarkers selected after purification yielded biologically meaningful pathway analysis results and improved disease discrimination power in terms of the area under the ROC curve compared to the results found prior to purification. We investigated topic model-based inference methods to computationally address the heterogeneity issue in samples analyzed by LC/GC-MS. We observed

  11. Experimental Study of Dowel Bar Alternatives Based on Similarity Model Test

    Directory of Open Access Journals (Sweden)

    Chichun Hu

    2017-01-01

    Full Text Available In this study, a small-scaled accelerated loading test based on similarity theory and the Accelerated Pavement Analyzer was developed to evaluate dowel bars with different materials and cross-sections. A jointed concrete specimen containing a single dowel was designed as the scaled model for the test, and each specimen was subjected to 864 thousand loading cycles. Deflections between the jointed slabs were measured with dial indicators, and strains of the dowel bars were monitored with strain gauges. The load transfer efficiency, differential deflection, and dowel-concrete bearing stress for each case were calculated from these measurements. The test results indicated that the effect of the dowel modulus on load transfer efficiency can be characterized based on the similarity model test developed in the study. Moreover, the round steel dowel was found to have similar performance to the larger FRP dowel, and the elliptical dowel can be preferentially considered in practice.
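
    A small Python sketch of how the joint performance measures named above are typically computed from dial-indicator readings. The deflection values are hypothetical, and the deflection-based LTE definition shown is the one commonly used in pavement engineering, not necessarily the exact formulation of this study.

        # Deflections of the loaded and unloaded slab edges (mm) -- hypothetical readings
        d_loaded, d_unloaded = 0.52, 0.44

        # Load transfer efficiency (deflection-based) and differential deflection
        lte = 100.0 * d_unloaded / d_loaded
        differential_deflection = d_loaded - d_unloaded

        print(f"LTE = {lte:.1f} %, differential deflection = {differential_deflection:.2f} mm")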

  12. Predicting seizure by modeling synaptic plasticity based on EEG signals - a case study of inherited epilepsy

    Science.gov (United States)

    Zhang, Honghui; Su, Jianzhong; Wang, Qingyun; Liu, Yueming; Good, Levi; Pascual, Juan M.

    2018-03-01

    This paper explores the internal dynamical mechanisms of epileptic seizures through quantitative modeling based on full-brain electroencephalogram (EEG) signals. Our goal is to provide seizure prediction and facilitate treatment for epileptic patients. Motivated by an earlier mathematical model with incorporated synaptic plasticity, we studied the nonlinear dynamics of inherited seizures through a differential equation model. First, driven by a set of clinical inherited electroencephalogram data recorded from a patient with diagnosed Glucose Transporter Deficiency, we developed a dynamic seizure model based on a system of ordinary differential equations. The model was reduced in complexity after considering and removing the redundancy of each EEG channel. Then we verified that the proposed model produces qualitatively relevant behavior which matches the basic experimental observations of inherited seizures, including the synchronization index and frequency. Meanwhile, the rationality of the connectivity structure hypothesis in the modeling process was verified. Further, by varying the threshold condition and excitation strength of synaptic plasticity, we elucidated the effect of synaptic plasticity on our seizure model. Results suggest that synaptic plasticity has a great effect on the duration of seizure activities, which supports the plausibility of therapeutic interventions for seizure control.

  13. A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Shengzhi; Ming, Bo; Huang, Qiang; Leng, Guoyong; Hou, Beibei

    2017-05-05

    It is critically meaningful to accurately predict NDVI (Normalized Difference Vegetation Index), which helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR has a poor performance in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results by combining the strengths while reducing the weaknesses of individual models; (3) the performances of all forecasting models are better in dense vegetation areas than in sparse vegetation areas.
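
    A minimal Python sketch of the entropy weight idea used to combine the individual forecasts: weights are derived here from each model's historical relative accuracy. The accuracy construction and all numbers are illustrative assumptions, not the paper's exact scheme.

        import numpy as np

        # Rows: calibration time steps; columns: MLR, ANN, SVM -- hypothetical accuracy scores in (0, 1]
        accuracy = np.array([
            [0.71, 0.88, 0.83],
            [0.65, 0.91, 0.80],
            [0.70, 0.86, 0.84],
            [0.68, 0.90, 0.82],
        ])

        # Normalize each column so its entries sum to one
        p = accuracy / accuracy.sum(axis=0)

        # Shannon entropy of each model's accuracy profile (n = number of time steps)
        n = accuracy.shape[0]
        entropy = -np.sum(p * np.log(p), axis=0) / np.log(n)

        # Lower entropy (more informative model) -> larger weight
        weights = (1.0 - entropy) / np.sum(1.0 - entropy)

        # Combine individual NDVI forecasts with the entropy weights
        forecasts = np.array([0.42, 0.47, 0.45])   # MLR, ANN, SVM predictions for one grid cell
        ndvi_combined = float(np.dot(weights, forecasts))
        print("weights:", np.round(weights, 3), "combined NDVI:", round(ndvi_combined, 3))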

  14. EFFECTS OF ECONOMIC BEHAVIOUR AND PEOPLE MIGRATION ON THE EPIDEMIOLOGY OF MALARIA: A MODEL BASED STUDY

    Directory of Open Access Journals (Sweden)

    Sajal Bhattacharya

    2006-11-01

    Full Text Available The objective of the paper is to study the socio-economic behaviour of migrant labourers in the context of the control of diseases like malaria. The paper therefore makes a model- and survey-based study in the city of Kolkata, India, to drive home the point that the low income of people, particularly of migrant workers, can be a major hurdle in a malaria control programme. The paper first looks at the economic behaviour pattern theoretically through a neo-classical optimization exercise and then tries to test the theoretical result empirically from a primary survey. The theoretical model gives the result that low-income people are likely to take less rest and discontinue medical treatment. Since migrant workers of less developed countries are usually low-income people, our model suggests that migrant workers will have incomplete treatment, and their migration even before complete recovery may contribute to the spread of the disease. We have empirically tested the model econometrically with a logit model, and derived the result that migrant workers do take less rest and discontinue treatment because of economic compulsion. Thus the data support the result of the theoretical model and reveal a behaviour pattern conducive to the spread of malaria infection. The paper derives some policy prescriptions on the basis of these studies, such as insurance support and health surveillance of the migrant population as part of an integrated malaria control programme.
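
    A hedged Python sketch of the kind of logit estimation described above: a binary "discontinued treatment" indicator is regressed on income and migrant status. The variable names and the synthetic data are illustrative only; the survey data of the study are not reproduced here.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 200

        # Synthetic survey: monthly income (in arbitrary units) and a migrant-worker dummy
        income = rng.gamma(shape=2.0, scale=3.0, size=n)
        migrant = rng.integers(0, 2, size=n)

        # Synthetic outcome: probability of discontinuing treatment falls with income
        logit_p = 1.0 - 0.4 * income + 0.8 * migrant
        p = 1.0 / (1.0 + np.exp(-logit_p))
        discontinued = rng.binomial(1, p)

        X = sm.add_constant(np.column_stack([income, migrant]))
        model = sm.Logit(discontinued, X).fit(disp=0)
        print(model.params)   # a negative income coefficient -> low income raises discontinuation odds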

  15. Study of Railway Track Irregularity Standard Deviation Time Series Based on Data Mining and Linear Model

    Directory of Open Access Journals (Sweden)

    Jia Chaolong

    2013-01-01

    Full Text Available A good track geometry state ensures the safe operation of railway passenger and freight services. Railway transportation plays an important role in Chinese economic and social development. This paper studies track irregularity standard deviation time series data and focuses on the characteristics and trend changes of the track state by applying clustering analysis. A linear recursive model and a linear-ARMA model based on wavelet decomposition and reconstruction are proposed, both of which offer support for the safe management of railway transportation.

  16. Parametric Study of Synthetic-Jet-Based Flow Control on a Vertical Tail Model

    Science.gov (United States)

    Monastero, Marianne; Lindstrom, Annika; Beyar, Michael; Amitay, Michael

    2015-11-01

    Separation control over the rudder of the vertical tail of a commercial airplane using synthetic-jet-based flow control can lead to a reduction in tail size, with an associated decrease in drag and increase in fuel savings. A parametric, experimental study was undertaken using an array of finite span synthetic jets to investigate the sensitivity of the enhanced vertical tail side force to jet parameters, such as jet spanwise spacing and jet momentum coefficient. A generic wind tunnel model was designed and fabricated to fundamentally study the effects of the jet parameters at varying rudder deflection and model sideslip angles. Wind tunnel results obtained from pressure measurements and tuft flow visualization in the Rensselaer Polytechnic Subsonic Wind Tunnel show a decrease in separation severity and increase in model performance in comparison to the baseline, non-actuated case. The sensitivity to various parameters will be presented.

  17. Model based feasibility study on bidirectional check valves in wave energy converters

    DEFF Research Database (Denmark)

    Hansen, Anders Hedegaard; Pedersen, Henrik C.; Andersen, Torben Ole

    2014-01-01

    Discrete fluid power force systems have been proposed as the primary stage for Wave Energy Converters (WECs) when converting ocean waves into electricity, in order to improve the overall efficiency of wave energy devices. This paper presents a model-based feasibility study of using bidirectional check....../Off and bidirectional check valves. Based on the analysis it is found that the energy production may be slightly improved by using bidirectional check valves as compared to on/off valves, due to a decrease in switching losses. Furthermore, a reduction in high flow peaks is realised. The downside being increased...

  18. Applying an expectancy-value model to study motivators for work-task based information seeking

    DEFF Research Database (Denmark)

    Sigaard, Karen Tølbøl; Skov, Mette

    2015-01-01

    on the theory of expectancy-value and on the operationalisation used when the model was first developed. Data for the analysis were collected from a sample of seven informants working as consultants in Danish municipalities. Each participant filled out a questionnaire, kept a log book for a week...... for interpersonal and internal sources increased when the task had high-value motivation or low-expectancy motivation or both. Research limitations/implications: The study is based on a relatively small sample and considers only one motivation theory. This should be addressed in future research along...... with a broadening of the studied group to involve professions other than municipality consultants. Originality/value: Motivational theories from the field of psychology have been used sparsely in studies of information seeking. This study operationalises and verifies such a theory based on a theoretical adaptation...

  19. A heat transfer correlation based on a surface renewal model for molten core concrete interaction study

    International Nuclear Information System (INIS)

    Tourniaire, B. . E-mail bruno.tourniaire@cea.fr

    2006-01-01

    The prediction of heat transfer between the corium pool and the concrete basemat is of particular significance in the framework of the study of PWR severe accidents. Heat transfer directly governs the ablation velocity of the concrete in case of molten core concrete interaction (MCCI) and, consequently, the time at which the reactor cavity may fail. From a restricted hydrodynamic point of view, this issue is related to heat transfer between a heated bubbling pool and a porous wall with gas injection. Several experimental studies have been performed with simulant materials and many correlations have been provided to address this issue. Comparisons of the results of these correlations with the measurements, and their extrapolation to reactor materials, show strong discrepancies between the models, which probably means that some phenomena are not well taken into account. The main purpose of this paper is to present an alternative heat transfer model which was originally developed for chemical engineering applications (bubble columns) by Deckwer. A part of this work is devoted to the presentation of this model, which is based on a surface renewal assumption. Comparisons of the results of this model with available experimental data in different systems are presented and discussed. These comparisons clearly show that this model can be used to deal with the particular problem of MCCI. The analyses also lead to an enrichment of the original model by taking into account the thermal resistance of the wall: a new formulation of Deckwer's correlation is finally proposed

  20. Method for mapping population-based case-control studies: an application using generalized additive models

    Directory of Open Access Journals (Sweden)

    Aschengrau Ann

    2006-06-01

    Full Text Available Abstract Background Mapping spatial distributions of disease occurrence and risk can serve as a useful tool for identifying exposures of public health concern. Disease registry data are often mapped by town or county of diagnosis and contain limited data on covariates. These maps often possess poor spatial resolution, the potential for spatial confounding, and the inability to consider latency. Population-based case-control studies can provide detailed information on residential history and covariates. Results Generalized additive models (GAMs provide a useful framework for mapping point-based epidemiologic data. Smoothing on location while controlling for covariates produces adjusted maps. We generate maps of odds ratios using the entire study area as a reference. We smooth using a locally weighted regression smoother (loess, a method that combines the advantages of nearest neighbor and kernel methods. We choose an optimal degree of smoothing by minimizing Akaike's Information Criterion. We use a deviance-based test to assess the overall importance of location in the model and pointwise permutation tests to locate regions of significantly increased or decreased risk. The method is illustrated with synthetic data and data from a population-based case-control study, using S-Plus and ArcView software. Conclusion Our goal is to develop practical methods for mapping population-based case-control and cohort studies. The method described here performs well for our synthetic data, reproducing important features of the data and adequately controlling the covariate. When applied to the population-based case-control data set, the method suggests spatial confounding and identifies statistically significant areas of increased and decreased odds ratios.

  1. The Application of FIA-based Data to Wildlife Habitat Modeling: A Comparative Study

    Science.gov (United States)

    Thomas C., Jr. Edwards; Gretchen G. Moisen; Tracey S. Frescino; Randall J. Schultz

    2005-01-01

    We evaluated the capability of two types of models, one based on spatially explicit variables derived from FIA data and one using so-called traditional habitat evaluation methods, for predicting the presence of cavity-nesting bird habitat in Fishlake National Forest, Utah. Both models performed equally well, in measures of predictive accuracy, with the FIA-based model...

  2. Physiologically Based Toxicokinetic Modelling as a Tool to Support Risk Assessment: Three Case Studies

    Directory of Open Access Journals (Sweden)

    Hans Mielke

    2012-01-01

    Full Text Available In this contribution we present three case studies of physiologically based toxicokinetic (PBTK) modelling in regulatory risk assessment. (1) Age-dependent lower enzyme expression in the newborn leads to bisphenol A (BPA) blood levels which are near the levels of the tolerated daily intake (TDI) at the oral exposure calculated by EFSA. (2) Dermal exposure to BPA from receipts, car park tickets, and so forth contributes to the overall exposure to BPA. However, at the present levels of dermal exposure there is no risk for the adult. (3) Dermal exposure to coumarin via cosmetic products leads to external exposures of twofold the TDI. PBTK modeling helped to identify the liver peak concentration as the metric for liver toxicity. After dermal exposure of twice the TDI, the liver peak concentration was lower than that present after oral exposure at the TDI dose. In the presented cases, PBTK modeling was useful for reaching scientifically sound regulatory decisions.

  3. Physics Based Electrolytic Capacitor Degradation Models for Prognostic Studies under Thermal Overstress

    Science.gov (United States)

    Kulkarni, Chetan S.; Celaya, Jose R.; Goebel, Kai; Biswas, Gautam

    2012-01-01

    Electrolytic capacitors are used in several applications ranging from power supplies on safety critical avionics equipment to power drivers for electro-mechanical actuators. This makes them good candidates for prognostics and health management research. Prognostics provides a way to assess remaining useful life of components or systems based on their current state of health and their anticipated future use and operational conditions. Past experiences show that capacitors tend to degrade and fail faster under high electrical and thermal stress conditions that they are often subjected to during operations. In this work, we study the effects of accelerated aging due to thermal stress on different sets of capacitors under different conditions. Our focus is on deriving first principles degradation models for thermal stress conditions. Data collected from simultaneous experiments are used to validate the desired models. Our overall goal is to derive accurate models of capacitor degradation, and use them to predict performance changes in DC-DC converters.

  4. A Collective Case Study of Secondary Students' Model-Based Inquiry on Natural Selection through Programming in an Agent-Based Modeling Environment

    Science.gov (United States)

    Xiang, Lin

    This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th grade students' model-based inquiry (MBI), by examining the students' agent-based programmable modeling (ABPM) processes and learning outcomes. The context of the present study was a biology unit on natural selection implemented in a charter school of a major California city during the spring semester of 2009. Eight 8th grade students, two boys and six girls, participated in this study. All of them were of low socioeconomic status (SES). English was a second language for all of them, but they had been identified as fluent English speakers at least a year before the study. None of them had learned either natural selection or programming before the study. The study spanned 7 weeks and was comprised of two phases. In phase one, the subject students learned natural selection in the science classroom and how to program in NetLogo, an ABPM tool, in a computer lab; in phase two, the subject students were asked to program a simulation of adaptation based on the natural selection model in NetLogo. Both qualitative and quantitative data were collected in this study. The data sources included (1) a pre and post test questionnaire, (2) student in-class worksheets, (3) programming planning sheets, (4) code-conception matching sheets, (5) student NetLogo projects, (6) videotaped programming processes, (7) final interviews, and (8) the investigator's field notes. Both qualitative and quantitative approaches were applied to analyze the gathered data. The findings suggested that students made progress on understanding adaptation phenomena and natural selection at the end of ABPM-supported MBI learning, but the progress was limited. These students still held some misconceptions in their conceptual models, such as the idea that animals need to "learn" to adapt into the environment. Besides, their models of natural selection appeared to be

  5. A real options-based CCS investment evaluation model: Case study of China's power generation sector

    International Nuclear Information System (INIS)

    Zhu, Lei; Fan, Ying

    2011-01-01

    Highlights: → This paper establishes a carbon capture and storage (CCS) investment evaluation model. → The model is based on real options theory and solved by the Least Squares Monte Carlo (LSM) method. → China is taken as a case study to evaluate the effects of regulations on CCS investment. → The findings show that the current investment risk of CCS is high, climate policy having the greatest impact on CCS development. -- Abstract: This paper establishes a carbon capture and storage (CCS) investment evaluation model based on real options theory considering uncertainties from the existing thermal power generating cost, carbon price, thermal power with CCS generating cost, and investment in CCS technology deployment. The model aims to evaluate the value of the cost saving effect and amount of CO_2 emission reduction through investing in newly-built thermal power with CCS technology to replace existing thermal power in a given period from the perspective of power generation enterprises. The model is solved by the Least Squares Monte Carlo (LSM) method. Since the model could be used as a policy analysis tool, China is taken as a case study to evaluate the effects of regulations on CCS investment through scenario analysis. The findings show that the current investment risk of CCS is high, climate policy having the greatest impact on CCS development. Thus, there is an important trade-off for policy makers between reducing greenhouse gas emissions and protecting the interests of power generation enterprises. The research presented would be useful for CCS technology evaluation and related policy-making.
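
    For readers unfamiliar with the Least Squares Monte Carlo (LSM) method mentioned above, the Python sketch below values a simple deferrable investment option on a stochastic carbon price using the Longstaff-Schwartz regression step. The price process, payoff and parameters are hypothetical stand-ins, not the paper's CCS model.

        import numpy as np

        rng = np.random.default_rng(1)
        n_paths, n_steps, dt, r = 20000, 20, 0.5, 0.04   # paths, steps, years per step, discount rate
        mu, sigma, p0 = 0.03, 0.25, 20.0                 # carbon-price GBM drift, volatility, start (EUR/t)
        invest_cost, value_per_price = 500.0, 40.0       # sunk cost and value multiplier (hypothetical)

        # Simulate geometric Brownian motion carbon-price paths
        z = rng.standard_normal((n_paths, n_steps))
        log_paths = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
        prices = p0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

        payoff = lambda p: np.maximum(value_per_price * p - invest_cost, 0.0)  # value of investing now

        cash = payoff(prices[:, -1])                      # option value if exercise is delayed to the end
        for t in range(n_steps - 1, 0, -1):
            cash *= np.exp(-r * dt)                       # discount continuation values one step back
            exercise = payoff(prices[:, t])
            itm = exercise > 0                            # regress only on in-the-money paths
            if itm.sum() > 10:
                beta = np.polyfit(prices[itm, t], cash[itm], deg=2)
                continuation = np.polyval(beta, prices[itm, t])
                do_exercise = exercise[itm] > continuation
                idx = np.where(itm)[0][do_exercise]
                cash[idx] = exercise[itm][do_exercise]
        option_value = np.exp(-r * dt) * cash.mean()
        print(f"estimated value of the option to invest: {option_value:.1f}")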

  6. Coach simplified structure modeling and optimization study based on the PBM method

    Science.gov (United States)

    Zhang, Miaoli; Ren, Jindong; Yin, Ying; Du, Jian

    2016-09-01

    For the coach industry, rapid modeling and efficient optimization methods based on simplified structures are desirable, especially for use early in the concept phase, with the capability of accurately expressing the mechanical properties of the structure and with flexible section forms. However, the present dimension-based methods cannot easily meet these requirements. To achieve these goals, the property-based modeling (PBM) beam modeling method is studied based on PBM theory, in conjunction with the characteristic of coach structures that beams are the main components. For a beam component of given length, its mechanical characteristics are primarily affected by the section properties. Four section parameters are adopted to describe the mechanical properties of a beam: the section area, the principal moments of inertia about the two principal axes, and the torsion constant of the section. Based on the equivalent stiffness strategy, expressions for the above section parameters are derived, and the PBM beam element is implemented in HyperMesh software. A case is realized using this method, in which the structure of a passenger coach is simplified. The model precision is validated by comparing the basic performance of the total structure with that of the original structure, including the bending and torsion stiffness and the first-order bending and torsional modal frequencies. Sensitivity analysis is conducted to choose design variables. The optimal Latin hypercube experiment design is adopted to sample the test points, and polynomial response surfaces are used to fit these points. To improve the bending and torsion stiffness and the first-order torsional frequency, and taking the allowable maximum stresses of the braking and left-turning conditions as constraints, the multi-objective optimization of the structure is conducted using the NSGA-II genetic algorithm on the ISIGHT platform. The result of the

  7. Study on dynamic team performance evaluation methodology based on team situation awareness model

    International Nuclear Information System (INIS)

    Kim, Suk Chul

    2005-02-01

    The purpose of this thesis is to provide a theoretical framework and an evaluation methodology for the dynamic task performance of operating teams at nuclear power plants under dynamic and tactical environments such as a radiological accident. This thesis suggests a team dynamic task performance evaluation model, the so-called team crystallization model, stemming from Endsley's situation awareness model and comprising four elements: state, information, organization, and orientation, together with its quantification methods using a system dynamics approach and a communication process model based on a receding horizon control approach. The team crystallization model is a holistic approach for evaluating team dynamic task performance in conjunction with team situation awareness, considering physical system dynamics and team behavioral dynamics for a tactical and dynamic task at a nuclear power plant. This model provides a systematic measure to evaluate time-dependent team effectiveness or performance affected by multiple agents such as plant states, communication quality in terms of transferring situation-specific information and strategies for achieving the team task goal at a given time, and organizational factors. To demonstrate the applicability of the proposed model and its quantification method, a case study was carried out using data obtained from a full-scope power plant simulator for 1,000 MWe pressurized water reactors with four on-the-job operating groups and one expert group who knows the accident sequences. Simulated results of team dynamic task performance, with reference to key plant parameter behavior, the team-specific organizational center of gravity, and the cue-and-response matrix, showed good agreement with observed values. The team crystallization model will be a useful and effective tool for evaluating team effectiveness in terms of recruiting new operating teams for new plants in a cost-benefit manner. Also, this model can be utilized as a systematic analysis tool for

  8. Study on dynamic team performance evaluation methodology based on team situation awareness model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Suk Chul

    2005-02-15

    The purpose of this thesis is to provide a theoretical framework and an evaluation methodology for the dynamic task performance of operating teams at nuclear power plants under dynamic and tactical environments such as a radiological accident. This thesis suggests a team dynamic task performance evaluation model, the so-called team crystallization model, stemming from Endsley's situation awareness model and comprising four elements: state, information, organization, and orientation, together with its quantification methods using a system dynamics approach and a communication process model based on a receding horizon control approach. The team crystallization model is a holistic approach for evaluating team dynamic task performance in conjunction with team situation awareness, considering physical system dynamics and team behavioral dynamics for a tactical and dynamic task at a nuclear power plant. This model provides a systematic measure to evaluate time-dependent team effectiveness or performance affected by multiple agents such as plant states, communication quality in terms of transferring situation-specific information and strategies for achieving the team task goal at a given time, and organizational factors. To demonstrate the applicability of the proposed model and its quantification method, a case study was carried out using data obtained from a full-scope power plant simulator for 1,000 MWe pressurized water reactors with four on-the-job operating groups and one expert group who knows the accident sequences. Simulated results of team dynamic task performance, with reference to key plant parameter behavior, the team-specific organizational center of gravity, and the cue-and-response matrix, showed good agreement with observed values. The team crystallization model will be a useful and effective tool for evaluating team effectiveness in terms of recruiting new operating teams for new plants in a cost-benefit manner. Also, this model can be utilized as a systematic analysis tool for

  9. A Numerical Study of Water Loss Rate Distributions in MDCT-based Human Airway Models

    Science.gov (United States)

    Wu, Dan; Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2015-01-01

    Both three-dimensional (3D) and one-dimensional (1D) computational fluid dynamics (CFD) methods are applied to study regional water loss in three multi-detector row computed tomography (MDCT)-based human airway models at minute ventilations of 6, 15 and 30 L/min. The overall water losses predicted by both 3D and 1D models in the entire respiratory tract agree with available experimental measurements. However, 3D and 1D models reveal different regional water loss rate distributions due to the 3D secondary flows formed at bifurcations. The secondary flows cause locally skewed temperature and humidity distributions on inspiration, acting to elevate the local water loss rate; and the secondary flow at the carina tends to distribute more cold air to the lower lobes. As a result, the 3D model predicts that the water loss rate first increases with increasing airway generation, and then decreases as the air approaches saturation, while the 1D model predicts a monotonic decrease of water loss rate with increasing airway generation. Moreover, the 3D (or 1D) model predicts relatively higher water loss rates in lower (or upper) lobes. The regional water loss rate can be related to the non-dimensional wall shear stress (τ*) by the non-dimensional mass transfer coefficient (h0*) as h0* = 1.15 (τ*)^0.272, R = 0.842. PMID:25869455

  10. Evaluation of adamantane hydroxamates as botulinum neurotoxin inhibitors: synthesis, crystallography, modeling, kinetic and cellular based studies.

    Science.gov (United States)

    Šilhár, Peter; Silvaggi, Nicholas R; Pellett, Sabine; Čapková, Kateřina; Johnson, Eric A; Allen, Karen N; Janda, Kim D

    2013-03-01

    Botulinum neurotoxins (BoNTs) are the most lethal biotoxins known to mankind and are responsible for the neuroparalytic disease botulism. Current treatments for botulinum poisoning are all protein based and thus have a limited window of treatment opportunity. Inhibition of the BoNT light chain protease (LC) has emerged as a therapeutic strategy for the treatment of botulism as it may provide an effective post-exposure remedy. Using a combination of crystallographic and modeling studies, a series of hydroxamates derived from 1-adamantylacetohydroxamic acid (3a) was prepared. From this group of compounds, an improved potency of about 17-fold was observed for two derivatives. Detailed mechanistic studies on these structures revealed a competitive inhibition model, with a K_i = 27 nM, which makes these compounds some of the most potent small molecule, non-peptidic BoNT/A LC inhibitors reported to date. Copyright © 2012 Elsevier Ltd. All rights reserved.
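
    For context on what a competitive inhibition model with K_i = 27 nM implies, the standard Michaelis-Menten rate law for a competitive inhibitor and the corresponding Cheng-Prusoff relation are given below; these are textbook relations, not equations quoted from the paper.

        v = \frac{V_{\max}\,[S]}{K_m\left(1 + [I]/K_i\right) + [S]},
        \qquad
        \mathrm{IC}_{50} = K_i\left(1 + \frac{[S]}{K_m}\right)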

  11. Aircraft operational reliability—A model-based approach and a case study

    International Nuclear Information System (INIS)

    Tiassou, Kossi; Kanoun, Karama; Kaâniche, Mohamed; Seguin, Christel; Papadopoulos, Chris

    2013-01-01

    The success of an aircraft mission is subject to the fulfillment of some operational requirements before and during each flight. As these requirements depend essentially on the aircraft system components and the mission profile, the effects of failures can be very severe if they are not anticipated. Hence, one should be able to assess the aircraft operational reliability with regard to its missions in order to be able to cope with failures. We address aircraft operational reliability modeling to support maintenance planning during the mission achievement. We develop a modeling approach, based on a meta-model that is used as a basis: (i) to structure the information needed to assess aircraft operational reliability and (ii) to build a stochastic model that can be tuned dynamically, in order to take into account the aircraft system operational state, a mission profile and the maintenance facilities available at the flight stop locations involved in the mission. The aim is to enable operational reliability assessment online. A case study, based on an aircraft subsystem, is considered for illustration using the Stochastic Activity Networks (SANs) formalism

  12. Integrated Agent-Based and Production Cost Modeling Framework for Renewable Energy Studies: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Gallo, Giulia

    2015-10-07

    The agent-based framework for renewable energy studies (ARES) is an integrated approach that adds an agent-based model of industry actors to PLEXOS and combines the strengths of the two to overcome their individual shortcomings. It can examine existing and novel wholesale electricity markets under high penetrations of renewables. ARES is demonstrated by studying how increasing levels of wind will impact the operations and the exercise of market power of generation companies that exploit an economic withholding strategy. The analysis is carried out on a test system that represents the Electric Reliability Council of Texas energy-only market in the year 2020. The results more realistically reproduce the operations of an energy market under different and increasing penetrations of wind, and ARES can be extended to address pressing issues in current and future wholesale electricity markets.

  13. Modeling and Sensitivity Study of Consensus Algorithm-Based Distributed Hierarchical Control for DC Microgrids

    DEFF Research Database (Denmark)

    Meng, Lexuan; Dragicevic, Tomislav; Roldan Perez, Javier

    2016-01-01

    Distributed control methods based on consensus algorithms have become popular in recent years for microgrid (MG) systems. These kinds of algorithms can be applied to share information in order to coordinate multiple distributed generators within a MG. However, stability analysis becomes a challenging task: since information is exchanged in discrete form over the communication network, continuous-time methods can be inaccurate for this kind of dynamic study. Therefore, this paper aims at modeling a complete DC MG using a discrete-time approach in order to perform a sensitivity analysis taking into account the effects of the consensus algorithm. To this end, a generalized modeling method is proposed and the influence of key control parameters, the communication topology and the communication speed are studied in detail. The theoretical results obtained with the proposed model are verified by comparing them with the results obtained with a detailed switching
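
    As background on the consensus algorithm itself, and not the paper's full DC microgrid model, a minimal discrete-time averaging consensus iteration over a communication graph looks like the Python sketch below; the topology, gain and initial states are illustrative.

        import numpy as np

        # Ring communication topology among 4 distributed generators (adjacency matrix)
        A = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)

        epsilon = 0.2   # consensus gain; a standard sufficient condition is epsilon < 1 / (max node degree)
        x = np.array([48.0, 52.0, 50.5, 47.5])   # e.g. local voltage estimates to be averaged (V)

        for k in range(50):
            # x_i[k+1] = x_i[k] + epsilon * sum_j a_ij * (x_j[k] - x_i[k])
            x = x + epsilon * (A @ x - A.sum(axis=1) * x)

        print("agreed value:", np.round(x, 3))   # all agents converge to the average of the initial states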

  14. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots.

    Directory of Open Access Journals (Sweden)

    Jing Zhao

    Full Text Available In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh frequency of 60 Hz. Considering operational safety, we set a classification accuracy above 90.0% as the most important requirement for the telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets under five repetitions per trial was able to achieve accuracies over 90.0%. Therefore, the four SSVEP stimuli were used to control four types of robot behavior, while the six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded a faster response to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.
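
    For reference, the information transfer rate reported for BCI systems is commonly computed with Wolpaw's formula; the Python sketch below applies it to the class counts, accuracies and response times quoted above. The exact published ITR values also depend on timing conventions (e.g., gaze-shift or rest intervals), so this is an approximation, not a reproduction of the paper's calculation.

        import math

        def wolpaw_itr(n_classes, accuracy, seconds_per_selection):
            """Wolpaw information transfer rate in bits/min."""
            p = accuracy
            bits = math.log2(n_classes)
            if 0 < p < 1:
                bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
            return bits * 60.0 / seconds_per_selection

        print("SSVEP (4 classes, 90.3%, 3.65 s):", round(wolpaw_itr(4, 0.903, 3.65), 1), "bits/min")
        print("P300  (6 classes, 91.3%, 6.60 s):", round(wolpaw_itr(6, 0.913, 6.60), 1), "bits/min")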

  15. Physiologically-Based Toxicokinetic Modeling of Zearalenone and Its Metabolites: Application to the Jersey Girl Study

    Science.gov (United States)

    Mukherjee, Dwaipayan; Royce, Steven G.; Alexander, Jocelyn A.; Buckley, Brian; Isukapalli, Sastry S.; Bandera, Elisa V.; Zarbl, Helmut; Georgopoulos, Panos G.

    2014-01-01

    Zearalenone (ZEA), a fungal mycotoxin, and its metabolite zeranol (ZAL) are known estrogen agonists in mammals, and are found as contaminants in food. Zeranol, which is more potent than ZEA and comparable in potency to estradiol, is also added as a growth additive in beef in the US and Canada. This article presents the development and application of a Physiologically-Based Toxicokinetic (PBTK) model for ZEA and ZAL and their primary metabolites, zearalenol, zearalanone, and their conjugated glucuronides, for rats and for human subjects. The PBTK modeling study explicitly simulates critical metabolic pathways in the gastrointestinal and hepatic systems. Metabolic events such as dehydrogenation and glucuronidation of the chemicals, which have direct effects on the accumulation and elimination of the toxic compounds, have been quantified. The PBTK model considers urinary and fecal excretion and biliary recirculation and compares the predicted biomarkers of blood, urinary and fecal concentrations with published in vivo measurements in rats and human subjects. Additionally, the toxicokinetic model has been coupled with a novel probabilistic dietary exposure model and applied to the Jersey Girl Study (JGS), which involved measurement of mycoestrogens as urinary biomarkers, in a cohort of young girls in New Jersey, USA. A probabilistic exposure characterization for the study population has been conducted and the predicted urinary concentrations have been compared to measurements considering inter-individual physiological and dietary variability. The in vivo measurements from the JGS fall within the high and low predicted distributions of biomarker values corresponding to dietary exposure estimates calculated by the probabilistic modeling system. The work described here is the first of its kind to present a comprehensive framework developing estimates of potential exposures to mycotoxins and linking them with biologically relevant doses and biomarker measurements
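
    To make the modelling framework concrete, a heavily simplified toxicokinetic sketch is given below: a gut-to-central two-compartment model with first-order absorption, glucuronidation and urinary elimination, integrated with SciPy. The structure and all rate constants are generic placeholders, far simpler than the published ZEA/ZAL PBTK model.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Hypothetical first-order rate constants (1/h)
        ka, k_gluc, k_urine = 1.2, 0.8, 0.35

        def toxicokinetics(t, y):
            gut, free, conjugated = y               # amounts (nmol) in gut, free in plasma, glucuronide
            d_gut = -ka * gut
            d_free = ka * gut - k_gluc * free
            d_conj = k_gluc * free - k_urine * conjugated
            return [d_gut, d_free, d_conj]

        dose = 100.0                                 # oral dose (nmol), hypothetical
        sol = solve_ivp(toxicokinetics, (0.0, 24.0), [dose, 0.0, 0.0],
                        t_eval=np.linspace(0.0, 24.0, 25))

        # Cumulative urinary biomarker excreted up to each time point (by mass balance)
        urinary = dose - sol.y.sum(axis=0)
        print("conjugate excreted in urine after 24 h: %.1f nmol" % urinary[-1])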

  16. Thermal conductivity model for powdered materials under vacuum based on experimental studies

    Directory of Open Access Journals (Sweden)

    N. Sakatani

    2017-01-01

    Full Text Available The thermal conductivity of powdered media is characteristically very low in vacuum, and is effectively dependent on many parameters of their constituent particles and packing structure. Understanding of the heat transfer mechanism within powder layers in vacuum and theoretical modeling of their thermal conductivity are of great importance for several scientific and engineering problems. In this paper, we report the results of systematic thermal conductivity measurements of powdered media of varied particle size, porosity, and temperature under vacuum using glass beads as a model material. Based on the obtained experimental data, we investigated the heat transfer mechanism in powdered media in detail, and constructed a new theoretical thermal conductivity model for the vacuum condition. This model enables an absolute thermal conductivity to be calculated for a powder with the input of a set of powder parameters including particle size, porosity, temperature, and compressional stress or gravity, and vice versa. Our model is expected to be a competent tool for several scientific and engineering fields of study related to powders, such as the thermal infrared observation of air-less planetary bodies, thermal evolution of planetesimals, and performance of thermal insulators and heat storage powders.

  17. A study of gradient strengthening based on a finite-deformation gradient crystal-plasticity model

    Science.gov (United States)

    Pouriayevali, Habib; Xu, Bai-Xiang

    2017-11-01

    A comprehensive study of a finite-deformation gradient crystal-plasticity model, derived based on Gurtin's framework (Int J Plast 24:702-725, 2008), is carried out here. This systematic investigation of the different roles of the governing components of the model demonstrates the strength of this framework in predicting a wide range of hardening behaviors as well as rate-dependent and scale-variation responses in a single crystal. The model is represented in the reference configuration for the purpose of numerical implementation and then implemented in the FEM software ABAQUS via a user-defined subroutine (UEL). Furthermore, a function of the accumulation rates of dislocations is employed and viewed as a measure of the formation of short-range interactions. Our simulation results reveal that the dissipative gradient strengthening can be identified as a source of isotropic-hardening behavior, which may represent the effect of irrecoverable work introduced by Gurtin and Ohno (J Mech Phys Solids 59:320-343, 2011). The variation of size dependency at different magnitudes of a rate-sensitivity parameter is also discussed. Moreover, the effect of a distinctive feature of the model, which accounts for the distortion of the crystal lattice in the reference configuration, is reported in this study for the first time. In addition, plastic flows in predefined slip systems and the expansion of the accumulation of GNDs are distinctly observed at varying scales and under different loading conditions.

  18. Identifiability study of the proteins degradation model, based on ADM1, using simultaneous batch experiments

    DEFF Research Database (Denmark)

    Flotats, X.; Palatsi, J.; Ahring, Birgitte Kiær

    2006-01-01

    are not inhibiting the hydrolysis process. The ADM1 model adequately expressed the consecutive steps of hydrolysis and acidogenesis, with estimated kinetic values corresponding to a fast acidogenesis and slower hydrolysis. The hydrolysis was found to be the rate limiting step of anaerobic degradation. Estimation...... of yield coefficients based on the relative initial slopes of VFA profiles obtained in a simple batch experiment produced satisfactory results. From the identification study, it was concluded that it is possible to determine univocally the related kinetic parameter values for protein degradation...... if the evolution of amino acids is measured in simultaneous batch experiments, with different initial protein and amino acids concentrations....

  19. Remaining useful life estimation based on stochastic deterioration models: A comparative study

    International Nuclear Information System (INIS)

    Le Son, Khanh; Fouladirad, Mitra; Barros, Anne; Levrat, Eric; Iung, Benoît

    2013-01-01

    Prognostics of system lifetime is a basic requirement for condition-based maintenance in many application domains where safety, reliability, and availability are considered of first importance. This paper presents a probabilistic method for prognostics applied to the 2008 PHM Conference Challenge data. A stochastic process (Wiener process) combined with a data analysis method (Principal Component Analysis) is proposed to model the deterioration of the components and to estimate the RUL in a case study. The advantages of our probabilistic approach are pointed out, and a comparison with existing results on the same data is made
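
    For background on the Wiener-process approach mentioned above, and not the exact formulation of the paper: for a linear-drift Wiener degradation model, the remaining useful life is the first passage time to a failure threshold and follows an inverse Gaussian distribution. A small Python sketch with hypothetical parameters:

        import numpy as np

        # Wiener degradation model: X(t) = x0 + mu * t + sigma * W(t)
        mu, sigma = 0.08, 0.03        # drift and diffusion (health-index units per cycle), hypothetical
        x_now, threshold = 1.4, 2.0   # current degradation level and failure threshold

        d = threshold - x_now          # remaining degradation margin

        def rul_pdf(t):
            """Inverse Gaussian first-passage-time density of the remaining useful life."""
            return d / np.sqrt(2.0 * np.pi * sigma**2 * t**3) * np.exp(-(d - mu * t) ** 2 / (2.0 * sigma**2 * t))

        t = np.linspace(0.1, 20.0, 2000)
        pdf = rul_pdf(t)
        mean_rul = d / mu                               # analytical mean of the first passage time
        mode_rul = t[int(np.argmax(pdf))]               # numerically located mode
        print(f"mean RUL = {mean_rul:.1f} cycles, mode = {mode_rul:.1f} cycles")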

  20. The implementation of discovery learning model based on lesson study to increase student's achievement in colloid

    Science.gov (United States)

    Suyanti, Retno Dwi; Purba, Deby Monika

    2017-03-01

    The objective of this research is to determine the increase in students' achievement under the discovery learning model based on lesson study. Besides that, this research was also conducted to examine the cognitive aspect. The research was done in three schools, including SMA N 3 Medan. The population is all the students of SMA N 11 Medan, taken by purposive random sampling. The research instruments are achievement test instruments that have been validated. The research data were analyzed statistically using MS Excel. The results show that the achievement of students taught with the discovery learning model based on lesson study is higher than that of students taught with the direct instructional method. This can be seen from the average gain and is also proved with a t-test: the normalized gain in the experimental class of SMA N 11 is (0.74±0.12) and in the control class (0.45±0.12); at significance level α = 0.05, Ha is accepted and Ho is rejected, where tcount > ttable in SMA N 11 (9.81 > 1.66). The improvement in the cognitive aspect across the three schools is at level C2, where SMA N 11 reaches 0.84 (high). The lesson study observation sheets from SMA N 11 show 92% of students working together, while 67% were less active in using media.
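
    A brief Python sketch of the two statistics used above, the normalized gain and the independent-samples t-test, with made-up pre/post scores in place of the study's data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Hypothetical pre/post test scores (0-100) for experimental and control classes
        pre_exp,  post_exp  = rng.normal(40, 5, 30), rng.normal(85, 5, 30)
        pre_ctrl, post_ctrl = rng.normal(40, 5, 30), rng.normal(67, 6, 30)

        def normalized_gain(pre, post):
            # Hake's normalized gain: fraction of the possible improvement actually achieved
            return (post - pre) / (100.0 - pre)

        g_exp, g_ctrl = normalized_gain(pre_exp, post_exp), normalized_gain(pre_ctrl, post_ctrl)

        t_stat, p_value = stats.ttest_ind(g_exp, g_ctrl, equal_var=False)
        print(f"mean gain exp = {g_exp.mean():.2f}, control = {g_ctrl.mean():.2f}, "
              f"t = {t_stat:.2f}, p = {p_value:.2g}")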

  1. An agent-based model to study market penetration of plug-in hybrid electric vehicles

    International Nuclear Information System (INIS)

    Eppstein, Margaret J.; Grover, David K.; Marshall, Jeffrey S.; Rizzo, Donna M.

    2011-01-01

    A spatially explicit agent-based vehicle consumer choice model is developed to explore sensitivities and nonlinear interactions between various potential influences on plug-in hybrid vehicle (PHEV) market penetration. The model accounts for spatial and social effects (including threshold effects, homophily, and conformity) and media influences. Preliminary simulations demonstrate how such a model could be used to identify nonlinear interactions among potential leverage points, inform policies affecting PHEV market penetration, and help identify future data collection necessary to model the system more accurately. We examine the sensitivity of the model to gasoline prices, to accuracy in the estimation of fuel costs, to agent willingness to adopt PHEV technology, to PHEV purchase price and rebates, to PHEV battery range, and to heuristic values related to gasoline usage. Our simulations indicate that PHEV market penetration could be enhanced significantly by providing consumers with ready estimates of the expected lifetime fuel costs associated with different vehicles (e.g., on vehicle stickers), and that increases in gasoline prices could nonlinearly magnify the impact on fleet efficiency. We also infer a potential synergy from a gasoline tax whose proceeds are used to fund research into longer-range, lower-cost PHEV batteries. - Highlights: → We model consumer agents to study potential market penetration of PHEVs. → The model accounts for spatial, social, and media effects. → We identify interactions among potential leverage points that could inform policy. → Consumer access to expected lifetime fuel costs may enhance PHEV market penetration. → Increasing PHEV battery range has synergistic effects on fleet efficiency.
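
    A toy Python sketch of the agent-based mechanism described above: agents compare an estimated fuel-cost payback and the influence of social contacts, and adopt a PHEV when the combined utility crosses a personal threshold. The rules and parameters are illustrative simplifications, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(7)
        n_agents, n_years = 500, 10

        gas_price = 4.0                                   # $/gallon (hypothetical scenario input)
        annual_fuel_saving = gas_price * 300.0            # $ saved per year by a PHEV (toy estimate)
        premium = 8000.0                                  # PHEV purchase-price premium ($)

        threshold = rng.normal(0.5, 0.2, n_agents)        # willingness-to-adopt thresholds
        neighbors = rng.integers(0, n_agents, (n_agents, 8))   # 8 random social contacts per agent
        adopted = np.zeros(n_agents, dtype=bool)

        for year in range(n_years):
            payback_score = np.clip(annual_fuel_saving * 10.0 / premium - 1.0, 0.0, 1.0)
            social_score = adopted[neighbors].mean(axis=1)      # fraction of contacts who already adopted
            utility = 0.6 * payback_score + 0.4 * social_score
            adopted |= (utility > threshold) & (rng.random(n_agents) < 0.3)  # only some shop each year
            print(f"year {year + 1}: market penetration = {adopted.mean():.1%}")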

  2. Solution processed deposition of electron transport layers on perovskite crystal surface—A modeling based study

    Energy Technology Data Exchange (ETDEWEB)

    Mortuza, S.M.; Taufique, M.F.N.; Banerjee, Soumik, E-mail: soumik.banerjee@wsu.edu

    2017-02-01

    Highlights: • The model determined the surface coverage of solution-processed film on perovskite. • Calculated surface density map provides insight into morphology of the monolayer. • Carbonyl oxygen atom of PCBM strongly attaches to the (110) surface of perovskite. • Uniform distribution of clusters on perovskite surface at lower PCBM concentration. • Deposition rate of PCBM on the surface is very high at initial stage of film growth. - Abstract: The power conversion efficiency (PCE) of planar perovskite solar cells (PSCs) has reached up to ∼20%. However, structural and chemical defects that lead to hysteresis in the perovskite-based thin film pose challenges. Recent work has shown that thin films of [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) deposited on the photo-absorption layer using solution processing techniques minimize surface pin holes and defects, thereby increasing the PCE. We developed and employed a multiscale model based on molecular dynamics (MD) and kinetic Monte Carlo (kMC) to establish a relationship between the deposition rate and the surface coverage on the perovskite surface. The MD simulations of PCBMs dispersed in chlorobenzene, sandwiched between (110) perovskite substrates, indicate that PCBMs are deposited through anchoring of the oxygen atom of the carbonyl group to the exposed lead (Pb) atoms of the (110) perovskite surface. Based on the rates of distinct deposition events calculated from MD, kMC simulations were run to determine the surface coverage at much larger time and length scales than are accessible by MD alone. Based on the model, a generic relationship is established between the deposition rate of PCBMs and the surface coverage on the perovskite crystal. The study also provides detailed insights into the morphology of the deposited film.
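
    To illustrate the kinetic Monte Carlo step of such a multiscale workflow, and not the authors' actual event catalogue, a minimal lattice kMC sketch in Python for irreversible adsorption of molecules onto surface sites is given below, using a single hypothetical per-site deposition rate taken as if it came from the MD stage.

        import numpy as np

        rng = np.random.default_rng(42)

        n_sites = 50 * 50                 # (110) surface represented as a 50 x 50 site lattice
        k_deposit = 2.0e3                 # per-site deposition rate from the MD stage (1/s), hypothetical
        occupied = np.zeros(n_sites, dtype=bool)

        t = 0.0
        history = []
        while occupied.mean() < 0.95:
            empty = np.flatnonzero(~occupied)
            total_rate = k_deposit * empty.size          # only empty sites can accept a PCBM molecule
            t += rng.exponential(1.0 / total_rate)       # Gillespie-style time increment
            occupied[rng.choice(empty)] = True           # execute one deposition event
            history.append((t, occupied.mean()))

        # Coverage grows fastest at the start and saturates as empty sites run out
        for t_snap, coverage in history[:: len(history) // 5]:
            print(f"t = {t_snap * 1e6:8.2f} us, coverage = {coverage:.2f}")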

  3. Solution processed deposition of electron transport layers on perovskite crystal surface—A modeling based study

    International Nuclear Information System (INIS)

    Mortuza, S.M.; Taufique, M.F.N.; Banerjee, Soumik

    2017-01-01

    Highlights: • The model determined the surface coverage of solution-processed film on perovskite. • Calculated surface density map provides insight into morphology of the monolayer. • Carbonyl oxygen atom of PCBM strongly attaches to the (110) surface of perovskite. • Uniform distribution of clusters on perovskite surface at lower PCBM concentration. • Deposition rate of PCBM on the surface is very high at initial stage of film growth. - Abstract: The power conversion efficiency (PCE) of planar perovskite solar cells (PSCs) has reached up to ∼20%. However, structural and chemical defects that lead to hysteresis in the perovskite-based thin film pose challenges. Recent work has shown that thin films of [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) deposited on the photo-absorption layer using solution processing techniques minimize surface pin holes and defects, thereby increasing the PCE. We developed and employed a multiscale model based on molecular dynamics (MD) and kinetic Monte Carlo (kMC) to establish a relationship between the deposition rate and the surface coverage on the perovskite surface. The MD simulations of PCBMs dispersed in chlorobenzene, sandwiched between (110) perovskite substrates, indicate that PCBMs are deposited through anchoring of the oxygen atom of the carbonyl group to the exposed lead (Pb) atoms of the (110) perovskite surface. Based on the rates of distinct deposition events calculated from MD, kMC simulations were run to determine the surface coverage at much larger time and length scales than are accessible by MD alone. Based on the model, a generic relationship is established between the deposition rate of PCBMs and the surface coverage on the perovskite crystal. The study also provides detailed insights into the morphology of the deposited film.

  4. A Comparative Study of Marketing Channel Multiagent Stackelberg Model Based on Perfect Rationality and Fairness Preference

    Directory of Open Access Journals (Sweden)

    Kaihong Wang

    2014-01-01

    Full Text Available This paper studies a channel consisting of a manufacturer and two retailers. As a basis for comparison, a multiagent Stackelberg model is first structured based on perfect rationality. Fairness preference theory is then embedded into the marketing channel multiagent Stackelberg model, and the results show that if the retailers have a jealous fairness preference, the manufacturer will reduce the wholesale price, the retailers will increase their effort level, product sales will increase, and the total channel utility and the manufacturer's utility will be Pareto improved, but the Pareto improvement of the retailers' utility is associated with the interval of the jealousy fairness preference coefficient. If the retailers have a sympathetic fairness preference, the manufacturer increases the wholesale price, the retailers reduce their effort level, and the total channel utility, the manufacturer's utility, and the retailers' utility are all less than in the no-fairness-preference case.

  5. In silico modelling and molecular dynamics simulation studies of thiazolidine based PTP1B inhibitors.

    Science.gov (United States)

    Mahapatra, Manoj Kumar; Bera, Krishnendu; Singh, Durg Vijay; Kumar, Rajnish; Kumar, Manoj

    2018-04-01

    Protein tyrosine phosphatase 1B (PTP1B) has been identified as a negative regulator of the insulin and leptin signalling pathways; hence, it can be considered a new therapeutic target of intervention for the treatment of type 2 diabetes. Inhibition of this molecular target addresses both diabetes and obesity, i.e. diabesity. In order to get more information on identification and optimization of leads, pharmacophore modelling, atom-based 3D QSAR, docking and molecular dynamics studies were carried out on a set of ligands containing the thiazolidine scaffold. A six-point pharmacophore model consisting of three hydrogen bond acceptors (A), one negative ionic feature (N) and two aromatic rings (R) with discrete geometries as pharmacophoric features was developed for a predictive 3D QSAR model. The probable binding conformation of the ligands within the active site was studied through molecular docking. The molecular interactions and the structural features responsible for PTP1B inhibition and selectivity were further supplemented by a molecular dynamics simulation study over a time scale of 30 ns. The present investigation has identified some of the indispensable structural features of thiazolidine analogues, which can be further explored to optimize PTP1B inhibitors.

  6. Study of thermal environment in Jingjintang urban agglomeration based on WRF model and Landsat data

    International Nuclear Information System (INIS)

    Huang, Q N; Cao, Z Q; Guo, H D; Xi, X H; Li, X W

    2014-01-01

    In recent decades, unprecedented urban expansion has taken place in developing countries, resulting in the emergence of megacities and urban agglomerations. Many countries are highly concerned about the variety of urban environmental issues associated with urbanization, such as greenhouse gas emissions and the urban heat island (UHI) phenomenon. Generally, the thermal environment is monitored with remote sensing satellite data, a method usually limited by weather and revisit cycle. Another approach relies on numerical simulation based on models. In this study, the two approaches are combined to study the thermal environment of the Jingjintang urban agglomeration. The high-temperature processes of the study area in 2009 and in the 1990s are simulated using WRF (the Weather Research and Forecasting Model) coupled with UCM (Urban Canopy Model) and the urban impervious surface estimated from Landsat-5 TM data with a support vector machine. Results show that the trend of the simulated 2-m air temperature is in accord with the observed air temperature, and that the simulations efficiently capture the differences in air temperature and land surface temperature caused by urbanization. The UHI effect at night is stronger than that in the day. The maximum difference of LST reaches 8–10°C for newly built-up areas at night. The method provided in this research can be used to analyze the impacts of urbanization on the urban thermal environment and also provides a means of thermal environment monitoring and prediction, which will improve the capacity to cope with extreme events

  7. Study on quantification method based on Monte Carlo sampling for multiunit probabilistic safety assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)

    2017-06-15

    In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
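
    The quantification issue can be illustrated with a minimal sketch: for a tiny fault tree with high basic-event probabilities, the rare-event (minimal-cut-set sum) approximation is compared with a direct Monte Carlo estimate. The fault tree and the probabilities below are illustrative, not the multiunit PSA model of the study.

```python
import numpy as np

# Sketch of the quantification issue: with high basic-event probabilities
# (e.g. seismic failures), the rare-event approximation over minimal cut
# sets overestimates the top-event probability, while direct Monte Carlo
# sampling of the basic events does not. The tiny fault tree and the
# probabilities below are illustrative only.
rng = np.random.default_rng(1)
p = {"A": 0.4, "B": 0.5, "C": 0.6, "D": 0.3}   # basic-event probabilities

# Minimal cut sets of the example top event TOP = (A and B) or (C and D)
cut_sets = [("A", "B"), ("C", "D")]

# Rare-event approximation: sum of cut-set probabilities
p_rare = sum(np.prod([p[e] for e in cs]) for cs in cut_sets)

# Direct Monte Carlo estimate of the top-event probability
n = 1_000_000
samples = {e: rng.random(n) < pe for e, pe in p.items()}
top = (samples["A"] & samples["B"]) | (samples["C"] & samples["D"])
p_mc = top.mean()

# Exact value via inclusion-exclusion, for comparison
p_exact = p_rare - np.prod([p[e] for e in ("A", "B", "C", "D")])

print(f"rare-event approximation:    {p_rare:.4f}")
print(f"Monte Carlo estimate:        {p_mc:.4f}")
print(f"exact (inclusion-exclusion): {p_exact:.4f}")
```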

  8. Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

    KAUST Repository

    Wang, Cheng

    2018-05-17

    Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide us an opportunity to study how the 3D structures of the chromatin are organized. Based on the data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and no robust chromatin structure alignment algorithms to evaluate their performance. In our work, we first made a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and compare the performance of these methods, we proposed a novel data simulation method, which combines population Hi-C data and single-cell Hi-C data without ad hoc parameters. Also, we designed a global and a local alignment algorithm to measure the similarity between the templates and the chromatin structures predicted by different modeling methods. Finally, the results from large-scale comparative tests indicated that our alignment algorithms significantly outperform the algorithms in the literature.

  9. A study of the spreading scheme for viral marketing based on a complex network model

    Science.gov (United States)

    Yang, Jianmei; Yao, Canzhong; Ma, Weicheng; Chen, Guanrong

    2010-02-01

    Buzzword-based viral marketing, known also as digital word-of-mouth marketing, is a marketing mode attached to some carriers on the Internet, which can rapidly copy marketing information at a low cost. Viral marketing actually uses a pre-existing social network whose scale, however, is believed to be so large and so random that its theoretical analysis is intractable and unmanageable. There are very few reports in the literature on how to design a spreading scheme for viral marketing on real social networks according to the traditional marketing theory or the relatively new network marketing theory. Complex network theory provides a new model for the study of large-scale complex systems, using the latest developments of graph theory and computing techniques. From this perspective, the present paper extends complex network theory and modeling into the research of general viral marketing and develops a specific spreading scheme for viral marketing and an approach to design the scheme based on a real complex network on the QQ instant messaging system. This approach is shown to be rather universal and can be further extended to the design of various spreading schemes for viral marketing based on different instant messaging systems.
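
    A minimal sketch of spreading on a scale-free network is given below, assuming a simple independent-cascade-style forwarding rule seeded at high-degree hubs; the network size, seeds and forwarding probability are arbitrary and are not the QQ-based scheme of the paper.

```python
import random
import networkx as nx

# Illustrative spreading sketch on a Barabasi-Albert network: a message is
# seeded at a few high-degree "hub" users and forwarded along edges with a
# fixed probability, loosely mimicking word-of-mouth diffusion on an
# instant-messaging network. Parameters are arbitrary, not from the paper.
random.seed(42)
G = nx.barabasi_albert_graph(n=5000, m=3, seed=42)

forward_prob = 0.08                       # chance a contact forwards the message
seeds = sorted(G.nodes, key=G.degree, reverse=True)[:5]   # target the hubs

informed = set(seeds)
frontier = list(seeds)
while frontier:                           # independent-cascade style spreading
    nxt = []
    for u in frontier:
        for v in G.neighbors(u):
            if v not in informed and random.random() < forward_prob:
                informed.add(v)
                nxt.append(v)
    frontier = nxt

print(f"reached {len(informed)} of {G.number_of_nodes()} users "
      f"({100 * len(informed) / G.number_of_nodes():.1f}%)")
```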

  10. FEM BASED PARAMETRIC DESIGN STUDY OF TIRE PROFILE USING DEDICATED CAD MODEL AND TRANSLATION CODE

    Directory of Open Access Journals (Sweden)

    Nikola Korunović

    2014-12-01

    Full Text Available In this paper a finite element method (FEM based parametric design study of the tire profile shape and belt width is presented. One of the main obstacles that similar studies have faced is how to change the finite element mesh after a modification of the tire geometry is performed. In order to overcome this problem, a new approach is proposed. It implies automatic update of the finite elements mesh, which follows the change of geometric design parameters on a dedicated CAD model. The mesh update is facilitated by an originally developed mapping and translation code. In this way, the performance of a large number of geometrically different tire design variations may be analyzed in a very short time. Although a pilot one, the presented study has also led to the improvement of the existing tire design.

  11. Environmental Sound Perception: Metadescription and Modeling Based on Independent Primary Studies

    Directory of Open Access Journals (Sweden)

    Stephen McAdams

    2010-01-01

    Full Text Available The aim of the study is to transpose and extend to a set of environmental sounds the notion of sound descriptors usually used for musical sounds. Four separate primary studies dealing with interior car sounds, air-conditioning units, car horns, and closing car doors are considered collectively. The corpus formed by these initial stimuli is submitted to new experimental studies and analyses, both for revealing metacategories and for defining more precisely the limits of each of the resulting categories. In a second step, the new structure is modeled: common and specific dimensions within each category are derived from the initial results and new investigations of audio features are performed. Furthermore, an automatic classifier based on two audio descriptors and a multinomial logistic regression procedure is implemented and validated with the corpus.
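
    A minimal sketch of the classification step follows, assuming two synthetic audio descriptors per sound and four metacategories; the toy data stand in for the real descriptors and corpus of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Sketch of a sound classifier in the spirit described above: each sound is
# summarised by two audio descriptors and assigned to one of several
# metacategories with multinomial logistic regression (the multiclass
# cross-entropy loss used by LogisticRegression). Synthetic descriptor
# values stand in for real measurements.
rng = np.random.default_rng(7)
centers = [(-1.0, 0.5), (0.8, -0.6), (0.2, 1.4), (1.5, 1.0)]   # 4 metacategories
n_per_class = 60

X = np.vstack([rng.normal(c, 0.4, size=(n_per_class, 2)) for c in centers])
y = np.repeat(np.arange(len(centers)), n_per_class)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

clf.fit(X, y)
new_sound = np.array([[0.7, -0.4]])        # descriptor pair for a new sound
print("predicted metacategory:", int(clf.predict(new_sound)[0]))
```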

  12. Adapting an evidence-based model to retain adolescent study participants in longitudinal research.

    Science.gov (United States)

    Davis, Erin; Demby, Hilary; Jenner, Lynne Woodward; Gregory, Alethia; Broussard, Marsha

    2016-02-01

    Maintaining contact with and collecting outcome data from adolescent study participants can present a significant challenge for researchers conducting longitudinal studies. Establishing an organized and effective protocol for participant follow-up is crucial to reduce attrition and maintain high retention rates. This paper describes our methods in using and adapting the evidence-based Engagement, Verification, Maintenance, and Confirmation (EVMC) model to follow up with adolescents 6 and 12 months after implementation of a health program. It extends previous research by focusing on two key modifications to the model: (1) the central role of cell phones and texting to maintain contact with study participants throughout the EVMC process and, (2) use of responsive two-way communication between staff and participants and flexible administration modes and methods in the confirmation phase to ensure that busy teens not only respond to contacts, but also complete data collection. These strategies have resulted in high overall retention rates (87-91%) with adolescent study participants at each follow-up data collection point without the utilization of other, more involved tracking measures. The methods and findings presented may be valuable for other researchers with limited resources planning for or engaged in collecting follow-up outcome data from adolescents enrolled in longitudinal studies. Copyright © 2015. Published by Elsevier Ltd.

  13. Study of the attractor structure of an agent-based sociological model

    Energy Technology Data Exchange (ETDEWEB)

    Timpanaro, Andre M; Prado, Carmen P C, E-mail: timpa@if.usp.br, E-mail: prado@if.usp.br [Instituto de Fisica da Universidade de Sao Paulo, Sao Paulo (Brazil)

    2011-03-01

    The Sznajd model is a sociophysics model that is based on the Potts model and used for describing opinion propagation in a society. It employs an agent-based approach and interaction rules favouring pairs of agreeing agents. It has been successfully employed in modeling some properties and scale features of both proportional and majority elections (see for instance the works of A. T. Bernardes and R. N. Costa Filho), but its stationary states are always consensus states. In order to explain more complicated behaviours, we have modified the bounded confidence idea (introduced before in other opinion models, like the Deffuant model) with the introduction of prejudices and biases (we called this modification confidence rules), and have adapted it to the discrete Sznajd model. This generalized Sznajd model is able to reproduce almost all of the previous versions of the Sznajd model by using appropriate choices of parameters. We solved the attractor structure of the resulting model in a mean-field approach and made Monte Carlo simulations on a Barabasi-Albert network. These simulations show great similarities with the mean-field results for the tested cases of 3 and 4 opinions. The dynamical systems approach that we devised allows for a deeper understanding of the potential of the Sznajd model as an opinion propagation model and can be easily extended to other models, like the voter model. Our modification of the bounded confidence rule can also be readily applied to other opinion propagation models.
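
    A minimal sketch of a Sznajd-type simulation on a Barabasi-Albert network with three discrete opinions is given below; it omits the confidence rules (prejudices and biases) introduced in the paper and uses arbitrary parameters.

```python
import random
import networkx as nx

# Minimal Sznajd-style simulation on a Barabasi-Albert network: an agreeing
# pair of neighbouring agents imposes its opinion on the pair's other
# neighbours. Three discrete opinions and no confidence rules, i.e. a
# simplified stand-in for the generalized model discussed above.
random.seed(3)
G = nx.barabasi_albert_graph(n=2000, m=4, seed=3)
opinions = {node: random.randrange(3) for node in G.nodes}
edges = list(G.edges)

for _ in range(200000):                     # elementary update steps
    u, v = random.choice(edges)
    if opinions[u] == opinions[v]:          # agreeing pair convinces neighbours
        for w in set(G.neighbors(u)) | set(G.neighbors(v)):
            opinions[w] = opinions[u]

counts = [sum(1 for o in opinions.values() if o == k) for k in range(3)]
print("final opinion counts:", counts)      # typically close to consensus
```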

  14. Case studies of extended model-based flood forecasting: prediction of dike strength and flood impacts

    Science.gov (United States)

    Stuparu, Dana; Bachmann, Daniel; Bogaard, Tom; Twigt, Daniel; Verkade, Jan; de Bruijn, Karin; de Leeuw, Annemargreet

    2017-04-01

    Flood forecasts, warnings and emergency response are important components in flood risk management. Most flood forecasting systems use models to translate weather predictions into forecasted discharges or water levels. However, this information is often not sufficient for real-time decisions. A sound understanding of the reliability of embankments and of flood dynamics is needed to react timely and reduce the negative effects of the flood. Where are the weak points in the dike system? When, how much and where will the water flow? When and where is the greatest impact expected? Model-based flood impact forecasting tries to answer these questions by adding new dimensions to existing forecasting systems, providing forecasted information about: (a) the dike strength during the event (reliability), (b) the flood extent in case of an overflow or a dike failure (flood spread) and (c) the assets at risk (impacts). This work presents three study cases in which such a set-up is applied, and special features are highlighted. Forecasting of dike strength. The first study case focusses on the forecast of dike strength in the Netherlands for the river Rhine branches Waal, Nederrijn and IJssel. A so-called reliability transformation is used to translate the predicted water levels at selected dike sections into failure probabilities during a flood event. The reliability of a dike section is defined by fragility curves - a summary of the dike strength conditional on the water level. The reliability information enhances the emergency management and inspections of embankments. Ensemble forecasting. The second study case shows the setup of a flood impact forecasting system in Dumfries, Scotland. The existing forecasting system is extended with a 2D flood spreading model in combination with the Delft-FIAT impact model. Ensemble forecasts are used to account for the uncertainty in the precipitation forecasts, which is useful to quantify the certainty of a forecasted flood event. From global
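
    The reliability transformation can be sketched as evaluating a fragility curve at forecasted water levels; the lognormal curve parameters and the ensemble of levels below are hypothetical, not the Rhine-branch values of the study.

```python
import numpy as np
from scipy.stats import norm

# Sketch of the "reliability transformation" idea: a fragility curve gives
# the conditional failure probability of a dike section as a function of
# water level, and a forecasted level is translated into a failure
# probability. The lognormal fragility parameters and forecast levels are
# hypothetical.
median_level = 6.0      # water level (m) at 50% failure probability
beta = 0.15             # lognormal standard deviation of the fragility curve

def failure_probability(water_level_m):
    """Lognormal fragility curve: P(failure | water level)."""
    return norm.cdf(np.log(np.asarray(water_level_m) / median_level) / beta)

# Ensemble of forecasted peak water levels for one dike section (m)
forecast_levels = np.array([5.4, 5.7, 5.9, 6.1, 6.3])
p_fail = failure_probability(forecast_levels)

for lvl, pf in zip(forecast_levels, p_fail):
    print(f"forecast peak {lvl:.1f} m -> failure probability {pf:.2f}")
print(f"ensemble mean failure probability: {p_fail.mean():.2f}")
```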

  15. Implementing three evidence-based program models: early lessons from the Teen Pregnancy Prevention Replication Study.

    Science.gov (United States)

    Kelsey, Meredith; Layzer, Jean

    2014-03-01

    This article describes some of the early implementation challenges faced by nine grantees participating in the Teen Pregnancy Prevention Replication Study and their response to them. The article draws on information collected as part of a comprehensive implementation study. Sources include site and program documents; program officer reports; notes from site investigation, selection and negotiation; ongoing communications with grantees as part of putting the study into place; and semi-structured interviews with program staff. The issues faced by grantees in implementing evidence-based programs designed to prevent teen pregnancy varied by program model. Grantees implementing a classroom-based curriculum faced challenges in delivering the curriculum within the constraints of school schedules and calendars (program length and size of class). Grantees implementing a culturally tailored curriculum faced a series of challenges, including implementing the intervention as part of the regular school curriculum in schools with diverse populations; low attendance when delivered as an after-school program; and resistance on the part of schools to specific curriculum content. The third set of grantees, implementing a program in clinics, faced challenges in identifying and recruiting young women into the program and in retaining young women once they were in the program. The experiences of these grantees reflect some of the complexities that should be carefully considered when choosing to replicate evidence-based programs. The Teen Pregnancy Prevention replication study will provide important context for assessing the effectiveness of some of the more widely replicated evidence-based programs. Copyright © 2014 Society for Adolescent Health and Medicine. All rights reserved.

  16. The impact of hospital-based and community based models of cerebral palsy rehabilitation: a quasi-experimental study.

    Science.gov (United States)

    Dambi, Jermaine M; Jelsma, Jennifer

    2014-12-05

    Cerebral palsy requires appropriate on-going rehabilitation intervention which should effectively meet the needs of both children and parents/care-givers. The provision of effective support is a challenge, particularly in resource constrained settings. A quasi-experimental pragmatic research design was used to compare the impact of two models of rehabilitation service delivery currently offered in Harare, Zimbabwe, an outreach-based programme and the other institution-based. Questionnaires were distributed to 46 caregivers of children with cerebral palsy at baseline and after three months. Twenty children received rehabilitation services in a community setting and 26 received services as outpatients at a central hospital. The Gross Motor Function Measurement was used to assess functional change. The burden of care was measured using the Caregiver Strain Index, satisfaction with physiotherapy was assessed using the modified Medrisk satisfaction with physiotherapy services questionnaire and compliance was measured as the proportion met of the scheduled appointments. Children receiving outreach-based treatment were significantly older than children in the institution-based group. Regression analysis revealed that, once age and level of severity were controlled for, children in the outreach-based treatment group improved their motor function 6% more than children receiving institution-based services. There were no differences detected between the groups with regard to caregiver well-being and 51% of the caregivers reported signs consistent with clinical distress/depression. Most caregivers (83%) expressed that they were overwhelmed by the caregiving role and this increased with the chronicity of care. The financial burden of caregiver was predictive of caregiver strain. Caregivers in the outreach-based group reported greater satisfaction with services and were more compliant (p design interventions to alleviate the burden. The study was a pragmatic, quasi

  17. Study on fusion potential barrier in heavy ion reactions based on the dynamical model

    International Nuclear Information System (INIS)

    Tian Junlong; Wu Xizhen; Li Zhuxia; Wang Ning; Liu Fuhu

    2004-01-01

    Based on an improved quantum molecular dynamics model, the static and dynamic potentials in the entrance channel of the synthesis of superheavy nuclei are studied. The dependence of the static potential (and driving potential) on mass asymmetry is obtained. From this study the authors find that a mass-symmetric system seems to be difficult to fuse and that the fusing system with the largest driving potential could be the optimal choice of the projectile-target combination. By comparing the static potential barrier with the dynamic one, the authors find that the latter is clearly lower than the former, and that the dynamic potential barrier depends on the entrance energy. The maximum and minimum of the dynamic potential barriers approach the diabatic (sudden approximation) and the adiabatic static potential barriers, respectively.

  18. Model-Based Data Integration and Process Standardization Techniques for Fault Management: A Feasibility Study

    Science.gov (United States)

    Haste, Deepak; Ghoshal, Sudipto; Johnson, Stephen B.; Moore, Craig

    2018-01-01

    This paper describes the theory and considerations in the application of model-based techniques to assimilate information from disjoint knowledge sources for performing NASA's Fault Management (FM)-related activities using the TEAMS® toolset. FM consists of the operational mitigation of existing and impending spacecraft failures. NASA's FM directives have both design-phase and operational-phase goals. This paper highlights recent studies by QSI and DST of the capabilities required in the TEAMS® toolset for conducting FM activities with the aim of reducing operating costs, increasing autonomy, and conforming to time schedules. These studies use and extend the analytic capabilities of QSI's TEAMS® toolset to conduct a range of FM activities within a centralized platform.

  19. An Agent-Based Model for Studying Child Maltreatment and Child Maltreatment Prevention

    Science.gov (United States)

    Hu, Xiaolin; Puddy, Richard W.

    This paper presents an agent-based model that simulates the dynamics of child maltreatment and child maltreatment prevention. The developed model follows the principles of complex systems science and explicitly models a community and its families with multi-level factors and interconnections across the social ecology. This makes it possible to experiment how different factors and prevention strategies can affect the rate of child maltreatment. We present the background of this work and give an overview of the agent-based model and show some simulation results.

  20. The Research on Informal Learning Model of College Students Based on SNS and Case Study

    Science.gov (United States)

    Lu, Peng; Cong, Xiao; Bi, Fangyan; Zhou, Dongdai

    2017-03-01

    With the rapid development of network technology, online informal learning has become the main way for college students to learn a variety of subject knowledge. Students' favour for SNS communities and the characteristics of SNS itself provide a good opportunity for the informal learning of college students. This research first analyzes related work on informal learning and SNS, and then discusses the characteristics and theoretical basis of informal learning. Next, an informal learning model for college students based on SNS is proposed, according to the supporting role of SNS in students' informal learning. Finally, according to the theoretical model and the principles proposed in this study, the informal learning community is implemented using Elgg, an open-source SNS program, and related tools. This research tries to overcome issues in current online informal learning communities, such as the lack of social realism, interactivity, and resource transfer modes, so as to provide a new way of informal learning for college students.

  1. Limiting CT radiation dose in children with craniosynostosis: phantom study using model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kaasalainen, Touko; Lampinen, Anniina [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); University of Helsinki, Department of Physics, Helsinki (Finland); Palmu, Kirsi [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); School of Science, Aalto University, Department of Biomedical Engineering and Computational Science, Helsinki (Finland); Reijonen, Vappu; Kortesniemi, Mika [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); Leikola, Junnu [University of Helsinki and Helsinki University Hospital, Department of Plastic Surgery, Helsinki (Finland); Kivisaari, Riku [University of Helsinki and Helsinki University Hospital, Department of Neurosurgery, Helsinki (Finland)

    2015-09-15

    Medical professionals need to exercise particular caution when developing CT scanning protocols for children who require multiple CT studies, such as those with craniosynostosis. To evaluate the utility of ultra-low-dose CT protocols with model-based iterative reconstruction techniques for craniosynostosis imaging. We scanned two pediatric anthropomorphic phantoms with a 64-slice CT scanner using different low-dose protocols for craniosynostosis. We measured organ doses in the head region with metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters. Numerical simulations served to estimate organ and effective doses. We objectively and subjectively evaluated the quality of images produced by adaptive statistical iterative reconstruction (ASiR) 30%, ASiR 50% and Veo (all by GE Healthcare, Waukesha, WI). Image noise and contrast were determined for different tissues. Mean organ dose with the newborn phantom was decreased up to 83% compared to the routine protocol when using ultra-low-dose scanning settings. Similarly, for the 5-year phantom the greatest radiation dose reduction was 88%. The numerical simulations supported the findings with MOSFET measurements. The image quality remained adequate with Veo reconstruction, even at the lowest dose level. Craniosynostosis CT with model-based iterative reconstruction could be performed with a 20-μSv effective dose, corresponding to the radiation exposure of plain skull radiography, without compromising required image quality. (orig.)

  2. Investigating the consistency between proxy-based reconstructions and climate models using data assimilation: a mid-Holocene case study

    NARCIS (Netherlands)

    A. Mairesse; H. Goosse; P. Mathiot; H. Wanner; S. Dubinkina (Svetlana)

    2013-01-01

    The mid-Holocene (6 kyr BP; thousand years before present) is a key period to study the consistency between model results and proxy-based reconstruction data, as it corresponds to a standard test for models and a reasonable number of proxy-based records is available. Taking advantage of

  3. Study of cosmic ray interaction model based on atmospheric muons for the neutrino flux calculation

    International Nuclear Information System (INIS)

    Sanuki, T.; Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.

    2007-01-01

    We have studied the hadronic interaction for the calculation of the atmospheric neutrino flux by summarizing the accurately measured atmospheric muon flux data and comparing with simulations. We find the atmospheric muon and neutrino fluxes respond to errors in the π-production of the hadronic interaction similarly, and compare the atmospheric muon flux calculated using the HKKM04 [M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 70, 043008 (2004)] code with experimental measurements. The μ⁺+μ⁻ data show good agreement in the 1∼30 GeV/c range, but a large disagreement above 30 GeV/c. The μ⁺/μ⁻ ratio shows sizable differences at lower and higher momenta for opposite directions. As the disagreements are considered to be due to assumptions in the hadronic interaction model, we try to improve it phenomenologically based on the quark parton model. The improved interaction model reproduces the observed muon flux data well. The calculation of the atmospheric neutrino flux will be reported in the following paper [M. Honda et al., Phys. Rev. D 75, 043006 (2007)].

  4. CFD based aerodynamic modeling to study flight dynamics of a flapping wing micro air vehicle

    Science.gov (United States)

    Rege, Alok Ashok

    The demand for small unmanned air vehicles, commonly termed micro air vehicles or MAV's, is rapidly increasing. Driven by applications ranging from civil search-and-rescue missions to military surveillance missions, there is a rising level of interest and investment in better vehicle designs, and miniaturized components are enabling many rapid advances. The need to better understand fundamental aspects of flight for small vehicles has spawned a surge in high quality research in the area of micro air vehicles. These aircraft have a set of constraints which are, in many ways, considerably different from that of traditional aircraft and are often best addressed by a multidisciplinary approach. Fast-response non-linear controls, nano-structures, integrated propulsion and lift mechanisms, highly flexible structures, and low Reynolds number aerodynamics are just a few of the important considerations which may be combined in the execution of MAV research. The main objective of this thesis is to derive a consistent nonlinear dynamic model to study the flight dynamics of micro air vehicles with a reasonably accurate representation of aerodynamic forces and moments. The research is divided into two sections. In the first section, derivation of the nonlinear dynamics of flapping wing micro air vehicles is presented. The flapping wing micro air vehicle (MAV) used in this research is modeled as a system of three rigid bodies: a body and two wings. The design is based on an insect called Drosophila melanogaster, commonly known as the fruit fly. The mass and inertial effects of the wing on the body are neglected for the present work. The nonlinear dynamics is simulated with the aerodynamic data published in the open literature. The flapping frequency is used as the control input. Simulations are run for different cases of wing positions and the chosen parameters are studied for boundedness. Results show a qualitative inconsistency in boundedness for some cases, and demand a better

  5. How Model Can Help Inquiry--A Qualitative Study of Model Based Inquiry Learning (Mobile) in Engineering Education

    Science.gov (United States)

    Gong, Yu

    2017-01-01

    This study investigates how students can use "interactive example models" in inquiry activities to develop their conceptual knowledge about an engineering phenomenon like electromagnetic fields and waves. An interactive model, for example a computational model, could be used to develop and teach principles of dynamic complex systems, and…

  6. Collaborative Model-based Systems Engineering for Cyber-Physical Systems, with a Building Automation Case Study

    DEFF Research Database (Denmark)

    Fitzgerald, John; Gamble, Carl; Payne, Richard

    2016-01-01

    We describe an approach to the model-based engineering of cyber-physical systems that permits the coupling of diverse discrete-event and continuous-time models and their simulators. A case study in the building automation domain demonstrates how such co-models and co-simulation can promote early...

  7. Technological progress and effects of (supra) regional innovation and production collaboration. An agent-based model simulation study.

    NARCIS (Netherlands)

    Vermeulen, B.; Pyka, A.; Serguieva, A.; Maringer, D.; Palade, V.; Almeida, R.J.

    2014-01-01

    We provide a novel technology development model in which economic agents search for transformations to build artifacts. Using this technology development model, we conduct an agent-based model simulation study on the effect of (supra-)regional collaboration in production and innovation on

  8. Assessing the external validity of model-based estimates of the incidence of heart attack in England: a modelling study

    Directory of Open Access Journals (Sweden)

    Peter Scarborough

    2016-11-01

    Full Text Available Abstract Background The DisMod II model is designed to estimate epidemiological parameters on diseases where measured data are incomplete and has been used to provide estimates of disease incidence for the Global Burden of Disease study. We assessed the external validity of the DisMod II model by comparing modelled estimates of the incidence of first acute myocardial infarction (AMI) in England in 2010 with estimates derived from a linked dataset of hospital records and death certificates. Methods Inputs for DisMod II were prevalence rates of ever having had an AMI taken from a population health survey, total mortality rates and AMI mortality rates taken from death certificates. By definition, remission rates were zero. We estimated first AMI incidence in an external dataset from England in 2010 using a linked dataset including all hospital admissions and death certificates since 1998. 95 % confidence intervals were derived around estimates from the external dataset and DisMod II estimates based on sampling variance and reported uncertainty in prevalence estimates, respectively. Results Estimates of the incidence rate for the whole population were higher in the DisMod II results than the external dataset (+54 % for men and +26 % for women). Age-specific results showed that the DisMod II results over-estimated incidence for all but the oldest age groups. Confidence intervals for the DisMod II and external dataset estimates did not overlap for most age groups. Conclusion By comparison with AMI incidence rates in England, DisMod II did not achieve external validity for age-specific incidence rates, but did provide global estimates of incidence that are of similar magnitude to measured estimates. The model should be used with caution when estimating age-specific incidence rates.

  9. Regionalization Study of Satellite based Hydrological Model (SHM) in Hydrologically Homogeneous River Basins of India

    Science.gov (United States)

    Kumari, Babita; Paul, Pranesh Kumar; Singh, Rajendra; Mishra, Ashok; Gupta, Praveen Kumar; Singh, Raghvendra P.

    2017-04-01

    A new semi-distributed conceptual hydrological model, namely the Satellite based Hydrological Model (SHM), has been developed under the 'PRACRITI-2' program of the Space Application Centre (SAC), Ahmedabad, for sustainable water resources management of India by using data from Indian Remote Sensing satellites. Entire India is divided into 5 km x 5 km grid cells, and properties at the center of each cell are assumed to represent the property of the cell. SHM contains five modules, namely surface water, forest, snow, groundwater and routing. Two empirical equations (SCS-CN and Hargreaves) and the water balance method are used in the surface water module; the forest module is based on calculations of water balance and subsurface dynamics. The 2-D Boussinesq equation is used for groundwater modelling and is solved with an implicit finite-difference scheme. The routing module follows a distributed routing approach, which requires the flow path and network, with travel time estimation as the key point. The aim of this study is to evaluate the performance of SHM using a regionalization technique, which also checks the usefulness of the model under data-scarce conditions or for ungauged basins. However, homogeneity analysis is a pre-requisite to regionalization. A similarity index (Φ) and hierarchical agglomerative cluster analysis are adopted to test the homogeneity in terms of physical attributes of three basins, namely Brahmani (39,033 km^2), Baitarani (10,982 km^2) and Kangsabati (9,660 km^2), with respect to the Subarnarekha (29,196 km^2) basin. The results of both homogeneity analyses show that the Brahmani basin is the most homogeneous with respect to the Subarnarekha river basin in terms of physical characteristics (land use land cover classes, soil type and elevation). The calibration and validation of the model parameters of the Brahmani basin is in progress; these parameters are to be transferred into the SHM set-up of the Subarnarekha basin and the results are to be compared with the results of the calibrated and validated
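
    The two empirical relations named for the surface-water module can be sketched as stand-alone functions; the curve number, temperatures and radiation value below are illustrative inputs, not data from the study basins.

```python
import math

# Sketch of the two empirical relations named for the surface-water module:
# the SCS curve-number runoff equation and the Hargreaves reference
# evapotranspiration formula. Input values are illustrative for one grid
# cell and one day; they are not taken from the study.

def scs_cn_runoff(p_mm: float, cn: float) -> float:
    """Direct runoff (mm) from rainfall p_mm using the SCS-CN method."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def hargreaves_et0(t_mean: float, t_max: float, t_min: float, ra: float) -> float:
    """Reference evapotranspiration (mm/day); ra is extraterrestrial
    radiation expressed as an evaporation equivalent (mm/day)."""
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

print(f"runoff for 60 mm rain, CN = 75: {scs_cn_runoff(60.0, 75.0):.1f} mm")
print(f"ET0 for 28/35/22 degC, Ra = 16 mm/day: "
      f"{hargreaves_et0(28.0, 35.0, 22.0, 16.0):.1f} mm/day")
```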

  10. Study on Triopoly Dynamic Game Model Based on Different Demand Forecast Methods in the Market

    Directory of Open Access Journals (Sweden)

    Junhai Ma

    2017-01-01

    Full Text Available The impact of inaccurate demand beliefs on the dynamics of a triopoly game is studied. We suppose that all the players make their own estimations of possible demand with errors. A dynamic triopoly game with such demand beliefs is set up. Based on this model, the existence and local stability region of the equilibria are investigated by means of 3D stability regions of the Nash equilibrium point. The complex dynamics, such as bifurcation scenarios and the route to chaos, are displayed in 2D bifurcation diagrams, in which e1 and α are negatively related to each other. Basins of attraction are investigated, and we found that the attraction domain becomes smaller with an increase in the price modification speed, which indicates that all the players' output must be kept within a certain range so as to keep the system stable. A feedback control method is used to keep the system at an equilibrium state.
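
    As an illustration of how equilibrium stability can be probed by iterating an output-adjustment map, the sketch below uses a generic bounded-rationality triopoly with linear inverse demand; it is not the demand-forecasting model of the paper, and all parameters are invented.

```python
import numpy as np

# Generic bounded-rationality triopoly sketch: each firm adjusts its output
# in the direction of its marginal profit at speed alpha. The linear inverse
# demand, costs and adjustment rule are illustrative stand-ins (not the
# demand-forecasting rules of the paper); the sketch only shows how the
# stability of the equilibrium can be probed by iterating the map.
a, b, c = 10.0, 1.0, 2.0          # inverse demand p = a - b*Q, marginal cost c
q_nash = (a - c) / (4.0 * b)      # symmetric Nash output for three firms

def iterate(alpha, q0=(0.5, 0.6, 0.7), n_iter=3000):
    q = np.array(q0, dtype=float)
    for _ in range(n_iter):
        grad = a - c - b * (q.sum() + q)     # marginal profit of each firm
        q = q * (1.0 + alpha * grad)
        if not np.all(np.isfinite(q)) or np.any(np.abs(q) > 1e6):
            return None                      # orbit diverged
    return q

for alpha in (0.10, 0.20, 0.40):
    q_final = iterate(alpha)
    if q_final is None:
        print(f"alpha = {alpha:.2f}: orbit diverges (equilibrium unstable)")
    else:
        converged = np.allclose(q_final, q_nash, atol=1e-4)
        label = "converges to Nash output" if converged else "bounded but not settled"
        print(f"alpha = {alpha:.2f}: outputs {np.round(q_final, 4)} ({label})")
```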

  11. Study on inventory control model based on the B2C mode in big data environment

    Directory of Open Access Journals (Sweden)

    Zhiping Zhang

    2017-03-01

    Full Text Available The inventory problem has become a key issue for enterprise survival and development. In this paper, we take "Taobao" as an example to conduct a detailed, data-mining-based study of inventory under high conversion rates. First, by using a funnel model to predict the conversion of the commodities on the critical path, we capture the factors influencing consumer decision-making at each key point and propose corresponding solutions for improving the conversion rate. Second, we use a BP neural network algorithm to predict the goods traffic, obtaining the corresponding weights through relation analysis and the goods traffic output from the input of a large sample of goods data. Third, we predict the inventory in accordance with the commodity conversion rate and traffic prediction, and amend the predicted results to obtain an accurate and real-time inventory forecast, avoiding the economic loss caused by inaccurate inventory.

  12. Selecting representative climate models for climate change impact studies : An advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.|info:eu-repo/dai/nl/290472113

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change

  13. Selecting representative climate models for climate change impact studies: an advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; Maat, ter Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change

  14. Experimental study on unsteady open channel flow and bedload transport based on a physical model

    Science.gov (United States)

    Cao, W.

    2015-12-01

    Flow in a natural river is usually unsteady, while nearly all theories about bedload transport are based on steady, uniform flow and an assumed equilibrium state of sediment transport. This may be one of the main reasons why bedload transport formulas are notoriously inaccurate in predicting bedload. The aim of this research is to shed light on the effect of unsteadiness on bedload transport based on experimental studies. The novelty of this study is that the experiments were carried out not in a conventional flume but in a physical model, which is more similar to an actual river. On the other hand, in our experiments, multiple consecutive flood waves were reproduced in the physical model, and all the flow and sediment parameters are based on a large amount of data obtained from many identical flood waves. This method allows us to get more data for one flood and efficiently avoids the uncertainty of the bedload rate estimated from only one single flood wave, due to the stochastic fluctuation of the bedload transport. Three different flood waves were selected in the experiments. During each run of the experiment, the water levels at five different positions along the model were measured by ultrasonic water level gauges, and the flow velocity at the middle of the channel was measured by a two-dimensional electromagnetic current meter. Moreover, the bedload transport rate was measured by a unique automatic trap collecting and weighing system at the end of the physical model. The results show that the celerity of flood wave propagation varies for different flow conditions. The velocity distribution approximately accorded with the log-law profile during the entire rising and falling limbs of the flood. The bedload transport rate showed intense fluctuations in all the experiments; moreover, for different flood waves, the moment when the shear stress reaches its maximum value is not the exact moment when the sediment transport rate reaches its maximum value, which indicates

  15. The evolution of network-based business models illustrated through the case study of an entrepreneurship project

    DEFF Research Database (Denmark)

    Lund, Morten; Nielsen, Christian

    2014-01-01

    Purpose: Existing frameworks for understanding and analyzing the value configuration and structuring of partnerships in relation to such network-based business models are found to be inferior. The purpose of this paper is therefore to broaden our understanding of how business models may change over ... -based business model that generates additional value for the core business model and for both the partners and the customers. Research limitations/implications: The results should be taken with caution as they are based on the case study of a single network-based business model. Practical implications: Managers ... can gain insight into barriers and enablers relating to different types of loose organisations and how to best manage such relationships and interactions. Originality/value: This study adds value to the existing literature by reflecting the dynamics created in the interactions between a business model ...

  16. Automatic CT-based finite element model generation for temperature-based death time estimation: feasibility study and sensitivity analysis.

    Science.gov (United States)

    Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita

    2017-05-01

    Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle, all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented. The following steps are demonstrated on a CT phantom: (1) computed tomography (CT) scan; (2) segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation in a geometric computer-aided design (CAD) model; (3) conversion of the segmentation result into a finite element (FE) simulation model; (4) computation of the model cooling curve (MOD); and (5) calculation of the cooling time (CTE). For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantoms under controlled conditions. In this context, the method is validated using a CT phantom. Some of the phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.
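
    For contrast with the finite element workflow, the sketch below shows the simple phenomenological alternative mentioned in the opening sentence: a single-exponential Newton-cooling law inverted for the time since death. The cooling constant and temperatures are illustrative, not forensic reference values.

```python
import math

# Minimal sketch of the *simple phenomenological* alternative mentioned
# above (not the finite-element workflow of the study): single-exponential
# Newton cooling inverted for the time since death. The cooling constant k
# and the temperatures are illustrative values, not forensic reference data.
T_ambient = 18.0      # ambient temperature (degC)
T_initial = 37.2      # assumed body core temperature at death (degC)
k = 0.07              # hypothetical cooling constant (1/h)

def cooling_time_estimate(T_measured: float) -> float:
    """Invert T(t) = T_amb + (T_0 - T_amb) * exp(-k t) for t (hours)."""
    ratio = (T_measured - T_ambient) / (T_initial - T_ambient)
    return -math.log(ratio) / k

for T_measured in (32.0, 28.0, 24.0):
    t = cooling_time_estimate(T_measured)
    print(f"core temperature {T_measured:.1f} degC -> about {t:.1f} h since death")
```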

  17. Copula based prediction models: an application to an aortic regurgitation study

    Directory of Open Access Journals (Sweden)

    Shoukri Mohamed M

    2007-06-01

    Full Text Available Abstract Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = -0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to the normal distribution. Therefore predictions made from the correlation-based model corresponding to the pre-operative ejection fraction measurements in the lower range may not be accurate. Further it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula based prediction model is estimated as: Post-operative ejection fraction = -0.0933 + 0
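
    A minimal sketch of the copula idea follows, assuming a Clayton (Archimedean) copula with gamma marginals; the dependence parameter and the gamma shapes/scales are illustrative, not the estimates reported in the study.

```python
import numpy as np
from scipy import stats

# Sketch of the copula idea described above: dependence between pre- and
# post-operative ejection fractions is modelled with an Archimedean
# (Clayton) copula and gamma marginals. All parameter values below are
# illustrative, not the estimates of the study.
rng = np.random.default_rng(11)
theta = 2.0                              # Clayton dependence parameter
n = 2000

# Conditional-inversion sampling of the Clayton copula on [0,1]^2
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = (u ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)

# Transform the uniforms through gamma marginals (hypothetical shapes/scales)
pre_ef = stats.gamma.ppf(u, a=30.0, scale=0.02)    # pre-operative EF
post_ef = stats.gamma.ppf(v, a=35.0, scale=0.02)   # post-operative EF

def predict_post(pre_value, a_pre=30.0, s_pre=0.02, a_post=35.0, s_post=0.02):
    """Copula-based prediction: conditional median of post-EF given pre-EF."""
    u0 = stats.gamma.cdf(pre_value, a=a_pre, scale=s_pre)
    # solve C(v | u0) = 0.5 for the Clayton copula
    v_med = (u0 ** (-theta) * (0.5 ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return stats.gamma.ppf(v_med, a=a_post, scale=s_post)

tau, _ = stats.kendalltau(pre_ef, post_ef)
print("Kendall's tau of simulated pairs:", round(tau, 2))
print("predicted post-op EF for pre-op EF = 0.55:", round(predict_post(0.55), 3))
```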

  18. Getting water right: A case study in water yield modelling based on precipitation data.

    Science.gov (United States)

    Pessacg, Natalia; Flaherty, Silvia; Brandizi, Laura; Solman, Silvina; Pascual, Miguel

    2015-12-15

    Water yield is a key ecosystem service in river basins and especially in dry regions around the World. In this study we carry out a modelling analysis of water yields in the Chubut River basin, located in one of the driest districts of Patagonia, Argentina. We focus on the uncertainty around precipitation data, a driver of paramount importance for water yield. The objectives of this study are to: i) explore the spatial and numeric differences among six widely used global precipitation datasets for this region, ii) test them against data from independent ground stations, and iii) explore the effects of precipitation data uncertainty on simulations of water yield. The simulations were performed using the ecosystem services model InVEST (Integrated Valuation of Ecosystem Services and Tradeoffs) with each of the six different precipitation datasets as input. Our results show marked differences among datasets for the Chubut watershed region, both in the magnitude of precipitations and their spatial arrangement. Five of the precipitation databases overestimate the precipitation over the basin by 50% or more, particularly over the more humid western range. Meanwhile, the remaining dataset (Tropical Rainfall Measuring Mission - TRMM), based on satellite measurements, adjusts well to the observed rainfall in different stations throughout the watershed and provides a better representation of the precipitation gradient characteristic of the rain shadow of the Andes. The observed differences among datasets in the representation of the rainfall gradient translate into large differences in water yield simulations. Errors in precipitation of +30% (-30%) amplify to water yield errors ranging from 50 to 150% (-45 to -60%) in some sub-basins. These results highlight the importance of assessing uncertainties in main input data when quantifying and mapping ecosystem services with biophysical models and cautions about the undisputed use of global environmental datasets. Copyright
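
    The amplification of precipitation errors into water-yield errors can be sketched with a generic Budyko-type (Fuh) formulation of annual evapotranspiration; the PET, omega and precipitation values below are illustrative and are not the InVEST configuration used in the study.

```python
# Sketch of how precipitation errors amplify into water-yield errors, using
# a generic Budyko-type (Fuh) formulation of annual actual
# evapotranspiration. The PET value and the omega parameter are
# illustrative; this is not the InVEST configuration used in the study.

def water_yield(p_mm: float, pet_mm: float, omega: float = 2.5) -> float:
    """Annual water yield (mm) = P - AET, with AET from Fuh's Budyko curve."""
    phi = pet_mm / p_mm                                  # aridity index
    aet = p_mm * (1.0 + phi - (1.0 + phi ** omega) ** (1.0 / omega))
    return p_mm - aet

p_true, pet = 300.0, 900.0          # dry-basin example (mm/yr)
y_true = water_yield(p_true, pet)

for rel_err in (-0.3, 0.3, 0.5):
    p_biased = p_true * (1.0 + rel_err)
    y_biased = water_yield(p_biased, pet)
    amplification = (y_biased - y_true) / y_true
    print(f"precipitation error {rel_err:+.0%} -> water-yield error {amplification:+.0%}")
```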

  19. Citrate synthase proteins in extremophilic organisms: Studies within a structure-based model

    International Nuclear Information System (INIS)

    Różycki, Bartosz; Cieplak, Marek

    2014-01-01

    We study four citrate synthase homodimeric proteins within a structure-based coarse-grained model. Two of these proteins come from thermophilic bacteria, one from a cryophilic bacterium and one from a mesophilic organism; three are in the closed and two in the open conformations. Even though the proteins belong to the same fold, the model distinguishes the properties of these proteins in a way which is consistent with experiments. For instance, the thermophilic proteins are more stable thermodynamically than their mesophilic and cryophilic homologues, which we observe both in the magnitude of thermal fluctuations near the native state and in the kinetics of thermal unfolding. The level of stability correlates with the average coordination number for amino acid contacts and with the degree of structural compactness. The pattern of positional fluctuations along the sequence in the closed conformation is different than in the open conformation, including within the active site. The modes of correlated and anticorrelated movements of pairs of amino acids forming the active site are very different in the open and closed conformations. Taken together, our results show that the precise location of amino acid contacts in the native structure appears to be a critical element in explaining the similarities and differences in the thermodynamic properties, local flexibility, and collective motions of the different forms of the enzyme

  20. Study on solitary word based on HMM model and Baum-Welch algorithm

    Directory of Open Access Journals (Sweden)

    Junxia CHEN

    Full Text Available This paper introduces the principle of the Hidden Markov Model (HMM), which is used to describe a Markov process with unknown parameters and is a probability model describing the statistical properties of a random process. On this basis, a solitary word detection experiment based on the HMM is designed. By optimizing the experimental model, the Baum-Welch algorithm is used to solve the training problem of the HMM and to estimate the model parameters λ, which are, from this mathematical viewpoint, equivalent to other linear prediction coefficients. The experiment reduces unnecessary HMM training and, at the same time, reduces the algorithm complexity. In order to test the effectiveness of the Baum-Welch algorithm, simulations of the experimental data were carried out, and the results show that the algorithm is effective.
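
    A minimal sketch of isolated-word scoring with discrete HMMs follows: a scaled forward algorithm computes log P(observations | λ) for each word model, and the highest-scoring model wins. The two tiny hand-written models are illustrative; in practice their parameters would be estimated with Baum-Welch.

```python
import numpy as np

# Sketch of isolated-word scoring with discrete HMMs: each word has its own
# HMM (trained, e.g., with Baum-Welch), and an observation sequence is
# assigned to the word whose model gives the highest forward-algorithm
# likelihood. The two tiny hand-written models below are illustrative only.

def forward_log_likelihood(obs, pi, A, B):
    """log P(obs | lambda) for a discrete HMM lambda = (pi, A, B)."""
    alpha = pi * B[:, obs[0]]                # initialisation
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]        # induction step
        c = alpha.sum()                      # scaling to avoid underflow
        log_lik += np.log(c)
        alpha /= c
    return log_lik

# Two hypothetical word models over a 3-symbol observation alphabet
word_models = {
    "yes": (np.array([1.0, 0.0]),
            np.array([[0.7, 0.3], [0.0, 1.0]]),
            np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])),
    "no":  (np.array([1.0, 0.0]),
            np.array([[0.6, 0.4], [0.0, 1.0]]),
            np.array([[0.1, 0.8, 0.1], [0.2, 0.6, 0.2]])),
}

test_sequence = [0, 0, 2, 2, 2]              # quantised feature symbols
scores = {w: forward_log_likelihood(test_sequence, *m) for w, m in word_models.items()}
print(scores)
print("recognised word:", max(scores, key=scores.get))
```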

  1. Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies (Final Report)

    Science.gov (United States)

    EPA announced the availability of the final report, Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies. This report summarizes some of the recent progress in characterizing uncertainty and variability in physi...

  2. Experimental and modeling study on charge storage/transfer mechanism of graphene-based supercapacitors

    Science.gov (United States)

    Ban, Shuai; Jing, Xie; Zhou, Hongjun; Zhang, Lei; Zhang, Jiujun

    2014-12-01

    A symmetrical graphene-based supercapacitor is constructed for studying the charge-transfer mechanism within the graphene-based electrodes using both experimental measurements and molecular simulation. The in-house synthesized graphene is characterized by XRD, SEM and BET measurements for morphology and surface area. It is observed that the electric capacity of the graphene electrode can be reduced by both high internal resistance and limited mass transfer. Computer modeling is conducted at the molecular level to characterize the diffusion behavior of electrolyte ions into the interior of the electrode, with emphasis on the unique 2D confinement imposed by graphene layers. Although the graphene powder possesses a moderate internal surface of 400 m² g⁻¹, the capacitance performance of the graphene electrode can be as good as that of commercial activated carbon, which has an overwhelming surface area of 1700 m² g⁻¹. An explanation for this abnormal correlation is that the graphene material has an intrinsic capability of adaptively reorganizing its microporous structure in response to the intercalation of ions and the immersion of the electrolyte solvent. The accessible surface of graphene is believed to be dramatically enlarged for ion adsorption during the charging process of the capacitor.

  3. Feasibility Study on a Microwave-Based Sensor for Measuring Hydration Level Using Human Skin Models.

    Science.gov (United States)

    Brendtke, Rico; Wiehl, Michael; Groeber, Florian; Schwarz, Thomas; Walles, Heike; Hansmann, Jan

    2016-01-01

    Tissue dehydration results in three major types of exsiccosis--hyper-, hypo-, or isonatraemia. All three types entail alterations of salt concentrations leading to impaired biochemical processes, and can finally cause severe morbidity. The aim of our study was to demonstrate the feasibility of a microwave-based sensor technology for the non-invasive measurement of the hydration status. Electromagnetic waves at high frequencies interact with molecules, especially water. Hence, if a sample contains free water molecules, this can be detected in a reflected microwave signal. To develop the sensor system, human three-dimensional skin equivalents were instituted as a standardized test platform mimicking reproducible exsiccosis scenarios. Therefore, skin equivalents with a specific hydration and density of matrix components were generated and microwave measurements were performed. Hydration-specific spectra allowed deriving the hydration state of the skin models. A further advantage of the skin equivalents was the characterization of the impact of distinct skin components on the measured signals to investigate mechanisms of signal generation. The results demonstrate the feasibility of a non-invasive microwave-based hydration sensor technology. The sensor bears potential to be integrated in a wearable medical device for personal health monitoring.

  4. Feasibility Study on a Microwave-Based Sensor for Measuring Hydration Level Using Human Skin Models.

    Directory of Open Access Journals (Sweden)

    Rico Brendtke

    Full Text Available Tissue dehydration results in three major types of exsiccosis--hyper-, hypo-, or isonatraemia. All three types entail alterations of salt concentrations leading to impaired biochemical processes, and can finally cause severe morbidity. The aim of our study was to demonstrate the feasibility of a microwave-based sensor technology for the non-invasive measurement of the hydration status. Electromagnetic waves at high frequencies interact with molecules, especially water. Hence, if a sample contains free water molecules, this can be detected in a reflected microwave signal. To develop the sensor system, human three-dimensional skin equivalents were instituted as a standardized test platform mimicking reproducible exsiccosis scenarios. Therefore, skin equivalents with a specific hydration and density of matrix components were generated and microwave measurements were performed. Hydration-specific spectra allowed deriving the hydration state of the skin models. A further advantage of the skin equivalents was the characterization of the impact of distinct skin components on the measured signals to investigate mechanisms of signal generation. The results demonstrate the feasibility of a non-invasive microwave-based hydration sensor technology. The sensor bears potential to be integrated in a wearable medical device for personal health monitoring.

  5. Testing the Community-Based Learning Collaborative (CBLC) implementation model: a study protocol.

    Science.gov (United States)

    Hanson, Rochelle F; Schoenwald, Sonja; Saunders, Benjamin E; Chapman, Jason; Palinkas, Lawrence A; Moreland, Angela D; Dopp, Alex

    2016-01-01

    High rates of youth exposure to violence, either through direct victimization or witnessing, result in significant health/mental health consequences and high associated lifetime costs. Evidence-based treatments (EBTs), such as Trauma-Focused Cognitive Behavioral Therapy (TF-CBT), can prevent and/or reduce these negative effects, yet these treatments are not standard practice for therapists working with children identified by child welfare or mental health systems as needing services. While research indicates that collaboration among child welfare and mental health services sectors improves the availability and sustainment of EBTs for children, few implementation strategies designed specifically to promote and sustain inter-professional collaboration (IC) and inter-organizational relationships (IOR) have undergone empirical investigation. A potential candidate for evaluation is the Community-Based Learning Collaborative (CBLC) implementation model, an adaptation of the Learning Collaborative, which includes strategies designed to develop and strengthen inter-professional relationships between brokers and providers of mental health services to promote IC and IOR and achieve sustained implementation of EBTs for children within a community. This non-experimental, mixed methods study involves two phases: (1) analysis of existing prospective quantitative and qualitative quality improvement and project evaluation data collected pre and post, weekly, and monthly from 998 participants in one of seven CBLCs conducted as part of a statewide initiative; and (2) collection of new quantitative and qualitative (key informant interview) data during the funded study period to evaluate changes in relations among IC, IOR, social networks and the penetration and sustainment of TF-CBT in targeted communities. Recruitment for Phase 2 is from the pool of 998 CBLC participants to achieve a targeted enrollment of n = 150. Study aims include: (1) Use existing quality improvement

  6. Cerebellum as a forward but not inverse model in visuomotor adaptation task: a tDCS-based and modeling study.

    Science.gov (United States)

    Yavari, Fatemeh; Mahdavi, Shirin; Towhidkhah, Farzad; Ahmadi-Pajouh, Mohammad-Ali; Ekhtiari, Hamed; Darainy, Mohammad

    2016-04-01

    Despite several pieces of evidence suggesting that the human brain employs internal models for motor control and learning, the location of these models in the brain is not yet clear. In this study, we used transcranial direct current stimulation (tDCS) to manipulate right cerebellar function while subjects adapted to a visuomotor task. We investigated the effect of this manipulation on the internal forward and inverse models by measuring two kinds of behavior: generalization of training in one direction to neighboring directions (as a proxy for inverse models) and localization of the hand position after movement without visual feedback (as a proxy for the forward model). The experimental results showed no effect of cerebellar tDCS on generalization, but a significant effect on localization. These observations support the idea that the cerebellum is a possible brain region for internal forward, but not inverse, model formation. We also used a realistic human head model to calculate the current density distribution in the brain; the result confirmed the passage of current through the cerebellum. Moreover, to further explain some of the observed experimental results, we modeled the visuomotor adaptation process with the help of a biologically inspired method known as population coding, incorporating the effect of tDCS in the model. The results of this modeling study closely match our experimental data and provide further evidence in line with the idea that tDCS manipulates the forward model's function in the cerebellum.

  7. Uses of Agent-Based Modeling for Health Communication: the TELL ME Case Study.

    Science.gov (United States)

    Barbrook-Johnson, Peter; Badham, Jennifer; Gilbert, Nigel

    2017-08-01

    Government communication is an important management tool during a public health crisis, but understanding its impact is difficult. Strategies may be adjusted in reaction to developments on the ground and it is challenging to evaluate the impact of communication separately from other crisis management activities. Agent-based modeling is a well-established research tool in social science to respond to similar challenges. However, there have been few such models in public health. We use the example of the TELL ME agent-based model to consider ways in which a non-predictive policy model can assist policy makers. This model concerns individuals' protective behaviors in response to an epidemic, and the communication that influences such behavior. Drawing on findings from stakeholder workshops and the results of the model itself, we suggest such a model can be useful: (i) as a teaching tool, (ii) to test theory, and (iii) to inform data collection. We also plot a path for development of similar models that could assist with communication planning for epidemics.

  8. A Three-Pulse Release Tablet for Amoxicillin: Preparation, Pharmacokinetic Study and Physiologically Based Pharmacokinetic Modeling.

    Science.gov (United States)

    Li, Jin; Chai, Hongyu; Li, Yang; Chai, Xuyu; Zhao, Yan; Zhao, Yunfan; Tao, Tao; Xiang, Xiaoqiang

    2016-01-01

    Amoxicillin is a commonly used antibiotic with a short half-life in humans. Frequent administration of amoxicillin is therefore often required to keep the plasma drug level within an effective range. The short dosing interval can also cause side effects and drug resistance, and impair therapeutic efficacy and patient compliance. A three-pulse release tablet of amoxicillin is therefore desirable to generate sustained release in vivo and thus avoid the above-mentioned disadvantages. The pulsatile release tablet consists of three pulsatile components: one immediate-release granule and two delayed-release pellets, all containing amoxicillin. Its preparation mainly involves wet granulation, extrusion/spheronization, pellet coating, mixing, tablet compression and film coating. A Box-Behnken design, scanning electron microscopy and in vitro drug release tests were used to help optimize the formulations. A crossover pharmacokinetic study was performed to compare the pharmacokinetic profile of our in-house pulsatile tablet with that of a commercial immediate-release tablet. The pharmacokinetic profile of this pulse formulation was simulated by a physiologically based pharmacokinetic (PBPK) model with the help of Simcyp®. Single-factor experiments identified four important formulation factors, namely, the coating weight of Eudragit L30 D-55 (X1), the coating weight of AQOAT AS-HF (X2), the extrusion screen aperture (X3) and the compression force (X4). The interrelations of the four factors were uncovered by a Box-Behnken design to help determine the optimal formulation. The immediate-release granule and the two delayed-release pellets, together with other excipients, namely, Avicel PH 102, colloidal silicon dioxide, polyplasdone and magnesium stearate, were mixed and compressed into tablets, which were subsequently coated with Opadry® film to produce pulsatile tablet of

  9. A study on model fidelity for model predictive control-based obstacle avoidance in high-speed autonomous ground vehicles

    Science.gov (United States)

    Liu, Jiechao; Jayakumar, Paramsothy; Stein, Jeffrey L.; Ersal, Tulga

    2016-11-01

    This paper investigates the level of model fidelity needed for a model predictive control (MPC)-based obstacle avoidance algorithm to avoid obstacles safely and quickly even when the vehicle is close to its dynamic limits. The context of this work is large autonomous ground vehicles that manoeuvre at high speed within unknown, unstructured, flat environments and have significant vehicle dynamics-related constraints. Five representations of the vehicle dynamics are considered: four variations of the two degrees-of-freedom (DoF) representation as lower-fidelity models and a fourteen-DoF representation with a combined-slip Magic Formula tyre model as a higher-fidelity model. It is concluded that the two-DoF representation that accounts for tyre nonlinearities and longitudinal load transfer is necessary for the MPC-based obstacle avoidance algorithm to operate the vehicle at its limits within an environment that includes large obstacles. For less challenging environments, however, the two-DoF representation with a linear tyre model and constant axle loads is sufficient.
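
    To make the lowest-fidelity case concrete, the sketch below implements a linear two-DoF (lateral/yaw) bicycle model with a linear tyre model and constant axle loads in Python. The parameter values, steering input and Euler integration step are illustrative assumptions, not the vehicle data or solver used in the paper.

      import numpy as np

      def bicycle_2dof_step(v_y, r, delta, u, dt,
                            m=2000.0, Iz=4000.0, a=1.4, b=1.6,
                            Cf=8.0e4, Cr=9.0e4):
          """One Euler step of a linear 2-DoF (lateral velocity / yaw rate) bicycle model.
          v_y: lateral velocity [m/s], r: yaw rate [rad/s], delta: front steer [rad],
          u: constant forward speed [m/s]; all vehicle parameters are placeholders."""
          alpha_f = (v_y + a * r) / u - delta      # front tyre slip angle (linear-tyre assumption)
          alpha_r = (v_y - b * r) / u              # rear tyre slip angle
          Fyf = -Cf * alpha_f                      # front lateral tyre force
          Fyr = -Cr * alpha_r                      # rear lateral tyre force
          v_y_dot = (Fyf + Fyr) / m - u * r        # lateral dynamics
          r_dot = (a * Fyf - b * Fyr) / Iz         # yaw dynamics
          return v_y + dt * v_y_dot, r + dt * r_dot

      # Example: hold a 2-degree steer at 20 m/s for 3 s
      v_y, r = 0.0, 0.0
      for _ in range(300):
          v_y, r = bicycle_2dof_step(v_y, r, np.radians(2.0), u=20.0, dt=0.01)
      print(f"lateral velocity {v_y:.3f} m/s, yaw rate {r:.3f} rad/s")

    Adding tyre-force saturation and longitudinal load transfer to this skeleton yields the kind of enhanced two-DoF variant that the paper finds necessary near the handling limits.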

  10. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    OpenAIRE

    He, Wei; Wang, Yueke; Xing, Kefei; Yang, Jianwei

    2016-01-01

    Aiming to estimate the single event effect (SEE) soft error performance of complex electronic systems, a soft error propagation model based on a cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main paramet...

  11. Study on the Cooperative E-commerce Model between Enterprises based on the Value Chain

    Institute of Scientific and Technical Information of China (English)

    XU Jun[1,2; LIU Xiaoxing[1

    2015-01-01

    Real e-commerce between enterprises is based on the internal departments of an enterprise and on the cooperative interaction between the enterprise and its partners. In this paper, on the basis of value chain theory, 11 cooperative e-commerce models between enterprises are classified according to the activities on which the enterprises cooperate, and each cooperative e-commerce model is then discussed. In practice, cooperative e-commerce between enterprises can be a combination of one or more of these models.

  12. Biologically based neural circuit modelling for the study of fear learning and extinction

    Science.gov (United States)

    Nair, Satish S.; Paré, Denis; Vicentic, Aleksandra

    2016-11-01

    The neuronal systems that promote protective defensive behaviours have been studied extensively using Pavlovian conditioning. In this paradigm, an initially neutral conditioned stimulus is paired with an aversive unconditioned stimulus, leading the subjects to display behavioural signs of fear. Decades of research into the neural bases of this simple behavioural paradigm have revealed that the amygdala, a complex structure composed of several interconnected nuclei, is an essential part of the neural circuits required for the acquisition, consolidation and expression of fear memory. However, emerging evidence from the confluence of electrophysiological, tract tracing, imaging, molecular, optogenetic and chemogenetic methodologies reveals that fear learning is mediated by multiple connections between several amygdala nuclei and their distributed targets, dynamic changes in plasticity in local circuit elements, and neuromodulatory mechanisms that promote synaptic plasticity. To uncover these complex relations and analyse the multi-modal data sets acquired from these studies, we argue that biologically realistic computational modelling, in conjunction with experiments, offers an opportunity to advance our understanding of the neural circuit mechanisms of fear learning and to address how their dysfunction may lead to maladaptive fear responses in mental disorders.

  13. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems

    Science.gov (United States)

    Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.

    2013-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887

  14. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems.

    Science.gov (United States)

    Chylek, Lily A; Harris, Leonard A; Tung, Chang-Shung; Faeder, James R; Lopez, Carlos F; Hlavacek, William S

    2014-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and posttranslational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). © 2013 Wiley Periodicals, Inc.

  15. On Feature Relevance in Image-Based Prediction Models: An Empirical Study

    DEFF Research Database (Denmark)

    Konukoglu, E.; Ganz, Melanie; Van Leemput, Koen

    2013-01-01

    Determining disease-related variations of the anatomy and function is an important step in better understanding diseases and developing early diagnostic systems. In particular, image-based multivariate prediction models and the “relevant features” they produce are attracting attention from the co...

  16. Development of an Adolescent Alcohol Misuse Intervention Based on the Prototype Willingness Model: A Delphi Study

    Science.gov (United States)

    Davies, Emma; Martin, Jilly; Foxcroft, David

    2016-01-01

    Purpose: The purpose of this paper is to report on the use of the Delphi method to gain expert feedback on the identification of behaviour change techniques (BCTs) and development of a novel intervention to reduce adolescent alcohol misuse, based on the Prototype Willingness Model (PWM) of health risk behaviour. Design/methodology/approach: Four…

  17. Neuromyelitis optica study model based on chronic infusion of autoantibodies in rat cerebrospinal fluid.

    Science.gov (United States)

    Marignier, R; Ruiz, A; Cavagna, S; Nicole, A; Watrin, C; Touret, M; Parrot, S; Malleret, G; Peyron, C; Benetollo, C; Auvergnon, N; Vukusic, S; Giraudon, P

    2016-05-18

    Devic's neuromyelitis optica (NMO) is an autoimmune astrocytopathy associated with central nervous system inflammation, demyelination, and neuronal injury. Several studies have confirmed that autoantibodies directed against aquaporin-4 (AQP4-IgG) are relevant in the pathogenesis of NMO, mainly through complement-dependent toxicity leading to astrocyte death. However, the effect of the autoantibody per se and the exact role of intrathecal AQP4-IgG are still controversial. To explore the intrinsic effect of intrathecal AQP4-IgG, independent of additional inflammatory effector mechanisms, and to evaluate its clinical impact, we developed a new animal model based on prolonged infusion of purified immunoglobulins from an NMO patient (IgG(AQP4+), NMO-rat) or from a healthy individual as a control (Control-rat) into the cerebrospinal fluid (CSF) of live rats. We showed that CSF infusion of purified immunoglobulins led to diffusion into the brain, spinal cord, and optic nerves, the structures targeted in NMO. This was associated with astrocyte alteration in NMO-rats, characterized by loss of aquaporin-4 expression in the spinal cord and the optic nerves compared to the Control-rats (p = 0.001 and p = 0.02, respectively). In addition, glutamate uptake tested in awake rats was dramatically reduced in NMO-rats (p = 0.001), suggesting that astrocytopathy occurred in response to AQP4-IgG diffusion. In parallel, myelin was altered, as shown by the decrease of myelin basic protein staining by up to 46 and 22 % in the gray and white matter of the NMO-rats' spinal cord, respectively (p = 0.03). Loss of neurofilament-positive axons in NMO-rats (p = 0.003) revealed alteration of axonal integrity. We then investigated the clinical consequences of these alterations on the motor behavior of the NMO-rats. In a rotarod test, the performance of NMO-rats was lower than that of controls (p = 0.0182). AQP4 expression, and myelin and axonal integrity were preserved in AQP4-Ig

  18. Study on a Threat-Countermeasure Model Based on International Standard Information

    Directory of Open Access Journals (Sweden)

    Guillermo Horacio Ramirez Caceres

    2008-12-01

    Full Text Available Many international standards exist in the field of IT security. This research is based on the ISO/IEC 15408, 15446, 19791, 13335 and 17799 standards. In this paper, we propose a knowledge base comprising a threat countermeasure model based on international standards for identifying and specifying threats which affect IT environments. In addition, the proposed knowledge base system aims at fusing similar security control policies and objectives in order to create effective security guidelines for specific IT environments. As a result, a knowledge base of security objectives was developed on the basis of the relationships inside the standards as well as the relationships between different standards. In addition, a web application was developed which displays details about the most common threats to information systems, and for each threat presents a set of related security control policies from different international standards, including ISO/IEC 27002.

  19. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible

  20. Using the Activity-based Anorexia Rodent Model to Study the Neurobiological Basis of Anorexia Nervosa.

    Science.gov (United States)

    Chowdhury, Tara Gunkali; Chen, Yi-Wen; Aoki, Chiye

    2015-10-22

    Anorexia nervosa (AN) is a psychiatric illness characterized by excessively restricted caloric intake and abnormally high levels of physical activity. A challenging illness to treat, due to the lack of understanding of the underlying neurobiology, AN has the highest mortality rate among psychiatric illnesses. To address this need, neuroscientists are using an animal model to study how neural circuits may contribute toward vulnerability to AN and may be affected by AN. Activity-based anorexia (ABA) is a bio-behavioral phenomenon described in rodents that models the key symptoms of anorexia nervosa. When rodents with free access to voluntary exercise on a running wheel experience food restriction, they become hyperactive - running more than animals with free access to food. Here, we describe the procedures by which ABA is induced in adolescent female C57BL/6 mice. On postnatal day 36 (P36), the animal is housed with access to voluntary exercise on a running wheel. After 4 days of acclimation to the running wheel, on P40, all food is removed from the cage. For the next 3 days, food is returned to the cage (allowing animals free food access) for 2 hr daily. After the fourth day of food restriction, free access to food is returned and the running wheel is removed from the cage to allow the animals to recover. Continuous multi-day analysis of running wheel activity shows that mice become hyperactive within 24 hr following the onset of food restriction. The mice run even during the limited time during which they have access to food. Additionally, the circadian pattern of wheel running becomes disrupted by the experience of food restriction. We have been able to correlate neurobiological changes with various aspects of the animals' wheel running behavior to implicate particular brain regions and neurochemical changes with resilience and vulnerability to food-restriction induced hyperactivity.

  1. Peristomal Skin Complications Are Common, Expensive, and Difficult to Manage: A Population Based Cost Modeling Study

    Science.gov (United States)

    Meisner, Søren; Lehur, Paul-Antoine; Moran, Brendan; Martins, Lina; Jemec, Gregor Borut Ernst

    2012-01-01

    Background: Peristomal skin complications (PSCs) are the most common post-operative complications following creation of a stoma. Living with a stoma is a challenge, not only for the patient and their carers, but also for society as a whole. Due to methodological problems of PSC assessment, the associated health-economic burden of medium- to long-term complications has been poorly described. Aim: The aim of the present study was to create a model to estimate treatment costs of PSCs using the standardized assessment Ostomy Skin Tool as a reference. The resultant model was applied to a real-life global data set of stoma patients (n = 3017) to determine the prevalence and financial burden of PSCs. Methods: Eleven experienced stoma care nurses were interviewed to gain a global understanding of a treatment algorithm that formed the basis of the cost analysis. The estimated costs were based on a seven-week treatment period. PSC costs were estimated for five underlying diagnostic categories and three levels of severity. The estimated treatment costs of severe cases of PSCs were 2–5 fold higher than those of mild cases for the different diagnostic categories of PSCs. French unit costs were applied to the global data set. Results: The estimated total average cost for a seven-week treatment period (including appliances and accessories) was 263€ for those with PSCs (n = 1742) compared to 215€ for those without PSCs (n = 1172). A covariance analysis showed that leakage level had a significant impact on PSC cost from ‘rarely/never’ to ‘always/often’ (p<0.00001) and from ‘rarely/never’ to ‘sometimes’ (p = 0.0115). Conclusion: PSCs are common and troublesome and the consequences are substantial, both for the patient and from a health-economic viewpoint. PSCs should be diagnosed and treated at an early stage to prevent long-term, debilitating and expensive complications. PMID:22679479

  2. Peristomal skin complications are common, expensive, and difficult to manage: a population based cost modeling study.

    Directory of Open Access Journals (Sweden)

    Søren Meisner

    Full Text Available BACKGROUND: Peristomal skin complications (PSCs) are the most common post-operative complications following creation of a stoma. Living with a stoma is a challenge, not only for the patient and their carers, but also for society as a whole. Due to methodological problems of PSC assessment, the associated health-economic burden of medium- to long-term complications has been poorly described. AIM: The aim of the present study was to create a model to estimate treatment costs of PSCs using the standardized assessment Ostomy Skin Tool as a reference. The resultant model was applied to a real-life global data set of stoma patients (n = 3017) to determine the prevalence and financial burden of PSCs. METHODS: Eleven experienced stoma care nurses were interviewed to gain a global understanding of a treatment algorithm that formed the basis of the cost analysis. The estimated costs were based on a seven-week treatment period. PSC costs were estimated for five underlying diagnostic categories and three levels of severity. The estimated treatment costs of severe cases of PSCs were 2-5 fold higher than those of mild cases for the different diagnostic categories of PSCs. French unit costs were applied to the global data set. RESULTS: The estimated total average cost for a seven-week treatment period (including appliances and accessories) was 263€ for those with PSCs (n = 1742) compared to 215€ for those without PSCs (n = 1172). A covariance analysis showed that leakage level had a significant impact on PSC cost from 'rarely/never' to 'always/often' (p<0.00001) and from 'rarely/never' to 'sometimes' (p = 0.0115). CONCLUSION: PSCs are common and troublesome and the consequences are substantial, both for the patient and from a health-economic viewpoint. PSCs should be diagnosed and treated at an early stage to prevent long-term, debilitating and expensive complications.

  3. Burnout in Medical Residents: A Study Based on the Job Demands-Resources Model

    Directory of Open Access Journals (Sweden)

    Panagiotis Zis

    2014-01-01

    Full Text Available Purpose. Burnout is a prolonged response to chronic emotional and interpersonal stressors on the job. The purpose of our cross-sectional study was to estimate the burnout rates among medical residents in the largest Greek hospital in 2012 and identify factors associated with it, based on the job demands-resources model (JD-R). Method. Job demands were examined via a 17-item questionnaire assessing 4 characteristics (emotional demands, intellectual demands, workload, and home-work demands’ interface) and job resources were measured via a 14-item questionnaire assessing 4 characteristics (autonomy, opportunities for professional development, support from colleagues, and supervisor’s support). The Maslach Burnout Inventory (MBI) was used to measure burnout. Results. Of the 290 eligible residents, 90.7% responded. In total 14.4% of the residents were found to experience burnout. Multiple logistic regression analysis revealed that each increased point in the JD-R questionnaire score regarding home-work interface was associated with an increase in the odds of burnout by 25.5%. Conversely, each increased point for autonomy, opportunities in professional development, and each extra resident per specialist were associated with a decrease in the odds of burnout by 37.1%, 39.4%, and 59.0%, respectively. Conclusions. Burnout among medical residents is associated with home-work interface, autonomy, professional development, and resident to specialist ratio.

  4. Burnout in medical residents: a study based on the job demands-resources model.

    Science.gov (United States)

    Zis, Panagiotis; Anagnostopoulos, Fotios; Sykioti, Panagiota

    2014-01-01

    Burnout is a prolonged response to chronic emotional and interpersonal stressors on the job. The purpose of our cross-sectional study was to estimate the burnout rates among medical residents in the largest Greek hospital in 2012 and identify factors associated with it, based on the job demands-resources model (JD-R). Job demands were examined via a 17-item questionnaire assessing 4 characteristics (emotional demands, intellectual demands, workload, and home-work demands' interface) and job resources were measured via a 14-item questionnaire assessing 4 characteristics (autonomy, opportunities for professional development, support from colleagues, and supervisor's support). The Maslach Burnout Inventory (MBI) was used to measure burnout. Of the 290 eligible residents, 90.7% responded. In total 14.4% of the residents were found to experience burnout. Multiple logistic regression analysis revealed that each increased point in the JD-R questionnaire score regarding home-work interface was associated with an increase in the odds of burnout by 25.5%. Conversely, each increased point for autonomy, opportunities in professional development, and each extra resident per specialist were associated with a decrease in the odds of burnout by 37.1%, 39.4%, and 59.0%, respectively. Burnout among medical residents is associated with home-work interface, autonomy, professional development, and resident to specialist ratio.
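
    As a side note on interpreting the reported effect sizes, the percentage changes in the odds of burnout map directly onto logistic regression coefficients; the short sketch below (illustrative only, using the figures quoted in the abstract) performs that conversion.

      import math

      # Reported changes in the odds of burnout per one-unit change of each predictor
      reported = {
          "home-work interface (+1 point)":      +0.255,   # +25.5% odds
          "autonomy (+1 point)":                 -0.371,   # -37.1% odds
          "professional development (+1 point)": -0.394,   # -39.4% odds
          "residents per specialist (+1)":       -0.590,   # -59.0% odds
      }

      for name, pct_change in reported.items():
          odds_ratio = 1.0 + pct_change      # odds ratio implied by the reported % change
          beta = math.log(odds_ratio)        # corresponding logistic regression coefficient
          print(f"{name}: OR = {odds_ratio:.3f}, beta = {beta:+.3f}")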

  5. Geostatistical modelling of carbon monoxide levels in Khartoum State (Sudan) - GIS pilot based study

    Energy Technology Data Exchange (ETDEWEB)

    Alhuseen, A [Comenius University in Bratislava, Faculty of Natural Sciences, Dept. of Landscape Ecology, 84215 Bratislava (Slovakia); Madani, M [Ministry of Environment and Physical Development, 1111 Khartoum (Sudan)

    2012-04-25

    The objective of this study is to develop a digital GIS model that can evaluate, predict and visualize carbon monoxide (CO) levels in Khartoum state. To achieve this aim, sample data were collected, processed and managed to generate a dynamic GIS model of carbon monoxide levels in the study area. Parametric data collected in the field and the analysis carried out throughout this study show that CO emissions were lower than the allowable ambient air quality standards released by the National Environment Protection Council (NEPC-USA) in 1998. However, this pilot study found that CO emissions in Omdurman city were the highest. The study shows that GIS and geostatistical modeling can be used as a powerful tool to produce maps of exposure. (authors)

  6. Model study on transesterification of soybean oil to biodiesel with methanol using solid base catalyst.

    Science.gov (United States)

    Liu, Xuejun; Piao, Xianglan; Wang, Yujun; Zhu, Shenlin

    2010-03-25

    Modeling the transesterification of vegetable oils to biodiesel over a solid base catalyst is important because the mutual solubilities of oil and methanol increase with increasing biodiesel yield: the heterogeneous liquid-liquid-solid reaction system becomes a liquid-solid system once the biodiesel content reaches a certain level. In this work, we adopted two-film theory and a steady-state approximation and established a heterogeneous liquid-liquid-solid model for the first stage. After the diffusion coefficients at the liquid-liquid and liquid-solid interfaces were calculated from the properties of the system, the theoretical evolution of biodiesel productivity with time was obtained. The predicted values were very close to the experimental data, indicating that the proposed models are suitable for the transesterification of soybean oil to biodiesel when solid bases are used as catalysts. The model also indicated that the transesterification reaction is controlled by both mass transfer and reaction, and that the total resistance decreases as the biodiesel yield increases in the liquid-liquid-solid stage. The solid base catalyst exhibited an activation energy range of 9-20 kcal/mol, which is consistent with the reported activation energy range of homogeneous catalysts.
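
    For reference, the generic two-film picture used in such models treats the two liquid films at an interface as resistances in series. Written in LaTeX in their textbook form (an illustration of the framework, not the exact equations of this work), with $k_1$, $k_2$ the film mass-transfer coefficients, $m$ the interfacial distribution coefficient and $K_1$ the overall coefficient based on the phase-1 driving force:

      N_A = k_1\left(C_{b,1} - C_{i,1}\right) = k_2\left(C_{i,2} - C_{b,2}\right),
      \qquad C_{i,1} = m\,C_{i,2},
      \qquad \frac{1}{K_1} = \frac{1}{k_1} + \frac{m}{k_2}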

  7. Risk-based modelling of surface water quality: a case study of the Charles River, Massachusetts

    Science.gov (United States)

    McIntyre, Neil R.; Wagener, Thorsten; Wheater, Howard S.; Chapra, Steven C.

    2003-04-01

    A model of phytoplankton, dissolved oxygen and nutrients is presented and applied to the Charles River, Massachusetts within a framework of Monte Carlo simulation. The model parameters are conditioned using data from eight sampling stations along a 40 km stretch of the Charles River, during a (supposed) steady-state period in the summer of 1996, and the conditioned model is evaluated using data from later in the same year. Regional multi-objective sensitivity analysis is used to identify the parameters and pollution sources most affecting the various model outputs under the conditions observed during that summer. The effects of Monte Carlo sampling error are included in this analysis, and the observations which have least contributed to model conditioning are indicated. It is shown that the sensitivity analysis can be used to speculate about the factors responsible for undesirable levels of eutrophication, and to speculate about the risk of failure of nutrient reduction interventions at a number of strategic control sections. The analysis indicates that phosphorus stripping at the CRPCD wastewater treatment plant on the Charles River would be a high-risk intervention, especially for controlling eutrophication at the control sections further downstream. However, as the risk reflects the perceived scope for model error, it can only be recommended that more resources are invested in data collection and model evaluation. Furthermore, as the risk is based solely on water quality criteria, rather than broader environmental and economic objectives, the results need to be supported by detailed and extensive knowledge of the Charles River problem.

  8. Towards A Model-based Prognostics Methodology for Electrolytic Capacitors: A Case Study Based on Electrical Overstress Accelerated Aging

    Directory of Open Access Journals (Sweden)

    Gautam Biswas

    2012-12-01

    Full Text Available This paper presents a model-driven methodology for predicting the remaining useful life of electrolytic capacitors. This methodology adopts a Kalman filter approach in conjunction with an empirical state-based degradation model to predict the degradation of capacitor parameters through the life of the capacitor. Electrolytic capacitors are important components of systems that range from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability and, given their critical role in the system, they are good candidates for component-level prognostics and health management. Prognostics provides a way to assess the remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. This paper proposes an empirical degradation model and discusses experimental results for an accelerated aging test performed on a set of identical capacitors subjected to electrical stress. The data form the basis for developing the Kalman-filter-based remaining life prediction algorithm.
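
    A minimal sketch of the Kalman-filter prognostics idea described above, assuming a linear capacitance-loss degradation state and synthetic measurements; the state definition, noise levels and end-of-life threshold are illustrative assumptions rather than the paper's model.

      import numpy as np

      # State x = [normalized capacitance, degradation rate]; capacitance drops by rate*dt per cycle.
      dt = 1.0
      F = np.array([[1.0, -dt], [0.0, 1.0]])   # state transition
      H = np.array([[1.0, 0.0]])               # only capacitance is measured
      Q = np.diag([1e-6, 1e-8])                # process noise (assumed)
      R = np.array([[1e-4]])                   # measurement noise (assumed)

      x = np.array([1.0, 0.001])               # initial estimate: C0 = 1, rough rate guess
      P = np.eye(2) * 0.1

      rng = np.random.default_rng(0)
      true_rate = 0.002                        # synthetic aging data
      measurements = 1.0 - true_rate * np.arange(1, 51) + rng.normal(0, 0.01, 50)

      for z in measurements:
          x = F @ x                            # predict
          P = F @ P @ F.T + Q
          y = z - H @ x                        # update with the new measurement
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + (K @ y).ravel()
          P = (np.eye(2) - K @ H) @ P

      threshold = 0.8                          # end of life: 20 % capacitance loss (assumed)
      rul = max((x[0] - threshold) / x[1], 0.0)
      print(f"capacitance {x[0]:.3f}, rate {x[1]:.4f}, remaining useful life ~ {rul:.0f} cycles")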

  9. Discovering the Power of Individual-Based Modelling in Teaching and Learning: The Study of a Predator-Prey System

    Science.gov (United States)

    Ginovart, Marta

    2014-08-01

    The general aim is to promote the use of individual-based models (biological agent-based models) in teaching and learning contexts in the life sciences and to ease their progressive incorporation into academic curricula, complementing other modelling strategies more frequently used in the classroom. Modelling activities for the study of a predator-prey system were designed and implemented for a mathematics classroom in the first year of an undergraduate program in biosystems engineering. These activities were designed to put two modelling approaches side by side: an individual-based model and a set of ordinary differential equations. To organize and display this, a system with wolves and sheep in a confined domain was considered and studied. Using the teaching material developed and a computer to perform the numerical solutions and the corresponding individual-based simulations, the students answered questions and completed exercises to achieve the learning goals set. Students' responses regarding the modelling of biological systems and these two distinct methodologies applied to the study of a predator-prey system were collected via questionnaires, open-ended queries and face-to-face dialogues. Judging by the positive responses of the students during these activities, it was clear that using a discrete individual-based model alongside a set of ordinary differential equations to deal with a predator-prey system enriches understanding of the modelling process, adds new insights and opens new perspectives on what can be done with computational models compared with other models. The complementary views given by the two modelling approaches were rated very positively by the students.
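
    To illustrate the two approaches placed side by side in such activities, the sketch below integrates the classical Lotka-Volterra equations and runs a toy stochastic individual-based counterpart for a wolves-and-sheep system; all parameter values and the individual-based update rules are illustrative assumptions, not the course material itself.

      import numpy as np

      # Deterministic view: classical Lotka-Volterra ODEs (x = sheep, y = wolves), Euler integration.
      def lotka_volterra(x, y, alpha=1.0, beta=0.02, delta=0.01, gamma=1.0,
                         dt=0.001, steps=10000):
          for _ in range(steps):
              dx = alpha * x - beta * x * y
              dy = delta * x * y - gamma * y
              x, y = x + dt * dx, y + dt * dy
          return x, y

      # Individual-based view: each step, every sheep may reproduce, every wolf may catch a
      # sheep (with probability growing with prey density) or starve.
      def individual_based(sheep=100, wolves=20, p_rep=0.04, p_eat=0.0008,
                           p_conv=0.25, p_starve=0.03, steps=500, seed=1):
          rng = np.random.default_rng(seed)
          for _ in range(steps):
              sheep += rng.binomial(sheep, p_rep)
              eaten = rng.binomial(wolves, min(1.0, p_eat * sheep))
              sheep = max(sheep - eaten, 0)
              wolves = max(wolves + rng.binomial(eaten, p_conv) - rng.binomial(wolves, p_starve), 0)
          return sheep, wolves

      print("ODE view         :", lotka_volterra(100.0, 20.0))
      print("individual-based :", individual_based())

    Unlike the deterministic cycles of the equations, the stochastic individual-based run can drift to extinction of one population, which is exactly the kind of contrast such classroom activities are designed to expose.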

  10. Effectiveness of Facebook Based Learning to Enhance Creativity among Islamic Studies Students by Employing Isman Instructional Design Model

    Science.gov (United States)

    Alias, Norlidah; Siraj, Saedah; Daud, Mohd Khairul Azman Md; Hussin, Zaharah

    2013-01-01

    The study examines the effectiveness of Facebook based learning to enhance creativity among Islamic Studies students in the secondary educational setting in Malaysia. It describes the design process by employing the Isman Instructional Design Model. A quantitative study was carried out using experimental method and background survey. The…

  11. Study on non-linear bistable dynamics model based EEG signal discrimination analysis method.

    Science.gov (United States)

    Ying, Xiaoguo; Lin, Han; Hui, Guohua

    2015-01-01

    An electroencephalogram (EEG) is a recording of electrical activity along the scalp. EEG measures voltage fluctuations resulting from ionic current flows within the neurons of the brain. EEG signals are regarded as one of the most important research topics for the next 20 years. In this paper, a method for EEG signal discrimination based on a non-linear bistable dynamical model is proposed. EEG signals were processed by the non-linear bistable dynamical model, and their features were characterized by a coherence index. Experimental results showed that the proposed method could properly extract the features of different EEG signals.
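
    A minimal sketch of passing a signal through a non-linear bistable (double-well) system and scoring the response, assuming the generic form dx/dt = a*x - b*x^3 + s(t) plus noise; this form and the simple correlation score are illustrative stand-ins, not the exact model or coherence index of the paper.

      import numpy as np

      def bistable_response(signal, a=1.0, b=1.0, dt=1e-3, noise=0.05, seed=0):
          """Euler-Maruyama integration of a double-well system driven by `signal`."""
          rng = np.random.default_rng(seed)
          x, out = 0.0, np.empty_like(signal)
          for i, s in enumerate(signal):
              x += dt * (a * x - b * x**3 + s) + noise * np.sqrt(dt) * rng.normal()
              out[i] = x
          return out

      t = np.arange(0.0, 10.0, 1e-3)
      drive = 0.3 * np.sin(2 * np.pi * 10 * t)        # stand-in for an EEG trace (10 Hz rhythm)
      response = bistable_response(drive)
      # Crude feature: correlation between the drive and the bistable response
      print("drive/response correlation:", round(float(np.corrcoef(drive, response)[0, 1]), 3))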

  12. A Study of Differentiated Instruction Based on the SIOP Model in Georgia Classrooms

    Science.gov (United States)

    Tomlinson, Sherry Marie

    2013-01-01

    This mixed methods study investigated teachers' concerns about the sheltered instruction observation protocol (SIOP) model (Echevarria, Short and Vogt, 2008) as a means to differentiate instruction for LEP students in public school classrooms. The study took place in one central Georgia school district with a sample of 16 teachers who…

  13. Human-centered modeling in human reliability analysis: some trends based on case studies

    International Nuclear Information System (INIS)

    Mosneron-Dupin, F.; Reer, B.; Heslinga, G.; Straeter, O.; Gerdes, V.; Saliou, G.; Ullwer, W.

    1997-01-01

    The EARTH association, an informal working group of researchers from France, Germany and The Netherlands created in 1993, is investigating significant subjects in the field of human reliability analysis (HRA). On the one hand, our initial review of cases from nuclear operating experience showed that decision-based unrequired actions (DUA) contribute significantly to risk. On the other hand, our evaluation of current HRA methods showed that these methods do not cover such actions adequately; in particular, practice-oriented guidelines for their predictive identification are lacking. We assumed that a basic cause of these difficulties is that the methods use a limited representation of the stimulus-organism-response (SOR) paradigm. We proposed a human-centered model, which better highlights the active role of the operators and the importance of their culture, attitudes and goals. This orientation was encouraged by our review of current HRA research activities. We therefore decided to seek progress by identifying cognitive tendencies in the context of operating and simulator experience. For this purpose, advanced approaches for retrospective event analysis were discussed and some orientations for improvement were proposed. By analyzing cases, various cognitive tendencies were identified, together with useful information about their context. Some of them match psychological findings already published in the literature; others are not covered adequately by the literature that we reviewed. Finally, this exploratory study shows that contextual and case-illustrated findings about cognitive tendencies provide useful help for the predictive identification of DUA in HRA. More research should be carried out to complement our findings and to elaborate more detailed and systematic guidelines for using them in HRA studies.

  14. Integrating Autism Care through a School-Based Intervention Model: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Katherine Dang

    2017-10-01

    Full Text Available The purpose of this pilot study is to determine the feasibility of monitoring the progress of children with an autism spectrum disorder (ASD) both in school and at home, to promote a school-based integrated care model between parents, teachers, and medical providers. This is a prospective cohort study. To monitor progress, outcome measures were administered via an online platform developed for caregivers and teachers of children (n = 30) attending a school specializing in neurodevelopmental disorders and using an integrated medical and education program. Longitudinal analysis showed improvements in a novel scale, the Teacher Autism Progress Scale (TAPS), which was designed to measure key autism-related gains in a school environment (2.1-point improvement, p = 0.004, ES = 0.324). The TAPS showed strong and statistically significant correlations with improvement in aberrant behavior (r = −0.50; p = 0.008) and social responsiveness (r = −0.70; p < 0.001). The results also showed non-statistically significant improvements in aberrant behavior, social responsiveness, and quality of life over time at both school and home. To assess the feasibility of ongoing progress measurement, we examined missing data, which showed that caregivers were more likely to miss surveys during summer. The results demonstrate the value and feasibility of online, longitudinal data collection in school to assist with individualized education planning and collaborative care for children with ASD. Lessons learned in this pilot will support school outcomes researchers in developing more efficacious, collaborative treatment plans between clinicians, caregivers, and teachers.

  15. Unrecorded alcohol use: a global modelling study based on nominal group assessments and survey data.

    Science.gov (United States)

    Probst, Charlotte; Manthey, Jakob; Merey, Aaron; Rylett, Margaret; Rehm, Jürgen

    2018-01-27

    Alcohol use is among the most important risk factors for burden of disease globally. An estimated quarter of the total alcohol consumed globally is unrecorded. However, due partly to the challenges associated with its assessment, evidence concerning the magnitude of unrecorded alcohol use is sparse. This study estimated country-specific proportions of unrecorded alcohol used in 2015. A statistical model was developed for data prediction using data on the country-specific proportion of unrecorded alcohol use from nominal group expert assessments and secondary, nationally representative survey data and country-level covariates. Estimates were calculated for the country level, for four income groups and globally. A total of 129 participants from 49 countries were included in the nominal group expert assessments. The survey data comprised 66 538 participants from 16 countries. Experts completed a standardized questionnaire assessing the country-specific proportion of unrecorded alcohol. In the national surveys, the number of standard drinks of total and unrecorded alcohol use was assessed for the past 7 days. Based on predictions for 167 countries, a population-weighted average of 27.9% [95% confidence interval (CI) = 10.4-44.9%] of the total alcohol consumed in 2015 was unrecorded. The proportion of unrecorded alcohol was lower in high (9.4%, 95% CI = 2.4-16.4%) and upper middle-income countries (18.3%, 95% CI = 9.0-27.6%) and higher in low (43.1%, 95% CI = 26.5-59.7%) and lower middle-income countries (54.4%, 95% CI = 38.1-70.8%). This corresponded to 0.9 (high-income), 1.2 (upper middle-income), 3.2 (lower middle-income) and 1.8 (low-income) litres of unrecorded alcohol per capita. A new method for modelling the country-level proportion of unrecorded alcohol use globally showed strong variation among geographical regions and income groups. Lower-income countries were associated with a higher proportion of unrecorded alcohol than higher-income countries

  16. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Wei He

    2016-01-01

    Full Text Available Aiming to estimate the single event effect (SEE) soft error performance of complex electronic systems, a soft error propagation model based on a cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main parameters for the raw soft error vulnerability of the module and the coupling factors. The results indicate that the proposed method is feasible.
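
    A toy cellular-automaton sketch of the propagation idea: each cell carries an error probability and, at every step, receives a fraction of its neighbours' probability through a coupling factor. The grid size, coupling value, periodic borders and injected fault are illustrative assumptions, not parameters taken from the experiments.

      import numpy as np

      def propagate(p0, coupling=0.2, steps=10):
          """Spread per-cell error probability to the four neighbours (periodic borders)."""
          p = p0.copy()
          for _ in range(steps):
              nbr = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                     np.roll(p, 1, 1) + np.roll(p, -1, 1))
              p = np.clip(p + coupling * 0.25 * nbr * (1.0 - p), 0.0, 1.0)
          return p

      grid = np.zeros((9, 9))
      grid[4, 4] = 1.0                  # single injected upset in the centre cell
      final = propagate(grid)
      print("mean error probability after 10 steps:", round(float(final.mean()), 4))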

  17. Challenges encountered when expanding activated sludge models: a case study based on N2O production

    DEFF Research Database (Denmark)

    Snip, Laura; Boiocchi, Riccardo; Flores Alsina, Xavier

    2014-01-01

    (WWTPs). As a consequence, these experiments might not be representative for full-scale performance, and unexpected behaviour may be observed when simulating WWTP models using the derived process equations. In this paper we want to highlight problems encountered using a simplified case study: a modified......: problems related to the overall mathematical model structure and problems related to the published set of parameter values. The paper describes the model implementation incompatibilities, the variability in parameter values and the difficulties of reaching similar conditions when simulating a full...

  18. Biomimetic agent based modelling using male Frog calling behaviour as a case study

    DEFF Research Database (Denmark)

    Jørgensen, Søren V.; Demazeau, Yves; Christensen-Dalsgaard, Jakob

    2014-01-01

    by individuals to generate their observed population behaviour. A number of existing agent-modelling frameworks are considered, but none have the ability to handle large numbers of time-dependent event-generating agents; hence the construction of a new tool, RANA. The calling behaviour of the Puerto Rican Tree Frog, E. coqui, is implemented as a case study for the presentation and discussion of the tool, and results from this model are presented. RANA, in its present stage of development, is shown to be able to handle the problem of modelling calling frogs, and several fruitful extensions are proposed...

  19. An Incremental Model for Cloud Adoption: Based on a Study of Regional Organizations

    Directory of Open Access Journals (Sweden)

    Emre Erturk

    2017-11-01

    Full Text Available Many organizations that use cloud computing services intend to increase this commitment. A survey was distributed to organizations in Hawke’s Bay, New Zealand to understand their adoption of cloud solutions, in comparison with global trends and practices. The survey also included questions on the benefits and challenges, and which delivery model(s) they have adopted and are planning to adopt. One aim is to contribute to the cloud computing literature and build on the existing adoption models. This study also highlights additional aspects applicable to various organizations (small, medium, large and regional). Finally, recommendations are provided for related future research projects.

  20. A study on a new algorithm to optimize ball mill system based on modeling and GA

    International Nuclear Information System (INIS)

    Wang Heng; Jia Minping; Huang Peng; Chen Zuoliang

    2010-01-01

    Aiming at the disadvantages of conventional optimization methods for ball mill pulverizing systems, a novel approach based on an RBF neural network and a genetic algorithm was proposed in the present paper. Firstly, experiments and the measurement of fill level based on vibration signals of the mill shell were introduced. Then, the main factors affecting the power consumption of the ball mill pulverizing system were analyzed, and the input variables of the RBF neural network were determined. The RBF neural network was used to map the complex non-linear relationship between electricity consumption and the process parameters, and a non-linear model of power consumption was built. Finally, the model was optimized by a genetic algorithm and the optimal working conditions of the ball mill pulverizing system were determined. The results demonstrate that the method is reliable and practical and can reduce electricity consumption markedly and effectively.
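
    A compact sketch of the surrogate-plus-search idea: fit a Gaussian RBF model of power consumption from sampled operating points, then minimize it with a tiny genetic algorithm. The chosen input variables (fill level, feed rate), the synthetic plant, the bounds and the GA settings are all illustrative assumptions, not the plant data or algorithm settings of the study.

      import numpy as np

      rng = np.random.default_rng(0)

      def plant_power(x):                     # synthetic stand-in for measured power consumption
          fill, feed = x[..., 0], x[..., 1]
          return 80 * (fill - 0.45) ** 2 + 50 * (feed - 0.6) ** 2 + 10

      X = rng.uniform(0.2, 0.9, size=(60, 2))             # sampled operating points
      y = plant_power(X) + rng.normal(0, 0.2, 60)         # noisy "measurements"

      def rbf_fit(X, y, eps=4.0):
          """Interpolating Gaussian RBF model with a small ridge term for stability."""
          K = np.exp(-eps * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
          w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)
          return lambda Z: np.exp(-eps * np.sum((Z[:, None] - X[None]) ** 2, axis=-1)) @ w

      model = rbf_fit(X, y)

      # Minimal genetic algorithm: truncation selection + Gaussian mutation
      pop = rng.uniform(0.2, 0.9, size=(40, 2))
      for _ in range(60):
          parents = pop[np.argsort(model(pop))[:10]]                      # keep the 10 best
          children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.03, (30, 2))
          pop = np.clip(np.vstack([parents, children]), 0.2, 0.9)

      best = pop[np.argmin(model(pop))]
      print("predicted optimal (fill level, feed rate):", best.round(3))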

  1. Study on reliability analysis based on multilevel flow models and fault tree method

    International Nuclear Information System (INIS)

    Chen Qiang; Yang Ming

    2014-01-01

    Multilevel flow models (MFM) and the fault tree method describe system knowledge in different forms, so the two methods express an equivalent logic of system reliability under the same boundary conditions and assumptions. Based on this, and combined with the characteristics of MFM, a method for mapping MFM to fault trees was put forward, providing a way to establish fault trees rapidly and to realize qualitative reliability analysis based on MFM. Taking the safety injection system of a pressurized water reactor nuclear power plant as an example, its MFM was established and its reliability was analyzed qualitatively. The analysis result shows that the logic of mapping MFM to fault trees is correct. MFM models are easily understood, created and modified. Compared with traditional fault tree analysis, the workload is greatly reduced and the modeling time is saved. (authors)
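
    As a concrete illustration of evaluating a fault tree of the kind obtained from such a mapping, the sketch below computes the top-event probability of a small AND/OR tree; the tree structure, event names and probabilities are hypothetical and are not the safety injection system model of the paper.

      # Hypothetical fault tree: top event occurs if both pumps fail or the injection valve fails.
      def AND(*children):
          return ("AND", children)

      def OR(*children):
          return ("OR", children)

      def prob(node, basic):
          if isinstance(node, str):                    # leaf = basic event
              return basic[node]
          gate, children = node
          ps = [prob(c, basic) for c in children]
          if gate == "AND":                            # all children must fail
              out = 1.0
              for p in ps:
                  out *= p
              return out
          out = 1.0                                    # OR gate, independence assumed
          for p in ps:
              out *= (1.0 - p)
          return 1.0 - out

      tree = OR(AND("pump_A_fails", "pump_B_fails"), "injection_valve_fails")
      basic = {"pump_A_fails": 1e-3, "pump_B_fails": 1e-3, "injection_valve_fails": 1e-4}
      print("top event probability:", prob(tree, basic))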

  2. Gasification under CO2–Steam Mixture: Kinetic Model Study Based on Shared Active Sites

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2017-11-01

    Full Text Available In this work, char gasification of two coals (i.e., Shenfu bituminous coal and Zunyi anthracite) and a petroleum coke under a steam and CO2 mixture (steam/CO2 partial pressures, 0.025–0.075 MPa; total pressure, 0.100 MPa) and CO2/steam chemisorption on the char samples were conducted in a thermogravimetric analyzer (TGA). Two conventional kinetic models had difficulty fitting the experimental data of char–steam–CO2 gasification exactly. Hence, a modified model based on the Langmuir–Hinshelwood model and assuming that the char–CO2 and char–steam reactions partially share active sites was proposed; it showed high accuracy in estimating the interactions in the char–steam–CO2 reaction. Moreover, it was found that the two new model parameters (characterized, respectively, as the ratio of shared active sites to total active sites in the char–CO2 and char–steam reactions) in the modified model hardly varied with gasification conditions, and the chemisorption results indicate that these two parameters mainly depend on the carbon active sites in the char samples.
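
    For orientation, the classical Langmuir–Hinshelwood rate expressions for char–CO2 and char–steam gasification competing for a single pool of active sites are written below in LaTeX; the shared-site modification of this work introduces additional parameters partitioning those sites, so these are only the textbook starting point, not the published modified model:

      r_{\mathrm{CO_2}} = \frac{k_1\, p_{\mathrm{CO_2}}}{1 + K_{\mathrm{CO_2}}\, p_{\mathrm{CO_2}} + K_{\mathrm{H_2O}}\, p_{\mathrm{H_2O}}},
      \qquad
      r_{\mathrm{H_2O}} = \frac{k_2\, p_{\mathrm{H_2O}}}{1 + K_{\mathrm{CO_2}}\, p_{\mathrm{CO_2}} + K_{\mathrm{H_2O}}\, p_{\mathrm{H_2O}}}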

  3. Model-free stochastic processes studied with q-wavelet-based informational tools

    International Nuclear Information System (INIS)

    Perez, D.G.; Zunino, L.; Martin, M.T.; Garavaglia, M.; Plastino, A.; Rosso, O.A.

    2007-01-01

    We undertake a model-free investigation of stochastic processes employing q-wavelet-based quantifiers, which constitute a generalization of their Shannon counterparts. It is shown that (i) interesting physical information becomes accessible in this way, (ii) for special q values the quantifiers are more sensitive than the Shannon ones, and (iii) there exists an implicit relationship between the Hurst parameter H and q within this wavelet framework.
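
    For context, such quantifiers are commonly built from the relative wavelet energies p_j of the signal; the Shannon wavelet entropy and its Tsallis q-generalization, which recovers the Shannon form as q tends to 1, read in standard notation (textbook definitions, not necessarily the exact quantifiers of the paper):

      S = -\sum_j p_j \ln p_j,
      \qquad
      S_q = \frac{1 - \sum_j p_j^{\,q}}{q - 1},
      \qquad
      \lim_{q \to 1} S_q = S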

  4. Which model based on fluorescence quenching is suitable to study the interaction between trans-resveratrol and BSA?

    Science.gov (United States)

    Wei, Xin Lin; Xiao, Jian Bo; Wang, Yuanfeng; Bai, Yalong

    2010-01-01

    There are several models that determine binding parameters by means of quenching the fluorescence of BSA, and the binding parameters obtained from different models differ considerably from each other. Which model is suitable to study the interaction between trans-resveratrol and BSA? Herein, twelve models based on fluorescence quenching of BSA were compared. Checking whether the number of binding sites increases with the binding constant for similar compounds binding to BSA may be one approach to resolving this question. As an example, eleven flavonoids were tested to illustrate that the double-logarithm regression curve is suitable for studying the binding of polyphenols to BSA.
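
    The double-logarithm regression mentioned here is commonly written as log[(F0 - F)/F] = log Ka + n log[Q]; the sketch below fits it by linear regression on synthetic quenching data (the concentrations, F0, Ka and n are illustrative values, not data from the study).

        import numpy as np

        # Synthetic quenching data: quencher concentration [Q] (mol/L) and BSA fluorescence F
        Q = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0]) * 1e-6
        F0 = 1000.0                            # fluorescence intensity without quencher
        Ka_true, n_true = 2.0e5, 1.1           # illustrative binding constant and site number
        F = F0 / (1.0 + Ka_true * Q**n_true)   # generated to satisfy the double-log relation exactly

        # Double-logarithm regression: log[(F0 - F)/F] = log Ka + n log[Q]
        slope_n, intercept = np.polyfit(np.log10(Q), np.log10((F0 - F) / F), 1)
        print(f"binding sites n = {slope_n:.2f}, binding constant Ka = {10**intercept:.3g}")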

  5. Agent-Based Model to Study and Quantify the Evolution Dynamics of Android Malware Infection

    Directory of Open Access Journals (Sweden)

    Juan Alegre-Sanahuja

    2014-01-01

    Full Text Available In recent years, the number of malware apps that users download to their devices has risen. In this paper, we propose an agent-based model to quantify the evolution of Android malware infection, modeling the behavior of the users and the different markets where users may download apps. The model predicts the number of infected smartphones depending on the type of malware. Additionally, we estimate the cost that users must bear when malware is on their devices. We are thus able to analyze which part is more critical: the users, who give indiscriminate permissions to apps or do not protect their devices with antivirus software, or the Android platform, due to the vulnerabilities of Android devices that permit them to be rooted. We focus on the Community of Valencia, Spain, although the obtained results can be extrapolated to other places where the number of Android smartphones remains fairly stable.
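
    A toy agent-based simulation in the spirit of this record is sketched below. Every parameter (population size, download rate, market share, infection probabilities, antivirus coverage) is a hypothetical placeholder rather than the calibrated values used for the Community of Valencia.

        import numpy as np

        rng = np.random.default_rng(3)

        N_USERS = 10_000          # hypothetical smartphone users
        DAYS = 365
        P_DOWNLOAD = 0.2          # probability a user downloads an app on a given day
        P_ALT_MARKET = 0.15       # share of downloads coming from alternative (riskier) markets
        P_INFECT = {"official": 0.0002, "alternative": 0.005}   # per-download infection probability
        P_ANTIVIRUS = 0.3         # fraction of users protected by antivirus (assumed to block infections)

        protected = rng.random(N_USERS) < P_ANTIVIRUS
        infected = np.zeros(N_USERS, dtype=bool)

        for _ in range(DAYS):
            downloads = rng.random(N_USERS) < P_DOWNLOAD
            alt = rng.random(N_USERS) < P_ALT_MARKET
            p = np.where(alt, P_INFECT["alternative"], P_INFECT["official"])
            newly = downloads & ~protected & (rng.random(N_USERS) < p)
            infected |= newly

        print(f"infected devices after one year: {infected.sum()} "
              f"({100 * infected.mean():.2f}% of the simulated population)")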

  6. Study of visualized simulation and analysis of nuclear fuel cycle system based on multilevel flow model

    International Nuclear Information System (INIS)

    Liu Jingquan; Yoshikawa, H.; Zhou Yangping

    2005-01-01

    Complex energy and environment systems, especially the nuclear fuel cycle system, have recently raised social concerns about economic competitiveness, environmental effects and nuclear proliferation. Only if stakeholders with different knowledge backgrounds reach a consensus on these conflicting issues can the nuclear power industry continue to develop. In this paper, a new analysis platform has been developed to help stakeholders recognize and analyze various socio-technical issues in the nuclear fuel cycle system, based on the functional modeling method named Multilevel Flow Models (MFM) and grounded in the theory of human cognition. Its characteristic is that MFM models define a set of mass, energy and information flow structures on multiple levels of abstraction to describe the functional structure of a process system, while the graphical symbol representation and the means-end and part-whole hierarchical flow structures make the represented process easy to understand. Based on this methodology, a micro-process and a macro-process of the nuclear fuel cycle system were selected for simulation, and analysis processes such as economic analysis, environmental analysis and energy balance analysis related to those flows were integrated to help stakeholders understand the decision-making process, with some new functions introduced in the improved Multilevel Flow Models Studio. Finally, simple simulations such as the spent fuel management process and the money flow of the nuclear fuel cycle with its levelised cost analysis are presented as feasible examples. (authors)

  7. Are individual based models a suitable approach to estimate population vulnerability? - a case study

    Directory of Open Access Journals (Sweden)

    Eva Maria Griebeler

    2011-04-01

    Full Text Available European populations of the Large Blue Butterfly Maculinea arion have experienced severe declines in the last decades, especially in the northern part of the species' range. This endangered lycaenid butterfly needs two resources for development: flower buds of specific plants (Thymus spp., Origanum vulgare), on which young caterpillars briefly feed, and red ants of the genus Myrmica, whose nests support caterpillars during a prolonged final instar. I present an analytically solvable deterministic model to estimate the vulnerability of populations of M. arion. Results obtained from the sensitivity analysis of this mathematical model (MM) are contrasted with the respective results derived from a spatially explicit individual based model (IBM) for this butterfly. I demonstrate that details of landscape configuration which are neglected by the MM but easily taken into consideration by the IBM result in a different degree of intraspecific competition of caterpillars on flower buds and within host ant nests. The resulting differences in caterpillar mortality lead to erroneous estimates of the extinction risk of a butterfly population living in habitat with low food plant coverage and low abundance of host ant nests. This observation favors the use of an individual based modeling approach over the deterministic approach, at least for the management of this threatened butterfly.

  8. Study of visualized simulation and analysis of nuclear fuel cycle system based on multilevel flow model

    Institute of Scientific and Technical Information of China (English)

    LIU Jing-Quan; YOSHIKAWA Hidekazu; ZHOU Yang-Ping

    2005-01-01

    Complex energy and environment systems, especially the nuclear fuel cycle system, have recently raised social concerns about economic competitiveness, environmental effects and nuclear proliferation. Only if stakeholders with different knowledge backgrounds reach a consensus on these conflicting issues can the nuclear power industry continue to develop. In this paper, a new analysis platform has been developed to help stakeholders recognize and analyze various socio-technical issues in the nuclear fuel cycle system, based on the functional modeling method named Multilevel Flow Models (MFM) and grounded in the theory of human cognition. Its characteristic is that MFM models define a set of mass, energy and information flow structures on multiple levels of abstraction to describe the functional structure of a process system, while the graphical symbol representation and the means-end and part-whole hierarchical flow structures make the represented process easy to understand. Based on this methodology, a micro-process and a macro-process of the nuclear fuel cycle system were selected for simulation, and analysis processes such as economic analysis, environmental analysis and energy balance analysis related to those flows were integrated to help stakeholders understand the decision-making process, with some new functions introduced in the improved Multilevel Flow Models Studio. Finally, simple simulations such as the spent fuel management process and the money flow of the nuclear fuel cycle with its levelised cost analysis are presented as feasible examples.

  9. GIS-based modelling of odour emitted from the waste processing plant: case study

    Directory of Open Access Journals (Sweden)

    Sówka Izabela

    2017-01-01

    Full Text Available The emission of odours into the atmospheric air from municipal facilities and industrial plants, especially in urbanized areas, causes a serious problem that mankind has been struggling with for years. Excessive exposure of people to odours may result in many negative health effects, including, for example, headaches and vomiting. There are many different methods that are used to evaluate odour nuisance. The results obtained through those methods can then be used to carry out a visualization and analysis of the distribution of odour concentrations in a given area by using a GIS (Geographic Information System). Applying GIS to the spatial analysis of the impact of odours enables the assessment of the magnitude and likelihood of the occurrence of odour nuisance. Modelling using GIS tools and spatial interpolation methods, such as IDW and kriging, can provide an alternative to the standard modelling tools, which generally use emission values from sources identified as major emitters of odours. Based on odour measurement data from a waste processing plant, this work presents the result of an attempt to connect two different tools, the reference model OPERAT FB and GIS-based dispersion modelling performed using the IDW method and ordinary kriging, and to analyse their behaviour when observation values are limited.
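
    The GIS-side interpolation step can be sketched with a few lines of inverse distance weighting (IDW); the measurement coordinates and odour concentrations below are hypothetical, and kriging would replace this weighting scheme with a variogram-based one.

        import numpy as np

        def idw(xy_obs, z_obs, xy_grid, power=2.0, eps=1e-12):
            """Inverse-distance-weighted interpolation of observations z_obs at xy_obs onto xy_grid."""
            d = np.sqrt(((xy_grid[:, None, :] - xy_obs[None, :, :]) ** 2).sum(-1)) + eps
            w = 1.0 / d**power
            return (w * z_obs).sum(axis=1) / w.sum(axis=1)

        # Hypothetical odour-concentration measurements (ou/m^3) around a waste processing plant
        xy_obs = np.array([[0.0, 0.0], [150.0, 60.0], [300.0, -40.0], [120.0, 220.0], [-80.0, 130.0]])
        z_obs = np.array([12.0, 7.5, 3.1, 4.8, 6.2])

        # Regular 50 m grid covering the study area
        gx, gy = np.meshgrid(np.arange(-200, 401, 50), np.arange(-200, 401, 50))
        grid = np.column_stack([gx.ravel(), gy.ravel()])

        z_grid = idw(xy_obs, z_obs, grid).reshape(gx.shape)
        print("interpolated odour concentration field shape:", z_grid.shape)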

  10. Study on Maritime Logistics Warehousing Center Model and Precision Marketing Strategy Optimization Based on Fuzzy Method and Neural Network Model

    Directory of Open Access Journals (Sweden)

    Xiao Kefeng

    2017-08-01

    Full Text Available Bulk commodities, unlike retail goods, are distinctive with respect to location selection, the choice of transportation program and the decision objectives. How to make optimal decisions on facility location, requirement distribution, shipping methods and route selection, and how to establish an effective distribution system to reduce cost, have become burning issues for e-commerce logistics that deserve to be solved deeply and systematically. In this paper, a logistics warehousing center model and precision marketing strategy optimization based on a fuzzy method and a neural network model is proposed to solve this problem. In addition, because of the complexity of the proposed model, we have designed solution principles based on the fuzzy method and the neural network model. Finally, numerous examples were solved to compare the results of Lingo and Matlab; Matlab and Lingo were used to check the results and to illustrate the numerical example. The results show that the multi-objective model increases logistics costs and improves the efficiency of distribution time.

  11. Base Station Performance Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  12. Study on the combined influence of battery models and sizing strategy for hybrid and battery-based electric vehicles

    DEFF Research Database (Denmark)

    Pinto, Cláudio; Barreras, Jorge V.; de Castro, Ricardo

    2017-01-01

    This paper presents a study of the combined influence of battery models and sizing strategy for hybrid and battery-based electric vehicles. In particular, the aim is to find the number of battery (and supercapacitor) cells to propel a light vehicle to run two different standard driving cycles.... Three equivalent circuit models are considered to simulate the battery electrical performance: linear static, non-linear static and non-linear with first-order dynamics. When dimensioning a battery-based vehicle, less complex models may lead to a solution with more battery cells and higher costs.... Despite the same tendency, when a hybrid vehicle is taken into account, the influence of the battery models is dependent on the sizing strategy. In this work, two sizing strategies are evaluated: dynamic programming and filter-based. For the latter, the complexity of the battery model has a clear...
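
    As an illustration of the third equivalent-circuit class mentioned above (non-linear with first-order dynamics), the sketch below simulates an open-circuit-voltage source in series with a resistance and one RC branch; the cell parameters and the OCV curve are invented placeholders, not the values identified in the paper.

        import numpy as np

        # Illustrative cell parameters (not from the paper)
        Q_CELL = 2.5 * 3600                  # capacity [C] (2.5 Ah)
        R0, R1, C1 = 0.015, 0.010, 2000.0    # series resistance, RC-branch resistance and capacitance

        def ocv(soc):
            """Simple non-linear open-circuit-voltage curve as a function of state of charge."""
            return 3.2 + 0.9 * soc + 0.1 * np.tanh(8 * (soc - 0.1))

        def simulate(current, dt=1.0, soc0=0.9):
            """First-order equivalent-circuit model: v = OCV(SoC) - R0*i - v1, with RC dynamics on v1."""
            soc, v1, out = soc0, 0.0, []
            for i in current:                          # discharge current positive [A]
                soc -= i * dt / Q_CELL
                v1 += dt * (i / C1 - v1 / (R1 * C1))   # forward-Euler update of the RC branch voltage
                out.append(ocv(soc) - R0 * i - v1)
            return np.array(out)

        # Ten minutes of a simple pulse-discharge profile (alternating 5 A and rest)
        i_profile = np.where((np.arange(600) // 60) % 2 == 0, 5.0, 0.0)
        v = simulate(i_profile)
        print(f"terminal voltage: start {v[0]:.3f} V, end {v[-1]:.3f} V")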

  13. Pavlovian disgust conditioning as a model for contamination-based OCD: Evidence from an analogue study.

    Science.gov (United States)

    Armstrong, Thomas; Olatunji, Bunmi O

    2017-06-01

    Pavlovian fear conditioning provides a model for anxiety-related disorders, including obsessive-compulsive disorder (OCD). However, disgust is the predominant emotional response to contamination, which is a common theme in OCD. The present study sought to identify disgust conditioning abnormalities that may underlie excessive contamination concerns relevant to OCD. Individuals high and low in contamination concern (HCC, n = 32; LCC, n = 30) completed an associative learning task in which one neutral face (conditioned stimulus; CS+) was followed by a disgusting image (unconditioned stimulus; US) and another neutral face (CS-) was unreinforced. Following this acquisition procedure, there was an extinction procedure in which both CSs were presented unreinforced. The groups did not show significant differences in discriminant responding to the CSs following acquisition. However, following extinction, the HCC group reported less reduction in their expectancy of the US following the CS+, and also reported greater disgust to the CS+, compared to the LCC group. Increased disgust to the CS+ following both acquisition and extinction was correlated with increased symptoms of contamination-based OCD and increased disgust sensitivity. Additionally, disgust sensitivity mediated group differences in disgust responding to the CS+ at acquisition and extinction. Also, failure to adjust US expectancy in response to extinction partially mediated group differences in disgust to the CS+ following extinction. Together, these findings suggest that excessive contamination concerns observed in OCD may be related to difficulty inhibiting acquired disgust, possibly due to elevated disgust sensitivity that characterizes the disorder. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. An Experimental Study on Mechanical Modeling of Ceramics Based on Microstructure

    Directory of Open Access Journals (Sweden)

    Ya-Nan Zhang

    2015-11-01

    Full Text Available The actual grinding result of ceramics has not been well predicted by present mechanical models: no allowance is made for the direct effects of material microstructure, and almost all of these models were obtained for crystalline ceramics. In order to improve the mechanical models of ceramics, surface grinding experiments on crystalline ceramics and non-crystalline ceramics were conducted in this research. The normal and tangential grinding forces were measured to calculate the single-grit force and the specific grinding energy, and the ground surfaces were observed. For crystalline alumina ceramics, the predictive model of normal force per grit fits the experimental results well when the maximum undeformed chip thickness is less than a critical depth, which turns out to be close to the grain size of alumina. Meanwhile, there is a negative correlation between the specific grinding energy and the maximum undeformed chip thickness. With decreasing maximum undeformed chip thickness, the proportions of ductile removal and transgranular fracture increase. However, the grinding force models are not applicable to the non-crystalline ceramic fused silica, and its specific grinding energy fluctuates irregularly as a function of maximum undeformed chip thickness, as seen from the experiment.

  15. Study of Heat Flux Threshold and Perturbation Effect on Transport Barrier Formation Based on Bifurcation Model

    International Nuclear Information System (INIS)

    Chatthong, B.; Onjun, T.; Imbeaux, F.; Sarazin, Y.; Strugarek, A.; Picha, R.; Poolyarat, N.

    2011-06-01

    Full text: The formation of transport barriers in fusion plasma is studied using a simple one-field bistable S-curve bifurcation model. The model is characterized by an S-shaped curve with two stable branches, corresponding to the low (L) and high (H) confinement modes, connected by an unstable branch. The model assumes that the reduction in anomalous transport is caused by the v_E velocity shear effect and that this velocity shear is proportional to the pressure gradient. In this study, analytical and numerical approaches are used to obtain the necessary conditions for transport barrier formation, i.e. the ratio of anomalous to neoclassical transport coefficients and the heat flux thresholds which must be exceeded. Several heat source profiles are considered, including constant, Gaussian and hyperbolic tangent forms. Moreover, the effect of a perturbation in the heat flux on transport barrier formation is investigated.
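
    A minimal numerical version of such a one-field S-curve model is sketched below: the anomalous diffusivity is suppressed by a shear term proportional to the pressure gradient, and the flux-gradient curve back-bends wherever its slope turns negative, which marks the bistable (L/H) range. The coefficients are illustrative assumptions, not the paper's values.

        import numpy as np

        # Illustrative transport coefficients (not the paper's values)
        CHI_NEO, CHI_ANO, ALPHA = 0.2, 4.0, 1.0

        def heat_flux(grad_p):
            """One-field bifurcation model: anomalous transport suppressed by the
            pressure-gradient-driven shear, giving an S-shaped flux-gradient curve."""
            return (CHI_NEO + CHI_ANO / (1.0 + ALPHA * grad_p**2)) * grad_p

        g = np.linspace(0.0, 6.0, 2001)
        q = heat_flux(g)

        # The curve is bistable wherever dQ/dg < 0; the corresponding flux range brackets the
        # thresholds between which low- and high-confinement branches coexist.
        dq = np.gradient(q, g)
        bistable = q[dq < 0]
        if bistable.size:
            print(f"bistable flux range: {bistable.min():.3f} .. {bistable.max():.3f}")
        else:
            print("no back-bending: chi_ano/chi_neo too small for barrier formation")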

  16. Conceptual model study using origami for membrane space structures : a perspective of origami-based engineering

    OpenAIRE

    NATORI, M. C.; SAKAMOTO, Hiraku; KATSUMATA, Nobuhisa; YAMAKAWA, Hiroshi; KISHIMOTO, Naoko

    2015-01-01

    This paper discusses what has been found and what will be found using conceptual “origami” models to develop deployable space structures. The study covers the following: (i) one-dimensional structural elements, which are axially buckled inflatable tubes; (ii) two-dimensional elements, which are deployable membranes, such as solar arrays and solar sails; and (iii) deployable elements in nature. The study clarifies what design considerations are necessary to adapt the basic concepts to actual s...

  17. A Comparative Study of the Effects of the Neurocognitive-Based Model and the Conventional Model on Learner Attention, Working Memory and Mood

    Science.gov (United States)

    Srikoon, Sanit; Bunterm, Tassanee; Nethanomsak, Teerachai; Ngang, Tang Keow

    2017-01-01

    Purpose: The attention, working memory, and mood of learners are the most important abilities in the learning process. This study was concerned with the comparison of contextualized attention, working memory, and mood through a neurocognitive-based model (5P) and a conventional model (5E). It sought to examine the significant change in attention,…

  18. Thermodynamic data base needs for modeling studies of the Yucca Mountain project

    International Nuclear Information System (INIS)

    Palmer, C.E.A.; Silva, R.J.; Bucher, J.J.

    1996-01-01

    This document is the first in a series of documents outlining the thermodynamic data needs for performing geochemical modeling calculations in support of various waste package performance assessment activities for the Yucca Mountain Project. The documents are intended to identify and justify the critical thermodynamic data needs for the data base to be used with the models. The Thermodynamic Data Determinations task supplies data needed to resolve performance or design issues and the development of the data base will remain an iterative process as needs change or data improve. For example, data are needed to predict: (1) major ion groundwater chemistry and its evolution, (2) mineral stabilities and evolution, (3) engineered barrier near-field transport and retardation properties, (4) changes in geochemical conditions and processes, (5) solubilities, speciation and transport of waste radionuclides and (6) the dissolution of corrosion of construction and canister materials and the effect on groundwater chemistry and radionuclide solubilities and transport. The system is complex and interactive, and data need to be supplied in order to model the changes and their effect on other components of the system, e.g., temperature, pH and redox conditions (Eh). Through sensitivity and uncertainty analyses, the critical data and system parameters will be identified and the acceptable variations in them documented

  19. Weather Research and Forecasting Model Wind Sensitivity Study at Edwards Air Force Base, CA

    Science.gov (United States)

    Watson, Leela R.; Bauman, William H., III; Hoeth, Brian

    2009-01-01

    This abstract describes work that will be done by the Applied Meteorology Unit (AMU) in assessing the success of different model configurations in predicting "wind cycling" cases at Edwards Air Force Base, CA (EAFB), in which the wind speeds and directions oscillate among towers near the EAFB runway. The Weather Research and Forecasting (WRF) model allows users to choose among two dynamical cores - the Advanced Research WRF (ARW) and the Non-hydrostatic Mesoscale Model (NMM). There are also data assimilation analysis packages available for the initialization of the WRF model - the Local Analysis and Prediction System (LAPS) and the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS). Having a series of initialization options and WRF cores, as well as many options within each core, creates challenges for local forecasters, such as determining which configuration options are best to address specific forecast concerns. The goal of this project is to assess the different configurations available and determine which configuration will best predict surface wind speed and direction at EAFB.

  20. Springback study in aluminum alloys based on the Demeri Benchmark Test : influence of material model

    International Nuclear Information System (INIS)

    Greze, R.; Laurent, H.; Manach, P. Y.

    2007-01-01

    Springback is a serious problem in sheet metal forming. Its origin lies in the elastic recovery of materials after a deep drawing operation: springback modifies the final shape of the part when it is removed from the die after forming. This study deals with springback in an Al5754-O aluminum alloy. An experimental test similar to the Demeri Benchmark Test has been developed, and the experimentally measured springback is compared to springback predicted by simulation using the Abaqus software. Several material models are analyzed, all using isotropic hardening of the Voce type and plasticity criteria such as the Von Mises and Hill48 yield criteria.
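
    For reference, Voce-type isotropic hardening is a saturating flow-stress law; a small sketch with placeholder parameters (not the values identified for Al5754-O in the study) is given below.

        import numpy as np

        # Illustrative Voce parameters (placeholders, not the identified values for Al5754-O)
        SIGMA_0, SIGMA_SAT, BETA = 95.0, 230.0, 9.0   # initial yield stress, saturation stress [MPa], rate

        def voce_yield_stress(eps_p):
            """Voce isotropic hardening: flow stress saturating with equivalent plastic strain."""
            return SIGMA_SAT - (SIGMA_SAT - SIGMA_0) * np.exp(-BETA * eps_p)

        for eps in (0.0, 0.02, 0.05, 0.10, 0.20):
            print(f"eps_p = {eps:.2f}  ->  sigma_y = {voce_yield_stress(eps):6.1f} MPa")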

  1. Abstract probabilistic CNOT gate model based on double encoding: study of the errors and physical realizability

    Science.gov (United States)

    Gueddana, Amor; Attia, Moez; Chatta, Rihab

    2015-03-01

    In this work, we study the error sources standing behind the non-perfect linear optical quantum components composing a non-deterministic quantum CNOT gate model, which performs the CNOT function with a success probability of 4/27 and uses a double encoding technique to represent photonic qubits at the control and the target. We generalize this model to an abstract probabilistic CNOT version and determine the realizability limits depending on a realistic range of the errors. Finally, we discuss physical constraints allowing the implementation of the Asymmetric Partially Polarizing Beam Splitter (APPBS), which is at the heart of correctly realizing the CNOT function.

  2. A Study on Standard Competition with Network Effect Based on Evolutionary Game Model

    Science.gov (United States)

    Wang, Ye; Wang, Bingdong; Li, Kangning

    Owing to the spread of networks in modern society, standard competition with network effects has taken on a new connotation. This paper aims to study the impact of network effects on standard competition; it is organized as introduction, model setup, equilibrium analysis and conclusion. Starting from a well-structured evolutionary game model, the analysis is then extended to a dynamic setting. The article shows, both theoretically and empirically, that whether or not a standard can lead the market depends on the utility it would bring, and it also discusses advisable strategies revolving around the two factors of initial position and border break.

  3. A Study of Wind Turbine Comprehensive Operational Assessment Model Based on EM-PCA Algorithm

    Science.gov (United States)

    Zhou, Minqiang; Xu, Bin; Zhan, Yangyan; Ren, Danyuan; Liu, Dexing

    2018-01-01

    To assess wind turbine performance accurately and provide a theoretical basis for wind farm management, a hybrid assessment model based on the Entropy Method and Principal Component Analysis (EM-PCA) was established, which takes most factors of operational performance into consideration and reaches a comprehensive result. To verify the model, six wind turbines were chosen as the research objects; the ranking obtained by the proposed method was 4#>6#>1#>5#>2#>3#, which is completely in conformity with the theoretical ranking and indicates that the reliability and effectiveness of the EM-PCA method are high. The method can give guidance for state comparisons among different units and for launching wind farm operational assessments.
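
    The entropy-method half of such a hybrid EM-PCA scheme can be sketched as follows; the indicator matrix for six turbines is invented, and the PCA step that the paper combines with it is omitted here.

        import numpy as np

        # Hypothetical operational indicators for six wind turbines (rows): e.g. availability,
        # capacity factor, power-curve conformity (all scaled so that larger is better)
        X = np.array([[0.97, 0.34, 0.88],
                      [0.93, 0.29, 0.80],
                      [0.91, 0.27, 0.78],
                      [0.98, 0.36, 0.90],
                      [0.95, 0.31, 0.83],
                      [0.96, 0.35, 0.86]])

        # Entropy method: min-max normalize, turn columns into distributions, weight by (1 - entropy)
        P = (X - X.min(0)) / (X.max(0) - X.min(0)) + 1e-12
        P /= P.sum(0)
        k = 1.0 / np.log(len(X))
        entropy = -k * (P * np.log(P)).sum(0)
        w = (1.0 - entropy) / (1.0 - entropy).sum()

        score = (P * w).sum(1)                    # entropy-weighted composite score per turbine
        ranking = np.argsort(-score) + 1          # 1-based turbine indices, best first
        print("entropy weights:", np.round(w, 3))
        print("turbine ranking (best to worst):", ranking)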

  4. A study on fatigue crack growth model considering high mean loading effects based on structural stress

    International Nuclear Information System (INIS)

    Kim, Jong Sung; Kim, Cheol; Jin, Tae Eun; Dong, P.

    2004-01-01

    The mesh-insensitive structural stress procedure by Dong is modified to apply to welded joints with local thickness variation and inarguable shear/normal stresses along the local discontinuity surface. In order to make use of the structural-stress-based K solution for fatigue correlation of welded joints, a proper crack growth model needs to be developed, since there are significant discrepancies in inferring the slope, or crack growth exponent, in the conventional Paris law regime. Two-stage crack growth models had not been considered previously because their applications focused on the fatigue behavior of welded joints in which load ratio effects are considered negligible. In this paper, a two-stage crack growth law considering high mean loading is proposed and proven to be effective in unifying the so-called anomalous short crack growth data.
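
    For context, a Paris-type growth law da/dN = C(dK)^m integrated over crack depth is the usual backbone of such life predictions; the sketch below uses illustrative constants and a structural-stress-based stress range, not the calibrated two-stage law proposed in the paper.

        import numpy as np

        # Illustrative Paris-law constants (units: da/dN in m/cycle, dK in MPa*sqrt(m))
        C, M = 3.0e-12, 3.0
        DELTA_SIGMA, Y = 120.0, 1.12          # structural stress range [MPa], geometry factor

        def cycles_to_grow(a0, af, da=1.0e-5):
            """Integrate da/dN = C*(dK)^M from initial crack depth a0 to final depth af [m]."""
            a = np.arange(a0, af, da)
            dK = DELTA_SIGMA * Y * np.sqrt(np.pi * a)
            dN_da = 1.0 / (C * dK**M)         # cycles per metre of crack growth
            return np.trapz(dN_da, a)

        print(f"predicted life: {cycles_to_grow(0.5e-3, 10.0e-3):.3e} cycles")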

  5. Noninvasive pulmonary artery pressure monitoring by EIT: a model-based feasibility study.

    Science.gov (United States)

    Proença, Martin; Braun, Fabian; Solà, Josep; Thiran, Jean-Philippe; Lemay, Mathieu

    2017-06-01

    Current monitoring modalities for patients with pulmonary hypertension (PH) are limited to invasive solutions. A novel approach for the noninvasive and unsupervised monitoring of pulmonary artery pressure (PAP) in patients with PH was proposed and investigated. The approach was based on the use of electrical impedance tomography (EIT), a noninvasive and safe monitoring technique, and was tested through simulations on a realistic 4D bio-impedance model of the human thorax. Changes in PAP were induced in the model by simulating multiple types of hypertensive conditions. A timing parameter physiologically linked to the PAP via the so-called pulse wave velocity principle was automatically estimated from the EIT data. It was found that changes in PAP could indeed be reliably monitored by EIT, irrespective of the pathophysiological condition that caused them. If confirmed clinically, these findings could open the way for a new generation of noninvasive PAP monitoring solutions for the follow-up of patients with PH.

  6. A conceptual sedimentological-geostatistical model of aquifer heterogeneity based on outcrop studies

    International Nuclear Information System (INIS)

    Davis, J.M.

    1994-01-01

    Three outcrop studies were conducted in deposits of different depositional environments. At each site, permeability measurements were obtained with an air-minipermeameter developed as part of this study. In addition, the geological units were mapped with either surveying, photographs, or both. Geostatistical analysis of the permeability data was performed to estimate the characteristics of the probability distribution function and the spatial correlation structure. The information obtained from the geological mapping was then compared with the results of the geostatistical analysis for any relationships that may exist. The main field site was located in the Albuquerque Basin of central New Mexico at an outcrop of the Pliocene-Pleistocene Sierra Ladrones Formation. The second study was conducted on the walls of waste pits in alluvial fan deposits at the Nevada Test Site. The third study was conducted on an outcrop of an eolian deposit (miocene) south of Socorro, New Mexico. The results of the three studies were then used to construct a conceptual model relating depositional environment to geostatistical models of heterogeneity. The model presented is largely qualitative but provides a basis for further hypothesis formulation and testing

  7. A conceptual sedimentological-geostatistical model of aquifer heterogeneity based on outcrop studies

    Energy Technology Data Exchange (ETDEWEB)

    Davis, J.M.

    1994-01-01

    Three outcrop studies were conducted in deposits of different depositional environments. At each site, permeability measurements were obtained with an air-minipermeameter developed as part of this study. In addition, the geological units were mapped with either surveying, photographs, or both. Geostatistical analysis of the permeability data was performed to estimate the characteristics of the probability distribution function and the spatial correlation structure. The information obtained from the geological mapping was then compared with the results of the geostatistical analysis for any relationships that may exist. The main field site was located in the Albuquerque Basin of central New Mexico at an outcrop of the Pliocene-Pleistocene Sierra Ladrones Formation. The second study was conducted on the walls of waste pits in alluvial fan deposits at the Nevada Test Site. The third study was conducted on an outcrop of an eolian deposit (miocene) south of Socorro, New Mexico. The results of the three studies were then used to construct a conceptual model relating depositional environment to geostatistical models of heterogeneity. The model presented is largely qualitative but provides a basis for further hypothesis formulation and testing.

  8. Studies on combined model based on functional objectives of large scale complex engineering

    Science.gov (United States)

    Yuting, Wang; Jingchun, Feng; Jiabao, Sun

    2018-03-01

    As large scale complex engineering includes various functions, and each function is delivered through the completion of one or more projects, the combined projects affecting each function should be identified. Based on the types of project portfolio, the relationships between projects and their functional objectives were analyzed. On that premise, portfolio project techniques based on functional objectives were introduced, and the principles of such techniques were studied and proposed. In addition, the processes for combining projects were also constructed. With the help of portfolio project techniques based on the functional objectives of projects, our research findings lay a good foundation for the portfolio management of large scale complex engineering.

  9. Model based approach to Study the Impact of Biofuels on the Sustainability of an Ecological System

    Science.gov (United States)

    The importance and complexity of sustainability has been well recognized and a formal study of sustainability based on system theory approaches is imperative as many of the relationships between various components of the ecosystem could be nonlinear, intertwined and non intuitive...

  10. A model for self-diffusion of guanidinium-based ionic liquids: a molecular simulation study.

    Science.gov (United States)

    Klähn, Marco; Seduraman, Abirami; Wu, Ping

    2008-11-06

    We propose a novel self-diffusion model for ionic liquids on an atomic level of detail. The model is derived from molecular dynamics simulations of guanidinium-based ionic liquids (GILs) as a model case. The simulations are based on an empirical molecular mechanical force field, which has been developed in our preceding work, and it relies on the charge distribution in the actual liquid. The simulated GILs consist of acyclic and cyclic cations that were paired with nitrate and perchlorate anions. Self-diffusion coefficients are calculated at different temperatures from which diffusive activation energies between 32-40 kJ/mol are derived. Vaporization enthalpies between 174-212 kJ/mol are calculated, and their strong connection with diffusive activation energies is demonstrated. An observed formation of cavities in GILs of up to 6.5% of the total volume does not facilitate self-diffusion. Instead, the diffusion of ions is found to be determined primarily by interactions with their immediate environment via electrostatic attraction between cation hydrogen and anion oxygen atoms. The calculated average time between single diffusive transitions varies between 58-107 ps and determines the speed of diffusion, in contrast to diffusive displacement distances, which were found to be similar in all simulated GILs. All simulations indicate that ions diffuse by using a brachiation type of movement: a diffusive transition is initiated by cleaving close contacts to a coordinated counterion, after which the ion diffuses only about 2 A until new close contacts are formed with another counterion in its vicinity. The proposed diffusion model links all calculated energetic and dynamic properties of GILs consistently and explains their molecular origin. The validity of the model is confirmed by providing an explanation for the variation of measured ratios of self-diffusion coefficients of cations and paired anions over a wide range of values, encompassing various ionic liquid classes
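
    The self-diffusion coefficients referred to here are conventionally extracted from the mean squared displacement via the Einstein relation, D = lim MSD(t)/(6t); the sketch below applies it to a synthetic random-walk trajectory standing in for unwrapped MD coordinates of a GIL ion.

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic 3D trajectory of one ion (nm), standing in for unwrapped MD coordinates
        DT = 1.0e-3                                     # frame spacing [ns]
        traj = np.cumsum(rng.normal(scale=0.01, size=(10000, 3)), axis=0)

        def msd(traj, max_lag):
            """Time-origin-averaged mean squared displacement as a function of lag."""
            lags = np.arange(1, max_lag)
            return lags, np.array([np.mean(((traj[l:] - traj[:-l]) ** 2).sum(axis=1)) for l in lags])

        lags, m = msd(traj, 1000)
        t = lags * DT

        # Einstein relation: MSD(t) ~ 6 D t in the diffusive regime, so the slope / 6 gives D
        slope = np.polyfit(t[len(t) // 2:], m[len(t) // 2:], 1)[0]
        D = slope / 6.0                                 # nm^2/ns; 1 nm^2/ns = 1e-5 cm^2/s
        print(f"estimated self-diffusion coefficient: {D:.3e} nm^2/ns ({D * 1e-5:.3e} cm^2/s)")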

  11. Fatigue crack initiation in nickel-based superalloys studied by microstructure-based FE modeling and scanning electron microscopy

    Directory of Open Access Journals (Sweden)

    Fried M.

    2014-01-01

    Full Text Available In this work stage I crack initiation in polycrystalline nickel-based superalloys is investigated by analyzing anisotropic mechanical properties, local stress concentrations and plastic deformation on the microstructural length scale. The grain structure in the gauge section of fatigue specimens was characterized by EBSD. Based on the measured data, a microstructure-based FE model could be established to simulate the strain and stress distribution in the specimens during the first loading cycle of a fatigue test. The results were in fairly good agreement with experimentally measured local strains. Furthermore, the onset of plastic deformation was predicted by identifying shear stress maxima in the microstructure, presumably leading to activation of slip systems. Measurement of plastic deformation and observation of slip traces in the respective regions of the microstructure confirmed the predicted slip activity. The close relation between micro-plasticity, formation of slip traces and stage I crack initiation was demonstrated by SEM surface analyses of fatigued specimens and an in-situ fatigue test in a large chamber SEM.

  12. Generalized functional linear models for gene-based case-control association studies.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao

    2014-11-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.

  13. Physiologically based pharmacokinetic toolkit to evaluate environmental exposures: Applications of the dioxin model to study real life exposures

    Energy Technology Data Exchange (ETDEWEB)

    Emond, Claude, E-mail: claude.emond@biosmc.com [BioSimulation Consulting Inc, Newark, DE (United States); Ruiz, Patricia; Mumtaz, Moiz [Division of Toxicology and Human Health Sciences, Agency for Toxic Substances and Disease Registry, Atlanta, GA (United States)

    2017-01-15

    Chlorinated dibenzo-p-dioxins (CDDs) are a series of mono- to octa-chlorinated homologous chemicals commonly referred to as polychlorinated dioxins. One of the most potent, well-known, and persistent member of this family is 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). As part of translational research to make computerized models accessible to health risk assessors, we present a Berkeley Madonna recoded version of the human physiologically based pharmacokinetic (PBPK) model used by the U.S. Environmental Protection Agency (EPA) in the recent dioxin assessment. This model incorporates CYP1A2 induction, which is an important metabolic vector that drives dioxin distribution in the human body, and it uses a variable elimination half-life that is body burden dependent. To evaluate the model accuracy, the recoded model predictions were compared with those of the original published model. The simulations performed with the recoded model matched well with those of the original model. The recoded model was then applied to available data sets of real life exposure studies. The recoded model can describe acute and chronic exposures and can be useful for interpreting human biomonitoring data as part of an overall dioxin and/or dioxin-like compounds risk assessment. - Highlights: • The best available dioxin PBPK model for interpreting human biomonitoring data is presented. • The original PBPK model was recoded from acslX to the Berkeley Madonna (BM) platform. • Comparisons were made of the accuracy of the recoded model with the original model. • The model is a useful addition to the ATSDR's BM based PBPK toolkit that supports risk assessors. • The application of the model to real-life exposure data sets is illustrated.

  14. A study on the radionuclide transport through fractured porous media based on the network resistance model

    International Nuclear Information System (INIS)

    Hwang, Ki Ha

    2000-02-01

    Before the actual construction of a radioactive waste repository, an analysis of radionuclide transport is required to predict the radiological effect on the public and the environment. Many models have been developed to predict realistic radionuclide transport through the repository. In this study, a Network Resistance Model (NRM), analogous to an electrical circuit network, is adopted to simulate radionuclide transport. The NRM treats the media of the repository as resistances to radionuclide transport and describes the transport phenomena by connecting these resistances into a network. The NRM is easy to apply to complex systems and takes less calculation time than other models. The objective of this study is to develop a fast, simple and efficient calculation method to simulate radionuclides with the newly adopted network resistance concept. A new system configuration, specially focused on the rock edge region, is introduced by dividing the rock matrix. By separating the rock edge from the main rock matrix region, the rock edge region can be analyzed and compared more carefully. The rock edge region can accelerate radionuclide transport because it reduces the total resistivity of the rock matrix. Therefore, an increased radioactive dose is expected when the NRM methodology is applied in the performance assessment of the repository, and the results of the performance assessment can be more conservative and reliable. The NRM can be applied to other system configurations and to more complex pathways, and it is simple to use and easier to modify than other modeling methods.
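
    The resistance analogy can be sketched in a few lines: transport resistances combine in series and in parallel exactly like electrical ones, and splitting off a low-resistance rock edge path lowers the total resistance and raises the flux. The resistance values below are arbitrary placeholders, not data from the study.

        import numpy as np

        def series(*r):
            return sum(r)

        def parallel(*r):
            return 1.0 / sum(1.0 / np.asarray(r))

        # Hypothetical transport resistances (arbitrary units) for buffer, rock edge and main rock matrix
        R_BUFFER = 5.0e3
        R_EDGE = 2.0e4          # thin, faster-pathway region near the fracture
        R_MATRIX = 2.0e5        # bulk rock matrix

        # Without a separate edge region the whole rock acts as one resistance in series with the buffer
        R_without_edge = series(R_BUFFER, R_MATRIX)
        # Splitting off the edge region gives two parallel paths through the rock
        R_with_edge = series(R_BUFFER, parallel(R_EDGE, R_MATRIX))

        dC = 1.0                 # driving concentration difference (arbitrary units)
        print("flux without edge region:", dC / R_without_edge)
        print("flux with edge region   :", dC / R_with_edge, "(higher -> faster transport)")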

  15. A concordance-based study to assess doctors’ and nurses’ mental models in Internal Medicine

    Science.gov (United States)

    Chan, K. C. Gary; Muller-Juge, Virginie; Cullati, Stéphane; Hudelson, Patricia; Maître, Fabienne; Vu, Nu V.; Savoldelli, Georges L.; Nendaz, Mathieu R.

    2017-01-01

    Interprofessional collaboration between doctors and nurses is based on team mental models, in particular for each professional’s roles. Our objective was to identify factors influencing concordance on the expectations of doctors’ and nurses’ roles and responsibilities in an Internal Medicine ward. Using a dataset of 196 doctor-nurse pairs (14x14 = 196), we analyzed choices and prioritized management actions of 14 doctors and 14 nurses in six clinical nurse role scenarios, and in five doctor role scenarios (6 options per scenario). In logistic regression models with a non-nested correlation structure, we evaluated concordance among doctors and nurses, and adjusted for potential confounders (including prior experience in Internal Medicine, acuteness of case and gender). Concordance was associated with number of female professionals (adjusted OR 1.32, 95% CI 1.02 to 1.73), for acute situations (adjusted OR 2.02, 95% CI 1.13 to 3.62), and in doctor role scenarios (adjusted OR 2.19, 95% CI 1.32 to 3.65). Prior experience and country of training were not significant predictors of concordance. In conclusion, our concordance-based approach helped us identify areas of lower concordance in expected doctor-nurse roles and responsibilities, particularly in non-acute situations, which can be targeted by future interprofessional, educational interventions. PMID:28792524

  16. A concordance-based study to assess doctors' and nurses' mental models in Internal Medicine.

    Directory of Open Access Journals (Sweden)

    Katherine S Blondon

    Full Text Available Interprofessional collaboration between doctors and nurses is based on team mental models, in particular for each professional's roles. Our objective was to identify factors influencing concordance on the expectations of doctors' and nurses' roles and responsibilities in an Internal Medicine ward. Using a dataset of 196 doctor-nurse pairs (14x14 = 196), we analyzed choices and prioritized management actions of 14 doctors and 14 nurses in six clinical nurse role scenarios, and in five doctor role scenarios (6 options per scenario). In logistic regression models with a non-nested correlation structure, we evaluated concordance among doctors and nurses, and adjusted for potential confounders (including prior experience in Internal Medicine, acuteness of case and gender). Concordance was associated with number of female professionals (adjusted OR 1.32, 95% CI 1.02 to 1.73), for acute situations (adjusted OR 2.02, 95% CI 1.13 to 3.62), and in doctor role scenarios (adjusted OR 2.19, 95% CI 1.32 to 3.65). Prior experience and country of training were not significant predictors of concordance. In conclusion, our concordance-based approach helped us identify areas of lower concordance in expected doctor-nurse roles and responsibilities, particularly in non-acute situations, which can be targeted by future interprofessional, educational interventions.

  17. The Evolution of Network-based Business Models Illustrated Through the Case Study of an Entrepreneurship Project

    Directory of Open Access Journals (Sweden)

    Morten Lund

    2014-08-01

    Full Text Available Purpose: Existing frameworks for understanding and analyzing the value configuration and the structuring of partnerships in such network-based business models are found to be insufficient. The purpose of this paper is therefore to broaden our understanding of how business models may change over time and how the roles of strategic partners may differ over time as well. Design/methodology/approach: A longitudinal case study spanning several years and mobilising multiple qualitative methods, such as interviews, observation and participant observation, forms the basis of the data collection. Findings: This paper illustrates how a network-based business model arises and evolves and how the forces of a network structure shape the development of its partner relationships. The contribution of this article lies in understanding how partners positioned around a business model can be organized into a network-based business model that generates additional value for the core business model, for the partners and for the customers. Research limitations/implications: The results should be taken with caution as they are based on the case study of a single network-based business model. Practical implications: Managers can gain insight into barriers and enablers relating to different types of loose organisations and into how best to manage such relationships and interactions. Originality/value: This study adds value to the existing literature by reflecting on the dynamics created in the interactions between a business model's strategic partners and on how a business model can evolve in a series of distinct phases.

  18. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  19. Performance metric optimization advocates CPFR in supply chains: A system dynamics model based study

    OpenAIRE

    Balaji Janamanchi; James R. Burns

    2016-01-01

    Background: Supply chain partners often find themselves in rather helpless positions, unable to improve their firm's performance and profitability, because their partners, although willing to share production information, do not fully collaborate in tackling customer order variations, as they do not seem to appreciate the benefits of such collaboration. Methods: We use a two-player (supplier-manufacturer) System Dynamics model to study the dynamics and assess the impact and usefulness of supply cha...

  20. Optics Studies for the CERN Proton Synchrotron Machine Linear and Nonlinear Modelling using Beam Based Measurements

    CERN Document Server

    Cappi, R; Martini, M; Métral, Elias; Métral, G; Steerenberg, R; Müller, A S

    2003-01-01

    The CERN Proton Synchrotron machine is built using combined function magnets. The control of the linear tune as well as the chromaticity in both planes is achieved by means of special coils added to the main magnets, namely two pole-face-windings and one figure-of-eight loop. As a result, the overall magnetic field configuration is rather complex not to mention the saturation effects induced at top-energy. For these reasons a linear model of the PS main magnet does not provide sufficient precision to model particle dynamics. On the other hand, a sophisticated optical model is the key element for the foreseen intensity upgrade and, in particular, for the novel extraction mode based on adiabatic capture of beam particles inside stable islands in transverse phase space. A solution was found by performing accurate measurement of the nonlinear tune as a function of both amplitude and momentum offset so to extract both linear and nonlinear properties of the lattice. In this paper the measurement results are present...

  1. Assessment of potential advantages of relevant ions for particle therapy: A model based study

    Energy Technology Data Exchange (ETDEWEB)

    Grün, Rebecca, E-mail: r.gruen@gsi.de [Department of Biophysics, GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt 64291 (Germany); Institute of Medical Physics and Radiation Protection, University of Applied Sciences Gießen, Gießen 35390 (Germany); Medical Faculty of Philipps-University Marburg, Marburg 35032 (Germany); Friedrich, Thomas; Krämer, Michael; Scholz, Michael [Department of Biophysics, GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt 64291 (Germany); Zink, Klemens [Institute of Medical Physics and Radiation Protection, University of Applied Sciences Gießen, Gießen 35390, Germany and Department of Radiotherapy and Radiation Oncology, University Medical Center Giessen and Marburg, Marburg 35043 (Germany); Durante, Marco [Department of Biophysics, GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt 64291, Germany and Department of Condensed Matter Physics, Darmstadt University of Technology, Darmstadt 64289 (Germany); Engenhart-Cabillic, Rita [Medical Faculty of Philipps-University Marburg, Marburg 35032, Germany and Department of Radiotherapy and Radiation Oncology, University Medical Center Giessen and Marburg, Marburg 35043 (Germany)

    2015-02-15

    Purpose: Different ion types offer different physical and biological advantages for therapeutic applications. The purpose of this work is to assess the advantages of the most commonly used ions in particle therapy, i.e., carbon (¹²C), helium (⁴He), and protons (¹H), for different treatment scenarios. Methods: A treatment planning analysis based on idealized target geometries was performed using the treatment planning software TRiP98. For the prediction of the relative biological effectiveness (RBE) that is required for biological optimization in treatment planning, the local effect model (LEM IV) was used. To compare the three ion types, the peak-to-entrance ratio (PER) was determined for the physical dose (PER_PHYS), the RBE (PER_RBE), and the RBE-weighted dose (PER_BIO) resulting for different dose-levels, field configurations, and tissue types. Further, the dose contribution to artificial organs at risk (OAR) was assessed and a comparison of the dose distribution for the different ion types was performed for a patient with chordoma of the skull base. Results: The study showed that the advantages of the ions depend on the physical and biological properties and the interplay of both. In the case of protons, the consideration of a variable RBE instead of the clinically applied generic RBE of 1.1 indicates an advantage in terms of an increased PER_RBE for the analyzed configurations. Due to the fact that protons show a somewhat better PER_PHYS compared to helium and carbon ions whereas helium shows a higher PER_RBE compared to protons, both protons and helium ions show a similar RBE-weighted dose distribution. Carbon ions show the largest variation of the PER_RBE with tissue type and a benefit for radioresistant tumor types due to their higher LET. Furthermore, in the case of a two-field irradiation, an additional gain in terms of PER_BIO is observed when using an orthogonal field configuration.

  2. CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models

    Science.gov (United States)

    Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

    2011-02-01

    Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebrovasculature and is the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, caused by blockage of an artery in the brain by thrombosis or arterial embolism. Hence, the development of an imaging technique to image or monitor cerebral ischemia and the effect of anti-stroke therapy is more than necessary. Near infrared (NIR) optical tomography has great potential to be utilized as a non-invasive imaging tool (due to its low cost and portability) to image embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebro-vascular hemodynamics and brain injury. Compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for pre-clinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions based on tissue-phantom experimental data. We are then able to obtain good reconstructed images with two recently developed algorithms: (1) a depth compensation algorithm (DCA) and (2) a globally convergent method (GCM). We will demonstrate the volumetric tomographic reconstruction results obtained from the tissue phantom; the approach has great potential to determine and monitor the effect of anti-stroke therapies.

  3. Building a semantic web-based metadata repository for facilitating detailed clinical modeling in cancer genome studies.

    Science.gov (United States)

    Sharma, Deepak K; Solbrig, Harold R; Tao, Cui; Weng, Chunhua; Chute, Christopher G; Jiang, Guoqian

    2017-06-05

    Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capturing and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of the study is to develop and evaluate a cancer genome study metadata management system that serves as a key infrastructure in supporting clinical information modeling in cancer genome study domains. We leveraged a Semantic Web-based metadata repository enhanced with both ISO11179 metadata standard and Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model is used to represent reusable model elements (mini-Archetypes). We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-Archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated enriched data elements in the metadata repository are very useful in support of building detailed clinical models. Our informatics approach leveraging Semantic Web technologies provides an effective way to build a CIMI-compliant metadata repository that would facilitate the detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.

  4. Feasibility study on a microwave-based sensor for measuring hydration level using human skin models

    OpenAIRE

    Brendtke, R.; Wiehl, M.; Groeber, F.; Schwarz, T.; Walles, H.; Hansmann, J.

    2016-01-01

    Tissue dehydration results in three major types of exsiccosis--hyper-, hypo-, or isonatraemia. All three types entail alterations of salt concentrations leading to impaired biochemical processes, and can finally cause severe morbidity. The aim of our study was to demonstrate the feasibility of a microwave-based sensor technology for the non-invasive measurement of the hydration status. Electromagnetic waves at high frequencies interact with molecules, especially water. Hence, if a sample cont...

  5. Training Post-9/11 Police Officers with a Counter-Terrorism Reality-Based Training Model: A Case Study

    Science.gov (United States)

    Biddle, Christopher J.

    2013-01-01

    The purpose of this qualitative holistic multiple-case study was to identify the optimal theoretical approach for a Counter-Terrorism Reality-Based Training (CTRBT) model to train post-9/11 police officers to perform effectively in their counter-terrorism assignments. Post-9/11 police officers assigned to counter-terrorism duties are not trained…

  6. A Comparative Study on Satellite- and Model-Based Crop Phenology in West Africa

    Directory of Open Access Journals (Sweden)

    Elodie Vintrou

    2014-02-01

    Full Text Available Crop phenology is essential for evaluating crop production in the food insecure regions of West Africa. The aim of the paper is to study whether satellite observations of plant phenology are consistent with ground knowledge of crop cycles as expressed in agro-simulations. We used phenological variables from a MODIS Land Cover Dynamics (MCD12Q2) product and examined whether they reproduced the spatio-temporal variability of crop phenological stages in Southern Mali. Furthermore, a validated cereal crop growth model for this region, SARRA-H (System for Regional Analysis of Agro-Climatic Risks), provided precise agronomic information. Remotely-sensed green-up, maturity, senescence and dormancy MODIS dates were extracted for areas previously identified as crops and were compared with simulated leaf area index (LAI) temporal profiles generated using the SARRA-H crop model, which considered the main cropping practices. We studied both spatial (eight sites throughout South Mali during 2007) and temporal (two sites from 2002 to 2008) differences between simulated crop cycles and determined how the differences were indicated in satellite-derived phenometrics. The spatial comparison of the phenological indicator observations and simulations showed mainly that (i) the satellite-derived start-of-season (SOS) was detected approximately 30 days before the model-derived SOS; and (ii) the satellite-derived end-of-season (EOS) was typically detected 40 days after the model-derived EOS. Studying the inter-annual difference, we verified that the mean bias was globally consistent for different climatic conditions. Therefore, the land cover dynamics derived from the MODIS time series can reproduce the spatial and temporal variability of different start-of-season and end-of-season crop species. In particular, we recommend simultaneously using start-of-season phenometrics with crop models for yield forecasting to complement commonly used climate data and provide a better
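    For readers unfamiliar with phenometrics, the sketch below shows one simple way a start- and end-of-season date can be pulled from a seasonal LAI or greenness profile with an amplitude threshold; it is an illustration only, not the retrieval algorithm behind MCD12Q2 or the SARRA-H comparison, and the profile is synthetic.

        # Illustrative sketch: detect start- and end-of-season day indices from a
        # seasonal LAI (or greenness) profile using a simple amplitude threshold.
        import numpy as np

        def season_dates(lai, frac=0.2):
            """Return (start, end) day indices where the profile first/last exceeds
            a fraction of its seasonal amplitude above the minimum."""
            lai = np.asarray(lai, dtype=float)
            threshold = lai.min() + frac * (lai.max() - lai.min())
            above = np.where(lai >= threshold)[0]
            return int(above[0]), int(above[-1])

        # Toy profile: one growing season sampled daily.
        days = np.arange(365)
        lai = np.clip(np.sin((days - 150) / 120 * np.pi), 0, None) * 3.0
        print(season_dates(lai))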

  7. Modeling study of droplet behavior during blowdown period of large break LOCA based on experimental data

    International Nuclear Information System (INIS)

    Sakaba, Hiroshi; Umezawa, Shigemitsu; Teramae, Tetsuya; Furukawa, Yuji

    2004-01-01

    During a LOCA (Loss Of Coolant Accident) in a PWR, droplet behavior during the blowdown period is one of the important phenomena. For example, spattering from the falling liquid film that flows from the upper plenum generates droplets in the core region. The behavior of droplets in such flow has a strong effect on cladding temperature behavior, because these droplets are able to remove heat from the reactor core by direct contact with the fuel rods and evaporation at their surface. For safety analysis of LOCA in PWR, it is necessary to evaluate the droplet diameter precisely in order to predict the fuel cladding temperature change with the calculation code. Based on the test results, a new droplet behavior model was developed for the MCOBRA/TRAC code, which predicts the droplet behavior during such LOCA events. Furthermore, verification calculations that simulated some blowdown tests were performed using the MCOBRA/TRAC code. These results indicated the validity of this droplet model during the blowdown cooling period. The experiment was focused on investigating the Weber number of steady droplets in the blowdown phenomenon of a large break LOCA. (author)

  8. An Exploratory Study of the Butterfly Effect Using Agent-Based Modeling

    Science.gov (United States)

    Khasawneh, Mahmoud T.; Zhang, Jun; Shearer, Nevan E. N.; Rodriquez-Velasquez, Elkin; Bowling, Shannon R.

    2010-01-01

    This paper provides insights about the behavior of chaotic complex systems and the sensitive dependence of the system on the initial starting conditions. How much does a small change in the initial conditions of a complex system affect it in the long term? Do complex systems exhibit what is called the "Butterfly Effect"? This paper uses an agent-based modeling approach to address these questions. An existing model from the NetLogo library was extended in order to compare chaotic complex systems with near-identical initial conditions. Results show that small changes in initial starting conditions can have a huge impact on the behavior of chaotic complex systems. The term "butterfly effect" is attributed to the work of Edward Lorenz [1]. It is used to describe the sensitive dependence of the behavior of chaotic complex systems on the initial conditions of these systems. The metaphor refers to the notion that a butterfly flapping its wings somewhere may cause extreme changes in the ecological system's behavior in the future, such as a hurricane.
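    The sensitivity the authors describe can be reproduced with any chaotic system; the toy sketch below (a logistic map, used purely as a stand-in for the paper's NetLogo model) shows how a perturbation of one part in a million in the initial condition grows to an order-one difference within a few dozen iterations.

        # Illustrative stand-in, not the NetLogo model from the paper: two logistic
        # map trajectories with near-identical initial conditions diverge quickly.
        def logistic_trajectory(x0, r=3.9, steps=50):
            xs = [x0]
            for _ in range(steps):
                xs.append(r * xs[-1] * (1.0 - xs[-1]))
            return xs

        a = logistic_trajectory(0.200000)
        b = logistic_trajectory(0.200001)      # perturbed by one part in a million
        for t in (0, 10, 20, 30, 40, 50):
            print(t, abs(a[t] - b[t]))         # difference grows to order one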

  9. GPR Imaging for Deeply Buried Objects: A Comparative Study Based on FDTD Models and Field Experiments

    Science.gov (United States)

    Tilley, Roger; Dowla, Farid; Nekoogar, Faranak; Sadjadpour, Hamid

    2012-01-01

    Conventional use of Ground Penetrating Radar (GPR) is hampered by variations in background environmental conditions, such as water content in soil, resulting in poor repeatability of results over long periods of time when the radar pulse characteristics are kept the same. Target object types might include voids, tunnels, unexploded ordnance, etc. The long-term objective of this work is to develop methods that would extend the use of GPR under various environmental and soil conditions, provided an optimal set of radar parameters (such as frequency, bandwidth, and sensor configuration) is adaptively employed based on the ground conditions. Towards that objective, developing Finite Difference Time Domain (FDTD) GPR models, verified by experimental results, would allow us to develop analytical and experimental techniques to control radar parameters to obtain consistent GPR images with changing ground conditions. Reported here is an attempt at developing 2D and 3D FDTD models of buried targets verified by two different radar systems capable of operating over different soil conditions. Experimental radar data employed were from a custom-designed high-frequency (200 MHz) multi-static sensor platform capable of producing 3-D images, and a longer-wavelength (25 MHz) COTS radar (Pulse EKKO 100) capable of producing 2-D images. Our results indicate that different types of radar can produce consistent images.
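    At the core of any FDTD model is the leapfrog update of electric and magnetic fields on a staggered grid. The sketch below is a minimal one-dimensional free-space example in normalized units, included only to illustrate that update; the 2D/3D GPR models in the study additionally handle soil permittivity and conductivity, antenna geometry and absorbing boundaries.

        # Minimal 1D FDTD sketch in normalized units (free space, Courant number 0.5).
        import numpy as np

        nz, nt = 200, 300
        ez = np.zeros(nz)                        # electric field samples
        hy = np.zeros(nz)                        # magnetic field samples
        for t in range(nt):
            hy[:-1] += 0.5 * (ez[1:] - ez[:-1])  # update H from the spatial change of E
            ez[1:]  += 0.5 * (hy[1:] - hy[:-1])  # update E from the spatial change of H
            ez[nz // 4] += np.exp(-0.5 * ((t - 30) / 8.0) ** 2)  # soft Gaussian source
        print(float(ez.max()))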

  10. Low contrast detectability and spatial resolution with model-based iterative reconstructions of MDCT images: a phantom and cadaveric study

    Energy Technology Data Exchange (ETDEWEB)

    Millon, Domitille; Coche, Emmanuel E. [Universite Catholique de Louvain, Department of Radiology and Medical Imaging, Cliniques Universitaires Saint Luc, Brussels (Belgium); Vlassenbroek, Alain [Philips Healthcare, Brussels (Belgium); Maanen, Aline G. van; Cambier, Samantha E. [Universite Catholique de Louvain, Statistics Unit, King Albert II Cancer Institute, Brussels (Belgium)

    2017-03-15

    To compare image quality [low contrast (LC) detectability, noise, contrast-to-noise ratio (CNR) and spatial resolution (SR)] of MDCT images reconstructed with an iterative reconstruction (IR) algorithm and a filtered back projection (FBP) algorithm. The experimental study was performed on a 256-slice MDCT. LC detectability, noise, CNR and SR were measured on a Catphan phantom scanned with decreasing doses (48.8 down to 0.7 mGy) and parameters typical of a chest CT examination. Images were reconstructed with FBP and a model-based IR algorithm. Additionally, human chest cadavers were scanned and reconstructed using the same technical parameters. Images were analyzed to illustrate the phantom results. LC detectability and noise were statistically significantly different between the techniques, favoring the model-based IR algorithm (p < 0.0001). At low doses, the noise in FBP images only enabled SR measurements of high contrast objects. The superior CNR of the model-based IR algorithm enabled lower dose measurements, which showed that SR was dose and contrast dependent. Cadaver images reconstructed with model-based IR illustrated that visibility and delineation of anatomical structure edges could deteriorate at low doses. Model-based IR improved LC detectability and enabled dose reduction. At low dose, SR became dose and contrast dependent. (orig.)

  11. The study of infrared target recognition at sea background based on visual attention computational model

    Science.gov (United States)

    Wang, Deng-wei; Zhang, Tian-xu; Shi, Wen-jun; Wei, Long-sheng; Wang, Xiao-ping; Ao, Guo-qing

    2009-07-01

    Infrared images against a sea background are notorious for their low signal-to-noise ratio; therefore, target recognition in such infrared images with traditional methods is very difficult. In this paper, we present a novel target recognition method based on the integration of a visual attention computational model and a conventional approach (selective filtering and segmentation). The two distinct image processing techniques are combined in a manner that utilizes the strengths of both. The visual attention algorithm searches the salient regions automatically and represents them by a set of winner points; at the same time, it marks the salient regions as circles centered at these winner points. This provides a priori knowledge for the filtering and segmentation process. Based on each winner point, we construct a rectangular region to facilitate the filtering and segmentation; the labeling operation is then added selectively as required. Making use of the labeled information, we obtain the positional information of the region of interest from the final segmentation result, label the centroid on the corresponding original image, and complete the localization of the target. The processing time depends not on the size of the image but on the salient regions; therefore, the time consumed is greatly reduced. The method is applied to the recognition of several kinds of real infrared images, and the experimental results reveal the effectiveness of the algorithm presented in this paper.

  12. A conflict-based model of color categorical perception: evidence from a priming study.

    Science.gov (United States)

    Hu, Zhonghua; Hanley, J Richard; Zhang, Ruiling; Liu, Qiang; Roberson, Debi

    2014-10-01

    Categorical perception (CP) of color manifests as faster or more accurate discrimination of two shades of color that straddle a category boundary (e.g., one blue and one green) than of two shades from within the same category (e.g., two different shades of green), even when the differences between the pairs of colors are equated according to some objective metric. The results of two experiments provide new evidence for a conflict-based account of this effect, in which CP is caused by competition between visual and verbal/categorical codes on within-category trials. According to this view, conflict arises because the verbal code indicates that the two colors are the same, whereas the visual code indicates that they are different. In Experiment 1, two shades from the same color category were discriminated significantly faster when the previous trial also comprised a pair of within-category colors than when the previous trial comprised a pair from two different color categories. Under the former circumstances, the CP effect disappeared. According to the conflict-based model, response conflict between visual and categorical codes during discrimination of within-category pairs produced an adjustment of cognitive control that reduced the weight given to the categorical code relative to the visual code on the subsequent trial. Consequently, responses on within-category trials were facilitated, and CP effects were reduced. The effectiveness of this conflict-based account was evaluated in comparison with an alternative view that CP reflects temporary warping of perceptual space at the boundaries between color categories.

  13. A DISTANCE EDUCATION MODEL FOR JORDANIAN STUDENTS BASED ON AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    Ahmad SHAHER MASHHOUR

    2007-04-01

    Full Text Available Distance education is expanding worldwide. Numbers of students enrolled in distance education are increasing at very high rates. Distance education is said to be the future of education because it addresses the educational needs of the new millennium. This paper presents the findings of an empirical study on a sample of Jordanian distance education students, organized into a requirements model that addresses the needs of such education at the national level. The responses of the sample show that distance education offers a viable and satisfactory alternative to those who cannot enroll in regular residential education. The study also shows that the shortcomings of the regular and the current form of distance education in Jordan can be overcome by the use of modern information technology.

  14. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

    Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this; if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.
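    The compression argument above can be made concrete with a toy example: rather than enumerating the 2^n subsets of n data points, each point is tested against a small set of candidate models, points compatible with the same model are represented by that model, and points fitting no model flag unusual activity. The sketch below is illustrative only and is not the Advanced Decision Systems reasoner.

        # Toy illustration: represent data points by the candidate models they fit,
        # instead of enumerating every subset of points. Points fitting no model are
        # flagged as "unusual".
        def group_by_models(points, models, tol=0.5):
            """points: list of (t, y); models: list of (slope, intercept) lines."""
            groups = {m: [] for m in models}
            unusual = []
            for t, y in points:
                fits = [m for m in models if abs(m[0] * t + m[1] - y) <= tol]
                if fits:
                    groups[fits[0]].append((t, y))
                else:
                    unusual.append((t, y))
            return groups, unusual

        pts = [(0, 0.1), (1, 1.0), (2, 2.1), (3, 8.0)]      # last point is anomalous
        print(group_by_models(pts, [(1.0, 0.0), (0.0, 5.0)]))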

  15. Fog Simulations Based on Multi-Model System: A Feasibility Study

    Science.gov (United States)

    Shi, Chune; Wang, Lei; Zhang, Hao; Zhang, Su; Deng, Xueliang; Li, Yaosun; Qiu, Mingyan

    2012-05-01

    Accurate forecasts of fog and visibility are very important to air and highway traffic and are still a big challenge. A 1D fog model (PAFOG) is coupled to MM5 by obtaining the initial and boundary conditions (IC/BC) and some other necessary input parameters from MM5. Thus, PAFOG can be run for any area of interest. On the other hand, MM5 itself can be used to simulate fog events over a large domain. This paper presents evaluations of the fog predictability of these two systems for December of 2006 and December of 2007, with nine regional fog events observed in a field experiment, as well as over a large domain in eastern China. Among the simulations of the nine fog events by the two systems, two cases were investigated in detail. Daily results of ground level meteorology were validated against the routine observations at the CMA observational network. Daily fog occurrences for the two study periods were validated in Nanjing. The general performance of the two models for the nine fog cases is presented by comparing with routine and field observational data. The results of MM5 and PAFOG for two typical fog cases are verified in detail against field observations. The verifications demonstrated that all methods tended to overestimate fog occurrence, especially for near-fog cases. In terms of TS/ETS, the LWC-only threshold with MM5 showed the best performance, while PAFOG showed the worst. MM5 performed better for advection-radiation fog than for radiation fog, and PAFOG could be an alternative tool for forecasting radiation fogs. PAFOG did show advantages over MM5 on the fog dissipation time. The performance of PAFOG highly depended on the quality of the MM5 output. The sensitivity runs of PAFOG with different IC/BC showed the capability of using MM5 output to run the 1D model and the high sensitivity of PAFOG to cloud cover. Future work should intensify the study of how to improve the quality of input data (e.g. cloud cover, advection, large scale subsidence) for the 1D
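    For reference, the threat score (TS) and equitable threat score (ETS) cited above are standard categorical verification measures computed from a yes/no contingency table of forecast versus observed fog; the short function below gives their usual definitions (the counts are made-up numbers, not results from the study).

        # Standard categorical verification scores from a 2x2 contingency table.
        def threat_scores(hits, misses, false_alarms, correct_negatives):
            total = hits + misses + false_alarms + correct_negatives
            ts = hits / (hits + misses + false_alarms)                 # threat score (CSI)
            hits_random = (hits + misses) * (hits + false_alarms) / total
            ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
            return ts, ets

        print(threat_scores(hits=20, misses=10, false_alarms=15, correct_negatives=200))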

  16. Empirically Based Composite Fracture Prediction Model From the Global Longitudinal Study of Osteoporosis in Postmenopausal Women (GLOW)

    Science.gov (United States)

    Compston, Juliet E.; Chapurlat, Roland D.; Pfeilschifter, Johannes; Cooper, Cyrus; Hosmer, David W.; Adachi, Jonathan D.; Anderson, Frederick A.; Díez-Pérez, Adolfo; Greenspan, Susan L.; Netelenbos, J. Coen; Nieves, Jeri W.; Rossini, Maurizio; Watts, Nelson B.; Hooven, Frederick H.; LaCroix, Andrea Z.; March, Lyn; Roux, Christian; Saag, Kenneth G.; Siris, Ethel S.; Silverman, Stuart; Gehlbach, Stephen H.

    2014-01-01

    Context: Several fracture prediction models that combine fractures at different sites into a composite outcome are in current use. However, to the extent individual fracture sites have differing risk factor profiles, model discrimination is impaired. Objective: The objective of the study was to improve model discrimination by developing a 5-year composite fracture prediction model for fracture sites that display similar risk profiles. Design: This was a prospective, observational cohort study. Setting: The study was conducted at primary care practices in 10 countries. Patients: Women aged 55 years or older participated in the study. Intervention: Self-administered questionnaires collected data on patient characteristics, fracture risk factors, and previous fractures. Main Outcome Measure: The main outcome is time to first clinical fracture of hip, pelvis, upper leg, clavicle, or spine, each of which exhibits a strong association with advanced age. Results: Of four composite fracture models considered, model discrimination (c index) is highest for an age-related fracture model (c index of 0.75, 47 066 women), and lowest for Fracture Risk Assessment Tool (FRAX) major fracture and a 10-site model (c indices of 0.67 and 0.65). The unadjusted increase in fracture risk for an additional 10 years of age ranges from 80% to 180% for the individual bones in the age-associated model. Five other fracture sites not considered for the age-associated model (upper arm/shoulder, rib, wrist, lower leg, and ankle) have age associations for an additional 10 years of age from a 10% decrease to a 60% increase. Conclusions: After examining results for 10 different bone fracture sites, advanced age appeared the single best possibility for uniting several different sites, resulting in an empirically based composite fracture risk model. PMID:24423345

  17. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less well-studied application for computer-based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study where affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  18. Striving for evidence-based practice innovations through a hybrid model journal club: A pilot study.

    Science.gov (United States)

    Wilson, Marian; Ice, Suzanna; Nakashima, Cathy Y; Cox, Lynn Annette; Morse, Elizabeth C; Philip, Ginu; Vuong, Ellen

    2015-05-01

    The purpose of this study was to pilot a "hybrid" style journal club and determine whether measurable effects could be detected over 8 weeks' time on evidence-based practice ability, desire, behaviors, use, and barriers. Journal clubs have been suggested as a method to increase nurses' confidence with using research evidence to guide practice. However, it is yet unknown how nurse educators can best implement effective programs for clinicians with varying schedules, education levels, and research skills. Thirty-six participants from one large urban United States hospital (72% registered nurses) were invited to access bi-weekly interdisciplinary journal club activities. Nurse educators created curriculum focused on clinical problem solving that was offered via in-person sessions or a social media site. A pretest-posttest no control group design was used to measure impacts of those engaged in journal club activities. Data were collected using a combination of validated evidence-based practice instruments and program participation records. A two-tailed paired t test showed significant increases over 8 weeks' time in evidence-based practice use (p=.002) and behaviors (p=.007). A slight preference for in-person sessions was reported, although greater participation was reflected in online activities. Mean satisfaction ratings were high; however, attrition rates suggest that more is needed to maximize clinician engagement. A hybrid method using online and in-person sessions was feasible and adaptive for varying learning styles and work schedules. Positive changes in measurements were detected among journal club participants. Instruments were identified that may be useful for trialing similar programs intended to increase evidence-based practice self-efficacy, use, behaviors, and ability. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Pt based PEMFC catalysts prepared from colloidal particle suspensions--a toolbox for model studies.

    Science.gov (United States)

    Speder, Jozsef; Altmann, Lena; Roefzaad, Melanie; Bäumer, Marcus; Kirkensgaard, Jacob J K; Mortensen, Kell; Arenz, Matthias

    2013-03-14

    A colloidal synthesis approach is presented that allows systematic studies of the properties of supported proton exchange membrane fuel cell (PEMFC) catalysts. The applied synthesis route is based on the preparation of monodisperse nanoparticles in the absence of strong binding organic stabilizing agents. No temperature post-treatment of the catalyst is required rendering the synthesis route ideally suitable for comparative studies. We report work concerning a series of catalysts based on the same colloidal Pt nanoparticle (NP) suspension, but with different high surface area (HSA) carbon supports. It is shown that for the prepared catalysts the carbon support has no catalytic co-function, but carbon pre-treatment leads to enhanced sticking of the Pt NPs on the support. An unwanted side effect, however, is NP agglomeration during synthesis. By contrast, enhanced NP sticking without agglomeration can be accomplished by the addition of an ionomer to the NP suspension. The catalytic activity of the prepared catalysts for the oxygen reduction reaction is comparable to industrial catalysts and no influence of the particle size is found in the range of 2-5 nm.

  20. [Study on ecological suitability regionalization of Corni Fructus based on Maxent and ArcGIS model].

    Science.gov (United States)

    Zhang, Fei; Chen, Sui-Qing; Wang, Li-Li; Zhang, Tao; Zhang, Xiao-Bo; Zhu, Shou-Dong

    2017-08-01

    Planting regionalization provides the scientific basis for predicting suitable planting areas for high-quality medicinal materials. Through interview investigation and field survey, the distribution information of Corni Fructus in China was collected, and 89 sampling points from 14 producing areas were obtained. The climate and topography of Corni Fructus habitats were analyzed, and an ecological suitability study was conducted based on ArcGIS and Maxent. Suitability grades for potential areas and a regionalization map were formulated. Nine ecological factors affect the growth of Corni Fructus, for example precipitation in November and March and vegetation type. The results showed that the most suitable habitats are in Henan, Shaanxi, Zhejiang, Chongqing, Hubei, Sichuan, Anhui, Hunan and Shandong provinces. Using the spatial analysis method, the study not only illustrates the most suitable growing environments for Corni Fructus, but also provides a scientific reference for wild resource tending, introduction and cultivation, artificial planting bases, and directing production layout. Copyright© by the Chinese Pharmaceutical Association.

  1. Study on heat transfer and hydraulic model of spiral-fin fuel rods based on equivalent annulus method

    International Nuclear Information System (INIS)

    Zhang Dan; Liu Changwen; Lu Jianchao

    2011-01-01

    Tight lattice fuel assemblies usually adopt spiral-fin fuel elements. Compared with traditional PWR fuel rods, the closely packed rods and spiral fin spacers make the heat transfer and hydraulic phenomena in sub-channels very complicated, and there was no suitable model or correlation to study them. This paper studied the effect of the spiral spacers on the channel geometry and physical performance in the equivalent annulus based on the Rehme equivalent annulus method, and heat transfer and hydraulic models for spiral-fin fuel rods were obtained. The new models were compared with the traditional ones, and the comparison showed that they agreed well, which can provide a theoretical explanation of the effect of the spiral spacer on the thermal hydraulics. (authors)

  2. An ecological model to factors associated with booster seat use: A population based study.

    Science.gov (United States)

    Shimony Kanat, Sarit; Gofin, Rosa

    2017-11-01

    Belt-positioning booster (BPB) seat use is an effective technology for preventing severe child injury in the event of a car crash. However, in many countries, age-appropriate car restraint use for children aged 4-7 years old remains the lowest among all age groups. The aim of this study was to identify the main determinants of BPB use through a comprehensive approach. An ecological model was used to analyze individual, parent-child relationship, and neighborhood characteristics. Parents of children enrolled in the first and second grades completed a self-reported questionnaire (n=745). The data were subjected to multilevel modeling. The first level examined individual and parent-child relationship variables; the second level tested between-neighborhood variance. According to parental self-reports, 56.6% of their children had used a BPB on each car trip during the previous month. The results indicated that the determinants positively related to BPB use were individual and parental; namely, the number of children in the family, the parents' car seat belt use, parental knowledge of children's car safety principles, and a highly authoritative parenting style. Children's temperaments and parental supervision were not associated with BPB use. At the neighborhood level, a small difference was found between neighborhoods for BPB users compared to non-users. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Modeling and cellular studies

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    Testing the applicability of mathematical models with carefully designed experiments is a powerful tool in the investigations of the effects of ionizing radiation on cells. The modeling and cellular studies complement each other, for modeling provides guidance for designing critical experiments which must provide definitive results, while the experiments themselves provide new input to the model. Based on previous experimental results the model for the accumulation of damage in Chlamydomonas reinhardi has been extended to include various multiple two-event combinations. Split dose survival experiments have shown that models tested to date predict most but not all the observed behavior. Stationary-phase mammalian cells, required for tests of other aspects of the model, have been shown to be at different points in the cell cycle depending on how they were forced to stop proliferating. These cultures also demonstrate different capacities for repair of sublethal radiation damage

  4. Study on Apparent Kinetic Prediction Model of the Smelting Reduction Based on the Time-Series

    Directory of Open Access Journals (Sweden)

    Guo-feng Fan

    2012-01-01

    Full Text Available A series of direct smelting reduction experiments was carried out with high-phosphorus iron ores of different basicity using a thermogravimetric analyzer. The derivative thermogravimetric (DTG) data were obtained from the experiments. The one-step-forward local weighted linear (LWL) method, one of the most suitable chaotic time-series prediction methods focusing on the errors, is used to predict the DTG. In the meanwhile, empirical mode decomposition-autoregressive (EMD-AR) modeling, a data mining technique in signal processing, is also used to predict the DTG. The results show that (1) EMD-AR(4) is the most appropriate and its error is smaller than that of the former; (2) the root mean square error (RMSE) has decreased by about two-thirds; (3) the standardized root mean square error (NMSE) has decreased by an order of magnitude. Finally, in this paper the EMD-AR method has been improved by golden section weighting, and its error becomes smaller than before. Therefore, the improved EMD-AR model is a promising alternative for predicting the apparent reaction rate (DTG). The analytical results are an important reference in the field of industrial control.
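    As a point of reference for the error figures quoted above, the sketch below computes RMSE, one common normalization of it (NMSE), and a plain least-squares AR(4) one-step prediction on a synthetic DTG-like curve; the empirical mode decomposition and local weighted linear steps used in the paper are omitted, and the data are made up.

        # RMSE, NMSE and a simple least-squares AR(4) one-step predictor.
        import numpy as np

        def rmse(y, yhat):
            return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2)))

        def nmse(y, yhat):
            y = np.asarray(y, dtype=float)
            return float(np.mean((y - yhat) ** 2) / np.var(y))  # one common normalization

        def ar_one_step(series, p=4):
            x = np.asarray(series, dtype=float)
            X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
            coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
            return float(x[-p:][::-1] @ coef)      # prediction of the next sample

        t = np.linspace(0, 6, 200)
        dtg = np.exp(-t) * np.sin(3 * t)           # synthetic DTG-like curve
        naive = dtg[:-1]                           # persistence forecast as a baseline
        print(rmse(dtg[1:], naive), nmse(dtg[1:], naive), ar_one_step(dtg))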

  5. Study on the hydrogen demand in China based on system dynamics model

    International Nuclear Information System (INIS)

    Ma, Tao; Ji, Jie; Chen, Ming-qi

    2010-01-01

    Reasonable estimation of the hydrogen energy and other renewable energy demand in China's medium and long-term energy mix is of great significance for China's medium and long-term energy planning. Therefore, based on China's future economic development as well as relevant economic theory and system dynamics theory, this article qualitatively analyzes the internal and external factors of the hydrogen energy demand system, and makes high and low assumptions about China's medium and long-term hydrogen demand according to different speeds of China's economic development. After the system dynamics model is set up and run, the output shows the changes in total hydrogen demand and in the four kinds of hydrogen demand. From the analysis of the output, two conclusions are drawn: hydrogen R and D and government policy support should first satisfy the secondary industry, not the tertiary industry (mainly transportation); and the change in the scale of Chinese hydrogen demand, although based on economic growth, cannot be effectively explained by the Chinese economic growth rate alone, so other influencing factors and mechanisms should be probed more deeply. (author)

  6. Model-Based Reasoning

    Science.gov (United States)

    Ifenthaler, Dirk; Seel, Norbert M.

    2013-01-01

    In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…

  7. Motor circuit computer model based on studies of functional Nuclear Magnetic Resonance

    International Nuclear Information System (INIS)

    Garcia Ramo, Karla Batista; Rodriguez Rojas, Rafael; Carballo Barreda, Maylen

    2012-01-01

    The basal ganglia are a complex network of subcortical nuclei involved in motor control, sensorimotor integration, and cognitive processes. Their functioning and interaction with other cerebral structures remain a subject of debate. The aim of the present work was to simulate the basal ganglia-thalamus-cortex circuit interaction in motor program selection, supported by the functional connectivity pattern obtained by functional magnetic resonance imaging. Determination of the connection weights between neural populations from functional magnetic resonance imaging contributed to a more realistic formulation of the model and, consequently, to results similar to clinical and experimental data. The network made it possible to describe the participation of the basal ganglia in motor program selection and the changes observed in Parkinson disease. The simulation demonstrated that dopamine depletion above 40% leads to a loss of action selection capability, and reflected the ability of the system to adapt and compensate for dysfunction in Parkinson disease, consistent with experimental and clinical studies

  8. Experimental transmission electron microscopy studies and phenomenological model of bismuth-based superconducting compounds

    International Nuclear Information System (INIS)

    Elboussiri, Khalid

    1991-01-01

    The main part of this thesis is devoted to an experimental study by transmission electron microscopy of the different phases of the superconducting bismuth cuprates Bi_2Sr_2Ca_(n-1)Cu_nO_(2n+4). In high resolution electron microscopy, the two types of incommensurate modulation realized in these compounds have been observed. A structural model has been proposed whose simulated images are consistent with the observations. The medium resolution images, correlated with the electron diffraction data, have revealed the existence of a multi-soliton regime with latent lock-in phases of commensurate periods between 4b and 10b. Finally, a description of the different phases of these compounds as superstructures derived from a disordered perovskite-type structure is proposed (author) [fr

  9. Study on blast furnace cooling stave for various refractory linings based on numerical modeling

    International Nuclear Information System (INIS)

    Mohanty, T R; Sahoo, S K; Moharana, M K

    2016-01-01

    Cooling technology for the refractory lining of blast furnaces is very important for the metallurgical industry, because it can substantially increase the output and operating life of furnaces. A three dimensional mathematical model for the temperature field of a blast furnace stave cooler with refractory lining has been developed and analyzed. The temperature of and heat dissipated by the stave cooler are examined using the finite element method. A cast steel stave is studied, and a computational analysis is made to determine the effect of the cooling water velocity, the water temperature, and the lining material on the maximum temperature of the stave hot surface. The refractory lining material used in this experiment is high alumina brick, combined with different stave materials (copper, aluminum and cast iron). The numerical calculations are compared with those obtained from experiments performed at Rourkela Steel Plant, Odisha, on a stave in the belly zone with the maximum heat load, and show very good agreement. (paper)

  10. Study on driver model for hybrid truck based on driving simulator experimental results

    Directory of Open Access Journals (Sweden)

    Dam Hoang Phuc

    2018-04-01

    Full Text Available In this paper, a proposed car-following driver model that takes into account features of both the compensatory and the anticipatory models of human pedal operation is verified by driving simulator experiments with several real drivers. The comparison of computer simulations performed with the identified model parameters against the experimental results confirms the correctness of this mathematical driver model and of the identified parameters. The driver model is then joined to a hybrid vehicle dynamics model, and moderate car-following maneuver simulations with various driver parameters are conducted to investigate the influence of driver parameters on vehicle dynamic response and fuel economy. Finally, the major driver parameters involved in the longitudinal control of drivers are clarified. Keywords: Driver model, Driver-vehicle closed-loop system, Car Following, Driving simulator/hybrid electric vehicle (B1)
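    To make the compensatory/anticipatory distinction concrete, the sketch below shows a minimal pedal-command law combining feedback on spacing and relative-speed errors with a feedforward term on the lead vehicle's acceleration. The structure and gains are illustrative assumptions, not the identified parameters or the exact model from the paper.

        # Minimal car-following pedal command: compensatory feedback plus a simple
        # anticipatory feedforward term on the lead vehicle's acceleration.
        def pedal_command(gap, gap_des, v_rel, a_lead, kp=0.3, kd=0.6, kff=0.4):
            compensatory = kp * (gap - gap_des) + kd * v_rel   # react to current errors
            anticipatory = kff * a_lead                        # anticipate the lead vehicle
            return compensatory + anticipatory                 # >0 throttle, <0 braking

        print(pedal_command(gap=18.0, gap_des=20.0, v_rel=-1.0, a_lead=-0.5))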

  11. Optical Properties of the Urban Aerosol Particles Obtained from Ground Based Measurements and Satellite-Based Modelling Studies

    Directory of Open Access Journals (Sweden)

    Genrik Mordas

    2015-01-01

    Full Text Available Satellite remote sensing data combined with ground measurements and model simulation were applied to study aerosol optical properties as well as aerosol long-range transport under the impact of large scale circulation in an urban environment in Lithuania (Vilnius). Measurements included the light scattering coefficients at 3 wavelengths (450, 550, and 700 nm) measured with an integrating nephelometer, and the aerosol particle size distribution (0.5–12 μm) and number concentration (Dpa > 0.5 μm) registered by an aerodynamic particle sizer. Particle number concentration and mean light scattering coefficient varied from relatively low values of 6.0 cm⁻³ and 12.8 Mm⁻¹, associated with air masses that passed over the Atlantic Ocean, to relatively high values of 119 cm⁻³ and 276 Mm⁻¹, associated with south-western air masses. Analysis shows that the increase in the aerosol light scattering coefficient (276 Mm⁻¹) on 3 July 2012 was attributed to a major Sahara dust storm. An aerosol size distribution with pronounced coarse-particle dominance was attributed to the presence of dust particles, while resuspended dust within the urban environment was not observed.

  12. Shrinking lung syndrome as a manifestation of pleuritis: a new model based on pulmonary physiological studies.

    Science.gov (United States)

    Henderson, Lauren A; Loring, Stephen H; Gill, Ritu R; Liao, Katherine P; Ishizawar, Rumey; Kim, Susan; Perlmutter-Goldenson, Robin; Rothman, Deborah; Son, Mary Beth F; Stoll, Matthew L; Zemel, Lawrence S; Sandborg, Christy; Dellaripa, Paul F; Nigrovic, Peter A

    2013-03-01

    The pathophysiology of shrinking lung syndrome (SLS) is poorly understood. We sought to define the structural basis for this condition through the study of pulmonary mechanics in affected patients. Since 2007, most patients evaluated for SLS at our institutions have undergone standardized respiratory testing including esophageal manometry. We analyzed these studies to define the physiological abnormalities driving respiratory restriction. Chest computed tomography data were post-processed to quantify lung volume and parenchymal density. Six cases met criteria for SLS. All presented with dyspnea as well as pleurisy and/or transient pleural effusions. Chest imaging results were free of parenchymal disease and corrected diffusing capacities were normal. Total lung capacities were 39%-50% of predicted. Maximal inspiratory pressures were impaired at high lung volumes, but not low lung volumes, in 5 patients. Lung compliance was strikingly reduced in all patients, accompanied by increased parenchymal density. Patients with SLS exhibited symptomatic and/or radiographic pleuritis associated with 2 characteristic physiological abnormalities: (1) impaired respiratory force at high but not low lung volumes; and (2) markedly decreased pulmonary compliance in the absence of identifiable interstitial lung disease. These findings suggest a model in which pleural inflammation chronically impairs deep inspiration, for example through neural reflexes, leading to parenchymal reorganization that impairs lung compliance, a known complication of persistently low lung volumes. Together these processes could account for the association of SLS with pleuritis as well as the gradual symptomatic and functional progression that is a hallmark of this syndrome.

  13. Model-based development and testing of advertising messages: A comparative study of two campaign proposals based on the MECCAS model and a conventional approach

    DEFF Research Database (Denmark)

    Bech-Larsen, Tino

    2001-01-01

    Traditionally, the development of advertising messages has been based on "creative independence", sometimes catalysed by inductively generated empirical data. Due to the recent intensified focus on advertising effectiveness, this state of affairs is beginning to change. The purpose of the study....... The comparison involved the efficiency of the managerial communication taking place in the message development process as well as target group communication effects. The managerial communication was studied by interviews with the involved advertising agency (Midtmarketing, Ikast, Denmark) and client staff...

  14. Discovering the Power of Individual-Based Modelling in Teaching and Learning: The Study of a Predator-Prey System

    Science.gov (United States)

    Ginovart, Marta

    2014-01-01

    The general aim is to promote the use of individual-based models (biological agent-based models) in teaching and learning contexts in life sciences and to make their progressive incorporation into academic curricula easier, complementing other existing modelling strategies more frequently used in the classroom. Modelling activities for the study…

  15. Study of proton-nucleus collisions at high energies based on the hydrodynamical model

    International Nuclear Information System (INIS)

    Masuda, N.; Weiner, R.M.

    1978-01-01

    We study proton-nucleus collisions at high energies using the one-dimensional hydrodynamical model of Landau, with special emphasis on the effect of the size of the target nucleus and of the magnitude of the velocity of sound of excited hadronic matter. We convert a collision problem of a proton and a nucleus with a spherical shape into that of a proton and a one-dimensional nuclear tunnel whose length is determined from the average impact parameter. By extending the methods developed by Milekhin and Emelyanov, we obtain the solutions of the hydrodynamical equations of proton-nucleus collisions for arbitrary target tunnel length and arbitrary velocity of sound. The connection between these solutions and observable physical quantities is established as in the work of Cooper, Frye, and Schonberg. Extensive numerical analyses are made at E_lab = 200 GeV and for the velocity of sound u = 1/√3 of a relativistic ideal Bose gas and u = 1/(7.5)^(1/2) of an interacting Bose gas. In order to compare proton-nucleus collisions with proton-proton collisions, all the analyses are made in the equal-velocity frame. We find the following results. (1) In comparing the number of secondary particles produced in p-A collisions N_pA with those in p-p collisions N_pp, while most of the excess of N_pA over N_pp is concentrated in the backward rapidity region, there exists also an increase of N_pA with A in the forward rapidity region. This result is at variance with the predictions of the energy-flux-cascade model and of the coherent-production model. (2) The excess energies are contained exclusively in the backward region. We also find evidence for new phenomena in proton-nucleus collisions. (3) There exists an asymmetry of the average energies of secondary particles between the forward and backward regions; in particular, the backward average energy greatly exceeds the forward average energy for larger nuclear targets. Thus, energetic particles are predominantly produced in the backward region

  16. Study on a Biometric Authentication Model based on ECG using a Fuzzy Neural Network

    Science.gov (United States)

    Kim, Ho J.; Lim, Joon S.

    2018-03-01

    Traditional authentication methods use numbers or graphic passwords and thus involve the risk of loss or theft. Various studies are underway regarding biometric authentication because it uses the unique biometric data of a human being. Biometric authentication based on the ECG uses signals that record the electrical activity of the heart. These signals are difficult to manipulate and have the advantage of enabling unrestrained measurements from sensors attached to the skin. This study concerns biometric authentication methods using the neural network with weighted fuzzy membership functions (NEWFM). In the biometric authentication process, normalization and ensemble averaging are applied during preprocessing, characteristics are extracted using Haar wavelets, and a registration process called "training" is performed in the fuzzy neural network. In the experiment, biometric authentication was performed on 73 subjects from the PhysioNet database. Between 10 and 40 ECG waveforms were tested for use in the registration process, and 15 ECG waveforms were deemed the appropriate number for registration. One ECG waveform was used during the authentication stage to conduct the biometric authentication test. Upon testing the proposed biometric authentication method on the 73 subjects from the PhysioNet database, the TAR was 98.32% and the FAR was 5.84%.
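    The preprocessing chain described above (per-beat normalization, ensemble averaging, Haar-wavelet decomposition) can be sketched as follows; the NEWFM classifier itself is not reproduced, the decomposition level is an assumption, and the input is random data standing in for aligned ECG beats.

        # Sketch of preprocessing and Haar-wavelet feature extraction for one subject.
        import numpy as np
        import pywt

        def ecg_features(beats, level=4):
            """beats: 2-D array, one aligned ECG beat per row."""
            beats = np.asarray(beats, dtype=float)
            beats = (beats - beats.mean(axis=1, keepdims=True)) / beats.std(axis=1, keepdims=True)
            template = beats.mean(axis=0)                     # ensemble average
            coeffs = pywt.wavedec(template, "haar", level=level)
            return np.concatenate(coeffs)                     # feature vector for a classifier

        rng = np.random.default_rng(0)
        fake_beats = rng.normal(size=(15, 256))               # 15 registration waveforms
        print(ecg_features(fake_beats).shape)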

  17. Validation of a model-based measurement of the minimum insert thickness of knee prostheses: a retrieval study.

    Science.gov (United States)

    van IJsseldijk, E A; Harman, M K; Luetzner, J; Valstar, E R; Stoel, B C; Nelissen, R G H H; Kaptein, B L

    2014-10-01

    Wear of polyethylene inserts plays an important role in failure of total knee replacement and can be monitored in vivo by measuring the minimum joint space width in anteroposterior radiographs. The objective of this retrospective cross-sectional study was to compare the accuracy and precision of a new model-based method with the conventional method by analysing the difference between the minimum joint space width measurements and the actual thickness of retrieved polyethylene tibial inserts. Before revision, the minimum joint space width values and their locations on the insert were measured in 15 fully weight-bearing radiographs. These measurements were compared with the actual minimum thickness values and locations of the retrieved tibial inserts after revision. The mean error of the model-based minimum joint space width measurement was significantly smaller than that of the conventional method for medial condyles (0.50 vs 0.94 mm). The location error of the model-based measurements was less than 10 mm in the medial direction in 12 cases and in the lateral direction in 13 cases. The model-based minimum joint space width measurement method is more accurate than the conventional method, with the same precision. Cite this article: Bone Joint Res 2014;3:289-96. ©2014 The British Editorial Society of Bone & Joint Surgery.

  18. Taxi trips distribution modeling based on Entropy-Maximizing theory: A case study in Harbin city-China

    Science.gov (United States)

    Tang, Jinjun; Zhang, Shen; Chen, Xinqiang; Liu, Fang; Zou, Yajie

    2018-03-01

    Understanding the Origin-Destination (OD) distribution of taxi trips is very important for improving the effectiveness of transportation planning and enhancing the quality of taxi services. This study proposes a new method based on Entropy-Maximizing theory to model the OD distribution in Harbin city using large-scale taxi GPS trajectories. Firstly, a K-means clustering method is utilized to partition the raw pick-up and drop-off locations into different zones, and trips are assumed to start from and end at zone centers. A generalized cost function is further defined by considering travel distance, time and fee between each OD pair. GPS data collected from more than 1000 taxis at an interval of 30 s during one month are divided into two parts: data from the first twenty days are treated as the training dataset and the last ten days as the testing dataset. The training dataset is used to calibrate the model, while the testing dataset is used to validate it. Furthermore, three indicators, mean absolute error (MAE), root mean square error (RMSE) and mean percentage absolute error (MPAE), are applied to evaluate the training and testing performance of the Entropy-Maximizing model versus the Gravity model. The results demonstrate that the Entropy-Maximizing model is superior to the Gravity model. The findings of the study validate the feasibility of deriving OD distributions from taxi GPS data in urban systems.
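    Models of this family distribute trips T_ij in proportion to exp(-beta*c_ij) while matching the observed trip totals of each zone. The sketch below shows a doubly constrained entropy-maximizing distribution balanced by iterative proportional fitting; the zone totals, cost matrix and beta are invented for illustration and are not the calibrated Harbin values.

        # Doubly constrained entropy-maximizing trip distribution,
        # T_ij = a_i * b_j * exp(-beta * c_ij), balanced so row sums match origin
        # totals and column sums match destination totals.
        import numpy as np

        def entropy_od(origins, destinations, cost, beta=0.1, iters=200):
            O, D, C = (np.asarray(x, dtype=float) for x in (origins, destinations, cost))
            f = np.exp(-beta * C)              # deterrence from the generalized cost
            b = np.ones(len(D))
            for _ in range(iters):             # iterative proportional fitting
                a = O / (f @ b)
                b = D / (f.T @ a)
            return a[:, None] * b[None, :] * f

        cost = np.array([[1.0, 4.0, 6.0], [4.0, 1.0, 3.0], [6.0, 3.0, 1.0]])
        T = entropy_od([100, 80, 60], [90, 90, 60], cost, beta=0.3)
        print(T.round(1))
        print(T.sum(axis=1), T.sum(axis=0))    # reproduces the zone totals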

  19. Measurement-based harmonic current modeling of mobile storage for power quality study in the distribution system

    Directory of Open Access Journals (Sweden)

    Wenge Christoph

    2017-12-01

    Full Text Available Electric vehicles (EVs) can be utilized as mobile storage in a power system. The use of battery chargers can cause current harmonics in the supplying AC system. In order to analyze the impact of different EVs with regard to their number and their emission of current harmonics, a generic harmonic current model of EV types was built and implemented in the power system simulation tool PSS®NETOMAC. Based on the measurement data for different types of EVs, three standardized harmonic EV models were developed and parametrized. Further, the identified harmonic models are used in load flow computations for a modeled German power distribution system. As a benchmark, a case scenario with a high market penetration of EVs in Germany in the year 2030 was studied. The impact of EV charging on the power distribution system was analyzed and evaluated against valid power quality standards.
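    The harmonic content that such measurement-based models reproduce is typically obtained from sampled charger current waveforms via a Fourier analysis over an integer number of fundamental cycles. The sketch below illustrates this on a synthetic waveform; the amplitudes, sampling rate and harmonic orders are made up and are not measurement data from the paper.

        # Extract individual harmonic amplitudes from a sampled current waveform.
        import numpy as np

        fs, f0, cycles = 10_000, 50, 10                     # sampling rate, fundamental, window
        t = np.arange(cycles * fs // f0) / fs
        i_t = 10 * np.sin(2 * np.pi * f0 * t) + 2.0 * np.sin(2 * np.pi * 3 * f0 * t) \
              + 1.0 * np.sin(2 * np.pi * 5 * f0 * t)        # fundamental + 3rd + 5th

        spectrum = np.abs(np.fft.rfft(i_t)) * 2 / len(i_t)  # peak amplitudes per bin
        freqs = np.fft.rfftfreq(len(i_t), d=1 / fs)
        for h in (1, 3, 5, 7):
            k = int(round(h * f0 / (fs / len(i_t))))        # bin index of harmonic h
            print(f"h={h}: {spectrum[k]:.2f} A at {freqs[k]:.0f} Hz")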

  20. Wavelet-based study of valence-arousal model of emotions on EEG signals with LabVIEW.

    Science.gov (United States)

    Guzel Aydin, Seda; Kaya, Turgay; Guler, Hasan

    2016-06-01

    This paper illustrates wavelet-based feature extraction for emotion assessment using electroencephalogram (EEG) signals through a graphical coding design. A two-dimensional (valence-arousal) emotion model was studied. Different emotions (happy, joy, melancholy, and disgust) were studied for assessment. These emotions were stimulated by video clips. EEG signals obtained from four subjects were decomposed into five frequency bands (gamma, beta, alpha, theta, and delta) using the "db5" wavelet function. Relative features were calculated to obtain further information. The impact of the emotions according to valence value was observed to be most evident on the power spectral density of the gamma band. The main objective of this work is not only to investigate the influence of the emotions on different frequency bands but also to overcome the difficulties of text-based programming. This work offers an alternative approach for emotion evaluation through EEG processing. There are a number of methods for emotion recognition, such as wavelet transform-based, Fourier transform-based, and Hilbert-Huang transform-based methods. However, the majority of these methods have been applied with text-based programming languages. In this study, we proposed and implemented an experimental feature extraction with a graphics-based language, which provides great convenience in bioelectrical signal processing.
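    Although the study was implemented in a graphical language (LabVIEW), the same five-band decomposition and relative band-energy features can be sketched in a few lines of Python with PyWavelets; the sampling rate, band labels and random input below are assumptions for illustration only.

        # 5-level 'db5' wavelet decomposition of one EEG channel and the relative
        # energy of each detail sub-band (coarsest to finest detail).
        import numpy as np
        import pywt

        def relative_band_energy(eeg, wavelet="db5", level=5):
            coeffs = pywt.wavedec(np.asarray(eeg, dtype=float), wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs[1:]])  # D5 ... D1
            return energies / energies.sum()

        rng = np.random.default_rng(1)
        eeg = rng.normal(size=4 * 128)             # 4 s of data at an assumed 128 Hz
        bands = ["delta", "theta", "alpha", "beta", "gamma"]   # approx. D5..D1 at 128 Hz
        for name, share in zip(bands, relative_band_energy(eeg)):
            print(name, round(float(share), 3))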

  1. Study on Emission Measurement of Vehicle on Road Based on Binomial Logit Model

    OpenAIRE

    Aly, Sumarni Hamid; Selintung, Mary; Ramli, Muhammad Isran; Sumi, Tomonori

    2011-01-01

    This research attempts to evaluate emission measurements of on-road vehicles. In this regard, the research develops a failure probability model for the vehicle emission test of passenger cars, which utilizes a binomial logit model. The model focuses on failure of the CO and HC emission tests for the gasoline car category and of the opacity emission test for the diesel-fuel car category as dependent variables, while vehicle age, engine size, brand and type of the cars are the independent variables. In order to imp...
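    For reference, a binomial logit failure model of the kind described takes the following general form; the covariates are those listed in the abstract, and the coefficients are to be estimated from the inspection data (this is a generic specification, not the fitted model from the paper).

        \[
          P(\mathrm{fail}_i = 1) =
            \frac{1}{1 + \exp\!\left[-\left(\beta_0 + \beta_1\,\mathrm{age}_i
              + \beta_2\,\mathrm{engine\_size}_i + \beta_3\,\mathrm{brand}_i
              + \beta_4\,\mathrm{type}_i\right)\right]}
        \]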

  2. Physically based modeling of rainfall-triggered landslides: a case study in the Luquillo forest, Puerto Rico

    Science.gov (United States)

    Lepore, C.; Arnone, E.; Noto, L. V.; Sivandran, G.; Bras, R. L.

    2013-09-01

    This paper presents the development of a rainfall-triggered landslide module within an existing physically based spatially distributed ecohydrologic model. The model, tRIBS-VEGGIE (Triangulated Irregular Networks-based Real-time Integrated Basin Simulator and Vegetation Generator for Interactive Evolution), is capable of a sophisticated description of many hydrological processes; in particular, the soil moisture dynamics are resolved at a temporal and spatial resolution required to examine the triggering mechanisms of rainfall-induced landslides. The validity of the tRIBS-VEGGIE model to a tropical environment is shown with an evaluation of its performance against direct observations made within the study area of Luquillo Forest. The newly developed landslide module builds upon the previous version of the tRIBS landslide component. This new module utilizes a numerical solution to the Richards' equation (present in tRIBS-VEGGIE but not in tRIBS), which better represents the time evolution of soil moisture transport through the soil column. Moreover, the new landslide module utilizes an extended formulation of the factor of safety (FS) to correctly quantify the role of matric suction in slope stability and to account for unsaturated conditions in the evaluation of FS. The new modeling framework couples the capabilities of the detailed hydrologic model to describe soil moisture dynamics with the infinite slope model, creating a powerful tool for the assessment of rainfall-triggered landslide risk.
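    As background for the factor-of-safety formulation mentioned above, infinite-slope stability analyses that retain the pressure-head (matric suction) contribution are commonly written in the following form; this is a representative expression of that family and may differ in detail from the extended formulation implemented in the module.

        \[
          FS = \frac{\tan\phi'}{\tan\theta}
             + \frac{c' - \psi\,\gamma_w \tan\phi'}{\gamma_s\, z\, \sin\theta\, \cos\theta}
        \]
        % \phi', c' : effective friction angle and cohesion
        % \theta, z : slope angle and depth of the potential failure surface
        % \gamma_s, \gamma_w : soil and water unit weights
        % \psi : pressure head (negative under unsaturated conditions, which raises FS)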

  3. Physically based modeling of rainfall-triggered landslides: a case study in the Luquillo forest, Puerto Rico

    Directory of Open Access Journals (Sweden)

    C. Lepore

    2013-09-01

    Full Text Available This paper presents the development of a rainfall-triggered landslide module within an existing physically based spatially distributed ecohydrologic model. The model, tRIBS-VEGGIE (Triangulated Irregular Networks-based Real-time Integrated Basin Simulator and Vegetation Generator for Interactive Evolution), is capable of a sophisticated description of many hydrological processes; in particular, the soil moisture dynamics are resolved at a temporal and spatial resolution required to examine the triggering mechanisms of rainfall-induced landslides. The validity of the tRIBS-VEGGIE model to a tropical environment is shown with an evaluation of its performance against direct observations made within the study area of Luquillo Forest. The newly developed landslide module builds upon the previous version of the tRIBS landslide component. This new module utilizes a numerical solution to the Richards' equation (present in tRIBS-VEGGIE but not in tRIBS), which better represents the time evolution of soil moisture transport through the soil column. Moreover, the new landslide module utilizes an extended formulation of the factor of safety (FS) to correctly quantify the role of matric suction in slope stability and to account for unsaturated conditions in the evaluation of FS. The new modeling framework couples the capabilities of the detailed hydrologic model to describe soil moisture dynamics with the infinite slope model, creating a powerful tool for the assessment of rainfall-triggered landslide risk.

  4. Modeling the angular motion dynamics of spacecraft with a magnetic attitude control system based on experimental studies and dynamic similarity

    Science.gov (United States)

    Kulkov, V. M.; Medvedskii, A. L.; Terentyev, V. V.; Firsyuk, S. O.; Shemyakov, A. O.

    2017-12-01

    The problem of spacecraft attitude control using electromagnetic systems interacting with the Earth's magnetic field is considered. A set of dimensionless parameters has been formed to investigate the spacecraft orientation regimes based on dynamically similar models. The results of experimental studies of small spacecraft with a magnetic attitude control system can be extrapolated to the in-orbit spacecraft motion control regimes by using the methods of the dimensional and similarity theory.

  5. Towards A Model-Based Prognostics Methodology For Electrolytic Capacitors: A Case Study Based On Electrical Overstress Accelerated Aging

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a model-driven methodology for predicting the remaining useful life of electrolytic capacitors. This methodology adopts a Kalman filter...

  6. An empirical study of the pathology of organizational communications based on three branches model: A case study

    Directory of Open Access Journals (Sweden)

    Mehdi Kheirandish

    2017-12-01

    Full Text Available Understanding the obstacles facing an organization's communication system has become a critical task for managers. The present study analyzes the major vulnerabilities of organizational communication from structural, behavioral and contextual perspectives. The statistical population includes employees and managers in the headquarters of the National Iranian Oil Company. After assessing the validity and reliability of a conceptual model, we used the Kolmogorov–Smirnov test, t-test and F-test to analyze our data. The results show that the communication barriers rank as follows: structural elements such as centrality and formality, contextual elements such as cultural and technical barriers, and finally behavioral elements such as perceptual and human barriers.

  7. Model-based development and testing of advertising messages: A comparative study of two campaign proposals based on the MECCAS model and a conventional approach

    DEFF Research Database (Denmark)

    Bech-Larsen, Tino

    2001-01-01

    Traditionally, the development of advertising messages has been based on "creative independence", sometimes catalysed by inductively generated empirical data. Due to the recent intensified focus on advertising effectiveness, this state of affairs is beginning to change. The purpose of the study described in this article is to compare the development and effects of two campaign proposals, with the common aim of increasing the consumption of apples among young Danes (18 to 35 years of age). One of the proposals is the result of an inductive-creative process, while the other is based on the MECCAS model... The comparison involved the efficiency of the managerial communication taking place in the message development process as well as target group communication effects. The managerial communication was studied by interviews with the involved advertising agency (Midtmarketing, Ikast, Denmark) and client staff...

  8. ANFIS-based modelling for photovoltaic power supply system: A case study

    Energy Technology Data Exchange (ETDEWEB)

    Mellit, Adel [Faculty of Sciences and Technology, Department of Electronics, LAMEL, Jijel University, Ouled-Aissa, P.O. Box 98, Jijel 18000 (Algeria); Kalogirou, Soteris A. [Department of Mechanical Engineering and Materials Science and Engineering, Cyprus University of Technology, P.O. Box 50329, Limassol 3603 (Cyprus)

    2011-01-15

    Due to the various seasonal, monthly and daily changes in meteorological data, it is relatively difficult to find a suitable model for a photovoltaic power supply (PVPS) system. This paper deals with the modelling and simulation of a PVPS system using an Adaptive Neuro-Fuzzy Inference Scheme (ANFIS) and the proposition of a new expert configuration PVPS system. For the modelling of the PVPS system, it is required to find suitable models for its different components (ANFIS PV generator, ANFIS battery and ANFIS regulator) that could give satisfactory results under variable climatic conditions in order to test its performance and reliability. A database of measured climate data (global radiation, temperature and humidity) and electrical data (photovoltaic, battery and regulator voltage and current) of a PVPS system installed in Tahifet (south of Algeria) has been recorded for the period from 1992 to 1997. These data have been used for the modelling and simulation of the PVPS system. The results indicated that the reliability and the accuracy of the simulated system are excellent and the correlation coefficient between measured values and those estimated by the ANFIS gave a good prediction accuracy of 98%. Additionally, test results show that the ANFIS performed better than the Artificial Neural Network (ANN), which was also tried for modelling the system. In addition, a new configuration of an expert PVPS system is proposed in this work. The predicted electrical data by the ANFIS model can be used for several applications in PV systems. (author)

  9. Mathematical modeling of human glioma growth based on brain topological structures: study of two clinical cases.

    Directory of Open Access Journals (Sweden)

    Cecilia Suarez

    Full Text Available Gliomas are the most common primary brain tumors and yet almost incurable due mainly to their great invasion capability. This represents a challenge to present-day clinical oncology. Here, we introduce a mathematical model aiming to improve the definition of tumor spreading capability. The model consists of a time-dependent reaction-diffusion equation in a three-dimensional spatial domain that distinguishes between different brain topological structures. The model uses a series of digitized images from brain slices covering the whole human brain. The Talairach atlas included in the model describes brain structures at different levels. Also, the inclusion of the Brodmann areas allows prediction of the brain functions affected during tumor evolution and the estimation of correlated symptoms. The model is solved numerically using patient-specific parametrization and finite differences. Simulations consider an initial state with cellular proliferation alone (benign tumor), and an advanced state when infiltration starts (malignant tumor). Survival time is estimated on the basis of tumor size and location. The model is used to predict tumor evolution in two clinical cases. In the first case, predictions show that real infiltrative areas are underestimated by current diagnostic imaging. In the second case, tumor spreading predictions were shown to be more accurate than those derived from previous models in the literature. Our results suggest that the inclusion of differential migration in glioma growth models constitutes another step towards a better prediction of tumor infiltration at the moment of surgical or radiosurgical target definition. Also, the addition of physiological/psychological considerations to classical anatomical models will provide a better and integral understanding of the patient disease at the moment of deciding therapeutic options, taking into account not only survival but also life quality.
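
    The governing equation is only described in words in the record; a generic proliferation-infiltration form used in many glioma growth models, with a diffusion coefficient that differs between brain structures, is shown below for orientation (the paper's exact terms and parameters may differ):

```latex
\frac{\partial c}{\partial t} \;=\; \nabla \cdot \big( D(\mathbf{x})\, \nabla c \big) \;+\; \rho\, c \left( 1 - \frac{c}{K} \right)
```

    where c(x, t) is the tumor cell density, D(x) a spatially varying diffusion coefficient (e.g., higher in white than in grey matter), ρ the proliferation rate and K the carrying capacity.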

  10. Multi-Collinearity Based Model Selection for Landslide Susceptibility Mapping: A Case Study from Ulus District of Karabuk, Turkey

    Science.gov (United States)

    Sahin, E. K.; Colkesen, I.; Kavzoglu, T.

    2017-12-01

    Identification of localities prone to landslide areas plays an important role for emergency planning, disaster management and recovery planning. Due to its great importance for disaster management, producing accurate and up-to-date landslide susceptibility maps is essential for hazard mitigation purpose and regional planning. The main objective of the present study was to apply multi-collinearity based model selection approach for the production of a landslide susceptibility map of Ulus district of Karabuk, Turkey. It is a fact that data do not contain enough information to describe the problem under consideration when the factors are highly correlated with each other. In such cases, choosing a subset of the original features will often lead to better performance. This paper presents multi-collinearity based model selection approach to deal with the high correlation within the dataset. Two collinearity diagnostic factors (Tolerance (TOL) and the Variance Inflation Factor (VIF)) are commonly used to identify multi-collinearity. Values of VIF that exceed 10.0 and TOL values less than 1.0 are often regarded as indicating multi-collinearity. Five causative factors (slope length, curvature, plan curvature, profile curvature and topographical roughness index) were found highly correlated with each other among 15 factors available for the study area. As a result, the five correlated factors were removed from the model estimation, and performances of the models including the remaining 10 factors (aspect, drainage density, elevation, lithology, land use/land cover, NDVI, slope, sediment transport index, topographical position index and topographical wetness index) were evaluated using logistic regression. The performance of prediction model constructed with 10 factors was compared to that of 15-factor model. The prediction performance of two susceptibility maps was evaluated by overall accuracy and the area under the ROC curve (AUC) values. Results showed that overall
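
    The TOL/VIF screening step can be reproduced with standard tools. A minimal sketch using statsmodels is given below; the factor table is synthetic (one factor is deliberately made collinear with slope), not the study's data.

```python
# Multicollinearity screening with tolerance (TOL) and the variance inflation
# factor (VIF); factors with VIF > 10 (TOL < 0.1) would be dropped.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

rng = np.random.default_rng(1)
n = 1000
slope = rng.uniform(0, 45, n)
factors = pd.DataFrame({
    "slope": slope,
    "slope_length": 2.0 * slope + rng.normal(0, 1, n),  # deliberately correlated
    "elevation": rng.uniform(100, 1500, n),
    "ndvi": rng.uniform(-1, 1, n),
})

X = add_constant(factors)
for idx, name in enumerate(factors.columns, start=1):   # column 0 is the constant
    vif = variance_inflation_factor(X.values, idx)
    tol = 1.0 / vif
    print(f"{name:13s} VIF={vif:8.2f} TOL={tol:6.3f} -> {'drop' if vif > 10 else 'keep'}")
```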

  11. Learning Design of Problem Based Learning Model Based on Recommendations of Sintax Study and Contents Issues on Physics Impulse Materials with Experimental Activities

    Directory of Open Access Journals (Sweden)

    Kristia Agustina

    2017-08-01

    Full Text Available This study aims to design a Problem Based Learning (PBL) lesson based on recommendations from a study of the PBL syntax and of content issues, applied to the physics Impulse material through experiments. This is development research using the Kemp model. The reference for the learning design is the result of the syntax study and the content of existing PBL implementation problems from Agustina's research. The instructional design is applied to the physics material on Impulse through experimental activity. Limited trials were conducted with a group of students of the SWCU Physics Education Study Program in Salatiga, while the validity test was conducted by high school teachers and physics education lecturers. The results of the limited trial evaluation and the validity test were used to improve the designs that had been made. The conclusion of this research is that a learning design using the PBL model on Impulse material, referring to the results of the syntax study and the problem content of existing PBL implementations, can be produced as a learning activity designed around laboratory experiments. A factory car crash test video can serve as the actual problem for the Impulse material. The validation tests and limited trials indicated that the design made by the researchers can be used with small revisions. This research also suggests that, when designing lessons with the PBL model, actual problems can be collected from newspapers, YouTube, the internet, and television.

  12. Trait-based model development to support breeding programs. A case study for salt tolerance and rice.

    Science.gov (United States)

    Paleari, Livia; Movedi, Ermes; Confalonieri, Roberto

    2017-06-28

    Eco-physiological models are increasingly used to analyze G × E × M interactions to support breeding programs via the design of ideotypes for specific contexts. However, available crop models are only partly suitable for this purpose, since they often lack clear relationships between parameters and traits breeders are working on. Taking salt stress tolerance and rice as a case study, we propose a paradigm shift towards the building of ideotyping-specific models explicitly around traits involved in breeding programs. Salt tolerance is a complex trait relying on different physiological processes that can be alternatively selected to improve the overall crop tolerance. We developed a new model explicitly accounting for these traits and we evaluated its performance using data from growth chamber experiments (e.g., R² ranged from 0.74 to 0.94 for the biomass of different plant organs). Using the model, we were able to show how an increase in the overall tolerance can derive from completely different physiological mechanisms according to soil/water salinity dynamics. The study demonstrated that a trait-based approach can increase the usefulness of mathematical models for supporting breeding programs.

  13. Multiobjective optimization model of intersection signal timing considering emissions based on field data: A case study of Beijing.

    Science.gov (United States)

    Kou, Weibin; Chen, Xumei; Yu, Lei; Gong, Huibo

    2018-04-18

    Most existing signal timing models aim to minimize the total delay and stops at intersections, without considering environmental factors. This paper analyzes the trade-off between vehicle emissions and traffic efficiency on the basis of field data. First, considering the different operating modes of cruising, acceleration, deceleration, and idling, field emission and Global Positioning System (GPS) data are collected to estimate emission rates for heavy-duty and light-duty vehicles. Second, a multiobjective signal timing optimization model is established based on a genetic algorithm to minimize delay, stops, and emissions. Finally, a case study is conducted in Beijing. Nine scenarios are designed considering different weights of emission and traffic efficiency. The results, compared with those using the Highway Capacity Manual (HCM) 2010, show that signal timing optimized by the proposed model can decrease vehicle delays and emissions more significantly. Vehicle emissions are heavy at signalized intersections in urban areas, and the model explicitly weighs them against traffic efficiency when optimizing signal timing. The optimization model can be applied in different cities, which provides support for eco-signal design and development.
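
    A toy version of the weighted-sum optimization can illustrate the idea for a two-phase intersection: a small genetic algorithm searches the green split while trading a Webster-type uniform-delay term against a simple idling-emission proxy. All flows, emission factors and the delay/emission expressions below are illustrative placeholders rather than the field-calibrated model of the study, and the two objective terms are left unnormalized for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)
C = 90.0                          # cycle length [s], fixed in this sketch
LOST = 10.0                       # total lost time per cycle [s]
q = np.array([0.30, 0.20])        # arrival flows on phases 1 and 2 [veh/s]
s = np.array([0.50, 0.50])        # saturation flows [veh/s]
EF = np.array([0.3, 1.2])         # idling-emission factors per phase [g per veh-s]

def objective(g1, w_delay, w_emis):
    """Weighted sum of a Webster-type uniform delay term and an emission proxy."""
    g = np.array([g1, C - LOST - g1])             # effective green times [s]
    lam = g / C
    x = np.minimum(q / (s * lam), 0.98)           # degree of saturation (capped)
    d = 0.5 * C * (1 - lam) ** 2 / (1 - lam * x)  # uniform delay [s/veh]
    delay = d * q * C                             # veh-s of delay per phase per cycle
    return w_delay * delay.sum() + w_emis * (EF * delay).sum()

def ga(w_delay, w_emis, pop_size=30, gens=60):
    pop = rng.uniform(15.0, C - LOST - 15.0, pop_size)   # candidate greens for phase 1
    for _ in range(gens):
        fit = np.array([objective(g, w_delay, w_emis) for g in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]  # truncation selection
        kids = 0.5 * (rng.choice(parents, pop_size) + rng.choice(parents, pop_size))
        kids += rng.normal(0.0, 1.0, pop_size)           # Gaussian mutation
        pop = np.clip(kids, 15.0, C - LOST - 15.0)
    return min(pop, key=lambda g: objective(g, w_delay, w_emis))

for w in (0.2, 0.5, 0.8):          # scenarios weighting delay vs. emissions differently
    g1 = ga(w_delay=w, w_emis=1.0 - w)
    print(f"w_delay={w:.1f}: best green for phase 1 ~ {g1:.1f} s")
```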

  14. Consumer Adoption of Future MyData-Based Preventive eHealth Services: An Acceptance Model and Survey Study.

    Science.gov (United States)

    Koivumäki, Timo; Pekkarinen, Saara; Lappi, Minna; Väisänen, Jere; Juntunen, Jouni; Pikkarainen, Minna

    2017-12-22

    Constantly increasing health care costs have led countries and health care providers to the point where health care systems must be reinvented. Consequently, electronic health (eHealth) has recently received a great deal of attention in social sciences in the domain of Internet studies. However, only a fraction of these studies focuses on the acceptability of eHealth, making consumers' subjective evaluation an understudied field. This study will address this gap by focusing on the acceptance of MyData-based preventive eHealth services from the consumer point of view. We are adopting the term "MyData", which according to a White Paper of the Finnish Ministry of Transport and Communication refers to "1) a new approach, a paradigm shift in personal data management and processing that seeks to transform the current organization centric system to a human centric system, 2) to personal data as a resource that the individual can access and control." The aim of this study was to investigate what factors influence consumers' intentions to use a MyData-based preventive eHealth service before use. We applied a new adoption model combining Venkatesh's unified theory of acceptance and use of technology 2 (UTAUT2) in a consumer context and three constructs from health behavior theories, namely threat appraisals, self-efficacy, and perceived barriers. To test the research model, we applied structural equation modeling (SEM) with Mplus software, version 7.4. A Web-based survey was administered. We collected 855 responses. We first applied traditional SEM for the research model, which was not statistically significant. We then tested for possible heterogeneity in the data by running a mixture analysis. We found that heterogeneity was not the cause for the poor performance of the research model. Thus, we moved on to model-generating SEM and ended up with a statistically significant empirical model (root mean square error of approximation [RMSEA] 0.051, Tucker-Lewis index [TLI] 0

  15. Modeling the impact of prostate edema on LDR brachytherapy: a Monte Carlo dosimetry study based on a 3D biphasic finite element biomechanical model

    Science.gov (United States)

    Mountris, K. A.; Bert, J.; Noailly, J.; Rodriguez Aguilera, A.; Valeri, A.; Pradier, O.; Schick, U.; Promayon, E.; Gonzalez Ballester, M. A.; Troccaz, J.; Visvikis, D.

    2017-03-01

    Prostate volume changes due to edema occurrence during transperineal permanent brachytherapy should be taken under consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model’s computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model for patient-specific dosimetry improvements, considering the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated based on previous findings. According to our results, for a prostate volume increase of 10-20% the Day30 urethra D10 dose metric is higher by 4.2%-10.5% compared to the Day1 value. The introduction of the edema dynamics in Day30 dosimetry shows a significant global dose overestimation identified on the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy accounting for post-implant dose alterations during the planning procedure.

  16. Is outdoor vector control needed for malaria elimination? An individual-based modelling study.

    Science.gov (United States)

    Zhu, Lin; Müller, Günter C; Marshall, John M; Arheart, Kristopher L; Qualls, Whitney A; Hlaing, WayWay M; Schlein, Yosef; Traore, Sekou F; Doumbia, Seydou; Beier, John C

    2017-07-03

    Residual malaria transmission has been reported in many areas even with adequate indoor vector control coverage, such as long-lasting insecticidal nets (LLINs). The increased insecticide resistance in Anopheles mosquitoes has resulted in reduced efficacy of the widely used indoor tools and has been linked with an increase in outdoor malaria transmission. There are considerations of incorporating outdoor interventions into integrated vector management (IVM) to achieve malaria elimination; however, more information on the combination of tools for effective control is needed to determine their utilization. A spatial individual-based model was modified to simulate the environment and malaria transmission activities in a hypothetical, isolated African village setting. LLINs and outdoor attractive toxic sugar bait (ATSB) stations were used as examples of indoor and outdoor interventions, respectively. Different interventions and lengths of efficacy periods were tested. Simulations continued for 420 days, and each simulation scenario was repeated 50 times. Mosquito populations, entomologic inoculation rates (EIRs), probabilities of local mosquito extinction, and proportion of time when the annual EIR was reduced below one were compared between different intervention types and efficacy periods. In the village setting with clustered houses, the combinational intervention of 50% LLINs plus outdoor ATSBs significantly reduced mosquito population and EIR in short term, increased the probability of local mosquito extinction, and increased the time when annual EIR is less than one per person compared to 50% LLINs alone; outdoor ATSBs alone significantly reduced mosquito population in short term, increased the probability of mosquito extinction, and increased the time when annual EIR is less than one compared to 50% LLINs alone, but there was no significant difference in EIR in short term between 50% LLINs and outdoor ATSBs. In the village setting with dispersed houses, the
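
    For orientation, the entomological inoculation rate compared across these scenarios is conventionally the product of the human-biting rate and the sporozoite rate; in Ross-Macdonald notation:

```latex
\mathrm{EIR} = m\, a\, s
```

    with m the mosquito density per human, a the human-biting rate per mosquito per unit time and s the sporozoite rate (fraction of infectious mosquitoes); the threshold used in the simulations corresponds to an annual integral of this quantity below one infective bite per person.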

  17. Reduced material model for closed cell metal foam infiltrated with phase change material based on high resolution numerical studies

    International Nuclear Information System (INIS)

    Ohsenbrügge, Christoph; Marth, Wieland; Navarro y de Sosa, Iñaki; Drossel, Welf-Guntram; Voigt, Axel

    2016-01-01

    Highlights: • Closed cell metal foam sandwich structures were investigated. • High resolution numerical studies were conducted using CT scan data. • A reduced model for use in commercial FE software reduces needed degrees of freedom. • Thermal inertia is increased about 4 to 5 times in PCM filled structures. • The reduced material model was verified using experimental data. - Abstract: The thermal behaviour of closed cell metal foam infiltrated with paraffin wax as latent heat storage for application in high precision tool machines was examined. Aluminium foam sandwiches with metallically bound cover layers were prepared in a powder metallurgical process and cross-sectional images of the structures were generated with X-ray computed tomography. Based on the image data a three dimensional highly detailed model was derived and prepared for simulation with the adaptive FE-library AMDiS. The pores were assumed to be filled with paraffin wax. The thermal conductivity and the transient thermal behaviour in the phase-change region were investigated. Based on the results from the highly detailed simulations a reduced model for use in commercial FE-software (ANSYS) was derived. It incorporates the properties of the matrix and the phase change material into a homogenized material. A sandwich-structure with and without paraffin was investigated experimentally under constant thermal load. The results were used to verify the reduced material model in ANSYS.
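
    The record does not give the homogenized material law itself. One common way to fold the latent heat of an infiltrated PCM into a single temperature-dependent property for commercial FE software is an apparent (effective) heat capacity over the melting interval, for example:

```latex
c_{\mathrm{eff}}(T) =
\begin{cases}
c_{p,\mathrm{s}}, & T < T_{\mathrm{s}},\\[4pt]
\dfrac{c_{p,\mathrm{s}} + c_{p,\mathrm{l}}}{2} + \dfrac{L}{T_{\mathrm{l}} - T_{\mathrm{s}}}, & T_{\mathrm{s}} \le T \le T_{\mathrm{l}},\\[4pt]
c_{p,\mathrm{l}}, & T > T_{\mathrm{l}},
\end{cases}
```

    where L is the latent heat of the paraffin and T_s, T_l its solidus and liquidus temperatures; the aluminium matrix contribution is then added by volume-fraction weighting. This is a standard homogenization device and not necessarily the exact reduced model derived in the paper.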

  18. Feasibility Study for an Air Force Environmental Model and Data Exchange. Volume 4. Appendix G. Model Review and Index-Air Multimedia and Other Models, Plus Data Bases.

    Science.gov (United States)

    1983-07-01

    Model acronym: SANGRE. Model name: Nonlinear Thermal Creep of Geological... release rates, deposition and settling velocities, scavenging rates, and decay constants; arrays of meat animals, dairy cattle, crop areas, and... multihit and one-hit dose-response functions applied to animal response data derived from lifetime feeding studies. Document citations: Rai, K., and...

  19. Statistical methods applied to the study of opinion formation models: a brief overview and results of a numerical study of a model based on the social impact theory

    Science.gov (United States)

    Bordogna, Clelia María; Albano, Ezequiel V.

    2007-02-01

    The aim of this paper is twofold. On the one hand we present a brief overview on the application of statistical physics methods to the modelling of social phenomena focusing our attention on models for opinion formation. On the other hand, we discuss and present original results of a model for opinion formation based on the social impact theory developed by Latané. The presented model accounts for the interaction among the members of a social group under the competitive influence of a strong leader and the mass media, both supporting two different states of opinion. Extensive simulations of the model are presented, showing that they led to the observation of a rich scenery of complex behaviour including, among others, critical behaviour and phase transitions between a state of opinion dominated by the leader and another dominated by the mass media. The occurrence of interesting finite-size effects reveals that, in small communities, the opinion of the leader may prevail over that of the mass media. This observation is relevant for the understanding of social phenomena involving a finite number of individuals, in contrast to actual physical phase transitions that take place in the thermodynamic limit. Finally, we give a brief outlook of open questions and lines for future work.
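
    A minimal lattice sketch in the spirit of Latané's social impact theory is given below: each agent holds a binary opinion, the persuasive impact of the opposing camp (including a strong leader and a uniform mass-media field) is compared with the supportive impact of the agent's own camp, and the opinion flips when the former prevails. The lattice size, strength values, media field and distance scaling are illustrative choices, not the parameters of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
L = 21                               # lattice side (L x L agents)
ALPHA = 2.0                          # distance-decay exponent of social impact
H_MEDIA = 0.5                        # uniform mass-media field supporting opinion -1
LEADER_STRENGTH = 50.0               # strong leader (opinion +1) at the centre

opinion = rng.choice(np.array([-1, 1]), size=(L, L))
strength = rng.uniform(0.5, 1.5, size=(L, L))
ci, cj = L // 2, L // 2
opinion[ci, cj] = 1
strength[ci, cj] = LEADER_STRENGTH

xx, yy = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")

def impact_on(i, j, sign):
    """Summed impact on agent (i, j) from all agents currently holding `sign`."""
    dist = np.sqrt((xx - i) ** 2 + (yy - j) ** 2)
    dist[i, j] = 1.0                 # self-distance set to 1 (self-support)
    mask = opinion == sign
    return float(np.sum(strength[mask] / dist[mask] ** ALPHA))

for _ in range(30):                  # sweeps of random sequential updates
    for _ in range(L * L):
        i, j = (int(v) for v in rng.integers(0, L, 2))
        if (i, j) == (ci, cj):
            continue                 # the leader never yields
        own = opinion[i, j]
        supportive = impact_on(i, j, own) + (H_MEDIA if own == -1 else 0.0)
        persuasive = impact_on(i, j, -own) + (H_MEDIA if own == 1 else 0.0)
        if persuasive > supportive:
            opinion[i, j] = -own

print("fraction holding the leader's opinion:", float(np.mean(opinion == 1)))
```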

  20. Statistical methods applied to the study of opinion formation models: a brief overview and results of a numerical study of a model based on the social impact theory

    International Nuclear Information System (INIS)

    Bordogna, Clelia Maria; Albano, Ezequiel V

    2007-01-01

    The aim of this paper is twofold. On the one hand we present a brief overview on the application of statistical physics methods to the modelling of social phenomena focusing our attention on models for opinion formation. On the other hand, we discuss and present original results of a model for opinion formation based on the social impact theory developed by Latane. The presented model accounts for the interaction among the members of a social group under the competitive influence of a strong leader and the mass media, both supporting two different states of opinion. Extensive simulations of the model are presented, showing that they led to the observation of a rich scenery of complex behaviour including, among others, critical behaviour and phase transitions between a state of opinion dominated by the leader and another dominated by the mass media. The occurrence of interesting finite-size effects reveals that, in small communities, the opinion of the leader may prevail over that of the mass media. This observation is relevant for the understanding of social phenomena involving a finite number of individuals, in contrast to actual physical phase transitions that take place in the thermodynamic limit. Finally, we give a brief outlook of open questions and lines for future work

  1. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  2. The Risk Assessment Study for Electric Power Marketing Competitiveness Based on Cloud Model and TOPSIS

    Science.gov (United States)

    Li, Cunbin; Wang, Yi; Lin, Shuaishuai

    2017-09-01

    With the rapid development of the energy internet and the deepening of the electric power reform, the traditional electric power marketing mode no longer suits most electric power enterprises, which must therefore seek a breakthrough. However, in the face of increasingly complex marketing information, making a quick and reasonable transformation while keeping the assessment of electric power marketing competitiveness accurate and objective is a major problem. In this paper, a cloud model and TOPSIS method is proposed. Firstly, the electric power marketing competitiveness evaluation index system is built. Then the cloud model is utilized to transform the qualitative evaluation of the marketing data into quantitative values, and the entropy weight method is used to weaken the subjective factors in the evaluation index weights. Finally, the closeness degrees of the alternatives are obtained by the TOPSIS method. This method provides a novel solution for electric power marketing competitiveness evaluation. Through the case analysis, the effectiveness and feasibility of this model are verified.
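
    The entropy-weight and TOPSIS steps described above are standard and can be sketched as follows; the decision matrix (alternatives by evaluation indexes) is hypothetical, and all criteria are treated as benefit-type for brevity.

```python
import numpy as np

# Hypothetical decision matrix: 4 marketing alternatives x 5 evaluation indexes
# (already quantified, e.g., via the cloud model); all treated as benefit criteria.
X = np.array([
    [7.2, 6.5, 8.1, 5.9, 7.0],
    [6.8, 7.9, 6.4, 6.6, 7.4],
    [8.0, 6.1, 7.2, 7.5, 6.2],
    [5.9, 7.0, 6.9, 8.2, 6.8],
])

# 1) Entropy weights: more discriminating (lower-entropy) indexes get larger weights.
P = X / X.sum(axis=0)
m = X.shape[0]
entropy = -(P * np.log(P)).sum(axis=0) / np.log(m)
weights = (1 - entropy) / (1 - entropy).sum()

# 2) TOPSIS: weighted, vector-normalized matrix; distances to ideal solutions.
V = weights * X / np.linalg.norm(X, axis=0)
ideal, anti = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

print("entropy weights:", np.round(weights, 3))
print("closeness degrees:", np.round(closeness, 3))
print("ranking (best first):", np.argsort(-closeness) + 1)
```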

  3. Student Teachers' Modeling of Acceleration Using a Video-Based Laboratory in Physics Education: A Multimodal Case Study

    Directory of Open Access Journals (Sweden)

    Louis Trudel

    2016-06-01

    Full Text Available This exploratory study intends to model kinematics learning of a pair of student teachers when exposed to prescribed teaching strategies in a video-based laboratory. Two student teachers were chosen from the Francophone B.Ed. program of the Faculty of Education of a Canadian university. The study method consisted of having the participants interact with a video-based laboratory to complete two activities for learning properties of acceleration in rectilinear motion. Time limits were placed on the learning activities during which the researcher collected detailed multimodal information from the student teachers' answers to questions, the graphs they produced from experimental data, and the videos taken during the learning sessions. As a result, we describe the learning approach each one followed, the evidence of conceptual change and the difficulties they face in tackling various aspects of the accelerated motion. We then specify advantages and limits of our research and propose recommendations for further study.

  4. Comparison of Highly Resolved Model-Based Exposure Metrics for Traffic-Related Air Pollutants to Support Environmental Health Studies

    Directory of Open Access Journals (Sweden)

    Shih Ying Chang

    2015-12-01

    Full Text Available Human exposure to air pollution in many studies is represented by ambient concentrations from space-time kriging of observed values. Space-time kriging techniques based on a limited number of ambient monitors may fail to capture the concentration from local sources. Further, because people spend more time indoors, using ambient concentration to represent exposure may cause error. To quantify the associated exposure error, we computed a series of six different hourly-based exposure metrics at 16,095 Census blocks of three Counties in North Carolina for CO, NOx, PM2.5, and elemental carbon (EC) during 2012. These metrics include ambient background concentration from space-time ordinary kriging (STOK), ambient on-road concentration from the Research LINE source dispersion model (R-LINE), a hybrid concentration combining STOK and R-LINE, and their associated indoor concentrations from an indoor infiltration mass balance model. Using the hybrid-based indoor concentration as the standard, the comparison showed that outdoor STOK metrics yielded large error at both the population (67% to 93%) and individual level (average bias between −10% and 95%). For pollutants with a significant contribution from on-road emission (EC and NOx), the on-road based indoor metric performs the best at the population level (error less than 52%). At the individual level, however, the STOK-based indoor concentration performs the best (average bias below 30%). For PM2.5, due to the relatively low contribution from on-road emission (7%), the STOK-based indoor metric performs the best at both the population (error below 40%) and individual level (error below 25%). The results of the study will help future epidemiology studies to select appropriate exposure metrics and reduce potential bias in exposure characterization.
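
    The indoor metrics come from an infiltration mass-balance model; at steady state this reduces to scaling the outdoor concentration by an infiltration factor. The sketch below uses that standard steady-state form with illustrative parameter values (the penetration, air-exchange and deposition rates are assumptions, not values from the paper).

```python
def infiltration_factor(p: float, a: float, k: float) -> float:
    """Steady-state infiltration factor F_inf = P * a / (a + k).

    p : particle penetration efficiency (unitless, 0-1)
    a : air exchange rate (1/h)
    k : indoor deposition/loss rate (1/h); roughly 0 for non-reactive gases
    """
    return p * a / (a + k)

def indoor_concentration(c_background: float, c_onroad: float,
                         p: float = 0.8, a: float = 0.5, k: float = 0.2) -> float:
    """Hybrid indoor metric: infiltration factor applied to background + on-road parts."""
    return infiltration_factor(p, a, k) * (c_background + c_onroad)

# Example: PM2.5 with a kriged background of 8 ug/m3 and an on-road (dispersion
# model) increment of 0.6 ug/m3; all values illustrative only.
print(round(indoor_concentration(8.0, 0.6), 2), "ug/m3 indoors")
```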

  5. Outcompeting nitrite-oxidizing bacteria in single-stage nitrogen removal in sewage treatment plants: a model-based study.

    Science.gov (United States)

    Pérez, Julio; Lotti, Tommaso; Kleerebezem, Robbert; Picioreanu, Cristian; van Loosdrecht, Mark C M

    2014-12-01

    This model-based study investigated the mechanisms and operational window for efficient repression of nitrite oxidizing bacteria (NOB) in an autotrophic nitrogen removal process. The operation of a continuous single-stage granular sludge process was simulated for nitrogen removal from pretreated sewage at 10 °C. The effects of the residual ammonium concentration were explicitly analyzed with the model. Competition for oxygen between ammonia-oxidizing bacteria (AOB) and NOB was found to be essential for NOB repression even when the suppression of nitrite oxidation is assisted by nitrite reduction by anammox (AMX). The nitrite half-saturation coefficient of NOB and AMX proved non-sensitive for the model output. The maximum specific growth rate of AMX bacteria proved a sensitive process parameter, because higher rates would provide a competitive advantage for AMX. Copyright © 2014 Elsevier Ltd. All rights reserved.
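
    The oxygen competition at the core of this analysis is typically written with dual-Monod kinetics; in generic form (the parameter values and exact rate expressions of the model are not reproduced in the record):

```latex
\mu_{\mathrm{AOB}} = \mu^{\max}_{\mathrm{AOB}}\,
  \frac{S_{\mathrm{NH_4}}}{K_{\mathrm{NH_4}} + S_{\mathrm{NH_4}}}\,
  \frac{S_{\mathrm{O_2}}}{K^{\mathrm{AOB}}_{\mathrm{O_2}} + S_{\mathrm{O_2}}},
\qquad
\mu_{\mathrm{NOB}} = \mu^{\max}_{\mathrm{NOB}}\,
  \frac{S_{\mathrm{NO_2}}}{K_{\mathrm{NO_2}} + S_{\mathrm{NO_2}}}\,
  \frac{S_{\mathrm{O_2}}}{K^{\mathrm{NOB}}_{\mathrm{O_2}} + S_{\mathrm{O_2}}}
```

    NOB repression then amounts to operating at a dissolved-oxygen level low enough that μ_NOB falls below its washout rate while μ_AOB does not, a margin that is widened when anammox keeps the nitrite concentration low.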

  6. Wildlife corridors based on the spatial modeling of the human pressure: A Portuguese case study

    Science.gov (United States)

    Lara Nunes; Ana Luisa Gomes; Alexandra Fonseca

    2015-01-01

    In times of economical crisis, rewilding can be a less costly conservation management approach, able to generate economic value from wild lands and to rural communities. Simultaneously, improvement of connectivity between protected areas was identified as a global priority for conservation. Allying the rewilding concept and connectivity concern, a model for...

  7. Modeling of microdevices for SAW-based acoustophoresis - A study of boundary conditions

    DEFF Research Database (Denmark)

    Skov, Nils Refstrup; Bruus, Henrik

    2016-01-01

    We present a finite-element method modeling of acoustophoretic devices consisting of a single, long, straight, water-filled microchannel surrounded by an elastic wall of either borosilicate glass (pyrex) or the elastomer polydimethylsiloxane (PDMS) and placed on top of a piezoelectric transducer...

  8. Yeast Studies Lead to a New DNA-Based Model for Research on Development | Poster

    Science.gov (United States)

    A paper from Amar J. S. Klar, Ph.D., with the RNA Biology Laboratory in NCI’s Center for Cancer Research, has identified a model for DNA research that explains the congenital disorder of mirror hand movements in humans. A mirror movement is when an intentional movement on one side of the body is mirrored by an involuntary movement on the other.

  9. Pig Models of Neurodegenerative Disorders: Utilization in Cell Replacement-Based Preclinical Safety and Efficacy Studies

    Czech Academy of Sciences Publication Activity Database

    Doležalová, D.; Hruška-Plocháň, M.; Bjarkam, C. R.; Sorensen, J. C. H.; Cunningham, M.; Weingarten, D.; Ciacci, J. D.; Juhás, Štefan; Juhásová, Jana; Motlík, Jan; Hefferan, M. P.; Hazel, T.; Johe, K.; Carromeu, C.; Muotri, A.; Bui, J. D.; Strnádel, J.; Marsala, M.

    2014-01-01

    Roč. 522, č. 12 (2014), s. 2784-2801 ISSN 0021-9967 R&D Projects: GA TA ČR(CZ) TA01011466; GA MŠk ED2.1.00/03.0124 Institutional support: RVO:67985904 Keywords : pig * neurodegenerative models * stem cells Subject RIV: FH - Neurology Impact factor: 3.225, year: 2014

  10. Developing, choosing and using landscape evolution models to inform field-based landscape reconstruction studies

    NARCIS (Netherlands)

    Temme, A.J.A.M.; Armitage, J.; Attal, M.; Gorp, van Wouter; Coulthard, T.J.; Schoorl, J.M.

    2017-01-01

    Landscape evolution models (LEMs) are an increasingly popular resource for geomorphologists as they can operate as virtual laboratories where the implications of hypotheses about processes over human to geological timescales can be visualized at spatial scales from catchments to mountain ranges.

  11. Model For Marketing Strategy Decision Based On Multicriteria Decicion Making: A Case Study In Batik Madura Industry

    Science.gov (United States)

    Anna, I. D.; Cahyadi, I.; Yakin, A.

    2018-01-01

    Selection of a marketing strategy is a prominent source of competitive advantage for small and medium enterprise business development. The selection process is a multiple criteria decision-making problem, which includes the evaluation of various attributes or criteria in the process of strategy formulation. The objective of this paper is to develop a model for the selection of a marketing strategy in the Batik Madura industry. The current study proposes an integrated approach based on the analytic network process (ANP) and the technique for order preference by similarity to ideal solution (TOPSIS) to determine the best strategy for Batik Madura marketing problems. Based on the results of a group decision-making technique, this study selected fourteen criteria, including consistency, cost, trend following, customer loyalty, business volume, uniqueness, manpower, customer numbers, promotion, branding, business network, outlet location, credibility and innovation, as Batik Madura marketing strategy evaluation criteria. A survey questionnaire developed from a literature review was distributed to a sample frame of Batik Madura SMEs in Pamekasan. In the decision procedure step, expert evaluators were asked to establish the decision matrix by comparing the marketing strategy alternatives under each of the individual criteria. Then, considerations obtained from the ANP and TOPSIS methods were applied to build the specific criteria constraints and range of the launch strategy in the model. The model in this study demonstrates that, under the current business situation, a straight-focus marketing strategy is the best marketing strategy for Batik Madura SMEs in Pamekasan.

  12. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for carbon cycle studies

    Science.gov (United States)

    He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be potentially representative of the performance of species-level simulations while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
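
    The species-to-PFT aggregation described above is a percent-cover weighted average of the calibrated parameters; a minimal sketch is given below (species names, cover fractions and parameter values are hypothetical, not the study's calibration results).

```python
import numpy as np

# Calibrated species-level parameters (hypothetical values), e.g. a maximum
# carbon assimilation rate and a turnover rate per species, plus percent cover.
species_params = {
    "black spruce":  {"cover": 0.45, "cmax": 6.0,  "turnover": 0.012},
    "white spruce":  {"cover": 0.20, "cmax": 7.5,  "turnover": 0.015},
    "paper birch":   {"cover": 0.25, "cmax": 11.0, "turnover": 0.030},
    "quaking aspen": {"cover": 0.10, "cmax": 12.5, "turnover": 0.035},
}

def aggregate(members: dict) -> dict:
    """Percent-cover weighted average of parameters over the given species."""
    cover = np.array([m["cover"] for m in members.values()])
    w = cover / cover.sum()
    names = [k for k in next(iter(members.values())) if k != "cover"]
    return {p: float(np.sum(w * np.array([m[p] for m in members.values()])))
            for p in names}

needleleaf = {k: v for k, v in species_params.items() if "spruce" in k}
broadleaf = {k: v for k, v in species_params.items() if "spruce" not in k}

print("needleleaf PFT:", aggregate(needleleaf))
print("broadleaf PFT: ", aggregate(broadleaf))
print("biome level:   ", aggregate(species_params))
```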

  13. Combining integrated river modelling and agent based social simulation for river management; The case study of the Grensmaas project

    NARCIS (Netherlands)

    Valkering, P.; Krywkow, Jorg; Rotmans, J.; van der Veen, A.; Douben, N.; van Os, A.G.

    2003-01-01

    In this paper we present a coupled Integrated River Model – Agent Based Social Simulation model (IRM-ABSS) for river management. The models represent the case of the ongoing river engineering project “Grensmaas”. In the ABSS model stakeholders are represented as computer agents negotiating a river

  14. Burnout Study of Clinical Nurses in Vietnam: Development of Job Burnout Model Based on Leiter and Maslach's Theory.

    Science.gov (United States)

    Nguyen, Huong Thi Thu; Kitaoka, Kazuyo; Sukigara, Masune; Thai, Anh Lan

    2018-03-01

    This study aimed to create a Vietnamese version of both the Maslach Burnout Inventory-General Survey (MBI-GS) and the Areas of Worklife Scale (AWS), to assess the burnout state of Vietnamese clinical nurses, and to develop a causal model of burnout of clinical nurses. We conducted a descriptive, cross-sectional survey. The questionnaire was hand-delivered by nursing departments to 500 clinical nurses in three hospitals. The Vietnamese MBI-GS and AWS were then examined for reliability and validity. We used the revised exhaustion +1 burnout classification to assess burnout state. We performed path analysis to develop a Vietnamese causal model based on the original model from Leiter and Maslach's theory. We found that both scales were reliable and valid for assessing burnout. Among nurse participants, the percentage of severe burnout was 0.7%, that of burnout was 15.8%, and 17.2% of nurses were exhausted. The best predictor of burnout was the "on-duty work schedule", under which clinical nurses have to work for 24 hours. In the causal model, we also found pathways both similar to and different from the original model. The Vietnamese MBI-GS and AWS were applicable to research on occupational stress. Nearly one-fifth of Vietnamese clinical nurses were working in a burnout state. The causal model suggested a range of factors resulting in burnout, and it is necessary to consider specific solutions to prevent the burnout problem. Copyright © 2018. Published by Elsevier B.V.

  15. Energy Sustainability Evaluation Model Based on the Matter-Element Extension Method: A Case Study of Shandong Province, China

    Directory of Open Access Journals (Sweden)

    Siqi Li

    2017-11-01

    Full Text Available Energy sustainability is of vital importance to regional sustainability, because energy sustainability is closely related to both regional economic growth and social stability. The existing energy sustainability evaluation methods lack a unified system to determine the relevant influencing factors, are relatively weak in quantitative analysis, and do not fully describe the ‘paradoxical’ characteristics of energy sustainability. To solve those problems and to reasonably and objectively evaluate energy sustainability, we propose an energy sustainability evaluation model based on the matter-element extension method. We first select energy sustainability evaluation indexes based on previous research and experience. Then, a variation coefficient method is used to determine the weights of these indexes. Finally, the study establishes the classical domain, joint domain, and the matter-element relationship to evaluate energy sustainability through matter-element extension. Data from Shandong Province is used as a case study to evaluate the region’s energy sustainability. The case study shows that the proposed energy sustainability evaluation model, based on the matter-element extension method, can effectively evaluate regional energy sustainability.
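
    The variation-coefficient weighting step mentioned above assigns larger weights to indexes that vary more across the evaluated periods or regions; a minimal sketch with a hypothetical index matrix (not the Shandong data) is shown below.

```python
import numpy as np

# Hypothetical index matrix: rows are years (or regions), columns are
# energy-sustainability evaluation indexes, already normalized to [0, 1].
X = np.array([
    [0.62, 0.48, 0.75, 0.31],
    [0.58, 0.52, 0.70, 0.35],
    [0.66, 0.45, 0.82, 0.28],
    [0.71, 0.50, 0.79, 0.33],
])

cv = X.std(axis=0, ddof=1) / X.mean(axis=0)   # coefficient of variation per index
weights = cv / cv.sum()                       # normalized variation-coefficient weights
print("variation-coefficient weights:", np.round(weights, 3))
```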

  16. A model-based meta-analysis of monoclonal antibody pharmacokinetics to guide optimal first-in-human study design

    Science.gov (United States)

    Davda, Jasmine P; Dodds, Michael G; Gibbs, Megan A; Wisdom, Wendy; Gibbs, John P

    2014-01-01

    The objectives of this retrospective analysis were (1) to characterize the population pharmacokinetics (popPK) of four different monoclonal antibodies (mAbs) in a combined analysis of individual data collected during first-in-human (FIH) studies and (2) to provide a scientific rationale for prospective design of FIH studies with mAbs. The data set was composed of 171 subjects contributing a total of 2716 mAb serum concentrations, following intravenous (IV) and subcutaneous (SC) doses. mAb PK was described by an open 2-compartment model with first-order elimination from the central compartment and a depot compartment with first-order absorption. Parameter values obtained from the popPK model were further used to generate optimal sampling times for a single dose study. A robust fit to the combined data from four mAbs was obtained using the 2-compartment model. Population parameter estimates for systemic clearance and central volume of distribution were 0.20 L/day and 3.6 L with intersubject variability of 31% and 34%, respectively. The random residual error was 14%. Differences (> 2-fold) in PK parameters were not apparent across mAbs. Rich designs (22 samples/subject), minimal designs for popPK (5 samples/subject), and optimal designs for non-compartmental analysis (NCA) and popPK (10 samples/subject) were examined by stochastic simulation and estimation. Single-dose PK studies for linear mAbs executed using the optimal designs are expected to yield high-quality model estimates, and accurate capture of NCA estimations. This model-based meta-analysis has determined typical popPK values for four mAbs with linear elimination and enabled prospective optimization of FIH study designs, potentially improving the efficiency of FIH studies for this class of therapeutics. PMID:24837591
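
    The structural model reported above (two compartments, first-order elimination from the central compartment, a depot with first-order absorption) can be written as three ODEs. The sketch below uses the published typical values for clearance (0.20 L/day) and central volume (3.6 L); the absorption rate, inter-compartmental clearance and peripheral volume are illustrative assumptions, not estimates from the analysis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Population-typical parameters: CL and VC from the abstract; KA, Q and VP are
# illustrative assumptions for this sketch.
CL, VC = 0.20, 3.6            # clearance [L/day], central volume [L]
KA, Q, VP = 0.25, 0.35, 2.5   # absorption rate [1/day], inter-compartmental CL [L/day], peripheral volume [L]

def rhs(t, y):
    a_depot, a_c, a_p = y                        # amounts [mg]
    da_depot = -KA * a_depot                     # first-order absorption from the SC depot
    da_c = (KA * a_depot
            - (CL / VC) * a_c                    # first-order elimination
            - (Q / VC) * a_c + (Q / VP) * a_p)   # distribution to/from the periphery
    da_p = (Q / VC) * a_c - (Q / VP) * a_p
    return [da_depot, da_c, da_p]

dose_mg = 100.0                                  # single SC dose
t_eval = np.linspace(0, 84, 200)                 # 12 weeks, in days
sol = solve_ivp(rhs, (0, 84), [dose_mg, 0.0, 0.0], t_eval=t_eval, rtol=1e-8)

conc = sol.y[1] / VC                             # serum concentration [mg/L]
print(f"Cmax ~ {conc.max():.2f} mg/L at day {t_eval[conc.argmax()]:.1f}")
```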

  17. A model-based approach to studying changes in compositional heterogeneity

    NARCIS (Netherlands)

    Baeten, L.; Warton, D.; Calster, van H.; Frenne, De P.; Verstraeten, G.; Bonte, D.; Bernhardt-Romermann, M.; Cornelis, R.; Decocq, G.; Eriksson, O.; Hommel, P.W.F.M.

    2014-01-01

    1. Non-random species loss and gain in local communities change the compositional heterogeneity between communities over time, which is traditionally quantified with dissimilarity-based approaches. Yet, dissimilarities summarize the multivariate species data into a univariate index and obscure the

  18. Innovative biomagnetic imaging sensors for breast cancer: A model-based study

    International Nuclear Information System (INIS)

    Deng, Y.; Golkowski, M.

    2012-01-01

    Breast cancer is a serious potential health problem for all women and is the second leading cause of cancer deaths in the United States. The current screening procedures and imaging techniques, including x-ray mammography, clinical biopsy, ultrasound imaging, and magnetic resonance imaging, provide only 73% accuracy in detecting breast cancer. This gives the impetus to explore alternate techniques for imaging the breast and detecting early stage tumors. Among the complementary methods, noninvasive biomagnetic breast imaging is attractive and promising, because it avoids both the ionizing radiation and the breast compression that the prevalent x-ray mammography suffers from. It furthermore offers very high contrast because of the significant differences in electromagnetic properties between cancerous, benign, and normal breast tissues. In this paper, a hybrid and accurate modeling tool for biomagnetic breast imaging is developed, which couples the electromagnetic and ultrasonic energies, and initial validations between model predictions and experimental findings are conducted.

  19. Developing an Agent-Based Model to Simulate Urban Land-Use Expansion (Case Study: Qazvin)

    OpenAIRE

    F. Nourian; A. A. Alesheikh; F. Hosseinali

    2012-01-01

    Extended abstract. 1 - Introduction. Urban land-use expansion is a challenging issue in developing countries. Increases in population as well as the immigration from the villages to the cities are the two major factors for that phenomenon. Those factors have reduced the influence of efforts that try to limit the cities’ boundaries. Thus, spatial planners always look for the models that simulate the expansion of urban land-uses and enable them to prevent unbalanced expansions of cities and guide the...

  20. Short-Term Wind Speed Hybrid Forecasting Model Based on Bias Correcting Study and Its Application

    OpenAIRE

    Mingfei Niu; Shaolong Sun; Jie Wu; Yuanlei Zhang

    2015-01-01

    The accuracy of wind speed forecasting is becoming increasingly important to improve and optimize renewable wind power generation. In particular, reliable short-term wind speed forecasting can enable model predictive control of wind turbines and real-time optimization of wind farm operation. However, due to the strong stochastic nature and dynamic uncertainty of wind speed, the forecasting of wind speed data using different patterns is difficult. This paper proposes a novel combination bias c...

  1. Study on an ISO 15926 based data modeling methodology for nuclear power industry

    Energy Technology Data Exchange (ETDEWEB)

    Cheon, Yang Ho; Park, Byeong Ho; Park, Seong Chan; Kim, Eun Kee [KEPCO E-C, Yongin (Korea, Republic of)

    2014-10-15

    The scope is therefore data integration and the data needed to support the whole life of a plant. This representation is specified by a generic, conceptual Data Model (DM) that is independent of any particular application, but that is able to record data from the applications used in plant design, fabrication and operation. The data model is designed to be used in conjunction with Reference Data (RD): standard instances of the DM that represent information common to a number of users, plants, or both. This paper introduces a high level description of the structure of ISO 15926, how it can be adapted to the nuclear power plant industry in particular, and how to extend the existing RDL for the nuclear power industry. As the ISO 15926 representation is independent of applications, interfaces to existing or future applications have to be developed. Such interfaces are provided by Templates that take input from external sources and 'lift' it into an ISO 15926 repository, and/or 'lower' the data into other applications. This is similar to the process defined by the W3C. Data exchange can be done using e.g. XML messages, but the modelling is independent of the technology used for the exchange.

  2. Case Study for the Return on Investment of Internet of Things Using Agent-Based Modelling and Data Science

    Directory of Open Access Journals (Sweden)

    Charles Houston

    2017-01-01

    Full Text Available As technology advances towards new paradigms such as the Internet of Things, there is a desire among business leaders for a reliable method to determine the value of supporting these ventures. Traditional simulation and analysis techniques cannot model the complex systems inherent in fields such as infrastructure asset management, or suffer from a lack of data on which to build a prediction. Agent-based modelling, through an integration with data science, presents an attractive simulation method to capture these underlying complexities and provide a solution. The aim of this work is to investigate this integration as a refined process for answering practical business questions. A specific case study is addressed to assess the return on investment of installing condition monitoring sensors on lift assets in a London Underground station. An agent-based model is developed for this purpose, supported by analysis from historical data. The simulation results demonstrate how returns can be achieved and highlight features induced as a result of stochasticity in the model. Suggestions of future research paths are additionally outlined.

  3. Testing the Community-Based Learning Collaborative (CBLC) implementation model: a study protocol

    OpenAIRE

    Hanson, Rochelle F.; Schoenwald, Sonja; Saunders, Benjamin E.; Chapman, Jason; Palinkas, Lawrence A.; Moreland, Angela D.; Dopp, Alex

    2016-01-01

    Background High rates of youth exposure to violence, either through direct victimization or witnessing, result in significant health/mental health consequences and high associated lifetime costs. Evidence-based treatments (EBTs), such as Trauma-Focused Cognitive Behavioral Therapy (TF-CBT), can prevent and/or reduce these negative effects, yet these treatments are not standard practice for therapists working with children identified by child welfare or mental health systems as needing service...

  4. Tourism Based on the Model of Strategic Place Triangle (A Case Study in Wisata Bahari Lamongan

    Directory of Open Access Journals (Sweden)

    Ismuhadi Heru Wijayanto

    2014-09-01

    Full Text Available Tourism is a very promising prospect to support the economic development of a region. We assess Wisata Bahari Lamongan (WBL) as a tourism object. WBL has seen a significant decrease in visitors over the last six years, so a strategic plan is needed to re-increase the number of visitors. This study used the Strategic Place Triangle (SPT) model as an analysis tool to assess the strategic planning in WBL. SPT is an approach that includes three key components, namely: Positioning, Differentiation and Brand (PDB). This research used Soft System Methodology (SSM), which is relevant for analyzing the strategic plan model and is expected to yield recommendations from tourists on problems in WBL. The results showed that the strategic planning of WBL was not completely in accordance with the SPT model. The positioning and differentiation elements were still weak, and the brand was not well developed. Therefore, we recommend that WBL set a targeted segmentation covering all social backgrounds and ages, especially children. WBL should make the sea its main differentiating factor, since it has marine tourism potential. WBL should build an image by providing the best service quality, security, comfort, cleanliness, and best quality rides. Keywords: Strategic Planning, Strategic Place Triangle, Positioning, Differentiation, Brand

  5. A GIS based transportation model for solid waste disposal - A case study on Asansol municipality

    International Nuclear Information System (INIS)

    Ghose, M.K.; Dikshit, A.K.; Sharma, S.K.

    2006-01-01

    Uncontrolled growth of the urban population in developing countries in recent years has made solid waste management an important issue. Very often, a substantial amount of total expenditures is spent on the collection of solid waste by city authorities. Optimization of the routing system for collection and transport of solid waste thus constitutes an important component of an effective solid waste management system. This paper describes an attempt to design and develop an appropriate storage, collection and disposal plan for the Asansol Municipality Corporation (AMC) of West Bengal State (India). A GIS optimal routing model is proposed to determine the minimum cost/distance efficient collection paths for transporting the solid wastes to the landfill. The model uses information on population density, waste generation capacity, road network and the types of road, storage bins and collection vehicles, etc. The proposed model can be used as a decision support tool by municipal authorities for efficient management of the daily operations for transporting solid wastes, load balancing within vehicles, managing fuel consumption and generating work schedules for the workers and vehicles. The total cost of the proposed collection systems is estimated to be around 80 million rupees for the fixed cost of storage bins, collection vehicles and a sanitary landfill and around 8.4 million rupees for the annual operating cost of crews, vehicles and landfill maintenance. A substantial amount (25 million rupees/yr) is currently being spent by AMC on waste collection alone without any proper storage/collection system and sanitary landfill. Over a projected period of 15 yr, the overall savings is thus very significant
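
    The minimum-cost routing step at the core of such a GIS model is, in essence, a shortest-path computation on the weighted road network. The toy sketch below uses networkx on a hypothetical network in which edge weights stand for travel cost or distance from collection bins to the landfill; the node names and weights are illustrative, not AMC data.

```python
import networkx as nx

# Hypothetical road network: nodes are bins/junctions, edge weights are
# travel costs (e.g., km or rupees); "landfill" is the disposal site.
G = nx.Graph()
edges = [
    ("bin_A", "junction_1", 1.2), ("bin_B", "junction_1", 0.8),
    ("bin_C", "junction_2", 1.5), ("junction_1", "junction_2", 2.0),
    ("junction_1", "landfill", 4.5), ("junction_2", "landfill", 3.1),
]
G.add_weighted_edges_from(edges)

for bin_node in ("bin_A", "bin_B", "bin_C"):
    path = nx.shortest_path(G, bin_node, "landfill", weight="weight")
    cost = nx.shortest_path_length(G, bin_node, "landfill", weight="weight")
    print(f"{bin_node}: {' -> '.join(path)} (cost {cost:.1f})")
```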

  6. Humidification of base flow gas during adult high-frequency oscillatory ventilation: an experimental study using a lung model.

    Science.gov (United States)

    Shiba, Naoki; Nagano, Osamu; Hirayama, Takahiro; Ichiba, Shingo; Ujike, Yoshihito

    2012-01-01

    In adult high-frequency oscillatory ventilation (HFOV) with an R100 artificial ventilator, exhaled gas from the patient's lung may warm the temperature probe and thereby disturb the humidification of the base flow (BF) gas. We measured the humidity of BF gas during HFOV with frequencies of 6, 8 and 10 Hz, maximum stroke volumes (SV) of 285, 205, and 160 ml at the respective frequencies, and BFs of 20, 30 and 40 l/min using an original lung model. The R100 device was equipped with a heated humidifier, Hummax II, consisting of a porous hollow fiber in the circuit. A 50-cm length of circuit was added between the temperature probe (located 50 cm proximal to the Y-piece) and the hollow fiber. The lung model was made of a plastic container and a circuit equipped with another Hummax II. The lung model temperature was controlled at 37°C. The Hummax II of the R100 was inactivated in study 1 and was set at 35°C or 37°C in study 2. The humidity was measured at the distal end of the added circuit in study 1 and at the proximal end in study 2. In study 1, humidity was detected at 6 Hz (SV 285 ml) and BF 20 l/min, indicating that exhaled gas from the lung model reached the temperature probe directly. In study 2, the absolute humidity of the BF gas decreased with increasing SV and with increasing BF, and it was lower at the 35°C setting. In this study setting, increasing the SV significantly reduced the humidification of the BF gas during HFOV with the R100.

  7. Model-based development and testing of advertising messages: A comparative study of two campaign proposals based on the MECCAS model and a conventional approach

    DEFF Research Database (Denmark)

    Bech-Larsen, Tino

    1. Traditionally, the development of advertising messages has been based on "creative independence", sometimes catalysed by inductively generated empirical data. Due to the recent intensified focus on advertising effectiveness, this state of affairs is now beginning to change. 2. Implementing theoretically valid and comprehensible guidelines for message development potentially enhances the effects of advertising messages and improves the possibility of measuring such effects. Moreover, such guidelines also have potential implications for the managerial communication processes (client-agency and intra-agency) involved in the development of advertising messages. 3. The purpose of the study described in this paper is to compare the development and effects of two campaign proposals, with the common aim of increasing the consumption of apples among young Danes (18 to 35 years of age). One...

  8. Innovations on a shoestring: a study of a collaborative community-based Aboriginal mental health service model in rural Canada

    Directory of Open Access Journals (Sweden)

    Graham Douglas

    2009-12-01

    Full Text Available Abstract Background Collaborative, culturally safe services that integrate clinical approaches with traditional Aboriginal healing have been hailed as promising approaches to ameliorate the high rates of mental health problems in Aboriginal communities in Canada. Overcoming significant financial and human resources barriers, a mental health team in northern Ontario is beginning to realize this ideal. We studied the strategies, strengths and challenges related to collaborative Aboriginal mental health care. Methods A participatory action research approach was employed to evaluate the Knaw Chi Ge Win services and their place in the broader mental health system. Qualitative methods were used as the primary source of data collection and included document review, ethnographic interviews with 15 providers and 23 clients; and 3 focus groups with community workers and managers. Results The Knaw Chi Ge Win model is an innovative, community-based Aboriginal mental health care model that has led to various improvements in care in a challenging rural, high needs environment. Formal opportunities to share information, shared protocols and ongoing education support this model of collaborative care. Positive outcomes associated with this model include improved quality of care, cultural safety, and integration of traditional Aboriginal healing with clinical approaches. Ongoing challenges include chronic lack of resources, health information and the still cursory understanding of Aboriginal healing and outcomes. Conclusions This model can serve to inform collaborative care in other rural and Indigenous mental health systems. Further research into traditional Aboriginal approaches to mental health is needed to continue advances in collaborative practice in a clinical setting.

  9. Theoretical study of two-dimensional phononic crystals with viscoelasticity based on fractional derivative models

    International Nuclear Information System (INIS)

    Liu Yaozong; Yu Dianlong; Zhao Honggang; Wen Jihong; Wen Xisen

    2008-01-01

    Wave propagation in two-dimensional phononic crystals (PCs) with viscoelasticity is investigated using a finite-difference-time-domain (FDTD) method. The viscoelasticity is evaluated using the Kelvin-Voigt model with fractional derivatives (FDs) so that both the dispersion and dissipation are considered. Numerical approximation of FDs is integrated into the FDTD scheme to simulate wave propagation in such PCs. All the constituent materials are treated as isotropic and homogeneous. The gaps are substantially displaced and widened and the attenuation is noticeably enhanced due to the dispersion and dissipation of host material and the complicated multiple scattering between scatterers. These results indicate that the viscoelasticity of the damping host has significant influence on wave propagation in PCs and should be considered
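
    The fractional-derivative Kelvin-Voigt law referred to above writes the stress as sigma(t) = E*eps(t) + eta * D^alpha eps(t). As a rough illustration of how such a term can be evaluated numerically (not the authors' FDTD code), the sketch below uses the Grünwald-Letnikov approximation of the fractional derivative; the material parameters and strain history are invented.

```python
# Minimal sketch of a fractional Kelvin-Voigt stress evaluation using the
# Gruenwald-Letnikov approximation of the fractional derivative:
#   sigma(t) = E*eps(t) + eta * D^alpha eps(t).
# Material values and the strain history are illustrative, not taken from the paper.
import numpy as np

def gl_fractional_derivative(f, dt, alpha):
    """Gruenwald-Letnikov approximation of D^alpha f on a uniform time grid."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                      # recursive GL binomial weights
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    d = np.array([np.dot(w[: i + 1], f[i::-1]) for i in range(n)])
    return d / dt**alpha

dt, alpha = 1e-3, 0.7          # time step [s], fractional order
E, eta = 2.0e9, 1.0e6          # elastic modulus [Pa], viscosity-like coefficient
t = np.arange(0, 0.05, dt)
eps = 1e-3 * np.sin(2 * np.pi * 100 * t)       # harmonic strain history

sigma = E * eps + eta * gl_fractional_derivative(eps, dt, alpha)
print(sigma[:5])
```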

  10. Analysis of amyotrophic lateral sclerosis as a multistep process: a population-based modelling study.

    Science.gov (United States)

    Al-Chalabi, Ammar; Calvo, Andrea; Chio, Adriano; Colville, Shuna; Ellis, Cathy M; Hardiman, Orla; Heverin, Mark; Howard, Robin S; Huisman, Mark H B; Keren, Noa; Leigh, P Nigel; Mazzini, Letizia; Mora, Gabriele; Orrell, Richard W; Rooney, James; Scott, Kirsten M; Scotton, William J; Seelen, Meinie; Shaw, Christopher E; Sidle, Katie S; Swingler, Robert; Tsuda, Miho; Veldink, Jan H; Visser, Anne E; van den Berg, Leonard H; Pearce, Neil

    2014-11-01

    Amyotrophic lateral sclerosis shares characteristics with some cancers, such as onset being more common in later life, progression usually being rapid, the disease affecting a particular cell type, and showing complex inheritance. We used a model originally applied to cancer epidemiology to investigate the hypothesis that amyotrophic lateral sclerosis is a multistep process. We generated incidence data by age and sex from amyotrophic lateral sclerosis population registers in Ireland (registration dates 1995-2012), the Netherlands (2006-12), Italy (1995-2004), Scotland (1989-98), and England (2002-09), and calculated age and sex-adjusted incidences for each register. We regressed the log of age-specific incidence against the log of age with least squares regression. We did the analyses within each register, and also did a combined analysis, adjusting for register. We identified 6274 cases of amyotrophic lateral sclerosis from a catchment population of about 34 million people. We noted a linear relationship between log incidence and log age in all five registers: England r²=0·95, Ireland r²=0·99, Italy r²=0·95, the Netherlands r²=0·99, and Scotland r²=0·97; overall r²=0·99. All five registers gave similar estimates of the linear slope ranging from 4·5 to 5·1, with overlapping confidence intervals. The combination of all five registers gave an overall slope of 4·8 (95% CI 4·5-5·0), with similar estimates for men (4·6, 4·3-4·9) and women (5·0, 4·5-5·5). A linear relationship between the log incidence and log age of onset of amyotrophic lateral sclerosis is consistent with a multistage model of disease. The slope estimate suggests that amyotrophic lateral sclerosis is a six-step process. Identification of these steps could lead to preventive and therapeutic avenues. UK Medical Research Council; UK Economic and Social Research Council; Ireland Health Research Board; The Netherlands Organisation for Health Research and Development (ZonMw); the
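
    The central computation is a least-squares regression of log incidence on log age; under a multistep model with n steps the expected slope is roughly n - 1, so a slope near 4.8 points to about six steps. A minimal sketch with made-up incidence values (not the registry data) follows.

```python
# Sketch of the core analysis step: least-squares regression of log incidence on log age.
# Under a multistep model with n steps the slope is approximately n - 1. The incidence
# values below are made-up illustrative numbers, not the registry data.
import numpy as np

age_midpoints = np.array([47.5, 52.5, 57.5, 62.5, 67.5, 72.5])   # years
incidence = np.array([0.9, 1.45, 2.25, 3.4, 4.9, 6.8])           # per 100,000 person-years

slope, intercept = np.polyfit(np.log(age_midpoints), np.log(incidence), deg=1)
print(f"slope = {slope:.2f}  ->  implied number of steps ~ {slope + 1:.1f}")
```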

  11. Study of the Entrepreneurship in Universities as Learning Organization Based on Senge Model

    Science.gov (United States)

    Nejad, Bahareh Azizi; Abbaszadeh, Mir Mohammad Seiied; Hassani, Mohammad; Bernousi, Iraj

    2012-01-01

    Learning organization and entrepreneurship are the most important issues that are focused on different themes in management. The purpose of present research was to study the relationship between learning organization elements and entrepreneurship among academic faculty members of the West Azarbaijan State Universities. The research method was…

  12. A Longitudinal Study of a 5th Grade Science Curriculum Based on the 5E Model

    Science.gov (United States)

    Scott, Timothy P.; Schroeder, Carolyn; Tolson, Homer; Huang, Tse-Yang; Williams, Omah M.

    2014-01-01

    The Center for Mathematics and Science Education at Texas A&M University contracted with Region 4 Education Service Center (ESC) and a large, diverse school district to conduct a longitudinal study from 2005-2009. The state achievement test scores of 5th graders who were taught using a Grade 5 science textbook designed by Region 4 ESC were…

  13. Evaluating Teleworkers' Acceptance of Mobile Technology: A Study Based on the Utaut Model

    Science.gov (United States)

    Mills, Jamia Sharie

    2016-01-01

    Mobile technology has provided flexible methods for employees to complete work-related tasks without being tied to an office. Research has predicted the level of training on mobile technology may impact a user's ability to complete work responsibilities accurately. This study intended to examine what behavior factors from the unified theory of…

  14. A Study towards Building An Optimal Graph Theory Based Model For The Design of Tourism Website

    Science.gov (United States)

    Panigrahi, Goutam; Das, Anirban; Basu, Kajla

    2010-10-01

    An effective tourism website is key to attracting tourists from different parts of the world. Here we identify the factors that improve the effectiveness of a website by considering it as a graph, where web pages (including the homepage) are the nodes and hyperlinks are the edges between the nodes. In this model, the design constraints for building a tourism website are taken into consideration. Our objectives are to build a framework for an effective tourism website that provides an adequate level of information and service and also enables users to reach the desired page with minimal loading time. An information hierarchy specifying the upper limit on the outgoing links of a page has also been proposed; following this hierarchy, a web developer can prepare an effective tourism website. Loading time depends on page size and network traffic; we assume network traffic to be uniform, so loading time is directly proportional to page size. The approach quantifies the link structure of a tourism website, and we also propose a page size distribution pattern for a tourism website.
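
    A minimal sketch of this graph formulation is given below, assuming hypothetical page names, page sizes, and an out-degree limit; it checks the outgoing-link constraint and uses a page-size-weighted shortest path as a loading-time proxy.

```python
# Illustrative sketch of the record's idea: a tourism website as a directed graph whose
# nodes are pages and whose edges are hyperlinks, with an upper limit on outgoing links
# and a loading "cost" proportional to page size. Page names and sizes are hypothetical.
import networkx as nx

MAX_OUT_LINKS = 4                      # assumed limit from the information hierarchy
page_size_kb = {"home": 300, "hotels": 180, "attractions": 220, "booking": 150}

site = nx.DiGraph()
for page, size in page_size_kb.items():
    site.add_node(page, size=size)
site.add_edges_from([("home", "hotels"), ("home", "attractions"),
                     ("hotels", "booking"), ("attractions", "booking")])

# Check the out-degree constraint of the information hierarchy.
violations = [p for p, deg in site.out_degree() if deg > MAX_OUT_LINKS]
print("pages exceeding the outgoing-link limit:", violations or "none")

# Loading-time proxy: sum of the sizes of the pages entered on the way to the target.
def loading_cost(source, target):
    return nx.shortest_path_length(site, source, target,
                                   weight=lambda u, v, d: page_size_kb[v])

print("home -> booking load proxy (kB):", loading_cost("home", "booking"))
```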

  15. A study of internal energy relaxation in shocks using molecular dynamics based models

    International Nuclear Information System (INIS)

    Li, Zheng; Parsons, Neal; Levin, Deborah A.

    2015-01-01

    Recent potential energy surfaces (PESs) for the N2 + N and N2 + N2 systems are used in molecular dynamics (MD) to simulate rates of vibrational and rotational relaxations for conditions that occur in hypersonic flows. For both chemical systems, it is found that the rotational relaxation number increases with the translational temperature and decreases as the rotational temperature approaches the translational temperature. The vibrational relaxation number is observed to decrease with translational temperature and approaches the rotational relaxation number in the high temperature region. The rotational and vibrational relaxation numbers are generally larger in the N2 + N2 system. MD-quasi-classical trajectory (QCT) with the PESs is also used to calculate the V-T transition cross sections, the collision cross section, and the dissociation cross section for each collision pair. Direct simulation Monte Carlo (DSMC) results for hypersonic flow over a blunt body with the total collision cross section from MD/QCT simulations, Larsen-Borgnakke with new relaxation numbers, and the N2 dissociation rate from MD/QCT show a profile with a decreased translational temperature and a rotational temperature close to vibrational temperature. The results demonstrate that many of the physical models employed in DSMC should be revised as fundamental potential energy surfaces suitable for high temperature conditions become available

  16. Nursing students learning the pharmacology of diabetes mellitus with complexity-based computerized models: A quasi-experimental study.

    Science.gov (United States)

    Dubovi, Ilana; Dagan, Efrat; Sader Mazbar, Ola; Nassar, Laila; Levy, Sharona T

    2018-02-01

    Pharmacology is a crucial component of medication administration in nursing, yet nursing students generally find it difficult and self-rate their pharmacology skills as low. The aim was to evaluate nursing students learning pharmacology with the Pharmacology Inter-Leaved Learning-Cells environment, a novel approach to modeling biochemical interactions using a multiscale, computer-based model with a complexity perspective based on a small set of entities and simple rules. This environment represents molecules, organelles and cells to enhance the understanding of cellular processes, and combines these cells at a higher scale to obtain whole-body interactions. Participants were sophomore nursing students who learned the pharmacology of diabetes mellitus with the Pharmacology Inter-Leaved Learning-Cells environment (experimental group; n=94) or via a lecture-based curriculum (comparison group; n=54). A quasi-experimental pre- and post-test design was conducted. The Pharmacology-Diabetes-Mellitus questionnaire and the course's final exam were used to evaluate students' knowledge of the pharmacology of diabetes mellitus. Conceptual learning was significantly higher for the experimental than for the comparison group on the course final exam scores (unpaired t=-3.8). Learning with complexity-based computerized models is highly effective and enhances the understanding of moving between micro and macro levels of biochemical phenomena, which is in turn related to better understanding of medication actions. Moreover, the Pharmacology Inter-Leaved Learning-Cells approach provides a more general reasoning scheme for biochemical processes, which enhances pharmacology learning beyond the specific topic learned. The present study implies that a deeper understanding of pharmacology will support nursing students' clinical decisions and empower their proficiency in medication administration. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. A SERVQUAL Rule Base Model of the Asean University Network for Study Program Quality Assessment

    Directory of Open Access Journals (Sweden)

    Esti Wijayanti

    2016-05-01

    Full Text Available AUN (Asean University Network) and ABET (Accreditation Board for Engineering and Technology) are well-known non-profit accreditation organizations. AUN assessment uses fifteen criteria: Expected Learning Outcomes, Programme Specification, Programme Structure and Content, Teaching and Learning Strategy, Student Assessment, Academic Staff Quality, Support Staff Quality, Student Quality, Student Advice and Support, Facilities and Infrastructure, Quality Assurance of Teaching/Learning Process, Staff Development Activities, Stakeholders Feedback, Output, and Stakeholders Satisfaction, each scored on a 7-point scale. In this study, the fifteen AUN criteria are grouped into the five SERVQUAL dimensions of assurance, empathy, responsiveness, reliability and facility in order to make the assessment process easier. The results indicate that the proposed method can be used to evaluate an education program. Validation using AUN data shows that the SERVQUAL rule base for the Asean University Network produces almost the same pattern as the AUN assessment, with a correlation value of 0.985, which is acceptable because its validity reaches 97%.

  18. Dynamics of sustained use and abandonment of clean cooking systems: study protocol for community-based system dynamics modeling.

    Science.gov (United States)

    Kumar, Praveen; Chalise, Nishesh; Yadama, Gautam N

    2016-04-26

    More than 3 billion of the world's population are affected by household air pollution from relying on unprocessed solid fuels for heating and cooking. Household air pollution is harmful to human health, climate, and environment. Sustained uptake and use of cleaner cooking technologies and fuels are proposed as solutions to this problem. In this paper, we present our study protocol aimed at understanding multiple interacting feedback mechanisms involved in the dynamic behavior between social, ecological, and technological systems driving sustained use or abandonment of cleaner cooking technologies among the rural poor in India. This study uses a comparative case study design to understand the dynamics of sustained use or abandonment of cleaner cooking technologies and fuels in four rural communities of Rajasthan, India. The study adopts a community based system dynamics modeling approach. We describe our approach of using community based system dynamics with rural communities to delineate the feedback mechanisms involved in the uptake and sustainment of clean cooking technologies. We develop a reference mode with communities showing the trend over time of use or abandonment of cleaner cooking technologies and fuels in these communities. Subsequently, the study develops a system dynamics model with communities to understand the complex sub-systems driving the behavior in these communities as reflected in the reference mode. We use group model building techniques to facilitate participation of relevant stakeholders in the four communities and elicit a narrative describing the feedback mechanisms underlying sustained adoption or abandonment of cleaner cooking technologies. In understanding the dynamics of feedback mechanisms in the uptake and exclusive use of cleaner cooking systems, we increase the likelihood of dissemination and implementation of efficacious interventions into everyday settings to improve the health and wellbeing of women and children most affected
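
    For readers unfamiliar with system dynamics, the sketch below shows the kind of stock-and-flow structure such a model formalizes: households move between "using" and "abandoned" states through adoption and abandonment flows. It is purely illustrative, not the model developed with the communities, and all parameters are invented.

```python
# Minimal stock-and-flow sketch (not the communities' model) of sustained use versus
# abandonment of clean cookstoves: adoption is driven by word of mouth between users and
# non-users, abandonment by breakdown or fuel-cost pressure. Parameters are illustrative.
households = 1000
users, non_users = 50.0, 950.0
adoption_contact_rate = 0.004      # adoptions per user/non-user contact per month
abandonment_rate = 0.03            # fraction of users abandoning per month
dt, months = 1.0, 60

trajectory = []
for _ in range(int(months / dt)):
    adoption = adoption_contact_rate * users * non_users / households
    abandonment = abandonment_rate * users
    users += dt * (adoption - abandonment)
    non_users = households - users
    trajectory.append(users)

print(f"households still using the clean stove after {months} months: {trajectory[-1]:.0f}")
```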

  19. AlgiMatrix™ based 3D cell culture system as an in-vitro tumor model for anticancer studies.

    Directory of Open Access Journals (Sweden)

    Chandraiah Godugu

    Full Text Available Three-dimensional (3D) in-vitro cultures are recognized for recapitulating the physiological microenvironment and exhibiting high concordance with in-vivo conditions. Taking advantage of 3D culture, we have developed an in-vitro tumor model for anticancer drug screening. Cancer cells grown in 6- and 96-well AlgiMatrix™ scaffolds formed multicellular spheroids in the size range of 100-300 µm. Spheroids were grown over two weeks in culture without compromising their growth characteristics. Different marketed anticancer drugs were screened by incubating them for 24 h at days 7, 9 and 11 in 3D cultures, and cytotoxicity was measured by the AlamarBlue® assay. The effectiveness of anticancer drug treatments was measured based on spheroid number and size distribution. Apoptotic and anti-apoptotic markers were evaluated by immunohistochemistry and RT-PCR. The 3D results were compared with conventional 2D monolayer cultures. Cellular uptake studies for a drug (Doxorubicin) and a nanoparticle formulation (NLC) were done using spheroids. IC50 values for anticancer drugs were significantly higher in AlgiMatrix™ systems compared to 2D culture models. Cleaved caspase-3 expression was significantly decreased (2.09- and 2.47-fold for 5-Fluorouracil and Camptothecin, respectively) in H460 spheroid cultures compared to the 2D culture system. The cytotoxicity, spheroid size distribution, immunohistochemistry, RT-PCR and nanoparticle penetration data suggest that in-vitro tumor models show higher resistance to anticancer drugs, supporting the view that 3D culture is a better model for the cytotoxic evaluation of anticancer drugs in vitro. The results from our studies are useful for developing a high-throughput in-vitro tumor model to study the effects of various anticancer agents and the molecular pathways affected by anticancer drugs and formulations.
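
    As an illustration of how an IC50 is typically extracted from viability readouts such as the AlamarBlue® data described above, the sketch below fits a four-parameter logistic (Hill) curve with scipy; the concentrations and viabilities are invented, not data from this study.

```python
# Hedged sketch of IC50 estimation from dose-response viability data by fitting a
# four-parameter logistic (Hill) curve. All numbers are invented illustrative values.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, hill_slope):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill_slope)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # drug concentration, uM
viability = np.array([98, 95, 90, 78, 55, 32, 15, 8])           # % of untreated control

params, _ = curve_fit(hill, conc, viability,
                      p0=[5.0, 100.0, 1.0, 1.0],
                      bounds=([0, 50, 0.001, 0.1], [50, 120, 100, 10]))
print(f"estimated IC50 = {params[2]:.2f} uM")
```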

  20. Experiences of Community-Living Older Adults Receiving Integrated Care Based on the Chronic Care Model: A Qualitative Study.

    Science.gov (United States)

    Spoorenberg, Sophie L W; Wynia, Klaske; Fokkens, Andrea S; Slotman, Karin; Kremer, Hubertus P H; Reijneveld, Sijmen A

    2015-01-01

    Integrated care models aim to solve the problem of fragmented and poorly coordinated care in current healthcare systems. These models aim to be patient-centered by providing continuous and coordinated care and by considering the needs and preferences of patients. The objective of this study was to evaluate the opinions and experiences of community-living older adults with regard to integrated care and support, along with the extent to which it meets their health and social needs. Semi-structured interviews were conducted with 23 older adults receiving integrated care and support through "Embrace," an integrated care model for community-living older adults that is based on the Chronic Care Model and a population health management model. Embrace is currently fully operational in the northern region of the Netherlands. Data analysis was based on the grounded theory approach. Responses of participants concerned two focus areas: 1) Experiences with aging, with the themes "Struggling with health," "Increasing dependency," "Decreasing social interaction," "Loss of control," and "Fears;" and 2) Experiences with Embrace, with the themes "Relationship with the case manager," "Interactions," and "Feeling in control, safe, and secure". The prospect of becoming dependent and losing control was a key concept in the lives of the older adults interviewed. Embrace reinforced the participants' ability to stay in control, even if they were dependent on others. Furthermore, participants felt safe and secure, in contrast to the fears of increasing dependency within the standard care system. The results indicate that integrated care and support provided through Embrace met the health and social needs of older adults, who were coping with the consequences of aging.

  1. Study of cumulative fatigue damage detection for used parts with nonlinear output frequency response functions based on NARMAX modelling

    Science.gov (United States)

    Huang, Honglan; Mao, Hanying; Mao, Hanling; Zheng, Weixue; Huang, Zhenfeng; Li, Xinxin; Wang, Xianghong

    2017-12-01

    Cumulative fatigue damage detection for used parts plays a key role in remanufacturing engineering and is related to the service safety of the remanufactured parts. In light of the nonlinear properties of used parts caused by cumulative fatigue damage, a detection approach based on nonlinear output frequency response functions (NOFRFs) offers a way to solve this key problem. First, a modified PSO-adaptive lasso algorithm is introduced to improve the accuracy of the NARMAX model under impulse hammer excitation; then, an effective new algorithm is derived to estimate the nonlinear output frequency response functions under rectangular pulse excitation, and an NOFRF-based index is introduced to detect cumulative fatigue damage in used parts. On this basis, a novel damage detection approach that integrates the NARMAX model and the rectangular pulse is proposed for NOFRF identification and cumulative fatigue damage detection of used parts. Finally, experimental studies of fatigued plate specimens and used connecting rod parts are conducted to verify the validity of the novel approach. The results reveal that the new approach can detect cumulative fatigue damage of used parts effectively and efficiently, and that the values of the NOFRF-based index can be used to distinguish different degrees of fatigue damage or working time. Since the proposed approach can extract the nonlinear properties of a system from only a single excitation of the inspected system, it shows great promise for remanufacturing engineering applications.
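
    As a rough, simplified illustration of the identification step (not the authors' PSO-adaptive lasso or NOFRF estimator), the sketch below fits a small polynomial NARX model to synthetic input-output data using an ordinary L1-penalised (lasso) regression over lagged regressors.

```python
# Rough sketch, under stated assumptions: identify a simple polynomial NARX model
#   y(k) = f(y(k-1), y(k-2), u(k-1), u(k-2))
# by building lagged polynomial regressors and selecting terms with a lasso fit.
# The synthetic system below is invented and much simpler than the paper's NARMAX models.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n = 500
u = rng.normal(size=n)                          # excitation signal
y = np.zeros(n)
for k in range(2, n):                           # synthetic weakly nonlinear system
    y[k] = 0.6 * y[k - 1] - 0.2 * y[k - 2] + 0.8 * u[k - 1] + 0.05 * y[k - 1] ** 2

# Lagged regressor matrix [y(k-1), y(k-2), u(k-1), u(k-2)] and its polynomial expansion.
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
target = y[2:]
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)

model = Lasso(alpha=1e-3).fit(X_poly, target)
print("number of selected model terms:", np.count_nonzero(model.coef_))
```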

  2. Case study method and problem-based learning: utilizing the pedagogical model of progressive complexity in nursing education.

    Science.gov (United States)

    McMahon, Michelle A; Christopher, Kimberly A

    2011-08-19

    As the complexity of health care delivery continues to increase, educators are challenged to determine educational best practices to prepare BSN students for the ambiguous clinical practice setting. Integrative, active, and student-centered curricular methods are encouraged to foster student ability to use clinical judgment for problem solving and informed clinical decision making. The proposed pedagogical model of progressive complexity in nursing education suggests gradually introducing students to complex and multi-contextual clinical scenarios through the utilization of case studies and problem-based learning activities, with the intention to transition nursing students into autonomous learners and well-prepared practitioners at the culmination of a nursing program. Exemplar curricular activities are suggested to potentiate student development of a transferable problem solving skill set and a flexible knowledge base to better prepare students for practice in future novel clinical experiences, which is a mutual goal for both educators and students.

  3. Biological methanation of hydrogen within biogas plants: A model-based feasibility study

    International Nuclear Information System (INIS)

    Bensmann, A.; Hanke-Rauschenbach, R.; Heyer, R.; Kohrs, F.; Benndorf, D.; Reichl, U.; Sundmacher, K.

    2014-01-01

    Highlights: • Simulation study of direct methanation of hydrogen within biogas plants. • In stationary operation there are two limitations, namely a biological limit and a transfer limit. • The biological limit is at 4 m³ H2 per m³ CO2 due to stoichiometry. • The dynamic behaviour shows three qualitatively different step responses. • A simple control scheme to meet the output quality requirement was developed. - Abstract: One option to utilize excess electric energy is its conversion to hydrogen and the subsequent methanation. An alternative to the classical chemical Sabatier process is biological methanation (methanogenesis) within biogas plants. In conventional biogas plants, methane and carbon dioxide are produced. The latter can be directly converted to methane by feeding hydrogen into the reactor, since hydrogenotrophic bacteria are present. In the present contribution, a comprehensive simulation study with respect to stationary operating conditions and disturbances is presented. It reveals two qualitatively different limitations, namely a biological limit (at approximately 4 m³ H2 per m³ CO2, corresponding to 4.2 m³ H2 (STP) per m³ of liquid per day) as well as a transfer limit. A parameter region for safe operation was defined. Temporary operation under conditions that are infeasible at steady state was analysed, and three qualitatively different disturbances can be distinguished; for one of these, operation for several days is possible. On the basis of these results, a controller was proposed and tested that meets the demands on the conversion of hydrogen and also prevents washout of the microbial community due to hydrogen overload.
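
    The biological limit of roughly 4 m³ of H2 per m³ of CO2 follows directly from the stoichiometry of hydrogenotrophic methanogenesis, which has the same overall balance as the Sabatier reaction:

```latex
% Four volumes of H2 are consumed per volume of CO2, which is why the biological limit
% reported above sits near 4 m^3 H2 per m^3 CO2.
\[
  4\,\mathrm{H_2} + \mathrm{CO_2} \;\longrightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O}
\]
```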

  4. Growth dependence of conjugation explains limited plasmid invasion in biofilms: an individual‐based modelling study

    DEFF Research Database (Denmark)

    Merkey, Brian; Lardon, Laurent; Seoane, Jose Miguel

    2011-01-01

    Plasmid invasion in biofilms is often surprisingly limited in spite of the close contact of cells in a biofilm. We hypothesized that this poor plasmid spread into deeper biofilm layers is caused by a dependence of conjugation on the growth rate (relative to the maximum growth rate) of the donor… We find that invasion of a resident biofilm is indeed limited when plasmid transfer depends on growth, but not so in the absence of growth dependence. Using sensitivity analysis we also find that parameters related to timing (i.e. a lag before the transconjugant can transfer, transfer proficiency… and scan speed) and spatial reach (EPS yield, conjugal pilus length) are more important for successful plasmid invasion than the recipients' growth rate or the probability of segregational loss. While this study identifies one factor that can limit plasmid invasion in biofilms, the new individual…

  5. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    Directory of Open Access Journals (Sweden)

    Han Bossier

    2018-01-01

    Full Text Available Given the increasing number of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome of a coordinate-based meta-analysis. More particularly, we consider the influence of the chosen group-level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which uses only peak locations, versus fixed effects and random effects meta-analysis, which take into account both peak location and height], and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies than other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results.
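
    The random-effects combination of study-level effects mentioned above can be illustrated with the standard DerSimonian-Laird estimator; the sketch below uses invented effect sizes and variances rather than fMRI data, and is not the authors' simulation code.

```python
# Illustrative random-effects meta-analysis (DerSimonian-Laird estimator) of study-level
# effects, the kind of model contrasted with fixed effects and ALE above.
# Effect sizes and variances are invented, not fMRI data.
import numpy as np

effects = np.array([0.42, 0.31, 0.55, 0.27, 0.48])       # study-level effect estimates
variances = np.array([0.020, 0.015, 0.030, 0.025, 0.018])

w = 1.0 / variances                                       # fixed-effects weights
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)                    # heterogeneity statistic
k = len(effects)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (variances + tau2)                           # random-effects weights
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f} (tau^2 = {tau2:.4f})")
```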

  6. A study of Ground Source Heat Pump based on a heat infiltrates coupling model established with FEFLOW

    Science.gov (United States)

    Chen, H.; Hu, C.; Chen, G.; Zhang, Q.

    2017-12-01

    Geothermal heat is a viable source of energy, and its environmental impact in terms of CO2 emissions is significantly lower than that of conventional fossil fuels. It is therefore vital that engineers acquire a proper understanding of the Ground Source Heat Pump (GSHP). In this study, a model of the borehole exchanger under pure heat conduction and under coupled conduction and groundwater seepage (heat infiltration) was established with FEFLOW. Energy efficiency, heat transfer endurance and heat transfer per unit depth were introduced to quantify energy efficiency and the endurance period. The heat transfer between the soil and the working fluid was analyzed for a borehole heat exchanger (BHE) in soil with and without groundwater seepage. Based on the model, the variation of energy efficiency and heat transfer endurance with conditions including the BHE configuration, soil properties and thermal load characteristics was discussed. Particular attention was paid to the heat transfer process in multi-layer soil in which one layer carries groundwater flow, and the influence of thermal dispersivity on heat transfer performance was also analyzed. The final results show that the heat infiltration coupling model established in this context is reasonable and can be applied to engineering design.

  7. Model-Based Systems Engineering for Capturing Mission Architecture System Processes with an Application Case Study - Orion Flight Test 1

    Science.gov (United States)

    Bonanne, Kevin H.

    2011-01-01

    Model-based Systems Engineering (MBSE) is an emerging methodology that can be leveraged to enhance many system development processes. MBSE allows for the centralization of an architecture description that would otherwise be stored in various locations and formats, thus simplifying communication among the project stakeholders, inducing commonality in representation, and expediting report generation. This paper outlines the MBSE approach taken to capture the processes of two different, but related, architectures by employing the Systems Modeling Language (SysML) as a standard for architecture description and the modeling tool MagicDraw. The overarching goal of this study was to demonstrate the effectiveness of MBSE as a means of capturing and designing a mission systems architecture. The first portion of the project focused on capturing the necessary system engineering activities that occur when designing, developing, and deploying a mission systems architecture for a space mission. The second part applies activities from the first to an application problem - the system engineering of the Orion Flight Test 1 (OFT-1) End-to-End Information System (EEIS). By modeling the activities required to create a space mission architecture and then implementing those activities in an application problem, the utility of MBSE as an approach to systems engineering can be demonstrated.

  8. Modeling of Individual and Organizational Factors Affecting Traumatic Occupational Injuries Based on the Structural Equation Modeling: A Case Study in Large Construction Industries.

    Science.gov (United States)

    Mohammadfam, Iraj; Soltanzadeh, Ahmad; Moghimbeigi, Abbas; Akbarzadeh, Mehdi

    2016-09-01

    Individual and organizational factors influence traumatic occupational injuries. The aim of the present study was a short path analysis of the severity of occupational injuries based on individual and organizational factors. This cross-sectional analytical study covered traumatic occupational injuries over a ten-year timeframe in 13 large Iranian construction industries. Modeling and data analysis were done using the structural equation modeling (SEM) approach and the IBM SPSS AMOS statistical software version 22.0, respectively. The mean age and working experience of the injured workers were 28.03 ± 5.33 and 4.53 ± 3.82 years, respectively. Construction and installation activities accounted for 64.4% and 18.1% of the traumatic occupational injuries, respectively. The SEM findings showed that individual, organizational and accident-type factors were significant determinants of the severity of occupational injuries and accidents in large construction industries.

  9. Burnout Study of Clinical Nurses in Vietnam: Development of Job Burnout Model Based on Leiter and Maslach's Theory

    Directory of Open Access Journals (Sweden)

    Huong Thi Thu Nguyen, RN, MPH

    2018-03-01

    Full Text Available Purpose: This study aimed to create a Vietnamese version of both the Maslach Burnout Inventory-General Survey (MBI-GS) and the Areas of Worklife Scale (AWS), to assess the burnout state of Vietnamese clinical nurses, and to develop a causal model of burnout of clinical nurses. Methods: We conducted a descriptive design using a cross-sectional survey. The questionnaire was hand-delivered by nursing departments to 500 clinical nurses in three hospitals. The Vietnamese MBI-GS and AWS were then examined for reliability and validity. We used the revised "exhaustion + 1" burnout classification to assess burnout state. We performed path analysis to develop a Vietnamese causal model based on the original model of Leiter and Maslach's theory. Results: We found that both scales were reliable and valid for assessing burnout. Among nurse participants, the percentage of severe burnout was 0.7%, that of burnout was 15.8%, and 17.2% of nurses were exhausted. The best predictor of burnout was the "on-duty work schedule", under which clinical nurses have to work for 24 hours. In the causal model, we also found pathways that were similar to and different from the original model. Conclusion: The Vietnamese MBI-GS and AWS are applicable to research on occupational stress. Nearly one-fifth of Vietnamese clinical nurses were working in a burnout state. The causal model suggested a range of factors resulting in burnout, and specific solutions need to be considered to prevent the burnout problem. Keywords: burnout, nurses, Vietnam

  10. Development of a risk prediction model for lung cancer: The Japan Public Health Center-based Prospective Study.

    Science.gov (United States)

    Charvat, Hadrien; Sasazuki, Shizuka; Shimazu, Taichi; Budhathoki, Sanjeev; Inoue, Manami; Iwasaki, Motoki; Sawada, Norie; Yamaji, Taiki; Tsugane, Shoichiro

    2018-03-01

    Although the impact of tobacco consumption on the occurrence of lung cancer is well-established, risk estimation could be improved by risk prediction models that consider various smoking habits, such as quantity, duration, and time since quitting. We constructed a risk prediction model using a population of 59 161 individuals from the Japan Public Health Center (JPHC) Study Cohort II. A parametric survival model was used to assess the impact of age, gender, and smoking-related factors (cumulative smoking intensity measured in pack-years, age at initiation, and time since cessation). Ten-year cumulative probability of lung cancer occurrence estimates were calculated with consideration of the competing risk of death from other causes. Finally, the model was externally validated using 47 501 individuals from JPHC Study Cohort I. A total of 1210 cases of lung cancer occurred during 986 408 person-years of follow-up. We found a dose-dependent effect of tobacco consumption with hazard ratios for current smokers ranging from 3.78 (2.00-7.16) for cumulative consumption ≤15 pack-years to 15.80 (9.67-25.79) for >75 pack-years. Risk decreased with time since cessation. Ten-year cumulative probability of lung cancer occurrence estimates ranged from 0.04% to 11.14% in men and 0.07% to 6.55% in women. The model showed good predictive performance regarding discrimination (cross-validated c-index = 0.793) and calibration (cross-validated χ² = 6.60; P-value = .58). The model still showed good discrimination in the external validation population (c-index = 0.772). In conclusion, we developed a prediction model to estimate the probability of developing lung cancer based on age, gender, and tobacco consumption. This model appears useful in encouraging high-risk individuals to quit smoking and undergo increased surveillance. © 2018 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of Japanese Cancer Association.

  11. Application of a Microstructure-Based ISV Plasticity Damage Model to Study Penetration Mechanics of Metals and Validation through Penetration Study of Aluminum

    Directory of Open Access Journals (Sweden)

    Yangqing Dou

    2017-01-01

    Full Text Available A previously developed microstructure-based internal state variable (ISV) plasticity damage model is used for the first time to simulate the penetration mechanics of aluminum and determine its penetration properties. The ISV damage model seeks to explain the interplay between physics at different length scales that governs the failure and damage mechanisms of materials by linking their macroscopic failure and damage behavior with their micromechanical performance, such as void nucleation, growth, and coalescence. Within the continuum modeling framework, microstructural features of materials are represented using a set of ISVs, and rate equations are employed to depict the damage history and evolution of the materials. For experimental calibration of this damage model, compression, tension, and torsion straining conditions are considered to distinguish damage evolution under different stress states. To demonstrate the reliability of the presented ISV model, the model is applied to studying the penetration mechanics of aluminum, and the numerical results are validated by comparison with simulation results from the Johnson-Cook model as well as analytical results calculated from an existing theoretical model.

  12. Modeling and real time simulation of an HVDC inverter feeding a weak AC system based on commutation failure study.

    Science.gov (United States)

    Mankour, Mohamed; Khiat, Mounir; Ghomri, Leila; Chaker, Abdelkader; Bessalah, Mourad

    2018-06-01

    This paper presents modeling and study of a 12-pulse HVDC (High Voltage Direct Current) link based on real-time simulation, where the HVDC inverter is connected to a weak AC system. To study the dynamic performance of the HVDC link, two serious kinds of disturbance are applied at the HVDC converters: the first is a single-phase-to-ground AC fault and the second is a DC-link-to-ground fault. The study is based on two different modes of analysis: the first tests the performance of the DC control, and the second focuses on the effect of the protection functions on system behavior. This real-time simulation considers the strength of the AC system to which the inverter is connected and its relation to the capacity of the DC link. The results obtained are validated by means of the RT-lab platform using the digital real-time simulator Hypersim (OP-5600). The results show the effect of the DC control and the influence of the protection functions in reducing the probability of commutation failures and in helping the inverter recover from commutation failure even when the DC control fails to eliminate it. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Students’ Perceptions About Learning Environment of a Distance Course Based on Technology Acceptance Model: A Descriptive Study

    Directory of Open Access Journals (Sweden)

    Erman UZUN

    2013-03-01

    Full Text Available The Technology Acceptance Model (TAM) is a measure for assessing the underlying reasons for the use of a technology. In this study an extended version of TAM was used, composed of three factors: "perceived motivation towards the learning environment", "perceived usefulness" and "perceived ease of use". The learning environment of a distance course was investigated to capture students' perceptions. This distance course was delivered from one university to another via video-conferencing with the ITL Learning Gateway content management system during the whole semester. The participants were 32 first-year vocational higher education students. The descriptive findings revealed that students perceived each factor of TAM as having moderate advantages. It is believed that the underlying reason for this was the students' low computer competency and limited e-learning experience.

  14. Out of the net: An agent-based model to study human movements influence on local-scale malaria transmission.

    Directory of Open Access Journals (Sweden)

    Francesco Pizzitutti

    Full Text Available Though malaria control initiatives have markedly reduced malaria prevalence in recent decades, global eradication is far from actuality. Recent studies show that environmental and social heterogeneities in low-transmission settings have an increased weight in shaping malaria micro-epidemiology. New integrated and more localized control strategies should be developed and tested. Here we present a set of agent-based models designed to study the influence of local-scale human movements on local-scale malaria transmission in a typical Amazon environment, where malaria transmission is low and strongly connected with seasonal riverine flooding. The agent-based simulations show that overall malaria incidence is essentially not influenced by local-scale human movements. In contrast, the locations of malaria high-risk spatial hotspots depend heavily on human movements, because simulated malaria hotspots are mainly centered on farms, where laborers work during the day. The agent-based models are then used to test the effectiveness of two different malaria control strategies, both designed to reduce local-scale malaria incidence by targeting hotspots. In the first control scenario, people who enter at least once during the simulation into hotspots identified from the actual sites where individuals were infected are protected against mosquito bites. The second scenario involves treating people entering hotspots calculated under the assumption that the infection site of every infected individual is the household where that individual lives. Simulations show that both scenarios perform better in controlling malaria than a randomized treatment, although targeting household hotspots shows slightly better performance.

  15. Out of the net: An agent-based model to study human movements influence on local-scale malaria transmission.

    Science.gov (United States)

    Pizzitutti, Francesco; Pan, William; Feingold, Beth; Zaitchik, Ben; Álvarez, Carlos A; Mena, Carlos F

    2018-01-01

    Though malaria control initiatives have markedly reduced malaria prevalence in recent decades, global eradication is far from actuality. Recent studies show that environmental and social heterogeneities in low-transmission settings have an increased weight in shaping malaria micro-epidemiology. New integrated and more localized control strategies should be developed and tested. Here we present a set of agent-based models designed to study the influence of local-scale human movements on local-scale malaria transmission in a typical Amazon environment, where malaria transmission is low and strongly connected with seasonal riverine flooding. The agent-based simulations show that overall malaria incidence is essentially not influenced by local-scale human movements. In contrast, the locations of malaria high-risk spatial hotspots depend heavily on human movements, because simulated malaria hotspots are mainly centered on farms, where laborers work during the day. The agent-based models are then used to test the effectiveness of two different malaria control strategies, both designed to reduce local-scale malaria incidence by targeting hotspots. In the first control scenario, people who enter at least once during the simulation into hotspots identified from the actual sites where individuals were infected are protected against mosquito bites. The second scenario involves treating people entering hotspots calculated under the assumption that the infection site of every infected individual is the household where that individual lives. Simulations show that both scenarios perform better in controlling malaria than a randomized treatment, although targeting household hotspots shows slightly better performance.
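
    A highly simplified sketch of the mechanism described in these two records is given below: agents commute between home and a farm, infection risk is attached to locations, and movement therefore shifts where infections cluster. It is illustrative only; all parameters are invented and the authors' full model is far richer.

```python
# Highly simplified agent-based sketch (not the authors' model): people commute between
# home and a farm, per-day infection risk is attached to locations, and the location-wise
# tally shows where the infection "hotspot" appears. All parameters are illustrative.
import random

random.seed(1)
N_PEOPLE, DAYS = 200, 120
RISK = {"home": 0.001, "farm": 0.01}          # per-day infection probability by location

people = [{"works_on_farm": random.random() < 0.4, "infected": False}
          for _ in range(N_PEOPLE)]
infections_by_site = {"home": 0, "farm": 0}

for day in range(DAYS):
    for person in people:
        if person["infected"]:
            continue
        # Farm laborers spend weekdays at the farm, everyone else stays near home.
        location = "farm" if person["works_on_farm"] and day % 7 < 5 else "home"
        if random.random() < RISK[location]:
            person["infected"] = True
            infections_by_site[location] += 1

print("infections by site:", infections_by_site)
print("overall incidence:", sum(p["infected"] for p in people) / N_PEOPLE)
```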

  16. 2008 GEM Modeling Challenge: Metrics Study of the Dst Index in Physics-Based Magnetosphere and Ring Current Models and in Statistical and Analytic Specifications

    Science.gov (United States)

    Rastaetter, L.; Kuznetsova, M.; Hesse, M.; Pulkkinen, A.; Glocer, A.; Yu, Y.; Meng, X.; Raeder, J.; Wiltberger, M.; Welling, D.; hide

    2011-01-01

    In this paper the metrics-based results of the Dst part of the 2008-2009 GEM Metrics Challenge are reported. The Metrics Challenge asked modelers to submit results for 4 geomagnetic storm events and 5 different types of observations that can be modeled by statistical or climatological or physics-based (e.g. MHD) models of the magnetosphere-ionosphere system. We present the results of over 25 model settings that were run at the Community Coordinated Modeling Center (CCMC) and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations we use comparisons of one-hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of one-minute model data with the one-minute Dst index calculated by the United States Geological Survey (USGS).
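
    Typical metrics for such a challenge include the root-mean-square error and the prediction efficiency of the hourly averaged model Dst against the observed index; a minimal sketch with synthetic stand-in values (not challenge data) is shown below.

```python
# Sketch of the kind of metric used in such a challenge: root-mean-square error and
# prediction efficiency of hourly-averaged model Dst against the observed index.
# The two series below are synthetic stand-ins, not challenge data.
import numpy as np

observed_dst = np.array([-20, -45, -80, -120, -95, -60, -40, -25], dtype=float)  # nT
model_dst    = np.array([-15, -50, -70, -100, -90, -70, -35, -20], dtype=float)  # nT

rmse = np.sqrt(np.mean((model_dst - observed_dst) ** 2))
prediction_efficiency = 1.0 - np.sum((model_dst - observed_dst) ** 2) / \
                              np.sum((observed_dst - observed_dst.mean()) ** 2)
print(f"RMSE = {rmse:.1f} nT, prediction efficiency = {prediction_efficiency:.2f}")
```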

  17. Pedestrians' intention to jaywalk: Automatic or planned? A study based on a dual-process model in China.

    Science.gov (United States)

    Xu, Yaoshan; Li, Yongjuan; Zhang, Feng

    2013-01-01

    The present study investigates the determining factors of Chinese pedestrians' intention to violate traffic laws using a dual-process model. This model divides the cognitive processes of intention formation into controlled analytical processes and automatic associative processes. Specifically, the process explained by the augmented theory of planned behavior (TPB) is controlled, whereas the process based on past behavior is automatic. The results of a survey conducted on 323 adult pedestrian respondents showed that the two added TPB variables had different effects on the intention to violate, i.e., personal norms were significantly related to traffic violation intention, whereas descriptive norms were non-significant predictors. Past behavior significantly but uniquely predicted the intention to violate: the results of the relative weight analysis indicated that the largest percentage of variance in pedestrians' intention to violate was explained by past behavior (42%). According to the dual-process model, therefore, pedestrians' intention formation relies more on habit than on cognitive TPB components and social norms. The implications of these findings for the development of intervention programs are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Case study of atmospheric correction on CCD data of HJ-1 satellite based on 6S model

    International Nuclear Information System (INIS)

    Xue, Xiaoiuan; Meng, Oingyan; Xie, Yong; Sun, Zhangli; Wang, Chang; Zhao, Hang

    2014-01-01

    In this study, the atmospheric radiative transfer model 6S was used to simulate the radiative transfer process along the surface-atmosphere-sensor path. An algorithm based on a look-up table (LUT) generated with the 6S model was used to correct HJ-1 CCD images pixel by pixel. The effect of atmospheric correction on the CCD data of the HJ-1 satellite was then analyzed in terms of spectral curves and evaluated against measured reflectance acquired during the HJ-1B satellite overpass; finally, the normalized difference vegetation index (NDVI) before and after atmospheric correction was compared. The results showed: (1) Atmospheric correction of HJ-1 CCD data can reduce the "increase" effect of the atmosphere. (2) Apparent reflectance is higher than the 6S-corrected surface reflectance in bands 1-3 but lower in the near-infrared band; the corrected surface reflectance agrees well with the measured reflectance. (3) The NDVI increases significantly after atmospheric correction, which indicates that atmospheric correction can highlight the vegetation information.
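
    Point (3) concerns the NDVI, computed as (NIR - Red)/(NIR + Red); the sketch below contrasts NDVI from apparent (top-of-atmosphere) reflectance with NDVI from corrected surface reflectance, using invented reflectance values rather than HJ-1 CCD data.

```python
# Small sketch of the NDVI comparison described in point (3): NDVI computed from apparent
# (top-of-atmosphere) reflectance versus atmospherically corrected surface reflectance.
# The reflectance values are illustrative, not HJ-1 CCD data.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Apparent reflectance: red inflated and NIR depressed by the atmosphere.
red_toa, nir_toa = np.array([0.12, 0.10]), np.array([0.30, 0.35])
# After 6S-style correction: red decreases, NIR increases.
red_sr,  nir_sr  = np.array([0.08, 0.06]), np.array([0.34, 0.40])

print("NDVI before correction:", np.round(ndvi(nir_toa, red_toa), 3))
print("NDVI after correction: ", np.round(ndvi(nir_sr, red_sr), 3))
```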

  19. Clinical, laboratory, and demographic determinants of hospitalization due to dengue in 7613 patients: A retrospective study based on hierarchical models.

    Science.gov (United States)

    da Silva, Natal Santos; Undurraga, Eduardo A; da Silva Ferreira, Elis Regina; Estofolete, Cássia Fernanda; Nogueira, Maurício Lacerda

    2018-01-01

    In Brazil, the incidence of hospitalization due to dengue, as an indicator of severity, has increased drastically since 1998. The objective of our study was to identify risk factors associated with hospitalization related to dengue. We analyzed 7613 dengue cases confirmed via serology (ELISA), non-structural protein 1, or polymerase chain reaction amplification. We used a hierarchical framework to generate a multivariate logistic regression based on a variety of risk variables. This was followed by multiple statistical analyses to assess hierarchical model accuracy, variance, goodness of fit, and whether or not the model reliably represented the population. The final model, which included age, sex, ethnicity, previous dengue infection, hemorrhagic manifestations, plasma leakage, and organ failure, showed that all measured parameters, with the exception of previous dengue infection, were statistically significant. The presence of organ failure was associated with the highest risk of subsequent dengue hospitalization (OR=5·75; CI=3·53-9·37). Therefore, plasma leakage and organ failure were the main indicators of hospitalization due to dengue, although other variables of minor importance should also be considered when referring dengue patients to hospital treatment, which may lead to a reduction in avoidable deaths as well as costs related to dengue. Copyright © 2017 Elsevier B.V. All rights reserved.
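
    A minimal sketch of the kind of multivariate logistic regression used for such an analysis is shown below with statsmodels on a synthetic dataset; it does not reproduce the hierarchical block-entry structure of the original study, and all data are simulated.

```python
# Hedged sketch of a multivariate logistic regression of hospitalization (yes/no) on
# clinical predictors, fitted with statsmodels on simulated data. The hierarchical
# block-entry structure of the original analysis is not reproduced here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
plasma_leakage = rng.integers(0, 2, n)
organ_failure = rng.integers(0, 2, n)
age = rng.uniform(5, 80, n)

# Simulate outcomes from an assumed logistic model.
logit_p = -3.0 + 1.2 * plasma_leakage + 1.8 * organ_failure + 0.01 * age
hospitalized = rng.random(n) < 1 / (1 + np.exp(-logit_p))

X = sm.add_constant(np.column_stack([plasma_leakage, organ_failure, age]))
fit = sm.Logit(hospitalized.astype(float), X).fit(disp=False)
print(np.exp(fit.params[1:]))   # odds ratios for plasma leakage, organ failure, age
```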

  20. Hydroxyethylamine derivatives as HIV-1 protease inhibitors: a predictive QSAR modelling study based on Monte Carlo optimization.

    Science.gov (United States)

    Bhargava, S; Adhikari, N; Amin, S A; Das, K; Gayen, S; Jha, T

    2017-12-01

    Application of HIV-1 protease inhibitors (as an anti-HIV regimen) may serve as an attractive strategy for anti-HIV drug development. Several investigations suggest that there is a crucial need to develop novel protease inhibitors with higher potency and reduced toxicity. A Monte Carlo-optimized QSAR study was performed on 200 hydroxyethylamine derivatives with antiprotease activity. Twenty-one QSAR models with good statistical quality were developed from three different splits with various combinations of SMILES- and GRAPH-based descriptors. The best models from the different splits were selected on the basis of statistically validated characteristics of the test set and have the following statistical parameters: r² = 0.806, Q² = 0.788 (split 1); r² = 0.842, Q² = 0.826 (split 2); r² = 0.774, Q² = 0.755 (split 3). The structural attributes obtained from the best models were analysed to understand the structural requirements of the selected series for HIV-1 protease inhibitory activity. On the basis of the obtained structural attributes, 11 new compounds were designed, of which five were found to have better activity than the most active compound in the series.

  1. Partitioning and nanostructural evolution in model Ni-based superalloys containing W, Re, and Ru studied on a subnanometer scale

    International Nuclear Information System (INIS)

    Isheim, D.; Seidman, D.N.

    2004-01-01

    Full text: Modern Ni-based superalloys, for example Rene N6, rely on a complex microstructure and microchemistry, with up to 10 or more alloying additions, to achieve their superior mechanical and physical properties. Refractory metal additions are known to improve high-temperature creep resistance, and their influence and interactions with the various alloying additions have drawn much attention. We study the partitioning behavior of the alloying elements and the growth and coarsening kinetics of γ' (L1₂ structure) precipitates in a series of model superalloys containing W, Re, and Ru in the earlier stages of the transformation, with precipitates several tens of nanometers in diameter. The three-dimensional elemental spatial distribution with respect to the γ' (L1₂ structure) precipitates, their heterophase interfaces, and their temporal evolution during high-temperature aging are characterized by 3D atom-probe (3DAP) microscopy with subnanometer resolution. The overall microstructure is characterized by transmission electron microscopy (TEM), which helps in spanning the length scales. The experimental characterization provides important input parameters for modeling of partitioning and nanostructural evolution with ThermoCalc and PrecipiCalc and thus allows for a critical test of the predictive capabilities of these models. (author)

  2. A monopoly pricing model for diffusion maximization based on heterogeneous nodes and negative network externalities (Case study: A novel product)

    Directory of Open Access Journals (Sweden)

    Aghdas Badiee

    2018-10-01

    Social networks can provide sellers across the world with invaluable information about the structure of possible influences, whether positive or negative, among different members of a network, and this information can be used to maximize diffusion in the network. Here, a novel mathematical monopoly product pricing model is introduced for maximization of market share in a noncompetitive environment. In the proposed model, a customer’s decision to buy a product is based not only on the price, quality and time of need for the product but also on the positive and negative influences of his/her neighbors. Customers are therefore considered heterogeneous, and a referral bonus is granted to every customer whose neighbors also buy the product. The degree of influence is directly related to the intensity of the customers’ relationships. Finally, by applying the proposed model to a real case study, the optimal product sales policy, that is, the ratio of the product’s sale price to its cost, as well as the optimal referral bonus per customer, is obtained.

  3. An artificial neural network prediction model of congenital heart disease based on risk factors: A hospital-based case-control study.

    Science.gov (United States)

    Li, Huixia; Luo, Miyang; Zheng, Jianfei; Luo, Jiayou; Zeng, Rong; Feng, Na; Du, Qiyun; Fang, Junqun

    2017-02-01

    An artificial neural network (ANN) model was developed to predict the risk of congenital heart disease (CHD) in pregnant women. This hospital-based case-control study involved 119 CHD cases and 239 controls, all recruited from birth defect surveillance hospitals in Hunan Province between July 2013 and June 2014. All subjects were interviewed face-to-face to fill in a questionnaire that covered 36 CHD-related variables. The 358 subjects were randomly divided into a training set and a testing set at a ratio of 85:15. The training set was used to identify the significant predictors of CHD by univariate logistic regression analyses and to develop a standard feed-forward back-propagation neural network (BPNN) model for the prediction of CHD. The testing set was used to test and evaluate the performance of the ANN model. Univariate logistic regression analyses were performed in SPSS 18.0. The ANN models were developed in Matlab 7.1. The univariate logistic regression identified 15 predictors that were significantly associated with CHD, including education level (odds ratio = 0.55), gravidity (1.95), parity (2.01), history of abnormal reproduction (2.49), family history of CHD (5.23), maternal chronic disease (4.19), maternal upper respiratory tract infection (2.08), environmental pollution around the maternal dwelling place (3.63), maternal exposure to occupational hazards (3.53), maternal mental stress (2.48), paternal chronic disease (4.87), paternal exposure to occupational hazards (2.51), intake of vegetables/fruit (0.45), intake of fish/shrimp/meat/egg (0.59), and intake of milk/soymilk (0.55). After many trials, we selected a 3-layer BPNN model with 15, 12, and 1 neurons in the input, hidden, and output layers, respectively, as the best prediction model. The prediction model has accuracies of 0.91 and 0.86 on the training and testing sets, respectively. The sensitivity, specificity, and Youden index on the testing set (training set) are 0.78 (0.83), 0.90 (0.95), and 0
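    A 15-12-1 feed-forward network of the kind described can be sketched with common libraries. The snippet below uses scikit-learn's MLPClassifier as a stand-in for the Matlab BPNN of the study, with random placeholder data in place of the questionnaire variables; only the topology (15 inputs, 12 hidden neurons, 1 output) and the reported evaluation metrics are mirrored.

```python
# Sketch of a 15-12-1 feed-forward network for CHD risk prediction.
# scikit-learn's MLPClassifier stands in for the Matlab BPNN; the data are
# random placeholders shaped like the study (358 subjects, 15 predictors).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.random((358, 15))                      # placeholder predictor matrix
y = rng.integers(0, 2, 358)                    # placeholder case/control labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(12,), activation="logistic",
                    solver="adam", max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(sensitivity, specificity, sensitivity + specificity - 1)   # incl. Youden index
```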

  4. Nonparametric evaluation of quantitative traits in population-based association studies when the genetic model is unknown.

    Science.gov (United States)

    Konietschke, Frank; Libiger, Ondrej; Hothorn, Ludwig A

    2012-01-01

    Statistical association between a single nucleotide polymorphism (SNP) genotype and a quantitative trait in genome-wide association studies is usually assessed using a linear regression model, or, in the case of non-normally distributed trait values, using the Kruskal-Wallis test. While linear regression models assume an additive mode of inheritance via equi-distant genotype scores, the Kruskal-Wallis test merely tests global differences in trait values associated with the three genotype groups. Both approaches thus exhibit suboptimal power when the underlying inheritance mode is dominant or recessive. Furthermore, these tests do not perform well in the common situations when only a few trait values are available in a rare genotype category (imbalance), or when the values associated with the three genotype categories exhibit unequal variance (variance heterogeneity). We propose a maximum test based on a Marcus-type multiple contrast test for relative effect sizes. This test allows model-specific testing of either a dominant, additive or recessive mode of inheritance, and it is robust against variance heterogeneity. We show how to obtain mode-specific simultaneous confidence intervals for the relative effect sizes to aid in interpreting the biological relevance of the results. Further, we discuss the use of a related all-pairwise comparisons contrast test with range-preserving confidence intervals as an alternative to the Kruskal-Wallis heterogeneity test. We applied the proposed maximum test to the Bogalusa Heart Study dataset, and gained a remarkable increase in the power to detect association, particularly for rare genotypes. Our simulation study also demonstrated that the proposed non-parametric tests control the family-wise error rate in the presence of non-normality and variance heterogeneity, contrary to the standard parametric approaches. We provide a publicly available R library nparcomp that can be used to estimate simultaneous confidence intervals or compatible
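    The idea of a maximum test over genetic models can be illustrated with a much simpler parametric analogue: score the trait against dominant, additive and recessive genotype codings and adjust the maximum statistic by permutation. The sketch below does exactly that on placeholder data; it is not the Marcus-type relative-effect contrast test implemented in nparcomp, only an illustration of the "max over modes of inheritance" principle.

```python
# Simplified analogue of a genetic-model maximum test: correlate the trait with
# dominant, additive and recessive genotype codings and adjust the maximum
# statistic by permutation. (The paper's test uses nonparametric relative
# effects; this parametric sketch only illustrates the "max over models" idea.)
import numpy as np
from scipy import stats

def max_test(genotype, trait, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    codings = [
        (genotype > 0).astype(float),    # dominant
        genotype.astype(float),          # additive
        (genotype == 2).astype(float),   # recessive
    ]
    def t_max(y):
        return max(abs(stats.pearsonr(code, y)[0]) for code in codings)
    observed = t_max(trait)
    perm = np.array([t_max(rng.permutation(trait)) for _ in range(n_perm)])
    p_adjusted = (1 + (perm >= observed).sum()) / (1 + n_perm)
    return observed, p_adjusted

geno = np.random.randint(0, 3, 300)      # placeholder SNP genotypes coded 0/1/2
trait = np.random.normal(size=300)       # placeholder quantitative trait
print(max_test(geno, trait))
```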

  5. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU.

    Science.gov (United States)

    Zhao, Xu; Dou, Lihua; Su, Zhong; Liu, Ning

    2018-03-16

    A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in an application environment without assistant orientation, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method achieves autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and applies zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. Tests with the self-developed snake robot verify the proposed method, and the position error is less than 5% of the Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements for a snake robot to perform autonomous navigation and positioning in traditional applications, and it can be extended to other similar multi-link robots.
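    The EKF step that fuses a motion-characteristic constraint can be illustrated generically: the constraint is treated as a pseudo-measurement (here, zero lateral velocity in the body frame) and folded into the standard Kalman update. The state layout and noise values below are illustrative assumptions, not the paper's kinematics model.

```python
# Minimal EKF measurement update using a motion-constraint pseudo-measurement
# (zero lateral velocity in the body frame). State layout and noise values are
# illustrative assumptions, not the paper's snake-robot kinematics model.
import numpy as np

def ekf_update(x, P, z, H, R):
    """Standard EKF update: state x, covariance P, measurement z, model H, noise R."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# State: [x position, y position, heading, forward speed, lateral speed]
x = np.array([0.0, 0.0, 0.1, 0.3, 0.05])
P = np.eye(5) * 0.1

H = np.array([[0.0, 0.0, 0.0, 0.0, 1.0]])   # observe the lateral speed component
z = np.array([0.0])                          # constraint: lateral speed is (nearly) zero
R = np.array([[1e-4]])                       # how strictly the constraint is enforced

x, P = ekf_update(x, P, z, H, R)
print(x)    # the lateral-speed estimate is pulled towards zero
```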

  6. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU

    Directory of Open Access Journals (Sweden)

    Xu Zhao

    2018-03-01

    A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in an application environment without assistant orientation, an autonomous navigation method is proposed based on the snake robot’s motion characteristic constraints. The method achieves autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot’s motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot’s navigation layout, proposes a constraint criterion and the fixed relationship, and applies zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. Tests with the self-developed snake robot verify the proposed method, and the position error is less than 5% of the Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements for a snake robot to perform autonomous navigation and positioning in traditional applications, and it can be extended to other similar multi-link robots.

  7. Distribution of free radical products among the bases of x-irradiated DNA model systems: an ESR study

    International Nuclear Information System (INIS)

    Spalletta, R.A.

    1984-01-01

    Exposure of solid state DNA to ionizing radiation results in an ESR spectrum that has been attributed to a nonstoichiometric distribution of free radicals among the bases. At low temperatures radical cations appear to be stabilized on the purines while radical anions are stabilized on the pyrimidines. This distribution could arise from at least two different mechanisms. The first, charge transfer, involves the transfer of electrons and/or holes between stacked bases. In the second, saturation asymmetry, the free radical distribution arises from differences in the dose saturation characteristics of individual bases. The present study addresses the relative importance of charge transfer versus saturation asymmetry in the production of these population differences. Radicals formed by dissolving irradiated polycrystalline pyrimidines in aqueous solutions containing NtB or PBN spin traps were analyzed using ESR. The relative importance of the two free radical production and distribution mechanisms was assessed using DNA model systems. Saturation asymmetry plays a significant role in determining the free radical population while charge transfer was unambiguously observed in only one, the complex of dAMP and TMP. The results demonstrate that any quantitative analysis of charge transfer must take saturation asymmetry into account

  8. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU

    Science.gov (United States)

    Dou, Lihua; Su, Zhong; Liu, Ning

    2018-01-01

    A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in an application environment without assistant orientation, an autonomous navigation method is proposed based on the snake robot’s motion characteristic constraints. The method achieves autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot’s motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot’s navigation layout, proposes a constraint criterion and the fixed relationship, and applies zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. Tests with the self-developed snake robot verify the proposed method, and the position error is less than 5% of the Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements for a snake robot to perform autonomous navigation and positioning in traditional applications, and it can be extended to other similar multi-link robots. PMID:29547515

  9. Role of muscle pulleys in producing eye position-dependence in the angular vestibuloocular reflex: a model-based study

    Science.gov (United States)

    Thurtell, M. J.; Kunin, M.; Raphan, T.; Wall, C. C. (Principal Investigator)

    2000-01-01

    It is well established that the head and eye velocity axes do not always align during compensatory vestibular slow phases. It has been shown that the eye velocity axis systematically tilts away from the head velocity axis in a manner that is dependent on eye-in-head position. The mechanisms responsible for producing these axis tilts are unclear. In this model-based study, we aimed to determine whether muscle pulleys could be involved in bringing about these phenomena. The model presented incorporates semicircular canals, central vestibular pathways, and an ocular motor plant with pulleys. The pulleys were modeled so that they brought about a rotation of the torque axes of the extraocular muscles that was a fraction of the angle of eye deviation from primary position. The degree to which the pulleys rotated the torque axes was altered by means of a pulley coefficient. Model input was head velocity and initial eye position data from passive and active yaw head impulses with fixation at 0 degrees, 20 degrees up and 20 degrees down, obtained from a previous experiment. The optimal pulley coefficient required to fit the data was determined by calculating the mean square error between data and model predictions of torsional eye velocity. For active head impulses, the optimal pulley coefficient varied considerably between subjects. The median optimal pulley coefficient was found to be 0.5, the pulley coefficient required for producing saccades that perfectly obey Listing's law when using a two-dimensional saccadic pulse signal. The model predicted the direction of the axis tilts observed in response to passive head impulses from 50 ms after onset. During passive head impulses, the median optimal pulley coefficient was found to be 0.21, when roll gain was fixed at 0.7. The model did not accurately predict the alignment of the eye and head velocity axes that was observed early in the response to passive head impulses. We found that this alignment could be well predicted if

  10. Synchrotron-based intra-venous K-edge digital subtraction angiography in a pig model: A feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    Schueltke, Elisabeth [Departments of Surgery, University of Saskatchewan, Saskatoon, SK (Canada); Anatomy and Cell Biology, University of Saskatchewan, Saskatoon, SK (Canada); Department of Neurological Sciences, Walton Medical Centre, University of Liverpool, Liverpool L97 LJ (United Kingdom)], E-mail: e.schultke@usask.ca; Fiedler, Stefan [European Molecular Biology Laboratory (EMBL), Nottkestrasse 85, 22603 Hamburg (Germany); Nemoz, Christian [European Synchrotron Radiation Facility (ESRF), 6 rue Horowitz, 38043 Grenoble (France); Ogieglo, Lissa [Departments of Surgery, University of Saskatchewan, Saskatoon, SK (Canada); Kelly, Michael E. [Departments of Surgery, University of Saskatchewan, Saskatoon, SK (Canada); Department of Neurosurgery, Section of Cerebrovascular and Endovascular Neurosurgery, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH (United States); Crawford, Paul [Royal Veterinary College, Hawkshead Lane, North Mymms, Hatfield, Herfordshire AL9 7TA (United Kingdom); Esteve, Francois [INSERM U836-ESRF, 6 rue Horowitz, 38043 Grenoble (France); Brochard, Thierry; Renier, Michel; Requardt, Herwig; Le Duc, Geraldine [European Synchrotron Radiation Facility (ESRF), 6 rue Horowitz, 38043 Grenoble (France); Juurlink, Bernhard [Anatomy and Cell Biology, University of Saskatchewan, Saskatoon, SK (Canada); Meguro, Kotoo [Departments of Surgery, University of Saskatchewan, Saskatoon, SK (Canada)

    2010-03-15

    Background: K-edge digital subtraction angiography (KEDSA) combined with the tunability of synchrotron beam yields an imaging technique that is highly sensitive to low concentrations of contrast agents. Thus, contrast agent can be administered intravenously, obviating the need for insertion of a guided catheter to deliver a bolus of contrast agent close to the target tissue. With the high-resolution detectors used at synchrotron facilities, images can be acquired at high spatial resolution. Thus, the KEDSA appears particularly suited for studies of neurovascular pathology in animal models, where the vascular diameters are significantly smaller than in human patients. Materials and methods: This feasibility study was designed to test the suitability of KEDSA after intravenous injection of iodine-based contrast agent for use in a pig model. Four adult male pigs were used for our experiments. Neurovascular angiographic images were acquired using KEDSA with a solid state Germanium (Ge) detector at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. Results: After intravenous injection of 0.9 ml/kg iodinated contrast agent (Xenetix), the peak iodine concentrations in the internal carotid and middle cerebral arteries reached 35 mg/ml. KEDSA images in radiography mode allowed the visualization of intracranial arteries of less than 1.5 mm diameter.
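    KEDSA exploits the jump in iodine attenuation across its K-edge at 33.17 keV: images taken just below and just above the edge are decomposed pixel by pixel into water-like and iodine components. The sketch below shows that two-material decomposition with rough placeholder attenuation coefficients; it is an illustration of the principle, not the ESRF beamline's calibration or processing chain.

```python
# K-edge subtraction as a per-pixel two-material decomposition: solve
#   [[mu_w(E-), mu_I(E-)], [mu_w(E+), mu_I(E+)]] @ [a_water, a_iodine] = [L-, L+]
# The attenuation coefficients are rough placeholder values (cm^2/g), not
# beamline calibration data; the toy images are uniform transmissions.
import numpy as np

MU = np.array([[0.33, 6.6],     # [mu_water, mu_iodine] just below the 33.17 keV edge
               [0.32, 36.0]])   # [mu_water, mu_iodine] just above the edge

def kedge_decompose(I_low, I_high, I0_low=1.0, I0_high=1.0):
    """Return (water, iodine) area-density images from below/above-edge images."""
    L = np.stack([-np.log(I_low / I0_low), -np.log(I_high / I0_high)])
    a = np.linalg.solve(MU, L.reshape(2, -1))        # decompose every pixel at once
    return a[0].reshape(I_low.shape), a[1].reshape(I_low.shape)

I_low = np.full((4, 4), 0.8)     # toy transmission image below the edge
I_high = np.full((4, 4), 0.5)    # toy transmission image above the edge
water, iodine = kedge_decompose(I_low, I_high)
print(iodine)
```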

  11. Synchrotron-based intra-venous K-edge digital subtraction angiography in a pig model: A feasibility study

    International Nuclear Information System (INIS)

    Schueltke, Elisabeth; Fiedler, Stefan; Nemoz, Christian; Ogieglo, Lissa; Kelly, Michael E.; Crawford, Paul; Esteve, Francois; Brochard, Thierry; Renier, Michel; Requardt, Herwig; Le Duc, Geraldine; Juurlink, Bernhard; Meguro, Kotoo

    2010-01-01

    Background: K-edge digital subtraction angiography (KEDSA) combined with the tunability of synchrotron beam yields an imaging technique that is highly sensitive to low concentrations of contrast agents. Thus, contrast agent can be administered intravenously, obviating the need for insertion of a guided catheter to deliver a bolus of contrast agent close to the target tissue. With the high-resolution detectors used at synchrotron facilities, images can be acquired at high spatial resolution. Thus, the KEDSA appears particularly suited for studies of neurovascular pathology in animal models, where the vascular diameters are significantly smaller than in human patients. Materials and methods: This feasibility study was designed to test the suitability of KEDSA after intravenous injection of iodine-based contrast agent for use in a pig model. Four adult male pigs were used for our experiments. Neurovascular angiographic images were acquired using KEDSA with a solid state Germanium (Ge) detector at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. Results: After intravenous injection of 0.9 ml/kg iodinated contrast agent (Xenetix), the peak iodine concentrations in the internal carotid and middle cerebral arteries reached 35 mg/ml. KEDSA images in radiography mode allowed the visualization of intracranial arteries of less than 1.5 mm diameter.

  12. Synchrotron-based intra-venous K-edge digital subtraction angiography in a pig model: a feasibility study.

    Science.gov (United States)

    Schültke, Elisabeth; Fiedler, Stefan; Nemoz, Christian; Ogieglo, Lissa; Kelly, Michael E; Crawford, Paul; Esteve, Francois; Brochard, Thierry; Renier, Michel; Requardt, Herwig; Le Duc, Geraldine; Juurlink, Bernhard; Meguro, Kotoo

    2010-03-01

    K-edge digital subtraction angiography (KEDSA) combined with the tunability of synchrotron beam yields an imaging technique that is highly sensitive to low concentrations of contrast agents. Thus, contrast agent can be administered intravenously, obviating the need for insertion of a guided catheter to deliver a bolus of contrast agent close to the target tissue. With the high-resolution detectors used at synchrotron facilities, images can be acquired at high spatial resolution. Thus, the KEDSA appears particularly suited for studies of neurovascular pathology in animal models, where the vascular diameters are significantly smaller than in human patients. This feasibility study was designed to test the suitability of KEDSA after intravenous injection of iodine-based contrast agent for use in a pig model. Four adult male pigs were used for our experiments. Neurovascular angiographic images were acquired using KEDSA with a solid state Germanium (Ge) detector at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. After intravenous injection of 0.9 ml/kg iodinated contrast agent (Xenetix), the peak iodine concentrations in the internal carotid and middle cerebral arteries reached 35 mg/ml. KEDSA images in radiography mode allowed the visualization of intracranial arteries of less than 1.5mm diameter. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  13. A model based on spectrofluorimetry to study the interaction between glyphosate and serum albumin of Salminus brasiliensis

    Science.gov (United States)

    Escobar, Marta Araujo Cyrino; Cortez, Celia Martins; Silva, Dilson; Neto, Jayme da Cunha Bastos

    2017-11-01

    The aim of this work is to initiate an investigation of the albumin of Salminus brasiliensis (gold fish) as a biomarker of the environmental action of glyphosate. We began by using a mathematical-computational model based on spectrofluorimetric measurements to study the interaction of glyphosate with gold fish albumin and human serum albumin. Salminus brasiliensis is a migratory freshwater fish species found in southern and central-western Brazil, mainly in the Prata river basin, where most soybean plantations are located. Glyphosate is a widely used herbicide in this type of crop. Unlike the organophosphate methyl parathion, glyphosate does not form a complex with HSA, and the quenching constants estimated for its binding with gold fish albumin at 20 °C and 25 °C are 1.3 (±0.3) × 10⁴/M and 2.5 (±0.3) × 10⁴/M, respectively.
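    Quenching constants of the kind reported here are usually obtained from a Stern-Volmer plot, F0/F = 1 + K_SV[Q]. The sketch below fits that linear relation to made-up fluorescence data simply to show the procedure; the numbers are placeholders, not the study's measurements.

```python
# Estimating a Stern-Volmer quenching constant from F0/F = 1 + K_SV * [Q].
# The data points are made-up placeholders, not the study's measurements.
import numpy as np

q = np.array([0.0, 2e-5, 4e-5, 6e-5, 8e-5])     # quencher concentration (M)
f = np.array([100.0, 79.0, 65.0, 55.0, 48.0])   # fluorescence intensity (a.u.)

ksv, intercept = np.polyfit(q, f[0] / f, 1)     # slope of the F0/F vs [Q] line is K_SV
print(f"K_SV ≈ {ksv:.2e} /M (intercept {intercept:.2f})")
```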

  14. Study on TCM Syndrome Differentiation of Primary Liver Cancer Based on the Analysis of Latent Structural Model

    Directory of Open Access Journals (Sweden)

    Zhan Gu

    2015-01-01

    Primary liver cancer (PLC) is one of the most common malignant tumors because of its high incidence and high mortality. Traditional Chinese medicine (TCM) plays an active role in the treatment of PLC. As the most important part of the TCM system, syndrome differentiation based on the clinical manifestations obtained from the traditional four diagnostic methods has faced great challenges and questions due to the lack of statistical validation. In this study, we provide evidence for the TCM syndrome differentiation of PLC using latent structural model analysis of clinical data, thus providing a basis for establishing TCM syndrome criteria. We also obtain the common syndromes of PLC and their typical clinical manifestations.

  15. Study on TCM Syndrome Differentiation of Primary Liver Cancer Based on the Analysis of Latent Structural Model.

    Science.gov (United States)

    Gu, Zhan; Qi, Xiuzhong; Zhai, Xiaofeng; Lang, Qingbo; Lu, Jianying; Ma, Changping; Liu, Long; Yue, Xiaoqiang

    2015-01-01

    Primary liver cancer (PLC) is one of the most common malignant tumors because of its high incidence and high mortality. Traditional Chinese medicine (TCM) plays an active role in the treatment of PLC. As the most important part of the TCM system, syndrome differentiation based on the clinical manifestations obtained from the traditional four diagnostic methods has faced great challenges and questions due to the lack of statistical validation. In this study, we provide evidence for the TCM syndrome differentiation of PLC using latent structural model analysis of clinical data, thus providing a basis for establishing TCM syndrome criteria. We also obtain the common syndromes of PLC and their typical clinical manifestations.

  16. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for ecosystem carbon cycle studies

    Science.gov (United States)

    Y. He; Q. Zhuang; A.D. McGuire; Y. Liu; M. Chen

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the...

  17. Disparities in spread and control of influenza in slums of Delhi: findings from an agent-based modelling study

    Science.gov (United States)

    Adiga, Abhijin; Chu, Shuyu; Eubank, Stephen; Kuhlman, Christopher J; Lewis, Bryan; Marathe, Achla; Marathe, Madhav; Nordberg, Eric K; Swarup, Samarth; Vullikanti, Anil; Wilson, Mandy L

    2018-01-01

    Objectives: This research studies the role of slums in the spread and control of infectious diseases in the National Capital Territory of India, Delhi, using detailed social contact networks of its residents. Methods: We use an agent-based model to study the spread of influenza in Delhi through person-to-person contact. Two different networks are used: one in which slum and non-slum regions are treated the same, and the other in which 298 slum zones are identified. In the second network, slum-specific demographics and activities are assigned to the individuals whose homes reside inside these zones. The main effects of integrating slums are that the network has more home-related contacts due to larger family sizes and more outside contacts due to more daily activities outside home. Various vaccination and social distancing interventions are applied to control the spread of influenza. Results: Simulation-based results show that when slum attributes are ignored, the effectiveness of vaccination can be overestimated by 30%–55%, in terms of reducing the peak number of infections and the size of the epidemic, and in delaying the time to peak infection. The slum population sustains greater infection rates under all intervention scenarios in the network that treats slums differently. Vaccination strategy performs better than social distancing strategies in slums. Conclusions: Unique characteristics of slums play a significant role in the spread of infectious diseases. Modelling slums and estimating their impact on epidemics will help policy makers and regulators more accurately prioritise allocation of scarce medical resources and implement public health policies. PMID:29358419
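    The mechanics of such an agent-based spread model can be illustrated with a toy discrete-time SIR process on a contact network. The sketch below is not the authors' simulation platform; the network, seeding and rates are placeholder assumptions chosen only to show how person-to-person transmission over an explicit contact graph is simulated.

```python
# Toy person-to-person influenza spread on a contact network (discrete-time SIR).
# This is not the authors' simulation platform; the graph, seeding and rates are
# placeholder assumptions used only to illustrate the mechanics.
import random
import networkx as nx

random.seed(1)
g = nx.watts_strogatz_graph(5000, k=10, p=0.05)     # stand-in contact network
state = {v: "S" for v in g}                          # S(usceptible) / I(nfected) / R(ecovered)
for v in random.sample(list(g), 10):                 # seed ten initial infections
    state[v] = "I"

beta, recovery = 0.03, 0.25                          # per-contact infection / daily recovery prob.
for day in range(60):
    new_state = dict(state)
    for v in g:
        if state[v] != "I":
            continue
        if random.random() < recovery:
            new_state[v] = "R"
        for u in g.neighbors(v):
            if state[u] == "S" and random.random() < beta:
                new_state[u] = "I"
    state = new_state
    if day % 10 == 0:
        print(day, sum(s == "I" for s in state.values()), "currently infected")
```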

  18. Disparities in spread and control of influenza in slums of Delhi: findings from an agent-based modelling study.

    Science.gov (United States)

    Adiga, Abhijin; Chu, Shuyu; Eubank, Stephen; Kuhlman, Christopher J; Lewis, Bryan; Marathe, Achla; Marathe, Madhav; Nordberg, Eric K; Swarup, Samarth; Vullikanti, Anil; Wilson, Mandy L

    2018-01-21

    This research studies the role of slums in the spread and control of infectious diseases in the National Capital Territory of India, Delhi, using detailed social contact networks of its residents. We use an agent-based model to study the spread of influenza in Delhi through person-to-person contact. Two different networks are used: one in which slum and non-slum regions are treated the same, and the other in which 298 slum zones are identified. In the second network, slum-specific demographics and activities are assigned to the individuals whose homes reside inside these zones. The main effects of integrating slums are that the network has more home-related contacts due to larger family sizes and more outside contacts due to more daily activities outside home. Various vaccination and social distancing interventions are applied to control the spread of influenza. Simulation-based results show that when slum attributes are ignored, the effectiveness of vaccination can be overestimated by 30%-55%, in terms of reducing the peak number of infections and the size of the epidemic, and in delaying the time to peak infection. The slum population sustains greater infection rates under all intervention scenarios in the network that treats slums differently. Vaccination strategy performs better than social distancing strategies in slums. Unique characteristics of slums play a significant role in the spread of infectious diseases. Modelling slums and estimating their impact on epidemics will help policy makers and regulators more accurately prioritise allocation of scarce medical resources and implement public health policies. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  19. Feasibility Study on Tension Estimation Technique for Hanger Cables Using the FE Model-Based System Identification Method

    Directory of Open Access Journals (Sweden)

    Kyu-Sik Park

    2015-01-01

    Hanger cables in suspension bridges are partly constrained by horizontal clamps. Consequently, existing tension estimation methods based on a single-cable model become increasingly error-prone as the cable gets shorter and more sensitive to flexural rigidity. Inverse analysis and system identification methods based on finite element models have therefore been suggested recently. In this paper, the applicability of system identification methods is investigated using the hanger cables of the Gwang-An bridge. The test results show that the inverse analysis and system identification methods based on finite element models are more reliable than the existing string theory and linear regression methods for calculating the tension, in terms of natural frequency errors. However, in model-based methods the tension estimation error can vary with the accuracy of the finite element model. In particular, the boundary conditions affect the results more profoundly as the cable gets shorter. Therefore, it is important to identify the boundary conditions through experiment where possible. The FE model-based tension estimation method using system identification can take various boundary conditions into account. Also, since it is not sensitive to the number of natural frequency inputs, the method is broadly applicable.
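    The "string theory" baseline mentioned in the record estimates tension from a measured natural frequency via the taut-string relation T = 4 m L² (f_n/n)². The sketch below evaluates it for illustrative (assumed) cable parameters; the point of the record is that for short, clamped hangers this simple formula needs the FE model-based correction.

```python
# Taut-string tension estimate from a measured natural frequency:
#   T = 4 * m * L**2 * (f_n / n)**2
# Mass per unit length, free length and frequency are assumed illustrative values.
m = 47.0     # mass per unit length (kg/m), assumed
L = 12.0     # free cable length (m), assumed
f1 = 9.5     # measured first natural frequency (Hz), assumed
n = 1        # mode number

T = 4.0 * m * L**2 * (f1 / n) ** 2
print(f"Estimated tension ≈ {T / 1e3:.0f} kN")
# For short clamped hangers this simple estimate degrades because flexural
# rigidity and boundary conditions are ignored, hence the FE model-based
# system identification approach described in the record.
```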

  20. Study of the coupling of geochemical models based on thermodynamic equilibrium with models of component transfer as solutions in porous media or fractures

    International Nuclear Information System (INIS)

    Coudrain-Ribstein, A.

    1985-01-01

    This study contributes to the analysis of possibilities for modelling the transfer of components in the subsurface while taking complex geochemical phenomena into account. In the first part, the aims and methodology of existing codes are presented. Transfer codes describe the physical transport phenomena with great precision but rely on a very simple conceptualisation of the geochemical retention phenomena in the rock. Geochemical models, in contrast, deal with a fixed unit volume: they compute the equilibrium distribution of components between the chemical species in solution and the solid and gaseous phases, drawing on large thermodynamic databases covering each possible reaction. To summarize the state of geochemical codes in Europe and the United States, a list of about thirty codes describes their methods and capabilities. The mathematical analysis of the different methods used in both types of codes is presented. The principles of a modelling approach combining the capabilities of the transport codes and the geochemical codes are then discussed. A simple coupling is not feasible: a general code must be established on the basis of the existing codes but also on new concepts and under new constraints. In such studies one must always address the problem of reaction kinetics. When reaction rates are fast relative to transport rates, the assumption of local geochemical equilibrium can be retained. A general code would be very cumbersome, expensive and difficult to use, and its results would be difficult to analyse and exploit. On the other hand, for each case study, a detailed analysis can point out many computational simplifications without simplifying the concepts [fr]
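    The coupling discussed in this report, a transport code combined with local-equilibrium chemistry, is commonly realised by operator splitting: advect the dissolved phase, then re-equilibrate each cell. The sketch below illustrates that idea with the simplest possible chemistry (a linear sorption isotherm); it is a conceptual illustration under the local-equilibrium assumption, not one of the codes surveyed in the report.

```python
# Operator-splitting sketch of transport coupled to local-equilibrium chemistry:
# each step advects the dissolved phase, then re-equilibrates every cell.
# Chemistry is the simplest case (linear sorption, retardation R = 1 + rho_b*Kd/theta);
# all parameters are illustrative assumptions, not values from the report.
import numpy as np

n, dx, dt, v = 100, 1.0, 0.4, 1.0     # cells, spacing (m), time step (d), pore velocity (m/d)
rho_b, theta, kd = 1.6, 0.3, 0.5      # bulk density, porosity, distribution coefficient

c = np.zeros(n)                        # dissolved concentration
s = np.zeros(n)                        # sorbed concentration (per unit solid mass)

for step in range(150):
    # 1) transport sub-step: explicit upwind advection of the dissolved phase only
    c[1:] -= (v * dt / dx) * (c[1:] - c[:-1])
    c[0] = 1.0                         # constant-concentration inlet boundary
    # 2) chemistry sub-step: re-equilibrate each cell (local equilibrium, linear isotherm)
    total = theta * c + rho_b * s
    c = total / (theta + rho_b * kd)
    s = kd * c

print(c.round(2))                      # the concentration front is retarded by R ≈ 3.7
```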

  1. Factors Affecting Acceptance of Hospital Information Systems Based on Extended Technology Acceptance Model: A Case Study in Three Paraclinical Departments.

    Science.gov (United States)

    Nadri, Hamed; Rahimi, Bahlol; Lotfnezhad Afshar, Hadi; Samadbeik, Mahnaz; Garavand, Ali

    2018-04-01

     Regardless of the acceptance of users, information and communication systems can be considered as a health intervention designed to improve the care delivered to patients. This study aimed to determine the adoption and use of the extended Technology Acceptance Model (TAM2) by the users of hospital information system (HIS) in paraclinical departments including laboratory, radiology, and nutrition and to investigate the key factors of adoption and use of these systems.  A standard questionnaire was used to collect the data from nearly 253 users of these systems in paraclinical departments of eight university hospitals in two different cities of Iran. A total of 202 questionnaires including valid responses were used in this study (105 in Urmia and 97 in Khorramabad). The data were processed using LISREL and SPSS software and statistical analysis technique was based on the structural equation modeling (SEM).  It was found that the original TAM constructs had a significant impact on the staffs' behavioral intention to adopt HIS in paraclinical departments. The results of this study indicated that cognitive instrumental processes (job relevance, output quality, result demonstrability, and perceived ease of use), except for result demonstrability, were significant predictors of intention to use, whereas the result revealed no significant relationship between social influence processes (subjective norm, voluntariness, and image) and the users' behavioral intention to use the system.  The results confirmed that several factors in the TAM2 that were important in previous studies were not significant in paraclinical departments and in government-owned hospitals. The users' behavior factors are essential for successful usage of the system and should be considered. It provides valuable information for hospital system providers and policy makers in understanding the adoption challenges as well as practical guidance for the successful implementation of information

  2. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study

    Directory of Open Access Journals (Sweden)

    Morris Denise

    2007-09-01

    to 100%, respectively. C: The majority of the study participants found the codes to be useful (71%), applicable (92%) and sufficient (92%). Conclusion: Systematic registration of HP activities is relevant in daily clinical practice and the suggested codes proved to be applicable for international use. HP is an essential part of the clinical pathway or the value chain. This model promises to improve the documentation and thereby facilitate analysis of records for evidence based medicine as well as cost and policy analyses.

  3. A Study on the Model of Detecting the Liquid Level of Sealed Containers Based on Kirchhoff Approximation Theory

    Directory of Open Access Journals (Sweden)

    Bin Zhang

    2017-06-01

    By simulating the sound field of a round piston transducer with the Kirchhoff integral theorem and analyzing the shape of ultrasound beams and their propagation characteristics in a metal container wall, this study presents a model for calculating the echo sound pressure using the Kirchhoff paraxial approximation theory. Based on this model, and exploiting the different ultrasonic impedances of gas and liquid media, a method for detecting the liquid level from outside a sealed container is proposed. The proposed method is then evaluated through two groups of experiments. In the first group, three liquid media with different ultrasonic impedances are used as detected objects; the echo sound pressure is calculated with the proposed model for four different wall thicknesses. The changing characteristics of the echo sound pressure over the entire detection process are analyzed, and the effects of the different ultrasonic impedances of the liquids on the echo sound pressure are compared. In the second group, taking water as an example, two transducers with different radii are selected to measure the liquid level for four wall thicknesses. Combining these results with the sound field characteristics, the influence of transducer size on the pressure calculation and detection resolution is discussed and analyzed. Finally, the experimental results indicate that the measurement uncertainty is better than ±5 mm, which meets industrial inspection requirements.
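    The physical basis of the method is the impedance contrast at the inner wall: a wall-gas interface reflects nearly all of the incident energy, a wall-liquid interface somewhat less, so the reverberating echo decays faster where liquid backs the wall. The sketch below computes the pressure reflection coefficients with commonly quoted approximate impedances (assumed values, not the paper's measurements).

```python
# Pressure reflection coefficient at the inner wall: R = (Z2 - Z1) / (Z2 + Z1).
# Acoustic impedances are commonly quoted approximate values (MRayl), used only
# to illustrate why the echo decays faster when liquid backs the wall.
def reflection(z1, z2):
    return (z2 - z1) / (z2 + z1)

Z_STEEL, Z_WATER, Z_AIR = 45.0, 1.48, 0.0004   # approximate impedances, MRayl

print("steel/air  :", abs(reflection(Z_STEEL, Z_AIR)))    # ~1.00, nearly total reflection
print("steel/water:", abs(reflection(Z_STEEL, Z_WATER)))  # ~0.94, a measurably weaker echo
# Over repeated reverberations inside the wall the small per-echo difference
# compounds, which is what the echo-pressure model in the record exploits.
```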

  4. An in situ USAXS-SAXS-WAXS study of precipitate size distribution evolution in a model Ni-based alloy.

    Science.gov (United States)

    Andrews, Ross N; Serio, Joseph; Muralidharan, Govindarajan; Ilavsky, Jan

    2017-06-01

    Intermetallic γ' precipitates typically strengthen nickel-based superalloys. The shape, size and spatial distribution of strengthening precipitates critically influence alloy strength, while their temporal evolution characteristics determine the high-temperature alloy stability. Combined ultra-small-, small- and wide-angle X-ray scattering (USAXS-SAXS-WAXS) analysis can be used to evaluate the temporal evolution of an alloy's precipitate size distribution (PSD) and phase structure during in situ heat treatment. Analysis of PSDs from USAXS-SAXS data employs either least-squares fitting of a preordained PSD model or a maximum entropy (MaxEnt) approach, the latter avoiding a priori definition of a functional form of the PSD. However, strong low-q scattering from grain boundaries and/or structure factor effects inhibit MaxEnt analysis of typical alloys. This work describes the extension of Bayesian-MaxEnt analysis methods to data exhibiting structure factor effects and low-q power law slopes and demonstrates their use in an in situ study of precipitate size evolution during heat treatment of a model Ni-Al-Si alloy.

  5. A Study on the Model of Detecting the Liquid Level of Sealed Containers Based on Kirchhoff Approximation Theory.

    Science.gov (United States)

    Zhang, Bin; Song, Wen-Ai; Wei, Yue-Juan; Zhang, Dong-Song; Liu, Wen-Yi

    2017-06-15

    By simulating the sound field of a round piston transducer with the Kirchhoff integral theorem and analyzing the shape of ultrasound beams and their propagation characteristics in a metal container wall, this study presents a model for calculating the echo sound pressure using the Kirchhoff paraxial approximation theory. Based on this model, and exploiting the different ultrasonic impedances of gas and liquid media, a method for detecting the liquid level from outside a sealed container is proposed. The proposed method is then evaluated through two groups of experiments. In the first group, three liquid media with different ultrasonic impedances are used as detected objects; the echo sound pressure is calculated with the proposed model for four different wall thicknesses. The changing characteristics of the echo sound pressure over the entire detection process are analyzed, and the effects of the different ultrasonic impedances of the liquids on the echo sound pressure are compared. In the second group, taking water as an example, two transducers with different radii are selected to measure the liquid level for four wall thicknesses. Combining these results with the sound field characteristics, the influence of transducer size on the pressure calculation and detection resolution is discussed and analyzed. Finally, the experimental results indicate that the measurement uncertainty is better than ±5 mm, which meets industrial inspection requirements.

  6. Three-dimensional model of plate geometry and velocity model for Nankai Trough seismogenic zone based on results from structural studies

    Science.gov (United States)

    Nakanishi, A.; Shimomura, N.; Kodaira, S.; Obana, K.; Takahashi, T.; Yamamoto, Y.; Yamashita, M.; Takahashi, N.; Kaneda, Y.

    2012-12-01

    In the Nankai Trough subduction seismogenic zone, the Nankai and Tonankai earthquakes have often occurred simultaneously, producing great events. In order to reduce damage to coastal areas from both strong ground motion and tsunami generation, it is necessary to understand the rupture synchronization and segmentation of the Nankai megathrust earthquake. For a precise estimate of the rupture zone of the Nankai megathrust event, based on knowledge of the realistic earthquake cycle and the variation of magnitude, it is important to know the geometry and properties of the plate boundary of the subduction seismogenic zone. To improve the physical model of the Nankai Trough seismogenic zone, a large-scale, high-resolution wide-angle and reflection (MCS) seismic study and long-term observations have been conducted since 2008. Marine active-source seismic data have been acquired along gridded two-dimensional profiles with a total length of ~800 km every year. Three-dimensional seismic tomography using active and passive seismic data observed at both land and ocean-bottom stations has also been performed. From those data, we found several strong lateral variations of the subducting Philippine Sea plate and the overriding plate corresponding to the margins of the coseismic rupture zones of historical large events along the Nankai Trough. In particular, a possible prominent reflector for the forearc Moho has recently been imaged on the offshore side of the Kii channel at a depth of ~18 km, which is shallower than in other areas along the Nankai Trough. Such a drastic variation of the overriding plate might be related to the segmentation of the Nankai megathrust earthquake. Based on the results of our seismic studies, we have constructed a geometrical model of the Philippine Sea plate and a three-dimensional velocity structure model of the Nankai Trough seismogenic zone. In this presentation, we summarize the major results of our seismic studies, and

  7. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases: a review and suggested reporting framework.

    Science.gov (United States)

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55 % of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6 % of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed

  8. Neonatal intensive care nursing curriculum challenges based on context, input, process, and product evaluation model: A qualitative study

    Directory of Open Access Journals (Sweden)

    Mansoureh Ashghali-Farahani

    2018-01-01

    Background: Weakness of curriculum development in nursing education results in a lack of professional skills in graduates. This study was done on master’s students in nursing to evaluate the challenges of the neonatal intensive care nursing curriculum based on the context, input, process, and product (CIPP) evaluation model. Materials and Methods: This study was conducted with a qualitative approach, completed according to the CIPP evaluation model. The study was conducted from May 2014 to April 2015. The research community included neonatal intensive care nursing master’s students, graduates, faculty members, neonatologists, nurses working in the neonatal intensive care unit (NICU), and mothers of infants who were hospitalized in such wards. Purposeful sampling was applied. Results: The data analysis showed that there were two main categories: “inappropriate infrastructure” and “unknown duties,” which influenced the context formation of the NICU master’s curriculum. The input was formed by five categories, including “biomedical approach,” “incomprehensive curriculum,” “lack of professional NICU nursing mentors,” “inappropriate admission process of NICU students,” and “lack of NICU skill labs.” Three categories were extracted in the process, including “more emphasis on theoretical education,” “the overlap of credits with each other and the inconsistency among the mentors,” and “ineffective assessment.” Finally, five categories were extracted in the product, including “preferring routine work instead of professional job,” “tendency to leave the job,” “clinical incompetency of graduates,” “the conflict between graduates and nursing staff expectations,” and “dissatisfaction of graduates.” Conclusions: Some changes are needed in the NICU master’s curriculum, considering the nursing experts’ comments and evaluating the consequences of such a program.

  9. A Study on the Response Characteristics of a Fiber-Optic Radiation Sensor Model Based on Cerenkov Principle

    Energy Technology Data Exchange (ETDEWEB)

    Han, Hwa Jeong; Kim, Beom Kyu; Park, Byung Gi [Soonchunhyang Univ., Asan (Korea, Republic of)

    2016-10-15

    In recent years, various fiber-optic radiation sensors based on the Cerenkov principle have been developed without employing any scintillators, for measuring high-energy photons, electrons, etc. The main advantages of optical fibers are the remote transmission of the light signal and immunity to pressure and electromagnetic waves. Therefore, sensors utilizing optical fibers can be used in hazardous radiation environments, such as the high-level radiation areas of a nuclear facility. The aim of this study was to simulate a fiber-optic radiation sensor based on the Cerenkov principle and to analyze the response characteristics of the sensor. For this purpose, the GEANT simulation toolkit was used; it is able to take into account all the optical properties of the fibers and is found to be appropriate for realistically describing the response of a fiber-optic radiation sensor. Fiber-optic radiation sensors have recently been developed in the nuclear industry because they can detect gamma rays in harsh nuclear environments. In this study, we analyzed the response characteristics of the fiber-optic radiation sensor. We built a Monte Carlo model for detecting Cerenkov radiation with the fiber-optic radiation sensor, and the y-axis distribution of Cerenkov photons was obtained from the output file. The simulation was performed following the method of previous research, and the simulation results exhibited good agreement with that research.
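    The Cerenkov condition in the fiber core sets the electron energy threshold the sensor responds to: emission requires β > 1/n. The sketch below evaluates the corresponding kinetic-energy threshold for an assumed fused-silica core index of n ≈ 1.46; it is a back-of-the-envelope check, not part of the GEANT model described above.

```python
# Cerenkov threshold for electrons in a fiber core of refractive index n:
# emission requires beta > 1/n, i.e. kinetic energy above (gamma - 1) * m_e * c^2.
# n = 1.46 is an assumed typical value for a fused-silica core.
import math

M_E_C2 = 0.511                 # electron rest energy (MeV)
n = 1.46

beta = 1.0 / n                 # threshold velocity ratio
gamma = 1.0 / math.sqrt(1.0 - beta**2)
ke_threshold = (gamma - 1.0) * M_E_C2
print(f"Cerenkov threshold ≈ {ke_threshold * 1000:.0f} keV")   # roughly 190 keV
```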

  10. A Study on the Response Characteristics of a Fiber-Optic Radiation Sensor Model Based on Cerenkov Principle

    International Nuclear Information System (INIS)

    Han, Hwa Jeong; Kim, Beom Kyu; Park, Byung Gi

    2016-01-01

    In recent years, various fiber-optic radiation sensors based on the Cerenkov principle have been developed without employing any scintillators, for measuring high-energy photons, electrons, etc. The main advantages of optical fibers are the remote transmission of the light signal and immunity to pressure and electromagnetic waves. Therefore, sensors utilizing optical fibers can be used in hazardous radiation environments, such as the high-level radiation areas of a nuclear facility. The aim of this study was to simulate a fiber-optic radiation sensor based on the Cerenkov principle and to analyze the response characteristics of the sensor. For this purpose, the GEANT simulation toolkit was used; it is able to take into account all the optical properties of the fibers and is found to be appropriate for realistically describing the response of a fiber-optic radiation sensor. Fiber-optic radiation sensors have recently been developed in the nuclear industry because they can detect gamma rays in harsh nuclear environments. In this study, we analyzed the response characteristics of the fiber-optic radiation sensor. We built a Monte Carlo model for detecting Cerenkov radiation with the fiber-optic radiation sensor, and the y-axis distribution of Cerenkov photons was obtained from the output file. The simulation was performed following the method of previous research, and the simulation results exhibited good agreement with that research.

  11. A noise power spectrum study of a new model-based iterative reconstruction system: Veo 3.0.

    Science.gov (United States)

    Li, Guang; Liu, Xinming; Dodge, Cristina T; Jensen, Corey T; Rong, X John

    2016-09-08

    The purpose of this study was to evaluate the performance of the third generation of the model-based iterative reconstruction (MBIR) system, Veo 3.0, based on noise power spectrum (NPS) analysis with various clinical presets over a wide range of clinically applicable dose levels. A CatPhan 600 surrounded by an oval, fat-equivalent ring to mimic patient size/shape was scanned 10 times at each of six dose levels on a GE HD 750 scanner. NPS analysis was performed on images reconstructed with various Veo 3.0 preset combinations for comparison with images reconstructed using Veo 2.0, filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASiR). The new Target Thickness setting resulted in higher noise in thicker axial images. The new Texture Enhancement function achieved a more isotropic noise behavior with fewer image artifacts. Veo 3.0 provides additional reconstruction options designed to allow the user to balance spatial resolution against image noise, relative to Veo 2.0. Veo 3.0 provides more user-selectable options and, in general, improved isotropic noise behavior in comparison to Veo 2.0. The overall noise reduction performance of both versions of MBIR was improved in comparison to FBP and ASiR, especially at low dose levels. © 2016 The Authors.
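    For reference, a 2D NPS is typically estimated from an ensemble of uniform-phantom ROIs as the pixel area divided by the ROI size, times the ensemble average of the squared magnitude of the DFT of each mean-subtracted ROI. The sketch below implements that estimator on synthetic noise standing in for the CatPhan images; the pixel size and ROI dimensions are assumed values.

```python
# 2D noise power spectrum from an ensemble of uniform-phantom ROIs:
#   NPS(fx, fy) = (dx*dy / (Nx*Ny)) * < |DFT2{ ROI - mean }|^2 >
# Synthetic noise stands in for the CatPhan ROIs; pixel size is an assumed value.
import numpy as np

dx = dy = 0.7                                             # pixel size (mm), assumed
rois = np.random.normal(0.0, 10.0, size=(64, 128, 128))   # 64 ROIs of 128x128 pixels

nx, ny = rois.shape[1:]
spectra = []
for roi in rois:
    detrended = roi - roi.mean()        # simple mean subtraction (often a 2D polynomial fit)
    spectra.append(np.abs(np.fft.fft2(detrended)) ** 2)

nps = (dx * dy / (nx * ny)) * np.mean(spectra, axis=0)
nps = np.fft.fftshift(nps)              # centre zero frequency for display
print(nps.shape, float(nps.mean()))
```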

  12. Species delineation using Bayesian model-based assignment tests: a case study using Chinese toad-headed agamas (genus Phrynocephalus)

    Directory of Open Access Journals (Sweden)

    Fu Jinzhong

    2010-06-01

    Background: Species are fundamental units in biology, yet much debate exists surrounding how we should delineate species in nature. Species discovery now requires the use of separate, corroborating datasets to quantify independently evolving lineages and test species criteria. However, the complexity of the speciation process has ushered in a need to infuse studies with new tools capable of aiding in species delineation. We suggest that model-based assignment tests are one such tool. This method circumvents constraints of traditional population genetic analyses and provides a novel means of describing cryptic and complex diversity in natural systems. Using toad-headed agamas of the Phrynocephalus vlangalii complex as a case study, we apply model-based assignment tests to microsatellite DNA data to test whether P. putjatia, a controversial species that closely resembles P. vlangalii morphologically, represents a valid species. Mitochondrial DNA and geographic data are also included to corroborate the assignment test results. Results: Assignment tests revealed two distinct nuclear DNA clusters, with 95% (230/243) of the individuals being assigned to one of the clusters with > 90% probability. The nuclear genomes of the two clusters remained distinct in sympatry, particularly at three syntopic sites, suggesting the existence of reproductive isolation between the identified clusters. In addition, a mitochondrial ND2 gene tree revealed two deeply diverged clades, which were largely congruent with the two nuclear DNA clusters, with a few exceptions. Historical mitochondrial introgression events between the two groups might explain the disagreement between the mitochondrial and nuclear DNA data. The nuclear DNA clusters and mitochondrial clades corresponded nicely to the hypothesized distributions of P. vlangalii and P. putjatia. Conclusions: These results demonstrate that assignment tests based on microsatellite DNA data can be powerful tools
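    At its core, a model-based assignment test asks how likely an individual's multilocus genotype is under each candidate cluster's allele frequencies. The sketch below computes that likelihood naively under Hardy-Weinberg proportions with toy allele frequencies; it illustrates the assignment principle only and is not the Bayesian MCMC clustering used in the study.

```python
# Core of an assignment test: log-likelihood of a diploid multilocus genotype
# under each cluster's allele frequencies (Hardy-Weinberg, no admixture).
# Toy frequencies below; this illustrates the principle, not STRUCTURE's MCMC.
import numpy as np

# freqs[k][locus] maps allele -> frequency in cluster k (hypothetical values)
freqs = [
    [{"A": 0.7, "B": 0.3}, {"X": 0.9, "Y": 0.1}],   # cluster 0
    [{"A": 0.2, "B": 0.8}, {"X": 0.4, "Y": 0.6}],   # cluster 1
]

def genotype_loglik(genotype, cluster):
    ll = 0.0
    for locus, (a1, a2) in enumerate(genotype):
        p, q = cluster[locus][a1], cluster[locus][a2]
        ll += np.log(p * q * (2.0 if a1 != a2 else 1.0))   # HWE genotype probability
    return ll

individual = [("A", "B"), ("X", "X")]                      # genotype at two loci
lls = np.array([genotype_loglik(individual, k) for k in freqs])
posterior = np.exp(lls - np.logaddexp.reduce(lls))         # equal prior over clusters
print(posterior)                                           # assignment probabilities
```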

  13. A systematic study of ball passing frequencies based on dynamic modeling of rolling ball bearings with localized surface defects

    Science.gov (United States)

    Niu, Linkai; Cao, Hongrui; He, Zhengjia; Li, Yamin

    2015-11-01

    Ball passing frequencies (BPFs) are very important features for condition monitoring and fault diagnosis of rolling ball bearings. The ball passing frequency on the outer raceway (BPFO) and the ball passing frequency on the inner raceway (BPFI) are usually calculated by two well-known kinematics equations. In this paper, a systematic study of the BPFs of rolling ball bearings is carried out. A novel method for accurately calculating BPFs based on a complete dynamic model of rolling ball bearings with localized surface defects is proposed. In the dynamic model used, three-dimensional motions, relative slippage, cage effects and localized surface defects are all considered. Moreover, localized surface defects are modeled accurately with consideration of the finite size of the ball, the additional clearance due to material absence, and changes of contact force directions. The suitability of the proposed method for predicting the dynamic behavior of actual ball bearings with localized surface defects and for calculating BPFs is discussed by investigating the motion characteristics of a ball when it rolls through a defect. Parametric investigation shows that the shaft speed, external loads, the friction coefficient, raceway groove curvature factors, the initial contact angle, and defect sizes have great effects on the BPFs. For a loaded ball bearing, a combination of rolling and sliding occurs in the contact region, and the BPFs calculated by simple kinematical relationships are inaccurate, especially under high speed, low external load, and large initial contact angle conditions where severe skidding occurs. The hypothesis reported in previous investigations, that the percentage variation of the spacing between impulses in a defective ball bearing is about 1-2%, is satisfied only under conditions where the skidding effect in the bearing is slight. Finally, the proposed method is verified with two experiments.
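    The "two well-known kinematics equations" referred to above are the no-slip ball passing frequencies, BPFO = (Nb/2) fs (1 - (d/D) cos α) and BPFI = (Nb/2) fs (1 + (d/D) cos α). The sketch below evaluates them for an illustrative (assumed) bearing geometry; the paper's contribution is precisely that these values drift when skidding occurs.

```python
# The two classical no-slip kinematic ball-passing frequencies:
#   BPFO = (Nb/2) * fs * (1 - (d/D) * cos(alpha))
#   BPFI = (Nb/2) * fs * (1 + (d/D) * cos(alpha))
# The bearing geometry below is illustrative, not a specific catalogue bearing.
import math

Nb = 9            # number of balls
d = 7.94e-3       # ball diameter (m)
D = 39.0e-3       # pitch diameter (m)
alpha = 0.0       # contact angle (rad)
fs = 30.0         # shaft rotation frequency (Hz)

ratio = (d / D) * math.cos(alpha)
bpfo = 0.5 * Nb * fs * (1.0 - ratio)
bpfi = 0.5 * Nb * fs * (1.0 + ratio)
print(f"BPFO ≈ {bpfo:.1f} Hz, BPFI ≈ {bpfi:.1f} Hz")
# Under skidding (low load, high speed, large contact angle) the measured impulse
# spacing drifts from these values, motivating the dynamic-model-based calculation.
```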

  14. Voxel based statistical analysis method for microPET studies to assess the cerebral glucose metabolism in cat deafness model: comparison to ROI based method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Su; Lee, Jae Sung; Park, Min Hyun; Lee, Jong Jin; Kang, Hye Jin; Lee, Hyo Jeong; Oh, Seung Ha; Kim, Chong Sun; Jung, June Key; Lee, Myung Chul; Lee, Dong Soo [Seoul National University College of Medicine, Seoul (Korea, Republic of); Lim, Sang Moo [KIRAMS, Seoul (Korea, Republic of)

    2005-07-01

    Imaging research on the brain of sensory-deprived cats using a small animal PET scanner has gained interest, since abundant information about the sensory system of this animal is available and close examination of the brain is possible owing to its larger brain size compared with the mouse or rat. In this study, we established the procedures for 3D voxel-based statistical analysis (SPM) of FDG PET images of the cat brain and confirmed them using an ROI-based method. FDG PET scans of 4 normal and 4 deaf cats were acquired for 30 minutes using a microPET R4 scanner. Only the brain cortices were extracted using a masking and threshold method to facilitate spatial normalization. After spatial normalization and smoothing, 3D voxel-wise and ROI-based t-tests were performed to identify the regions with significantly different FDG uptake between the normal and deaf cats. In the ROI analysis, 26 ROIs were drawn on both hemispheres, and the regional mean pixel value in each ROI was normalized to the global mean of the brain. Cat brains were spatially normalized well onto the target brain due to the removal of background activity. When the cerebral glucose metabolism of the deaf cats was compared to that of the normal controls after removing the effects of the global count, the glucose metabolism in the auditory cortex, head of the caudate nucleus, and thalamus in both hemispheres of the deaf cats was significantly lower than that of the controls (P<0.01). No area showed significantly increased metabolism in the deaf cats, even at a less stringent significance level (P<0.05). The ROI analysis also showed a significant reduction of glucose metabolism in the same regions. This study established and confirmed a method for voxel-based analysis of animal PET data of the cat brain, which showed high localization accuracy and specificity and was useful for examining the cerebral glucose metabolism in a cat cortical deafness model.
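
    The sketch below is not the SPM pipeline used in the study; it only illustrates the core statistical step described above, i.e. a voxel-wise two-sample t-test on globally normalized images, using synthetic arrays in place of spatially normalized and smoothed FDG PET volumes.

```python
import numpy as np
from scipy import stats

def voxelwise_ttest(group_a, group_b, mask):
    """group_a, group_b: lists of 3D arrays (assumed already spatially normalized and
    smoothed); mask: boolean 3D array of brain voxels. Returns voxel-wise t and p maps."""
    # Proportional scaling: divide each image by its global (in-mask) mean
    a = np.stack([img / img[mask].mean() for img in group_a])
    b = np.stack([img / img[mask].mean() for img in group_b])
    t_map = np.zeros(mask.shape)
    p_map = np.ones(mask.shape)
    t_map[mask], p_map[mask] = stats.ttest_ind(a[:, mask], b[:, mask], axis=0)
    return t_map, p_map

# Toy data: 4 "normal" and 4 "deaf" random 16x16x16 volumes, whole-volume mask
rng = np.random.default_rng(0)
normal = [rng.normal(100, 10, (16, 16, 16)) for _ in range(4)]
deaf = [rng.normal(100, 10, (16, 16, 16)) for _ in range(4)]
mask = np.ones((16, 16, 16), dtype=bool)
t_map, p_map = voxelwise_ttest(normal, deaf, mask)
print("voxels with P < 0.01 (uncorrected):", int((p_map < 0.01).sum()))
```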

  15. Voxel based statistical analysis method for microPET studies to assess the cerebral glucose metabolism in cat deafness model: comparison to ROI based method

    International Nuclear Information System (INIS)

    Kim, Jin Su; Lee, Jae Sung; Park, Min Hyun; Lee, Jong Jin; Kang, Hye Jin; Lee, Hyo Jeong; Oh, Seung Ha; Kim, Chong Sun; Jung, June Key; Lee, Myung Chul; Lee, Dong Soo; Lim, Sang Moo

    2005-01-01

    Imaging research on the brain of sensory-deprived cats using a small animal PET scanner has gained interest, since abundant information about the sensory system of this animal is available and close examination of the brain is possible owing to its larger brain size compared with the mouse or rat. In this study, we established the procedures for 3D voxel-based statistical analysis (SPM) of FDG PET images of the cat brain and confirmed them using an ROI-based method. FDG PET scans of 4 normal and 4 deaf cats were acquired for 30 minutes using a microPET R4 scanner. Only the brain cortices were extracted using a masking and threshold method to facilitate spatial normalization. After spatial normalization and smoothing, 3D voxel-wise and ROI-based t-tests were performed to identify the regions with significantly different FDG uptake between the normal and deaf cats. In the ROI analysis, 26 ROIs were drawn on both hemispheres, and the regional mean pixel value in each ROI was normalized to the global mean of the brain. Cat brains were spatially normalized well onto the target brain due to the removal of background activity. When the cerebral glucose metabolism of the deaf cats was compared to that of the normal controls after removing the effects of the global count, the glucose metabolism in the auditory cortex, head of the caudate nucleus, and thalamus in both hemispheres of the deaf cats was significantly lower than that of the controls (P<0.01). No area showed significantly increased metabolism in the deaf cats, even at a less stringent significance level (P<0.05). The ROI analysis also showed a significant reduction of glucose metabolism in the same regions. This study established and confirmed a method for voxel-based analysis of animal PET data of the cat brain, which showed high localization accuracy and specificity and was useful for examining the cerebral glucose metabolism in a cat cortical deafness model.

  16. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    Science.gov (United States)

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
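
    The exact analytical model is not given in the record, so the toy sketch below only illustrates the qualitative behaviour described above: when the parallelizable compute time dominates non-distributed memory and transfer costs, the predicted speedup approaches the number of GPUs. All timing parameters are hypothetical.

```python
def predicted_time(n_gpus, compute_s, memory_s, transfer_s_per_gpu):
    """Toy performance model (illustrative, not the paper's exact formulation):
    compute work is divided across GPUs, while memory operations and per-GPU
    transfers over the server interconnect are not parallelized."""
    return compute_s / n_gpus + memory_s + transfer_s_per_gpu * n_gpus

# Hypothetical workload: 120 s of compute, 2 s of memory ops, 0.1 s transfer per GPU
t1 = predicted_time(1, 120.0, 2.0, 0.1)
for n in (2, 4, 8, 14):
    tn = predicted_time(n, 120.0, 2.0, 0.1)
    print(f"{n:2d} GPUs: predicted {tn:6.1f} s, speedup {t1 / tn:4.1f}x")
```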

  17. Model-based Software Engineering

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  18. HOMER Based Feasibility Study of Off-Grid Biogas Power Generation Model Using Poultry Litter for Rural Bangladesh

    OpenAIRE

    Sharmin Sobhan; Tanvir Ahmad; Md. Jakaria Rahimi; Md. Habib Ullah; Shaila Arif

    2016-01-01

    Lack of access to electricity is one of the major impediments to economic growth and development in any developing country. The limited reserves of conventional fuel and the geo-location of Bangladesh also raise the demand for an effective alternative energy source for rural electrification. This document proposes a poultry-litter-based home power generation model for rural Bangladesh and assesses its feasibility through HOMER, a micro-power modelling and optimization software. The introduc...

  19. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  20. Predictive Accuracy of a Cardiovascular Disease Risk Prediction Model in Rural South India – A Community Based Retrospective Cohort Study

    Directory of Open Access Journals (Sweden)

    Farah N Fathima

    2015-03-01

    Full Text Available Background: Identification of individuals at risk of developing cardiovascular diseases by risk stratification is the first step in primary prevention. Aims & Objectives: To assess the five-year risk of developing a cardiovascular event from retrospective data and to assess the predictive accuracy of the non-laboratory-based National Health and Nutrition Examination Survey (NHANES) risk prediction model among individuals in a rural South Indian population. Materials & Methods: A community-based retrospective cohort study was conducted in three villages where risk stratification was done for all eligible adults aged between 35-74 years at the time of initial assessment using the NHANES risk prediction charts. Household visits were made after a period of five years by trained doctors to determine cardiovascular outcomes. Results: 521 people fulfilled the eligibility criteria, of whom 486 (93.3%) could be traced after five years. 56.8% were in the low-risk, 36.6% in the moderate-risk and 6.6% in the high-risk category. 29 persons (5.97%) had had cardiovascular events over the last five years, of which 24 events (82.7%) were nonfatal and five (17.25%) were fatal. The mean age of the people who developed cardiovascular events was 57.24 ± 9.09 years. The odds ratios for the three levels of risk showed a linear trend, with the odds ratios for the moderate-risk and high-risk categories being 1.35 and 1.94 respectively, with the low-risk category as baseline. Conclusion: The non-laboratory-based NHANES charts did not accurately predict the occurrence of cardiovascular events in any of the risk categories.
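
    As an illustration of the reported odds-ratio comparison against the low-risk baseline, the snippet below computes category-wise odds ratios from a small event table. The per-category event counts are hypothetical, since the abstract reports only overall proportions; only the calculation itself is shown.

```python
def odds_ratio(events, total, events_ref, total_ref):
    """Odds ratio of an event in one risk category relative to a reference category."""
    odds = events / (total - events)
    odds_ref = events_ref / (total_ref - events_ref)
    return odds / odds_ref

# Hypothetical event counts per NHANES risk category (illustrative only; the abstract
# reports overall proportions, not the per-category event counts used here).
categories = {"low": (12, 276), "moderate": (14, 178), "high": (3, 32)}
ref_events, ref_total = categories["low"]
for name, (events, total) in categories.items():
    print(f"{name:8s}: OR = {odds_ratio(events, total, ref_events, ref_total):.2f} vs low-risk baseline")
```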

  1. Risk based modelling

    International Nuclear Information System (INIS)

    Chapman, O.J.V.; Baker, A.E.

    1993-01-01

    Risk based analysis is a tool becoming available to both engineers and managers to aid decision making concerning plant matters such as In-Service Inspection (ISI). In order to develop a risk based method, some form of Structural Reliability Risk Assessment (SRRA) needs to be performed to provide a probability of failure ranking for all sites around the plant. A Probabilistic Risk Assessment (PRA) can then be carried out to combine these possible events with the capability of plant safety systems and procedures, to establish the consequences of failure for the sites. In this way the probabilities of failure are converted into a risk based ranking which can be used to assist the process of deciding which sites should be included in an ISI programme. This paper reviews the technique and typical results of a risk based ranking assessment carried out for nuclear power plant pipework. (author)

  2. Association between Perceived Value and Self-Medication with Antibiotics: An Observational Study Based on Health Belief Model Theory

    Directory of Open Access Journals (Sweden)

    Annisa N. Insany

    2015-06-01

    Full Text Available The high prevalence of self-medication with antibiotics can increase the probability of irrational antibiotic use, which may lead to antibiotic resistance. Thus, a shift in behavior is required to minimize the irrational use of antibiotics. This study aimed to determine the association between public perceived value and self-medication with antibiotics, which can be used to develop an intervention model to reduce the practice of self-medication with antibiotics. An observational study was conducted during the period of November–December 2014. The subjects were patients who visited primary health care facilities in Bandung. A validated structured interview was used to investigate the association between perceived value and self-medication behavior based on the Health Belief Model theory (perceived susceptibility, benefits, barrier, and cues to action). Approximately 506 respondents were drawn randomly from 43 community healthcare centers and 8 pharmacies. Data were analyzed using descriptive statistics and logistic regression (CI 95%, α = 5%). Validity and reliability of the questionnaire were shown by a correlation coefficient of >0.3 and a Cronbach's alpha value of 0.719, respectively. We found that 29.45% of respondents had practiced self-medication with antibiotics over the last six months. Additionally, there was no significant association between perceived susceptibility, benefits, barrier, or cues to action and self-medication behavior (p>0.05). Ease of access to antibiotics without a prescription was presumed to be a factor contributing to self-medication with antibiotics; therefore, stricter regulation of antibiotic use is needed as a basic intervention to decrease self-medication with antibiotics.
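
    A minimal sketch of the kind of logistic-regression association test described above (CI 95%, α = 5%), using statsmodels on synthetic data; the column names mirror the Health Belief Model constructs but the data and effect sizes are invented, so the output does not reproduce the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic survey data; names are illustrative stand-ins for the HBM constructs.
rng = np.random.default_rng(1)
n = 506
df = pd.DataFrame({
    "susceptibility": rng.normal(3.5, 0.8, n),
    "benefits":       rng.normal(3.8, 0.7, n),
    "barrier":        rng.normal(3.0, 0.9, n),
    "cues_to_action": rng.normal(3.2, 0.8, n),
})
df["self_medication"] = rng.binomial(1, 0.29, n)   # ~29% prevalence, independent of predictors

X = sm.add_constant(df[["susceptibility", "benefits", "barrier", "cues_to_action"]])
model = sm.Logit(df["self_medication"], X).fit(disp=False)

ci = np.exp(model.conf_int())                      # 95% CI for the odds ratios
ci["OR"] = np.exp(model.params)
print(ci.round(3))                                 # with independent simulated data, CIs straddle 1
```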

  3. Decision making for business model development : A process study of effectuation and causation in new technology-based ventures

    NARCIS (Netherlands)

    Reymen, Isabelle; Berends, Hans; Oudehand, Rob; Stultiëns, Rutger

    2017-01-01

    This study investigates the decision-making logics used by new ventures to develop their business models. In particular, the authors focus on the logics of effectuation and causation and how their dynamics shape the development of business models over time. They found that the effectual decision-making

  4. Student Collaboration in a Series of Integrated Experiments to Study Enzyme Reactor Modeling with Immobilized Cell-Based Invertase

    Science.gov (United States)

    Taipa, M. Ângela; Azevedo, Ana M.; Grilo, António L.; Couto, Pedro T.; Ferreira, Filipe A. G.; Fortuna, Ana R. M.; Pinto, Inês F.; Santos, Rafael M.; Santos, Susana B.

    2015-01-01

    An integrative laboratory study addressing fundamentals of enzyme catalysis and their application to reactor operation and modeling is presented. Invertase, a β-fructofuranosidase that catalyses the hydrolysis of sucrose, is used as the model enzyme at optimal conditions (pH 4.5 and 45 °C). The experimental work involves 3 h of laboratory time…

  5. Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies

    DEFF Research Database (Denmark)

    Troelsen, Jens; Klinker, Charlotte Demant; Breum, Lars

    Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies Introduction: Ecological models of health behavior have potential as theoretical framework to comprehend the multiple levels of factors influencing physical...... activity (PA). The potential is shown by the fact that there has been a dramatic increase in application of ecological models in research and practice. One proposed core principle is that an ecological model is most powerful if the model is behavior-specific. However, based on multi-level interventions...... to be taken into consideration. A theoretical implication of this finding is to develop a site-specific physical activity behavior model adding a layered structure to the ecological model representing the determinants related to the specific site. Support: This study was supported by TrygFonden, Realdania......

  6. Determining the infrared radiative effects of Saharan dust: a radiative transfer modelling study based on vertically resolved measurements at Lampedusa

    Science.gov (United States)

    Meloni, Daniela; di Sarra, Alcide; Brogniez, Gérard; Denjean, Cyrielle; De Silvestri, Lorenzo; Di Iorio, Tatiana; Formenti, Paola; Gómez-Amo, José L.; Gröbner, Julian; Kouremeti, Natalia; Liuzzi, Giuliano; Mallet, Marc; Pace, Giandomenico; Sferlazzo, Damiano M.

    2018-03-01

    Detailed measurements of radiation, atmospheric and aerosol properties were carried out in summer 2013 during the Aerosol Direct Radiative Impact on the regional climate in the MEDiterranean region (ADRIMED) campaign in the framework of the Chemistry-Aerosol Mediterranean Experiment (ChArMEx) experiment. This study focusses on the characterization of infrared (IR) optical properties and direct radiative effects of mineral dust, based on three vertical profiles of atmospheric and aerosol properties and IR broadband and narrowband radiation from airborne measurements, made in conjunction with radiosonde and ground-based observations at Lampedusa, in the central Mediterranean. Satellite IR spectra from the Infrared Atmospheric Sounder Interferometer (IASI) are also included in the analysis. The atmospheric and aerosol properties are used as input to a radiative transfer model, and various IR radiation parameters (upward and downward irradiance, nadir and zenith brightness temperature at different altitudes) are calculated and compared with observations. The model calculations are made for different sets of dust particle size distribution (PSD) and refractive index (RI), derived from observations and from the literature. The main results of the analysis are that the IR dust radiative forcing is non-negligible and strongly depends on PSD and RI. When calculations are made using the in situ measured size distribution, it is possible to identify the refractive index that produces the best match with observed IR irradiances and brightness temperatures (BTs). The most appropriate refractive indices correspond to those determined from independent measurements of mineral dust aerosols from the source regions (Tunisia, Algeria, Morocco) of dust transported over Lampedusa, suggesting that differences in the source properties should be taken into account. With the in situ size distribution and the most appropriate refractive index the estimated dust IR radiative forcing

  7. Modeling and parametric study of a 1 kWe HT-PEMFC-based residential micro-CHP system

    DEFF Research Database (Denmark)

    Arsalis, Alexandros; Nielsen, Mads Pagh; Kær, Søren Knudsen

    2011-01-01

    A detailed thermodynamic, kinetic and geometric model of a micro-CHP (Combined-Heat-and-Power) residential system based on High Temperature-Proton Exchange Membrane Fuel Cell (HT-PEMFC) technology is developed, implemented and validated. HT-PEMFC technology is investigated as a possible candidate...

  8. Agent-based modelling in applied ethology: an exploratory case study of behavioural dynamics in tail biting in pigs

    NARCIS (Netherlands)

    Boumans, I.J.M.M.; Hofstede, G.J.; Bolhuis, J.E.; Boer, de I.J.M.; Bokkers, E.A.M.

    2016-01-01

    Understanding behavioural dynamics in pigs is important to assess pig welfare in current intensive pig production systems. Agent-based modelling (ABM) is an approach to gain insight into behavioural dynamics in pigs, but its use in applied ethology and animal welfare science has been limited so far.

  9. An economic theory-based explanatory model of agricultural land-use patterns: The Netherlands as a case study

    NARCIS (Netherlands)

    Diogo, V.; Koomen, E.; Kuhlman, T.

    2015-01-01

    An economic theory-based land-use modelling framework is presented aiming to explain the causal link between economic decisions and resulting spatial patterns of agricultural land use. The framework assumes that farmers pursue utility maximisation in agricultural production systems, while

  10. Effect of DEM resolution on rainfall-triggered landslide modeling within a triangulated network-based model. A case study in the Luquillo Forest, Puerto Rico

    Science.gov (United States)

    Arnone, E.; Dialynas, Y. G.; Noto, L. V.; Bras, R. L.

    2013-12-01

    Catchment slope distribution is one of the topographic characteristics that significantly control rainfall-triggered landslide modeling, in both direct and indirect ways. Slope directly determines the soil volume associated with instability. Indirectly, slope also affects the subsurface lateral redistribution of soil moisture across the basin, which in turn determines the water pore pressure conditions that impact slope stability. In this study, we investigate the influence of DEM resolution on the slope stability analysis by using a distributed eco-hydrological and landslide model, tRIBS-VEGGIE (Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator - VEGetation Generator for Interactive Evolution). The model implements a triangulated irregular network to describe the topography, and it is capable of evaluating vegetation dynamics and predicting shallow landslides triggered by rainfall. The impact of DEM resolution on the landslide prediction was studied using five TINs derived from five grid DEMs at different resolutions, i.e. 10, 20, 30, 50 and 70 m respectively. The analysis was carried out on the Mameyes Basin, located in the Luquillo Experimental Forest in Puerto Rico, where previous landslide analyses have been carried out. Results showed that the use of the irregular mesh reduced the loss of accuracy in the derived slope distribution when coarser resolutions were used. The impact of the different resolutions on soil moisture patterns was important only when the lateral redistribution was considerable, depending on hydrological properties and rainfall forcing. In some cases, the use of different DEM resolutions did not significantly affect the tRIBS-VEGGIE landslide output in terms of landslide locations and values of slope and soil moisture at failure.

  11. Knowledge-based model of competition in restaurant industry: a qualitative study about culinary competence, creativity, and innovation in five full-service restaurants in Jakarta

    OpenAIRE

    NAPITUPULU JOSHUA H.; ASTUTI ENDANG SITI; HAMID DJAMHUR; RAHARDJO KUSDI

    2016-01-01

    The purpose of the study is to provide an in-depth description, through the analysis of culinary competence, creativity and innovation, that develops a knowledge-based model of competence in the full-service restaurant business. Studies on restaurants have generally focused on customers, particularly customer satisfaction and loyalty, and very few studies have discussed internal competitive factors in the restaurant business. The study aims at filling this research gap, using a knowledge-based approach t...

  12. Model-based consensus

    NARCIS (Netherlands)

    Boumans, M.; Martini, C.; Boumans, M.

    2014-01-01

    The aim of the rational-consensus method is to produce "rational consensus", that is, "mathematical aggregation", by weighing the performance of each expert on the basis of his or her knowledge and ability to judge relevant uncertainties. The measurement of the performance of the experts is based on

  13. Model-based consensus

    NARCIS (Netherlands)

    Boumans, Marcel

    2014-01-01

    The aim of the rational-consensus method is to produce “rational consensus”, that is, “mathematical aggregation”, by weighing the performance of each expert on the basis of his or her knowledge and ability to judge relevant uncertainties. The measurement of the performance of the experts is based on

  14. Studies and analyses of the space shuttle main engine. Failure information propagation model data base and software

    Science.gov (United States)

    Tischer, A. E.

    1987-01-01

    The failure information propagation model (FIPM) data base was developed to store and manipulate the large amount of information anticipated for the various Space Shuttle Main Engine (SSME) FIPMs. The organization and structure of the FIPM data base is described, including a summary of the data fields and key attributes associated with each FIPM data file. The menu-driven software developed to facilitate and control the entry, modification, and listing of data base records is also discussed. The transfer of the FIPM data base and software to the NASA Marshall Space Flight Center is described. Complete listings of all of the data base definition commands and software procedures are included in the appendixes.

  15. A validated agent-based model to study the spatial and temporal heterogeneities of malaria incidence in the rainforest environment.

    Science.gov (United States)

    Pizzitutti, Francesco; Pan, William; Barbieri, Alisson; Miranda, J Jaime; Feingold, Beth; Guedes, Gilvan R; Alarcon-Valenzuela, Javiera; Mena, Carlos F

    2015-12-22

    The Amazon environment has been exposed in the last decades to radical changes that have been accompanied by a remarkable rise of both Plasmodium falciparum and Plasmodium vivax malaria. The malaria transmission process is highly influenced by factors such as spatial and temporal heterogeneities of the environment and individual-based characteristics of mosquito and human populations. All these determinant factors can be simulated effectively through agent-based models. This paper presents a validated agent-based model of local-scale malaria transmission. The model reproduces the environment of a typical riverine village in the northern Peruvian Amazon, where malaria transmission is highly seasonal and apparently associated with flooding of large areas caused by the neighbouring river. Agents representing humans, mosquitoes and the two species of Plasmodium (P. falciparum and P. vivax) are simulated in a spatially explicit representation of the environment around the village. The model environment includes climate, the positions of people's houses and elevation. A representation of changes in the extent of mosquito breeding areas caused by the river flooding is also included in the simulation environment. A calibration process was carried out to reproduce the variations of the malaria monthly incidence over a period of 3 years. The calibrated model is also able to reproduce the spatial heterogeneities of local-scale malaria transmission. A "what if" eradication strategy scenario is proposed: if the mosquito breeding sites are eliminated through mosquito larva habitat management in a buffer area extending at least 200 m around the village, malaria transmission is eradicated from the village. Agent-based models can effectively reproduce the spatiotemporal variations of malaria transmission in a low-endemicity environment dominated by river flooding, as in the Amazon.
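
    The calibrated model itself is far richer than anything that can be shown here; the toy sketch below only illustrates the general idea of an agent-based, seasonally forced transmission simulation and of the "what if" breeding-site intervention. Every parameter (infection and recovery probabilities, the seasonal forcing, the breeding_factor knob) is an arbitrary assumption, not a value from the study.

```python
import math
import random

def simulate_village(months=36, n_humans=300, breeding_factor=1.0, seed=2):
    """Minimal toy agent-based sketch: humans are susceptible or infected; the chance
    of new infections each month scales with a seasonal (flood-driven) mosquito
    abundance term times breeding_factor (1.0 = baseline, 0.0 = breeding sites removed)."""
    rng = random.Random(seed)
    infected = [False] * n_humans
    monthly_cases = []
    for m in range(months):
        season = 0.5 * (1 + math.sin(2 * math.pi * m / 12))       # 0..1 seasonal cycle
        pressure = breeding_factor * season
        new_cases = 0
        for i in range(n_humans):
            if not infected[i] and rng.random() < 0.08 * pressure * (1 + sum(infected) / n_humans):
                infected[i] = True
                new_cases += 1
            elif infected[i] and rng.random() < 0.5:               # recovery
                infected[i] = False
        monthly_cases.append(new_cases)
    return monthly_cases

baseline = simulate_village()
intervention = simulate_village(breeding_factor=0.1)
print("baseline yearly cases:    ", sum(baseline[:12]), sum(baseline[12:24]), sum(baseline[24:]))
print("intervention yearly cases:", sum(intervention[:12]), sum(intervention[12:24]), sum(intervention[24:]))
```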

  16. A comprehensive tool for image-based generation of fetus and pregnant women mesh models for numerical dosimetry studies

    International Nuclear Information System (INIS)

    Dahdouh, S; Serrurier, A; De la Plata, J-P; Anquez, J; Angelini, E D; Bloch, I; Varsier, N; Wiart, J

    2014-01-01

    Fetal dosimetry studies require the development of accurate numerical 3D models of the pregnant woman and the fetus. This paper proposes a 3D articulated fetal growth model covering the main phases of pregnancy and a pregnant woman model combining the utero-fetal structures and a deformable non-pregnant woman body envelope. The structures of interest were automatically or semi-automatically (depending on the stage of pregnancy) segmented from a database of images and surface meshes were generated. By interpolating linearly between fetal structures, each one can be generated at any age and in any position. A method is also described to insert the utero-fetal structures in the maternal body. A validation of the fetal models is proposed, comparing a set of biometric measurements to medical reference charts. The usability of the pregnant woman model in dosimetry studies is also investigated, with respect to the influence of the abdominal fat layer. (paper)

  17. Multilayer perceptron neural network-based approach for modeling phycocyanin pigment concentrations: case study from lower Charles River buoy, USA.

    Science.gov (United States)

    Heddam, Salim

    2016-09-01

    This paper proposes a multilayer perceptron neural network (MLPNN) to predict the phycocyanin (PC) pigment concentration using water quality variables as predictors. In the proposed model, four water quality variables, namely water temperature, dissolved oxygen, pH, and specific conductance, were selected as the inputs for the MLPNN model, and PC as the output. To demonstrate the capability and usefulness of the MLPNN model, a total of 15,849 data points measured at 15-min intervals were used for the development of the model. The data were collected at the lower Charles River buoy and are available from the US Environmental Protection Agency (USEPA). For comparison purposes, a multiple linear regression (MLR) model, which was frequently used for predicting water quality variables in previous studies, was also built. The performances of the models were evaluated using a set of widely used statistical indices and compared against the measured data. The obtained results show that (i) all the proposed MLPNN models are more accurate than the MLR models and (ii) the results obtained are very promising and encouraging for the development of phycocyanin-predictive models.
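
    A minimal sketch of the MLPNN-versus-MLR comparison described above, using scikit-learn on synthetic data in place of the 15,849 buoy records; the four input columns stand in for water temperature, dissolved oxygen, pH and specific conductance, and the nonlinear target is invented, so the scores are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the buoy record: temperature, dissolved oxygen, pH, conductance
rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([rng.normal(20, 5, n), rng.normal(8, 1.5, n),
                     rng.normal(7.5, 0.4, n), rng.normal(600, 80, n)])
y = 0.5 * X[:, 0] + 2.0 * np.sin(X[:, 2]) + 0.01 * X[:, 3] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
mlr = LinearRegression().fit(X_tr, y_tr)
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)).fit(X_tr, y_tr)

for name, model in [("MLR", mlr), ("MLPNN", mlp)]:
    pred = model.predict(X_te)
    print(f"{name:6s} R2 = {r2_score(y_te, pred):.3f}  RMSE = {mean_squared_error(y_te, pred) ** 0.5:.3f}")
```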

  18. Model-based studies into ground water movement, with water density depending on salt content. Case studies and model validation with respect to the long-term safety of radwaste repositories. Final report

    International Nuclear Information System (INIS)

    Schelkes, K.

    1995-12-01

    Near-to-reality studies of ground water movement in the environment of planned radwaste repositories have to take into account that the flow conditions are influenced by the water density, which in turn depends on the salt content. Based on results from earlier studies, computer programs were established that allow the computation and modelling of ground water movement in salt water/fresh water systems, and the programs were tested and improved in line with the progress of the studies performed under the INTRAVAL international project. The computed models of ground water movement in the region of the Gorlebener Rinne showed, for strongly simplified model profiles, that the resulting salinity distribution responds very sensitively to the applied model geometry, the initial salinity distribution, the time frame of the model, and the size of the transversal dispersion length. The WIPP 2 INTRAVAL experiment likewise studied a large-area ground water movement system influenced by salt water. Based on the concept of a hydraulically closed, regional ground water system (basin model), a sectional profile was worked out covering all relevant layers of the cap rock above the salt formation planned to serve as a repository. The model data derived to describe the salt water/fresh water movements in this profile resulted in substantial enlargements and modifications of the ROCKFLOW computer program applied (relating to input data for dispersion modelling, the particle tracker, and the computer graphics interface), and yielded important information for the modelling of such systems (relating to initial pressure data at the upper margin, mesh refinement for important concentration boundary conditions, and the treatment of permeability contrasts). (orig.)

  19. Model-based and design-based inference goals frame how to account for neighborhood clustering in studies of health in overlapping context types.

    Science.gov (United States)

    Lovasi, Gina S; Fink, David S; Mooney, Stephen J; Link, Bruce G

    2017-12-01

    Accounting for non-independence in health research often warrants attention. Particularly, the availability of geographic information systems data has increased the ease with which studies can add measures of the local "neighborhood" even if participant recruitment was through other contexts, such as schools or clinics. We highlight a tension between two perspectives that is often present, but particularly salient when more than one type of potentially health-relevant context is indexed (e.g., both neighborhood and school). On the one hand, a model-based perspective emphasizes the processes producing outcome variation, and observed data are used to make inference about that process. On the other hand, a design-based perspective emphasizes inference to a well-defined finite population, and is commonly invoked by those using complex survey samples or those with responsibility for the health of local residents. These two perspectives have divergent implications when deciding whether clustering must be accounted for analytically and how to select among candidate cluster definitions, though the perspectives are by no means monolithic. There are tensions within each perspective as well as between perspectives. We aim to provide insight into these perspectives and their implications for population health researchers. We focus on the crucial step of deciding which cluster definition or definitions to use at the analysis stage, as this has consequences for all subsequent analytic and interpretational challenges with potentially clustered data.
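
    As a concrete illustration of the two perspectives contrasted above, the sketch below fits the same regression to synthetic clustered data twice: once as a random-intercept mixed model (a model-based treatment of the neighborhood clustering) and once as OLS with cluster-robust standard errors (a design-based treatment). Variable names and the data-generating values are assumptions for demonstration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic clustered data: individuals nested in 30 "neighborhoods", with a
# neighborhood-level shift in the outcome (all names and values are illustrative).
rng = np.random.default_rng(4)
n_clusters, per_cluster = 30, 40
cluster = np.repeat(np.arange(n_clusters), per_cluster)
x = rng.normal(size=n_clusters * per_cluster)
y = (1.0 + 0.5 * x + rng.normal(scale=0.8, size=n_clusters)[cluster]
     + rng.normal(size=n_clusters * per_cluster))
df = pd.DataFrame({"y": y, "x": x, "neighborhood": cluster})

# Model-based perspective: a random-intercept (mixed) model of the outcome-generating process
mixed = smf.mixedlm("y ~ x", df, groups=df["neighborhood"]).fit()
print(mixed.summary())

# Design-based perspective: the same point estimate with cluster-robust standard errors
ols = smf.ols("y ~ x", df).fit(cov_type="cluster", cov_kwds={"groups": df["neighborhood"]})
print(ols.summary())
```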

  20. Model-based and design-based inference goals frame how to account for neighborhood clustering in studies of health in overlapping context types

    Directory of Open Access Journals (Sweden)

    Gina S. Lovasi

    2017-12-01

    Full Text Available Accounting for non-independence in health research often warrants attention. Particularly, the availability of geographic information systems data has increased the ease with which studies can add measures of the local “neighborhood” even if participant recruitment was through other contexts, such as schools or clinics. We highlight a tension between two perspectives that is often present, but particularly salient when more than one type of potentially health-relevant context is indexed (e.g., both neighborhood and school). On the one hand, a model-based perspective emphasizes the processes producing outcome variation, and observed data are used to make inference about that process. On the other hand, a design-based perspective emphasizes inference to a well-defined finite population, and is commonly invoked by those using complex survey samples or those with responsibility for the health of local residents. These two perspectives have divergent implications when deciding whether clustering must be accounted for analytically and how to select among candidate cluster definitions, though the perspectives are by no means monolithic. There are tensions within each perspective as well as between perspectives. We aim to provide insight into these perspectives and their implications for population health researchers. We focus on the crucial step of deciding which cluster definition or definitions to use at the analysis stage, as this has consequences for all subsequent analytic and interpretational challenges with potentially clustered data.

  1. Identification and analysis of labor productivity components based on ACHIEVE model (case study: staff of Kermanshah University of Medical Sciences).

    Science.gov (United States)

    Ziapour, Arash; Khatony, Alireza; Kianipour, Neda; Jafary, Faranak

    2014-12-15

    Identification and analysis of the components of labor productivity based on the ACHIEVE model was performed among employees in different parts of Kermanshah University of Medical Sciences in 2014. This was a descriptive correlational study in which the sample consisted of 270 working personnel in different administrative groups (contractual, fixed-term and regular), selected from the 872 employees of Kermanshah University of Medical Sciences through the stratified random sampling method based on the Krejcie and Morgan sampling table. The survey tool was the ACHIEVE labor productivity questionnaire. The questionnaires were confirmed in terms of content and face validity, and their reliability was calculated using Cronbach's alpha coefficient. The data were analyzed with SPSS-18 software using descriptive and inferential statistics. The mean scores for the labor productivity dimensions of the employees, including the environment (environmental fit), evaluation (training and performance feedback), validity (valid and legal exercise of personnel), incentive (motivation or desire), help (organizational support), clarity (role perception or understanding), and ability (knowledge and skills) variables and total labor productivity, were 4.10±0.630, 3.99±0.568, 3.97±0.607, 3.76±0.701, 3.63±0.746, 3.59±0.777, 3.49±0.882 and 26.54±4.347, respectively. Also, the results indicated that the seven factors of environment, performance assessment, validity, motivation, organizational support, clarity, and ability were effective in increasing labor productivity. The analysis of the current status of university staff from the employees' viewpoint suggested that the two factors of environment and evaluation, which had the greatest impact on labor productivity in the viewpoint of the staff, were in a favorable condition and needed to be further taken into consideration by the authorities.
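
    The reliability figure reported above is a Cronbach's alpha coefficient; the short sketch below shows the standard formula applied to synthetic Likert-type responses (the data and the resulting alpha are illustrative, not the questionnaire's).

```python
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2D array, rows = respondents, columns = questionnaire items."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy Likert-type responses (synthetic), just to exercise the formula
rng = np.random.default_rng(5)
latent = rng.normal(size=(200, 1))
responses = np.clip(np.rint(3 + latent + rng.normal(scale=0.7, size=(200, 10))), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```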

  2. Identification and Analysis of Labor Productivity Components Based on ACHIEVE Model (Case Study: Staff of Kermanshah University of Medical Sciences)

    Science.gov (United States)

    Ziapour, Arash; Khatony, Alireza; Kianipour, Neda; Jafary, Faranak

    2015-01-01

    Identification and analysis of the components of labor productivity based on the ACHIEVE model was performed among employees in different parts of Kermanshah University of Medical Sciences in 2014. This was a descriptive correlational study in which the sample consisted of 270 working personnel in different administrative groups (contractual, fixed-term and regular), selected from the 872 employees of Kermanshah University of Medical Sciences through the stratified random sampling method based on the Krejcie and Morgan sampling table. The survey tool was the ACHIEVE labor productivity questionnaire. The questionnaires were confirmed in terms of content and face validity, and their reliability was calculated using Cronbach's alpha coefficient. The data were analyzed with SPSS-18 software using descriptive and inferential statistics. The mean scores for the labor productivity dimensions of the employees, including the environment (environmental fit), evaluation (training and performance feedback), validity (valid and legal exercise of personnel), incentive (motivation or desire), help (organizational support), clarity (role perception or understanding), and ability (knowledge and skills) variables and total labor productivity, were 4.10±0.630, 3.99±0.568, 3.97±0.607, 3.76±0.701, 3.63±0.746, 3.59±0.777, 3.49±0.882 and 26.54±4.347, respectively. Also, the results indicated that the seven factors of environment, performance assessment, validity, motivation, organizational support, clarity, and ability were effective in increasing labor productivity. The analysis of the current status of university staff from the employees' viewpoint suggested that the two factors of environment and evaluation, which had the greatest impact on labor productivity in the viewpoint of the staff, were in a favorable condition and needed to be further taken into consideration by the authorities. PMID:25560364

  3. Activity-based DEVS modeling

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2018-01-01

    Use of model-driven approaches has been increasing to significantly benefit the process of building complex systems. Recently, an approach for specifying model behavior using UML activities has been devised to support the creation of DEVS models in a disciplined manner based on the model driven...... architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number...... of the artifacts of the UML 2.5 activities and actions, from the vantage point of DEVS behavioral modeling, is covered in detail. Their semantics are discussed to the extent of time-accurate requirements for simulation. We characterize them in correspondence with the specification of the atomic model behavior. We...

  4. The Assessment of Green Water Based on the SWAT Model: A Case Study in the Hai River Basin, China

    Directory of Open Access Journals (Sweden)

    Kui Zhu

    2018-06-01

    Full Text Available Green water accounts for two-thirds of precipitation, and the proportion could be even higher in dry years. Conflicts between water supply and demand have gradually become severe in the Hai River Basin (HRB) due to the socio-economic development. Thus, the exploitation and the utilization of green water have attracted increasing attention. By gathering the related hydrological, meteorological, and geographic data, the spatiotemporal distribution of green water in HRB and the impacts of land use types on green water are analyzed based on the SWAT (Soil and Water Assessment Tool) model in this study. Furthermore, three new indices are proposed for evaluation, including the maximum possible storage of green water (MSGW), the consumed green water (CGW), and the utilizable green water (UGW). The results show that (1) the MSGW is relatively low in plain areas and its spatial distribution is significantly associated with the soil type; (2) according to the evaluation results of CGW and UGW in HRB, a further improvement of the utilization efficiency of green water could be achieved; (3) in general, the utilization efficiency of precipitation in farmlands is higher than in other land use types, which means that the planting of appropriate plants could be helpful to enhance the utilization efficiency of green water. Our results summarize the spatiotemporal distribution of the green water resource and provide a reference for water resources management in other water-short agricultural areas.

  5. Performance evaluation in competence-based learning model in higher education scenarios using social network: a case study

    Directory of Open Access Journals (Sweden)

    Katherina Edith GALLARDO CÓRDOVA

    2017-12-01

    Full Text Available Research on performance evaluation was conducted in a graduate online course designed according to the competency-based model. Facebook was used as a social and interactive tool to share information illustrating various aspects of diverse educational contexts as well as the impacts of the implementation of improvement projects, seen from the beneficiaries' perspective. The case study was the methodology selected. Postgraduate students were tasked with working on improvements in learning assessment matters. The educational scenarios were located in Mexico and Colombia. Seven units of analysis were chosen from 34 possible. The findings pointed out that students worked on their contexts in alignment with the stipulated academic competencies. The use of video materials posted and shared on Facebook allowed a deeper understanding of the way the benefits influenced each of the educational communities. These products also evidenced appropriate student performance. In conclusion, the use of social networks for fortifying performance assessment is highly recommended. Moreover, it is expected that these benefits also influence some of the curricular and instructional design aspects.

  6. Electronic medical record in central polyclinic of isfahan oil industry: a case study based on technology acceptance model.

    Science.gov (United States)

    Tavakoli, Nahid; Jahanbakhsh, Maryam; Shahin, Arash; Mokhtari, Habibollah; Rafiei, Maryam

    2013-03-01

    Today, health information technologies are the basis of health services, and the Electronic Medical Record (EMR) is one of them. The purpose of this paper is to investigate the Technology Acceptance Model (TAM) for the EMR at the Central Polyclinic of the Oil Industry, a pioneer in the implementation of EMR in Isfahan. This was an applied and analytical survey conducted at the Central Polyclinic of the Oil Industry. Because the statistical population was limited, a census was conducted and the sample was equal to the population. The data were collected by a researcher-made questionnaire that was validated by experts, and its reliability was confirmed by test-retest. The questionnaire covered 5 scopes: external factors (data quality and user interface), perceived usefulness, perceived ease of use, attitude toward using, and behavioral intention to use. The results were analyzed using SPSS. There was a significant relationship between data quality and perceived usefulness (PU) (r = 0.295, p = 0.005). The survey of the scopes in the polyclinic showed that there are relationships among user interface, perceived usefulness, perceived ease of use, attitude toward using, and behavioral intention to use, but data quality has no relationship with attitude. It seems the system designers did not consider data quality characteristics. It is proposed that they consult health information management professionals to improve the existing system.

  7. A study on the energy management in domestic micro-grids based on Model Predictive Control strategies

    International Nuclear Information System (INIS)

    Bruni, G.; Cordiner, S.; Mulone, V.; Rocco, V.; Spagnolo, F.

    2015-01-01

    Highlights: • Development of a domestic microgrid and house thermal model. • Model Predictive Control for simultaneous management of power flow and thermal comfort. • Modeling of typical summer and winter conditions. • Comparison with standard rule-based controller results. • Potential to downsize the fuel cell output by up to 60%. - Abstract: In this paper a Model Predictive Control (MPC) logic, based on weather forecasts, has been applied to the analysis of power management in a domestic off-grid system. The system is laid out as the integration of renewable energy conversion devices (Photovoltaic, PV), a high-efficiency programmable energy conversion system (a Fuel Cell, FC) and an electrochemical energy storage (batteries). The control strategy has the objective of minimizing energy costs while maintaining the optimal environmental comfort in the house, thus optimizing the use of renewable sources. To that aim, a validated numerical model of the whole system has been developed, and simulations have been carried out for winter and summer periods. The performance attainable with an MPC-based logic has been evaluated in comparison with a standard Rule Based Control logic, by means of cost and efficiency parameters of the micro-grid. Temperature violations have been taken into account to represent the impact of the control on comfort. Results show an improvement of the house comfort conditions and a lower use (on average 14.5%) of primary fossil energy. This is due both to a reduction of required energy and to an increased use of renewable energy sources. Moreover, the modulation of the HVAC load and of the FC operation gives a reduction of requested power by approximately 40%. Smoother battery pack charge and discharge processes are also obtained. As a main positive effect, a reduction of the FC powerplant size and an increase of its durability seem feasible, leading to an overall reduction of capital costs
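
    The study's MPC formulation is not reproduced in the record; the sketch below only illustrates the receding-horizon idea behind it: at each step a small optimization (here a linear program over fuel-cell output and battery charge/discharge) is solved for the forecast horizon and only the first-hour decision is applied. All device limits, efficiencies, costs and forecasts are hypothetical, and the thermal comfort/HVAC constraints are omitted for brevity.

```python
import numpy as np
from scipy.optimize import linprog

def mpc_step(load, pv, soc0, horizon=6, fc_max=1.0, batt_max=1.0,
             soc_cap=3.0, eta=0.95, fuel_cost=1.0):
    """One receding-horizon step: choose fuel-cell output and battery charge/discharge
    over the next `horizon` hours to cover the forecast net load at minimum fuel use.
    Decision variables per hour: [p_fc, p_ch, p_dis, soc]."""
    T = horizon
    n = 4 * T
    idx = lambda block, t: block * T + t           # block 0=p_fc, 1=p_ch, 2=p_dis, 3=soc

    c = np.zeros(n)
    c[0:T] = fuel_cost                             # minimize fuel-cell energy use

    A_eq, b_eq = [], []
    for t in range(T):
        # power balance: p_fc + p_dis - p_ch = load - pv
        row = np.zeros(n)
        row[idx(0, t)], row[idx(1, t)], row[idx(2, t)] = 1.0, -1.0, 1.0
        A_eq.append(row); b_eq.append(load[t] - pv[t])
        # battery state of charge: soc[t] - soc[t-1] = eta*p_ch - p_dis/eta
        row = np.zeros(n)
        row[idx(3, t)] = 1.0
        row[idx(1, t)] = -eta
        row[idx(2, t)] = 1.0 / eta
        rhs = 0.0
        if t == 0:
            rhs = soc0
        else:
            row[idx(3, t - 1)] = -1.0
        A_eq.append(row); b_eq.append(rhs)

    bounds = ([(0, fc_max)] * T + [(0, batt_max)] * T +
              [(0, batt_max)] * T + [(0, soc_cap)] * T)
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds, method="highs")
    assert res.success, res.message
    return res.x[idx(0, 0)], res.x[idx(1, 0)] - res.x[idx(2, 0)]  # apply first-hour decision only

# Hypothetical 6-hour forecasts (kW): evening load ramp with fading PV
load = [0.4, 0.6, 0.9, 1.1, 1.0, 0.8]
pv   = [0.8, 0.5, 0.2, 0.0, 0.0, 0.0]
p_fc, p_batt = mpc_step(load, pv, soc0=1.5)
print(f"first-hour decision: fuel cell {p_fc:.2f} kW, battery net charge {p_batt:.2f} kW")
```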

  8. Model-Based Motion Tracking of Infants

    DEFF Research Database (Denmark)

    Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo

    2014-01-01

    Even though motion tracking is a widely used technique to analyze and measure human movements, only a few studies focus on motion tracking of infants. In recent years, a number of studies have emerged focusing on analyzing the motion pattern of infants, using computer vision. Most of these studies...... are based on 2D images, but few are based on 3D information. In this paper, we present a model-based approach for tracking infants in 3D. The study extends a novel study on graph-based motion tracking of infants and we show that the extension improves the tracking results. A 3D model is constructed...

  9. The Cognitive Processes Underlying Event-Based Prospective Memory In School Age Children and Young Adults: A Formal Model-Based Study

    OpenAIRE

    Smith, Rebekah E.; Bayen, Ute Johanna; Martin, Claudia

    2010-01-01

    Fifty 7-year-olds (29 female), 53 10-year-olds (29 female), and 36 young adults (19 female), performed a computerized event-based prospective memory task. All three groups differed significantly in prospective memory performance with adults showing the best performance and 7-year-olds the poorest performance. We used a formal multinomial process tree model of event-based prospective memory to decompose age differences in cognitive processes that jointly contribute to prospective memory perfor...

  10. Structured model Consumer-based Brand Equity based on Promotional-mix elements(Case Study: Food Active Industries of Tehran)

    OpenAIRE

    Mehran Rezvani; Siran Mehrnia

    2014-01-01

    Abstract This paper aims to examine the relationships among Promotional-mix elements and Customer-based Brand Equity. A model is then developed to examine these relationships in the food industries of Tehran. The sample size is 240. Data were collected by a designed questionnaire and analyzed using LISREL and the SEM method. The test results show that four dimensions of brand equity (brand awareness, perceived quality, and br...

  11. A Novel Approach for Assessing the Performance of Sustainable Urbanization Based on Structural Equation Modeling: A China Case Study

    Directory of Open Access Journals (Sweden)

    Liudan Jiao

    2016-09-01

    Full Text Available The rapid urbanization process has brought problems to China, such as traffic congestion, air pollution, water pollution and resource scarcity. Sustainable urbanization is commonly appreciated as an effective way to promote sustainable development. A proper understanding of sustainable urbanization performance is critical to provide governments with support in making urban development strategies and policies for guiding sustainable development. This paper utilizes the method of structural equation modeling (SEM) to establish an assessment model for measuring sustainable urbanization performance. Four unobserved endogenous variables (economic, social, environment and resource variables) and 21 observed endogenous variables comprise the SEM model. A case study of the 31 provinces in China demonstrates the validity of the SEM model, and the analysis results indicate that the assessment model could help make more effective policies and strategies for improving urban sustainability by recognizing the status of sustainable urbanization.

  12. A text-based data mining and toxicity prediction modeling system for a clinical decision support in radiation oncology: A preliminary study

    Science.gov (United States)

    Kim, Kwang Hyeon; Lee, Suk; Shim, Jang Bo; Chang, Kyung Hwan; Yang, Dae Sik; Yoon, Won Sup; Park, Young Je; Kim, Chul Yong; Cao, Yuan Jie

    2017-08-01

    The aim of this preliminary study is an integrated text-based data mining and toxicity prediction modeling system for clinical decision support based on big data in radiation oncology. The structured data were prepared from treatment plans, and the unstructured data were extracted by image pattern recognition of dose-volume data for prostate cancer from research articles crawled from the internet. We modeled an artificial neural network to build a predictor system for the toxicity of organs at risk. We used a text-based data mining approach to build the artificial neural network model for bladder and rectum complication predictions. The pattern recognition method was used to mine the unstructured dose-volume toxicity data with a detection accuracy of 97.9%. The confusion matrix and training model of the neural network were obtained with 50 modeled plans (n = 50) for validation. The toxicity level was analyzed and the risk factors for 25% bladder, 50% bladder, 20% rectum, and 50% rectum were calculated by the artificial neural network algorithm. As a result, among the 50 modeled plans, 32 plans could cause complications while 18 plans were classified as non-complication. We integrated data mining and a toxicity modeling method for toxicity prediction using prostate cancer cases. It is shown that a preprocessing analysis using text-based data mining and prediction modeling can be expanded to personalized patient treatment decision support based on big data.
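
    A minimal sketch of the kind of neural-network complication classifier and confusion-matrix validation described above, built with scikit-learn on synthetic dose-volume features; the feature names, labelling rule and data are invented, so the resulting matrix is illustrative only.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic dose-volume features (stand-ins for bladder/rectum dose-volume metrics);
# the binary label follows an arbitrary rule just to exercise the classifier.
rng = np.random.default_rng(6)
n = 400
X = rng.uniform(0, 100, size=(n, 4))
y = ((0.4 * X[:, 1] + 0.6 * X[:, 3] + rng.normal(0, 5, n)) > 50).astype(int)  # 1 = complication

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)).fit(X_tr, y_tr)
print("confusion matrix (rows = true, cols = predicted):")
print(confusion_matrix(y_te, clf.predict(X_te)))
```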

  13. GIS-based groundwater vulnerability modelling: A case study of the Witbank, Ermelo and Highveld Coalfields in South Africa

    Science.gov (United States)

    Sakala, E.; Fourie, F.; Gomo, M.; Coetzee, H.

    2018-01-01

    In the last 20 years, the popular mineral systems approach has been used successfully for the exploration of various mineral commodities at various scales owing to its scientific soundness, cost effectiveness and simplicity in mapping the critical processes required for the formation of deposits. In the present study this approach was modified for the assessment of groundwater vulnerability. In terms of the modified approach, water drives the pollution migration processes, with various analogies having been derived from the mineral systems approach. The modified approach is illustrated here by the discussion of a case study of acid mine drainage (AMD) pollution in the Witbank, Ermelo and Highveld coalfields of the Mpumalanga and KwaZulu-Natal Provinces in South Africa. Many AMD cases have been reported in these provinces in recent years and are a cause of concern for local municipalities, mining and environmental agencies. In the Witbank, Ermelo and Highveld coalfields, several areas have been mined out while mining has not yet started in others, hence the need to identify groundwater regions prone to AMD pollution in order to avoid further impacts on the groundwater resources. A knowledge-based fuzzy expert system was built using vulnerability factors (energy sources, ligands sources, pollutant sources, transportation pathways and traps) to generate a groundwater vulnerability model of the coalfields. Highly vulnerable areas were identified in Witbank coalfield and the eastern part of the Ermelo coalfield which are characterised by the presence of AMD sources, good subsurface transport coupled with poor AMD pollution trapping properties. The results from the analysis indicate significant correlations between model values and both groundwater sulphate concentrations as well as pH. This shows that the proposed approach can indeed be used as an alternative to traditional methods of groundwater vulnerability assessment. The methodology only considers the AMD pollution

  14. Lévy-based growth models

    DEFF Research Database (Denmark)

    Jónsdóttir, Kristjana Ýr; Schmiegel, Jürgen; Jensen, Eva Bjørn Vedel

    2008-01-01

    In the present paper, we give a condensed review, for the nonspecialist reader, of a new modelling framework for spatio-temporal processes, based on Lévy theory. We show the potential of the approach in stochastic geometry and spatial statistics by studying Lévy-based growth modelling of planar...... objects. The growth models considered are spatio-temporal stochastic processes on the circle. As a by-product, flexible new models for space–time covariance functions on the circle are provided. An application of the Lévy-based growth models to tumour growth is discussed....

  15. Studying the Representation Accuracy of the Earth's Gravity Field in the Polar Regions Based on the Global Geopotential Models

    Science.gov (United States)

    Koneshov, V. N.; Nepoklonov, V. B.

    2018-05-01

    The development of studies on estimating the accuracy of the Earth's modern global gravity models in terms of the spherical harmonics of the geopotential in the problematic regions of the world is discussed. The comparative analysis of the results of reconstructing quasi-geoid heights and gravity anomalies from the different models is carried out for two polar regions selected within a radius of 1000 km from the North and South poles. The analysis covers nine recently developed models, including six high-resolution models and three lower order models, including the Russian GAOP2012 model. It is shown that the modern models determine the quasi-geoid heights and gravity anomalies in the polar regions with errors of 5 to 10 to a few dozen cm and from 3 to 5 to a few dozen mGal, respectively, depending on the resolution. The accuracy of the models in the Arctic is several times higher than in the Antarctic. This is associated with the peculiarities of gravity anomalies in every particular region and with the fact that the polar part of the Antarctic has been comparatively less explored by the gravity methods than the polar Arctic.

  16. Geomorphology-based unit hydrograph models for flood risk management: case study in Brazilian watersheds with contrasting physiographic characteristics

    Directory of Open Access Journals (Sweden)

    SAMUEL BESKOW

    2018-05-01

    Full Text Available ABSTRACT Heavy rainfall, in conjunction with an increase in population and the intensification of agricultural activities, has resulted in countless problems related to flooding in watersheds. Among the techniques available for direct surface runoff (DSR) modeling and flood risk management are the Unit Hydrograph (UH) and the Instantaneous Unit Hydrograph (IUH). This study evaluates the predictive capability of two conceptual IUH models (Nash and Clark), considering their original (NIUH and CIUH) and geomorphological (NIUHGEO and CIUHGEO) approaches, and their advantages over two traditional synthetic UH models - Triangular (TUH) and Dimensionless (DUH) - to estimate DSR hydrographs, taking as reference two Brazilian watersheds with contrasting geomorphological and climatic characteristics. The main results and conclusions were: (i) the differences in physiographical characteristics between the watersheds, especially the parameters associated with soil, and the dominant rainfall patterns in each watershed had an influence on flood modeling; (ii) CIUH was the most satisfactory model for both watersheds, followed by NIUH, and both models were substantially superior to the synthetic models traditionally employed; and (iii) although the geomorphological IUH approaches performed slightly better than TUH and DUH, they should not be considered standard tools for flood modeling in these watersheds.
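
    For readers unfamiliar with the Nash IUH, the following sketch shows the basic idea of convolving an IUH with effective rainfall to obtain a DSR hydrograph; the n and k values are illustrative, not calibrated parameters from the paper:

```python
# Minimal sketch of a Nash IUH and its convolution with effective rainfall.
# n (number of linear reservoirs) and k (storage coefficient, h) are assumed
# illustrative values, not calibrated parameters from the study.
import numpy as np
from math import gamma

def nash_iuh(t, n=3.0, k=2.0):
    """Nash instantaneous unit hydrograph ordinates (1/h)."""
    t = np.asarray(t, dtype=float)
    return (1.0 / (k * gamma(n))) * (t / k) ** (n - 1.0) * np.exp(-t / k)

dt = 1.0                                   # time step (h)
t = np.arange(0.0, 48.0, dt)
u = nash_iuh(t)                            # IUH ordinates

excess_rain = np.array([0.0, 5.0, 12.0, 4.0, 0.0])   # hypothetical effective rainfall (mm/h)
dsr = np.convolve(excess_rain, u) * dt                # direct surface runoff response
print(dsr[:10].round(3))
```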

  17. A simplified physically-based breach model for a high concrete-faced rockfill dam: A case study

    OpenAIRE

    Qi-ming Zhong; Sheng-shui Chen; Zhao Deng

    2018-01-01

    A simplified physically-based model was developed to simulate the breaching process of the Gouhou concrete-faced rockfill dam (CFRD), which is the only breach case of a high CFRD in the world. Considering the dam height, a hydraulic method was chosen to simulate the initial scour position on the downstream slope, with the steepening of the downstream slope taken into account; a headcut erosion formula was adopted to simulate the backward erosion as well. The moment equilibrium method was util...

  18. Molecular dynamics simulation study of PTP1B with allosteric inhibitor and its application in receptor based pharmacophore modeling

    Science.gov (United States)

    Bharatham, Kavitha; Bharatham, Nagakumar; Kwon, Yong Jung; Lee, Keun Woo

    2008-12-01

    Allosteric inhibition of protein tyrosine phosphatase 1B (PTP1B) has paved a new path for designing specific inhibitors of PTP1B, an important drug target for the treatment of type II diabetes and obesity. The crystal structure of the PTP1B(1-282)-allosteric inhibitor complex lacks the α7 helix (residues 287-298), and no 3D structure of PTP1B(1-298) in the open form is available. As the interaction between the α7 and α6-α3 helices plays a crucial role in allosteric inhibition, α7 was modeled onto PTP1B(1-282) in the open form complexed with an allosteric inhibitor (compound-2), and a 5 ns MD simulation was performed to investigate the relative orientation of the α7-α6-α3 helices. The conformational space of the simulation was statistically sampled by clustering analyses. This approach helped reveal certain clues about PTP1B allosteric inhibition. The simulation was also utilized to generate receptor-based pharmacophore models that include the conformational flexibility of the protein-inhibitor complex. Representative structures of the three most highly populated clusters were selected for pharmacophore model generation. The three pharmacophore models were subsequently used to screen databases and retrieve molecules containing features that complement the allosteric site. The retrieved hits were filtered based on certain drug-like properties, and molecular docking simulations were performed in two different conformations of the protein. Thus, performing an MD simulation with α7 to investigate the changes at the allosteric site, then developing receptor-based pharmacophore models and finally docking the retrieved hits into two distinct conformations is a reliable methodology for identifying PTP1B allosteric inhibitors.
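
    A hedged sketch of one step in such a workflow, selecting cluster-representative frames once a pairwise RMSD matrix of trajectory snapshots is available; the matrix below is random placeholder data and the clustering choices are assumptions, not the paper's protocol:

```python
# Pick cluster-representative frames from an MD trajectory given a precomputed
# pairwise RMSD matrix (how the RMSDs are computed is outside this snippet;
# the matrix here is a random symmetric placeholder).
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_frames = 50
d = rng.random((n_frames, n_frames))
rmsd = (d + d.T) / 2.0
np.fill_diagonal(rmsd, 0.0)                       # symmetric "RMSD" matrix

Z = linkage(squareform(rmsd), method="average")   # hierarchical clustering
labels = fcluster(Z, t=3, criterion="maxclust")   # request 3 clusters

representatives = {}
for c in np.unique(labels):
    members = np.where(labels == c)[0]
    # representative = frame with the smallest mean RMSD to other cluster members
    mean_d = rmsd[np.ix_(members, members)].mean(axis=1)
    representatives[int(c)] = int(members[mean_d.argmin()])

print(representatives)   # frame indices to carry forward to pharmacophore generation
```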

  19. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The general event concept can be used to guide systems analysis and design and to improve modeling approaches....

  20. Dual Transformer Model based on Standard Circuit Elements for the Study of Low- and Mid-frequency Transients

    Science.gov (United States)

    Jazebi, Saeed

    This thesis is a step forward toward achieving the final objective of creating a fully dual model for transformers, including eddy currents and nonlinearities of the iron core, using the fundamental electrical components already available in EMTP-type programs. The model is effective for the study of the performance of transformers during power system transients. This is very important for transformer designers, because the insulation of transformers is determined by the overvoltages caused by lightning or switching operations. There are also internally induced transients that occur when a switch is actuated, for example switching actions for the reconfiguration of distribution systems, which offer economic advantages, or protective actions to clear faults and large short-circuit currents. Many of the smart grid concepts currently under development by utilities rely heavily on switching to optimize resources, which produces transients in the system. On the other hand, inrush currents produce mechanical forces which deform transformer windings and cause malfunction of the differential protection. Transformer performance under ferroresonance and geomagnetically induced currents also needs to be studied. In this thesis, a physically consistent dual model applicable to single-phase two-winding transformers is proposed. First, the topology of a dual electrical equivalent circuit is obtained from the direct application of the principle of duality. Then, the model parameters are computed considering the variations of the transformer's electromagnetic behavior under various operating conditions. Current modeling techniques use different topological models to represent diverse transient situations. The reversible model proposed in this thesis unifies the terminal and topological equivalent circuits. The model remains invariable for all low-frequency transients, including deep saturation conditions driven from either of the two windings. The very high saturation region of the

  1. Study on quantitative risk assessment model of the third party damage for natural gas pipelines based on fuzzy comprehensive assessment

    International Nuclear Information System (INIS)

    Qiu, Zeyang; Liang, Wei; Lin, Yang; Zhang, Meng; Wang, Xue

    2017-01-01

    As an important part of the national energy supply system, transmission pipelines for natural gas can cause serious environmental pollution and loss of life and property in case of accident. Third party damage is one of the most significant causes of natural gas pipeline system accidents, and it is very important to establish an effective quantitative risk assessment model of third party damage in order to reduce the number of gas pipeline operation accidents. Because third party damage accidents have characteristics such as diversity, complexity and uncertainty, this paper establishes a quantitative risk assessment model of third party damage based on the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE). Firstly, the risk sources of third party damage are identified exactly; the weights of the factors are then determined via an improved AHP; finally, the importance of each factor is calculated with the fuzzy comprehensive evaluation model. The results show that the quantitative risk assessment model is suitable for third party damage to natural gas pipelines, and improvement measures can be put forward to avoid accidents based on the importance of each factor. (paper)
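
    A minimal sketch of the generic AHP-plus-FCE calculation such a model rests on, with an invented pairwise comparison matrix and membership matrix rather than the paper's data (factor names and numbers are illustrative assumptions):

```python
# Sketch of AHP weighting plus fuzzy comprehensive evaluation with made-up numbers.
import numpy as np

# Pairwise comparison matrix for three hypothetical third-party-damage factors
# (e.g. excavation activity, pipeline marking, patrol frequency).
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
idx = np.argmax(eigvals.real)
weights = eigvecs[:, idx].real
weights = weights / weights.sum()                   # AHP priority vector

# Consistency check (CR < 0.1 is the usual acceptance threshold; RI = 0.58 for 3x3).
lam_max = eigvals.real[idx]
CI = (lam_max - len(A)) / (len(A) - 1)
CR = CI / 0.58
print("weights:", weights.round(3), "CR:", round(CR, 3))

# Fuzzy comprehensive evaluation: rows = factors, columns = risk grades
# (low, medium, high); each row is a membership distribution.
R = np.array([[0.1, 0.3, 0.6],
              [0.4, 0.4, 0.2],
              [0.6, 0.3, 0.1]])
B = weights @ R                                     # overall membership in each risk grade
print("risk grade membership:", B.round(3))
```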

  2. An Empirical Study of Wrappers for Feature Subset Selection based on a Parallel Genetic Algorithm: The Multi-Wrapper Model

    KAUST Repository

    Soufan, Othman

    2012-09-01

    Feature selection is the first task of any learning approach applied in major fields such as biomedicine, bioinformatics, robotics, natural language processing and social networking. In the feature subset selection problem, a search methodology with a proper criterion seeks to find the best subset of features describing data (relevance) and achieving better performance (optimality). Wrapper approaches are feature selection methods which are wrapped around a classification algorithm and use a performance measure to select the best subset of features. We analyze the proper design of the objective function for the wrapper approach and highlight an objective based on several classification algorithms. We compare the wrapper approaches to different feature selection methods based on distance and information based criteria. Significant improvement in performance, computational time, and selection of minimally sized feature subsets is achieved by combining different objectives for the wrapper model. In addition, considering various classification methods in the feature selection process could lead to a global solution of desirable characteristics.
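
    The sketch below illustrates the basic wrapper idea with a simple, single-threaded genetic algorithm; the classifier, dataset, GA settings and subset-size penalty are all illustrative assumptions, and the parallel multi-wrapper design of the paper is not reproduced:

```python
# Simplified sketch of a GA-driven wrapper feature selector.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=0)

def fitness(mask):
    """Wrapper objective: CV accuracy of the classifier on the selected subset."""
    if mask.sum() == 0:
        return 0.0
    score = cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()
    return score - 0.01 * mask.sum() / mask.size     # small penalty favouring small subsets

pop = rng.integers(0, 2, size=(20, X.shape[1]))      # random initial population of feature masks
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]           # keep the better half
    children = []
    for _ in range(10):                               # uniform crossover + bit-flip mutation
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)
        flip = rng.random(X.shape[1]) < 0.05
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.where(best == 1)[0])
```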

  3. HMM-based Trust Model

    DEFF Research Database (Denmark)

    ElSalamouny, Ehab; Nielsen, Mogens; Sassone, Vladimiro

    2010-01-01

    Probabilistic trust has been adopted as an approach to taking security sensitive decisions in modern global computing environments. Existing probabilistic trust frameworks either assume fixed behaviour for the principals or incorporate the notion of ‘decay' as an ad hoc approach to cope with their dynamic behaviour. Using Hidden Markov Models (HMMs) for both modelling and approximating the behaviours of principals, we introduce the HMM-based trust model as a new approach to evaluating trust in systems exhibiting dynamic behaviour. This model avoids the fixed behaviour assumption which is considered the major limitation of the existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it against the well known Beta trust model with the decay principle in terms of the estimation precision....
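
    To make the core idea concrete, the sketch below scores a sequence of observed interaction outcomes under a two-state behaviour HMM via the standard forward algorithm; the HMM parameters are illustrative, not taken from the paper:

```python
# Forward-algorithm scoring of an outcome sequence under a two-state behaviour HMM
# (0 = unsatisfactory interaction, 1 = satisfactory). Parameters are illustrative.
import numpy as np

pi = np.array([0.6, 0.4])                  # initial distribution over hidden behaviour states
A  = np.array([[0.9, 0.1],                 # transition matrix (state 0 = "cooperative")
               [0.2, 0.8]])                # (state 1 = "unreliable")
B  = np.array([[0.1, 0.9],                 # P(observation | state): columns = outcome 0/1
               [0.7, 0.3]])

def forward_loglik(obs):
    """Log-likelihood of an outcome sequence under the HMM (scaled forward pass)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

history = [1, 1, 1, 0, 1, 0, 0, 0]         # a principal that starts well then degrades
print(forward_loglik(history))             # compare against alternative behaviour models to rank trust
```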

  4. Modeling the shear rate and pressure drop in a hydrodynamic cavitation reactor with experimental validation based on KI decomposition studies.

    Science.gov (United States)

    Badve, Mandar P; Alpar, Tibor; Pandit, Aniruddha B; Gogate, Parag R; Csoka, Levente

    2015-01-01

    A mathematical model describing the shear rate and pressure variation in the complex flow field created in a hydrodynamic cavitation reactor (stator and rotor assembly) is presented in this study. The design of the reactor is such that the rotor is provided with surface indentations, and cavitational events are expected to occur on the surface of the rotor as well as within the indentations. The flow characteristics of the fluid have been investigated on the basis of high-accuracy compact difference schemes and the Navier-Stokes method. The evolution of streamlining structures during rotation, the pressure field and the shear rate of a Newtonian fluid flow have been numerically established. The simulation results suggest that the characteristics of the shear rate and pressure area are quite different depending on the magnitude of the rotation velocity of the rotor. It was observed that the area of the high shear zone at the indentation leading edge shrinks with an increase in the rotational speed of the rotor, although the magnitude of the shear rate increases linearly. It is therefore concluded that higher rotational speeds of the rotor tend to stabilize the flow, which in turn results in less cavitational activity compared to that observed around 2200-2500 RPM. Experiments were carried out with an initial KI concentration of 2000 ppm. A maximum of 50 ppm of liberated iodine was observed at 2200 RPM. Experimental as well as simulation results indicate that the maximum cavitational activity occurs when the rotation speed is around 2200-2500 RPM. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. The Basic Immune Simulator: An agent-based model to study the interactions between innate and adaptive immunity

    Directory of Open Access Journals (Sweden)

    Orosz Charles G

    2007-09-01

    Full Text Available Abstract Background We introduce the Basic Immune Simulator (BIS), an agent-based model created to study the interactions between the cells of the innate and adaptive immune system. Innate immunity, the initial host response to a pathogen, generally precedes adaptive immunity, which generates immune memory for an antigen. The BIS simulates basic cell types, mediators and antibodies, and consists of three virtual spaces representing parenchymal tissue, secondary lymphoid tissue and the lymphatic/humoral circulation. The BIS includes a Graphical User Interface (GUI) to facilitate its use as an educational and research tool. Results The BIS was used to qualitatively examine the innate and adaptive interactions of the immune response to a viral infection. Calibration was accomplished via a parameter sweep of initial agent population size, and comparison of simulation patterns to those reported in the basic science literature. The BIS demonstrated that the degree of the initial innate response was a crucial determinant for an appropriate adaptive response. Deficiency or excess in innate immunity resulted in excessive proliferation of adaptive immune cells. Deficiency in any of the immune system components increased the probability of failure to clear the simulated viral infection. Conclusion The behavior of the BIS matches both normal and pathological behavior patterns in a generic viral infection scenario. Thus, the BIS effectively translates mechanistic cellular and molecular knowledge regarding the innate and adaptive immune response and reproduces the immune system's complex behavioral patterns. The BIS can be used both as an educational tool to demonstrate the emergence of these patterns and as a research tool to systematically identify potential targets for more effective treatment strategies for disease processes including hypersensitivity reactions (allergies, asthma), autoimmunity and cancer. We believe that the BIS can be a useful addition to

  6. Will building new reservoirs always help increase the water supply reliability? - insight from a modeling-based global study

    Science.gov (United States)

    Zhuang, Y.; Tian, F.; Yigzaw, W.; Hejazi, M. I.; Li, H. Y.; Turner, S. W. D.; Vernon, C. R.

    2017-12-01

    More and more reservoirs are being built or planned in order to help meet the increasing water demand all over the world. However, is building new reservoirs always helpful to water supply? To address this question, the river routing module of the Global Change Assessment Model (GCAM) has been extended with a simple yet physically based reservoir scheme accounting for irrigation, flood control and hydropower operations at each individual reservoir. The new GCAM river routing model has been applied over the global domain with runoff inputs from the Variable Infiltration Capacity model. The simulated streamflow is validated at 150 global river basins where observed streamflow data are available. The model performance is significantly improved at 77 basins and worsened at 35 basins. To facilitate the analysis of additional reservoir storage impacts at the basin level, a lumped version of the GCAM reservoir model has been developed, representing a single lumped reservoir at each river basin with the regulation capacity of all reservoirs combined. A Sequent Peak Analysis is used to estimate how much additional reservoir storage is required to satisfy the current water demand. For basins with a water deficit, the water supply reliability can be improved with additional storage. However, there is a threshold storage value at each basin beyond which the reliability stops increasing, suggesting that building new reservoirs will not further relieve the water stress. Findings from this research can be helpful for the future planning and management of new reservoirs.
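
    A basic sequent peak calculation of the storage needed to meet a constant demand looks like the sketch below; the inflow series and demand are invented, not values from the study:

```python
# Basic sequent peak analysis: additional storage needed to meet a constant
# demand from a monthly inflow series (all numbers are invented).
import numpy as np

inflow = np.array([120., 80., 60., 40., 30., 25., 20., 35., 60., 90., 110., 130.])  # e.g. 10^6 m3/month
demand = 65.0                                                                        # constant draft per month

deficit = 0.0
required_storage = 0.0
for q in inflow:
    deficit = max(0.0, deficit + demand - q)   # cumulative shortfall, reset when inflow refills it
    required_storage = max(required_storage, deficit)

print("storage required to meet demand:", required_storage)
```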

  7. Is computer aided detection (CAD) cost effective in screening mammography? A model based on the CADET II study

    Science.gov (United States)

    2011-01-01

    Background Single reading with computer aided detection (CAD) is an alternative to double reading for detecting cancer in screening mammograms. The aim of this study is to investigate whether the use of a single reader with CAD is more cost-effective than double reading. Methods Based on data from the CADET II study, the cost-effectiveness of single reading with CAD versus double reading was measured in terms of cost per cancer detected. The cost (Pound (£), year 2007/08) of single reading with CAD versus double reading was estimated assuming a health and social service perspective and a 7 year time horizon. As the equipment cost varies according to the unit size, a separate analysis was conducted for high, average and low volume screening units. One-way sensitivity analyses were performed by varying the reading time, equipment and assessment cost, recall rate and reader qualification. Results CAD is cost increasing for all sizes of screening unit. The introduction of CAD is cost-increasing compared to double reading because the cost of CAD equipment, staff training and the higher assessment cost associated with CAD are greater than the saving in reading costs. The introduction of single reading with CAD, in place of double reading, would produce an additional cost of £227 and £253 per 1,000 women screened in high and average volume units respectively. In low volume screening units, the high cost of purchasing the equipment will result in an additional cost of £590 per 1,000 women screened. One-way sensitivity analysis showed that the factors having the greatest effect on the cost-effectiveness of CAD with single reading compared with double reading were the reading time and the reader's professional qualification (radiologist versus advanced practitioner). Conclusions Without improvements in CAD effectiveness (e.g. a decrease in the recall rate) CAD is unlikely to be a cost-effective alternative to double reading for mammography screening in the UK. This study

  8. A comparative Study between GoldSim and AMBER Based Biosphere Assessment Models for an HLW Repository

    International Nuclear Information System (INIS)

    Lee, Youn-Myoung; Hwang, Yong-Soo

    2007-01-01

    To demonstrate the performance of a repository, the dose rate to human beings due to long-term nuclide releases from a high-level waste (HLW) repository should be evaluated and the results compared to the dose limit presented by the regulatory bodies. To evaluate such a dose rate to an individual, biosphere assessment models have been developed and implemented for practical calculation with the aid of commercial tools such as AMBER and GoldSim, both of which are capable of probabilistic and deterministic calculation. AMBER is a general purpose compartment modeling tool, and GoldSim is another multipurpose simulation tool for dynamically modeling complex systems, supporting a richer graphical user interface than AMBER and a postprocessing feature. Unlike AMBER, in which any kind of compartment scheme can be constructed rather simply with appropriate transition rates between compartments, GoldSim is designed around object-oriented modules that can be assembled into specialized programs, similar to solving a jigsaw puzzle. During the last couple of years, a compartment modeling approach for the biosphere has been carried out mainly with AMBER at KAERI in order to conservatively, or rather roughly, provide dose conversion factors for obtaining the final exposure rate due to a nuclide flux into the biosphere over various geosphere-biosphere interfaces (GBIs), calculated through nuclide transport modules. This created the need for a newly devised biosphere model that could be coupled to a nuclide transport model with less conservatism within the framework of developing a total system performance assessment modeling tool, which was successfully done with the aid of GoldSim. Therefore, in the current study, some comparison results of the AMBER and GoldSim approaches for the same biosphere modeling case, without any consideration of geosphere transport, are introduced by extending a previous study
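
    To illustrate the style of compartment model both tools implement, here is a toy first-order transfer scheme solved with an explicit Euler step; compartment names, transfer rates and time scales are invented placeholders, not values from either code:

```python
# Toy compartment model in the spirit of AMBER/GoldSim biosphere schemes:
# first-order transfers between compartments, integrated with a simple Euler step.
import numpy as np

compartments = ["deep soil", "top soil", "river water", "sediment"]
# transfer[i, j] = rate (1/yr) from compartment i to compartment j
transfer = np.array([[0.00, 0.05, 0.00, 0.00],
                     [0.01, 0.00, 0.02, 0.00],
                     [0.00, 0.00, 0.00, 0.10],
                     [0.00, 0.00, 0.01, 0.00]])

inventory = np.array([1.0, 0.0, 0.0, 0.0])     # initial activity (Bq), all in deep soil
dt, years = 0.1, 200
for _ in range(int(years / dt)):
    outflow = transfer.sum(axis=1) * inventory          # loss from each compartment
    inflow = transfer.T @ inventory                     # gain from upstream compartments
    inventory = inventory + dt * (inflow - outflow)

print(dict(zip(compartments, inventory.round(4))))      # activity distribution after 200 yr
```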

  9. APPLICATION OF THE MODEL CERNE FOR THE ESTABLISHMENT OF CRITERIA INCUBATION SELECTION IN TECHNOLOGY BASED BUSINESSES : A STUDY IN INCUBATORS OF TECHNOLOGICAL BASE OF THE COUNTRY

    Directory of Open Access Journals (Sweden)

    Clobert Jefferson Passoni

    2017-03-01

    Full Text Available Business incubators are a great source of encouragement for innovative projects, enabling the development of new technologies and providing infrastructure, advice and support, which are key elements for the success of new businesses. There are 154 technology-based firm incubators (TBFs) in Brazil, each with its own mechanism for selecting companies for incubation. Because of the different forms of incubator management, the business model CERNE - Reference Center for Support for New Projects - was created by Anprotec and Sebrae in order to standardize procedures and increase the chances of success of incubations. The objective of this study is to propose selection criteria for incubation, considering CERNE's five dimensions and aiming to support decision-making in the assessment of candidate companies at a TBF incubator. The research was conducted from the public calls of 20 TBF incubators, in which 38 selection criteria were identified and classified. Managers of TBF incubators rated the importance of 26 of these criteria via online questionnaires. As a result, favorable ratings were obtained for 25 of them; only one criterion differed from the others, receiving an unfavorable rating.

  10. Modeling Guru: Knowledge Base for NASA Modelers

    Science.gov (United States)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  11. Trajectories of Heroin Addiction: Growth Mixture Modeling Results Based on a 33-Year Follow-Up Study

    Science.gov (United States)

    Hser, Yih-Ing; Huang, David; Chou, Chih-Ping; Anglin, M. Douglas

    2007-01-01

    This study investigates trajectories of heroin use and subsequent consequences in a sample of 471 male heroin addicts who were admitted to the California Civil Addict Program in 1964-1965 and followed over 33 years. Applying a two-part growth mixture modeling strategy to heroin use level during the first 16 years of the addiction careers since…

  12. A Disability and Health Institutional Research Capacity Building and Infrastructure Model Evaluation: A Tribal College-Based Case Study

    Science.gov (United States)

    Moore, Corey L.; Manyibe, Edward O.; Sanders, Perry; Aref, Fariborz; Washington, Andre L.; Robertson, Cherjuan Y.

    2017-01-01

    Purpose: The purpose of this multimethod study was to evaluate the institutional research capacity building and infrastructure model (IRCBIM), an emerging innovative and integrated approach designed to build, strengthen, and sustain adequate disability and health research capacity (i.e., research infrastructure and investigators' research skills)…

  13. The interplay between subduction and lateral extrusion : A case study for the European Eastern Alps based on analogue models

    NARCIS (Netherlands)

    van Gelder, I. E.; Willingshofer, E.; Sokoutis, D.; Cloetingh, S. A.P.L.

    2017-01-01

    A series of analogue experiments simulating intra-continental subduction contemporaneous with lateral extrusion of the upper plate are performed to study the interference between these two processes at crustal levels and in the lithospheric mantle. The models demonstrate that intra-continental

  14. Structure-Based Turbulence Model

    National Research Council Canada - National Science Library

    Reynolds, W

    2000-01-01

    .... Maire carried out this work as part of his PhD research. During the award period we began to explore ways to simplify the structure-based modeling so that it could be used in repetitive engineering calculations...

  15. Developing a suitable model for supplier selection based on supply chain risks: an empirical study from Iranian pharmaceutical companies.

    Science.gov (United States)

    Mehralian, Gholamhossein; Rajabzadeh Gatari, Ali; Morakabati, Mohadese; Vatanpour, Hossein

    2012-01-01

    The supply chain represents the critical link between the development of new products and the market in the pharmaceutical industry. Over the years, improvements made in supply chain operations have focused largely on ways to reduce cost and gain efficiencies of scale. In addition, powerful regulatory and market forces have provided new incentives for pharmaceutical firms to fundamentally rethink the way they produce and distribute products, and also to re-imagine the role of the supply chain in driving strategic growth, brand differentiation and economic value in the health continuum. The purpose of this paper is to formulate the basic factors involved in risk analysis of the pharmaceutical industry, and also to determine the effective factors involved in supplier selection and their priorities. This paper is based on the results of a literature review, the acquisition of expert opinion, statistical analysis and the use of MADM models on data gathered from distributed questionnaires. The model consists of the following steps and components: first, the factors involved in supply chain risks are determined; based on them, a framework is constructed; then, according to the results of the statistical analysis and the MADM models, the risk factors are formulated. The paper determines the main components and influential factors involved in supply chain risks. Results showed that managing delivery risk can make an important contribution to mitigating the risk of the pharmaceutical industry.

  16. Study on unsteady tip leakage vortex cavitation in an axial-flow pump using an improved filter-based model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Desheng; Shi, Lei; Zhao, Ruijie; Shi, Weidong; Pan, Qiang [Jiangsu University, Zhenjiang (China); Esch, B. P. [Eindhoven University of Technology, Eindhoven (Netherlands)

    2017-02-15

    The aim of the present investigation is to simulate and analyze the tip leakage flow structure and the instantaneous evolution of tip vortex cavitation in a scaled axial-flow pump model. An improved filter-based turbulence model based on a density correction and a homogeneous cavitation model were used to carry out this work. The results show that when entering the tip clearance, the backward flow separates from the blade tip near the pressure side, resulting in the generation of a corner vortex with a high magnitude of turbulence kinetic energy. Then, at the exit of the tip clearance, the leakage jets re-attach on the blade tip wall. Moreover, the maximum swirling strength method was successfully employed to identify the tip leakage vortex (TLV) core and a counter-rotating induced vortex near the end-wall. The three-dimensional cavitation patterns and in-plane cavitation structures obtained by the improved numerical method agree well with the experimental results. At the sheet cavitation trailing edge in the tip region, the perpendicular cavitation cloud induced by the TLV sheds and migrates toward the pressure side of the neighboring blade. During its migration, it breaks down abruptly and generates a large number of small-scale cavities, leading to severe degradation of the pump performance, which is similar to the phenomenon observed by Tan et al.

  17. Designing an Agent-Based Model for Childhood Obesity Interventions: A Case Study of ChildObesity180.

    Science.gov (United States)

    Hennessy, Erin; Ornstein, Joseph T; Economos, Christina D; Herzog, Julia Bloom; Lynskey, Vanessa; Coffield, Edward; Hammond, Ross A

    2016-01-07

    Complex systems modeling can provide useful insights when designing and anticipating the impact of public health interventions. We developed an agent-based (or individual-based) computational model (ABM) to aid in evaluating and refining the implementation of behavior change interventions designed to increase physical activity and healthy eating and reduce unnecessary weight gain among school-aged children. The potential benefits of applying an ABM approach include estimating outcomes despite data gaps, anticipating impact among different populations or scenarios, and exploring how to expand or modify an intervention. The practical challenges inherent in implementing such an approach include data resources, data availability, and the skills and knowledge of ABM among the public health obesity intervention community. The aim of this article is to provide a step-by-step guide on how to develop an ABM to evaluate multifaceted interventions for childhood obesity prevention in multiple settings. We used data from 2 obesity prevention initiatives and public-use resources. The details and goals of the interventions, an overview of the model design process, and the generalizability of this approach for future interventions are discussed.

  18. Developing a Suitable Model for Supplier Selection Based on Supply Chain Risks: An Empirical Study from Iranian Pharmaceutical Companies

    Science.gov (United States)

    Mehralian, Gholamhossein; Rajabzadeh Gatari, Ali; Morakabati, Mohadese; Vatanpour, Hossein

    2012-01-01

    The supply chain represents the critical link between the development of new products and the market in the pharmaceutical industry. Over the years, improvements made in supply chain operations have focused largely on ways to reduce cost and gain efficiencies of scale. In addition, powerful regulatory and market forces have provided new incentives for pharmaceutical firms to fundamentally rethink the way they produce and distribute products, and also to re-imagine the role of the supply chain in driving strategic growth, brand differentiation and economic value in the health continuum. The purpose of this paper is to formulate the basic factors involved in risk analysis of the pharmaceutical industry, and also to determine the effective factors involved in supplier selection and their priorities. This paper is based on the results of a literature review, the acquisition of expert opinion, statistical analysis and the use of MADM models on data gathered from distributed questionnaires. The model consists of the following steps and components: first, the factors involved in supply chain risks are determined; based on them, a framework is constructed; then, according to the results of the statistical analysis and the MADM models, the risk factors are formulated. The paper determines the main components and influential factors involved in supply chain risks. Results showed that managing delivery risk can make an important contribution to mitigating the risk of the pharmaceutical industry. PMID:24250442

  19. Indicator-based model to assess vulnerability to landslides in urban areas. Case study of Husi city (Eastern Romania)

    Science.gov (United States)

    Grozavu, Adrian; Ciprian Margarint, Mihai; Catalin Stanga, Iulian

    2013-04-01

    In the last three or four decades, vulnerability has evolved from a notion of physical fragility to a more complex concept and is a key element of risk assessment. In landslide risk assessment, there is a large body of studies regarding landslide hazard, but far fewer studies focusing on vulnerability measurement. Furthermore, there is still no unitary understanding of the methodological framework, nor any internationally agreed standard for landslide vulnerability measurement. The omnipresent common element is the existence of elements at risk, but while some approaches are limited to exposure, others focus on the degree of losses (human injuries, material damages and monetary losses, structural dysfunctions etc.). These losses are differently assessed using both absolute and relative values on qualitative or quantitative scales, and they are differently integrated to provide a final vulnerability value. This study aims to assess vulnerability to landslides at the local level using an indicator-based model applied to urban areas and tested for Husi town (Eastern Romania). The study region is characterized by alternating permeable and impermeable sedimentary rocks, a monoclinal geological structure and hilly relief with impressive cuestas, a continental temperate climate, and precipitation of about 500 mm/year, rising to 700 mm and even more in some rainy years. The town is a middle-sized one (25,000 inhabitants) that has grown over the last centuries, accompanied by increasing human pressure on the land. Methodologically, the first step was to assess the landslide susceptibility and thereby identify those regions within which any asset would be exposed to landslide hazards. Landslide susceptibility was assessed using the logistic regression approach, taking into account several quantitative and qualitative factors (elements of geology, morphometry, rainfall, land use etc.). The spatial background consisted of the Digital Elevation Model and all derived

  20. Effectiveness of integrated care model for type 2 diabetes: A population-based study in Reggio Emilia (Italy).

    Science.gov (United States)

    Ballotari, Paola; Venturelli, Francesco; Manicardi, Valeria; Ferrari, Francesca; Vicentini, Massimo; Greci, Marina; Pignatti, Fabio; Storani, Simone; Giorgi Rossi, Paolo

    2018-01-01

    To compare the effectiveness of integrated care with that of the diabetes clinic care model in terms of mortality and hospitalisation of type 2 diabetes patients with low risk of complications. Out of 27234 people with type 2 diabetes residing in the province of Reggio Emilia on 31/12/2011, 3071 were included in this cohort study as eligible for integrated care (i.e., low risk of complications) and cared for with the same care model for at least two years. These patients were followed up from 2012 to 2016, for all-cause and diabetes-related mortality and hospital admissions. We performed a Poisson regression model, using the proportion of eligible patients included in the integrated care model for each general practitioner as an instrumental variable. 1700 patients were cared for by integrated care and 1371 by diabetes clinics. Mortality rate ratios were 0.83 (95%CI 0.60-1.13) and 0.95 (95%CI 0.54-1.70) for all-cause and cardiovascular mortality, respectively, and incidence rate ratios were 0.90 (95%CI 0.76-1.06) and 0.91 (95%CI 0.69-1.20) for all-cause and cardiovascular disease hospitalisation, respectively. For low risk patients with type 2 diabetes, the integrated care model involving both general practitioner and diabetes clinic professionals showed similar mortality and hospitalisation as a model with higher use of specialized care in an exclusively diabetes clinic setting.

  1. Burden of Six Healthcare-Associated Infections on European Population Health: Estimating Incidence-Based Disability-Adjusted Life Years through a Population Prevalence-Based Modelling Study.

    Directory of Open Access Journals (Sweden)

    Alessandro Cassini

    2016-10-01

    associated with the highest burden because of their high severity. The cumulative burden of the six HAIs was higher than the total burden of all other 32 communicable diseases included in the BCoDE 2009-2013 study. The main limitations of the study are the variability in the parameter estimates, in particular the disease models' case fatalities, and the use of the Rhame and Sudderth formula for estimating incident number of cases from prevalence data. We estimated the EU/EEA burden of HAIs in DALYs in 2011-2012 using a transparent and evidence-based approach that allows for combining estimates of morbidity and of mortality in order to compare with other diseases and to inform a comprehensive ranking suitable for prioritization. Our results highlight the high burden of HAIs and the need for increased efforts for their prevention and control. Furthermore, our model should allow for estimations of the potential benefit of preventive measures on the burden of HAIs in the EU/EEA.

  2. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2009-01-01

    The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...

  3. Small contribution of gold mines to the ongoing tuberculosis epidemic in South Africa: a modeling-based study.

    Science.gov (United States)

    Chang, Stewart T; Chihota, Violet N; Fielding, Katherine L; Grant, Alison D; Houben, Rein M; White, Richard G; Churchyard, Gavin J; Eckhoff, Philip A; Wagner, Bradley G

    2018-04-12

    Gold mines represent a potential hotspot for Mycobacterium tuberculosis (Mtb) transmission and may be exacerbating the tuberculosis (TB) epidemic in South Africa. However, the presence of multiple factors complicates estimation of the mining contribution to the TB burden in South Africa. We developed two models of TB in South Africa, a static risk model and an individual-based model that accounts for longer-term trends. Both models account for four populations - mine workers, peri-mining residents, labor-sending residents, and other residents of South Africa - including the size and prevalence of latent TB infection, active TB, and HIV of each population and mixing between populations. We calibrated to mine- and country-level data and used the static model to estimate force of infection (FOI) and new infections attributable to local residents in each community compared to other residents. Using the individual-based model, we simulated a counterfactual scenario to estimate the fraction of overall TB incidence in South Africa attributable to recent transmission in mines. We estimated that the majority of FOI in each community is attributable to local residents: 93.9% (95% confidence interval 92.4-95.1%), 91.5% (91.4-91.5%), and 94.7% (94.7-94.7%) in gold mining, peri-mining, and labor-sending communities, respectively. Assuming a higher rate of Mtb transmission in mines, 4.1% (2.6-5.8%), 5.0% (4.5-5.5%), and 9.0% (8.8-9.1%) of new infections in South Africa are attributable to gold mine workers, peri-mining residents, and labor-sending residents, respectively. Therefore, mine workers with TB disease, who constitute ~ 2.5% of the prevalent TB cases in South Africa, contribute 1.62 (1.04-2.30) times as many new infections as TB cases in South Africa on average. By modeling TB on a longer time scale, we estimate 63.0% (58.5-67.7%) of incident TB disease in gold mining communities to be attributable to recent transmission, of which 92.5% (92.1-92.9%) is attributable to

  4. Identifying appropriate reference data models for comparative effectiveness research (CER) studies based on data from clinical information systems.

    Science.gov (United States)

    Ogunyemi, Omolola I; Meeker, Daniella; Kim, Hyeon-Eui; Ashish, Naveen; Farzaneh, Seena; Boxwala, Aziz

    2013-08-01

    The need for a common format for electronic exchange of clinical data prompted federal endorsement of applicable standards. However, despite obvious similarities, a consensus standard has not yet been selected in the comparative effectiveness research (CER) community. Using qualitative metrics for data retrieval and information loss across a variety of CER topic areas, we compare several existing models from a representative sample of organizations associated with clinical research: the Observational Medical Outcomes Partnership (OMOP), Biomedical Research Integrated Domain Group, the Clinical Data Interchange Standards Consortium, and the US Food and Drug Administration. While the models examined captured a majority of the data elements that are useful for CER studies, data elements related to insurance benefit design and plans were most detailed in OMOP's CDM version 4.0. Standardized vocabularies that facilitate semantic interoperability were included in the OMOP and US Food and Drug Administration Mini-Sentinel data models, but are left to the discretion of the end-user in Biomedical Research Integrated Domain Group and Analysis Data Model, limiting reuse opportunities. Among the challenges we encountered was the need to model data specific to a local setting. This was handled by extending the standard data models. We found that the Common Data Model from the OMOP met the broadest complement of CER objectives. Minimal information loss occurred in mapping data from institution-specific data warehouses onto the data models from the standards we assessed. However, to support certain scenarios, we found a need to enhance existing data dictionaries with local, institution-specific information.

  5. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp 46-54.

  6. Comparison Study on Two Model-Based Adaptive Algorithms for SOC Estimation of Lithium-Ion Batteries in Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Yong Tian

    2014-12-01

    Full Text Available State of charge (SOC) estimation is essential to battery management systems in electric vehicles (EVs) to ensure the safe operation of batteries and to provide drivers with the remaining range of the EV. A number of estimation algorithms have been developed to obtain an accurate SOC value, because the SOC cannot be directly measured with sensors and is closely related to various factors, such as ambient temperature, current rate and battery aging. In this paper, two model-based adaptive algorithms, the adaptive unscented Kalman filter (AUKF) and the adaptive sliding mode observer (ASMO), are applied and compared in terms of convergence behavior, tracking accuracy, computational cost and estimation robustness against parameter uncertainties of the battery model in SOC estimation. Two typical driving cycles, the Dynamic Stress Test (DST) and the New European Driving Cycle (NEDC), are applied to evaluate the performance of the two algorithms. Comparison results show that the AUKF has merits in convergence ability and tracking accuracy with an accurate battery model, while the ASMO has lower computational cost and better estimation robustness against parameter uncertainties of the battery model.
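
    The sketch below conveys the general idea of model-based SOC estimation with a greatly simplified scalar Kalman filter (coulomb counting as the process model, a linearised OCV curve as the measurement model); it does not reproduce the paper's AUKF or ASMO algorithms, and all battery parameters are illustrative:

```python
# Simplified scalar Kalman-filter sketch of model-based SOC estimation.
import numpy as np

capacity_As = 2.3 * 3600          # cell capacity (A*s)
a, b = 0.8, 3.2                   # linearised OCV(SOC) = a*SOC + b (V)
R0 = 0.05                         # ohmic resistance (ohm)
Q, Rm = 1e-7, 1e-3                # process / measurement noise variances

rng = np.random.default_rng(1)
true_soc, soc_est, P = 0.9, 0.5, 0.1      # start from a deliberately wrong estimate
dt, current = 1.0, 2.3                    # 1 C discharge, 1 s steps

for k in range(1500):
    true_soc -= current * dt / capacity_As
    v_meas = a * true_soc + b - R0 * current + rng.normal(0, 0.01)   # noisy terminal voltage

    # predict (coulomb counting)
    soc_est -= current * dt / capacity_As
    P += Q
    # update with the voltage measurement
    K = P * a / (a * P * a + Rm)
    soc_est += K * (v_meas - (a * soc_est + b - R0 * current))
    P = (1 - K * a) * P

print(round(true_soc, 3), round(soc_est, 3))   # estimate converges toward the true SOC
```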

  7. Research on Scenic Spot’s Sustainable Development Based on a SD Model: A Case Study of the Jiuzhai Valley

    Directory of Open Access Journals (Sweden)

    Zhixue Liao

    2014-07-01

    Full Text Available In the field of tourism, the development of tourist attractions plays an increasingly crucial role in the tourism economy, the regional economy and the national economy. However, the eco-environment has been damaged as the tourism industry has developed rapidly. Thus, resolving the contradiction between tourism development and eco-environment protection is the key to achieving sustainable development of tourism. This paper builds a system dynamics (SD) model, based on an analysis of the economic subsystem and the environment subsystem, to promote sustainable development. In order to show the effectiveness of the model, Jiuzhai Valley is taken as the research object, and a decision basis is provided for the path adjustment of sustainable development in tourist scenic spots.

  8. Modeling Typhoon Event-Induced Landslides Using GIS-Based Logistic Regression: A Case Study of Alishan Forestry Railway, Taiwan

    Directory of Open Access Journals (Sweden)

    Sheng-Chuan Chen

    2013-01-01

    Full Text Available This study develops a model for evaluating the hazard level of landslides at the Alishan Forestry Railway, Taiwan, by using logistic regression with the assistance of a geographic information system (GIS). A typhoon event-induced landslide inventory, independent variables, and a triggering factor were used to build the model. Environmental factors such as bedrock lithology from the geology database; topographic aspect, terrain roughness, profile curvature, and distance to river from the topographic database; and the vegetation index value from SPOT 4 satellite images were used as variables that influence landslide occurrence. The area under the curve (AUC) of a receiver operating characteristic (ROC) curve was used to validate the model. The effects of parameters on landslide occurrence were assessed from the corresponding coefficients in the logistic regression function. Thereafter, the model was applied to predict the probability of landslides for rainfall data of different return periods. Using the predicted probability map, the study area was classified into four ranks of landslide susceptibility: low, medium, high, and very high. As a result, most high susceptibility areas are located in the western portion of the study area. Several train stations and railways are located on sites with a high susceptibility ranking.
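
    A minimal sketch of this kind of susceptibility workflow with scikit-learn; in real use the feature table is extracted per grid cell from GIS layers and the event inventory, whereas the data below are random placeholders (so the printed AUC is not meaningful):

```python
# Logistic-regression landslide susceptibility sketch on placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_cells = 2000
X = np.column_stack([
    rng.uniform(0, 360, n_cells),     # aspect (deg)
    rng.uniform(0, 1, n_cells),       # terrain roughness index
    rng.normal(0, 1, n_cells),        # profile curvature
    rng.uniform(0, 500, n_cells),     # distance to river (m)
    rng.uniform(0, 0.8, n_cells),     # vegetation index
])
y = rng.integers(0, 2, n_cells)       # 1 = landslide cell in the event inventory

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

prob = model.predict_proba(X_te)[:, 1]            # landslide probability per cell
print("AUC:", round(roc_auc_score(y_te, prob), 3))
print("coefficients:", model.coef_.round(3))      # sign/size indicate each factor's effect
```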

  9. Energy and environment efficiency analysis based on an improved environment DEA cross-model: Case study of complex chemical processes

    International Nuclear Information System (INIS)

    Geng, ZhiQiang; Dong, JunGen; Han, YongMing; Zhu, QunXiong

    2017-01-01

    Highlights: •An improved environment DEA cross-model method is proposed. •An energy and environment efficiency analysis framework for complex chemical processes is obtained. •The proposed method is effective for energy saving and emission reduction in complex chemical processes. -- Abstract: The complex chemical process is a high-pollution, high-energy-consumption industrial process. Therefore, it is very important to analyze and evaluate the energy and environment efficiency of complex chemical processes. Data Envelopment Analysis (DEA) is used to evaluate the relative effectiveness of decision-making units (DMUs). However, the traditional DEA method usually cannot genuinely distinguish effective from inefficient DMUs due to its extreme or unreasonable weight distribution of input and output variables. Therefore, this paper proposes an energy and environment efficiency analysis method based on an improved environment DEA cross-model (DEACM). The inputs of the complex chemical process are divided into energy and non-energy inputs, while the outputs are divided into desirable and undesirable outputs. The energy and environment performance index (EEPI), based on cross evaluation, is then used to represent the overall performance of each DMU. Moreover, the improvement direction for energy saving and carbon emission reduction of each inefficient DMU is quantitatively obtained based on the self-evaluation model of the improved environment DEACM. The results of analyzing the energy and environment efficiency of the ethylene production process show that the improved environment DEACM discriminates between effective and inefficient DMUs better than the original DEA method, and that it can identify the energy-saving and carbon-emission-reduction potential of ethylene plants, in particular the improvement direction of inefficient DMUs toward higher energy efficiency and lower carbon emissions.
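
    As a baseline for comparison, the sketch below computes plain input-oriented CCR DEA efficiencies with a linear program; it does not include the paper's undesirable outputs or cross-evaluation extensions, and the input/output data are invented:

```python
# Baseline CCR (input-oriented) DEA efficiency via linear programming.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (e.g. ethylene plants); inputs: energy, non-energy; output: product
X = np.array([[100., 20.], [120., 25.], [ 90., 30.], [150., 15.]])
Y = np.array([[ 80.], [ 90.], [ 85.], [ 95.]])
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_efficiency(o):
    """Envelopment form: minimise theta s.t. the composite DMU dominates DMU o."""
    c = np.r_[1.0, np.zeros(n)]                       # decision variables: [theta, lambda_1..lambda_n]
    A_ub, b_ub = [], []
    for i in range(m):                                # sum_j lambda_j * x_ij <= theta * x_io
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):                                # sum_j lambda_j * y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```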

  10. Entrainment and Control of Bacterial Populations: An in Silico Study over a Spatially Extended Agent Based Model.

    Science.gov (United States)

    Mina, Petros; Tsaneva-Atanasova, Krasimira; Bernardo, Mario di

    2016-07-15

    We extend a spatially explicit agent-based model (ABM) developed previously to investigate entrainment and control of the emergent behavior of a population of synchronized oscillating cells in a microfluidic chamber. Unlike most work on models of control of cellular systems, which focuses on temporal changes, we model individual cells with spatial dependencies which may contribute to certain behavioral responses. We use the model to investigate the response of both open-loop and closed-loop strategies, such as proportional control (P-control), proportional-integral control (PI-control) and proportional-integral-derivative control (PID-control), to heterogeneities and growth in the cell population, variations of the control parameters and spatial effects such as diffusion in the spatially explicit setting of a microfluidic chamber setup. We show that, as expected from the theory of phase locking in dynamical systems, open-loop control can only entrain the cell population in a subset of forcing periods, with a wide variety of dynamical behaviors obtained outside these regions of entrainment. Closed-loop control is shown instead to guarantee entrainment in a much wider region of control parameter space, although it presents limitations when the population size increases beyond a certain threshold. In silico tracking experiments are also performed to validate the ability of classical control approaches to achieve other reference behaviors such as a desired constant output or a linearly varying one. All simulations are carried out in BSim, an advanced agent-based simulator of microbial populations, which is here extended ad hoc to include the effects of control strategies acting on the population.
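
    For readers unfamiliar with the closed-loop strategies named above, here is a generic discrete PID loop driving a toy first-order plant toward a reference; the gains and plant dynamics are illustrative and do not represent the BSim agent-based setting of the paper:

```python
# Minimal discrete PID controller on a toy first-order plant.
Kp, Ki, Kd = 2.0, 0.5, 0.1
dt, steps = 0.1, 300
reference = 1.0                       # desired mean output of the population

y, integral, prev_error = 0.0, 0.0, 0.0
history = []
for _ in range(steps):
    error = reference - y
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = Kp * error + Ki * integral + Kd * derivative     # control input (e.g. inducer level)
    prev_error = error

    # toy first-order plant standing in for the cell population's mean response
    y += dt * (-y + u)
    history.append(y)

print(round(history[-1], 3))          # settles near the reference under closed-loop control
```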

  11. A Numerical Study of Forbush Decreases with a 3D Cosmic-Ray Modulation Model Based on an SDE Approach

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Xi; Feng, Xueshang [SIGMA Weather Group, State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing 100190 (China); Potgieter, Marius S. [Centre for Space Research, North-West University, Potchefstroom 2520 (South Africa); Zhang, Ming [Department of Physics and Space Sciences, Florida Institute of Technology, 150 West University Boulevard, Melbourne, FL 32901 (United States)

    2017-04-10

    Based on the reduced diffusion mechanism for producing Forbush decreases (Fds) in the heliosphere, we constructed a three-dimensional (3D) diffusion barrier, and by incorporating it into a stochastic differential equation (SDE) based, time-dependent cosmic-ray transport model, a 3D numerical model for simulating Fds is built and applied to a period of relatively quiet solar activity. This SDE model generally corroborates previous Fd simulations concerning the effects of the solar magnetic polarity, the tilt angle of the heliospheric current sheet (HCS), and cosmic-ray particle energy. Because the modulation processes in this 3D model are multi-directional, the barrier's geometrical features affect the intensity profiles of Fds differently. We find that both the latitudinal and longitudinal extent of the barrier have smaller effects on these profiles than its radial extent and the level of decreased diffusion inside the disturbance. We find, with the 3D approach, that the rotational motion of the HCS causes the relative location of the observation point with respect to the HCS to vary, so that a periodic pattern appears in the cosmic-ray intensity at the observing location. Correspondingly, the magnitude and recovery time of an Fd change, and the recovering intensity profile contains oscillations as well. Investigating the variation of Fd magnitude with heliocentric radial distance, we find that the magnitude decreases overall and, additionally, exhibits an oscillating pattern as the radial distance increases, which coincides well with the wavy profile of the HCS under quiet solar modulation conditions.

  12. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China

    Science.gov (United States)

    Liu, Dong-jun; Li, Li

    2015-01-01

    For the issue of haze-fog, PM2.5 is the main factor influencing haze-fog pollution in China. The trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation in this study. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, an Artificial Neural Network (ANN) model and the Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods based on weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted based on the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values in the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the single prediction methods and had better applicability. It provides a new prediction method for the air quality forecasting field. PMID:26110332
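
    One common variant of entropy-based weighting of forecast errors is sketched below; the paper's exact formulation may differ, and the observation and forecast series are invented illustrative numbers:

```python
# One common entropy-based weighting variant for combining several forecasts.
import numpy as np

actual = np.array([80., 95., 70., 110., 60.])           # observed PM2.5 (ug/m3), invented
forecasts = np.array([[78., 92., 75., 105., 66.],       # "ARIMA"-style forecasts
                      [85., 97., 68., 118., 57.],       # "ANN"-style forecasts
                      [74., 99., 73., 101., 65.]])      # "ESM"-style forecasts

rel_err = np.abs(actual - forecasts) / actual            # relative error per method/time
p = rel_err / rel_err.sum(axis=1, keepdims=True)         # normalise each method's error sequence
n = actual.size
entropy = -(p * np.log(p)).sum(axis=1) / np.log(n)       # entropy of each error sequence

weights = entropy / entropy.sum()                        # more evenly spread errors -> higher weight
combined = weights @ forecasts                           # comprehensive forecast
print("weights:", weights.round(3))
print("combined forecast:", combined.round(1))
```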

  13. A Study of the Role of Decisional Balance in Exercise Status Among Yazd’s Staff Based on Transtheoretical Model

    Directory of Open Access Journals (Sweden)

    Mazloomy Mahmoudabad Saeid

    2009-06-01

    Full Text Available Background: According to statistical evidence put forward by WHO in 2003, lack of physical activity is the reason for 1.9 million deaths in the world. More than 60% of adults in the world and more than 80% of Iranian adults do not perform sufficient levels of physical activity. Despite the great advantages of exercise, a large portion of the population in many countries does not exercise adequately and is thus deprived of its benefits. The Transtheoretical Model is identified as a comprehensive model of behavior change, and decisional balance is regarded as a fundamental construct of the model, centering on the positive and negative consequences of behavior change. In this study we examined the stage of exercise behavior change among Yazd staff according to the Transtheoretical Model and its relation to decisional balance. Materials and Methods: In this cross-sectional study, 220 subjects were recruited. The subjects were selected through two-stage cluster sampling; the instrument for data collection was a questionnaire consisting of two parts (demographic variables and constructs of the TTM). Data were analyzed by ANOVA and t-test in SPSS, with P<0.05 considered significant. Results: The results of the study on 152 males (69.1%) and 68 females (30.9%) with an average age of 34±8.68 indicated a significant relationship between pros and cons and stages of change (both P<0.0001) and between pros and age (P=0.004). Of the 220 subjects, 44 (20%) were in the pre-contemplation stage, 88 (40%) in contemplation, 30 (13.6%) in preparation, 16 (7.3%) in action and 42 (19.1%) in the maintenance stage. Conclusion: Given that the Transtheoretical Model revealed a significant relationship between decisional balance and stage of change in exercise, educational classes for employees are proposed in order to increase the pros and reduce the cons of exercise.

  14. Studying psychosocial adaptation to end-stage renal disease: the proximal-distal model of health-related outcomes as a base model.

    Science.gov (United States)

    Chan, Ramony; Brooks, Robert; Erlich, Jonathan; Gallagher, Martin; Snelling, Paul; Chow, Josephine; Suranyi, Michael

    2011-05-01

    Studying psychosocial adaptation in end-stage renal disease (ESRD) is increasingly important, as it may explain the variability in health outcomes unaccounted for by clinical factors. The Brenner et al. proximal-distal model of health-related outcomes provides a theoretical foundation for understanding psychosocial adaptation and integrating health outcomes, clinical, and psychosocial factors (Brenner MH, Curbow B, Legro MW. The proximal-distal continuum of multiple health outcome measures: the case of cataract surgery. Med Care. 1995;33(4 Suppl):AS236-44). This study aims to empirically validate the proximal-distal model in the dialysis population and examine the impact of psychosocial factors on the model. A cross-sectional observational study was conducted with a sample of long-term dialysis patients (n=201). Eleven factors: quality of life (QoL), depression, positive affect, comorbidity, symptoms, physical functioning, disease accommodation, loss, self-efficacy, illness acceptance, and social support were measured by standardized psychometric scales. A three-month average of hemoglobin was used. Latent composite structural equation modeling was used to examine the models. The proximal-distal model with slight modification was supported by fit statistics [χ(2)=16.04, df=13, P=.25, root mean square error of approximation (RMSEA)=0.024], indicating that the impact of clinical factors on QoL is mediated through a range of functional and psychological factors, except for hemoglobin which impacts directly on QoL. The model with additional psychosocial factors was also supported by fit statistics (χ(2)=43.59, df=41, P=.36, RMSEA=0.018). These additional factors mainly impact on symptom status, psychological states, and QoL components of the model. The present study supported the proximal-distal model in the dialysis population and demonstrated the considerable impact of psychosocial factors on the model. The proximal-distal model plus psychosocial factors as a

  15. A Southern Ocean variability study using the Argo-based Model for Investigation of the Global Ocean (AMIGO)

    Science.gov (United States)

    Lebedev, Konstantin

    2017-04-01

    The era of satellite observations of the ocean surface that started at the end of the 20th century and the development of the Argo project in the first years of the 21st century, designed to collect information on the upper 2000 m of the ocean using satellites, provide unique opportunities for continuous monitoring of the Global Ocean state. Starting from 2005, measurements with the Argo floats have been performed over the majority of the World Ocean. In November 2007, the Argo program reached the coverage of 3000 simultaneously operating floats (one float in a three-degree square) planned during the development of the program. Currently, 4000 Argo floats autonomously profile the upper 2000-m water column of the ocean from Antarctica to Spitsbergen, increasing World Ocean temperature and salinity databases by 12000 profiles per month. This makes it possible to solve problems of reconstructing and monitoring the ocean state on an almost real-time basis, study the ocean dynamics, obtain reasonable estimates of the climatic state of the ocean in the last decade and estimate existing intraclimatic trends. We present the newly developed Argo-Based Model for Investigation of the Global Ocean (AMIGO), which consists of a block for variational interpolation of the profiles of drifting Argo floats to a regular grid and a block for model hydrodynamic adjustment of the variationally interpolated fields. Such a method makes it possible to obtain a full set of oceanographic characteristics - temperature, salinity, density, and current velocity - using irregularly located Argo measurements (the principle of the variational interpolation technique entails minimization of the misfit between the interpolated fields defined on the regular grid and irregularly distributed data; hence the optimal solution passes as close to the data as possible). The simulations were performed for the entire globe limited in the north by 85.5° N using 1° grid spacing in both longitude and latitude. At the

  16. Lithologic Effects on Landscape Response to Base Level Changes: A Modeling Study in the Context of the Eastern Jura Mountains, Switzerland

    Science.gov (United States)

    Yanites, Brian J.; Becker, Jens K.; Madritsch, Herfried; Schnellmann, Michael; Ehlers, Todd A.

    2017-11-01

    Landscape evolution is a product of the forces that drive geomorphic processes (e.g., tectonics and climate) and the resistance to those processes. The underlying lithology and structural setting in many landscapes set the resistance to erosion. This study uses a modified version of the Channel-Hillslope Integrated Landscape Development (CHILD) landscape evolution model to determine the effect of a spatially and temporally changing erodibility in a terrain with a complex base level history. Specifically, our focus is to quantify how the effects of variable lithology influence transient base level signals. We set up a series of numerical landscape evolution models with increasing levels of complexity based on the lithologic variability and base level history of the Jura Mountains of northern Switzerland. The models are consistent with lithology (and therewith erodibility) playing an important role in the transient evolution of the landscape. The results show that the erosion rate history at a location depends on the rock uplift and base level history, the range of erodibilities of the different lithologies, and the history of the surface geology downstream from the analyzed location. Near the model boundary, the history of erosion is dominated by the base level history. The transient wave of incision, however, is quite variable in the different model runs and depends on the geometric structure of lithology used. It is thus important to constrain the spatiotemporal erodibility patterns downstream of any given point of interest to understand the evolution of a landscape subject to variable base level in a quantitative framework.

  17. Design Thinking and Cloud Manufacturing: A Study of Cloud Model Sharing Platform Based on Separated Data Log

    Directory of Open Access Journals (Sweden)

    Zhe Wei

    2013-01-01

    Full Text Available To solve the product data consistency problem caused by portable systems that cannot update product data in real time in a mobile environment under the mass customization production mode, a new log-based optimistic replication method for product data is presented. This paper focuses on the design thinking provider, probing into a manufacturing resource design thinking cloud platform based on manufacturing resource-locating technologies, and also discusses several application scenarios of cloud locating technologies in the manufacturing environment. The actual demands of manufacturing create a new mode that is service-oriented with high efficiency and low consumption. Finally, this approach differs from the crowd-sourcing application model of Local Motors. The sharing platform operator is responsible for a master plan for the platform, proposing an open interface standard and establishing a service operation mode.

  18. Optimal model-based deficit irrigation scheduling using AquaCrop: a simulation study with cotton, potato and tomato

    DEFF Research Database (Denmark)

    Linker, Raphael; Ioslovich, Ilya; Sylaios, Georgios

    2016-01-01

    Water shortage is the main limiting factor for agricultural productivity in many countries and improving water use efficiency in agriculture has been the focus of numerous studies. The usual approach to limit water consumption in agriculture is to apply water quotas and in such a situation farmers... variables are the irrigation amounts for each day of the season. The objective function is the expected yield calculated with the use of a model. In the present work we solved this optimization problem for three crops modeled by the model AquaCrop. This optimization problem is non-trivial due to the non-smooth behavior of the objective function and the fact that it involves multiple integer variables. We developed an optimization scheme for generating sub-optimal irrigation schedules that take implicitly into account the response of the crop to water stress, and used these as initial guesses for a full...

  19. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study

    DEFF Research Database (Denmark)

    Tønnesen, Hanne; Christensen, Mette E; Groene, Oliver

    2007-01-01

    The first step in handling health promotion (HP) in Diagnosis Related Groups (DRGs) is a systematic documentation and registration of the activities in the medical records. So far, the possibility and tradition for systematic registration of clinical HP activities in the medical records and in patient administrative systems have been sparse. Therefore, the activities are mostly invisible in the registers of hospital services as well as in budgets and balances. A simple model has been described to structure the registration of the HP procedures performed by the clinical staff. The model consists of two parts; the first part includes motivational counselling (7 codes) and the second part comprises intervention, rehabilitation and after treatment (8 codes). The objective was to evaluate in an international study the usefulness, applicability and sufficiency of a simple model for the systematic...

  20. Time-series-based hybrid mathematical modelling method adapted to forecast automotive and medical waste generation: Case study of Lithuania.

    Science.gov (United States)

    Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras

    2018-05-01

    The aim of the study was to create a hybrid forecasting method that could produce higher accuracy forecasts than previously used 'pure' time series methods. These methods had already been tested on total automotive waste, hazardous automotive waste and total medical waste generation, but demonstrated at least a 6% error rate in different cases, and efforts were made to decrease it further. The newly developed hybrid models used a random start generation method to incorporate the advantages of different time-series methods, which helped increase the accuracy of forecasts by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not increase the accuracy of total automotive waste generation forecasts. The developed models' ability to produce short- and mid-term forecasts was tested over different prediction horizons.

  1. Culturicon model: A new model for cultural-based emoticon

    Science.gov (United States)

    Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham

    2017-10-01

    Emoticons are popular among users of distributed collective interaction for expressing their emotions, gestures and actions. Emoticons have been shown to help avoid misunderstanding of messages, save attention and improve communication among speakers of different native languages. However, besides the benefits that emoticons provide, research on emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be one of the extensively researched aspects of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall Culture Model, the Trompenaars and Hampden Culture Model and the Hofstede Culture Model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.

  2. [Study on modeling method of total viable count of fresh pork meat based on hyperspectral imaging system].

    Science.gov (United States)

    Wang, Wei; Peng, Yan-Kun; Zhang, Xiao-Li

    2010-02-01

    Once the total viable count (TVC) of bacteria in fresh pork meat exceeds a certain level, the bacteria become pathogenic. The present paper explores the feasibility of hyperspectral imaging technology combined with a suitable modeling method for the prediction of TVC in fresh pork meat. For this kind of problem, which has a markedly nonlinear character, contains few samples, and involves a large amount of data expressing spectral and spatial information, choosing a sound modeling method is crucial to achieving good prediction results. Based on a comparison of partial least-squares regression (PLSR), artificial neural networks (ANNs) and least-squares support vector machines (LS-SVM), the authors found that the PLSR method could not handle the nonlinear regression problem and the ANNs method could not obtain satisfactory predictions with few samples, whereas prediction models based on LS-SVM can balance a small training error against good generalization ability. Therefore LS-SVM was adopted as the modeling method to predict the TVC of pork meat. The TVC prediction model was then constructed using all 512 wavelength channels acquired by the hyperspectral imaging system. The determination coefficient between the TVC obtained with the standard plate count method and the LS-SVM prediction was 0.9872 for the calibration set and 0.9426 for the prediction set, with a root mean square error of calibration (RMSEC) of 0.2071 and a root mean square error of prediction (RMSEP) of 0.2176, considerably better than the results of the MLR, PLSR and ANNs methods. This research demonstrates that the hyperspectral imaging system coupled with the LS-SVM modeling method is a valid means for quick and nondestructive determination of the TVC of pork meat.
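
    The following sketch shows the core of LS-SVM regression as used conceptually here: the dual problem reduces to a single linear system involving the kernel matrix, so fitting amounts to one solve. The RBF kernel, hyperparameters and the toy "spectra" are placeholders; the paper's hyperspectral preprocessing and parameter optimization are not reproduced.

```python
# Minimal least-squares SVM (LS-SVM) regression with an RBF kernel, solving the
# standard LS-SVM linear system directly.  Illustrative sketch only.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    # Block system: [[0, 1^T], [1, K + I/gamma]] [b, alpha]^T = [0, y]^T
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                  # bias b, dual coefficients alpha

def lssvm_predict(X_new, X_train, b, alpha, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy usage: 30 "spectra" with 512 bands, predicting a log-TVC-like value.
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 512))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=30)
b, alpha = lssvm_fit(X, y, gamma=50.0, sigma=25.0)
fit = lssvm_predict(X, X, b, alpha, sigma=25.0)
print("training correlation^2:", np.corrcoef(fit, y)[0, 1] ** 2)
```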

  3. A case study on the historical peninsula of Istanbul based on three-dimensional modeling by using photogrammetry and terrestrial laser scanning.

    Science.gov (United States)

    Ergun, Bahadir; Sahin, Cumhur; Baz, Ibrahim; Ustuntas, Taner

    2010-06-01

    Terrestrial laser scanning is a popular methodology that is frequently used in documenting historical buildings and cultural heritage. The historical peninsula region sprawls over an area of approximately 1,500 ha and is one of the main concentrations of historical buildings in Istanbul. In this study, terrestrial laser scanning and close-range photogrammetry techniques are integrated with each other to create a 3D city model of this part of Istanbul, including some of the buildings that represent the most brilliant eras of the Byzantine and Ottoman Empires. Several terrestrial laser scanners with different specifications were used to solve various geometric scanning problems for distinct areas of the subject city. The photogrammetric method was used to document the façades of these historical buildings for architectural purposes. This study differs from similar ones in its application process, which focuses on the geometry, building texture, and density of the study area. Nowadays, the largest-scale 3D modeling studies, in terms of measurement methodology, are urban modeling studies; because of this large scale, 3D urban modeling is executed in a gradual way. In this study, a modeling method based on street façades was used, and the complementary elements of the modeling process were combined in several ways. A street model is presented as a sample of the applied study. In our application of 3D modeling, modeling based on close-range photogrammetry and combined-calibration data were used compatibly with the terrestrial laser scanner data. The final product was generated together with the pedestal data for 3D visualization.

  4. Study of Object-Oriented Model for the Knowledge Base System

    OpenAIRE

    Mingwei, Zhao; Yanzhong, Dang

    2005-01-01

    Based on an analysis of the object-oriented model, the knowledge base and the knowledge base system, using object-oriented and knowledge-base theories, this paper discusses the relationships between the object-oriented model and the knowledge base. The architecture of an object-oriented knowledge base system is proposed and a Rule-Case-Based Reasoning knowledge base system is designed.

  5. Applying the effort-reward imbalance model to household and family work: a population-based study of German mothers

    Directory of Open Access Journals (Sweden)

    Sperlich Stefanie

    2012-01-01

    Full Text Available Abstract Background This paper reports on results of a newly developed questionnaire for the assessment of effort-reward imbalance (ERI) in unpaid household and family work. Methods: Using a cross-sectional population-based survey of German mothers (n = 3129), the dimensional structure of the theoretical ERI model was validated by means of Confirmatory Factor Analysis (CFA). Analyses of Variance were computed to examine relationships between ERI and social factors and health outcomes. Results CFA revealed good psychometric properties indicating that the subscale 'effort' is based on one latent factor and the subscale 'reward' is composed of four dimensions: 'intrinsic value of family and household work', 'societal esteem', 'recognition from the partner', and 'affection from the child(ren)'. About 19.3% of mothers perceived lack of reciprocity and 23.8% showed high rates of overcommitment in terms of inability to withdraw from household and family obligations. Socially disadvantaged mothers were at higher risk of ERI, in particular with respect to the perception of low societal esteem. Gender inequality in the division of household and family work and work-family conflict accounted most for ERI in household and family work. Analogous to ERI in paid work, we could demonstrate that ERI affects self-rated health, somatic complaints, mental health and, to some extent, hypertension. Conclusions The newly developed questionnaire demonstrates satisfactory validity and promising results for extending the ERI model to household and family work.

  6. Applying the effort-reward imbalance model to household and family work: a population-based study of German mothers.

    Science.gov (United States)

    Sperlich, Stefanie; Peter, Richard; Geyer, Siegfried

    2012-01-06

    This paper reports on results of a newly developed questionnaire for the assessment of effort-reward imbalance (ERI) in unpaid household and family work. Using a cross-sectional population-based survey of German mothers (n = 3129) the dimensional structure of the theoretical ERI model was validated by means of Confirmatory Factor Analysis (CFA). Analyses of Variance were computed to examine relationships between ERI and social factors and health outcomes. CFA revealed good psychometric properties indicating that the subscale 'effort' is based on one latent factor and the subscale 'reward' is composed of four dimensions: 'intrinsic value of family and household work', 'societal esteem', 'recognition from the partner', and 'affection from the child(ren)'. About 19.3% of mothers perceived lack of reciprocity and 23.8% showed high rates of overcommitment in terms of inability to withdraw from household and family obligations. Socially disadvantaged mothers were at higher risk of ERI, in particular with respect to the perception of low societal esteem. Gender inequality in the division of household and family work and work-family conflict accounted most for ERI in household and family work. Analogous to ERI in paid work we could demonstrate that ERI affects self-rated health, somatic complaints, mental health and, to some extent, hypertension. The newly developed questionnaire demonstrates satisfactory validity and promising results for extending the ERI model to household and family work.

  7. DEVELOPMENT OF A SPREADSHEET BASED VENDOR MANAGED INVENTORY MODEL FOR A SINGLE ECHELON SUPPLY CHAIN: A CASE STUDY

    Directory of Open Access Journals (Sweden)

    Karanam Prahlada Rao

    2010-11-01

    Full Text Available Vendor managed inventory (VMI) is a supply chain initiative where the supplier assumes the responsibility for managing inventories using advanced communication means such as online messaging and data retrieval systems. A well-collaborated vendor managed inventory system can improve supply chain performance by decreasing the inventory level and increasing the fill rate. This paper investigates the implementation of vendor managed inventory systems in a consumer goods industry. We consider an (r, Q) policy for replenishing inventory. The objective of this work is to minimize the inventory across the supply chain and maximize the service level. The major contributions of this work are to develop a spreadsheet model for the VMI system, evaluate the total inventory cost using both the spreadsheet-based method and an analytical method, quantify the inventory reduction, estimate the service efficiency level, and validate the VMI spreadsheet model with randomly generated demand. In the application, VMI as an inventory control system is able to reduce the inventory cost without sacrificing the service level. The results furthermore show that the inventory reduction obtained from the analytical method is close to that of the spreadsheet-based approach, which confirms the success of VMI. However, VMI success is affected by the quality of buyer-supplier relationships, the quality of the IT system and the intensity of information sharing, but not by the quality of the information shared.
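
    To make the (r, Q) replenishment logic concrete, the toy simulation below tracks on-hand stock, outstanding orders, costs and fill rate for a single echelon under stochastic demand, which is essentially what a spreadsheet VMI model tabulates period by period. The demand distribution, costs and lead time are invented placeholders, not the case-study data.

```python
# Toy (r, Q) replenishment simulation for a single-echelon VMI setting: when the
# inventory position drops to the reorder point r, an order of size Q is released
# and arrives after a fixed lead time.  All numbers are illustrative assumptions.
import numpy as np

def simulate_rq(r=60, Q=120, lead_time=3, periods=365,
                hold_cost=0.5, order_cost=40.0, seed=3):
    rng = np.random.default_rng(seed)
    on_hand, pipeline = Q, []            # pipeline: list of (arrival_period, qty)
    total_hold = total_order = demand_met = demand_total = 0.0
    for t in range(periods):
        # Receive any orders arriving this period.
        arrived = sum(q for (due, q) in pipeline if due == t)
        pipeline = [(due, q) for (due, q) in pipeline if due != t]
        on_hand += arrived
        # Serve stochastic demand (lost sales if short).
        d = rng.poisson(20)
        served = min(d, on_hand)
        on_hand -= served
        demand_met += served
        demand_total += d
        # Reorder when the inventory position (on hand + on order) hits r.
        position = on_hand + sum(q for _, q in pipeline)
        if position <= r:
            pipeline.append((t + lead_time, Q))
            total_order += order_cost
        total_hold += hold_cost * on_hand
    return {"total_cost": total_hold + total_order,
            "fill_rate": demand_met / demand_total}

print(simulate_rq())
```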

  8. An Integrated model for Product Quality Development—A case study on Quality functions deployment and AHP based approach

    Science.gov (United States)

    Maitra, Subrata; Banerjee, Debamalya

    2010-10-01

    This article deals with product quality and design improvement related to the failure modes of machinery and plant operational problems at an industrial blower fan company. The project aims at developing the product on the basis of standardized production parameters for selling its products in the market. Special attention is also paid to blower fans ordered directly by the customer on the basis of the installed air capacity to be provided by the fan. Quality function deployment (QFD) is primarily a customer-oriented approach. The proposed model integrates QFD with AHP to select and rank the decision criteria covering commercial and technical factors, and to measure the decision parameters for selecting the best product in a competitive environment. The present AHP-QFD model justifies the selection of a blower fan with the help of a group of experts' opinions obtained by pairwise comparison of the customer's and ergonomics-based technical design requirements. The steps involved in implementing QFD-AHP and selecting the weighted criteria may be helpful for similar industries balancing cost and utility for a competitive product.
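
    A short sketch of the AHP step is given below: criterion weights are obtained from a pairwise-comparison matrix via its principal eigenvector, together with the consistency ratio used to check the experts' judgements. The comparison matrix shown is a made-up example, not the judgements elicited in the paper.

```python
# AHP criterion weights from a pairwise-comparison matrix (principal eigenvector)
# plus Saaty's consistency ratio.  The matrix is an invented example.
import numpy as np

def ahp_weights(pairwise):
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)                            # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (vals.real[k] - n) / (n - 1)                   # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)       # Saaty's random index
    return w, ci / ri                                   # weights, consistency ratio

# Example: 4 criteria (cost, delivery, technical fit, ergonomics).
A = np.array([[1,   3,   2,   5],
              [1/3, 1,   1/2, 2],
              [1/2, 2,   1,   3],
              [1/5, 1/2, 1/3, 1]])
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```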

  9. Studies of acid-base homeostasis during simulated weightlessness: Application of the water immersion model to man

    Science.gov (United States)

    Epstein, M.

    1975-01-01

    The effects of water immersion on acid-base homeostasis were investigated under carefully controlled conditions. Studies of renal acidification were carried out on seven healthy male subjects, each consuming a diet containing 150 meq sodium and 100 meq potassium. Control and immersion studies were carried out on each subject on the fourth and sixth days, respectively, of dietary equilibration, by which time all subjects had achieved sodium balance. The experimental protocols on study days were similar (except for the amount of water administered).

  10. [The survival prediction model of advanced gallbladder cancer based on Bayesian network: a multi-institutional study].

    Science.gov (United States)

    Tang, Z H; Geng, Z M; Chen, C; Si, S B; Cai, Z Q; Song, T Q; Gong, P; Jiang, L; Qiu, Y H; He, Y; Zhai, W L; Li, S P; Zhang, Y C; Yang, Y

    2018-05-01

    Objective: To investigate the clinical value of a Bayesian network in predicting the survival of patients with advanced gallbladder cancer (GBC) who underwent curative intent surgery. Methods: The clinical data of patients with advanced GBC who underwent curative intent surgery in 9 institutions from January 2010 to December 2015 were analyzed retrospectively. A median survival time model based on a tree-augmented naïve Bayes algorithm was established with BayesiaLab software. The survival time, number of metastatic lymph nodes (NMLN), T stage, pathological grade, margin, jaundice, liver invasion, age, sex and tumor morphology were included in this model. A confusion matrix, the receiver operating characteristic curve and the area under the curve were used to evaluate the accuracy of the model. A priori statistical analysis of these 10 variables and a posterior analysis (with survival time as the target variable and the remaining factors as attribute variables) were performed. The importance ranking of each variable was calculated with the polymorphic Birnbaum importance calculation based on the posterior analysis results. A survival probability forecast table was constructed based on the top 4 prognostic factors. Survival curves were drawn by the Kaplan-Meier method, and differences between survival curves were compared using the Log-rank test. Results: A total of 316 patients were enrolled, including 109 males and 207 females. The ratio of males to females was 1.0∶1.9, and the age was (62.0±10.8) years. There were 298 cases (94.3%) of R0 resection and 18 cases (5.7%) of R1 resection. T staging: 287 cases (90.8%) were T3 and 29 cases (9.2%) were T4. The median survival time (MST) was 23.77 months, and the 1-, 3- and 5-year survival rates were 67.4%, 40.8% and 32.0%, respectively. For the Bayesian model, the numbers of correctly predicted cases were 121 (≤23.77 months) and 115 (>23.77 months) respectively, giving an accuracy of 74.86%. The prior probability of survival time was 0.5032 (≤23.77 months) and 0.4968

  11. Study on the Development of Industry of Internet of Things Based on Competitive GEM Model in Fujian Province

    Directory of Open Access Journals (Sweden)

    Di Jun An

    2016-01-01

    Full Text Available Firstly, the basic theories of the internet of things, the competitive GEM model and industrial development in Fujian Province are reviewed in this paper. Then, the factors influencing the cultivation of the industrial competitiveness of the internet of things are examined, and finally suggestions for enhancing the competitiveness of the internet of things and strengthening the cultivation of internet-of-things talent are put forward.

  12. Partitioning of excess mortality in population-based cancer patient survival studies using flexible parametric survival models

    Directory of Open Access Journals (Sweden)

    Eloranta Sandra

    2012-06-01

    Full Text Available Abstract Background Relative survival is commonly used for studying survival of cancer patients as it captures both the direct and indirect contribution of a cancer diagnosis on mortality by comparing the observed survival of the patients to the expected survival in a comparable cancer-free population. However, existing methods do not allow estimation of the impact of isolated conditions (e.g., excess cardiovascular mortality) on the total excess mortality. For this purpose we extend flexible parametric survival models for relative survival, which use restricted cubic splines for the baseline cumulative excess hazard and for any time-dependent effects. Methods In the extended model we partition the excess mortality associated with a diagnosis of cancer through estimating a separate baseline excess hazard function for the outcomes under investigation. This is done by incorporating mutually exclusive background mortality rates, stratified by the underlying causes of death reported in the Swedish population, and by introducing cause of death as a time-dependent effect in the extended model. This approach thereby enables modeling of temporal trends in e.g., excess cardiovascular mortality and remaining cancer excess mortality simultaneously. Furthermore, we illustrate how the results from the proposed model can be used to derive crude probabilities of death due to the component parts, i.e., probabilities estimated in the presence of competing causes of death. Results The method is illustrated with examples where the total excess mortality experienced by patients diagnosed with breast cancer is partitioned into excess cardiovascular mortality and remaining cancer excess mortality. Conclusions The proposed method can be used to simultaneously study disease patterns and temporal trends for various causes of cancer-consequent deaths. Such information should be of interest for patients and clinicians as one way of improving prognosis after cancer is

  13. New Insights to Compare and Choose TKTD Models for Survival Based on an Interlaboratory Study for Lymnaea stagnalis Exposed to Cd.

    Science.gov (United States)

    Baudrot, Virgile; Preux, Sara; Ducrot, Virginie; Pave, Alain; Charles, Sandrine

    2018-02-06

    Toxicokinetic-toxicodynamic (TKTD) models, such as the General Unified Threshold model of Survival (GUTS), provide a consistent process-based framework, compared to classical dose-response models, for analyzing both time- and concentration-dependent data sets. However, the extent to which GUTS models (Stochastic Death (SD) and Individual Tolerance (IT)) lead to a better fit than a classical dose-response model at a given target time (TT) has been poorly investigated. Our paper highlights that, for the studied data sets, GUTS estimates are generally more conservative and have reduced uncertainty, through smaller credible intervals, than classical TT approaches. GUTS models also enable estimating any x% lethal concentration at any time (LCx,t) and provide biological information on the internal processes occurring during the experiments. While both GUTS-SD and GUTS-IT models outcompete classical TT approaches, choosing one preferentially over the other is still challenging. Indeed, the estimates of survival rate over time and LCx,t are very close between both models, but our study also points out that the joint posterior distributions of SD model parameters are sometimes bimodal, while two parameters of the IT model seem strongly correlated. Therefore, the selection between these two models has to be supported by the experimental design and the biological objectives, and this paper provides some insights to drive this choice.
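
    To illustrate the stochastic-death variant, the sketch below numerically integrates the reduced GUTS-SD equations (scaled damage driven by exposure, and a hazard that switches on above a threshold) to obtain a predicted survival curve. The parameter values and the constant-exposure scenario are arbitrary placeholders rather than the fitted Lymnaea stagnalis values.

```python
# Minimal numerical sketch of the GUTS-RED-SD (stochastic death) survival model:
# scaled damage follows dD/dt = kd*(C(t) - D), and the hazard is
# h(t) = b*max(D - z, 0) + hb.  Parameters are arbitrary placeholders.
import numpy as np

def guts_sd_survival(times, conc, kd=0.1, z=2.0, b=0.05, hb=0.01):
    """times: increasing array [days]; conc(t): exposure function; returns S(t)."""
    D = 0.0
    cum_hazard = 0.0
    surv = np.empty_like(times, dtype=float)
    for i, t in enumerate(times):
        dt = times[i] - times[i - 1] if i > 0 else 0.0
        D += kd * (conc(t) - D) * dt                     # explicit Euler damage update
        cum_hazard += (b * max(D - z, 0.0) + hb) * dt    # threshold hazard + background
        surv[i] = np.exp(-cum_hazard)
    return surv

# Constant exposure of 5 (arbitrary units) over 21 days.
t = np.linspace(0, 21, 211)
S = guts_sd_survival(t, conc=lambda _t: 5.0)
print("predicted survival at day 21:", round(float(S[-1]), 3))
```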

  14. Burden of Six Healthcare-Associated Infections on European Population Health: Estimating Incidence-Based Disability-Adjusted Life Years through a Population Prevalence-Based Modelling Study

    Science.gov (United States)

    Eckmanns, Tim; Abu Sin, Muna; Ducomble, Tanja; Harder, Thomas; Sixtensson, Madlen; Velasco, Edward; Weiß, Bettina; Kramarz, Piotr; Monnet, Dominique L.; Kretzschmar, Mirjam E.; Suetens, Carl

    2016-01-01

    . HAP and HA primary BSI were associated with the highest burden because of their high severity. The cumulative burden of the six HAIs was higher than the total burden of all other 32 communicable diseases included in the BCoDE 2009–2013 study. The main limitations of the study are the variability in the parameter estimates, in particular the disease models’ case fatalities, and the use of the Rhame and Sudderth formula for estimating incident number of cases from prevalence data. Conclusions We estimated the EU/EEA burden of HAIs in DALYs in 2011–2012 using a transparent and evidence-based approach that allows for combining estimates of morbidity and of mortality in order to compare with other diseases and to inform a comprehensive ranking suitable for prioritization. Our results highlight the high burden of HAIs and the need for increased efforts for their prevention and control. Furthermore, our model should allow for estimations of the potential benefit of preventive measures on the burden of HAIs in the EU/EEA. PMID:27755545

  15. Sustainable funding for biocuration: The Arabidopsis Information Resource (TAIR) as a case study of a subscription-based funding model.

    Science.gov (United States)

    Reiser, Leonore; Berardini, Tanya Z; Li, Donghui; Muller, Robert; Strait, Emily M; Li, Qian; Mezheritsky, Yarik; Vetushko, Andrey; Huala, Eva

    2016-01-01

    Databases and data repositories provide essential functions for the research community by integrating, curating, archiving and otherwise packaging data to facilitate discovery and reuse. Despite their importance, funding for maintenance of these resources is increasingly hard to obtain. Fueled by a desire to find long term, sustainable solutions to database funding, staff from the Arabidopsis Information Resource (TAIR), founded the nonprofit organization, Phoenix Bioinformatics, using TAIR as a test case for user-based funding. Subscription-based funding has been proposed as an alternative to grant funding but its application has been very limited within the nonprofit sector. Our testing of this model indicates that it is a viable option, at least for some databases, and that it is possible to strike a balance that maximizes access while still incentivizing subscriptions. One year after transitioning to subscription support, TAIR is self-sustaining and Phoenix is poised to expand and support additional resources that wish to incorporate user-based funding strategies. Database URL: www.arabidopsis.org. © The Author(s) 2016. Published by Oxford University Press.

  16. Exploring a model of human chemokine receptor CCR2 in presence of TAK779: A membrane based molecular dynamics study

    Science.gov (United States)

    Balupuri, Anand; Sobhia, M. Elizabeth

    2014-04-01

    Chemokine receptor 2 (CCR2) is a G-protein coupled receptor (GPCR) and a crucial target for various inflammation-driven diseases. In the present study, molecular docking and molecular dynamics simulations were performed on a CCR2 homology model. This work includes comparative MD simulations of uncomplexed and ‘antagonist-complexed’ CCR2 models. These simulations yield insights into the binding mechanism of the antagonist TAK779 and improve the understanding of various structural changes induced by the ligand in the CCR2 protein. Here, one 20 ns MD simulation was carried out on the uncomplexed CCR2 model in a lipid bilayer to explore the effects of the lipid membrane on the protein. Another 20 ns MD simulation was performed under similar conditions on the docked CCR2-TAK779 complex. An alteration in the position and orientation of the ligand in the binding site was observed after the simulation. Examination of the protein-ligand complex suggested that TAK779 produced a greater structural change in TM-III, TM-IV, TM-V and TM-VI than in TM-I, TM-II and TM-VII. Interaction networks involving the conserved residues of the uncomplexed and ‘antagonist-complexed’ CCR2 models were also examined. The major difference was observed to be the role of the conserved residues of the DRY motif of TM-III and the NPxxY motif of TM-VII of CCR2.

  17. Agent-based modeling of sustainable behaviors

    CERN Document Server

    Sánchez-Maroño, Noelia; Fontenla-Romero, Oscar; Polhill, J; Craig, Tony; Bajo, Javier; Corchado, Juan

    2017-01-01

    Using the O.D.D. (Overview, Design concepts, Detail) protocol, this title explores the role of agent-based modeling in predicting the feasibility of various approaches to sustainability. The chapters incorporated in this volume consist of real case studies to illustrate the utility of agent-based modeling and complexity theory in discovering a path to more efficient and sustainable lifestyles. The topics covered within include: households' attitudes toward recycling, designing decision trees for representing sustainable behaviors, negotiation-based parking allocation, auction-based traffic signal control, and others. This selection of papers will be of interest to social scientists who wish to learn more about agent-based modeling as well as experts in the field of agent-based modeling.

  18. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created, the classifier is trained on each cluster with reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...
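
    The cluster-then-classify idea can be sketched in a few lines: documents are clustered on their TF-IDF vectors without using labels, and a separate, smaller classifier is then trained within each cluster; a test document is routed to its nearest cluster and classified there. The tiny corpus, the labels and the choice of K-means plus naive Bayes below are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of cluster-based text classification: K-means groups documents by
# shared vocabulary, then a small classifier is trained per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["cheap pills buy now", "win money claim prize now", "meeting agenda for monday",
        "project report attached", "buy cheap prize pills", "monday project meeting notes"]
labels = np.array([1, 1, 0, 0, 1, 0])            # 1 = suspicious, 0 = normal (toy labels)

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

clf_per_cluster = {}
for c in np.unique(km.labels_):
    idx = np.where(km.labels_ == c)[0]
    if len(np.unique(labels[idx])) == 1:          # degenerate cluster: remember the label
        clf_per_cluster[c] = int(labels[idx][0])
    else:                                         # otherwise train a small per-cluster model
        clf_per_cluster[c] = MultinomialNB().fit(X[idx], labels[idx])

def predict(texts):
    Xt = vec.transform(texts)
    clusters = km.predict(Xt)                     # route each text to its nearest cluster
    out = []
    for i, c in enumerate(clusters):
        model = clf_per_cluster[c]
        out.append(model if isinstance(model, int) else int(model.predict(Xt[i])[0]))
    return out

print(predict(["cheap prize now", "agenda for project meeting"]))
```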

  19. A rear-end collision risk assessment model based on drivers' collision avoidance process under influences of cell phone use and gender-A driving simulator based study.

    Science.gov (United States)

    Li, Xiaomeng; Yan, Xuedong; Wu, Jiawei; Radwan, Essam; Zhang, Yuting

    2016-12-01

    A driver's collision avoidance performance has a direct link to collision risk and crash severity. Previous studies demonstrated that distracted driving, such as using a cell phone while driving, disrupts driver performance on the road. This study aimed to investigate the manner and extent to which cell phone use and driver gender affect driving performance and collision risk in a rear-end collision avoidance process. Forty-two licensed drivers completed the driving simulation experiment under three phone use conditions: no phone use, hands-free, and hand-held, in which the drivers drove in a car-following situation with potential rear-end collision risks caused by the leading vehicle's sudden deceleration. Based on the experiment data, a rear-end collision risk assessment model was developed to assess the influence of cell phone use and driver gender. Cell phone use and driver gender were found to be significant factors that affected braking performance in the rear-end collision avoidance process, including the brake reaction time, the deceleration adjusting time and the maximum deceleration rate. The minimum headway distance between the leading vehicle and the simulator during the rear-end collision avoidance process was the final output variable, which could be used to measure the rear-end collision risk and judge whether a collision occurred. The results showed that although drivers using a cell phone took some compensatory behaviors in the collision avoidance process to reduce the mental workload, the collision risk in cell phone use conditions was still higher than that without phone use. More importantly, the results proved that the hands-free condition did not eliminate the safety problem associated with distracted driving because it impaired driving performance in the same way as the use of hand-held phones. In addition, the gender effect indicated that although female drivers had longer reaction time than male drivers in

  20. Observer-Based Fuel Control Using Oxygen Measurement. A study based on a first-principles model of a pulverized coal fired Benson Boiler

    Energy Technology Data Exchange (ETDEWEB)

    Andersen, Palle; Bendtsen, Jan Dimon; Mortensen, Jan Henrik; Just Nielsen, Rene; Soendergaard Pedersen, Tom [Aalborg Univ. (Denmark). Dept. of Control Engineering

    2005-01-01

    This report describes an attempt to improve the existing control of coal mills used at the Danish power plant Nordjyllandsvaerket Unit 3. The coal mills pulverize raw coal to a fine-grained powder, which is injected into the furnace of the power plant. In the furnace the coal is combusted, producing heat, which is used for steam production. With better control of the coal mills, the power plant can be controlled more efficiently during load changes, thus improving the overall availability and efficiency of the plant. One of the main difficulties from a control point of view is that the coal mills are not equipped with sensors that detect how much coal is injected into the furnace. During the project, a fairly detailed, non-linear differential equation model of the furnace and the steam circuit was constructed and validated against data obtained at the plant. It was observed that this model was able to capture most of the important dynamics found in the data. Based on this model, it is possible to extract linearized models in various operating points. The report discusses this approach and illustrates how the model can be linearized and reduced to a lower-order linear model that is valid in the vicinity of an operating point by removing states that have little influence on the overall response. A viable adaptive control strategy would then be to design controllers for each of these simplified linear models, i.e., the control loop that sets references to the coal mills and feedwater, and use the load as a separate input to the control. The control gains should then be scheduled according to the load. However, the variations and uncertainties in the coal mill are not addressed directly in this approach. Another control approach was taken in this project, where a Kalman filter based on measurements of air flow blown into the furnace and the oxygen concentration in the flue gas is designed to estimate the actual coal flow injected into the furnace. With this estimate
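
    The estimation idea can be illustrated very roughly as follows: the unmeasured coal flow is modelled as a random walk appended to the state and is updated by a Kalman filter from the oxygen measurement, given the measured air flow. The linear oxygen relation, the noise levels and all coefficients below are invented for illustration; the report's plant model is nonlinear and far more detailed.

```python
# Highly simplified sketch: estimate an unmeasured coal flow with a Kalman filter
# from flue-gas oxygen, given the measured air flow.  All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_steps = 300

# "True" system used to generate synthetic data.
true_coal = 10.0 + np.cumsum(rng.normal(0, 0.05, n_steps))       # slowly varying coal flow
air_flow = 100.0 + rng.normal(0, 1.0, n_steps)                   # measured air flow
# Assumed static relation: O2 [%] drops as the coal/air ratio rises.
o2_meas = 21.0 - 18.0 * true_coal / air_flow + rng.normal(0, 0.05, n_steps)

# Kalman filter with scalar state x = coal flow, modelled as a random walk.
x_hat, P = 8.0, 4.0            # initial estimate and covariance
Q, R = 0.01, 0.05 ** 2         # process and measurement noise variances
estimates = []
for k in range(n_steps):
    # Predict step (random walk: x_k = x_{k-1} + w).
    P += Q
    # Measurement model y = 21 - 18*x/u is linear in x for a given air flow u.
    u = air_flow[k]
    H = -18.0 / u
    y_pred = 21.0 - 18.0 * x_hat / u
    S = H * P * H + R
    K = P * H / S
    x_hat += K * (o2_meas[k] - y_pred)
    P *= (1.0 - K * H)
    estimates.append(x_hat)

print("final coal-flow estimate vs truth:", round(estimates[-1], 2), round(true_coal[-1], 2))
```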

  1. Investigating the experience: A case study of a science professional development program based on Kolb's experiential learning model

    Science.gov (United States)

    Davis, Brian L.

    Professional development for educators has been defined as the process or processes by which teachers achieve higher levels of professional competence and expand their understanding of self, role, context and career (Duke and Stiggins, 1990). Currently, there is limited research literature that examines the effect a professional development course which uses David Kolb's experiential learning model has on the professional growth and teaching practice of middle school science teachers. The purpose of this interpretive case study is to investigate how three science teachers who participated in the Rivers to Reef professional development course interpreted the learning experience and integrated the experience into their teaching practice. The questions guiding this research are (1) What is the relationship between a professional development course that uses an experiential learning model and science teaching practice? (2) How do the Rivers to Reef participants reflect on and describe the course as a professional growth experience? The creation of the professional development course and the framework for the study were established using David Kolb's (1975) experiential learning theory and the reflection process model designed by David Boud (1985). The participants in the study are three middle school science teachers from schools representing varied settings and socioeconomic levels in the southeastern United States. Data were collected using the three-interview series format designed by Dolbere and Schuman (Seidman, 1998) and were analyzed to identify common categories related to the impact on science teaching practice and professional growth. The major finding of this study indicates that the years of teaching experience of middle school science teachers significantly influence how they approach professional development, what and how they learn from the experience, and the ways in which the experience influences their teaching practices.

  2. Study on the Market Risk Measurement of the Style Portfolios in Stock Markets Based on EVT-t-Copula Model

    Directory of Open Access Journals (Sweden)

    Yuhong Zhou

    2013-03-01

    Full Text Available Given the non-normal distribution characteristics of financial asset returns, an AR(1)-GJR(1,1) model is used to characterize the marginal distributions of the style assets in the China stock market. The Copula function is introduced to analyze the dependency structure between the six style assets, combined with the marginal residual sequences. The joint return distribution of the style portfolios is then simulated by combining extreme value theory with the Monte Carlo simulation method, and the market risks (VaR and CVaR) of the style portfolios in the China stock market are obtained. The results of the study show that the generalized Pareto distribution model fits well the non-normal characteristics, such as peakedness and fat tails, of the style asset returns.
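
    Once joint return scenarios have been simulated, the VaR/CVaR step itself is a simple tail computation, sketched below. Here plain fat-tailed random draws stand in for the paper's AR(1)-GJR(1,1) margins coupled by an EVT-t-copula, and the equal portfolio weights are an assumption.

```python
# Portfolio VaR and CVaR from Monte Carlo simulated returns (toy fat-tailed
# draws in place of the paper's EVT-t-copula scenarios).
import numpy as np

def var_cvar(losses, alpha=0.95):
    """losses: 1-D array of simulated portfolio losses; returns (VaR, CVaR)."""
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()     # expected loss beyond the VaR threshold
    return var, cvar

rng = np.random.default_rng(5)
n_sims, n_assets = 100_000, 6
weights = np.full(n_assets, 1.0 / n_assets)                        # equally weighted styles
returns = 0.01 * rng.standard_t(df=5, size=(n_sims, n_assets))     # fat-tailed toy returns
losses = -(returns @ weights)
v, c = var_cvar(losses, alpha=0.95)
print(f"95% VaR = {v:.4f}, 95% CVaR = {c:.4f}")
```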

  3. Estimation of sexual behavior in the 18-to-24-years-old Iranian youth based on a crosswise model study.

    Science.gov (United States)

    Vakilian, Katayon; Mousavi, Seyed Abbas; Keramat, Afsaneh

    2014-01-13

    In many countries, negative social attitudes towards sensitive issues such as sexual behavior have resulted in false and invalid data concerning these issues. This is an analytical cross-sectional study in which a total of 1500 single students from the universities of Shahroud City were sampled using a multi-stage technique. The students were assured that the information they disclosed to the researcher would be treated as private and confidential. The results were analyzed using the crosswise model, crosswise regression, t-test and chi-square tests. The estimated prevalence of sexual behavior among Iranian youth is 41% (CI = 36-53). The findings show that the estimated prevalence of sexual relationships among single Iranian youth is high. Thus, devising training models consistent with Islamic-Iranian culture is necessary in order to prevent risky sexual behavior.
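
    For reference, the crosswise-model estimator behind such prevalence figures can be written in a few lines: with an innocuous question of known prevalence p, the observed proportion of "same answer" responses identifies the sensitive-trait prevalence. The counts in the example are invented to roughly reproduce a 41% estimate; they are not the study's raw data.

```python
# Crosswise-model point estimate and confidence interval: the probability of a
# "both yes or both no" answer is lambda = pi*p + (1-pi)*(1-p), hence
# pi = (lambda + p - 1) / (2p - 1).  Counts below are illustrative only.
import numpy as np

def crosswise_estimate(n_same, n_total, p, z=1.96):
    lam = n_same / n_total
    pi_hat = (lam + p - 1.0) / (2.0 * p - 1.0)
    se = np.sqrt(lam * (1.0 - lam) / n_total) / abs(2.0 * p - 1.0)
    return pi_hat, (pi_hat - z * se, pi_hat + z * se)

# Example: 1500 respondents, innocuous question with known prevalence p = 0.25,
# and 818 "same answer" responses (made-up count).
pi, ci = crosswise_estimate(n_same=818, n_total=1500, p=0.25)
print(f"estimated prevalence = {pi:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```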

  4. Appropriate drinking water treatment processes for organic micropollutants removal based on experimental and model studies - A multi-criteria analysis study

    KAUST Repository

    Sudhakaran, Sairam

    2013-01-01

    The presence of organic micropollutants (OMPs), pharmaceuticals and personal care products (PPCPs) in potable water is of great environmental and public health concern. OMPs are included in the priority list of contaminants in United States EPA and European framework directives. Advanced treatment processes such as reverse osmosis, nanofiltration, ozonation and adsorption are the usual industry-recommended processes for OMPs removal; however, natural systems, e.g., riverbank filtration and constructed wetlands, are also potentially efficient options for OMPs removal. In this study, a decision support system (DSS) based on multi-criteria analysis (MCA) was created to compare processes for OMPs removal under various criteria. Multi-criteria analysis (MCA), a transparent and reliable procedure, was adopted. Models were built for both experimental and predicted percent-removals for a range of OMPs reflecting different physicochemical properties. The experimental percent-removals for several processes (riverbank filtration (RBF), ozonation, advanced oxidation, adsorption, reverse osmosis, and nanofiltration) were considered. The predicted percent-removals were taken from validated quantitative structure activity relationship (QSAR) models. Analytical methods to detect OMPs in water are very laborious, thus a modeling approach such as QSAR is an attractive option. A survey among two groups of participants including academics (PhD students and post-doctoral research associates) and industry (managers and operators) representatives was conducted to assign weights for the following criteria: treatability, costs, technical considerations, sustainability and time. The process rankings varied depending on the contaminant species and personal preferences (weights). The results indicated that RBF and oxidation were preferable over adsorption and membrane processes. The results also suggest that the use of a hybrid treatment process, e.g., combining a natural system with an
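
    The core of a weighted-sum MCA ranking is sketched below: criterion scores for each treatment option are normalised and combined with stakeholder weights to produce an overall ranking. The options, scores and weights shown are invented placeholders, not the survey results of this study.

```python
# Minimal weighted-sum multi-criteria ranking sketch; all values are placeholders.
import numpy as np

options = ["RBF", "Ozonation", "Adsorption", "NF/RO"]
criteria = ["treatability", "cost", "technical", "sustainability", "time"]
# Rows = options, columns = criteria; higher is better (cost/time already inverted).
scores = np.array([[0.70, 0.90, 0.80, 0.90, 0.60],
                   [0.80, 0.60, 0.70, 0.60, 0.80],
                   [0.75, 0.50, 0.70, 0.50, 0.70],
                   [0.90, 0.30, 0.60, 0.40, 0.70]])
weights = np.array([0.35, 0.25, 0.15, 0.15, 0.10])    # e.g. averaged stakeholder weights

norm = scores / scores.max(axis=0)                    # simple max normalisation per criterion
ranking = norm @ weights                              # overall weighted score per option
for name, s in sorted(zip(options, ranking), key=lambda x: -x[1]):
    print(f"{name:10s} {s:.3f}")
```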

  5. District Allocation of Human Resources Utilizing the Evidence Based Model: A Study of One High Achieving School District in Southern California

    Science.gov (United States)

    Lane, Amber Marie

    2013-01-01

    This study applies the Gap Analysis Framework to understand the gaps that exist in the human resource allocation of one Southern California school district. Once identified, gaps are closed through the reallocation of human resources according to the Evidence-Based Model, requiring the re-purposing of core classroom teachers, specialists, special…

  6. Error Detection-Based Model to Assess Educational Outcomes in Crisis Resource Management Training: A Pilot Study.

    Science.gov (United States)

    Bouhabel, Sarah; Kay-Rivest, Emily; Nhan, Carol; Bank, Ilana; Nugus, Peter; Fisher, Rachel; Nguyen, Lily Hp

    2017-06-01

    Otolaryngology-head and neck surgery (OTL-HNS) residents face a variety of difficult, high-stress situations, which may occur early in their training. Since these events occur infrequently, simulation-based learning has become an important part of residents' training and is already well established in fields such as anesthesia and emergency medicine. In the domain of OTL-HNS, it is gradually gaining in popularity. Crisis Resource Management (CRM), a program adapted from the aviation industry, aims to improve outcomes of crisis situations by attempting to mitigate human errors. Some examples of CRM principles include cultivating situational awareness; promoting proper use of available resources; and improving rapid decision making, particularly in high-acuity, low-frequency clinical situations. Our pilot project sought to integrate CRM principles into an airway simulation course for OTL-HNS residents, but most important, it evaluated whether learning objectives were met, through use of a novel error identification model.

  7. [Study on HIV prevention related knowledge-motivation-psychological model in men who have sex with men, based on a structural equation model].

    Science.gov (United States)

    Jiang, Y; Dou, Y L; Cai, A J; Zhang, Z; Tian, T; Dai, J H; Huang, A L

    2016-02-01

    A knowledge-motivation-psychological model was set up and tested through structural equation modeling to provide evidence for HIV prevention strategies among men who have sex with men (MSM). A snowball sampling method was used to recruit a total of 550 MSM volunteers from two MSM non-governmental organizations in Urumqi, Xinjiang province. HIV prevention related information on MSM was collected through a questionnaire survey, and 477 volunteers provided complete information. The HIV prevention related knowledge-motivation-psychological model was built on the basis of related experience and literature. Relations between knowledge, motivation and psychological factors were studied using a structural equation model fitted to the questionnaire data, with subsequent modification of the model. The structural equation model presented good fitting results; after revision, the fit indices were RMSEA = 0.035, NFI = 0.965 and RFI = 0.920. The exogenous latent variables were knowledge, motivation and psychological effects, and the endogenous latent variable was prevention related behavior. The standardized total effects of motivation, knowledge and psychological effects on prevention behavior were 0.44, 0.41 and 0.17, respectively. The correlation coefficient between motivation and psychological effects was 0.16, and the correlation coefficient between knowledge and psychological effects was -0.17, while the correlation between knowledge and motivation did not show statistical significance. Knowledge of HIV and motivation for HIV prevention did not show any accordance in the MSM population. It is therefore necessary to increase awareness and to improve the motivation for HIV prevention in the MSM population.

  8. Simulation study of a magnetocardiogram based on a virtual heart model: effect of a cardiac equivalent source and a volume conductor

    International Nuclear Information System (INIS)

    Shou Guo-Fa; Xia Ling; Dai Ling; Ma Ping; Tang Fa-Kuan

    2011-01-01

    In this paper, we present a magnetocardiogram (MCG) simulation study using the boundary element method (BEM) and based on the virtual heart model and the realistic human volume conductor model. The different contributions of cardiac equivalent source models and volume conductor models to the MCG are deeply and comprehensively investigated. The single dipole source model, the multiple dipoles source model and the equivalent double layer (EDL) source model are analysed and compared with the cardiac equivalent source models. Meanwhile, the effect of the volume conductor model on the MCG combined with these cardiac equivalent sources is investigated. The simulation results demonstrate that the cardiac electrophysiological information will be partly missed when only the single dipole source is taken, while the EDL source is a good option for MCG simulation and the effect of the volume conductor is smallest for the EDL source. Therefore, the EDL source is suitable for the study of MCG forward and inverse problems, and more attention should be paid to it in future MCG studies. (general)
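
    The simplest of the cardiac source models discussed above, a single current dipole, can be evaluated directly: in an unbounded homogeneous medium its primary-current field is B(r) = μ0/(4π) Q × (r − r0)/|r − r0|³. The sketch below computes this field on a small sensor grid; the boundary-element volume-conductor contribution, which the study shows is important, is deliberately omitted, and the dipole strength and geometry are arbitrary.

```python
# Primary-current magnetic field of a single current dipole (unbounded homogeneous
# medium), the simplest MCG source model; volume-conductor terms are omitted.
import numpy as np

MU0 = 4e-7 * np.pi

def dipole_field(sensors, q, r0):
    """sensors: (N, 3) positions [m]; q: dipole moment [A*m]; r0: dipole position [m]."""
    rel = sensors - r0
    dist3 = np.linalg.norm(rel, axis=1, keepdims=True) ** 3
    return MU0 / (4 * np.pi) * np.cross(q, rel) / dist3

# A small planar sensor grid 10 cm above a dipole pointing in +x.
xs, ys = np.meshgrid(np.linspace(-0.1, 0.1, 5), np.linspace(-0.1, 0.1, 5))
sensors = np.column_stack([xs.ravel(), ys.ravel(), np.full(xs.size, 0.10)])
B = dipole_field(sensors, q=np.array([1e-5, 0.0, 0.0]), r0=np.zeros(3))
print("peak |Bz| [pT]:", round(float(np.abs(B[:, 2]).max()) * 1e12, 2))
```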

  9. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems.

    Science.gov (United States)

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-10-28

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-